An Improved Relaxation for Oracle-Efficient Adversarial Contextual Bandits
Accept (poster)
Summary: This paper investigates the contextual bandit problem where the contexts are sequentially sampled from a known i.i.d. source, and losses are generated adversarially. The primary focus is on developing oracle-efficient algorithms that minimize the expected regret (the expectation over both the context's randomness and the algorithm's internal randomness). Specifically, the paper demonstrates that for any finite policy set $\Pi$ with a value-ERM oracle (capable of finding the minimal policy loss given any sequence of losses), one can achieve an oracle-efficient regret bound of the order $T^{2/3}(K\log|\Pi|)^{1/3}$, where $K$ is the number of arms. This finding surpasses the previous bound of order $(TK)^{2/3}(\log |\Pi|)^{1/3}$ in Syrgkanis (2016). The proof technique essentially follows the relaxation-based argument from Rakhlin and Sridharan (2016) and Syrgkanis (2016). However, a distinguishing feature of this paper is the introduction of novel Rademacher vectors when defining the relaxation. The authors successfully incorporate these new Rademacher vectors into the analysis framework established by Syrgkanis (2016), thereby advancing the current state of the art. Strengths: The main strength of this paper is the oracle-efficient regret bound that improves the state of the art. The authors also introduce several new ideas and techniques that are of independent interest. Weaknesses: The submission does not seem to have any significant weaknesses. However, I have a few minor comments concerning the presentation: 1. The paper contains several typographical errors, including: - line 1: "for for" - line 284: doesn't (ii) follow from the independence of $\epsilon_t$? - line 394: should be "Proof of Lemma 4" 2. It would be better if the authors could reference the corresponding appendix section in the main text where each proof is provided. 3. 
For a more direct comparison with previous bounds, it would be beneficial to consider some specific values for $K$, such as setting $K=T^{\alpha}$, where $\alpha<1$. This would provide more tangible examples and help in understanding the practical implications of the bounds. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I have a couple of questions for the authors: 1. The paper currently focuses on the finite policy set. Could the techniques developed in this paper be utilized to enhance the Rademacher complexity-based bounds, as discussed in Rakhlin and Sridharan (2016)? 2. Could the i.i.d. assumption be relaxed? For instance, could it accommodate smooth adversaries as proposed in Haghtalab et al. (2022)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No issue with negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We are glad that you think our ideas and techniques are of independent interest. Please find below our response to the questions and comments mentioned in your review. **Questions** Q1: We initially obtain a "Rademacher style" bound (Equation 8 in our paper) in our proofs, but we upper bound this using Theorem 6. The bounds in Equation 8 are similar to the bounds of Rakhlin and Sridharan (Equation 8 in their paper) but do not seem to be directly comparable given the existence of the $Z_t$ term (see Equation 4 of their paper for their definition of $\mathcal{R}$). Q2: Our main focus in this paper was improving the existing regret bounds in the setting based on prior work, as the problem is already difficult. We agree, however, that the mentioned relaxation of the assumption is an interesting direction for future work and will add a discussion in the conclusion section for our revision. (see also "Relaxing the assumption of the setting" in our general response) **Weaknesses** > The paper contains several... Thank you for pointing out these errors. We will fix them for the revised version of our paper. > It would be better if the authors... We agree that this helps improve the readability of the paper and will add the section numbers for the revised version of our paper. > For a more direct comparison with previous bounds,... Thanks for a great suggestion. By setting $K = T^{\alpha}$, the bounds suggested by Rakhlin and Sridharan 2016 and independently by Syrgkanis et al. 2016a equal $O(T^{3/4+\alpha/2} \log(|\Pi|)^{1/4})$. This is further improved by Syrgkanis et al. 2016b to $O(T^{2/3+2/3\cdot \alpha} \log(|\Pi|)^{1/3})$, which had been the state of the art before our work. Our technique guarantees $O(T^{2/3+1/3\cdot \alpha} \log(|\Pi|)^{1/3})$. For example, putting $\alpha = 1/2$ yields a vacuous (linear) regret bound of $O(T \log(|\Pi|)^{1/3})$ for Syrgkanis et al. 
2016b, whereas we obtain $O(T^{5/6} \log(|\Pi|)^{1/3})$, which is still sub-linear in $T$ for $K = T^{1/2}$. Moreover, the upper bound of Syrgkanis et al. 2016b holds under the condition $T \ge K^2 \log (|\Pi|)$, so their result does not apply if $K = \Omega(T^{1/2})$. --- Rebuttal Comment 1.1: Title: no additional questions Comment: Thank you for addressing my questions. I have no additional questions at this point.
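The exponent arithmetic in the $K = T^{\alpha}$ comparison above can be sanity-checked with a short script. The function names are our own labels, and $\log|\Pi|$ factors are dropped since they do not affect the exponent in $T$:

```python
# Regret exponents in T when K = T^alpha (log|Pi| factors dropped).
# Bounds as quoted in the rebuttal; helper names are our own.

def exp_rakhlin_sridharan(alpha):
    # O(T^{3/4 + alpha/2} log(|Pi|)^{1/4})
    return 3 / 4 + alpha / 2

def exp_syrgkanis_2016b(alpha):
    # O(T^{2/3 + 2*alpha/3} log(|Pi|)^{1/3}), i.e. (TK)^{2/3}
    return 2 / 3 + 2 * alpha / 3

def exp_this_paper(alpha):
    # O(T^{2/3 + alpha/3} log(|Pi|)^{1/3}), i.e. T^{2/3} K^{1/3}
    return 2 / 3 + alpha / 3

alpha = 1 / 2
print(exp_syrgkanis_2016b(alpha))  # 1.0 -> linear in T, i.e. vacuous
print(exp_this_paper(alpha))       # 5/6 ~ 0.833 -> still sub-linear

# This paper's bound stays sub-linear for every alpha < 1, whereas the
# (TK)^{2/3} bound becomes vacuous already at alpha = 1/2.
assert all(exp_this_paper(a) < 1 for a in (0.25, 0.5, 0.75, 0.99))
```

This makes the crossover concrete: the $(TK)^{2/3}$ bound turns linear at $\alpha = 1/2$, while the improved bound remains sub-linear for all $\alpha < 1$.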
Summary: In this paper, the authors consider a classic contextual bandit problem with adversarial losses and stochastic contexts. Specifically, they propose a relaxation-based algorithm which achieves an $O(K^{1/3}T^{2/3}\log^{1/3}|\Pi|)$ expected regret bound, improving upon the best known $O(K^{2/3}T^{2/3}\log^{1/3}|\Pi|)$ obtained by Syrgkanis et al., 2016 under the same assumption. The algorithm is oracle-efficient, requiring $K+1$ calls to the ERM oracle. The main improvement upon the algorithm proposed in [Syrgkanis et al., 2016] is a new relaxation expression. Specifically, they replace the random vector $\epsilon_t$ with all entries being Rademacher random variables by a random vector with a single uniformly chosen entry being a Rademacher random variable. I think the reason for this improvement is that the loss for each action has a uniform $\frac{\gamma}{K}$ probability of being observed and recorded in the loss estimator construction, which is missed in the construction of [Syrgkanis et al., 2016]. Strengths: - This paper considers a classic contextual bandit problem with a provably better theoretical guarantee in the expected regret bound. - The writing of this paper is clear. - The proposed algorithm is clear; it is mainly based on the relaxation-based algorithm proposed in [Syrgkanis et al., 2016] but with a better construction of the relaxation function $Rel$. - The proofs also look correct to me. Although the modification compared with [Syrgkanis et al., 2016] is not that large, the new construction of the relaxation function looks interesting to me. Weaknesses: - One concern is the significance of the obtained results. Compared with [Syrgkanis et al., 2016], the improvement is a factor of $K^{1/3}$. Although the authors argued that this improvement can be significant when considering a continuous action space with discretization, I think the order in $T$ may be more important. 
The main difficulty in obtaining a $\sqrt{T}$ regret bound seems to be the fact that the loss estimator is meaningful only when the algorithm explores, which also appears in the BISTRO+ algorithm proposed in [Syrgkanis et al., 2016]. - Minor part: I think the algorithm actually does not require the context distribution to be known but instead requires sampling from the context distribution. So I think the abstract description is not accurate, meaning that the result is stronger. - Minor typos: (1) Line 160: minimizes -> minimize (2) Line 266-267: missing a right bracket Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I wonder whether it is possible to further generalize this relaxation-based algorithm to achieve an $O(\sqrt{T})$-type result? - In addition, the current algorithm requires fresh samples from the stochastic context distribution, which is the same as what is assumed in [Syrgkanis et al., 2016]. I wonder whether it is possible to remove this assumption, since in some applications this assumption may not hold. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: See Weakness and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We are glad that you found the writing of our paper to be clear. Please find below our response to the questions and comments mentioned in your review. **Questions** Q1: We note that the regret lower bound of $\sqrt{TK\log(|\Pi|)}$ holds for all (not necessarily efficient) algorithms. It is not known whether such a regret bound can be obtained efficiently as oracle-efficient algorithms do not always obtain optimal regret rates (Hazan and Koren, 2016). Whether or not this is the case for our problem remains an open problem. We will further expand on our discussions of this point in the conclusion section. (see also "Gap between upper and lower bounds" in our general response) Q2: Our focus in this paper was improving the existing regret bounds in the setting based on prior work, as the problem is already difficult. We agree, however, that the mentioned relaxation of the assumption is an interesting direction for future work and will add a discussion in the conclusion section for our revision. (see also "Relaxing the assumption of the setting" in our general response) **Weaknesses** > One concern is the significance of the obtained results, Please see "Importance of the improvements" in our general response > I think the algorithm actually does not require the context distribution to be known but instead require sampling from the context distribution Thank you for pointing this out. This is correct and we will further emphasize this in the paper. Typos: Thank you for pointing out the typos; we will fix them for our revision. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for your response to my questions. Though I still think the improvement over K is not that significant (from order 2/3 to order 1/3), I agree that the technique used in the analysis is interesting and I keep my original score.
Summary: The paper studies the problem of minimizing regret for online adversarial contextual bandits using an ERM oracle. If one doesn't care about oracle efficiency, then the well-known EXP4 algorithm achieves the optimal rate. However, the best previous oracle-efficient result is due to [Syrgkanis et al. '16]. This paper improves the regret for this problem by a factor of $K^{1/3}$ over prior work, where $K$ is the number of arms. The algorithm uses $O(K)$ calls to the oracle in every round. Their proof relies on a new relaxation under the relax-and-randomize framework of [Rakhlin-Sridharan] which has reduced variance compared to prior works. Strengths: - The paper is exceptionally well written. The problem statement, relationship to prior works, explanation of the main contributions, and technical aspects of the proof are all very clear. As such, it was easy for me to follow and understand. - The relaxation function that is used to improve the dependence on $K$ seems to be novel. Weaknesses: - The only weakness is that I'm not sure how significant the contribution/impact of the paper is: it only amounts to a $K^{1/3}$ improvement in the regret. There is still a substantial gap between the upper and lower bounds. The paper also doesn't really touch upon whether their rate is improvable for oracle-efficient algorithms (we know EXP4 attains the optimal regret if we don't care about oracle efficiency), potentially hinting at some sort of separation result. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have checked almost all of the proofs of the paper. I had a few questions, ranging from questions on the technical details to more higher-level questions. 1. What is the oracle complexity for prior works? Can some discussion on this be included (if it is interesting)? 2. What can be done if the learner does not have sample access to the distribution $\mathcal{D}$? Is learning possible here? 
EXP4 does not require this additional assumption. 3. Line 191: what is meant by the "discretization scheme"? 4. Any intuition for what the parameter $\gamma$ controls? Should we expect any improvements in the final bound with a changing $\gamma_t$? 5. While I understand the construction of the relaxation function (specifically the design of $\epsilon_t$ and $Z_t$), it is still quite mysterious. I see how it is used in Lemma 9. Perhaps the authors could include more intuition on this. 6. In Theorem 6, what is the quantifier over the $x_t$? Are we also taking expectation wrt $x_t$? 7. writing comment: In Lemma 9, perhaps $\delta$ should be properly defined. It is defined in the previous lemma, but as is, Lemma 9 cannot be read "independently". Maybe state the definition of $\delta$ before the statements of Lemmas 8 and 9? 8. writing comment: In the appendix, you write Theorem 4, but in the main text, it is stated as Lemma 4. 9. In line 398, the derivation here could include more explanation. 10. line 403: "We reiterate that each $p_t(i) \le \gamma$" I thought by construction we had $p_t(i) \le \gamma/K$, so we can't put $\gamma$ mass on any arm $i\in [K]$ (as stated in line 211). Am I missing something here? 11. line 432: I did not understand why the probability of $\hat{c}_t$ taking the value $Ke_i/\gamma$ is at most $\gamma/K$. By Eq (6), don't we have that the probability is at least $\gamma/K$? More speculative, for my own understanding: - Do you think that this relax/randomize strategy has limitations? It seems like we cannot substantially improve this bound to get $\sqrt{T}$-style regret. - Can the arguments from [Hazan-Koren '16] be used to get lower bounds against oracle-efficient algorithms? 
- Superficially, the guarantee that you get seems to be what one would expect if they adapted a "PAC" algorithm that got $K/\epsilon^2$ sample complexity using an online2batch conversion (of course, here in the adversarial setting "PAC" doesn't make sense because the costs are adversarial). Does this hint at a limitation of oracle-efficient algorithms? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We are glad that you found our paper to be "exceptionally well written". Please find below our response to the questions and comments mentioned in your review. **Questions** Q1: The oracle complexity is equivalent to that of Syrgkanis et al. 2016 and our main result is in the improved relaxation, not the leveraged oracle. We will make this more explicit in the final version of our paper. Q2: Currently, our techniques rely on knowing $\mathcal{D}$ for constructing the hallucinated reward vectors and it is unclear if this can be easily relaxed. We believe that obtaining results that relax the assumption, even with a potentially worse regret rate, is an interesting direction for future research and will discuss this in the conclusion section (see also "Relaxing the assumption of the setting" in our general response). We note that the sampling access assumption is also made in both of the papers that operate in our setting (Rakhlin and Sridharan, 2016; Syrgkanis et al. 2016). Q3: Discretization here refers to the approximation of a continuous set with a discrete set (e.g., see chapter 8.2 in Slivkins (2019)). Q4: The main purpose of $\gamma$ is to ensure that each arm is pulled with some non-negligible probability and, through Equation (3), the parameter intuitively affects the variance of the rewards. Our current techniques do not seem to benefit from varying $\gamma$ with time as we require the same bound on the variance of rewards through time. Q5: We will expand on the intuition of these variables for both our novel aspects and the aspects based on prior work in our revision. Due to the space constraints of the rebuttal, we focus on the novelty aspect here. 
The intuition of the change has two parts: the admissibility of the change and why it helps obtain a better regret bound: - Admissibility: The main intuition for why only a single Rademacher variable is sufficient for ensuring admissibility is that the symmetrization step of the relax-and-randomize framework applies only to a single (random) action (see Lines 440-443 as well as the LHS of Lemma 9). Applying noise to all the entries leads to a valid upper bound (as is done by prior work), but is not tight. The main reason these works apply this bound is (in our understanding) that the distribution of the noise is not known (as it depends on $c$ and the algorithm), and therefore they take the upper bound of applying it everywhere. Our main insight here is that knowing this distribution is not necessary. Since the variance of the reward vector is maximized when $\hat{c}_t$ is largest, we can effectively consider the case of $c_t$ taking its maximum value (as formalized in lines 273-277). Focusing on this case, however, the distribution of $\hat{c}_t$ is known because, regardless of what $q_t$ is, it takes the value $K\gamma^{-1}e_i$ with probability $\gamma / K$ (see Equation 3). - Why it helps: The reason the change helps can be seen by looking at the LHS of Theorem 6. Consider fixed values for $Z_{1:T}$. If we put random noise on all the coordinates, then the supremum leads to a policy function $\pi$ that assigns $x_t$ to coordinates with noise values equal to 1. By restricting the noise, we are restricting the power of the supremum. Q6: The values of $x_t$ are fixed; since the claim holds for any fixed $x_t$, it also holds for a random choice of $x_t$ by iterated expectation. Q7, 8: Thank you for pointing out these issues; we will apply your comments in the revised version of our paper. 
Q9: The rewrite is more easily seen by recalling the definition of $q_t^*(\rho_t)$ from line 213 and expanding the expectation while substituting the equation on line 201 for $R((x, c_t)_{1:t}, \rho_t)$ and further leveraging the definition of $\psi_i$ from line 397. We will expand on this substitution of variables for the full version of our result to be more clear. Q10: Thank you for pointing out this typo. The upper bound on $p_t(i)$ should be $\gamma / K$ as you correctly noted. We will fix this in our revision. Q11: Equation 6 ensures that the action $i$ is chosen with probability at least $\gamma/K$. However, choosing the $i$-th action does not necessarily lead to $\hat{c}_t$ being set to $Ke_i/\gamma$ because of the extra noise in Equation (3). Given this equation, the probability of $\hat{c}_t = Ke_i/\gamma$ for $\hat{y}_t \sim q_t$ equals $c_t(i) \gamma/K$, which is at most $\gamma/K$ given $c_t(i) \le 1$. Lower bounds / Limitations of oracle-efficient algorithms: Our intuition is similar and it seems to us that at least the existing techniques used for relax-and-randomize are insufficient for improving the bound. While we were not able to prove any improved lower bounds, we consider this to be an important direction for future work and we agree that the arguments from [Hazan and Koren] are a good starting point for exploring this direction. (see also "Gap between upper and lower bounds" in our general response) **Weaknesses** > not sure how significant the contribution/impact ... only amounts to a $K^{1/3}$ improvement in the regret Please see "Importance of the improvements" in our general response. > There is still a substantial gap between the upper bound and lower bounds … potentially hinting at some sort of separation result. Please see "Gap between upper and lower bounds" in our general response. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for your response. I have no further questions.
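The "restricting the power of the supremum" intuition from the Q5 answer above can be illustrated with a toy Monte Carlo experiment (entirely our own illustration, not a quantity from the paper): with independent Rademacher noise on all $K$ coordinates, the maximum over coordinates is almost always $1$, while with a single uniformly placed Rademacher coordinate it is $1$ only about half the time.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n_trials = 10, 4000

# All-coordinate noise: every entry is an independent Rademacher variable,
# so max_i eps(i) = 1 unless all K signs are -1 (probability 2^-K).
all_coord = np.mean([
    rng.choice([-1, 1], size=K).max() for _ in range(n_trials)
])

# Single-coordinate noise (the paper's style of relaxation): one uniformly
# chosen entry carries a Rademacher sign, the rest are 0, so the max over
# coordinates is 1 with probability 1/2 and 0 otherwise (for K > 1).
def single_coord_max():
    eps = np.zeros(K)
    eps[rng.integers(K)] = rng.choice([-1, 1])
    return eps.max()

one_coord = np.mean([single_coord_max() for _ in range(n_trials)])

print(all_coord)  # close to 1.0
print(one_coord)  # close to 0.5
```

In the relaxation, a supremum over policies exploits whichever coordinates carry positive noise, so halving the typical value of this maximum is one informal way to see why the restricted noise tightens the bound.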
Summary: This paper proposed a new algorithm for Oracle-Efficient Adversarial Contextual Bandits that achieves the best known regret bound and improves upon the previous best bound in its dependency on the action set size $K$. Strengths: The algorithm is elegant. The paper is well-written and pleasant to read. The theoretical result is sound. Weaknesses: The theoretical results are sound, but I believe for any work that makes an effort towards some type of computational efficiency, oracle-efficiency being one of them, experimental evaluation is always appreciated. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We are glad that you found the paper well-written and that you believe our algorithm is elegant. Please find below our response to the questions and comments mentioned in your review. > any work that make effort towards some type of computational efficiency ... experimental evaluation is always appreciated. Our main focus in this paper was on the theoretical aspects of the problem. We note that the other existing works for adversarial contextual bandits (Rakhlin and Sridharan, 2016; Syrgkanis et al. 2016a, Syrgkanis et al., 2016b) also do not present experimental evaluations. Given the similarities between our frameworks, we intuitively expect our algorithm to outperform Bistro (Rakhlin and Sridharan 2016) and Bistro+ (Syrgkanis et al. 2016b) in practice since our main difference is in the construction of the relaxation function and we reduce the variance of the Rademacher vectors in each round. We agree, however, that exploring the practical challenges of the problem and comparing algorithms designed for different settings (stochastic, adversarial, bounded difference, non-stationary, ...) is an important direction for future work and will further emphasize this point in the conclusion section.
Rebuttal 1: Rebuttal: We thank the reviewers for their comments. We here respond to the concerns that were shared by more than one reviewer. **Relaxing the assumption of the setting (Reviewers yx68, gZc2, and 2pq4)** Our focus in this paper was on improving the existing regret bounds in the setting based on prior work (Rakhlin and Sridharan, 2016; Syrgkanis et al., 2016) because the problem is already difficult. As pointed out by the reviewers, the problem setting can be generalized by removing the sample access to the context distribution. We agree that this is an interesting direction for future work and we will discuss this in the conclusion section for our revision. **Importance of the improvements (Reviewers yx68 and gZc2)** In our view, the improved dependence on the number of arms (i.e., $K$) is significant because $K$ can be large in many applications such as recommender systems. Additionally, the value can be large if the bandit algorithm is used as part of a *reduction* that defines "fake" or "synthetic" arms for a given problem. In our view, for large values of $K$, our improvement can be very significant. Indeed, if we consider the regime of $K=T^{\alpha}$ as suggested by Reviewer 2pq4, then we can see that our results can still imply sub-linear regret bounds even for $\alpha > 1/2$, while the results in prior work cannot. Finally, given the importance of obtaining oracle-efficient algorithms as evidenced by the large body of work (see lines 44-50) and the lack of progress for the problem after the work of Syrgkanis et al. (2016), we consider our result to be significant. **Gap between upper and lower bounds (Reviewers yx68 and gZc2)** Currently, there is a gap between the upper and lower bounds known for our problem. The best upper bound is $O(T^{2/3}(K\log(|\Pi|))^{1/3})$ as shown by our paper, while the best known lower bound is $\Omega(\sqrt{TK\log(|\Pi|)})$, which holds for all (including non-efficient) algorithms. 
This lower bound may not be tight for oracle-efficient algorithms, however, as such algorithms do not always obtain optimal regret rates (Hazan and Koren, 2016). As correctly mentioned by the reviewers, there can potentially be a strict separation in the regret rates obtained by efficient and non-efficient (e.g., EXP4) algorithms for adversarial contextual bandits. Whether or not this is the case remains open. We briefly discuss this at the end of Section 2 and the conclusion section and will further expand on our discussion for our revised version.
NeurIPS_2023_submissions_huggingface
2023
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings
Accept (oral)
Summary: The authors propose "ToolkenGPT." This method allows language models to take advantage of external tools without full model fine-tuning, and with the ability to show more tools and with greater detail than in-context learning typically allows. They show that use of "toolkens" (tool tokens) improves performance over baselines on arithmetic, knowledge-base QA, and embodied plan generation. Strengths: In terms of originality, the paper introduces a new and innovative approach to external tool access for language models. It seems like a hybrid of fine-tuning and few-shot learning, but uses both in a novel and interesting way. Though there's not much direct comparison to prior work, it is at least acknowledged. In terms of quality, the experimental setup is sound. It seems evident that ToolkenGPT performs better than simple (but well designed) baselines. In terms of clarity, the paper is well-written and easy to follow. I haven't read the appendix in detail, but it seems to provide enough information for reproducing the experiments. In terms of significance, this work brings up new ideas that seem very useful. Weaknesses: * The biggest weakness of this paper is in evaluation. * ToolkenGPT is not directly compared to prior work. It is only compared to things along the lines of zero-shot or few-shot baselines. At the end of the paper I'm left wondering how ToolkenGPT would stack up against e.g., a full model fine-tuning method like Toolformer. I can see that ToolkenGPT allows for demonstrating more tools with more detail than few-shot. It also seems like it's probably cheaper to use ToolkenGPT than to fine-tune. But I'm not really sure of how accuracy for the two tuning methods compares. * In evaluation there are sometimes confounding variables that render results less useful. For example, in Section 4.2 5x more data is used for training ToolkenGPT (sup) than ToolkenGPT (syn). Is the improved performance due to more training samples or higher data quality? 
* Evaluation sometimes lacks specificity. For example, in Section 4.2 it's unclear whether the 30 relations ICL has access to in the, e.g., 234-relation case are required to include the correct relation. That seems like a very important detail. * Overall this paper covers an impressive breadth of evaluation, but I wish there was one in-depth evaluation where # of tools, model size, and number of training data points were carefully considered for strong few-shot and fine-tuning (e.g., fine-tuning on all the exact data used to train the toolkens) baselines along with ToolkenGPT while carefully controlling for other variables. * I think the comparison between fine-tuning, few-shot, and ToolkenGPT could be cleaner * I'm not sure of the difference between "plug-and-play" and "frozen LM" in Table 1. I wouldn't say ToolkenGPT is "plug-and-play" given the model-specific training required for embeddings. * I'm not sure what is really meant by "massive tools." What is stopping a fine-tuning method from learning such tools? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have a nice discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer QLCo for acknowledging that our approach is new, innovative and interesting, bringing up new and useful ideas to the field. Below we address all the reviewer's concerns and questions. (a) **Comparing with full-model fine-tuning**. We appreciate that the reviewer acknowledges that our proposed ToolkenGPT allows for more tools with more details than few-shot and is much cheaper than fine-tuning. We agree that a comparison with full-model fine-tuning would bolster our evaluation and help clarify the unique advantages of our method. We explicitly address such an important comparison in the global response (b). (b) **Training data for ToolkenGPT (sup) and ToolkenGPT (syn)**. We appreciate the great observation from the reviewer and provide further experimental results to investigate the influence of training sample quantity and quality. Specifically, we sample 10/20/40 training examples of each tool for both ToolkenGPT (sup) and ToolkenGPT (syn), and report the accuracy on the 30-tools test set.

RTable 5: Influence of training quantity and quality on ToolkenGPT.

| # Examples per tool | ToolkenGPT (syn) | ToolkenGPT (sup) |
| -------- | -------- | -------- |
| 10 | 0.364 | 0.564 |
| 20 | 0.464 | 0.896 |
| 40 | 0.524 | 0.948 |

RTable 5 shows the performance of ToolkenGPT (sup) and ToolkenGPT (syn) under the same amount of training data. As we can see, both the size and source of training data matter here, and the supervised data clearly contribute to the higher performance in Figure 2 of the paper. We will include this ablation study in the revised version and we hope this clarifies the question from the reviewer. (c) **Tool selection for ICL baseline**. We appreciate the reviewer for mentioning this very important detail. To illustrate this process clearly, let's index all the relations from 1 to 234. 
As explained in Line 273 of the paper, we have 4 datasets that involve the relations 1-30 (30-relation dataset), 1-60 (60-relation dataset), 1-100 (100-relation dataset), and 1-234 (234-relation dataset), respectively.

- The prompt for `ICL` includes the descriptions and the demonstrations of relations 1-30.
- The prompt for `ICL (desc)` includes the descriptions of relations 1-30 (for the first dataset) or 1-60 (for the other datasets), and demonstrations of 8 random relations.

It's worth noting that, even in the case where `ICL` / `ICL (desc)` has access to all the helpful tools for the dataset, i.e., for the 30-relation dataset, the performance is still inferior to ToolkenGPT (see Figure 2, datapoints in the first column). This indicates that even if the context length limit is resolved, in-context learning methods could still struggle to choose from massive tools. We hope this clarifies the question from the reviewer and we will state this detail clearly in the revised version. (d) **Comparison between fine-tuning, few-shot and ToolkenGPT**. We thank the reviewer for bringing up this clarification question. First, we refer the reviewer to the global response (a) for a comprehensive comparison with fine-tuning. We also address the comment on the "plug-and-play" capacity of ToolkenGPT in the global response (b). To sum up, we understand the reviewer's concern about the overlap between "frozen LMs" and "plug-and-play". The aspect of "frozen LMs" in our paper focuses on training efficiency, while "plug-and-play" is more challenging and requires a more elegant design of parameters/prompts to reuse previous tools and plug them into new tasks. Regarding the definition of "massive tools", we mean a large collection of tools that our model can learn and utilize. We agree that fine-tuning methods can learn such tools, but the innovation in our work is the ability to learn them with quick adaptation and without exhaustive fine-tuning.
When massive tools are presented, fine-tuning requires more training data (because we have more parameters to optimize) and more computing resources, which are costly to obtain, while our ToolkenGPT "scales well to a massive number of tools with limited examples", as mentioned by R-C7TV. More importantly, an implicit characteristic of "massive tools" is their dynamic nature: new tools are invented and updated every day. Thus, a growing set of massive tools makes it prohibitively expensive for fine-tuning to update model parameters frequently. We hope such a more detailed explanation resolves the reviewer's concern and we will clarify this comparison more clearly in the revised version. --- Rebuttal Comment 1.1: Comment: With respect to (a), I think including this line of experiments in the paper will be important. It seems like, based on RTable 1, the conclusion is that full fine-tuning reaches better accuracy, but at a high computation cost. It would be nice for the final version of the paper to emphasize the computation cost vs. accuracy trade-off. Thanks for clarifying (b) and (c). I think including that information in the paper will be useful. I'm happy you're planning to revise/clarify the "plug and play" term (d). While it's perhaps a small point, I still find the term "massive tools" a bit misleading. "Massive tools" seems to imply the tools are individually complex. But it seems like what you're trying to emphasize is that the model can handle many tools (regardless of their individual complexity). Anyways, it's a small thing but maybe worth some thought. Overall, thanks for the additional experiments. I trust the new results/clarifications will be added to the final version of the paper and I have accordingly recommended acceptance of the paper.
Summary: This paper proposes a new method to help LLMs utilize tools. Essentially, the method trains an additional output embedding token (toolken) for each tool. During inference, if the LLM outputs a toolken, a special routine of using the corresponding tool is triggered, and the returned result of the tool then replaces the toolken for continued inference. As a result, the proposed method can accommodate more tools and unfamiliar tools, compared to in-context prompting approaches like ReAct. The authors have shown the effectiveness and advantage of the proposed method across multiple domains including math reasoning, knowledge-based QA and embodied AI. Strengths: 1. Compared to full-model finetuning like Toolformer or TALM, the proposed method is much more efficient as it doesn't require gradients of LLM parameters and only tunes an additional output embedding matrix. 2. Empirical results on multiple domains verified the advantage of ToolkenGPT over ReAct, as it can better accommodate more tools. 3. ToolkenGPT is easy to implement and is able to be applied to various domains, which can potentially foster more research in the direction of enhancing LLMs with tools. Weaknesses: 1. Several typos are present: a) line 2, missing period after "problems"; b) line 92, missing space "whichhelps"; c) line 204, versatile *and* adaptive; d) line 234, 503.2 --> 50*3.2 2. One limitation is that ToolkenGPT requires access to model activations and is thus not capable of being applied to closed-source LLMs like ChatGPT, while a pure prompting method like ReAct doesn't have such a limitation. As such, it's desirable to compare with a baseline of ReAct w/ ChatGPT. However, in the experiments, the authors only compare with ChatGPT with 0-shot prompting (Table 2). 3. There are two parts to the proposed method: the toolken embedding and the sub-routine of the tool call. It's desirable to show some ablation of the latter part. For example, show a baseline of ReAct + sub-routine of tool call with demonstrations.
In this way, readers can better assess the benefit of each component of the proposed method. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The paper did not provide an empirical comparison between ToolkenGPT and fine-tuning approaches like Toolformer and TALM. Though you mentioned that ToolkenGPT requires much less GPU memory than those methods, is it possible to provide some empirical comparison to those methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The author has adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 5YmU for acknowledging that our proposed method is much more efficient, easy to implement, and can be applied to various domains to foster more research in the field of tool-augmented LLMs. We also thank the reviewer for spotting several typos and we will fix them all in the revised version. Below we resolve the reviewer's concerns and questions. (a) **Baseline of ReAct w/ ChatGPT**. We understand the reviewer's concern regarding the potential limitation of ToolkenGPT in its inability to be applied to closed-source LLMs like ChatGPT. From the perspective of the open-source community, we agree that a comparison with a baseline of ReAct w/ ChatGPT would provide a broader context and help further validate the efficacy of our method. As shown in Table 2, we already compare our method with ReAct, but based on the same base model (LLaMA-30B), mainly for a fair comparison. As ChatGPT is a much stronger base model than LLaMA-30B, applying ReAct to ChatGPT is expected to boost its performance. However, our ToolkenGPT should also benefit from stronger open-source LLMs, like the recent Llama 2 models, which we believe is an interesting direction for future work. (b) **Ablation study**. Thanks for the great suggestion! To better assess the benefit of each component of ToolkenGPT, we conduct an ablation study for the two main components of our method, the toolken embedding and the sub-routine of tool call (tool mode), as follows.

RTable 4: Ablation study on toolken embeddings and tool mode.

| Method | FuncQA (one-hop) | FuncQA (multi-hop) |
| -------- | -------- | -------- |
| ReAct | 0.57 | 0.06 |
| ReAct + tool mode | 0.60 | 0.07 |
| ToolkenGPT | 0.73 | 0.15 |

RTable 4 is closely related to Table 2 from the paper, where we use a hard math dataset, FuncQA, with 13 math tools. We further implement a baseline combining the ReAct prompt and the sub-routine of tool call (tool mode) from our ToolkenGPT.
As shown in this table, the tool mode used in ToolkenGPT helps improve the correctness of tool calling and marginally improves the vanilla ReAct prompting method. We also see that ToolkenGPT still outperforms this improved baseline by a large margin, indicating that our toolken embeddings greatly outperform ReAct in terms of deciding when and which tool to call. We hope these additional results provide clear evidence of the contribution of each component to the performance of ToolkenGPT. (c\) Comparing with fine-tuning approaches. Please refer to the global response (a) for the comparison with full-model fine-tuning methods, like Toolformer. --- Rebuttal Comment 1.1: Title: Response to Author's Rebuttal Comment: Thank you for providing the detailed responses. I have read them carefully and hope the additional experiments will go into your future version.
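The two components ablated in RTable 4, the toolken embeddings (deciding when and which tool to call) and the tool-mode sub-routine (executing the call and splicing the result back), can be pictured with a minimal inference loop. This is a hypothetical toy sketch, not the paper's code; `lm_step` and `TOOLS` are stand-ins.

```python
# Toy sketch of the two inference modes (hypothetical, not the authors' code):
# the LM decodes in reasoning mode until it emits a toolken, at which point the
# tool-mode sub-routine runs the tool and splices its result into the context.

TOOLS = {"<add>": lambda a, b: a + b}  # toolken -> executable tool

def lm_step(context):
    """Stand-in for one decoding step: returns (token, tool_args)."""
    if context.endswith("3+5="):
        return "<add>", (3, 5)   # the LM decides to call the <add> tool here
    return "[EOS]", None

def generate(prompt):
    context = prompt
    while True:
        token, args = lm_step(context)
        if token in TOOLS:
            # Tool mode: execute the call and splice the result into context.
            context += str(TOOLS[token](*args))
        elif token == "[EOS]":
            return context
        else:
            context += token

assert generate("3+5=") == "3+5=8"
```

The "ReAct + tool mode" baseline above keeps this sub-routine but replaces the toolken-triggered dispatch with prompted tool invocations.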
Summary: This paper presents ToolkenGPT, a novel approach to generalizing LLMs to massive external tools. The key is to represent each tool as a "token", such that when the LLM generates the tool token (toolken), the associated tool will be invoked and executed. The learning of toolkens is easy and efficient since there is no need to update the whole LLM or to backpropagate gradients through it. Toolkens are appended at the last layer of the LLM, making the cost similar to just doing inference. The authors have conducted extensive experiments and compared to proper baselines. The results are convincing. Strengths: The proposed method could make a good contribution to tool learning. In particular, the use of toolkens is a novel yet elegant way to solve the massive-tool adaptation and generalization problem. The proposed method is efficient, easy to learn and extend. The paper is clearly written and easy to follow. The evaluation is comprehensive and convincing. Weaknesses: It would make the evaluation stronger if the authors could include some of the following experiments: 1. Lacking comparison with fine-tuning-based methods, e.g., Toolformer. In numerical reasoning tasks, I believe these methods could work quite well. 2. The target is the massive tool-learning scenario; it would be great to try some tasks that involve multiple, quite different tool calls at the same time, e.g., QA and calculator. Although the experiments on VirtualHome are great, these 58 tools are mostly similar. 3. Testing whether the training and inclusion of toolkens will affect the language generation capacity, i.e., adding some NLU tasks to see if there is any performance drop. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer Mpoh for acknowledging that our proposed method is a novel and elegant way to learn massive tools efficiently, and makes a good contribution to the field of tool learning. We address the reviewer's comments on the evaluation as follows: (a) Comparing with full-model fine-tuning. Thanks for the great comment! For the comparison with Toolformer, please see our global response (a). (b) **Considering different tools simultaneously**. Thanks for the great suggestion and we appreciate the interesting setting pointed out by the reviewer. We initially focused on demonstrating the potential of ToolkenGPT in a scenario where a set of task-specific tools is used to solve one single task. As the reviewer mentioned, we've tried our best to increase the number and the diversity of such tools, i.e., 13 tools in math, 234 tools in KBQA, and 58 tools in VirtualHome. Since these tools are created for similar goals in specific tasks, they are somewhat related to each other due to the nature of the evaluated tasks. Considering distinct tools, like QA and a calculator, would require creating more challenging scenarios and tasks. We believe that such a set of experiments involving multiple, diverse tool calls could further highlight the flexibility and capability of ToolkenGPT, and serves as a realistic, multi-tool scenario for future work. (c) **Influence of toolkens on LLM generation**. Thanks for such an interesting comment! The potential influence on language generation is an interesting angle to discuss further. First of all, the training of toolkens won't affect the generation capability of LLMs, as it doesn't change any pre-trained parameters used for generation. Second, for some NLG tasks like summarization that don't require external tools, the pure generation capacity of LLMs is enough, and thus we can easily disable the learned tools by masking their toolkens in the output vocabulary, keeping the original generation capacity intact.
This serves as a demonstration of the plug-and-play capacity of ToolkenGPT, as explained in the global response (b). Last but not least, in the most generic setting where we just keep the learned toolken embeddings and simply prompt the LLM for a generation task, we conducted an additional experiment to test the influence of toolken embeddings on NLG tasks. Specifically, we leverage the trained toolkens from the math datasets and apply ToolkenGPT to two data-to-text generation datasets, WebNLG [1] and E2E [2]. We performed 10-shot learning (shots _randomly picked_) on the dev set of the WebNLG dataset (872 examples) and the test set of the E2E dataset (630 examples). We report a traditional NLG metric, the ROUGE-2 score from Huggingface evaluate, and use greedy decoding for the generation.

RTable 3: Performance of ToolkenGPT on NLG tasks.

| Dataset | Llama-30B | Llama-30B w/ toolkens | \#Func Calls |
|--------------|----------|-----------------------|------------|
| WebNLG | 0.56 | 0.56 | 0 |
| E2E | 0.46 | 0.46 | 0 |

RTable 3 shows that the inclusion of toolkens does not have any influence on the generation capabilities of LLMs on these two NLG tasks. As we can see from RTable 3, toolkens are not activated during generation, with the number of function calls being zero. However, in rare cases where the generation task is very similar to the toolken training text, the LLM may mistakenly call external tools, which is an interesting problem to investigate in the future. Nonetheless, we hope these additional results further clarify the impact of toolken embeddings on NLG tasks and we will include them in the revised version. **References**: [1] Claire Gardent, Anastasia Shimorina, Shashi Narayan, and Laura Perez-Beltrachini. The WebNLG challenge: Generating text from RDF data. INLG, 2017. [2] Jekaterina Novikova, Ondřej Dušek, and Verena Rieser. The E2E dataset: New challenges for end-to-end generation. arXiv preprint arXiv:1706.09254, 2017.
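The tool-disabling mechanism mentioned above, masking toolkens in the output vocabulary, can be sketched in a few lines. This is a hypothetical simplification, not the paper's implementation: word and toolken logits are concatenated, and setting the toolken logits to negative infinity recovers the original LM distribution exactly.

```python
import math

# Hypothetical sketch (not the authors' code): disabling learned toolkens by
# masking their logits before the softmax, which restores the original LM
# distribution over word tokens exactly.

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def next_token_probs(word_logits, toolken_logits, tools_enabled=True):
    """Concatenate word and toolken logits; mask toolkens when disabled."""
    if tools_enabled:
        logits = word_logits + toolken_logits
    else:
        logits = word_logits + [float("-inf")] * len(toolken_logits)
    return softmax(logits)

word_logits = [2.0, 0.5, 1.0]   # original LM vocabulary (3 word tokens)
toolken_logits = [1.5]          # one learned toolken, e.g. <power>

with_tools = next_token_probs(word_logits, toolken_logits, tools_enabled=True)
without_tools = next_token_probs(word_logits, toolken_logits, tools_enabled=False)

# With toolkens masked, word-token probabilities match the original LM exactly.
assert without_tools[:3] == softmax(word_logits)
assert without_tools[3] == 0.0
```

Since `exp(-inf)` is exactly `0.0`, masking contributes nothing to the normalizer and the word-token distribution is untouched, which matches the RTable 3 observation that toolkens leave NLG outputs unchanged when never activated.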
Summary: The paper presents ToolkenGPT, a framework for extending LMs with tool use. For each new tool, a new token is added to the output vocabulary of the LM and the embedding for that token is trained with annotated or synthetic examples. When the LM generates that token, the model is switched to a different mode and prompted with examples for that particular tool in order to generate the necessary arguments for that tool call. Strengths: The method seems novel and useful, pushing forward the area of research on extending LM capabilities by teaching them to call external tools. Weaknesses: Evaluation on three different types of tasks is great. However, the choice of baselines is the main weakness of the paper. The method seems to use substantial amounts of data for fine-tuning the token embeddings, whereas all the chosen baselines are only restricted to zero-shot or few-shot prompting. Comparison with Toolformer, which fine-tunes the whole model using similar data, would be more fair. There are a number of writing errors. Please make sure to proof-read the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: On line 152, "and does requires the gradients of LLM parameters" should presumably be "and does not require the gradients of LLM parameters"? When generating the training data, do you do any filtering to make sure that the tool can actually correctly handle the given query? Seems like that would help. For most of the tasks it is unclear how many synthetic data examples you generate. Please clarify. With in-context learning and 234 tools to describe, are you actually able to fit all the tool examples or descriptions into the context? Or does some of this information simply get truncated? Please clarify. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The potential risks of letting LMs make calls to external APIs, with automatically generated arguments, should also be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer xBHJ for acknowledging that our method is novel and useful, and will push forward the field of tool-augmented LLMs. We appreciate the reviewer's great suggestions and questions. We will further proofread the paper and fix the typos in the final version. Herein, we address the reviewer's questions as follows: (a) **Choices of baselines**. Thanks for the thoughtful comment! We thank the reviewer for recognizing the coverage of our evaluation and please refer to our global response (a) for following the reviewer's suggestion to compare with Toolformer and fine-tuning a model on similar data. (b) **Training data generation and filtering**. Thanks for the great clarification question and we apologize if the process of generating training data is not very clear in the experiment section. As shown in the appendix, for math datasets, we prompt ChatGPT to produce questions and their corresponding answers. Instead of giving specific numbers, we instruct the model to use placeholders that follow our particular format. We retain only those cases that are parsable and align with our format requirements. Afterward, we programmatically assign random numbers and compute the results. For example, ``` Question: If the price of a stock increases by 10% every day, by how many times will its initial value increase after [ARG_0] days? Answer: After [ARG_0] days, the value of the stock will increase by a factor of 1.1^[ARG_0]=<power>(1.1, [ARG_0])=[ANSWER] Args: ['int'] ``` (c) **Number of synthetic data examples**. Regarding the training samples we synthesized, for the math dataset FuncQA, we synthesize 50 examples for each operation; for KAMEL, we synthesize at most 50 examples per relation (some examples are not successfully parsed and removed from the dataset, leading to 40 examples on average per relation). (d) **In-context learning on 234 tools**. Thanks for the question! It would be infeasible to fit all tool examples into the context. 
As indicated in Section 4.2 and Figure 2, we tried our best to fit as many tools as possible into the context, reaching the 2048-token limit of LLaMA. The detailed mechanism of this baseline will be clarified in the revised version. (e) **Discussing potential risks of connecting LLMs with external APIs**. We appreciate this great comment on the potential risks of tool-augmented LLMs. We believe addressing such potential risks requires community-wide efforts. We will delve deeper into this issue in the future work section of our revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have reviewed the reply and stick with my original score, which is already quite high. I think the inclusion of RTable 1 is useful - as this is a parameter-efficient method, it is expected that it doesn't perform quite the same as some more resource-hungry approaches. However, RTable 2 doesn't seem particularly useful. It just shows that Toolformer was trained for longer using more powerful GPUs - something that could be easily done with ToolkenGPT as well. Without knowing the performance, or at least training time, for both systems, there is no real comparison point.
Rebuttal 1: Rebuttal: We are grateful to all reviewers for their detailed and constructive feedback! We are encouraged to see that reviewers find:

- Our proposed ToolkenGPT is "innovative, novel and useful", "a novel yet elegant way" (R-C7TV, R-xBHJ, R-QLCo, R-Mpoh), which "brings up new ideas", "fosters more research" and "pushes forward the area of research" on tool-augmented LLMs (R-xBHJ, R-5YmU, R-QLCo);
- ToolkenGPT is "efficient, easy to learn and extend" (R-Mpoh), offers a "much more efficient" method (R-5YmU) that "scales well to a massive number of tools with limited examples" (R-C7TV);
- Our "experimental setup is sound" with "well-designed baselines" (R-QLCo), our "empirical results on multiple domains" verify its advantage (R-5YmU), and "the evaluation is comprehensive and convincing" (R-Mpoh);
- ToolkenGPT "is easy to implement and is able to be applied for various domains" (R-5YmU), and we "provide enough information for reproducing the experiments" (R-QLCo).

We have addressed all the questions raised by reviewers with additional experiments and thorough clarifications via separate responses to each reviewer. There are two common concerns we would like to address in the global response. (a) **Comparison with full-model fine-tuning**. As shown in Table 1 of the paper, there are many advantages of ToolkenGPT in comparison with full-model fine-tuning, including training efficiency, quick adaptation to new tools, scaling to massive tools with limited examples, etc. While most reviewers agree on the above advantages, it is also interesting to compare with full-model fine-tuning in terms of computation efficiency and performance. We implement different methods with Llama-7B on FuncQA, and show the comparison in RTable 1. RTable 1: Comparison between ToolkenGPT and fine-tuning (using LoRA) in terms of training cost and performance on the FuncQA dataset (using Llama-7B).
| Model | One-hop | Multi-hop | Computing Resource | Training Time (all) |
|-----------------|---------|----------|---|---|
| Fine-tune w/ LoRA | 0.62 | 0.074 | 1 \* A100 (80G) | ~40 min |
| ToolkenGPT | 0.55 | 0.058 | 1 \* RTX3090 (24G) | ~2 min |
| ReAct | 0.40 | 0.030 | - | - |
| Baseline | 0.10 | 0.00 | - | - |

Though fine-tuning performs better than ToolkenGPT, it is significantly more expensive. Even though we use LoRA (parameter-efficient fine-tuning) and more computing resources (e.g., more GPU memory, to increase the batch size), fine-tuning still costs about 20× more time than training toolken embeddings. We also compare ToolkenGPT and Toolformer [1], a recent full-model fine-tuning method, in terms of efficiency on the learning of mathematical tools. Note that it's infeasible for us to compare performance, as Toolformer is not open-sourced and is based on GPT-J, a different base LM than ours (Llama). It's also computationally infeasible for us to reproduce Toolformer, given the extreme computational cost shown in the table below.

RTable 2: Comparison between ToolkenGPT and Toolformer (math).

| Method | Computing Resource | Number of Training Examples | Time (per epoch) |
|---|---|---|---|
| ToolkenGPT (30B) | 4 \* RTX3090 (24GB) | 5k | ~16min |
| ToolkenGPT (13B) | 2 \* RTX3090 (24GB) | 5k | ~7min |
| ToolkenGPT (7B) | 1 \* RTX3090 (24GB) | 5k | ~5min |
| ToolFormer | 8 \* A100 (40GB) | 25k (only for math) | Unknown |

RTable 2 presents the comparison of computing resources and time for training our ToolkenGPT and Toolformer. It is clearly evident that our ToolkenGPT can be trained with far fewer computing resources than the full-model fine-tuning method, Toolformer. Note that Toolformer's training time is not reported in the original paper. (b) **Plug-and-play capacity of ToolkenGPT**. As mentioned by R-C7TV and R-QLCo, the definition of "plug-and-play" in Table 1 of the paper could be further clarified.
By “plug-and-play”, we mean that new toolken embeddings are disentangled from other parameters, and can be enabled/disabled or directly added to an existing tool set effortlessly. The "plug-and-play" defined in the paper doesn't mean there is no need to train the embedding; rather, it means that we are able to reuse the trained toolken embeddings and plug them into any downstream setting flexibly when needed. We will clarify this term in the updated paper. Besides, it's also possible to train toolken embeddings without preparing training data manually. As demonstrated in Section 4.1 (FuncQA) and 4.2 (ToolkenGPT-syn), we can synthesize the training data for new tools. We can observe that ToolkenGPT (syn)---no in-domain training data is used---outperforms strong CoT baselines. **References** [1] Schick, Timo, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. "Toolformer: Language models can teach themselves to use tools." arXiv preprint arXiv:2302.04761 (2023). [2] Miao, Shen-Yun, Chao-Chun Liang, and Keh-Yih Su. "A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers." In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 975-984. 2020.
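The disentanglement behind "plug-and-play", toolken embeddings trained while every LM parameter stays frozen, can be illustrated with a toy sketch. This is an assumption-laden simplification, not the paper's implementation: a single trainable toolken row receives hand-computed cross-entropy gradients while the word-embedding rows are never updated.

```python
import math

# Toy sketch (hypothetical, not the authors' code): learning one toolken
# embedding against a frozen LM output head. Only the toolken row is updated.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

W_words = [[1.0, 0.0], [0.0, 1.0]]   # frozen word-token output embeddings
w_tool = [0.0, 0.0]                  # trainable toolken embedding row
h = [0.5, -0.2]                      # hidden state where the tool should fire
lr = 1.0

for _ in range(50):
    logits = [dot(h, w) for w in W_words] + [dot(h, w_tool)]
    probs = softmax(logits)
    # Cross-entropy gradient w.r.t. the toolken row only: (p_tool - 1) * h.
    grad = [(probs[-1] - 1.0) * hi for hi in h]
    w_tool = [w - lr * g for w, g in zip(w_tool, grad)]

final_probs = softmax([dot(h, w) for w in W_words] + [dot(h, w_tool)])
assert final_probs[-1] > 0.5          # toolken now most likely at this state
assert W_words == [[1.0, 0.0], [0.0, 1.0]]  # frozen LM rows unchanged
```

Because `w_tool` is the only parameter touched, the learned row can later be appended to (or removed from) the output head of the same frozen LM, which is the sense of "plug-and-play" described above.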
NeurIPS_2023_submissions_huggingface
2023
Summary: This study introduces an approach to teach Large Language Models (LLMs) to utilize numerous tools through learning embeddings for these tools. The embeddings are incorporated into the LLMs' output layer, invoking the use of tools at relevant steps. The inference process alternates between reasoning and tool-use modes to tackle novel tasks. Empirical results validate the efficacy of ToolkenGPT in utilizing a massive number of tools across various tasks, including numerical reasoning, knowledge-based question answering, and embodied plan generation. Strengths: - The proposed method is innovative and scales well to a massive number of tools with limited examples. - The method is effective across a variety of tasks and demonstrates advanced multi-turn planning capability. - The paper is well-structured and clearly written. Weaknesses: - While the authors assert the training efficiency of ToolkenGPT, empirical results reflecting computation costs and performance comparisons are missing. - Given the current feasibility of instruction tuning (e.g., Alpaca, Vicuna), it would be beneficial to compare the proposed method with the fine-tuning of the entire model using the same training data. This comparison could clarify if there's a tradeoff between performance and computation cost. - Although the authors claim a plug-and-play capability for ToolkenGPT, this feature hasn't been evaluated in the empirical studies. Additionally, if I understand correctly, we still need to train the embeddings for new tools. The authors should clarify this plug-and-play capability in more detail. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Is it possible to perform instruction tuning using the same training data? - Can ToolkenGPT facilitate the zero-shot plug-and-play utilization of new tools? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed the limitations thoroughly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer C7TV for acknowledging that our method is innovative, effective, and scales well to massive tools with limited examples. Below we address the reviewer's comments and questions point by point. (a) Computation costs and performance. Thanks for the great comment! We understand the importance of demonstrating computational efficiency; we present additional results on computation resources and performance in the global response (a) and will include these results in the revised version. (b) Comparing with full-model fine-tuning. Thanks so much for the great suggestion! Please refer to the global response (a) for a direct comparison with full-model fine-tuning. \(c\) Clarifying plug-and-play capability. Thanks for this great clarification question! We explained the term plug-and-play in the global response (b), and we will update the paper in the final version to make it clearer. (d) Feasibility of instruction tuning. We appreciate this great suggestion! Based on our understanding, instruction tuning for tools overlaps substantially with full-model fine-tuning using the same training data, as explained in the previous response (b). Some minor revisions might include mixing multi-tool data together and changing the prompt format a little. Therefore, we believe instruction tuning is feasible with the same training data, although the computation cost may vary depending on the training method used. Further investigation into tuning the full model on a collection of API instructions is a very promising direction for future research on tool-augmented LLMs. (e) Zero-shot plug-and-play for new tools. Thanks for the great suggestion! Please refer to our global response (b) for a discussion of the "plug-and-play" capacity of ToolkenGPT. --- Rebuttal Comment 1.1: Comment: Thank you for providing the detailed responses. My concerns are well addressed.
Regularity as Intrinsic Reward for Free Play
Accept (poster)
Summary: This work proposes an exploration approach that prioritizes "regularity" in the exploration behavior, in contrast to typical novelty-seeking exploration. The objective is defined as the minimization of entropy over user-defined, object-centric features. The method is primarily evaluated on block rearrangement in the robotic manipulation setting. --- **8/22/23 Update after author response** My major concern with this paper was the generality of RaIR. In my opinion, there are two ways to create a good exploration method. The first is to come up with a general method with little-to-no assumptions that works across a variety of tasks. The second is to create a specialized method, be upfront about the assumptions made, and get very good performance by leveraging those assumptions. Before the rebuttal, RaIR was introduced as a general exploration approach, yet its experiments were limited to stacking objects of identical shapes and sizes. While stacking is a very hard exploration challenge, I expected to see environments beyond object manipulation like locomotion, video games, etc. Furthermore, RaIR does depend on the key assumption of object-centric state representation, which it uses in both its objective and dynamics model. The authors provided some followup experiments in the rebuttal. They evaluated their method on a more generalized version of their construction environment, as well as on a Quadruped locomotion environment. They also provided some additional new environments (RoboDesk, Walker), but did not run exploration experiments on them, only sanity checks. So to conclude: **My concern on generality of the method:** With the new exploration experiments on generalized construction and quadruped, the authors have shown that RaIR can handle diverse objects, and can work in a single non-manipulation task. Therefore my concern is somewhat alleviated. 
I would recommend that the authors show more diverse environments in general, though, to reassure future readers that RaIR is not engineered towards a particular type of task. **My other concerns about object-centric assumption, baselines, related work, novelty, etc:** The authors have done a good job of addressing my concerns here. I hope these discussions will go into the next version of the paper and influence how RaIR is presented. As such, I will raise my score to 5. The authors have passed the threshold for addressing my main concern on generality, given the limited rebuttal time. For future improvements, I would recommend the following: 1) Continue adding more diverse environments to show RaIR is a general exploration method. 2) Study the assumption of object-centric representation a bit more - what happens if we use RaIR with an unstructured dynamics model? Will the regularity objective still work? Strengths: - The high-level idea of using regularity to focus exploration is well motivated, and addresses the weaknesses of the more common novelty-seeking exploration methods. - This method explores well in block stacking / rearrangement, a hard exploration problem. The experimental section analyzes the block stacking task extensively. - The manuscript is generally well written. Weaknesses: My main concern is over the scope and generality of this method. While the introduction motivates regularity as a general-purpose exploration objective, the actual implementation and the assumptions made to implement the objective restrict it to a very particular set of tasks - manipulation of identical and well-characterized objects. I would have expected, from the introduction, a method that could apply to all sorts of domains (e.g. robotic locomotion, video games) instead of only object manipulation tasks. In RaIR's environments, all objects are the same shape and have the same physics.
For the construction environment, the authors only apply RaIR to the x-y state space to bias towards vertical alignments. However, this x-y assumption would break if the objects were not identical, since the vertical alignment would now be object-dependent. Another major limitation (which I appreciate the authors mentioning in Sec. 5) seems to be the assumption of a known state space and dependence on user features. RaIR seems to hinge on having a state space that is interpretable, so that the user can provide user-defined features to bias the exploration. I am concerned about the scalability of providing user-defined features, particularly if the state space becomes more complex or uninterpretable (e.g. pretrained ImageNet representations). However, if RaIR claims to be a general exploration method, it should address such problems. Finally, the experiment section is lacking baselines. There is only one novelty-seeking baseline that the authors compare against, which is by design going to be suboptimal in assembly tasks. I would like to see how RaIR compares to an exploration baseline that actually seeks regularity in its objective as well. I believe SMiRL (1) to be a viable baseline, as it also seeks to minimize surprise. I would also like to see some baselines from the unsupervised goal-conditioned RL literature, as those methods also do stacking. ## Moderate concerns: - Claims of Novelty - The authors claim in the first line of the abstract that they propose regularity as a novel reward signal for intrinsic motivation. However, the high-level idea of regularity / stability / niche-seeking has been explored as an intrinsic motivation signal for RL (1), and more generally in the active inference literature (2). I would like the authors to revise their claims of novelty and include the relevant literature. - Related work is rather short. Only a small paragraph is dedicated to intrinsic motivation.
A rather long paragraph is dedicated to Compression, which is not necessary. I would recommend the following changes. - Expand on intrinsic motivation in RL, particularly methods that also do manipulation tasks and how RaIR compares to them. Mention (1,2) as well, as they also explicitly mention concepts similar to regularity (stability, niche-seeking). Another omission is the unsupervised skill discovery literature, where some works (3) also show manipulation tasks. - Paragraph on compression is unnecessarily long and detailed - it is sufficient to briefly mention compression progress and move the rest to the appendix if necessary. That space should be used mainly on Intrinsic Motivation. ## Minor concerns: - Line 84: what is $n_a, n_s$? - Line 124: I found the "Relational RaIR of order K" to be a bit abstract, and I needed to read it a few times over to fully understand it. Having a figure like Figure 3 for this section would have been very helpful. - Line 220: Please point to a figure. - Recreation experiment - while the recreation behavior is interesting, the authors do not include any practical applications or downstream implications for this phenomenon. I would prefer this section be moved to the appendix and the space be used for other things (e.g. experiment in new domain) or extended related work. 1. Berseth et al, SMiRL: Surprise Minimizing Reinforcement Learning in Unstable Environments, ICLR 2021 2. Friston et al, Active inference and learning. Neuroscience & Biobehavioral Reviews 2016 3. Zhao et al, Mutual Information State Intrinsic Control, ICLR 2021 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the scope of RaIR? I can see two ways to present RaIR. One way, and the way the paper is written, is to promise a broad scope. However, can RaIR be applied to more complex and realistic settings both in the manipulation domain and beyond it? 
I would suggest experiments that show RaIR can be applied to other domains and in more complicated scenarios. The other way is to narrow down the scope and claims of RaIR to an intrinsic reward for object manipulation, which itself is a very challenging and interesting domain. I would like to see experiments in more realistic manipulation setups (different shapes, sizes, masses, etc.), as mentioned above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors did address a main limitation, which is the assumption of an object-centric, interpretable state space. Another limitation not mentioned is the need to define task-specific features for RaIR. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback on our work. We address their concerns in the following. # Generality of RaIR We have provided new experiments in new domains in the rebuttal pdf (Fig. D2-D4). As the reviewer suggested, we have included RaIR-generated arrangements in locomotion domains (Quadruped and Walker), Construction with custom shapes, and RoboDesk. For Quadruped and Walker, we take the hips, ankles, toes etc. to be the entities over which we compute RaIR. We see the emergence of interesting regular poses that also happen to coincide with goal poses proposed in the Roboyoga benchmark [1]. For Custom Construction with shapes, we include not just cubes (mass (m) = 2), but also a flat block (m=1.5), a short column (m=1) and a ball (m=1). In this case, we compute RaIR again on the objects’ x-y positions (center of mass (CoM)). We show that we are still able to generate stacks and regular constellations. We would also argue that there is no limitation within RaIR that would cause it to not work for objects with different shapes. If we have very complicated geometries, e.g. composite shapes, we might want to represent each entity not just with its CoM but instead add new keypoints. We also showcase RaIR in RoboDesk, where the entities are very varied: a drawer, a slide, buttons, two different blocks and a ball. Here, with regularity, we observe the agent using the ball or the block to press buttons and stacking objects together. We hope that the new experiments better illustrate the generality of RaIR as a concept. *RaIR seems to hinge on having a state space that is interpretable* We refer the reviewer to our general response, where we discuss this at length. We want to briefly reiterate here that we use proprioceptive information for objects (positions, orientation and velocities), which itself is not user-defined and is specified by the environment.
We only assume knowledge of which subspace of the state corresponds to object positions. We find the inference of object positions from images to be an orthogonal research problem, with ongoing work showing promising results. Note that using ImageNet features alone does not help solve the control problem, as the focus there is on semantic understanding for visual learning tasks [2]. In this case, additions are needed to align the representation and actions, i.e. control over the environment. One could create regularity spatially with the added constraint of compressing ImageNet features (e.g. object labels), similar to the color experiment we show in Fig D1. # Novelty of RaIR There is a fundamental difference between our notion of regularity and the niche-seeking proposed in SMiRL and the active inference literature: in all the environments showcased in SMiRL, the environment itself is dynamic (or unstable) and the agent wants to reach stable and familiar states. For example, not falling off a cliff, or avoiding enemies that are shooting at you. Our notion of regularity is completely decoupled from surprise and is in fact closely related to compression: we want to find a state description of the current scene that is regular and compressible within itself. Methods such as SMiRL require an active source of entropy. In environments that are static unless you act on them, which is the case for all environments we consider, we strongly believe SMiRL wouldn’t work due to the Dark Room problem, and the agent would avoid doing anything. This is also what happens when you penalize novelty in object-manipulation environments: the agent doesn’t touch any objects to make sure everything is predictable. We will revise the related work section to make these distinctions clearer as suggested by the reviewer, and extend the related work discussion to include the skill-discovery literature as well.
# Baselines We have included new baselines in the rebuttal pdf, most notably RND (see general response for more details). RND aims for state space coverage and can be seen as an extension of count-based metrics to continuous domains, and thus has no preference for chaos. We showcase that it performs worse than both CEE-US and our regularity-augmented version RaIR + CEE-US on the Singletower 3 task. It is important to highlight here that the main goal of our work is to introduce RaIR as an additional reward signal to drive exploration towards regularities. As such, RaIR in principle can be used to augment different intrinsic reward functions. In our paper, we illustrated this on the example of CEE-US+RaIR, as CEE-US was shown to beat the other baselines (both policy- and planning-based) in [3]. Note that in unsupervised goal-conditioned RL paradigms, the driving exploration force is still commonly ensemble disagreement, as in LEXA [1] and PEG [4]. In Sec. 4, we did mention PEG and its stacking performance in a similar environment. In principle, RaIR could also be plugged into these methods as a goal-picking strategy, augmenting ensemble disagreement. # Addressing minor concerns - $n_a$ is the action space size and $n_s$ is the state space size. - Abstract order-k: We indeed aimed to generalize RaIR and abstract away from mapping functions such as distance or difference (with k=2) to functions that operate over k-tuples corresponding to k objects. In Figure 2b, you can find an illustration for k = 2 that we think is useful for clarifying k > 2. - Line 220: We will point to Figure 1 as the reviewer suggested. - Recreation experiment: Our goal with these experiments was to showcase that we have a way to automatically pick up regularities in the environment as you observe them. [1] Mendonca et al (2021), Discovering and Achieving Goals via World Models. NeurIPS [2] Sharma et al (2023), Lossless Adaptation of Pretrained Vision Models for Robotic Manipulation.
ICLR [3] Sancaktar et al (2022). Curious exploration via structured world models yields zero-shot object manipulation. NeurIPS [4] Hu et al (2023). Planning Goals for Exploration. --- Rebuttal Comment 1.1: Title: Appreciate the progress; Still doubts remaining. Comment: I thank the authors for their efforts so far; it is a good start. On my concern of generality - the new environments with increased variation and the results of RaIR with ground-truth dynamics models are promising. To be fully convincing, the authors should follow the same experiment protocol as the previous experiments, e.g. learn the model while training, and have some set of evaluation metrics instead of qualitative snapshots. A "nail-in-the-coffin" experiment would be to use real object meshes, e.g. from the YCB dataset, and show RaIR can find interesting configurations. On my concern about novelty with respect to entropy-minimization methods like SMiRL - I still believe RaIR and SMiRL are similar due to their entropy minimization objective for exploration. Just like SMiRL, I would not expect RaIR to work in static environments without some external force or exploration policy driving disturbances (for RaIR, you are using P2E to do this). Is it fair to say that if we ran SMiRL with a novelty-seeking bonus, this method would be similar to RaIR without a user-defined prior over which dimensions to compute entropy over? If so, then I think it would be natural to run a "SMiRL-like" baseline that is RaIR with entropy minimization over all dimensions. On my concern about how user-defined state spaces are not general - the authors claim that the object information (e.g. object orientation) is defined by the environment and not the user. However, the environment can arbitrarily decide the object orientation, which can be a problem. Let us consider a cylinder in the stacking case.
There are two possible ways to align the XYZ frame - align the XYZ axis so the up vector points out of the flat circle of the cylinder, or align the XYZ frame so the up vector points out of the rounded edges. (I attempted to add 2 drawings for illustration). ``` 0===0 --> ``` ``` || ---> || ``` So if this frame is arbitrarily decided by the environment, then it will be hard, if not impossible, for the user to define something like an "XY" prior, since the frames are different per object. Finally, a minor comment - "proprioceptive" information means the robot body info (e.g. joint positions, velocity) and not info about external objects. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their quick response. In this response we would like to clarify a few things. ### Comparison to SMiRL The distributions over which we compute entropy, compared to SMiRL, are completely different. In SMiRL, the agent considers the entropy of its state marginal distribution under its current policy $\pi_{\phi}$ at each time step. This is done by fitting a density model with parameters $\theta$ over the environment steps collected so far in that rollout / episode ($\{s_0, \dots, s_{t-1}\}$), and the reward is computed as the log probability $\log p_{\theta_{t-1}}(s_t)$. In summary, SMiRL seeks **familiar** states that are seen in the data collected so far **within that rollout**. We tried the SMiRL reward with ground-truth models in our manipulation environment and in the quadruped (ant) locomotion environment. In the manipulation environment it simply lets everything stay as it is and does nothing. In the quadruped environment, it tries to go back to the initial state or the first stable pose that is reached. There is no incentive to be stable in a regular or symmetric pose. Note that the authors of SMiRL themselves promote this reward signal for **unstable** environments. We also want to clarify a misunderstanding: **RaIR without any novelty-seeking objective still works**.
For free play, we showcase results with pure RaIR ($\lambda=0$) in Figure D6 in the rebuttal pdf as well as the supplementary figures S6, S7 and Table S1. RaIR alone can solve the stacking-3-objects task with a 64% success rate, better than pure ensemble disagreement. Performance improves when we add ensemble disagreement in RaIR+CEE-US, because 1) disagreement helps exploration via the information-gain objective, and 2) since the agent targets the places where it disagrees, i.e. has high epistemic uncertainty, it becomes its own adversary, and this in our experience leads to a more capable world model (which can also be seen in comparison to the RND baseline using the same backbone). In the setting with ground-truth models, all configurations are obtained by only optimizing for RaIR. SMiRL could not be used in this way. SMiRL with a novelty-seeking bonus is NOT similar to RaIR. SMiRL just makes sure a certain configuration is not altered or reoccurs during a rollout, i.e. a constancy bias. And we want to emphasize again that in our work regularity refers to the repetition of a certain pattern, such that there is redundancy in the description of a state. One type of redundancy is symmetries. Even an unstable configuration, e.g. a vertical headstand in the air in the Walker environment, has high RaIR because it is a symmetric state, even though the agent won’t be able to keep the pose. ## Concern about user-defined state spaces We believe there is a misunderstanding: First of all, we would like to mention that for regularity as such, there is no universal representation or universal measure. It is similar to the *no free lunch* principle -- you have to commit to a particular inductive bias. Whether it is the environment or the agent deciding on this is more of a philosophical question.
Nevertheless, if object positions are given in a certain frame of reference, we find regularity in this frame (the world coordinate frame or an agent-centric one) when direct RaIR is used, while with relational RaIR (the default case in our work) we are independent of the frame (as long as all objects are represented in the same frame of reference). Our symbols are the difference vectors between the centers of mass of different entities. Currently, we are not considering orientation as a symbol. Note that even when RaIR is orientation-agnostic, finding a stable configuration works because if you put the cylinder in a configuration where it doesn’t roll over, such that you can stack another object on top, you have much higher RaIR. In the example you gave, when considering orientations, simply a change in orientation of the lower object would yield a higher RaIR, so this could be found by the agent. If the representation of orientation is adversarial, such that alignments would yield a hard-to-stack configuration, then orientation can be added in as a secondary constraint in hierarchical RaIR: compute RaIR once for only positions and once for `concat([positions, orientations])`, and sum these two costs. In this case regularity in orientation is attempted in addition, if possible.
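To make the computation concrete, here is a minimal, hypothetical sketch of order-2 relational RaIR and the hierarchical variant described above. The grid discretization via `bin_size`, and all function names, are our own simplification for illustration, not the exact implementation in the paper:

```python
# Hypothetical sketch of relational RaIR (order k=2). Symbols are discretized
# pairwise difference vectors between entity positions; RaIR is the negative
# empirical entropy of these symbols, so regular (repetitive) arrangements
# score higher. The bin_size discretization is an illustrative assumption.
import itertools
import math
from collections import Counter

def rair(positions, bin_size=0.5):
    """Negative entropy of discretized pairwise difference vectors."""
    symbols = [
        tuple(round((a - b) / bin_size) for a, b in zip(p, q))
        for p, q in itertools.permutations(positions, 2)
    ]
    counts = Counter(symbols)
    n = len(symbols)
    # Maximum (zero) when all difference vectors collapse to one symbol.
    return sum((c / n) * math.log(c / n) for c in counts.values())

def hierarchical_rair(positions, orientations, bin_size=0.5):
    """Position-only RaIR plus RaIR over concat([positions, orientations])."""
    combined = [tuple(p) + tuple(o) for p, o in zip(positions, orientations)]
    return rair(positions, bin_size) + rair(combined, bin_size)

# A perfect tower (identical x-y positions) is more regular than scatter.
tower = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
scatter = [(0.0, 0.0), (3.0, 1.0), (7.0, 5.0)]
assert rair(tower) > rair(scatter)
```

Because the symbols are pairwise differences, the measure is invariant to the common frame of reference, matching the frame-independence argument above.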
Summary: This paper presents a new approach that utilizes regularity as an intrinsic reward, which aims to regularize, or constrain, the search space of exploration towards state regions where the entities are more likely to have structured patterns. To this end, the paper first formulates how to define regularity as an intrinsic reward and investigates several candidates that can be used as a regularity measure. This intrinsic reward is then combined with disagreement-based intrinsic rewards to guide the exploration. Experiments show that optimizing this intrinsic reward (by using the ground-truth models) indeed leads to behaviors that induce structured patterns in the environments, and further show that this can be useful when everything is learned end-to-end. Strengths: - Interesting formulation based on a good intuition. I enjoyed reading the paper. - Clear writing, helpful for understanding the method. This should be highlighted, as understanding the concept would have been difficult without the clear writing. - Experiments are conducted to support the claims by using ground-truth models and learned models. It starts by showing the potential benefit of using the proposed reward with the ground-truth model, then shows that it can be useful with learned models, and finally shows that it can be useful for improving the performance on downstream tasks. Weaknesses: - In contrast to the clear writing up to the Method section, the Experiments section is a bit difficult to parse; in particular, it is difficult to understand the metrics used in Section 3.2. Even though they are explained in the text, further emphasis and a clearer description of the meaning of the metrics (e.g., what relative time means, how to interpret objects in the air or flipped) could be helpful. - As the authors already highlighted in the Limitation section, this method severely depends on the accessibility of ground-truth internal states of all the entities in the environment, which is not feasible in practice.
But this is understandable, as learning such information is not the main focus of the paper. - Another limitation of this paper is the difficulty of balancing (i) the chaotic exploration from novelty-seeking intrinsic rewards (in this paper, the disagreement-based reward) and (ii) regularity-seeking intrinsic rewards. Of course there could be cases when injecting our prior of preferring structuredness can be helpful, but sometimes it's not, as can be seen in the Throw and Flip experiments in Table 2. Would there be a way to automatically balance these two rewards, or in general, how can we control the magnitude of injecting our prior into the exploration process? - Experiments are a bit limited, as only a specific type of structuredness is investigated (for simplicity). Technical Quality: 3 good Clarity: 3 good Questions for Authors: I don't have further questions other than the ones described in the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their assessment and feedback. # Generalizability of RaIR We have now added new experiments where we showcase RaIR in different scenarios. In order to address the concern of focusing on one specific type of structuredness, we have tested a variant of ShapeGridWorld, where we added color as part of the object representation to be compressed for regularity. In this case, we not only have x-y positions of the circles as symbols to be compressed, but also have a color encoding (as a one-hot encoding), such that RaIR aims to generate regularities not just spatially but also in the arrangement of colors. Examples can be seen in the uploaded rebuttal pdf with 2 and 3 colors (Fig. D1). We have also optimized for RaIR in the DeepMind Control Suite environments Quadruped and Walker, where we were able to obtain some interesting regular poses via optimizing for RaIR with ground-truth models. Note that these poses mostly coincide with the goals in the Roboyoga benchmark [1]. This adds to our point that seeking regularity is reflected in human designers’ idea of interesting goals and is a very fundamental bias that pops up in various domains. # Balancing Regularity and Chaos As is the case with any injection of bias, we enter a trade-off where we have some advantages and some disadvantages. As the reviewer also hinted at, if we build in our preference for structured behavior, we accept sacrificing some more “chaos”-preferring behavior. This can in fact be seen as a resource-allocation problem. If the agent only has limited play time, what should it focus on to best prepare for future tasks? If we have unlimited resources, we can train for these two rewards in, e.g., an alternating fashion. However, in our case, these two forces complement each other, as discussed in Sec 3.2 in the main paper.
Ensemble disagreement comes with the added problem of chaos, but it also helps the agent focus on more interesting and challenging patterns during free play, instead of generating predictable and boring patterns, and helps find high-RaIR patterns that are sparse solutions. That is why in this work we investigated a linear combination of these two reward terms, with a hyperparameter $\lambda$. In the rebuttal pdf in Figure D6, we have included the downstream task performance for the chosen lambda value for RaIR + CEE-US ($\lambda=0.1$), RaIR + CEE-US with smaller lambda ($\lambda=0.01$), pure RaIR ($\lambda=0$) and pure CEE-US. We can see that the behavior towards regularity vs. chaos can be controlled by this hyperparameter. Tuning this hyperparameter is not critical, and a simple heuristic can be employed that brings the ensemble disagreement and RaIR improvement values onto a similar scale. The RaIR improvement steps can also be computed analytically, e.g. by comparing the fully entropic state to one with two entries in alignment. As ensemble disagreement goes down with more training throughout free play, one could explore options to schedule $\lambda$ accordingly. We find studying different combination strategies to be an exciting direction and leave it to future work. Regarding the questions about the experiment descriptions and a more detailed discussion on concerns about ground-truth internal states, we refer the reviewer to the general response. [1] Mendonca et al (2021), Discovering and Achieving Goals via World Models. NeurIPS --- Rebuttal Comment 1.1: Comment: Thank you for your response. I don't have immediate follow-up concerns or questions. Currently I would like to maintain my score, but I might potentially adjust my score after the internal reviewer discussion or after having a look at other reviews and discussions after they are all finalized (especially the discussion with Reviewer fHtQ).
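As a side note on the reward combination discussed in the rebuttal above, here is a minimal, hypothetical sketch of the $\lambda$-weighted sum and the scale-matching heuristic. The function names and the exact heuristic are our illustration, not the code used in the paper:

```python
# Hypothetical sketch of the linear combination of ensemble disagreement and
# RaIR, plus a scale-matching heuristic for lambda; both are illustrative
# assumptions, not the paper's exact procedure.
def combined_reward(disagreement, rair_value, lam=0.1):
    """Intrinsic reward: ensemble disagreement plus lambda-weighted RaIR."""
    return disagreement + lam * rair_value

def scale_matched_lambda(disagreement_vals, rair_improvements):
    """Pick lambda so typical RaIR improvements match disagreement magnitude."""
    mean_dis = sum(abs(d) for d in disagreement_vals) / len(disagreement_vals)
    mean_rair = sum(abs(r) for r in rair_improvements) / len(rair_improvements)
    return mean_dis / mean_rair if mean_rair > 0 else 0.0

# lambda = 0 recovers pure disagreement; larger lambda biases toward regularity.
assert combined_reward(1.0, 2.0, lam=0.0) == 1.0
```

Scheduling `lam` downward as disagreement shrinks over free play, as suggested above, would only require recomputing `scale_matched_lambda` on recent rollouts.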
Summary: The paper draws inspiration from a child’s development cycle and proposes the use of regularity as an intrinsic reward to help guide exploration in RL. It shows that existing metrics of state novelty can be combined with the proposed regularity metric to enable construction of regular structure during free play in a sample-efficient manner. The proposed method is validated on experiments from two different tasks. Strengths: - The paper proposes an interesting and novel formulation of using regularity, in the form of symmetry, as an intrinsic reward to guide exploration during RL. - The paper builds on an existing novelty-based approach that uses the epistemic uncertainty obtained from an ensemble of models to guide exploration. The authors show that RaIR can be combined with such novelty-based rewards to further improve exploration in tasks that demand regularity. However, the authors acknowledge that for tasks that are chaotic, like throwing and flipping, imposing such regularity hurts the performance (Table 2). - The paper shows impressive results for stacking towers in various configurations solely from intrinsic rewards, while prior work achieves this using carefully constructed reward functions. - The paper demonstrates the effectiveness of pretraining with the regularity-based intrinsic reward in performing zero-shot generalization to downstream tasks (Sec. 3.3). - The authors also show a way of prompting (Sec. 3.4) the agent to build certain structures by incentivizing it to raise regularity by building the same structure as the prompt. Weaknesses: - The authors organize the state in the form of graph nodes with each node representing a specific feature of an object or the agent. I had a few questions about this - - How did the authors decide on this specific state representation? Does it have any advantages that are particularly attractive?
- It would be great if the authors could clarify how the graph is built (is it a fully-connected graph with all nodes)? - The current form of state representation results in the authors using GNNs. However, the same representation can be dealt with using transformers. It would be interesting to add a transformer variant of RaIR. - This state representation might make it difficult to apply the method to the real world, where privileged state information is not available. I would be curious to hear the authors’ thoughts on this. The limitations section mentions that the same operation could be done on a latent space, but it's unclear if RaIR would be applicable to such a latent-space-based state representation (which might not be as good as the privileged information). - Since the method uses iCEM for policy learning, it seems like such a method of optimizing the policy for each new configuration won’t scale with reinforcement learning. It would be great if the authors could comment on this. - How many training steps does it take to train the transition model? This would shed some light on the sample efficiency of the method. What is “relative time” in Figure 5? - In Sec. 3.4, why does the agent not arrange everything in a single line to maximize regularity? Is it because of the finite-horizon optimization (meaning that tasks that need more time steps would be preferred less as compared to tasks giving higher immediate rewards)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would be great if the authors could address the comments mentioned in the “Weaknesses” section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper does have a limitations section.
The authors must also acknowledge the following from the “Weaknesses” section - ```This state representation might make it difficult to apply the method to the real world, where privileged state information is not available. I would be curious to hear the authors’ thoughts on this. The limitations section mentions that the same operation could be done on a latent space, but it's unclear if RaIR would be applicable to such a latent-space-based state representation (which might not be as good as the privileged information).``` Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and address their questions and comments in the following. # Clarification on the Graph Architecture We use a fully-connected Graph Neural Network (GNN). *How did the authors decide on this specific state representation? Does it have any advantages that are particularly attractive?* The GNN requires an object-factorized state representation. Note that in this case we take the existing observation in the Construction environment (which is an extension of the original Fetch Pick & Place environment, and was introduced in [1]). In the original observation, the robot observations (end-effector position and velocity, gripper state) and the individual object observations (position, orientation as Euler angles, linear and angular velocities) are concatenated to form the environment observation. We don’t modify the existing state representation from the environment and only factorize this observation vector into its components, which is the format needed for the GNN. The RaIR computation is decoupled from the GNN or any model: it can be done on actual state observations or on model predictions. For RaIR, we also use an object-factorized representation, but we don’t feed in the whole state, and instead a subspace with x-y(-z) positions. # Transformers Indeed, GNNs with attention mechanisms for neighborhood aggregation have been shown to be equivalent to transformers with a few modifications. We are planning to test Transformers as world model backbones in future work as well. It is important to highlight that we expect intrinsic rewards for regularity via RaIR to lead to more capable world models in assembly tasks, or in general tasks favoring regularity, irrespective of the world model backbone that is used. # Policy learning with RaIR As RaIR is stationary/Markovian, it can be used as a reward signal to learn a policy via RL.
Note that for this case, we wouldn’t be training a new policy for each configuration, but we would simply train one policy with RaIR as reward. When we roll out the learnt policy, with the stochasticity of the policy and the different start states in the environment, we would expect different configurations to emerge (i.e. converge to different local minima of RaIR). As such, we don’t see a reason why RaIR wouldn’t scale with RL. *In Sec. 3.4, why does the agent not arrange everything in a single line to maximize regularity?* Yes, indeed due to finite-horizon planning as well as the limited sampling budget, we don’t necessarily converge to global minima, which is a full tower for x-y compression and a line (horizontal line or tower) for x-y-z. The planner can find solutions that are reachable within the planning horizon from the current configuration. For certain starting configurations only local optima are found during an episode. # Real-world application *This state representation might make it difficult to apply the method to the real world where privileged state information is not available.* We refer the reviewer to our general response, where we discuss this at length. # Details on Experiment Pipeline *How many training steps does it take to train the transition model? This would throw some light on the sample-efficiency of the method. What is “relative time” in Figure 5?* This is indeed a strong point of our method, which we will explain better in the paper. We refer the reviewer to the general response, where we explain the experiment pipeline and the metrics in detail for further clarification. Here we want to highlight that during the whole free play time, only 600K environment steps are performed (corresponding to about 6.5h of real-world interaction), which underpins the high sample-efficiency. [1] Li et al (2020), Towards Practical Multi-object Manipulation using Relational Reinforcement Learning. ICRA. 
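The factorization of the flat environment observation into per-entity components, as described in the rebuttal above, can be illustrated with a short sketch. The helper name and all dimensions here are hypothetical; only the idea (robot features followed by concatenated per-object feature blocks) follows the text.

```python
# Hypothetical illustration of splitting a concatenated observation
# vector (robot features, then per-object feature blocks) into the
# object-factorized format a GNN consumes. All dimensions are made up.
def factorize(obs, robot_dim, obj_dim, n_objects):
    """Split a flat observation into (robot, [object_0, ..., object_{n-1}])."""
    robot = obs[:robot_dim]
    objects = [
        obs[robot_dim + i * obj_dim : robot_dim + (i + 1) * obj_dim]
        for i in range(n_objects)
    ]
    return robot, objects

obs = list(range(10 + 3 * 12))  # e.g. 10 robot dims, 3 objects x 12 dims each
robot, objects = factorize(obs, robot_dim=10, obj_dim=12, n_objects=3)
assert len(robot) == 10 and len(objects) == 3 and all(len(o) == 12 for o in objects)
```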
--- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for the rebuttal and the additional experiments. My concerns have been addressed and it would be great if the authors can include the clarifications in the next version of the paper. I am increasing my score by a point.
Summary: Inspired by the fact that humans prefer symmetric patterns, the authors propose to use regularity as an intrinsic reward to encourage an agent to discover interesting environmental patterns. One example of regularity is to put boxes in a symmetric pattern. The results showed that this kind of intrinsic reward allows an RL agent to generate more interesting patterns in the absence of task rewards and also allows for better world model learning for zero-shot generalization. Strengths: - The proposed method is novel and intuitive. - The experiments are comprehensive, including both analysis of the emergent behaviors and the performance of applying to downstream tasks. Weaknesses: - Lack of baselines: For section 3.3, I believe RND and ICM can be reasonable baselines to strengthen the significance of the proposed method. It seems to me that the goal of section 3.3 is to showcase the practical benefit of RaIR in solving tasks (maximizing extrinsic rewards). To show practical benefit, it would be necessary to answer why the proposed method is a better choice than prior works. Comparison with RND and ICM would answer this question. - The regularities considered in this paper are not well motivated. I agree the motivation of using regularity as intrinsic rewards is clear, but the specific design of each type of regularity in Table 1 is unclear to me. For example, why these symmetry operations are considered and why the distances are designed in these ways. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: (See weakness) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes, the limitation is mentioned. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their review and feedback. Our response follows: # Baselines: In the newly uploaded pdf, we have included experiments with RND. Here we run RND with the same model-based planning backbone (GNN ensemble together with iCEM as in CEE-US) and only alter the intrinsic reward used for planning. (Note that with a version of RND that trains an exploration policy instead of using a planner, meaningful exploration within the budget of 600K interactions is not observed, as also reported in [1].) ICM, on the other hand, computes the intrinsic reward only retrospectively: you need to have visited and observed the actual next state to be able to compute the ICM reward (the error between the model's prediction and the actual observation). This means that ICM cannot be used in look-ahead planning methods. That's why we have instead implemented a one-step disagreement version, which is essentially Disagreement from Pathak et al., 2019 [2]. Here we also used a GNN ensemble backbone with iCEM. (The ICM+exploration policy combination has been shown not to work within the budget of 600K interactions in [1].) Since the experiments are still running, we have included results from the first 180 iterations for our RND variant and 200 iterations of free play for Disagreement. We will update the results once the experiments finish. We show that so far both baselines fail to solve the challenging stacking-3-objects task (we plot 3 seeds), in contrast to RaIR+CEE-US and CEE-US with the same amount of free play. Only one seed for RND solves the stacking task, with a 3% success rate at iteration 180. Note that we propose RaIR as an additional reward signal to drive exploration towards regularities. As such, RaIR in principle can be used to augment different intrinsic reward functions. In our paper, we illustrated this on the example of CEE-US+RaIR, as CEE-US was shown to beat the other baselines (both policy and planning based) in [1]. 
# Motivation for regularity We compute regularity as entropy of the histogram of occurrences for symbols in a multiset. The main question is: how do we obtain these symbols? The general problem is that there is no “ground truth” regularity. In our work, we intended to open the box of regularity that is otherwise not comprehensible: why certain representations lead to certain patterns. Symmetries, as well-characterized regularities, are one way to assess the properties of these representations. This is why we discussed the choice of different mapping functions $\phi$ in terms of their invariances under known symmetry operations. This was intended to be used as an intuitive guide to choose the different representations. For example: since relational RaIR, as opposed to absolute relational RaIR, does not prefer reflections, mostly patterns with translational symmetries are generated, e.g. lines in ShapeGridWorld. Knowing the properties of these functions and its implications allows us to make design choices to inject certain biases. Similarly, for the recreation of existing patterns we cannot use direct RaIR since it is not invariant to translations. Although perhaps trivial at first glance, we wanted to formalize these properties to provide deeper insight into our regularity reward and its control knobs. [1] Sancaktar, et al (2022). Curious exploration via structured world models yields zero-shot object manipulation. NeurIPS [2] Pathak, et al (2019). Self-supervised Exploration via Disagreement, ICML --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. The author addresses my questions. I'm increasing my rating to 6.
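The regularity computation described in the rebuttal above ("entropy of the histogram of occurrences for symbols in a multiset") can be sketched in a few lines. This is a hypothetical illustration, not the authors' code; the choice of symbols (here, pairwise positional offsets) stands in for the mapping functions $\phi$ discussed in the text.

```python
# Hypothetical sketch of a regularity reward as negative entropy of
# the histogram over a multiset of symbols: fewer distinct, more
# repeated symbols -> lower entropy -> higher regularity.
from collections import Counter
import math

def rair(symbols):
    """Return the negative entropy of the occurrence histogram of `symbols`."""
    counts = Counter(symbols)
    n = len(symbols)
    entropy = -sum((c / n) * math.log(c / n) for c in counts.values())
    return -entropy

# A line of blocks yields identical pairwise offsets (a single repeated
# symbol, entropy 0), while a scattered arrangement does not.
line = [(1, 0), (1, 0), (1, 0)]
scattered = [(1, 0), (2, 3), (-1, 5)]
assert rair(line) > rair(scattered)
```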
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback and appreciate that they found our method to be "novel and intuitive", the experiments "comprehensive" with "impressive results for stacking towers in various configurations solely from intrinsic rewards" and the paper "generally well written". # Summary The rebuttal addresses: **1.** Generality: We added applications of RaIR to new environments (see pdf) - 2 locomotion environments: Quadruped and Walker - RoboDesk, which contains entities with diverse geometries (drawer, button, blocks etc.) and modifications of the existing environments: - Custom Construction with diverse shapes (flat block, short column and ball), where the shapes also differ in their masses. - ShapeGridWorld with color, where the objects' colors are included as part of the object feature for regularity. We showcase the generality of RaIR and its applicability to diverse environments (here using ground-truth (GT) models). **2.** Baselines: As suggested, we added two baselines, RND and a type of ICM/1-step prediction error, both of which are outperformed by our method (see Fig. D5). **3.** Addressed questions of novelty and object-centric representations # Baselines We run two new intrinsic motivation baselines: RND [1] and one-step Disagreement [2] (as a look-ahead alternative to ICM [3], suggested by Reviewer CLep). We include the results on the *Singletower 3* and *Pick & Place 6* tasks. The experiments are still running; thus, we include partial results. The baselines solve the pick & place task but completely fail at the more challenging stacking task. Additionally: when attempting the 3-stack, RND manages to get to a stack of 2 objects only with $0.39 \pm 0.24$ success rate; one-step disagreement only gets a stack of 2 with $0.19 \pm 0.02$ success. 
# Clarification on object-centric representations Let us clarify the concerns regarding object-centric representations and interpretable state spaces that were raised by Reviewers fSr3, fHtQ, GveK: We embrace object-centric representations as a suitable inductive bias in RL, where the observations per object (consisting of poses and velocities) are naturally disentangled. So we don't see this as a limitation. The wording in the paper was perhaps indeed misleading: access to object-centric proprioceptive information is not necessarily privileged or unattainable itself. It is indeed very common in robotics to do pose estimation on objects for control. Approaches such as [5] do object pose estimation from images, and YOLO or even unsupervised object-discovery methods such as Slot Attention have been used to infer object pose, or at least object positions [4]. The assumption that this object-factorized state representation is interpretable is not far-fetched. The computation of RaIR itself is very general. The main design choice goes into which symbols we use to describe the scene and to construct the multiset. In our work, we take a shortcut of putting the human bias in as "compressing positions is a good idea". But this can be extended: see for instance our new experiments where we put in color as an additional component for regularity (Fig. D1). Applying RaIR directly to latent representations that are not inherently disentangled presents a challenge: developing a representation mirroring human-relevant structure and regularities. Here, examples of salient regular situations of interest could be used to learn a tokenizable representation for RaIR. This resembles real-world learning, where exposure to regular structures (e.g., towers, bridges) leads us to replicate these patterns while interacting with blocks. However, as Reviewer GveK noted, RaIR computation here faces practical complexities. 
We will incorporate this discussion per the suggestion in our revised paper. # Details on Experiment Pipeline We agree with reviewers GveK and fSr3 that details regarding the terminology and metrics used need better explanation. We provide these here and in the revised paper: During free play, we start with randomly initialized models and an empty replay buffer. Each iteration of free play (referred to as training iteration in the downstream task success plots) consists of data collection with environment interactions (via online planning), and then model training on the data collected so far (offline). In each iteration of free play, we collect 2000 samples (20 rollouts with 100 timesteps each) and add them to the replay buffer. During the online planning part for data collection, we only perform inference with the models and no training is performed. Afterwards, we train the model for 25 epochs on the replay buffer. We then continue with data collection in the next free play iteration (see Alg. S1 in appendix). The relative time is the percentage of timesteps, out of the 2000 samples collected per free play iteration, during which the agent performs certain types of interactions. So a value of 50% for *one object moved* means an object was moved in 1000 timesteps in that free play iteration. *Two or more objects moved* checks if at least 2 objects are moving at the same time. *Object(s) in air* means one or more objects are in the air (including being held in the air by the agent or being on top of another block). *Object(s) flipped* checks for angular velocities above a threshold for one or more objects, i.e. they are rolled/flipped. Overall, during the whole free play time, only 600K environment steps are performed, which underscores the high sample-efficiency of our method. [1] Burda et al (2019). Exploration by Random Network Distillation, ICLR [2] Pathak et al (2019). Self-supervised Exploration via Disagreement, ICML [3] Pathak et al (2017). 
Curiosity-driven Exploration by Self-supervised Prediction, ICML [4] Heravi et al (2023). Visuomotor control in multi-object scenes using object-aware representations, ICRA [5] Zhang et al (2023). Self-Supervised Geometric Correspondence for Category-Level 6D Object Pose Estimation in the Wild, ICLR Pdf: /pdf/50420fa0ffaa3925f1b6650c9beb19bcc54212d3.pdf
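The free-play pipeline described in the general response above can be sketched as a simple loop. The planner, models, and environment below are stand-ins (not the authors' code); only the loop structure and the sample counts (20 rollouts of 100 steps per iteration, 25 training epochs, and the implied 300 iterations reaching the stated 600K environment steps) follow the text.

```python
# A minimal, hypothetical sketch of the free-play loop: collect data
# via online planning (inference only), then train the model offline
# on the whole replay buffer, and repeat.
N_ITERS, ROLLOUTS, HORIZON, EPOCHS = 300, 20, 100, 25

def free_play(collect_rollout, train_model):
    replay_buffer = []
    env_steps = 0
    for _ in range(N_ITERS):
        # 1) Data collection via online planning: no model training here.
        for _ in range(ROLLOUTS):
            replay_buffer.extend(collect_rollout(HORIZON))
            env_steps += HORIZON
        # 2) Offline model training on all data collected so far.
        for _ in range(EPOCHS):
            train_model(replay_buffer)
    return env_steps, len(replay_buffer)

# Dummy stand-ins just to exercise the bookkeeping.
steps, samples = free_play(lambda h: [None] * h, lambda buf: None)
assert steps == 600_000 and samples == 600_000  # matches the 600K budget
```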
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning
Accept (poster)
Summary: The paper presents a new perspective on understanding the in-context learning behavior of large language models (LLMs) from the angle of latent concept learning resembling topic models. Based on the generative process defined by input data, latent concepts, and labels, the authors propose a two-stage algorithm to first learn the latent concept and then use it to select the best-performing demonstrations that boost in-context learning performance. Experiments on eight datasets show that the method is able to consistently outperform random selection and selection based on semantic similarity. Strengths: * Originality: Although the latent concept learning perspective is largely inspired by Xie et al., the goal and method proposed in this paper for selecting effective demonstrations is still sufficiently interesting and different from Xie et al., which seems novel to me. * Quality: The paper first states its generative assumptions defined by input data, latent concepts, and labels, based on which it then derives a two-stage method to first learn the concept and then select demonstrations. The theories and methods seem solid to me. * Clarity: The paper is overall clear and nicely written. * Significance: While it's nice to see that the method is able to consistently outperform simple baselines across the tasks selected, I generally feel that the setting considered in this paper is somewhat artificial, and that the evaluation is not comprehensive enough to test the generalization ability of the method. These concerns weaken the significance of the paper. See weaknesses below for details. Weaknesses: * Problem setting: The paper assumes access to a (relatively) large training set from which a few demonstrations can be drawn. With this amount of training data (e.g., 100), there could be better alternative choices than in-context learning. 
For example, one can tune an LLM with parameter-efficient methods and easily outperform in-context learning (shown in Liu et al.) without even having to consider how to select the best demonstrations. Of course, one can argue that in-context learning is applicable to non-open-source LLMs while training-based methods are not. However, given the recent growth in the availability of open-source LLMs such as LLaMA and Falcon, I believe it would be generally better to consider parameter-efficient tuning than in-context learning if the authors assume ~100 training samples are available. In summary, I feel that the problem setting of "carefully selecting good demonstrations from a larger training set" is somewhat artificial on its own from the very beginning. * Evaluation tasks: The eight downstream tasks selected for evaluation appear to be too easy for today's LLMs, and it's unclear given the current evaluation how generalizable the method is to more challenging tasks like MMLU and reasoning, potentially when combined with chain-of-thought prompting. * Typo: Line 104: "semantic analysis" -> "sentiment analysis" Reference: Liu et al. "Few-shot parameter-efficient fine-tuning is better and cheaper than in-context learning." NeurIPS (2022). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * What is the role of $\boldsymbol{\epsilon}$ in the generative process (Line 91)? It seems like it is not taken into account in the current modeling process. * How does the method work for instruction-tuned models (e.g., ChatGPT, Alpaca)? * Why are the results of LLaMA that bad (Figure 3a)? LLaMA is generally recognized as a better-performing model than the OPT series under similar sizes. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below is our response to your comments: 1. **About how realistic the problem setting is**: We first want to emphasize that our problem setting is chosen to empirically prove the correctness of our proposed theory of in-context learning. Improving and connecting a previously synthetic-only theory of in-context learning [6] with real-world models and data is already non-trivial and is relatively rare among current theories of LLMs, which are usually disconnected from the real world. The value of our paper does not lie only in its real-world application. However, we do agree that we need to show real-world use cases of our proposed algorithm. In short, our proposed method is most useful on hard tasks where smaller LLMs perform significantly worse than larger models. Please refer to point 2 of our general response for results on GSM8K. In this case, even parameter-efficient fine-tuning with smaller models cannot obtain a reasonable performance (less than 4%), but in-context learning with both small and large models combined with our method can achieve much higher performance (19.3% using Llama2 7B and more than 80% using ChatGPT). 2. **About evaluation tasks**: Since our primary goal is to connect the theory with real-world models and datasets, we did not try to include harder tasks. However, to show that our method also works on more challenging tasks, we show new results on GSM8K in point 2 of our general response, which involve chain-of-thought reasoning and generation, and show a 4-7.9% increase from using our method with Llama2 (7B) compared with the Uniform baseline. 
3. **About $\epsilon$**: It represents a noise variable, with any zero-mean distribution. Without this variable, there is no randomness between $X$, $Y$, and $\theta$, as the functions $f$ and $g$ are all deterministic. The $\epsilon$ here is just to introduce some noise such that the conditional distribution $P(Y|X, \theta)$ is meaningful. Thanks for pointing this out. We will clarify this in the revision. 4. **About instruction-tuned models**: In point 2 of our general response, we include results with ChatGPT on GSM8K, which show that our proposed method also works on instruction-tuned models (accuracy of the random selection baseline: 76.5%; ours: 81.2%). 5. **About Llama's performance**: The bad performance of Llama is also surprising to us. We wrote an email to the authors of Llama to ask about this, and they replied that they had never tested under such a scenario, so they don't know. The code for testing Llama is exactly the same as for testing the other models, though. Our understanding is that Llama (first generation) is good at generation tasks, but not so good at in-context learning with simple classification tasks. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I'm still relatively positive about the paper and I'm keeping my original rating. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal and respond to us. We appreciate that you still feel positive about our paper. We are wondering if our rebuttal has resolved all of your concerns. We are happy to clarify more if you have any remaining concerns. We are also wondering if you would consider raising your score in light of the new experiments showing the real-world use case of the proposed algorithm and the clarifications in the rebuttal materials.
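The role of the noise variable $\epsilon$ discussed in the rebuttal above can be made concrete with a generic sketch (writing the deterministic part abstractly as $h(X, \theta)$, standing in for the paper's $f$ and $g$; this is an illustration, not necessarily the paper's exact parameterization). With

$$Y = h(X, \theta) + \epsilon, \qquad \mathbb{E}[\epsilon] = 0,$$

the label is a random variable even though $h$ is deterministic. For instance, with Gaussian noise $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$,

$$P(Y \mid X, \theta) = \mathcal{N}\!\left(Y;\, h(X, \theta),\, \sigma^2 I\right),$$

which is a well-defined conditional distribution rather than a point mass.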
Summary: The paper introduces a novel demonstration selection method aimed to enhance performance of (few-shot) in-context learning. The approach is characterized by a two-stage process that includes latent concept learning and demonstration selection, each holding a unique significance. In the latent concept learning stage, the authors present a method for acquiring a task-specific token embedding set through prompt-tuning. Subsequently, the demonstration selection stage involves the process of selecting in-context samples. This process is based on maximizing the likelihood of post-fixing the previously acquired task latent. The efficacy of the demonstration selection has been evaluated across various language models, leading to improved in-context learning performance. Strengths: This paper is clearly written with a clear definition of the task. The proposed method is presented with great clarity and detail, which significantly aids in understanding the overall procedure. The paper presented impressive performance results. The successful transfer of demonstrations selected from a smaller model (GPT-2) to other larger models is very impressive. There are extensive ablation and additional experiments for deeper analysis of each component of the proposed method. Weaknesses: An area of concern lies in the assumption that the task latent derived through prompt-tuning is considered the “optimal task latent” (line 146). This assumption may not hold universally, and could be re-considered as the use of manual prompts - which could offer a more intuitive understanding of the task. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Considering that the order of samples in the demonstration does not play a critical role in the proposed method, what motivated the decision to include a "re-ordering" (selecting the permutation) step in the demonstration selection phase? 
- How would the performance change if a larger model, as opposed to GPT-2, was employed for demonstration selection? Would this potentially enhance the overall performance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are stated in the appendix C. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review. Below is our response to your comments: 1. **About optimal latent**: Yes, in practice, the learned latent is never truly optimal, but only approximate. The assumed optimality is for deriving the upper bound theory that the in-context learning classifier can be as good as the Bayes optimal classifier. And the manual prompt can also be viewed as an approximation of the optimal latent. However, by using a manual prompt, we are not able to prove our theory of in-context learning that LLMs infer a latent variable at inference time. The prompt tuning approach allows us to reveal the unobservable latent task variable in LLMs. 2. **About reordering**: The reordering step in the proposed algorithm is indeed redundant. It's a leftover from the original algorithm design before we performed the ablation study. We will remove this step in the paper. 3. **About better model for demonstration selection**: In point 2 of our general response, we show that a better/larger model can select better demonstrations with the GSM8K dataset. More specifically, our method with Llama 2 (7B) outperforms our method with GPT2-XL (1.3B) by 0.8-3.4% absolute accuracy when performing in-context learning with different models. --- Rebuttal Comment 1.1: Comment: I appreciate your insightful response. The authors' clarification regarding the concept of "optimal latent" and the inclusion of additional experiments have certainly contributed to my understanding of the presented paper. Still, I find that the theoretical assumptions and the detailed analysis of the conducted experiments, which support the reported enhancements in ICL performance, could benefit from further elaboration. In light of this, my assessment remains aligned with the initial score assigned.
Summary: This paper describes a framework to select demonstrations for in-context learning by using bayseian formulation for the data generation process. Similar to topic models, the formulation uses a "concept" variable and words in this sequence are conditioned over this concept variable and are conditionally independent of the other tokens in the generated text. The concept variable models the prompt/task instruction and is modeled by learning concept tokens by prompt tuning a small LLM. Since the concept tokens are made part of the vocabulary, the selection of in-context demonstrations can be learnt by maximizing the probability for the concept tokens. Experiments are presented on multiple NLP tasks and they indicate that selecting in-context demonstrations using this formulation works better than using random in-context examples. Strengths: 1. Simple formulation that aids better selection of in-context demonstrations 2. Experiments have been presented on multiple NLP tasks Weaknesses: Modeling: I'm not sure the topic-model like bag-of-words assumption is accurate in the way the modeling has been described. As Line 56 in the paper states, the generation of tokens would be independent of the previous tokens but to truly model this wouldn't you need to modify the current latent concept learning setup to be sequence-agnostic? The method works empirically so perhaps its okay but some clarification would help here. Perhaps the authors can explain further in the rebuttal phase if I misunderstood it. Experiments: The experiment baselines refer to the "Similar" baseline but any discussion or analysis of its results is completely missing from the main paper. I also checked the appendix, and the results in Table 3 (appendix) do not correspond to the results reported in the main paper in the histogram plots. Notation: The notation is a bit hard to follow and could benefit from simplification. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below is our response to your comments: 1. **About topic model assumption**: We want to clarify that the topic model here is in a more general sense, which is not equivalent to LDA [1], but similar to the modern neural topic models proposed in [2,3]. In this more general definition of the topic model, the tokens/words are not required to be conditionally independent given the topic variable. We are aware that this definition of a topic model is basically a simple latent variable model of language with a single latent. Thanks for pointing this out, to avoid further confusion, we decided to change our title to Large Language Models as Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning. We will also revise our paper to make this point clearer. 2. **About Similar baseline**: We define the "Similar" baseline at line 238, which means using the most similar examples to the current query as the demonstrations for in-context learning, and it was first proposed in [4]. Since it is relatively straightforward and is a default demonstration selection method that has been used in many papers, we did not include much analysis of it in the main paper. Thanks for pointing this out. In general, our experiment results show that using demonstrations similar to the testing query will improve the in-context performance compared to random selection, which implies that similar examples contain useful information for LLMs to infer about some parts of the task latent that is relevant to the current query. However, it is still not as effective as directly selecting the examples that can best infer the whole task latent, as used in our proposed method, which can cover more aspects of the task. We will add this analysis to the revision. 3. **About the main results**: Figure 2 in the main paper corresponds to the last column of Table 3 in the appendix. 
You probably looked at the last three rows (which correspond to the average over all the models) instead of the last column (which corresponds to the average over all the datasets). We will revise the table to make it clearer. [1] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent dirichlet allocation. J. Mach. Learn. Res. 2003. [2] Miao, Yishu, Edward Grefenstette, and Phil Blunsom. Discovering discrete latent topics with neural variational inference. ICML 2017. [3] Miao, Y., Yu, L. & Blunsom, P. Neural Variational Inference for Text Processing. ICML 2016. [4] J. Liu, D. Shen, Y. Zhang, B. Dolan, L. Carin, and W. Chen. What makes good in-context examples for GPT-3? In Proceedings of Deep Learning Inside Out (DeeLIO 2022): The 3rd Workshop on Knowledge Extraction and Integration for Deep Learning Architectures --- Rebuttal Comment 1.1: Title: Acknowledgement of Rebuttal Comment: Thank you for your response. I re-read my review in light of your responses. I had understood what the similar baseline refers to but I had missed its inclusion in Figure 2. I'm updating my review score --- Reply to Comment 1.1.1: Comment: Thank you for reading our rebuttal and responding. We are glad that our rebuttal helped clarify your concerns. And we appreciate your raise of the score. Have a good weekend :)!
Summary: This paper tries to study in-context learning through a Bayesian lens, namely treating LLMs as implicit topic models. It proposes an algorithm to select optimal demonstrations from a set of annotated data with a small LLM and then use the selected demonstrations with larger LLMs. A 12.5% improvement over random selection is observed when adopting the proposed algorithm. Strengths: The paper has extensive empirical results showing the proposed algorithm can effectively find helpful demonstration examples to boost the performance of in-context learning. Weaknesses: Although we see significant performance improvement, it's hard to conclude that LLMs are topic models. It could be that the topic (or distribution of input text) is one important factor, but there are other attributes affecting in-context learning. In some ways, this also conflicts with some previous work, e.g. "In-context Learning and Induction Heads" points out that pattern-copying behavior is the key, and "Robustness of Demonstration-based Learning Under Limited Data Scenario" finds that random tokens are also helpful. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Have you tried to compare with better algorithms for finding demonstrations, e.g. "Selective Annotation Makes Language Models Better Few-Shot Learners"? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Below is our response to your comments: 1. **About our claims**: We would like to clarify that we are not claiming that LLMs are topic models. Our claim is that LLMs are *implicitly* topic models, which learn and infer a latent task/topic variable for solving each query. We would be very happy to revise and clarify this. There are many plausible ways of understanding in-context learning. The topic model/latent variable view is indeed just one important piece of it. We in no way claim ours is the only correct one. We would like to clarify that many of these are complementary views of LLMs that do not necessarily conflict with each other. For example, the in-context learning as gradient descent view [6,7,8] can be understood as a specific way of inferring and utilizing the task latent at in-context learning time, as in the Bayesian interpretation framework [9,10,11]. 2. **About conflicting findings**: The pattern information of a task can be viewed as also included in the inferred latent variable, so our explanation does not conflict with [1]. There is a finding in [3] similar to the random-tokens finding in [2], namely that demonstrations with random labels also work. In contrast, [4] explicitly argues that ground-truth labels matter, while the sensitivity to label correctness varies across different tasks. Interestingly, [3] and [4] were published at the same conference. The experiment in Table 7 of our appendix shows that the correct label does matter on our tasks, with or without our proposed demonstration selection method, which agrees with [4]. As the results in [2] are mostly from the NER task, it is likely that NER has a high tolerance for randomness in the demonstrations. 3. **About baselines**: We are aware of the mentioned paper [5] and have cited it in our related work section.
We did not include it as a baseline because its setting is too different from ours, which makes it incomparable to our method. They propose a two-phase algorithm, where the first phase selectively annotates a relatively small number (18 to 100) of data points from a large set of unlabeled data (3K), and the second phase retrieves demonstrations from the annotated data based on their similarity to the query. We, on the other hand, focus exclusively on selecting demonstrations from a small set of labeled data (100). So our proposed method is only comparable with the second phase of their algorithm, which is the same as the ‘Similar’ baseline (defined in line 238) used in our experiments. It is possible to combine our method with the first phase of [5] to obtain better performance under the selective data annotation setting with a large set of unannotated data, but this setting is out of the scope of our current paper, so we leave it for future work. [1] Olsson, Catherine, et al. In-context learning and induction heads. arXiv preprint arXiv:2209.11895 (2022). [2] Hongxin Zhang, Yanzhe Zhang, Ruiyi Zhang and Diyi Yang. Robustness of Demonstration-based Learning Under Limited Data Scenario. EMNLP 2022. [3] Min, Sewon, et al. Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?. EMNLP 2022. [4] Yoo, Kang Min, et al. Ground-Truth Labels Matter: A Deeper Look into Input-Label Demonstrations. EMNLP 2022. [5] Su, Hongjin, et al. Selective annotation makes language models better few-shot learners. ICLR 2022. [6] E. Akyürek, D. Schuurmans, J. Andreas, T. Ma, and D. Zhou. What learning algorithm is in-context learning? investigations with linear models. NeurIPS 2022. [7] D. Dai, Y. Sun, L. Dong, Y. Hao, Z. Sui, and F. Wei. Why can gpt learn in-context? language models secretly perform gradient descent as meta optimizers. ACL 2023. [8] J. von Oswald, E. Niklasson, E. Randazzo, J. Sacramento, A. Mordvintsev, A. Zhmoginov, and M. Vladymyrov.
Transformers learn in-context by gradient descent. ICML 2023. [9] M. Hahn and N. Goyal. A theory of emergent in-context learning as implicit structure induction. arXiv preprint, 2023. [10] H. Jiang. A latent space theory for emergent abilities in large language models. arXiv preprint 2023. [11] S. M. Xie, A. Raghunathan, P. Liang, and T. Ma. An explanation of in-context learning as implicit bayesian inference. ICLR 2022. --- Rebuttal Comment 1.1: Comment: As the end of the discussion period is approaching, we are wondering whether you have read our rebuttal and whether you have any remaining concerns. We are happy to clarify further before the discussion period ends.
Rebuttal 1: Rebuttal: We want to first thank all the reviewers for taking the time to review our paper. We provide some clarification and additional results below: 1. **About topic modeling assumptions**: We want to clarify that the topic model here is meant in a more general sense: it is not equivalent to LDA [1], but is similar to the modern neural topic models proposed in [2,3]. In this definition, the tokens/words are not required to be conditionally independent given the topic variable. We know that this topic model definition is essentially a simple latent variable model of language with a single latent variable. To avoid further confusion, we will change our title to **Large Language Models Can be Viewed as Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning**. We do not intend to claim that this latent variable explanation of in-context learning is the only correct one. As described in the related work section, multiple plausible and complementary ways exist to understand and interpret in-context learning. 2. **About the realism of our setting**: Since our primary goal is to connect the theory with real-world models and datasets, we did not try to include harder tasks. In practice, our proposed method is most effective on hard tasks where even parameter-efficient fine-tuning of smaller models cannot outperform in-context learning with the same or larger models. When aiming to improve performance in a low-data setting, with a small computing budget and minimal inference latency, our demonstration selection method is a reasonable choice. Our demonstration selection method can also potentially be combined with other prompting techniques to boost performance further. **We added a new dataset, GSM8K** [4], which is a math word problem-solving dataset with chain-of-thought solutions. The table below shows the test accuracy of the final numerical answer using greedy generation.
Note that, for time efficiency, we did not use a calculator to insert the correct result of each generated math equation during generation, which resulted in slightly lower scores. As shown in the first row of the table, prompt tuning with ten new tokens can only obtain less than 4% accuracy on the GSM8K test set. While it is possible that a better-designed efficient tuning algorithm with more trainable parameters could get better results, it is still unlikely that efficient tuning with small models would outperform the few-shot performance of larger/better models: [8] show that a fully fine-tuned Llama 7B model trained on the whole GSM8K training set can only get 35.9% accuracy on GSM8K, improved to 49.3% when combined with data augmentation, which is still significantly lower than the over 80% accuracy obtained by ChatGPT combined with our method. The last 4 rows show the in-context learning results with different-size Llama 2 models [5] and ChatGPT. Our proposed demonstration selection method (last two columns) significantly outperformed the Uniform and Similar baselines as defined in lines 236 to 247 of our paper. Also, note that the demonstrations selected with a larger model (7B) are more effective than those selected with a smaller model (1.5B). | | Uniform | Similar | Ours w/ Llama 2 (7B) | Ours w/ GPT2-XL (1.5B) | | --- | --- | --- | --- | --- | | Prompt tuning | N/A | N/A | 3.7 | 1.3 | | Llama 2 (7B) / 4-shot | 11.4 | 13.1 | **19.3** | 15.9 | | Llama 2 (13B) / 4-shot | 17 | 18.3 | **21.6** | 20.5 | | Llama 2 (70B) / 4-shot | 50.2 | 53.5 | **54.3** | 52.9 | | ChatGPT (gpt-3.5-turbo) / 4-shot | 76.5 | 78.1 | **81.2** | 80.4 | *We want to reiterate that improving and connecting a previously synthetic-only theory of in-context learning [6] with real-world models and data is already non-trivial and is relatively rare among current theories of LLMs, which are usually disconnected from the real world.
The value of our paper does not lie only in its real-world application.* 3. **Analysis of the selected demonstrations**: We didn’t include an analysis of the selected demonstration examples in the paper because the common features shared between the selected examples are a bit hard to detect. However, we agree that it is still necessary to include such an analysis. Because of the space limitation, we only list the top demonstrations of GSM8K and SST2 in the supplemental 1-page pdf. Compared to the examples with lower scores, the selected examples for GSM8K contain more deductive reasoning (i.e. with the connecting words ‘so’, ‘then’, ‘thus’, etc.), instead of listing parallel conditions. For SST2, the selected examples are longer and more complex, sometimes including a ‘but’. This can be understood as harder examples representing the task more comprehensively. This conclusion also aligns with the findings in [7] that hard examples in the pre-training data contribute to in-context learning the most. The label distribution of the selected demonstrations is usually balanced in class, which reduces the possible biases introduced by the demonstrations. [1] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent dirichlet allocation. J. Mach. Learn. Res. 2003. [2] Miao, Yishu, Edward Grefenstette, and Phil Blunsom. Discovering discrete latent topics with neural variational inference. ICML 2017. [3] Miao, Y., Yu, L. & Blunsom, P. Neural Variational Inference for Text Processing. ICML 2016. [4] Cobbe, Karl, et al. Training verifiers to solve math word problems. arXiv preprint 2021. [5] Touvron, Hugo, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint 2023. [6] Xie, Sang Michael, et al. An Explanation of In-context Learning as Implicit Bayesian Inference. ICLR 2021. [7] Han, Xiaochuang, et al. Understanding In-Context Learning via Supportive Pretraining Data. ACL 2023. [8] Yuan, Zheng, et al.
Scaling Relationship on Learning Mathematical Reasoning with Large Language Models. arXiv preprint 2023. Pdf: /pdf/e83499c98c0450abad0c7c49c7345c7537bfd4e1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work aims at proposing a demonstration example selection algorithm for in-context learning, using a formulation of topic models for language models. Specifically, the proposed algorithm applies prompt tuning on some prefixed learnable tokens to obtain latent concepts, and selects demo examples that maximize the probability of inferring the learned tokens. The authors test the algorithm on small LMs and find generalizability to larger LMs, on a set of classification tasks. Strengths: The writing of the paper is overall clear. The theoretical formulation is well-structured and points out where assumptions and approximations are made. The proposed algorithm is tested on a set of tasks and models. The ablation studies are also reasonable. Weaknesses: The motivation and takeaway of this work are rather vague. As a demonstration example selection algorithm, the setup is not realistic enough since it uses prompt tuning with labeled task data to obtain the latent concept tokens. Subsequently, the comparison with the baseline methods using uniform or similar demonstration examples is not fair, since they do not assume prompt tuning with labeled task data. Some natural and probably necessary questions here include: (1) how does the performance of the proposed method compare with the vanilla prompt tuning performance? and (2) since labeled task data are used, what is the performance of directly (and independently) selecting ICL examples based on their contribution to the predictions of the labeled task data? Additionally, the current analysis is mostly on the performance of the algorithm rather than the selected demonstration examples themselves. What attributes do they share in common? (apart from the qualitative clustering shown in the tSNE figure) What label distribution do they have? (do they simply act as a *calibration* to the model's output distribution?) In other words, probably as the title hints, what are good demonstrations for in-context learning?
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weakness section for my main questions to the authors. Additionally, is any generation task considered in this work apart from the classification tasks? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. Below is our response to your comments: 1. **Takeaway of our paper**: We would like to clarify that our study's focus is on elucidating the underlying mechanisms of in-context learning and presenting a novel approach to demonstration selection that empirically verifies our theory in a real-world setting. By employing prompt tuning with labeled data, we can tease apart latent variables and relationships that otherwise might be obscured. 2. **Real-world use case and generation task**: In short, our proposed method is most useful on hard tasks where smaller LLMs perform significantly worse than larger models. Please refer to point 2 of our general response for detailed results on GSM8K. In this case, even parameter-efficient fine-tuning with smaller models cannot obtain reasonable performance (less than 4% accuracy), while in-context learning combined with our method obtains significantly higher performance with both small and large models. Our method obtains more than 80% accuracy with ChatGPT, compared with 76.5% accuracy using random selection. Note that our method only requires tuning a small model on limited data, so it is data- and computation-efficient. We want to reiterate that improving and connecting a previously synthetic-only theory of in-context learning [6] with real-world models and data is already non-trivial and is relatively rare among current theories of LLMs, which are usually disconnected from the real world. The value of our paper does not lie only in its real-world application. 3. **Baselines**: To the best of our knowledge, none of the existing demonstration selection methods involves tuning a smaller model, so we were not able to include a comparable baseline. Regarding the proposed contribution-based baseline, we are not quite sure what the reviewer means by ‘select ICL examples based on their contribution to the predictions of the labeled task data’.
Do you mean selecting demonstrations by attributing the prompt-tuned model to the labeled training data? If that is the case, we agree this is an interesting idea, which, to the best of our knowledge, has not been used in any existing demonstration selection method yet. However, we want to argue that this baseline is exploratory enough to be a new project by itself, and does not fit naturally into the current paper. We think this is out of the scope of our project, but we are happy to discuss it with the reviewer if they can clarify what they mean here. 4. **Compared to prompt tuning**: For simpler tasks like sentiment analysis, prompt tuning performs better than in-context learning, even when combined with our method (for detailed results, see Figure 8 in the Appendix). For harder tasks like GSM8K, simple prompt tuning with small models cannot obtain meaningful results (less than 4% accuracy), while in-context learning can obtain significantly higher performance with both large models (more than 80% accuracy with ChatGPT combined with our method) and small models (19.3% accuracy with Llama2 7B combined with our method). For detailed results, please refer to point 2 of our general response. 5. **Analysis of the selected demonstrations**: We didn’t include an analysis of the selected demonstration examples in the paper because the common features shared between the selected examples are a bit hard to detect. However, we agree that it is still necessary to include such an analysis. Because of the space limitation, we only list the top demonstrations of GSM8K and SST2 in the supplemental 1-page pdf. Compared to the examples with lower scores, the selected examples for GSM8K contain more deductive reasoning (i.e. with the connecting words ‘so’, ‘then’, ‘thus’, etc.), instead of listing parallel conditions. For SST2, the selected examples are longer and more complex, sometimes including a ‘but’.
This can be understood as harder examples representing the task more comprehensively. This conclusion also aligns with the findings in [1] that hard examples in the pre-training data contribute to in-context learning the most. The label distribution of the selected demonstrations is usually balanced in class, which reduces the possible biases introduced by the demonstrations. We will add the analysis of all selected demonstrations in the revision. [1] Han, Xiaochuang, et al. Understanding In-Context Learning via Supportive Pretraining Data. ACL 2023. --- Rebuttal Comment 1.1: Comment: As the end of the discussion period is approaching, we just want to make sure that you have read our rebuttal. It would be great if you could clarify some of your comments that we didn't understand (i.e. *‘select ICL examples based on their contribution to the predictions of the labeled task data’*) so that we can give proper responses. We are also more than happy to clarify if there are any remaining concerns. --- Rebuttal 2: Title: Thanks for the response Comment: Thanks for your detailed response. Below I'll first clarify my concerns over the selection of the baselines (Re: 3, 4) and then the overall takeaway of the work (Re: 1, 5). The authors described their method clearly in Figure 1, with two main stages: (a) Use prompt tuning to obtain concept tokens. During this process, a *labeled* dataset D is used (Algorithm 1). (b) Select k demonstration data from a candidate data set D^d (Algorithm 2). The size of D^d is 100 as mentioned in Line 230. However, it is not immediately clear to me what the size of D is, though the authors mentioned it is "limited data" (the method "requires tuning a small model on limited data"; can you perhaps clarify further?). My concern is that the two baselines compared in this work, random selection and selection based on similarity, did not utilize this labeled dataset D.
Therefore in my review, I suggested two more comparisons: (1) Compare with prompt tuning as it uses the same labeled dataset D. (2) Since prompt tuning does not involve D^d in an ICL setup, I mentioned "since labeled task data are used, what is the performance of directly (and independently) selecting ICL examples based on their contribution to the predictions of the labeled task data". To clarify this a bit further, I meant to select/prepend demonstration examples from D^d based on whether they can maximize the probability of generating labeled examples from D (i.e., using the terminology from Algorithms 1 and 2 --- P(Y | X^d, Y^d, X) ). Having these two comparisons that also utilize the labeled dataset D will help the audience understand the importance of deriving the concept tokens theta more clearly. My second concern, apart from the demonstration selection algorithm, is on the takeaway of this work. The authors clarified that the goal is "elucidating the underlying mechanisms of in-context learning". In that light, I think a deeper analysis of the selected demonstration examples for each task would be necessary, to give the audience an interpretable picture of the mechanism of ICL. I agree that the transferability of the demonstration examples from small to large models is interesting and potentially useful (Re: 2). I have changed my scores accordingly. --- Rebuttal Comment 2.1: Comment: Thank you for carefully reading through our rebuttal and responding to us. We appreciate you raising the score. Below is our response to your concerns: 1. **About the size of D**: The size of D ranges from 346 (ETHOS-SO, ETHOS-R) to 1.6k (SST2, FPB, COLA, DBpedia, EmoC, EmoS, GSM8K), which is determined by the availability of the annotated data and then capped at 1.6k. We will state this explicitly in the revision. 2.
**About the prompt tuning baseline**: We included the comparison with prompt tuning on D in Figure 8 in the Appendix and also in the new experiments with GSM8K, as detailed in point 4 of our rebuttal. We will add this to the main paper in the revision. 3. **About the contribution-based baseline**: We now understand the second baseline proposed by the reviewer. Thank you for the clarification. We will run the suggested baseline and either post the results here if we can get it done before the discussion period ends on August 21, or we will directly add this baseline in the revision. 4. **About in-depth analysis of the selected demonstrations**: We will include an analysis of the selected demonstrations from all datasets, both from a qualitative perspective as shown in the last paragraph of our rebuttal, and from a quantitative perspective by analyzing the text distribution and information gain of the selected demonstrations, similar to [1]. We will either post the quantitative results here if we can get it done before the discussion period ends on August 21, or we will directly add this in the revision. [1] Han, Xiaochuang, et al. Understanding In-Context Learning via Supportive Pretraining Data. ACL 2023.
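The contribution-based baseline the reviewer clarifies above (select demonstrations from D^d that maximize the probability of generating the labeled examples in D) could be sketched as follows. This is only an illustrative sketch, not code from the paper; `log_p_fn` is a hypothetical wrapper around the LM that returns the log-probability of label Y for query X when a candidate demonstration is prepended, i.e., log P(Y | X^d, Y^d, X):

```python
def select_demos(candidates, labeled_set, log_p_fn, k=4):
    """Rank candidate demonstrations from D^d by how well each one, when
    prepended, helps the LM predict the labeled examples in D; keep top-k.

    log_p_fn(demo, x, y): hypothetical scorer returning the LM's
    log-probability of label y for query x with `demo` prepended.
    """
    scored = []
    for demo in candidates:
        # Sum the log-likelihood of the labeled set under this demonstration.
        total = sum(log_p_fn(demo, x, y) for x, y in labeled_set)
        scored.append((total, demo))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [demo for _, demo in scored[:k]]
```

Each candidate is scored independently here, matching the reviewer's "(and independently)" phrasing; a joint search over k-subsets would be combinatorially more expensive.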
Predicting mutational effects on protein-protein binding via a side-chain diffusion probabilistic model
Accept (poster)
Summary: The paper proposes SidechainDiff, a diffusion model over the torsion angles of the protein sidechains, and uses the learned representations of this model to predict the mutational effects on protein-protein binding. Strengths: The paper shows a wide variety of promising experimental results, ranging from standard benchmarks of binding affinity and side-chain conformation to case studies related to SARS-CoV-2 RBD and antibodies. I cannot comment on their reproducibility because the code was not provided. Weaknesses: While the experimental results are strong, for the paper to be ready for publication the manuscript should be improved in many regards: comparison to the literature, presentation of the method, and certain baseline comparisons. Comparison to the literature: 1. The paper lacks references to existing published methods that have developed Riemannian diffusion models over the hyperdimensional torus to model molecular torsion angles (e.g. [1] and [2]). In this regard, the authors should clarify the relationship between the diffusion process presented here and in those works. To my understanding, the diffusion processes are identical, which raises the question of why the authors refer to the perturbation kernel as intractable and resort to the use of implicit score matching. [1] suggests that the perturbation kernel can be sampled analytically (with a truncated infinite series). Are the perturbation kernels the same? If so, how does the explicit score matching approach of [1] compare to the one developed? 2. Related to the point before, how are the samples X_t and the scores s_t obtained in equation (7)? 3. Except for the diffusion component, the overall approach of the paper seems analogous to RDE [3]. I believe that given the similarity it would be useful to clarify the differences between the methods. For example, line 256 claims the two methods use the same architecture.
Is this true, and if so, does doing the pretraining with the flow of RDE-Network get worse performance than no pretraining at all (DiffAffinity*)? Presentation of the method: 4. For the network architecture, hardly any details are provided in the main text, and even in the appendix these are not very clear. The authors should add some details of the architecture to the main text and clarify the presentation in the appendix: e.g. how is the score of the chi angles predicted? Is it an MLP on top of the hidden representation? Baseline comparisons: 5. The authors should add details about the hyperparameter and model tuning process. E.g. were any hyperparameters tuned for the retrospective studies on SARS-CoV-2? 6. Given the high-dimensional distribution with complex interdependencies between the angles of the different sidechains, looking simply at the MAE of individual torsion angles seems inadequate. This is especially relevant when some angles have reported MAE above 40 degrees; e.g. what would be the MAE performance of a simple baseline of predicting the median (on the circle) for every residue type and angle? More generally, I believe the authors should also provide further metrics, for example: what is the steric clash rate in each of the approaches? 7. Some claims in the paper seem not well justified and should be adjusted. E.g. line 305 (and similarly 322) “[SidechainDiff] achieves comparable performance to RDE, suggesting superior capability in generating side-chain conformations”; if the performances are comparable then the method is not superior. Minor: 8. The text contains many orthographic errors (e.g. to name just a few, at lines 161, 224, 549). I suggest the authors use publicly available programs to review the text and correct these. [1] Jing, Bowen, et al. "Torsional diffusion for molecular conformer generation." *NeurIPS 2022*. [2] Corso, Gabriele, et al. "Diffdock: Diffusion steps, twists, and turns for molecular docking." ICLR 2023. [3] Luo, Shitong, et al.
"Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction." ICLR 2023. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: See weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive feedback. We have carefully addressed each comment and made the necessary revisions accordingly. __Q1:__ 1. Thanks for your advice; we have incorporated references to existing published methods [1, 2] in the related work section. 2. A comprehensive analysis comparing our method with theirs [1, 2] can be found in **Q6 in the "global" response**. 3. Due to time constraints, we were unable to perform a comprehensive comparison between the explicit score matching approach of [1, 2] and the implicit score matching approach used in our paper. In our setting, the ISM loss does not require calculating the ground-truth score function in the torus space $\mathbb{T}^{4}$, which is consistent with our method, so we chose the ISM loss as our loss function. We acknowledge that the ISM loss has its disadvantages, particularly in its reliance on the calculation of the divergence of the score network, which can be challenging. __Q2:__ We use a random-walk sampling procedure to obtain $X_t$, i.e., a perturbed $X_0$. The forward diffusion procedure can be expressed as $dX_t = d\mathbf{B}_t$. Specifically, it involves the following steps: 1. Sample a point $r$ from a standard normal distribution in $(\mathbb{R}^2)^4$. 2. Use the exponential map to project $r$ onto the tangent space of $X_0$. 3. Scale the tangent vector by a factor of $\sqrt{t}$. 4. Project the scaled tangent vector back to the unit circle using the projection map. 5. Obtain the point $X_t$ on the unit circle. We can formalize this sampling procedure on the 4-dimensional torus space $\mathbb{T}^4$ using the following equation: $X_t = Proj_{X_0}(\sqrt{t}\, Exp_{X_0}(r)), \quad r \sim \mathcal{N}(0, I_{(\mathbb{R}^2)^4}).$ In Equation 7, we apologize that $s_t$ is a typo; we actually mean $s_\theta$, the score network. __Q3:__ 1. We thank the reviewer for pointing out the error regarding the model architecture.
We adopted the same architecture as RDE-Network for the $\Delta\Delta G$ stage. However, it is important to note that we employed a completely different model to handle the side-chain conformation. We use the same architecture to predict binding affinity but use different hidden representations from the pre-training modules. Our performance is better than RDE-Network's, which indicates that our pre-training module is more suitable for mutation-related tasks. 2. Yes, DiffAffinity* has better performance than RDE-Network on the SKEMPI2 dataset. In our experiments, we observed that DiffAffinity* without any pre-training features outperformed RDE-Network. Unfortunately, the original paper [3] does not provide the results of RDE-Network without the RDE hidden representation, preventing us from directly verifying whether our results align with the original findings. Nevertheless, in the downstream tasks of SARS-CoV-2 RBD and human antibodies, we observed a significant improvement in the performance of RDE-Network over DiffAffinity*. This suggests that the pre-training features from RDE could indeed enhance the model's generalization ability, and the end-to-end model DiffAffinity* might overfit the SKEMPI2 dataset. A comprehensive analysis of our hidden representation obtained from SidechainDiff and the comparison with those from RDE and ESM2 can be found **in Q4 in the "global" response and in Figure a in the PDF**. __Q4:__ The architecture of the score network $s_{\theta}(\mathbf{X}_t,t,\mathbf{Z})$ is implemented using a multi-layer perceptron (MLP) with 3 layers, each containing 512 units. In DiffAffinity, we concatenate the hidden representation from SidechainDiff and the sequence information as the sequential input of the IPA-like transformer. We have incorporated additional details regarding our network architecture in both the main text and the appendix. __Q5:__ We appreciate the point raised by the reviewer on the issue of model hyperparameters.
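On the flat torus, the Q2 random-walk procedure amounts to adding wrapped Gaussian noise to each angle. The following is a minimal NumPy sketch of that view in angle coordinates (our own illustration, not the authors' implementation; `sample_torus_random_walk` is a hypothetical name):

```python
import numpy as np

def sample_torus_random_walk(x0, t, rng=None):
    """Sample X_t from Brownian motion on the torus started at angles x0.

    For a flat torus (angles mod 2*pi), the time-t Brownian marginal is a
    wrapped Gaussian: add sqrt(t) * standard-normal noise to each angle,
    then wrap back into [-pi, pi).
    """
    rng = np.random.default_rng(rng)
    r = rng.standard_normal(np.shape(x0))       # tangent-space noise
    x_t = np.asarray(x0) + np.sqrt(t) * r       # scale by sqrt(t) and move
    return (x_t + np.pi) % (2 * np.pi) - np.pi  # wrap onto the torus

# Example: perturb the four chi angles of one residue at diffusion time t
chi_t = sample_torus_random_walk(np.array([0.1, -2.0, 3.0, 1.5]), t=0.25)
```

At t = 0 the walk returns the starting angles (up to wrapping), and larger t spreads the marginal toward the uniform distribution on the torus.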
We do not employ any fine-tuning techniques in the SARS-CoV-2 RBD mutational-effect prediction task or the optimization of human antibodies against SARS-CoV-2. Instead, we use the models trained on the SKEMPI2 dataset and directly use their predictions for the $\Delta\Delta G$ of mutations in the downstream tasks. __Q6:__ 1. We have incorporated more metrics and methods [3, 4] for side-chain conformation prediction **in Q6 in the "global" response and in Table c in the PDF**. 2. Additionally, we ran an experiment predicting the circular median for each residue type and angle, and the results are presented in the table below. | Method | $\chi_1$ | $\chi_2$ | $\chi_3$ | $\chi_4$ | |--------|---------|---------|---------|---------| | Random | 89 | 98 | 68 | 114.64 | __Q7:__ We have made the necessary corrections to rectify all the writing mistakes in our paper. We sincerely appreciate your valuable advice and guidance. __Reference__ [1] Jing, Bowen, et al. "Torsional diffusion for molecular conformer generation." Advances in Neural Information Processing Systems 35 (2022): 24240-24253. [2] Corso, Gabriele, et al. "DiffDock: Diffusion steps, twists, and turns for molecular docking." ICLR 2023. [3] Luo, Shitong, et al. "Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction." bioRxiv (2023): 2023-02. [4] McPartlon, Matthew, and Jinbo Xu. "An end-to-end deep learning method for protein side-chain packing and inverse folding." Proceedings of the National Academy of Sciences 120.23 (2023): e2216438120. [5] Misiura, Mikita, et al. "DLPacker: deep learning for prediction of amino acid side chain conformations in proteins." Proteins: Structure, Function, and Bioinformatics 90.6 (2022): 1278-1290. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the careful response. 
I believe these theoretical analyses and the improved presentation of the method will be valuable additions to the paper, which also retains promising experimental results. I have therefore raised my score to 7 and recommend acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the constructive comments on our work.
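The random-walk perturbation on $\mathbb{T}^4$ described in Q2 of the rebuttal above can be illustrated with a minimal sketch. This is our own simplified parameterization, not the authors' code: each torsion angle is represented in radians, perturbed by $\sqrt{t}$-scaled Gaussian tangent noise, and wrapped back onto the circle; the function name is hypothetical.

```python
import numpy as np

def perturb_on_torus(x0, t, rng=None):
    """Approximate forward diffusion X_t given X_0 on the 4-torus T^4.

    Illustrative sketch only: torsion angles are kept in radians, Gaussian
    tangent noise is scaled by sqrt(t), and the result is wrapped back into
    [-pi, pi) -- the 'projection back onto the unit circle' step.
    """
    rng = np.random.default_rng() if rng is None else rng
    r = rng.standard_normal(4)                    # noise in the tangent space
    xt = x0 + np.sqrt(t) * r                      # scale by sqrt(t)
    return (xt + np.pi) % (2.0 * np.pi) - np.pi   # wrap back onto the circle
```

Under this angle parameterization, the wrapped-Gaussian step plays the role of $X_t = Proj_{X_0}(\sqrt{t}\,Exp_{X_0}(r))$ in the rebuttal.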
Summary: The paper introduces a new representation-learning approach called SidechainDiff that predicts how amino acid mutations influence protein-protein binding using a Riemannian diffusion model. This is a very important problem in the field of structural biology, for example, in evaluating the impact of antibody variants on antigen binding. Moreover, SidechainDiff is the first approach that focuses on side chains, in contrast to previous methods that work with protein backbones. Overall, this paper is a valuable contribution to both computational biology and artificial intelligence. Strengths: Originality: The paper provides a new solution for the well-known task of predicting the impact of amino acid mutations on binding. The introduced methods (SidechainDiff and DiffAffinity) are new, though the authors build on the established concept of diffusion models. The current work focuses on side-chain generation, while previous studies address protein backbone generation. The related studies are adequately cited. Quality: The submission is technically sound, appropriate, and complete. It provides a clear and detailed explanation of all steps of the work. In addition, the authors compare DiffAffinity with other models, providing a baseline of its performance. The authors provide insights into the further steps of the work and potential applications. Clarity: The submission is written clearly and well organized. It provides all the necessary details for the reader. Significance: Due to the lack of experimentally determined structures, there is a need for tools able to predict the effect of amino acid mutations on binding without this information. This paper provides such a tool. Weaknesses: The predicted mutational effect was checked only on one type of antibody-antigen system (SARS-CoV-2 RBD). It is not clear whether the model will show the same performance in other antibody-antigen systems or how the performance will change. 
Also, it is not discussed how protein characteristics (such as length, mutation position, etc.) can influence the performance and predictions. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Will SidechainDiff be an open-source approach? If so, it would be very useful to prepare a GitHub repository with the code and instructions on how to use it. 2. Can the results of SidechainDiff be biased towards SARS-CoV-2 RBD? Have you checked the results on other antibody-antigen systems? Will the results remain the same? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The limitations of SidechainDiff are not clearly written; please provide a detailed discussion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's constructive comments on our work. In response, we have carefully addressed their concerns and provided clarification on various issues as follows: __Q1:__ To obey the double-blind reviewing policy, we do not provide a GitHub URL at this stage. However, SidechainDiff will be released soon after the peer review finishes, and we will provide the training and prediction code for SidechainDiff along with usage instructions. __Q2:__ DiffAffinity is also applicable to other antigen-antibody systems. In response to the question, we introduce an additional task on another antibody-antigen system. We aim to predict, from deep-sequencing libraries of the therapeutic antibody trastuzumab, specificity to human epidermal growth factor receptor 2 (HER2) with experimentally validated binary labels. For this purpose, we use DiffAffinity, RDE, and ESM2* to discriminate between antigen-binding and non-binding mutations in the CDR H3 region of the 1N8Z antibody protein, where the total number of mutations is 491 [1]. The mutations are classified based on the sign of $\Delta \Delta G$, where positive values indicate antigen-binding and negative values indicate non-binding interactions. To address the imbalanced dataset, we employ the AUPRC metric. The results of this task are presented in the table below. | Method | AUPRC | |----|----| |ESM2*|0.739| |RDE|0.755| |DiffAffinity| 0.768| Compared with other deep learning methods, our model DiffAffinity also achieves the best performance on this task, which indicates that our model can readily handle different datasets. __Q3:__ We acknowledge that one of the disadvantages of our model is its inability to capture the changes in backbone structure caused by mutations. __Reference__ [1] Mason, et al. Optimization of therapeutic antibodies by predicting antigen specificity from antibody sequence via deep learning. Nat Biomed Eng (2021) --- Rebuttal Comment 1.1: Comment: Thank you for the response! 
I'm satisfied with the additions. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the constructive comments on our work.
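The AUPRC evaluation described in Q2 of the rebuttal above (classifying mutations as binding vs. non-binding by the sign of $\Delta\Delta G$ on an imbalanced dataset) can be sketched in pure NumPy. This is our own illustration of the metric, not the authors' evaluation code, and the example scores and labels are made up:

```python
import numpy as np

def average_precision(labels, scores):
    """Average precision (area under the precision-recall curve).

    Candidates are ranked by descending score; precision is averaged over
    the positions of the positive examples. Pure-NumPy illustration of the
    AUPRC metric used for the imbalanced binding/non-binding task.
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)                        # true positives at each rank
    precision = tp / np.arange(1, len(labels) + 1)
    return float(np.sum(precision * labels) / labels.sum())
```

A perfect ranking (all positives scored above all negatives) gives an average precision of 1.0, which is why AUPRC is informative on imbalanced data where AUROC can look deceptively high.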
Summary: The paper introduces DiffAffinity, a novel method for predicting the effects of mutations on protein-protein interactions. The authors leverage a diffusion model to learn representations that aid in predicting changes in binding affinity (∆∆G). They compare DiffAffinity's performance with existing methods such as FoldX and RDE-Net. The paper also explores the application of DiffAffinity in optimizing human antibodies against SARS-CoV-2. Strengths: 1. The paper innovatively proposes a diffusion-based approach for predicting side-chain conformations at the protein-protein interface. 2. The authors provide a thorough comparison of DiffAffinity with existing methods, demonstrating its superior performance. Weaknesses: 1. The paper's main claim—that the hidden representation learned from the diffusion model can enhance binding affinity prediction—is not clearly elaborated. 2. The proposed method does not account for changes in the backbone, which could limit its effectiveness in cases where mutations cause such changes. 3. The paper could benefit from a discussion on the computational efficiency of DiffAffinity compared to other methods. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Could the authors elucidate why the representation learned from the diffusion-based model benefits binding affinity prediction? More elaboration on this point would be beneficial. 2. Is there a performance comparison between the model using the diffusion-based representation and the model using other self-supervised representations (e.g., ESM2)? It would be insightful to see the model performance when replacing the diffusion-based representation with other representations. 3. The score network receives timestep t as input. 
When computing the hidden representation for the downstream task of affinity prediction, what's the value of t, and why was it chosen? 4. After obtaining the hidden representation from the diffusion-based encoder, an additional module predicts the binding affinity from the hidden representation. Is this additional module also an IPA-like transformer? The appendix seems to lack the specific model architecture of this module. 5. In Table 1, the model appears to have achieved state-of-the-art performance without the representation learned from the diffusion model. Does this suggest that the performance gain primarily stems from the model architecture itself? 6. Could the authors elaborate on the computational efficiency of DiffAffinity? How does it scale with the size of the protein or the number of mutations? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The proposed method only considers side chain conformation changes and does not take backbone changes into account. This limitation might hinder its practical applications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's approval and constructive comments on our work. In response, we have carefully addressed their concerns and provided clarification on various issues as follows: __Q1:__ The affinity of the complex is derived from the dynamic conformation of the protein rather than its precise fixed structure. We employ a diffusion model to characterize the distribution of side-chain conformations, which allows for a more accurate simulation of the true physical interactions of the side chains. To elucidate the benefits of the representations learned by the diffusion model for affinity prediction, we incorporated an experiment visualizing these representations after dimensionality reduction using PCA (**Q4 in the "global" response and Figure a in the PDF**). In comparison with the representations from the flow model (RDE) and the pretrained language model (ESM-3B), we found that the representations based on SidechainDiff can more directly discern the effects of mutations on affinity. __Q2:__ We conducted additional comparisons between our DiffAffinity model and two other baselines: (a) ESM2 with a 2-layer MLP predictor, and (b) feeding ESM2 representations to the DiffAffinity architecture. Importantly, our DiffAffinity model consistently outperformed these baselines on all the datasets mentioned. The results of these comparisons can be found in **Table a in the PDF**. The findings from these comparisons align with the conclusions drawn from addressing Q1, both indicating that the hidden representation obtained from SidechainDiff is more suitable for mutation-related tasks. These results further validate the efficacy and superiority of our DiffAffinity model in handling mutation-related challenges. 
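The PCA-based inspection of the learned representations mentioned in Q1 above can be sketched as follows. This is our own illustration with randomly generated embeddings, not the authors' analysis; the function name and the 128-dimensional feature size (matching the representation dimension cited later in the thread) are assumptions:

```python
import numpy as np

def pca_project(embeddings, n_components=2):
    """Project per-mutation hidden representations to 2-D with PCA (via SVD),
    so their relation to ddG can be inspected visually.

    Pure-NumPy illustration; the real analysis would use the learned
    SidechainDiff / RDE / ESM embeddings rather than random vectors.
    """
    X = embeddings - embeddings.mean(axis=0, keepdims=True)  # center features
    _, _, vt = np.linalg.svd(X, full_matrices=False)         # principal axes
    return X @ vt[:n_components].T                           # 2-D coordinates

rng = np.random.default_rng(0)
emb = rng.standard_normal((100, 128))   # e.g. 100 mutations, 128-dim features
coords = pca_project(emb)
```

The resulting 2-D coordinates could then be colored by measured $\Delta\Delta G$ to check whether the representation separates favorable from unfavorable mutations.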
__Q3:__ We thank the reviewer for pointing out the potential confusion caused by the usage of the score network. In the DiffAffinity model, we exclusively use the conditional encoder from SidechainDiff, where the inputs are wild-type sequences and wild-type protein structures, to obtain hidden representations. These representations are then fed into DiffAffinity. Consequently, we do not utilize the score network from SidechainDiff, and there is no need to consider the timestep $t$ for the score network in this context. In the sampling procedure, we employ the score network to generate samples from the reverse process. The specifics of the sampling procedure and the setting of the parameter $t$ can be found in **Algorithm 1 of Section 3.2**. __Q4:__ Yes, the additional module is an IPA-like transformer. The DiffAffinity model utilizes the hidden representation from SidechainDiff, along with sequence and protein structure information, as input to an IPA-like transformer. This transformer generates the final representation, and the difference between the final representations of the wild type and the mutant is used to predict the $\Delta \Delta G$ value. We have now added a detailed description in **Section A.2 of the Appendix**. __Q5:__ In our experiments, we observed that DiffAffinity* without any pre-training features outperformed RDE-Network and other methods. Unfortunately, the original paper [1] does not provide the results of RDE-Network without the RDE hidden representation, preventing us from directly verifying whether our results align with the original findings. Nevertheless, in the downstream tasks of SARS-CoV-2 RBD and human antibodies, we observed a significant improvement in the performance of DiffAffinity over DiffAffinity*. This suggests that the pre-training features from SidechainDiff can indeed enhance the model's generalization ability and that the end-to-end model DiffAffinity* may overfit the SKEMPI2 dataset. 
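The read-out scheme in Q4 above, where $\Delta\Delta G$ is predicted from the difference between the wild-type and mutant final representations, can be sketched as below. The stand-in encoder, the linear head, and all dimensions are our own assumptions; the actual model uses an IPA-like transformer rather than mean pooling:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 1)) * 0.01  # hypothetical linear read-out head

def encode(representation):
    """Stand-in for the IPA-like transformer: pools per-residue features
    into a single complex-level vector. Purely illustrative."""
    return representation.mean(axis=0)

def predict_ddg(h_wildtype, h_mutant):
    """ddG is read out from the difference of the two final representations."""
    diff = encode(h_mutant) - encode(h_wildtype)
    return float(diff @ W)

h_wt = rng.standard_normal((128, 128))   # 128 residues x 128-dim features
```

One consequence of the difference-based read-out is built-in antisymmetry: swapping wild type and mutant flips the sign of the prediction, and an identical pair yields exactly zero.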
Meanwhile, we keep the same hyperparameters and model architecture as RDE-Network and ESM2* in the second stage of DiffAffinity. Our model shows superior performance to RDE-Network and ESM2* in the downstream tasks, which also indicates that our hidden representation is more suitable for mutation-related tasks. We extend our analysis of the pre-training features in **Q4 in the "global" response**. __Q6:__ Thanks for your attention to computational efficiency. According to our preprocessing procedure, we first crop all protein complexes into patches containing 128 residues by first choosing a seed residue and then choosing its 127 nearest neighbors based on C-beta distances. Therefore, by model design, the computational efficiency of DiffAffinity is not affected by the size of the protein or the number of mutations. We also ran an experiment to verify this conclusion. | Number of mutations | Time | |----|----| | 1 | 2.33s| | 2~5 | 2.33s| | >6 | 2.33s| __Q7:__ We have now acknowledged the limitations of our model in the **Conclusion section** of the revision. __Reference__ [1] Luo, Shitong, et al. "Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction." bioRxiv (2023): 2023-02. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments and queries in your rebuttal. I appreciate the effort you've invested in responding to my concerns. However, I still have some concerns regarding the newly added ESM2 representation and the DiffAffinity experiment. Specifically, it seems counterintuitive that the ESM2 representation combined with the DiffAffinity model performs worse than the DiffAffinity model alone, without a pre-trained representation. Could you provide an explanation for this surprising result? In addition, while the SARS-CoV-2 experiment shows that DiffAffinity has a significantly better Pearson correlation coefficient than DiffAffinity*, the rankings for the five favorable mutations seem very close. 
How should we interpret this apparent discrepancy in the experimental results? --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's approval and constructive comments on our work. In response, we have carefully addressed the remaining concerns as follows: __Q1__: 1. The inefficacy of the ESM embedding for this task is mainly due to the fact that the ESM language model is trained only on the sequences of protein single chains rather than protein complexes. 2. The dimension of the ESM2 embedding is 2,560, considerably larger than the 128 dimensions of the representation embeddings used in our DiffAffinity and the previous method RDE. The scarcity of curated training labels for this task might lead to overfitting with higher-dimensional representations. This could explain why the ESM2 representation combined with the DiffAffinity model performs worse than the DiffAffinity model alone. 3. Additionally, to assess the utility of ESM2 embeddings, we applied PCA to visualize them. Our observations revealed their failure to effectively capture the magnitudes of $\Delta\Delta G$ (more details in the global response Q2 and Figure a in the global PDF). Our findings are consistent with previous studies, such as RDE [1] (Table 1 in its original paper), which also indicated the poor performance of the ESM language model in predicting mutational effects on protein-protein binding. __Q2__: We also understand the reviewer's concern about the SARS-CoV-2 experiment results. We first clarify that **Table 2** and **Table 3** show evaluations on two distinct tasks. Specifically, **Table 2** presents the evaluation on the **mutations within the RBD region** of SARS-CoV-2's spike protein. This dataset comprises hundreds of mutations, and we employed the Pearson correlation coefficient to assess the accuracy of the predicted $\Delta\Delta G$. On the other hand, **Table 3** focuses on the evaluation on **mutations within the antibodies** that bind to the spike protein of SARS-CoV-2. 
Due to the absence of experimentally determined $\Delta\Delta G$ results in this dataset, we followed previous works in evaluating the methods' performance in identifying and ranking the top 5 favorable mutation sites for binding. The results in **Table 3** indicate that DiffAffinity outperforms DiffAffinity*. However, due to the very limited amount of labeled data available for this task, substantial differences between these methods may not be evident. [1] Luo et al. Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction. ICLR (2023)
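The ranking evaluation described in Q2 above (where methods are judged by the ranks they assign to the known favorable mutations) can be sketched as follows. This is our own illustration with made-up predictions, not the authors' evaluation code:

```python
import numpy as np

def ranks_of_favorable(pred_ddg, favorable_idx):
    """Rank (1 = most favorable, i.e. lowest predicted ddG) assigned to each
    experimentally known favorable mutation among all candidates.
    Illustrative sketch only; the real evaluation ranks the top 5 sites."""
    order = np.argsort(pred_ddg)              # ascending: lower ddG is better
    rank = np.empty_like(order)
    rank[order] = np.arange(1, len(pred_ddg) + 1)
    return [int(rank[i]) for i in favorable_idx]

pred = np.array([0.3, -1.2, 0.7, -0.5, 1.1])  # hypothetical predicted ddG
```

A method that places the known favorable mutations at the smallest ranks is preferred; when two methods both rank them near the top, the metric saturates, which is consistent with the close rankings the reviewer observed.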
Summary: The paper studies the problem of mutational effect prediction on protein-protein binding. To address the data scarcity issue, the authors propose to first pre-train their model on the PDB using diffusion models to fit the protein side-chain distribution. Then, the learned representations are used for predicting the mutational effects on SKEMPIv2. The method achieves good results on the benchmark in the authors' evaluation setting. Strengths: 1. Prediction of mutational effects on protein-protein binding is an important and difficult problem in the protein community. It suffers from the data scarcity issue, and pre-training seems like a good solution to the problem. 2. The paper introduces diffusion-based models to fit the protein side-chain distribution, which, to the best of my knowledge, is among the first papers to explore this approach. Weaknesses: 1. The paper's writing and clarity are poor, making it hard to follow. Many sections seem to be directly copied from previous works. The authors introduce many unnecessary notations and equations, which makes the method even more difficult to understand. 2. The novelty of the paper is limited, as it primarily extends existing works without introducing problem-specific designs. The substitution of flow-based models with diffusion models for side-chain distribution modeling is not sufficiently innovative. 3. The related work section lacks a discussion of existing methods on protein side-chain packing, which would provide a more comprehensive context for readers. 4. The experiment design contains significant flaws that undermine the persuasiveness and support of the evaluation presented. Notably, many important baselines and details are missing from the experiment section, which weakens the arguments made by the authors. (See questions) Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Major points: 1. 
A significant portion of Section 3.2 appears to be directly copied from the Riemannian diffusion model paper [1], including Propositions 1 and 2. Please discuss why the two propositions are important to side-chain problems and mention in the main paper that **these contributions are derived from previous works**. Also, the section should be rewritten with appropriate paraphrasing to clarify the changes made to adapt the model for the new task. 2. The authors propose to use a three-fold cross-validation for evaluation, but several details regarding the dataset split need to be addressed. (a) How is the dataset split? Is it split by proteins or mutations? Is it split based on sequence similarity or structure similarity? (b) The claim that DDGPred does not provide training scripts, and the authors copied the results from the published work is not a valid excuse. The training script of DDGPred is very easy to implement. Reproducing the results with the same dataset splits is essential to ensure fair comparison, as the performance can vary significantly with different splits due to limited data. Cross-validation should be repeated multiple times to assess the significance of the improvement. 3. Important baselines are missing in the benchmark. First, more traditional methods should be included in comparison, e.g. FlexDDG [2]. These methods hardly suffer from the overfitting problem. Second, the performance of using pre-trained representations with a linear predictor is low and the improvement of DiffAffinity over DiffAffinity* is small. This raises the doubt on the effects of pre-training. Please include a baseline using other pre-trained representations, e.g., ESM2. Two baselines should be considered (a) ESM-2 with a 2-layer MLP predictor and (b) Feeding ESM-2 representations to the DiffAffinity architecture. 4. The reported metrics are problematic. 
First, AUROC is more suitable for balanced binary classification tasks, and the number of positive and negative samples should be reported for binary classification tasks. If the data is imbalanced, AUPRC should also be reported. Second, treating mutations with ddG around 0 as either positive or negative is not appropriate, considering the measurement errors and the fact that most mutations in SKEMPIv2 are neutral. Additionally, discarding proteins with fewer than 10 mutations when reporting per-structure Pearson raises concerns about the sensitivity of the reported metric to this threshold. The authors should clarify the number of proteins and mutations left for evaluation and provide justifications for choosing 10 as the threshold. 5. The evaluation in Sec. 4.3 is not convincing. Assigning high ranks to only five experimentally tested mutations is insufficient to draw conclusions about the effectiveness of the proposed method, especially considering the limited number of mutations tested experimentally in the original paper [3]. The authors should consider a more comprehensive evaluation by testing a larger set of possible mutations. 6. In Sec. 4.4, the authors only compare their method with three baselines and ignore recent advancements in side-chain packing problem [4, 5]. Since modeling side-chain conformation is a major contribution of the paper, more experimental evaluation with a broader set of baselines should be included to demonstrate the method's effectiveness. Minor points: 1. Line 121: Please give a definition of backbone atoms and side-chain atoms. This will be more friendly to audience without knowledge about proteins. 2. Eq. (3) in Sec. 3.2: $U(X_t)$ is used without any definition. 3. Line 148: Please discuss how the Brownian motion on $\mathrm{T}^4$ is defined in the context of side-chain packing problem. 4. Eq. (4) in Sec. 3.2: The SDE here is Variance-Exploding SDE, while the concepts introduced before are based on Variance-Preserving SDE. 
Please correct me if I am wrong. 5. Line 165: Please discuss why the perturbation kernel $p_{t|0}$ is difficult to obtain. 6. Line 171: “equation 6 -> Equation 6” Please make the references to equations consistent across the paper. Overall, I think the paper is not ready to publish in its current state. However, I firmly believe that the paper holds great potential for making a valuable contribution to the field if the authors dedicate more efforts towards improving the experimental design and writing. By addressing the weaknesses and incorporating the suggested changes, the paper can reach a higher standard of quality and become a valuable addition to the existing literature. With the necessary revisions, I am confident that the paper can be transformed into a strong and impactful publication. [1] De Bortoli, Valentin, et al. "Riemannian score-based generative modelling." Advances in Neural Information Processing Systems 35 (2022): 2406-2422. [2] Barlow, Kyle A., et al. "Flex ddG: Rosetta ensemble-based estimation of changes in protein–protein binding affinity upon mutation." The Journal of Physical Chemistry B 122.21 (2018): 5389-5399. [3] Shan, Sisi, et al. "Deep learning guided optimization of human antibody against SARS-CoV-2 variants with broad neutralization." Proceedings of the National Academy of Sciences 119.11 (2022): e2122954119. [4] Misiura, Mikita, et al. "DLPacker: deep learning for prediction of amino acid side chain conformations in proteins." Proteins: Structure, Function, and Bioinformatics 90.6 (2022): 1278-1290. [5] McPartlon, Matthew, and Jinbo Xu. "An end-to-end deep learning method for protein side-chain packing and inverse folding." Proceedings of the National Academy of Sciences 120.23 (2023): e2216438120. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors should add a separate paragraph to discuss the potential limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments on our work. Below are our responses to the reviewer's concerns. __Q1:__ We realize that using so much notation similar to that of the Riemannian diffusion model paper may be misleading. We agree with the reviewer in this aspect. Following the reviewer's suggestion, we have rewritten Section 3.2. In the initial submission, we included these two propositions and explicitly cited their work just for the completeness of our paper. To avoid confusion, we have now moved these propositions to the supplementary material and clearly stated their reference. To clarify the changes to our model, the primary modifications of the original Riemannian diffusion model for our task are presented in **Q2 in the "global" response**. __Q2:__ 1. We split the dataset by proteins based on structure similarity. Specifically, we have split the SKEMPI2 dataset into three folds based on structure, ensuring that each fold contains unique protein complexes not present in the other folds. Two of the folds are utilized for training and validation, whereas the remaining fold is reserved for testing purposes. 2. Following the reviewer's suggestion, we have retrained the DDGPred model using the same hyperparameters as outlined in the reference. The result indicates that the performance of DDGPred is roughly comparable with that reported in their reference (please see **Table a in the PDF**). 3. Following the reviewer's suggestion, we repeated the training of our model 5 times in the cross-validation experiments. The results demonstrate that our methods are robust and stable in the test (please see **Table b in the PDF**). __Q3:__ 1. In the previous submission, we selected the most representative and publicly available benchmarks from different method categories. As the reviewer pointed out, we have checked FlexDDG according to their reference. However, FlexDDG was not accessible, and no server was available for evaluation. 2. 
Following the reviewer's suggestion, we performed comparisons between DiffAffinity and the two baselines. The new results can be found in **Table a in the PDF**. For an in-depth analysis of SidechainDiff, we extend to a visualization analysis of the baseline results using PCA (please see **Q4 in the "global" response and Figure a in the PDF**). __Q4:__ 1. Following the reviewer's suggestion, we report the new AUPRC metrics below. The AUPRC results demonstrate that our model outperforms all the baselines, indicating its effectiveness in handling imbalanced datasets. | Method | FoldX | ESM-1v | ESM-IF| DDGPred| RDE-Net | Linear| DiffAffinity*| DiffAffinity| |-|-|-|-|-|-|-|-|-| |AUPRC|0.839|0.735|0.768|0.892|0.887|0.741|0.857|0.896| 2. Following the reviewer's suggestion, we have reported the results considering only mutations with $\Delta \Delta G$ values above 1 or below -1. Our model exhibits performance consistent with the previous results (please see **Table e in the PDF**). 3. Following the RDE setting, we chose 10 as the per-structure threshold. We clearly show that our model consistently outperforms all the baselines, regardless of the chosen threshold (please see **Table d in the PDF**). __Q5:__ 1. In addition to the task in Sec. 4.3, we also evaluated our method on another downstream task, including 285 mutations (as described in Sec. 4.2). Our method achieves a Pearson correlation of 0.466, higher than the other methods. 2. Following the reviewer's suggestion, we further evaluate our method on the HER2-antibody data including 491 mutations with experimentally validated binary labels [4]. Our method also achieves better performance than RDE. | Method | AUPRC | |----|----| |ESM2* |0.739| |RDE|0.755| |DiffAffinity| 0.768| __Q6:__ Following the reviewer's suggestion, we have included AttnPacker and DLPacker for comparison. The analysis of the updated results can be found in **Q5 in the "global" response and Table c in the PDF**. 
Our method achieves better performance than AttnPacker and DLPacker in terms of both the average MSE of side-chain torsion angles and the number of side-chain clashes. __Q7: Minor points__ 1. We have now added a definition of backbone atoms and side-chain atoms in the revised submission. 2. Thank you for the guidance. We have removed the irrelevant notation $U(X_t)$ from Eq. (3) as it does not pertain to our tasks. 3. In the context of the side-chain packing problem, we represent the rotamer (side-chain conformation) in $\mathbb{T}^4 = (\mathbb{S}^1)^4$. Consequently, we first consider Brownian motion on $\mathbb{S}^1$ and then extend it directly to $\mathbb{T}^4$. Brownian motion on the unit circle, embedded in $\mathbb{R}^2$, satisfies the SDE $\mathrm{d}\mathbf{Y}_t = -\frac{1}{2}\mathbf{Y}_t\,\mathrm{d}t + \mathbf{K}\mathbf{Y}_t\,\mathrm{d}\mathbf{B}_t$, with $K_{11}=K_{22}=0$, $K_{12}=-1$, $K_{21}=1$, where $\mathbf{B}_t$ is a standard Brownian motion on the real line. By applying this diffusion process to each dimension and taking the Cartesian product of the results, we obtain Brownian motion on the 4-dimensional torus $\mathbb{T}^4$. 4. Yes, the SDE in our paper is a Variance-Exploding SDE. 5. We have now provided more details about the perturbation kernel in **Q6 in the "global" response**. 6. We have corrected the writing errors in our paper. __References__ [1] Shan et al. Deep learning guided optimization of human antibody against SARS-CoV-2 variants with broad neutralization. Proceedings of the National Academy of Sciences (2022) [2] Luo et al. Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction. ICLR (2023) [3] Starr et al. Shifting mutational constraint in the SARS-CoV-2 receptor-binding domain during viral evolution. Science (2022) [4] Mason et al. 
Optimization of therapeutic antibodies by predicting antigen specificity from antibody sequence via deep learning. Nat Biomed Eng (2021) --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their detailed response. However, my concerns about the experimental setting remain. Here are my responses: >Q2 1. According to your description, the dataset is split by protein complexes, but it is not split by structure similarity. "Split by structure similarity" means the structures in the training set and test set have structure similarity below a pre-defined threshold. If no sequence or structure similarity constraints are added during the dataset split, the task becomes much easier. 2. Thanks for adding this baseline. The results look reasonable. 3. The variance reported in Table b is too small. I guess the authors just retrained the model on the same split with different random seeds. However, "repeat cross-validation multiple times" means that the dataset split should be regenerated every time and the model should be retrained on the new splits. In my experience, this will yield very large variance in model performance and can even change the rank of different methods. This guards against the proposed method excelling only on a certain split, which is very common on small datasets. >Q3 1. FlexDDG is a very strong baseline on this task, which should not be neglected. It is publicly available as a Rosetta script, the tutorial for which can be found at https://github.com/Kortemme-Lab/flex_ddG_tutorial. 2. Thanks for adding experiments with ESM representations. It is quite surprising to see that DiffAffinity with ESM2 performs even worse than DiffAffinity with random initialization. Also, my question "the performance of using pre-trained representations with a linear predictor is low and the improvement of DiffAffinity over DiffAffinity* is small. This raises the doubt on the effects of pre-training" is still not addressed. >Q4 1. 
The improvement of DiffAffinity over DDGPred in terms of AUPRC is too small. Overall, I'd like to thank the authors for their promise to revise the paper, and parts of my concerns about the experimental setting are addressed. However, the major concerns still remain: (1) lack of repeated experiments, (2) lack of important baselines, and (3) the incremental improvement brought by pre-training. Therefore, I keep my score unchanged. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's constructive comments on our work. Below are our responses to the reviewer's remaining concerns. __Q2__ 1. We follow the train-test splitting strategy used in prior works such as RDE [1] and GeoPPI [2]. To elucidate the structural similarity in the SKEMPI2 dataset split, we ran TM-align on each pair of protein complexes between the training and testing folds. Of these, 98.6% have a TM-score below 0.3 and only 0.6% have a TM-score exceeding 0.6. It is worth noting that a TM-score below 0.4 indicates a very low level of structural similarity [3]. Furthermore, we would like to emphasize that, beyond the cross-validation evaluation on the SKEMPI2 dataset, we assessed performance on independent testing sets, including the set of mutations on the RBD of SARS-CoV-2's spike protein (**Table 2**) and the set of mutations on the antibodies against SARS-CoV-2 (**Table 3**). Our method demonstrates state-of-the-art results on these independent testing sets. 3. We acknowledge that we misunderstood the reviewer's suggestion in the previous revision, where we simply retrained the model on the same split with different random seeds. Following the reviewer's suggestion, we have now retrained our model 5 times with different data splits. We show details on the standard deviation and mean values below. DiffAffinity consistently outperformed RDE in terms of average scores across all evaluation metrics. 
| Method | Pearson $\mu$ | Pearson $\sigma$ | Spearman $\mu$ | Spearman $\sigma$ | AUPRC $\mu$ | AUPRC $\sigma$ |
|-|-|-|-|-|-|-|
|RDE|0.616|0.032|0.508|0.025|0.883|0.002|
|DiffAffinity|0.668|0.027|0.547|0.026|0.893|0.001|

__Q3__: 1. Following the reviewer's suggestion, we have now included FlexDDG for comparison, as presented below. DiffAffinity outperforms FlexDDG on all metrics.

| Method | Pearson | Spearman | AUROC | AUPRC | Per-Structure Pearson | Per-Structure Spearman |
|-|-|-|-|-|-|-|
|FlexDDG|0.402|0.427|0.675|0.866|0.414|0.386|

2. i) The inefficacy of the ESM embedding for this task is mainly due to the fact that the ESM language model is trained only on the sequences of single protein chains rather than protein complexes. ii) The dimension of the ESM2 embedding is 2,560, considerably larger than the 128 dimensions of the representation embeddings used in our DiffAffinity and the previous method RDE. The scarcity of curated training labels for this task might lead to overfitting with higher-dimensional representations. This could explain why the ESM2 representation combined with the DiffAffinity model performs worse than the DiffAffinity model alone. Our findings are consistent with previous studies, such as RDE [1] (**Table 1** in its original paper), which also indicated the poor performance of the ESM language model in predicting mutational effects on protein-protein binding. We understand the reviewer's concerns about the relative improvements from the pre-training model. i) It's worth highlighting that DiffAffinity outperforms DiffAffinity* across all evaluation metrics, as detailed in Table 2. Notably, it exceeds DiffAffinity* by 4.6% on the AUPRC metric (**Table in Q4**). ii) Compared to DiffAffinity*, DiffAffinity demonstrates substantial improvement when evaluated on the independent testing set for SARS-CoV-2 (**Table 2**). Specifically, DiffAffinity and DiffAffinity* achieve Pearson correlation coefficients of 0.466 and 0.295, respectively. 
The performance on an independent testing set is more indicative of the model's generalization capability. __Q4__: 1. We conducted a comprehensive evaluation of the methods utilizing a range of metrics widely used in previous work. DiffAffinity achieves overall better performance than DDGPred. While DiffAffinity did not demonstrate a substantial improvement over DDGPred in terms of the AUPRC metric, it exceeds DDGPred by 18.6% in terms of the Spearman correlation coefficient. Furthermore, for both the Per-Structure Pearson and Per-Structure Spearman correlation coefficients, DiffAffinity exhibits notable increases of 12.5% and 11.1%, respectively (refer to **Table 1**). Additionally, it's important to note that DiffAffinity is much more computationally efficient than DDGPred. [1] Luo et al. Rotamer Density Estimator is an Unsupervised Learner of the Effect of Mutations on Protein-Protein Interaction. ICLR (2023). [2] Liu et al. Deep geometric representations for modeling effects of mutations on protein-protein binding affinity. PLoS Computational Biology (2021). [3] Xu et al. How significant is a protein structure similarity with TM-score=0.5? Bioinformatics (2010).
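As a side note on the evaluation metrics that recur throughout this thread (Pearson, Spearman, AUPRC), here is a minimal self-contained sketch of how they can be computed. The arrays are synthetic placeholders, not values from the paper, and the helper names are ours:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc))

def spearman(x, y):
    """Spearman correlation: Pearson on ranks (no tie handling in this sketch)."""
    rank = lambda a: np.argsort(np.argsort(a))
    return pearson(rank(x), rank(y))

def auprc(labels, scores):
    """Average precision: mean precision at each positive, scores sorted descending."""
    order = np.argsort(-np.asarray(scores, float))
    labels = np.asarray(labels)[order]
    precision = np.cumsum(labels) / np.arange(1, len(labels) + 1)
    return precision[labels == 1].mean()

# Synthetic predictions vs. ground truth (placeholders, not the paper's data).
ddg_pred = np.array([0.2, 1.1, -0.5, 2.0, 0.9])
ddg_true = np.array([0.1, 1.3, -0.2, 1.0, 1.8])
y_true = np.array([1, 0, 1, 1, 0, 0])
y_score = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.1])

print(round(pearson(ddg_pred, ddg_true), 3))
print(round(spearman(ddg_pred, ddg_true), 3))
print(round(auprc(y_true, y_score), 3))
```

In practice `scipy.stats.pearsonr`/`spearmanr` and `sklearn.metrics.average_precision_score` would typically be used; the sketch above just makes the definitions explicit.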
Rebuttal 1: Rebuttal: In our “global” response, to address reviewers’ main concerns, we elucidate our motivation, delineate the distinctions between our approach and previous methods (including the original Riemannian diffusion model and the torsion diffusion model), and present new results (including a comparison with ESM2, a visualization analysis of the learned representations, and expanded method comparisons for side-chain conformations). __Q1: Motivation__ Given limited annotated experimental data, we utilize representation learning on unlabeled data to improve mutational effect prediction in protein-protein binding. Recognizing the inherent flexibility of side-chain conformations, we introduce a conditional diffusion model to capture the dynamic nature of side-chain conformations, rather than relying on a fixed conformation. Our downstream task results and the visualization analysis (**Figure a in the PDF**) of the learned representations demonstrate its effectiveness. __Q2: Difference with Riemannian Diffusion Model__ Our primary modifications to adapt the original Riemannian diffusion model to our task are as follows: 1. We extend the original Riemannian diffusion model to a conditional diffusion model, where we jointly learn the conditional vector and the diffusion process. Specifically, we encode the structural context of the mutation as a conditional vector and learn the generative process of its side-chain conformations in a 4-dimensional torus space. 2. We explore the model's capacity for representation learning. Side-chain conformations are inherently flexible, with the degree of flexibility dependent on their structural context. Therefore, we hypothesize that a representation learning approach, aimed at learning the distribution and generative process of side chains rather than a point estimate, can lead to improved performance in downstream tasks. __Q3: Additional Benchmark of ESM2__ We included pre-trained language model baselines on the SKEMPI2 dataset. 
We conducted comparative experiments with ESM2-3B + 2-layer MLP and ESM2-3B + DiffAffinity (**Table a in the PDF**). Notably, DiffAffinity achieved state-of-the-art results on almost all benchmarks within the SKEMPI2 and SARS-CoV-2 datasets. __Q4: Pre-training representation Analysis__ To assess the pre-trained representative capacity of SidechainDiff based on the diffusion model (**Figure a (a) in the PDF**), we compared it with other methods, RDE (**Figure a (b) in the PDF**) and ESM2-3B (**Figure a (c) in the PDF**). We calculated the difference of hidden representations between wild-type and mutant proteins obtained from the pre-training methods and applied PCA to reduce the dimensions of the representations from SKEMPI2. We visualized the distribution of the representations and colored them based on their $\Delta\Delta G$ values. Compared to the other pre-trained methods, the representations from our model more effectively discern affinity changes caused by mutations in the latent space. __Q5: Side-chain Conformation Results__ We provided a side-chain conformation comparison with baselines based on side-chain packing methods, such as AttnPacker [1] and DLPacker [2]. Our model achieves results comparable to these methods, which aim to predict side-chain conformations accurately. The results can be found in **Table c in the PDF**. We also discuss the diversity of side-chain conformations generated by SidechainDiff in Appendix D.3.2. We also use the steric clash number metric [1] to evaluate the quality of the generated side-chain conformations. Even though AttnPacker adds an additional steric clash loss, DiffAffinity still achieves SOTA results on this metric across all representative methods. __Q6: Difference with existing torsion angle diffusion model__ In the paper [4], both methods view the torus space $\mathbb{T}^4 \cong [0,2\pi)^4$ and set the exponential map on the torus as $\mathrm{exp}_{x}(y) = x + y\ \mathrm{mod}\ 2\pi$. 
In contrast, our model views the torus space $\mathbb{T}^4 \cong (\mathbb{S}^1)^4$ and sets the exponential map on the unit circle $\mathbb{S}^1$ as $\mathrm{exp}_{\mathbf{\mu}}(\mathbf{v}) = \cos(\Vert \mathbf{v} \Vert)\mathbf{\mu}+\sin(\Vert \mathbf{v} \Vert)\frac{\mathbf{v}}{\Vert \mathbf{v} \Vert}$, where $\mathbf{\mu} \in \mathbb{S}^1$ and $\mathbf{v}$ is a tangent vector at $\mathbf{\mu}$. To handle the 4-dimensional torus space $\mathbb{T}^4$, we utilize the projection mapping and exponential mapping on the unit circle of each dimension. By applying these mappings to each dimension and taking the Cartesian product of the results, we obtain the exponential mapping $\mathrm{Exp}$ and projection mapping $\mathrm{Proj}$ on the 4-dimensional torus space $\mathbb{T}^4$. The perturbation kernel does not admit an exact solution and requires numerical approximation. The perturbation kernel $p_{t|0}$ in the torus space $\mathbb{T}^4 \cong [0,2\pi)^4$ is proposed as: $p_{t|0}(x'|x) \propto \sum_{d\in \mathbb{Z}^4}\exp\left(-\frac{\Vert x-x' + 2\pi d\Vert^2}{2\sigma^2(t)}\right)$. While the two exponential maps are equivalent as Riemannian exponential maps on the torus and the diffusion process remains the same on the 4-dimensional torus space $\mathbb{T}^4 \cong (\mathbb{S}^1)^4$, using the perturbation kernel $p_{t|0}$ with our formulation would require a complex derivation based on our specific exponential map. With the implicit score matching loss, we do not need to approximate the perturbation kernel numerically. __Reference__ [1] McPartlon et al. An end-to-end deep learning method for protein side-chain packing and inverse folding. Proceedings of the National Academy of Sciences (2023) [2] Misiura et al. DLPacker: deep learning for prediction of amino acid side chain conformations in proteins. Proteins: Structure, Function, and Bioinformatics (2022) [3] Lin et al. Evolutionary-scale prediction of atomic-level protein structure with a language model. Science (2023) [4] Jing et al. 
Torsional diffusion for molecular conformer generation. NeurIPS (2022) Pdf: /pdf/dd37e49b121e7cfc525f983f45e3e99fbe58f3e5.pdf
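The claim in Q6 that the two exponential-map conventions agree on the circle can be spot-checked numerically. A minimal sketch (our own illustrative code, not the authors'): `exp_circle` implements $\exp_\mu(v)=\cos(\Vert v\Vert)\mu+\sin(\Vert v\Vert)\frac{v}{\Vert v\Vert}$ with $\mu \in \mathbb{S}^1 \subset \mathbb{R}^2$ and $v$ a tangent vector at $\mu$, and is compared against the flat convention $\theta \mapsto (\theta + t)\ \mathrm{mod}\ 2\pi$:

```python
import numpy as np

def exp_circle(mu, v):
    """Riemannian exponential map on S^1 embedded in R^2.

    mu: unit 2-vector on the circle; v: tangent vector at mu (orthogonal to mu).
    """
    n = np.linalg.norm(v)
    if n == 0:
        return mu
    return np.cos(n) * mu + np.sin(n) * v / n

theta, t = 0.7, 1.9
mu = np.array([np.cos(theta), np.sin(theta)])
v = t * np.array([-np.sin(theta), np.cos(theta)])   # unit tangent scaled by t

# Flat-torus convention: advance the angle and wrap mod 2*pi.
phi = (theta + t) % (2 * np.pi)
flat = np.array([np.cos(phi), np.sin(phi)])

assert np.allclose(exp_circle(mu, v), flat)   # the two conventions coincide
```

The same check extended coordinate-wise gives the map on $\mathbb{T}^4$ described in the rebuttal.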
NeurIPS_2023_submissions_huggingface
2023
Score-based Source Separation with Applications to Digital Communication Signals
Accept (poster)
Summary: 1. In this paper, the authors propose a method for separating superimposed sources using diffusion-based generative models. The proposed method derives a new objective function based on maximum a posteriori estimation with an $\alpha$-posterior. 2. The application of the proposed work is clearly described, considering the existing components in the system. A data-driven, score-based single-channel source separation technique is proposed for signal separation, which performs better than conventional methods. 3. The contributions of the proposed work are mainly: a. A Bayesian method for single-channel source separation is used. b. Score Distillation Sampling (SDS) achieves good results despite the local extrema of the loss. c. The proposed method performs well compared to signal-processing and annealed-Langevin-dynamics-based approaches for RF source separation, and the results are encouraging. 4. The results presented in this paper are highly encouraging. Strengths: The paper is very well organized. Weaknesses: The experimental setup should be clearly explained. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Justify randomizing across multiple noise levels with respect to the local extrema of the loss. 2. What value of $\kappa$, the relative scaling coefficient between the two signals, is chosen? 3. Explain the experimental setup used to derive the results presented in Figure 1. 4. Increase the resolution of Figure 2 and explain it in detail. 5. A detailed explanation of the diffusion model is required. Papers related to diffusion models, and how they can be applied to the proposed method, need to be elaborated on more. 6. Why are the probabilities negated in equation (6b)? Justify this. 7. A comparison is required of how the computational complexity of the proposed work is reduced compared to conventional methods. 8. The train-test ratio used for the implementation is 90:10; is there any specific reason for it? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author need to address the questions and revise the paper and submit it back. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments. We will address the raised questions and concerns here. **Regarding the reviewer’s comments under “weaknesses”:** Due to space considerations, we were not able to include all details in the main manuscript, but we have included them in the supplementary material. Appendix C provides details about the datasets and RF terminology. Appendix D provides architecture, implementation and training details about the diffusion models we used in our experiments. Appendix E provides insight into the classical baselines of matched filtering and LMMSE. Appendix F provides theoretical and implementation details about the BASIS algorithm. Appendix G presents our source separation experimental setup with a detailed overview of the results. Our intention with these multiple appendices is to help readers and provide a paper that is as self-contained as possible. **Regarding the questions:** 1. **Multiple noise levels:** The reason for using multiple noise levels in a randomized fashion is related to the dynamics of the diffusion process. Adding noise at different levels allows the optimization algorithm to explore the landscape of the distribution at different resolutions. At low noise levels, the distribution might remain peaky and an iterative gradient-based algorithm could get stuck at suboptimal local minima (the valley between two peaks). On the other hand, at larger noise levels, the distribution naturally appears more smoothed and allows the optimization routine to escape from local minima. Hence, using multiple noise levels allows us to trade off exploration of the optimization landscape on one hand against resolving the solution with a sufficiently high-resolution landscape on the other. Please note that this is explained in Sections 3.2.1 and 3.2.2 (line 164) and Section 4 of the paper. *In the final version of the manuscript, we will better underscore these explanations.* 2. 
**Choice of $\kappa$:** In our source separation experiments we vary the relative scaling of the two components. As mentioned on line 301 and as shown in Figure 3, the SIR ranges from $-24$ dB to $-3$ dB. To map this SIR range back to the range of $\kappa$s, the mapping on line 295 can be used. For example, given unit-power sources $s$ and $b$, to construct a mixture with SIR $-24$ dB, the required scaling factor is $\kappa = 15.85$. 3. **Experimental setup for Figure 1:** As stated in line 207, we consider two independent sources — $p_{\textsf{s}}(s)=0.5$ where $s \in \\{-1, +1\\}$ and $p_{\textsf{b}}(b)=0.25$ where $b \in \\{-2/\sqrt{20}, +2/\sqrt{20}, -6/\sqrt{20}, +6/\sqrt{20}\\}$. The joint distribution $p_{\textsf{s}, \textsf{b}}(s, b)$ has 8 equiprobable modes, as shown in the leftmost plot of Figure 1. We randomly choose $s$ and $b$ to construct a mixture $y = s + \kappa b$ where $\kappa=15.85$, which corresponds to an SIR of $-24$ dB. If one plotted the line $y = s + 15.85 b$ in the $s$-$b$ plane, it would appear as shown in Figure 1, with an intersection at the mode corresponding to the component values $(s, b) = (+1, -2/\sqrt{20})$. As described in Section 4, our algorithm asymptotically minimizes (14) in the limit of a large number of iterations. The central plot of Figure 1 plots (14) with $\omega=1$. Notice that the loss function is quite peaky and, in particular, the incorrect mode $s=-1$ is pronounced even when the Gaussian smoothing is quite large. In the rightmost plot we demonstrate that with a suitable choice of $\omega=\kappa^2$, the optimization landscape can be made more amenable to gradient descent techniques, thus allowing convergence to the desired mode $s=+1$. *With the additional page provided in the final manuscript, we will make this explanation clearer.* 4. 
**Enlarging Figure 2:** *We will enlarge Figure 2 in the final manuscript as much as possible.* Due to space considerations, we have deferred the relevant terminology required to understand Figure 2 in greater detail to Appendix C in the supplementary. 5. **Literature review of diffusion models:** Thank you for this suggestion. *We will provide a more thorough review of diffusion models in the final manuscript/appendix.* 6. **Negative probabilities:** From (6a) to (6b) we are interested in converting a maximization problem into a minimization problem. Since the solution to a maximization problem is equivalent to minimizing the negated objective, the probabilities acquire a negative sign. 7. **Computational complexity:** The primary objective of this study is to introduce a novel algorithm for source separation based on independently trained priors. While optimizing the computation time is out of the scope of this paper, we make efforts to keep the implementation as efficient as possible (please take a look at the attached code) and compare the runtime of our algorithm to BASIS in Appendix G of the supplementary. *Nonetheless, we appreciate the reviewer's suggestion and will look into optimizing the implementation for future work on the topic. We will also move the former runtime comparisons to the results section in the final manuscript.* 8. **Train-test split:** The 90-10 train-test split is a standard choice for splitting the RF Challenge dataset samples (as is done in the example notebooks from the challenge’s GitHub page). We decided to use the same split as in the example notebooks for our data as well. While optimizing the training and test ratio could potentially yield improved outcomes, our primary objective in this study is to introduce the novel $\alpha$-RGS algorithm. As a result, our focus remains on showcasing the algorithm's potential rather than extensively fine-tuning all implementation details for optimal performance.
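As a concrete illustration of the SIR-to-$\kappa$ mapping discussed in point 2 above: for unit-power sources in $y = s + \kappa b$, SIR $= 1/\kappa^2$, so $\kappa = 10^{-\mathrm{SIR_{dB}}/20}$. A minimal sketch (the function name is ours):

```python
def kappa_from_sir_db(sir_db):
    """Scaling factor for a mixture y = s + kappa*b of unit-power sources.

    SIR = P_s / (kappa^2 * P_b) = 1 / kappa^2, hence SIR_dB = -20*log10(kappa).
    """
    return 10 ** (-sir_db / 20)

print(round(kappa_from_sir_db(-24), 2))   # 15.85, the value quoted in the rebuttal
print(round(kappa_from_sir_db(-3), 2))
```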
Summary: This paper proposes a new Bayesian method to separate two signals $\mathbf{s}$ and $\mathbf{b}$ from the mixture signal $\mathbf{y}=\mathbf{s}+\kappa \mathbf{b}$, where $\kappa \in \mathbb{R}_+$ is known. The new method leverages the score from pre-trained diffusion models to extend MAP estimation, using a generalized Bayes' theorem with an $\alpha$-posterior across different levels of Gaussian smoothing. Experiments on RF sources demonstrate superior separation performance, with gains of up to $0.95$ in terms of both BER and MSE over classical and existing score-based source separation methods such as BASIS by Jayaram and Thickstun (ICML 2020). Strengths: Using deep learning for source separation problems has appeared in much research literature. The key contribution of this paper is to provide a new separation method that works well for a superposition of two discrete sources, which produces a joint distribution with multiple equiprobable modes. This work can be considered a novel combination of well-known techniques. Weaknesses: The main weak point of this work is a lack of theoretical analysis or intuition behind the superior separation performance of the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: + In your plots in Fig. 3, $\alpha$-RGS outperforms classical methods such as MF only or LMMSE+MF. Why does this hold? + Why can your method outperform BASIS for equiprobable multimodal distributions? Confidence: 4: You are confident in your assessment, but it is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This is a theoretical work. The authors already finished the checklist as well as stated the limitations of theorems and results. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments regarding our technical contributions. Below we address the raised questions and concerns. **Regarding the reviewer’s comments under “weaknesses”:** 1. **Theory:** Due to space considerations, we included our theoretical results in the supplementary material. *However, we agree that it is beneficial to readers if we include the main parts of the developed theory in the manuscript itself. We will move the most important parts of the theory from Appendix B to Section 4 in the final manuscript.* Currently, Section 4 introduces the asymptotic loss function that our algorithm minimizes in the limit of a large number of iterations. Appendix B builds on this, “dissects” equation (14) further, and analytically proves the “mode-seeking” nature of our algorithm for multivariate normal sources, digital RF signals (discrete signals) and Gaussian mixture sources. For each case, we analytically solve for the individual terms in (14) and show that the extrema correspond to the modes of the underlying source distribution. For the original MAP formulation (6b), we know that the solution(s) correspond to points of maximum probability, i.e., the modes. Hence, the solution of our algorithm, which optimizes (14), approaches the solution of (6b). **Regarding the questions:** 1. **MF and LMMSE:** We recall that MF is the optimal separation solution only when the interference $b$ is Gaussian (see Appendix E.1). Thus, when $b$ is not Gaussian, MF is simply suboptimal and our algorithm outperforms it. The LMMSE estimator is a ***linear*** estimator that minimizes the MSE between the estimated and true signal. The LMMSE solution is only optimal when both sources are Gaussian. On the other hand, the learned diffusion-based priors model ***non-linearities*** in the data (i.e., they generally extend beyond linear operators) and hence can leverage these during separation. 
For more details on the MF and LMMSE solutions, please refer to Appendix E. 2. **BASIS:** Please note that Appendix F of the supplementary contains a detailed explanation of the reasons for our model’s superior performance over BASIS. Having carefully reexamined the content of this appendix, we believe that it contains all the necessary clarifications regarding this point. Note that we also provide details about the implementation of BASIS in our experiments. In addition to the above, *we will incorporate some of this intuition into the final version of the manuscript.* --- Rebuttal Comment 1.1: Title: Reply to authors' rebuttal Comment: Thank you very much for your rebuttal. The answers are quite satisfactory. Hence, I raise my score.
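To make the Gaussian-optimality point about LMMSE above concrete: for zero-mean independent sources with $y = s + \kappa b$, the linear MMSE estimator is $\hat{s} = \frac{\sigma_s^2}{\sigma_s^2+\kappa^2\sigma_b^2}\,y$. A small numerical sketch under these toy assumptions (our own example, not the paper's RF setup):

```python
import numpy as np

rng = np.random.default_rng(0)
n, kappa = 200_000, 2.0
sigma_s, sigma_b = 1.0, 1.0

s = rng.normal(0.0, sigma_s, n)          # signal of interest
b = rng.normal(0.0, sigma_b, n)          # interference
y = s + kappa * b                        # observed mixture

# Closed-form LMMSE gain for independent zero-mean Gaussian sources.
w = sigma_s**2 / (sigma_s**2 + kappa**2 * sigma_b**2)
s_hat = w * y

mse_lmmse = np.mean((s_hat - s) ** 2)    # theory: kappa^2/(1+kappa^2) = 0.8 here
mse_naive = np.mean((y - s) ** 2)        # "use the mixture as-is": kappa^2 = 4.0
print(mse_lmmse < mse_naive)
```

When the sources are non-Gaussian, no linear gain $w$ can exploit their structure, which is the opening that learned non-linear priors use.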
Summary: This paper focuses on the problem of single-channel source separation (SCSS) for RF signals with discrete nature. The authors propose to solve the SCSS problem using MAP and use pre-trained diffusion models to approximate the scores for both the source signal and inference signal. To avoid being stuck in a local minimum, the proposed method uses an $\alpha$-posterior and optimizes across multiple noise levels. Experiments show that the proposed method significantly outperforms the baselines (both traditional and learned methods) in both MSE and BER. Strengths: 1) This paper is well-written and it is easy to follow. Given that it is the first work that explores score-based models in the SCSS problem of RF signals and it achieves significant improvement, this work could definitely point out a new direction in this domain and potentially influence other researchers. 2) The idea of using $\alpha$-posterior and randomizing across multiple noise levels could be potentially useful for other score-based optimization problems not limited to the scope of SCSS in RF signals. Weaknesses: 1) The proposed method requires to know the scale factor $\kappa$ (or SIR) and the distribution of the interference signal. However, the scaling factor is not available in real-world scenarios. Also, the interference signals could come from different sources such as WiFi, and Bluetooth. As a result, it may not be possible to train a diffusion model that models all kinds of interference signals in the real world. 
2) I think this paper lacks theoretical analysis of: 1) how the proposed Algorithm 1 leads to a local extremum of equation (14), and 2) how minimizing the approximated losses in (9) and (14) leads to the solution of the original MAP (6b). 3) It would be nice to have more ablation studies about: 1) optimizing across random noise levels vs. fixed noise levels, and 2) results with and without the zero-mean noise in (12). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Could the authors describe more about the suitable conditions in line 205, page 6? 2) In Figure 3, it seems that when the SIR is large (around -5 dB), the trained SOI model can even outperform the analytical one. Could the authors explain more about that? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and helpful questions. We shall now address the raised questions and concerns. **Regarding the reviewer’s comments under “weaknesses”:** 1. **Knowledge about the scaling factor:** *This is an excellent point and we hope to address it in future work by introducing a novel extension to our algorithm that jointly optimizes over the range of possible scaling factors as well.* Many communication systems have power constraints and equalization capabilities, and with such knowledge it is possible to estimate the signal-to-interference ratio (SIR) within a reasonable margin. Equalization/normalization of the SOI’s power can be performed, e.g., by leveraging header information (metadata), and the SIR can be inferred from the mixture with knowledge of the former. Thus, as a first step towards this goal, we assume knowledge of $\kappa$ in this work. 2. **Interference signals from different sources:** *Developing a library of priors in a cost-effective manner will be a focus of our future work.* It is true that the wireless ecosystem is growing at a rapid pace. We envision a system of plug-and-play priors where the receiver is given access to an ML backbone with diffusion-based priors, ***each*** trained on a different signal type, e.g., Bluetooth or WiFi. During transmission, a simple detector might detect the presence of a Bluetooth interference signal, and the receiver can plug in the learned prior for Bluetooth signals to recover the signal-of-interest. We believe that such technology is vital for future communication standards such as 6G and we will focus our future efforts on researching new ways to efficiently learn these priors. For details on our diffusion model training setup, please see Appendix D. 3. **Theoretical results:** Due to space considerations, we included our theoretical results in the supplementary material. 
*However, we agree that it is beneficial to readers to include some of the developed theory in the manuscript itself. We will move the most important parts of this theory from Appendix B to Section 4 in the final manuscript.* 4. **Connection between formulations:** Section 4 introduces the asymptotic loss function that our algorithm minimizes in the limit of a large number of iterations. Appendix B builds on this, “dissects” equation (14) further, and analytically proves the “mode-seeking” nature of our algorithm for multivariate normal sources, digital RF signals (discrete signals) and Gaussian mixture sources. For each of these cases, we analytically solve for the individual terms in (14) and show that the extrema correspond to the modes of the underlying source distribution. For the original MAP formulation (6b), we know that the solution(s) correspond to points of maximum probability, i.e., the modes. Hence, the solution of our algorithm, which optimizes (14), approaches the solution of (6b). 5. **Ablation studies:** We decided to use all noise levels in our algorithm since diffusion models capture statistical structures at different resolutions based on the amount of noise added in the forward process. This is pictorially depicted in the middle and rightmost plots of Figure 1, where the distribution is smoothed and less peaky with increasing amounts of Gaussian smoothing. We emphasize that the BASIS algorithm also uses all noise levels, but the noise is gradually annealed over time via a ***fixed*** schedule. Each outer iteration of BASIS essentially tries to separate the components using a fixed noise level. As mentioned in the manuscript and in Appendix F, tuning this schedule is difficult, and more importantly still leads to underperformance in comparison to our method. 
As for the second ablation study without the subtractive noise, initial experiments demonstrated benefits with this additional term, but we will re-run the experiment for a few cases to be included in the supplementary material. We thank the reviewer for this suggestion. **Regarding the questions:** 1. **Suitable conditions:** While it is analytically challenging to characterize the necessary conditions under which this holds, we are able to provide sufficient conditions for separability of the sources under equation (14): If the two sources, $s$ and $b$, are discrete and the super constellation (see Appendix C.1 in the supplementary) is uniquely decodable, i.e., the mapping between the symbols in the super constellations of each source is unique, perfect recovery is possible. However, in this work we intentionally focus on general signal types and mixtures, where perfect separability is not necessarily guaranteed and performance bounds are not analytically tractable. Providing theoretical guarantees in more complicated scenarios is a topic of broad interest and we hope to address some of these problems in future work. 2. **Trained model vs. analytical model:** We attribute this degradation to numerical instabilities during the computation of the analytical score in the symbol domain (before pulse-shaping, i.e., $a$ where $s = Ha$). Computing the score of the pulse-shaped symbols ($s = Ha$) is extremely challenging. For more details on the digital communication pipeline please refer to Appendix C. Appendix G ((G.1)–(G.4)) details the calculation of the analytical score for the QPSK SOI in our experiments. As shown in (G.4), we make an approximation via the pseudo-inverse of $H$. At low interference levels (high SIR), this approximation can lead to small errors in symbol recovery, since the additive noise is negligible. On the other hand, the trained model directly learns the score of $s = Ha$ and thus circumvents such approximation errors. 
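To make the pseudo-inverse approximation discussed above concrete, here is our own toy numpy sketch (not the paper's Appendix G derivation): the Gaussian-smoothed score of discrete QPSK-like symbols $a$ is computed exactly, then mapped to the domain of $s = Ha$ by back-projecting with the pseudo-inverse of $H$. The constellation, smoothing level, and $H$ below are illustrative assumptions.

```python
import numpy as np

# Toy sketch (ours): smoothed score of a uniform prior over QPSK symbols,
# then an approximate score for s = H a via pinv(H), mirroring the kind of
# approximation described in the rebuttal. All specifics are illustrative.

QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

def smoothed_symbol_score(a, sigma):
    """Score of a Gaussian-smoothed uniform prior over QPSK symbols.
    Uses the posterior-mean form: score(a) = -(a - E_w[c]) / sigma^2."""
    a = np.asarray(a)
    w = np.exp(-np.abs(a[..., None] - QPSK) ** 2 / (2 * sigma**2))
    w /= w.sum(axis=-1, keepdims=True)          # soft assignment to symbols
    return -(a - (w * QPSK).sum(axis=-1)) / sigma**2

def approx_signal_score(s, H, sigma):
    """Approximate score of s = H a by back-projecting with pinv(H)."""
    H_pinv = np.linalg.pinv(H)
    return H_pinv.conj().T @ smoothed_symbol_score(H_pinv @ s, sigma)
```

At small smoothing levels the score vanishes at constellation points, which is the mode-seeking behaviour the rebuttal appeals to; the error discussed above enters only when `pinv(H)` does not invert the pulse-shaping exactly.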
--- Rebuttal Comment 1.1: Title: About the Rebuttal Comment: I thank the authors for the detailed response. I think the majority of my concerns have been addressed.
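The "mode-seeking" behaviour invoked in point 4 of the rebuttal above can be illustrated with a toy example; the following is our own 1-D sketch (not the paper's equation (14)): gradient ascent on the log-density of a two-component Gaussian mixture converges to the mode nearest the initialization.

```python
import numpy as np

# Toy illustration (ours) of mode-seeking: ascend the log-density of a 1-D
# mixture 0.5*N(-2, sigma^2) + 0.5*N(3, sigma^2); the iterate converges to
# the nearest mode. Means, sigma, and step size are arbitrary choices.

MEANS, SIGMA = np.array([-2.0, 3.0]), 0.5

def log_density_grad(x):
    """d/dx log p(x), written via the posterior-mean identity."""
    w = np.exp(-((x - MEANS) ** 2) / (2 * SIGMA**2))
    w /= w.sum()
    return -(x - (w * MEANS).sum()) / SIGMA**2

def ascend(x0, step=0.05, iters=500):
    x = x0
    for _ in range(iters):
        x += step * log_density_grad(x)
    return x
```

For well-separated components, `ascend(-1.0)` settles near the mode at -2 and `ascend(2.0)` near the mode at 3, matching the claim that the extrema of the objective sit at the modes of the source distribution.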
Summary: A new method for separating superimposed sources using diffusion-based generative models is proposed (alpha-RGS). The method relies on separately trained statistical priors of independent sources and is guided by maximum a posteriori estimation with an α-posterior. Experimental results with RF mixtures demonstrate that the method results in a BER reduction of 95% over classical and existing learning-based methods. Strengths: The authors propose a new method for source separation called α-RGS (α-posterior with Randomized Gaussian Smoothing). The method uses the (approximate) score from pre-trained diffusion models to extend maximum a posteriori (MAP) estimation using generalized Bayes' theorem with an α-posterior. α-RGS outperforms classical signal processing and annealed Langevin-dynamics-based approaches for RF source separation. Weaknesses: . Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It would be interesting to see how the model performs in different channel conditions, with different types of noise other than additive white Gaussian noise (AWGN). This would give us a better understanding of the model's robustness to different noise conditions. 2. The BER curve of the proposed model is noisy because it was generated using a small number of test examples. This noise will likely disappear if we sample more test examples, which will allow us to compute the exact improvement (in terms of dB) of the proposed model over the baseline. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: . 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and helpful questions. We shall now address the raised questions and concerns. **Regarding the questions:** 1. **Statistics of noise:** Thank you for the question. We would first like to clarify that in a typical communication setup with a transmitter, channel and receiver, it is often assumed that the channel noise is AWGN. Mathematically, the output $y$ is related to the input $s$ as $y = s + w$ where $w$ is AWGN. In our source separation setup we consider an interference channel $y = s + b$ where $b$ is no longer constrained to be AWGN, thus departing from the classical channel noise model. The generality of our approach is rooted in learning the underlying structures of the interference, which can often be far more complicated than AWGN. 2. **BER curves:** *We will increase the size of the test set for generating the curves for the final version of the manuscript as much as possible.* --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns.
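The channel-model distinction drawn above can be made concrete with a small self-contained numpy sketch (amplitudes and alphabets are our own toy choices, not the paper's setup), contrasting $y = s + w$ with $y = s + b$:

```python
import numpy as np

# Toy sketch (ours): AWGN channel vs. interference channel. The interference
# b is another structured digital signal, so the mixture lives on a finite
# "super constellation", unlike the continuous AWGN-corrupted output.

rng = np.random.default_rng(0)
n = 10_000
s = rng.choice([1.0, -1.0], n) + 1j * rng.choice([1.0, -1.0], n)   # SOI
w = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)    # AWGN
b = 0.5 * (rng.choice([1.0, -1.0], n) + 1j * rng.choice([1.0, -1.0], n))

y_awgn = s + w   # classical model: continuous Gaussian corruption
y_intf = s + b   # interference model: b has discrete, learnable structure
```

The real part of `y_intf` takes at most four distinct values, while `y_awgn` is continuous; it is exactly this structure in $b$ (absent from AWGN) that a learned prior can exploit.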
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to review our work proposing a novel algorithm that leverages diffusion-based priors and randomized levels of Gaussian smoothing, with applications to source separation in the RF domain, an area relatively new to the ML community. We appreciate all the comments and questions that have been raised in the reviews, and with this global response we hope to address a few concerns collectively. Regarding the relevance of source separation in the context of wireless signals, we would like to reiterate that the source separation problem within the RF domain comes with a different set of technical challenges as well as academically interesting questions. We believe that such challenges are not only of interest to the ML community, but moreover, ML researchers can help shape novel algorithms for AI-enhanced next-generation communication technology (and indeed this is the vision for 6G). As for the theoretical aspects of our work, due to space constraints, the development and understanding of our proposed algorithm from first principles was deferred to Appendix B. Taking into account the reviewers’ comments, and in order to enhance readability, we will move some of the important theoretical results from Appendix B to the manuscript. We also appreciate the immense interest in a deeper understanding of the theory, and while we are able to provide sufficient conditions for separability in simple cases, there are indeed parts of the problem which might be analytically intractable—e.g., conditions for perfect separability or the lower bound on the performance for general classes of signals—which are intriguing theoretical investigations that we will pursue in future work. We understand that a detailed explanation of the experimental setup, digital communications terminology, details about the baselines and a detailed overview of the results are beneficial to readers. 
However, space constraints preclude us from including extensive details in the manuscript beyond what has already been provided; these details are therefore included in Appendices C–G. With the additional space provided in the final manuscript, we will add a paragraph that describes the paper and appendix organization to help readers navigate the work efficiently. We will also move additional important details regarding the experimental setup to Section 5 in the final manuscript.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper investigates a novel Bayesian approach for separating the superposition of two sources based on diffusion generative models. The problem is motivated by application to the spectrum of radio-frequency (RF) communication systems. Several experiments using real datasets on RF mixtures demonstrate that the proposed method reduces the bit error rate, which is the standard measure of performance for this application. Strengths: The strengths of this paper are as follows: + It investigates a well-known problem, i.e., source separation, by using modern tools, i.e., diffusion generative models. In particular, the use of diffusion generative models to model the prior in the Bayesian source-separation framework is novel and appealing. + Several numerical results are provided by using real datasets. + The paper is well-written and the contribution is clearly stated. Weaknesses: The weaknesses of this paper are as follows: + The paper introduces the problem of multi-source separation. However, it focuses on the case of two sources, and the methodology is heavily specialized to the presence of only two sources. + There is no theory; more specifically, there are no guarantees on the performance of the proposed method in recovering the underlying sources. Notice that a lot has been done in this area, and it would be important to understand the limitations of the proposed method, or at least to derive sufficient conditions ensuring that the sources can be separated. + The applications to future wireless communication systems are interesting but rather limited. In particular, those applications are not the main focus of the ML community. It would have been beneficial to show that the proposed method can be applied to other types of data, since there are many practical cases (e.g., sound and speech) for which source separation is required. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: + The paper lacks theoretical results showing the limitations of the proposed method and the assumptions on the data distribution needed to perform source separation. + I strongly suggest the authors further investigate other scenarios where source separation is required, which could be of major interest to the ML community. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations are not well discussed; in particular, the underlying assumptions necessary for source separation are not clear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to understand the technical contributions in our work. Below we shall address the raised questions and concerns. **Regarding the reviewer’s comments under “weaknesses”:** 1. **Multi-source Separation:** To make the exposition clearer, we focus on the two-component case as mentioned in the beginning of our manuscript on line 3. We next explain how this readily defines a procedure for multi-source separation as well. Consider a three-component mixture $y = s + n + w$. This can be separated using our algorithm in two different ways — a) by treating the interference as $b = n + w$ in the first pass of our algorithm, we can obtain $\hat{s}$, and then separating $\hat{b} = y - \hat{s}$ in a second pass we obtain $\hat{n}$ and $\hat{w}$; b) by using the three priors simultaneously to obtain estimates $\hat{s}$, $\hat{n}$ and using the constraint $y = s + n + w$. In terms of applications, a very important scenario in RF is the interference rejection scenario, where we are mostly interested in the recovery of our signal of interest only. In this setup, the interference could potentially describe the background/environment, which in practice could be a superposition of multiple devices. *Recovering the signals in a multi-source setting (particularly in a single-pass) is a subject of future work — and given its relevance in the context of such problems, we will make this clearer in our discussion and concluding remarks.* 2. **Theory and performance guarantees:** The conditions for perfect signal separation can be described for certain simple models. One such sufficient condition is as follows: If the two sources, $s$ and $b$, are discrete and the super constellation (see Appendix C.1 in the supplementary) is uniquely decodable, i.e., the mapping between the bit representation and symbol representation of the signals is unique, perfect recovery is possible. 
However, in this work we are focusing on general signal types and mixtures, where perfect separability is not necessarily guaranteed and performance bounds are not analytically tractable. A more involved characterization of our algorithm’s convergence to the modes beyond Section 4 can be found in Appendix B. *We agree that understanding the sufficient conditions and easy access to the theory in Appendix B are important. We will modify Section 4 of the final manuscript to include the sufficient condition above along with a condensed version of the analysis from Appendix B.* 3. **Relevance to ML community:** While we agree that image and audio separation are important areas of research, we found existing score-based source separation methods initially developed for these domains to underperform on RF data (e.g., BASIS). Moreover, the underlying discreteness of typical RF signal distributions presents new challenges to the ML community that generally do not arise with image or audio data. Recently, there has been tremendous interest in using ML techniques for digital communications (please see some sample publications below) and hence we strongly believe that such new application areas are of interest to the broad ML community. There are a plethora of challenging problems in the wireless domain and we believe the ML community can help take significant steps towards better solutions. ML is without doubt a central technology to future wireless standards such as 6G, where the increasing demand for bandwidth combined with high reliability will require sophisticated interference rejection/source separation algorithmic solutions. We hope that this work can help shed additional light on new and rapidly developing areas of ML. **Regarding the questions and limitations:** 1. **Sufficient conditions:** Thank you for pointing this out, as this may not have been very clear from our manuscript. 
Please see the response above under “Theory and performance guarantees” for the sufficient condition. *We will include this condition in Section 4 of the final manuscript.* 2. **Applications to other modalities:** The paper is intentionally focused on digital communication signals due to the fundamental differences of the signals' statistics, which is essentially a different realm that will (possibly, and perhaps most likely) require (at least) new building blocks for successfully functioning DNNs (where "success" is in terms of significant gains over classical model-based methods). *However, we agree with the reviewer that it would be interesting to extend our experiments to other modalities as well. We appreciate the suggestion and we will add these experiments to our future work.* **A few recent publications (apart from the closely related works already cited in the manuscript) that leverage ML for digital communications:** 1. **Specific to single-channel source separation of RF/wireless signals with ML:** - M. Zhao, et al. Single-channel blind source separation of spatial aliasing signal based on stacked-LSTM. Sensors, 2021. - X. Hou and Y. Gao. Single-channel blind separation of co-frequency signals based on convolutional network. Digital Signal Processing, 2022. - H. Ma, et al. A novel end-to-end deep separation network based on attention mechanism for single channel blind separation in wireless communication. IET Signal Processing, 2023. 2. **ML in wireless/RF communications for other problems:** - T. O’Shea, et al. Over-the-air deep learning based radio signal classification. IEEE J. Sel. Topics Signal Process., 2018. - T. O’Shea and J. Hoydis. An introduction to deep learning for the physical layer. IEEE Transactions on Cognitive Communications and Networking, 2017. - Y. Eldar, et al. Machine Learning and Wireless Communications. Cambridge: Cambridge University Press, 2022. 3. 
**NVIDIA Sionna Toolkit for Next Generation Communications Research (which we use in our experiments):** - Hoydis, Jakob, et al. "Sionna: An open-source library for next-generation physical layer research." arXiv preprint arXiv:2203.11854 (2022). --- Rebuttal Comment 1.1: Comment: Thank you very much for your rebuttal. The authors have answered my questions quite satisfactorily. Accordingly, I will increase my score.
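The two-pass multi-source procedure described in point 1 of the rebuttal above can be written out schematically. The following runnable toy version (our illustration) replaces the diffusion-based separator with a nearest-point projection onto known, well-separated discrete alphabets; the function names are ours:

```python
import numpy as np

# Toy two-pass separation of y = s + n + w: pass 1 recovers the SOI while
# lumping the rest as b = n + w; pass 2 splits the residual. The "separator"
# here is a stand-in projection, not the paper's diffusion-based method.

def project(x, alphabet):
    """Map each sample of x to the nearest point of a discrete alphabet."""
    alphabet = np.asarray(alphabet, dtype=float)
    return alphabet[np.argmin(np.abs(x[:, None] - alphabet), axis=1)]

def two_pass_separate(y, alph_s, alph_n):
    s_hat = project(y, alph_s)        # pass 1: recover SOI from the mixture
    b_hat = y - s_hat                 # residual plays the role of b = n + w
    n_hat = project(b_hat, alph_n)    # pass 2: split the residual
    w_hat = b_hat - n_hat
    return s_hat, n_hat, w_hat
```

With amplitudes separated across scales (e.g., `s` on `{-4, 4}`, `n` on `{-1, 1}`, small `w`), both passes recover their components exactly, illustrating option (a) of the rebuttal; option (b), using all priors jointly in one pass, has no analogue in this toy.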
null
null
null
null
null
null
HyP-NeRF: Learning Improved NeRF Priors using a HyperNetwork
Accept (poster)
Summary: In the paper, the authors propose a novel method for a NeRF-based model that is able to generalize. The authors use the hypernetwork paradigm and Multi-Resolution Hash Encodings. The paper is interesting but has two main problems: - two-stage training, where the second stage is applied to the generated NeRF representation. - the authors do not mention a few important works in related work and in comparisons. Strengths: The generalization ability of NeRF-based models is fundamental. The paper shows an interesting application of the model. Weaknesses: ## Related works lack a few important works: 1. The generalization ability of NeRF-based models is significant. I believe the NeRF generalization models can be divided into two groups: relatively small models similar to the NeRF architecture, like voxel-based NeRF, tri-plane NeRF, MultiplaneNeRF, or Pix2NeRF; and models that use a GAN, autoencoder, or diffusion model combined with NeRF. The proposed model is in the second group, since a network is used to generate another model, as in Points2NeRF or Hypernerfgan. 2. The related work does not mention the tri-plane NeRF model, which is now extremely important. The authors mention the EG3D paper [7], but the relation between the models should be highlighted. 3. There are a few models which use hypernetworks in a similar fashion: Points2NeRF: Generating Neural Radiance Fields from 3D point cloud, and Hypernerfgan: Hypernetwork approach to 3d nerf GAN. 4. Consequently, the sentence: “However, unlike us, they do not use a hypernetwork, and use the meta-learning algorithms only for initializing a NeRF, which is further fine-tuned on the multiview images” should be corrected. ## Two-stage training 5. In my opinion, the two-stage training is problematic since the second step can be applied to all models. In the second stage, we tune the model produced by the hypernetwork. 6. In my opinion, in all tables the authors should give results for models with and without the second stage. 7. 
We need a pre-trained autoencoder in the second stage. ## The model trains generalizable NeRF priors 8. It is unclear why we do not force the latent to be generative by adding some distance to a classical Gaussian prior, similar to a VAE. 9. Why do we not use an autoencoder instead of a trainable latent with a decoder? ## Experiments 10. The experimental section is well organized, but in my opinion, the authors should compare models with GANs, autoencoders, and diffusion-based models in some sense. 11. The model should be compared with Points2NeRF, since the architecture is very similar. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ## Unclear parts of the paper 1. The authors claim: “Our key insight is to use hypernetworks to generate both the network weights and instance-specific MRHEs.” and “Each NeRF, f(·)n , is parameterized by the neural network weights, ϕn, and learnable MRHEs, hi ...” So does the hypernetwork produce hi, or are the hi trainable? Such a sentence is misleading. 2. The authors claim: “Given a set of NeRFs denoted by {f(ϕn,hn)}Nn, where N denotes the number of object instances in a given object category, we want to learn a prior Φ = {ΦS,ΦC},” This suggests that we need the NeRF representation of each object in the training dataset. Such a sentence is misleading. 3. The hypernetwork is denoted by M. Usually, we use H for a hypernetwork. 4. “We want to design our hypernetwork, M,with trainable parameters, Ω that can predict NeRF parameters {ϕn, hn} given a conditioning code zn = {Sn,Cn}.” Hypernetworks and conditioning mechanisms are not the same, as the authors note in the introduction. I recommend not mixing such methods. 5. The sentence is unclear: “Here Sn Cn belong to codebooks, and that are trained within an auto-decoding fashion.” What does this mean? 6. “However, Φ is not a known distribution like Gaussian distributions that can be naively queried by sampling a random point from the underlying distribution. 
” Why do we not add some term to the loss to force the latent to be Gaussian? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss the limitations of the model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful comments. 1. **Related works...**: We would like to thank the reviewer for the categorization. However, there could be several criteria for categorizing different techniques, like based on the downstream tasks the technique enables. Further, we will add the suggested related works in the final version. The sentence "However, unlike us, they do not use a hypernetwork..." is written in the context of Learned Initialization [59], which is correct. We are happy to provide further clarification on this during the discussion phase. 2. **Two-stage training pipeline** - We don’t fully understand why the reviewer thinks it is problematic. - While it is true that the second stage can be applied to the baselines -- **how to apply the second stage to the baselines is not trivial**. - The expected input in our second stage is a standard NeRF whose parameters can be optimized through a standard volumetric loss on the set of denoised images. However, baselines like PixelNeRF and VisionNeRF work with a modified NeRF that is densely conditioned on pixel-level image features for a particular viewpoint (such as dense CNN-based features). In the fine-tuning step for such baselines, one would need to finetune both the NeRF and dense pixel-level feature parameters -- which is non-trivial. - For a baseline like CodeNeRF, which could potentially be fine-tuned, we show the example of the denoising module in the added rebuttal PDF, Figure 4, right. As shown, the VQVAE2 fails to improve the results due to the amount of noise in the input. On the other hand, for HyP-NeRF (Figure 4, left), VQVAE2 improves the images by refining the edges and improving the texture, allowing us to perform fine-tuning on these denoised images. 3. We have added results with and without denoising and fine-tuning in the rebuttal PDF, Tables 1 and 3. 4. 
**Pretrained autoencoder:** The autoencoder (VQVAE2) can be trained easily using images rendered from HyP-NeRF (as the input) and the ground truth NeRF renderings. We do not depend on any additional data. Moreover, the overall training regime is simple to follow. Given this, we are confident that we have proposed a novel technique that can be a valuable contribution to the NeRF community. 3. **Design choice**: The technique of auto-decoding is inspired by an array of influential works like DeepSDF [37], Scene Representation Networks (SRN) [54], Light Field Networks [53], INR-V [46], CodeNeRF [17], and many more that learn high-quality priors over implicit neural representations (INRs) - in our case a NeRF. Following are additional reasons why auto-decoding is a more suitable technique to learn a prior in our case - - Building an encoder for our model without limiting the tasks is not trivial. For example, including an image-based encoder would add constraints of view and scale, while pointcloud encoders would add a dependency on pointclouds, i.e., on 3D supervision. Our current design only needs 2D supervision. - Such a training technique would also make it hard to generalize to diverse inputs (such as text and single-or-multiview images). Further, an encoder would need to output two codes - shape and color - while ensuring the disentanglement between them. In our design, we learn a generic prior over the NeRF representation (see rebuttal PDF Figures 1, 2, and 3). - We also encourage the reviewer to refer to [54] and [17] for further clarification. - **Adding a loss term to make it Gaussian**: The hypernetwork aims to learn a set of codes corresponding to a set of NeRFs. In this case, one could add a KL divergence loss to force these codes to follow a Gaussian distribution, but without "random sampling" as in Variational AutoEncoders, the space would be too sparse for us to force it to become a Gaussian distribution. 
Therefore we allow the hypernetwork to learn a prior by itself, following the array of influential methods cited in the paper. 3. **GANs and diffusion-based methods**: HyP-NeRF significantly differs from GAN-based works aiming to generate multi-view consistent images. We aim to produce a NeRF. Incorporating discriminators, GANs are often limited to lower resolutions, whereas HyP-NeRF thrives at a higher resolution of 512. Diffusion-based NeRFs (like DiffRF [31]) rely on an explicit version of NeRFs called radiance fields and are thus also limited in the resolution they can output. We produce NeRFs in implicit space. Both of these works primarily show the task of unconditional sampling. On the contrary, our current baselines - PixelNeRF and CodeNeRF - also aim to learn a prior directly over NeRFs and do not provide comparisons with GAN or diffusion-based models. 4. **Comparison with Points2NeRF**: Even though Points2NeRF follows a similar architecture as ours, this is also true for many other works like DeepSDF [37], LFNs [53], INR-V [46], and so on. However, like these works, Points2NeRF primarily aims to solve a very different task: converting a pointcloud to NeRFs. This would need us to either significantly modify our own method or the proposed Points2NeRF method to be able to compare both of them. 5. **Unclear parts of the paper.** 1. $h_i$ is predicted by the hypernetwork. $f(.)_n$ consists of the MLP parameters $\phi_n$ and MRHE $h_i$, and the hypernetwork predicts both of them. 2. We will clarify the sentence: we only need multi-view image supervision. 3. Many different notations have been followed for denoting hypernetworks in previous works, $\Psi_\psi$ in Light Field Networks [53], $d_\omega$ in INR-V [46], and so on. 4. As the hypernetwork generates the output NeRF based on a given code, $z_i$ - we call it conditioning. This is the same style of notation used in [53] and [46]. 5. In the auto-decoding setup, $S_n$ and $C_n$ are trainable. 6. 
Addressed in point 3. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal. Comment: Thank you for your answer. All my questions and concerns have been addressed, but I think the authors should correct the paper fundamentally. I stay with my score. --- Reply to Comment 1.1.1: Title: Author's response to reviewer's comment Comment: Thank you for engaging in the discussion. If the rebuttal addressed all your concerns, could you please explain what needs to be fundamentally corrected?
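For readers unfamiliar with the auto-decoding setup debated above (trainable per-instance codes, no encoder), a minimal numpy sketch may help. This is our own toy linear version, not HyP-NeRF's architecture; all sizes and names are illustrative:

```python
import numpy as np

# Toy auto-decoder: each training instance n owns a trainable code Z[n];
# there is no encoder. Codes and the shared decoder W are optimized jointly
# by gradient descent on reconstruction loss (DeepSDF-style training).

rng = np.random.default_rng(0)
N, D_Z, D_X = 8, 4, 16
X = rng.normal(size=(N, D_Z)) @ rng.normal(size=(D_Z, D_X))  # rank-D_Z data

Z = 0.5 * rng.normal(size=(N, D_Z))    # trainable per-instance codes
W = 0.5 * rng.normal(size=(D_Z, D_X))  # shared linear "decoder"

lr = 0.02
for _ in range(5000):
    err = Z @ W - X                    # reconstruction residual
    grad_W = Z.T @ err / N
    grad_Z = err @ W.T / N
    W -= lr * grad_W
    Z -= lr * grad_Z                   # codes are parameters too

loss = float(np.mean((Z @ W - X) ** 2))
```

Because the codes are free parameters rather than encoder outputs, nothing constrains their distribution; this is exactly why the reviewer's question about adding a Gaussian (KL) penalty arises, and why the authors argue the code space would be too sparse to Gaussianize without VAE-style random sampling.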
Summary: In this paper, the authors present HyP-NeRF, a framework based on meta-learning principles, tailored for the learning of category-level NeRF priors. This is achieved through conditioning on latent codes with the assistance of hypernetworks. The NeRF model struggles with generalization across categories because of the high dimensionality of the network space. To circumvent this issue, the authors propose a novel approach that estimates the multi-resolution hash encoding and network’s weights via a hypernetwork conditioned on an instance-specific latent vector. Furthermore, to denoise the predicted novel views during the fine-tuning phase, they utilize a VQ-VAE-2 model. The authors report significant performance gains in comparison with existing baselines. In addition, the authors showcase HyP-NeRF’s ability in downstream tasks such as single-view novel view generation, text-to-NeRF, and inpainting from cluttered scenes. Strengths: 1. The authors propose a novel idea for conditioning the network weights and multi-resolution hash encoding via hypernetworks. This approach interestingly facilitates the sharing of prior knowledge within a category level among NeRF instances. Rather than optimizing a fixed set of parameters, the authors present an approach to learn a weight distribution and condition it directly on the instance-level latent code. 2. By providing a robust category-level prior, HyP-NeRF enables the training of a single model for objects within the same category. This approach effectively eliminates the tendency of overfitting to a single scene as in vanilla NeRF and its variants. 3. Through the conditioning with the latent embedding, HyP-NeRF demonstrates the capacity to cooperate with other models for various downstream tasks. These applications range from text-to-NeRF to novel view synthesis in cluttered scenes. Weaknesses: 1. My main concern is the hypernetwork’s ability to disentangle geometry and color using the latent codes Sn and Cn. 
Taking the single-view reconstruction case as an example (Table 1), this problem is under-constrained, as it requires 3D inductive biases learned from a large set of scenes like the target scene. This is apparent in methods such as Pixel-NeRF and its variants, which rely on 2D image features to generalize to unseen scenes after substantial training. However, HyP-NeRF’s approach of resolving the geometric ambiguity through denoising the 2D outputs does not seem logically coherent. Additionally, it would have been beneficial for the authors to provide evidence showing category geometry for {Sn, Cn} in Section 3.1, and a demonstration of color clustering for scenes with similar colors. Moreover, cases where HyP-NeRF appears to overlook the shape or color of an object (as seen in Figure 6 row 1 and the supplementary video at 3:52, 4:06, 4:31) raise further concerns about the ability of the latent codes Sn and Cn to capture geometry and color. 2. The scope of the baselines and datasets appears insufficient to conclusively support the authors’ claim, which echoes my previous comment that HyP-NeRF was only benchmarked against Pixel-NeRF on the ABO dataset. Comparative results with more baselines and datasets, such as VisionNeRF [1] and NeRFDiff [2], would have added credibility to their claims. This becomes particularly relevant given that in the supplementary material, HyP-NeRF does not outperform the baselines on the SRN dataset (e.g., PSNR of 21.02 versus 24.48 with Vision-NeRF). I do notice that the authors claim that for SRN they did not apply denoising fine-tuning and that SRN has different poses, but it would be hard to judge HyP-NeRF’s performance under different settings. Therefore, I would like to see HyP-NeRF’s performance on ShapeNet with other baselines under the same setting to validate its performance. 3. As the authors introduce a hypernetwork, extra computation is added on top of the original NeRF. 
I would like to see the computation cost in terms of (# MLP parameters) and (# of FLOPS) in comparison with the baselines. This is particularly relevant considering that HyP-NeRF requires test-time optimization. Additionally, how does the latent codebook size (Sn and Cn) impact the performance of the network? 4. Readability issues, for example: ln 170 (3) should end with a comma “,”. ln 171 uses “eqn.” to denote the equation, but later ln 202 uses “Equation”. [1] Vision Transformer for NeRF-Based View synthesis from a Single Input Image https://arxiv.org/pdf/2207.05736.pdf [2] NeRFDiff: Single-image View synthesis with NeRF-guided distillation from 3D-aware diffusion https://arxiv.org/pdf/2302.10109.pdf Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Also listed in the weakness section for detailed reasons. 1. Please run more experiments with different baselines, for example, Vision-NeRF and NeRFDiff, and additional datasets such as ShapeNet to validate HyP-NeRF’s performance. 2. Conduct ablation studies to explore if Sn and Cn can capture object shape and color, such as similar Sn (same category) or clustered Cn (if they are within the same category and of similar color). 3. Another important aspect is computation budgets and rendering speed. Given that HyP-NeRF requires test-time optimization, which Pixel-NeRF does not, a comparison between the two models’ computational requirements and rendering speed would be highly informative. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have included limitations. Social impact does not apply. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough analysis of our work and their questions and concerns. In the global and local rebuttals, we have addressed the concerns and questions and provided additional comparisons and visualizations (rebuttal PDF). 1. To demonstrate **shape and color disentanglement**, we add three qualitative results in the rebuttal PDF: - Figure 1: We start with two object instances from the train set, $A$ and $B$, and denote their corresponding shape and geometry codes as $A_s$, $B_s$, and $A_g$, $B_g$. - Next, we switch the geometry and shape codes and generate two novel NeRFs given by $\\{A_s, B_g\\}$ and $\\{B_s, A_g\\}$. - Here, we can clearly see the disentanglement: geometry is perfectly preserved, and the color is transferred faithfully across the NeRFs. - Figure 2: We fix a geometry code and interpolate the color codes. As shown, the geometry is perfectly preserved while the color smoothly transitions. - In Figure 3, we cluster color and shape codes using t-SNE plots and visualize instances from the clusters. As shown, each cluster represents a similar color or shape. **HyP-NeRF appears to overlook the shape or color:** We propose two methods to query our learned prior - (1) test-time optimization (TTO) and (2) a mapping network (MapN). - TTO uses the pixel-wise difference between the generated NeRF renderings and the ground-truth view. - In MapN, we encode the given single-view image (or text) using the CLIP [39] encoder to obtain a feature vector. This feature vector is mapped to the hypernetwork's learned prior (separate mappings for shape and color codes), allowing the hypernetwork to generate the NeRF in a single forward pass. - With MapN, we lose the **low-level details, like fine shape and color details**, through CLIP's encoding process. Therefore, the resultant generated NeRF does not exactly match the given input, as mentioned in the limitation section on Page 9, line 349.
- This indicates a limitation of the mapping technique, **not** of the prior learned by the hypernetwork, as is evident through TTO, which operates within the generator's prior space. For this reason, we can obtain results on single (or multi)-view images, **even on views with severe occlusion** (video timestamps 5.39, 5.57, 3.51), that match the given input exactly. 2. **The scope of the baseline and dataset appears ...** - R3 suggests two baselines - NeRFDiff and VisionNeRF. - NeRFDiff was recently accepted to ICML 2023 (after our submission to NeurIPS), was submitted to arXiv very close to the NeurIPS submission date, and hasn't released its codebase or dataset split on ABO yet. We will add NeRFDiff as a concurrent work in our related work section. - VisionNeRF requires an exorbitant amount of compute; quoting from their paper - “16 NVIDIA A100 GPUs, where the training converges at 500K iterations” for training at a resolution of 128 x 128 on SRN. On the other hand, we train HyP-NeRF on a single NVIDIA RTX 2080 Ti GPU. **HyP-NeRF specifically thrives and distinguishes itself at a higher resolution of 512, on data with high-fidelity textures and shapes** - a resolution of 512 would need much more compute than 128 to train VisionNeRF. In the supplementary, we have presented comparisons against CodeNeRF, FE-NVS, and VisionNeRF on SRN. - To further add credibility to HyP-NeRF, **we compare it to another popular baseline - CodeNeRF - that employs a similar latent conditioning technique by modifying a NeRF to be a conditional NeRF, on ABO in the rebuttal PDF, Table 4.** HyP-NeRF significantly outperforms CodeNeRF at both resolutions. 3. **Baselines under the same setting**: We present our results on SRN with "denoise and finetune" in the rebuttal PDF, Table 1. We outperform the baselines on the Cars subset.
Although we do not outperform on the Chair subset, the following points should be noted: - We chose single-view NeRF generation as one of several tasks we can perform to showcase generalization to novel NeRFs. The existing baselines are specifically trained for the task of single-view NeRF generation and modify the NeRF function by conditioning the NeRFs on additional features (like CNN features in PixelNeRF and VisionNeRF). Our NeRF is a standard NeRF that expects only the viewing direction and 3D point location as input. Thus, unlike the baselines, our NeRFs can be directly adopted in any downstream task that expects a standard NeRF as input. - We incorporate test-time optimization, which is known to fail at views that do not provide sufficient context. To provide further evidence of this, we compare with PixelNeRF (without the denoise and finetune step for a fair comparison) on **sparse-view NeRF generation ranging from a single view to 5 different views in the rebuttal PDF, Table 3**. Our results jump significantly with two views and improve further as the views increase. This is also shown qualitatively in the supplementary paper, Figure 2. - Finally, HyP-NeRF showcases many diverse downstream tasks (including many novel tasks not shown before for NeRFs) through many examples in the submitted paper and video, ranging from compression and text-to-NeRF generation to generating NeRFs from occluded and cluttered images **scraped directly from the internet without any preprocessing**, and so on. 5. **how does the latent codebook size (Sn and Cn) ...**: The codebook size is equivalent to the number of training instances $\times$ 512, where each code corresponds to one training instance. Therefore, reducing the codebook size will result in lower generalization on unseen datapoints, as the network will have seen fewer examples during training. We would be happy to discuss and clarify this further in the discussion phase. 6. We will replace eqn. with Equation.
in the final version of the paper. The rest of the concerns are addressed in the global rebuttal. --- Rebuttal Comment 1.1: Title: Minor correction Comment: We would like to add a small correction to the rebuttal. In point 1, we mentioned $A_g$ and $A_s$ to be the geometry and shape codes, respectively, whereas they should be $A_g$ and $A_c$, denoting the geometry (i.e., shape) and the **color** of the object instead. The same goes for instance $B$. This is also mentioned in the added rebuttal PDF. --- Rebuttal Comment 1.2: Comment: Thank you for providing clarifications on your comments. I understand that some of the concerns arise directly from the intrinsic characteristics of the hypernetwork. It is evident that methods integrating hypernetworks into their pipeline would exhibit certain inherent issues. Based on this understanding, I've adjusted my rating to a borderline accept. Here are the issues I'd like to highlight: Although Table 2 demonstrates a reduction in FLOPs during inference, the inference time when considering the full pipeline is notably longer compared to the baselines. Specifically, it amounts to a sum of 312 seconds and 2 minutes. The authors have mentioned that denoising and fine-tuning do not have a significant impact. The global response suggests a "marginal difference". However, when looking at Table 1 and Table 3, I observed a noticeable increase of 9% in the Cars dataset in terms of PSNR when comparing results with and without denoising. --- Reply to Comment 1.2.1: Title: Author's response to reviewer's comment Comment: We are glad that we could address the raised concerns and are delighted to notice the change in rating. Regarding the mentioned issues: 1. Apologies for the confusion. Here, $m$ denotes the number of poses rendered for the Denoise and Finetune step (main paper, line 183). In this step, we render the NeRF from $m$ views that are denoised and further used for finetuning.
Despite the time taken by this step, rendering a full NeRF ($\ge$ 120 views) would be much faster for HyP-NeRF. The exact breakdown is given below:

| | PixelNeRF | CodeNeRF | HyP-NeRF |
|---|---|---|---|
| TTO | - | 305s | 165s |
| Single-view rendering | 47.8s | 8s | 2s |
| Denoise & Finetune | - | - | 319s (when m = 91) |
| Total time to render NeRF from 120 views | 5736s | 1265s | 724s |

The total time to render the NeRF from 120 views is computed as: TTO $+$ 120 $\times$ time taken for single-view rendering $+$ Denoise & Finetune (for HyP-NeRF). Denoise & Finetune - the denoising step takes 182s (which includes rendering and denoising the 91 views), and the finetune step takes 137s, which adds up to 319s. 2. In general, we have observed that denoising results in only a marginal difference (thereby retaining the original consistency). However, as you rightly pointed out, the overall improvement (after fine-tuning) is more pronounced, especially on the SRN car dataset. We think this is primarily due to a property of the dataset - the category has less diversity in terms of geometry as compared to chairs or ABO. We will also point out this difference in the final version of the paper.
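As a sanity check (ours, not part of the rebuttal), the total-time formula stated above can be reproduced in a few lines; the variable names are illustrative:

```python
# Figures (in seconds) taken from the timing breakdown above.
def total_render_time(tto, per_view, denoise_finetune, n_views=120):
    # Total = TTO + n_views x single-view rendering + Denoise & Finetune.
    return tto + n_views * per_view + denoise_finetune

pixelnerf = total_render_time(tto=0, per_view=47.8, denoise_finetune=0)
codenerf = total_render_time(tto=305, per_view=8, denoise_finetune=0)
# Denoise & Finetune for HyP-NeRF: 182s (denoising) + 137s (finetuning) = 319s.
hypnerf = total_render_time(tto=165, per_view=2, denoise_finetune=182 + 137)
# These reproduce the 5736s, 1265s, and 724s totals in the table.
```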
Summary: The paper proposes HyP-NeRF, a method using hypernetworks to learn generalizable category-level priors, addressing the limitations of existing work on generalization. Specifically, the proposed hypernetwork-based method predicts the parameters of NeRF and multi-resolution hash encodings, and further incorporates a denoise and finetune strategy to improve quality while retaining multi-view consistency. Qualitative comparisons and evaluations on three tasks (generalization, compression, and retrieval) show that HyP-NeRF achieves state-of-the-art results. Strengths: 1. It is an interesting idea to use a hypernetwork to learn the category prior for NeRF, giving NeRF the capacity to generalize. 2. The fine-tuning procedure with a denoising network can improve the texture quality while retaining consistency. 3. Hyp-NeRF can extend to other downstream tasks, like single-image-to-3D and text-to-3D. 4. The paper is well-organized and easy to follow. 5. The experiments look convincing. They clearly support the major contribution of the paper: that a class of objects can be compressed in a unified network with the help of a hypernetwork. Weaknesses: 1. There are not enough details on why the fine-tuning procedure can retain consistency and avoid blurry results. Previous works, like Instruct-NeRF2NeRF and StylizedNeRF, claim that fine-tuning with inconsistent images leads to blurry results. 2. The proposed method focuses on category-specific generalization, similar to NeRF-based GANs. They both learn the prior of a class of objects and take NeRF as the 3D representation. Therefore, it is better to conduct an experiment comparing Hyp-NeRF and one of the 3D-aware GANs. 3. The examples in Figure 4 show low-quality results. Although the examples are simple, the results are not clean, with some noise in the appearance. 4. It lacks more quantitative comparisons with other methods like Table 4. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
The denoised images seem to be 3D inconsistent. I wonder why the fine-tuning process in the second step can retain multi-view consistency and bypass the blurry results that are common when fine-tuning NeRF with inconsistent images. Lines 188-191 are not clear. 2. What about efficiency, i.e., inference time and training time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. The results are low quality, with blurry details and noise. Besides, the examples in the experiments are too simple. 2. The task is similar to NeRF-based generative models, like pi-GAN and EG3D. The paper lacks a discussion comparing Hyp-NeRF and NeRF-based GANs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's efforts in studying our work and their positive assessment of our contributions. Below, we elaborate on the questions raised and provide additional comparisons to address all the concerns. 1. **There are not enough details on why fine-tuning procedures can retain consistency and avoid blurry results. Previous works, like instruct-NeRF2NeRF and StylizedNeRF, claim that fine-tuning with inconsistent images leads to blurry results.** This is addressed in the global rebuttal. 2. **The proposed method focuses on category-specific generalization similar to NeRF-based GAN. They both learn the prior of a class of objects and take NeRF as the 3D representation. Therefore, it is better to conduct an experiment comparing Hyp-NeRF and one of the 3D-aware GANs.** - HyP-NeRF is primarily different in that our main focus is on generating NeRFs, whereas the GANs aim to generate multi-view consistent images of high quality. For example, EG3D first generates an output at a lower resolution of $128 \times 128$ due to a computationally expensive design. It then uses a super-resolution network to super-resolve the images to $512 \times 512$. Further, these images are passed on to a discriminator, which guides multi-view consistency for each image. In this case, the output is an image. In contrast to EG3D, HyP-NeRF generates a NeRF trained through the volumetric rendering loss, and the resultant output (pre and post denoise and finetune) is a NeRF - one that can be adopted in any downstream task. - Moreover, compute requirements increase drastically for an architecture involving discriminators. For example, an entire image must be rendered to be passed on to the discriminator in each forward pass. This requires sampling a number of rays equivalent to the image resolution to render an entire image from the NeRF representation.
On the other hand, we only need to sample a handful of rays that can fit in memory in each training iteration, and we are therefore agnostic to the resolution, enabling us to scale up to arbitrary resolutions. - Lastly, we are primarily different in the task we aim to show. GAN-based methods heavily focus on unconditional sampling, whereas unconditional sampling is not trivial in our case. Instead, we evaluate the task of single-view NeRF generation against works that closely resemble ours, like CodeNeRF and PixelNeRF, which aim to learn a prior over many NeRF instances. 3. **The examples in Figure 4 show low-quality results. Although the examples are simple, the result is not clear, with some noises in the appearance.** - We would like to point out that Figure 4 has two variants of HyP-NeRF - (1) with denoise and finetune and (2) without denoise and finetune, as part of the ablation. Our final result is the 3rd column. The 4th column presents the ablation results, which are slightly inferior in quality compared to the 3rd. This contrast is also showcased in the video, timestamps 2.45 to 3.27. - While we understand that quality is subjective, it is important to note that we showcase results at a resolution of 512, while much of the existing work on learning a prior over NeRFs remains at a resolution of 128. 3D GANs like EG3D showcase results at 512, but it is important to note that the method employs super-resolution on the outputs of the NeRF renderings (which are originally at 128), which results in multi-view inconsistencies. On the other hand, we directly render at 512 at high quality. - We would be happy to provide further clarification if R2 can point out the exact instances that do not feel up to par in the author-reviewer discussion phase. 4. **It lacks more quantitative comparisons with other methods like Table 4.** - Table 4 is an ablation table where we remove a part of our network design and compare it with the full network to demonstrate the removed part's importance.
Therefore, in this table, we have not compared with the baselines. Table 3 is a retrieval experiment using the mapping network to retrieve results from our codebook. Since we are retrieving results from our own codebook, there is no baseline. For the other tables, we have compared with the respective baselines. We would be happy to provide further clarification on this point during the author-reviewer discussion phase. - We have additionally added comparisons with CodeNeRF on ABO (128 and 512 resolutions) in the rebuttal PDF. 5. **What about efficiency, i.e., Inference time and training time?** This is addressed in the global rebuttal. --- Rebuttal Comment 1.1: Title: Comment Comment: Thank you for the authors' efforts. Regarding my initial reviews and the author's rebuttal, I would like to provide the following revised response: A1: Thank you for the explanation. I have noticed a minor difference between the results before and after denoising, which leads to a slight inconsistency. This explicitly addresses my concern. A2: Thank you for the comparison. I clearly understand the distinction between the 3D-aware GANs, such as EG3D-like models, and your proposed Hyper-NeRF. However, in my opinion, aside from the EG3D-like models, pure NeRF-based GANs like pi-GAN and subsequent GRAF-HD without super-resolution share intrinsic similarities with Hyper-NeRF. In both cases, a conditioned NeRF is generated by sampling from the learned distribution. Therefore, I still believe it would be beneficial to compare Hyper-NeRF with one of these 3D-aware GANs. A3: I hold the view that high resolution does not equate to high quality. However, I have reconsidered and now acknowledge that the results of Hyper-NeRF can indeed be compared to the recent state-of-the-art methods, although the qualitative results with denoising may not be entirely satisfactory. A4: Thank you for conducting the additional experiments. 
A5: As indicated in Table 2 of the PDF, Hyper-NeRF achieves comparable inference time after the warm-up phase with the two baselines. However, it's important to note that the warm-up time does impact the overall inference time. With fewer views to render, the difference in performance becomes more pronounced. --- Reply to Comment 1.1.1: Title: Author's comments to reviewer's response Comment: Thank you for your response and for raising further concerns. **A2:** Even though there are dissimilarities, as noted in our rebuttal (based on compute requirements, datasets, and the tasks), we do agree with your point that HyP-NeRF does share fundamental similarities with 3D-aware GANs like pi-GAN and GRAF, as they generate conditioned NeRFs by sampling from learned distributions. Based on your suggestions and our analysis, we have considered the following works in the discussion - GRAF, pi-GAN, GRAM (Deng et al.), and EpiGRAF (Ivan et al.). We would, however, like to underline the differences in compute requirements between GAN-based methods and HyP-NeRF that make the comparison non-trivial: - **Compute Requirements:** Adding to what we mentioned in the rebuttal regarding expensive compute for 3D-aware GANs, EpiGRAF's Table 1 outlines the following - - pi-GAN and GRAM go out of memory (OOM) when training at a resolution of 512, even on an NVIDIA V100. - EpiGRAF takes 24 V100 GPU days to train at 512 resolution (FFHQ dataset). **Notably, HyP-NeRF is trained on a single NVIDIA RTX 2080 Ti GPU for a period of just 3 days at 512 resolution.** **Comparisons:** We compare with GRAF on the ABO-Chair dataset at a resolution of 512 for the task of single-view NeRF generation through TTO. GRAF obtains a PSNR and SSIM of $15.87$ and $0.83$, respectively, whereas HyP-NeRF obtains a PSNR and SSIM of $24.23$ and $0.91$, respectively.
Furthermore, TTO is performed for ~15 minutes on GRAF on a single instance for the reported results, whereas HyP-NeRF needs only 165 seconds (or 2.75 minutes). **A3:** We do agree with your point of view that high resolution does not equate to high quality. GAN-based methods tend to have higher quality at the cost of high compute, whereas HyP-NeRF achieves comparable quality at much lower compute. **A5:** The reported inference time (in the rebuttal) denoted the time taken to run the end-to-end pipeline, including the TTO. Specifically, the breakdown is as follows:

| | PixelNeRF | CodeNeRF | HyP-NeRF |
|---|---|---|---|
| TTO | - | 305s | 165s |
| Single-view rendering | 47.8s | 8s | 2s |
| Denoise & Finetune | - | - | 319s (when m = 91) |
| Total time to render NeRF from 120 views | 5736s | 1265s | 724s |

The total time to render the NeRF from 120 views is computed as: TTO $+$ 120 $\times$ time taken for single-view rendering $+$ Denoise & Finetune (for HyP-NeRF). *Denoise & Finetune* - the denoising step takes 182s (which includes rendering and denoising the 91 views), and the finetune step takes 137s, which adds up to 319s. **Our rendering takes just 2 seconds for a single image at 512 resolution** (4$\times$ and 22$\times$ faster than CodeNeRF and PixelNeRF, resp.). This is primarily because our computation is efficiently split between the hypernetwork (which runs only once per instance) and the predicted NeRF, and rendering a view relies only on the latter.
Summary: This paper proposes a way to encode a categorical prior for NeRFs. The method trains an auto-decoder, which jointly optimizes both the latent codes (one for every instance in the dataset) and the decoder parameters. The decoder takes the latent code and predicts parameters for an instant-NGP-backed NeRF model. One key technical contribution is that the hyper-network predicts not only the parameters for instant-NGP's MLP, but also the multi-resolution hash encoding. This allows the predicted NeRF to reach higher quality. The paper demonstrates this prior can be used for a variety of tasks. Strengths: - This method is able to generate relatively high-quality NeRFs thanks to the design of the hyper-network, which predicts parameters for both the MRHE and the MLP. This design circumvents the limitation of storing the NeRF in a voxel grid. - The paper demonstrates that the learned prior is capable of various tasks. This demonstrates the versatility of this learned prior. - Novelty. To the best of my knowledge, the auto-decoder solution for a hyper-network that predicts both the MLP parameters and the MRHE is novel. Weaknesses: - Requires a separate pipeline for generation and denoising. Since the denoising and fine-tuning pipeline is an optimization procedure, it's not clear to me whether the pipeline is able to enforce satisfaction of the conditioning input. For example, suppose we use a single image as input and would like to obtain a NeRF whose rendering at a certain pose is that image. After test-time optimization using the NeRF loss of the single image, this NeRF will be piped into the denoise pipeline, which is another optimization. How will this second stage of the pipeline predict something that's aware of the conditional input signal? - Hyper-network number of parameters. It occurs to me that the hyper-network can take a lot of parameters in order to predict the parameters for both the MRHE and the MLP.
There are also other issues with the hyper-network, such as the fact that the output does not necessarily respect the permutation invariance of the MLP parameter space, which is especially the case when the hyper-network structure is only an MLP. Training such a large number of parameters can require a lot of compute or data. - It's not clear how to achieve unconditional generation using this pipeline, since the pipeline, after training, only provides a codebook and a hyper-network, without a way to sample the distribution of the codebook. This suggests that in order to make this pipeline useful for generating a NeRF that satisfies the data distribution, it requires additional handling, such as learning a variational auto-decoder. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper has a good discussion of potential limitations. Additional limitations I foresee are already listed in the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted to receive such a positive assessment of our work and are grateful for it. Below, we provide answers to the questions that were raised. 1. **Require separate pipeline for generation ... this second stage of the pipeline will predict something that's aware of the conditional input signal?** This is addressed in the global rebuttal. 2. **There are also other issue of the hyper-network, such as the output does not necessarily … structure is only an MLP.** Unfortunately, we do not understand this part clearly, but would love to discuss it during the discussion phase. 3. **Hyper-network number of parameters. It occurs to me that the hyper-network can take a lot of parameters in order to predict the parameters for both the MRHE and the MLP….Training such a large amount of parameters can require a lot of computing or data.** This is addressed in the global rebuttal. 4. **It's not clear how to achieve unconditional generation using this pipeline ... pipeline such as learning variational auto-decoder.** - As mentioned on page 6, line 219, the hypernetwork is trained in an auto-decoding fashion and consequently learns a non-standard prior. This training strategy of auto-decoding has been well adopted across an array of influential works that aim to predict an implicit function, such as DeepSDF [37], Scene Representation Networks (SRN) [54], Light Field Networks [53], INR-V [46], and many more. Therefore, as you rightly mentioned, it is not trivial to perform unconditional sampling through our learned prior, which is also listed in the limitation section of our work (page 9, line 346), and we would encourage future works to pursue this direction. - One major purpose of unconditional sampling is to showcase generalization. In our case, we test HyP-NeRF's ability to generalize based on the well-adopted conditional task of NeRF generation from a single-view image.
Further, we showcase many diverse downstream applications that can be enabled through HyP-NeRF through many examples in the paper and video, including compression, text-to-NeRF generation, generating NeRFs from occluded and cluttered images **scraped directly from the internet without any preprocessing** (except the segmentation masks obtained from SAM [20]), and so on. --- Rebuttal Comment 1.1: Title: Discussion about hyper-network Comment: Sorry for not being clear on the hyper-network issues. Here is a simple example: Imagine an MLP with a 1-D input, one 2-D hidden layer, and finally a 1-D output, with no bias. This MLP can be written as $f(x; w_1, w_2) = w_2^T ReLU(x w_1)$, where $w_1, w_2\in R^2$. To use an MLP-based hyper-network to predict both $w_1$ and $w_2$, the output of the hyper-MLP will be 4-dimensional. However, such a structure of the hyper-MLP is not aware of the fact that if we swap the first and second dimensions of $w_1$ and $w_2$, the output of the MLP will be the same. This means that the hyper-MLP is predicting a larger space in which this structure exists, and a common remedy can come from sufficient training with more data. This can be a limitation of the paper if the application doesn’t come with sufficient data. --- Reply to Comment 1.1.1: Title: Clarification about hyper-network Comment: We thank you for this clarification. As you rightly mentioned, it could be a limitation of hypernetworks in general; however, in our setting, the training happens in an end-to-end fashion through the volumetric rendering loss. This results in the hypernetwork learning a common permutation of NeRF weights across all the instances, simplifying the task for the hypernetwork. Therefore, it can learn on any number of instances, including one, two, three, or more. However, with such a low number of datapoints, the hypernetwork's prior is too sparse and does not have enough diversity to generalize to novel NeRFs.
Therefore, we need to train it on a sufficient number of datapoints (as in the ABO / SRN datasets). We hope this clarifies your question, and we would be happy to discuss this further. We will also add this clarification in the main paper.
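The permutation symmetry in the reviewer's toy example above can be checked numerically. The following sketch (ours, for illustration only) implements $f(x; w_1, w_2) = w_2^T ReLU(x w_1)$ and verifies that swapping the two hidden dimensions of both $w_1$ and $w_2$ leaves the function unchanged:

```python
def relu(v):
    return [max(t, 0.0) for t in v]

def f(x, w1, w2):
    # Toy MLP from the example: 1-D input, 2-D hidden layer, 1-D output, no bias.
    return sum(a * b for a, b in zip(w2, relu([x * t for t in w1])))

w1, w2 = [0.5, -1.3], [2.0, 0.7]
swap = lambda w: [w[1], w[0]]  # swap the first and second hidden dimensions

# (w1, w2) and (swap(w1), swap(w2)) are distinct points in the hyper-MLP's
# 4-D output space, yet they realize exactly the same function:
for x in (-2.0, 0.3, 1.7):
    assert f(x, w1, w2) == f(x, swap(w1), swap(w2))
```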
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback and positive assessment of our contributions. We address major factual errors and broad reviewer concerns in this global response and specific reviewer concerns/questions in the local responses. We are confident that we will have addressed all their concerns with this rebuttal and the additional visualizations and experiments. We would be happy to clarify any more questions during the author-reviewer discussion phase. **We refer to Muw9 as R1, bkzC as R2, vtAE as R3, and bLah as R4.** **Clearly, the reviewers appreciate several aspects of the paper**. R1, R2, and R3 appreciate that HyP-NeRF enables **several downstream tasks** (R1: “demonstrates the versatility of this learned prior”, R2: “can extend to other downstream tasks”, and R3: “cooperate with other models for various downstream tasks”). R1 & R3 find the **idea novel** (R1: “both the MLP parameters and the MRHE is novel” and R3: “authors propose a novel idea for…”). R2 also likes the idea of **denoising and fine-tuning** (“denoising networks can improve the texture quality while retaining consistency”). R3 & R4 appreciate our **experimental setup** (R3: “The experiments look convincing. It clearly supports the major contribution of the paper”, R4: “experimental section is well organized”). R1 finds the NeRFs to be **high quality** (“generate relatively high-quality NeRF thanks to…”). 1. **Factual Errors & Opinions:** We first want to address R4’s review, which, unfortunately, has several factual errors and opinions unsupported by facts. For example: - One of the major weaknesses raised by R4 is missing references to MultiPlaneNeRF, Pix2NeRF, HyperNeRFGAN, and Points2NeRF. We appreciate R4 for bringing this up. However, MultiPlaneNeRF was submitted to arXiv after the submission deadline, Pix2NeRF is already cited in the paper, and while we are happy to add references to HyperNeRFGAN and Points2NeRF, we want to note that they are not yet peer-reviewed.
We hope the reviewer reconsiders this unfair criticism of our work. - The reviewer's comments on alternative design choices are arbitrary and unsupported by facts (review subsection titled "The model train generalizable NeRF priors"). In our paper, we justify our design choices and provide comprehensive ablations to evaluate them. Unfortunately, the reviewer has listed a number of arbitrary "what about x" design choices with no evidence to suggest that they would work better. 2. **R1: How is the condition retained?** - The denoise and finetune step operates on the output of the hypernetwork, $M$, which produces a NeRF, $f$, on a given condition. - To retain the condition, we render $f$ to $m$ different poses (as mentioned in line 183). In our experiments, $m=91$. - The denoising module improves the texture of these $m$ rendered images. However, it is to be noted that this step only changes the input image marginally (this is also illustrated in the added rebuttal PDF, Figure 3), and thus the condition is not lost. - Further, since $f$ is fine-tuned on the denoised images (which are already condition aware), the overall process does not lose the condition. 3. **R3: ... resolving geometric ambiguity through the denoising the 2D outputs does not seem logically coherent** - We do not resolve the geometric ambiguities through the denoising process. In fact, our assumption is the exact opposite - the input to the denoise and finetune step should already be multi-view consistent. - 3D inductive bias is established through the standard volumetric loss proposed in [30], as mentioned in Equation 2, while training the hypernetwork, $M$. The NeRF predicted by $M$ is already multi-view consistent. 4.
**R2 & R3: How are the results multi-view consistent and not blurry?** - While training a NeRF on highly inconsistent multiview images can result in inconsistent and blurry results, we would like to point out that the multiview inconsistencies in the denoised images in the “Denoise and Finetune” step are negligible. - To begin with, our images (to be denoised) are rendered from a fully-fledged NeRF, which is multiview consistent by design. This is evident from the experimental evaluations, where it can be seen that HyP-NeRF significantly outperforms (on ABO) or is at least comparable (on SRN) to the baselines on the task of single (or sparse) view image to NeRF synthesis. This NeRF is rendered to 91 different views ($m=91$). - Secondly, even though VQVAE2 denoises the renderings, the denoised images are only marginally different from the input images, and therefore the multi-view inconsistencies are negligible (as clearly shown in the PDF, Figure 2), unlike Instruct-Nerf2Nerf and StylizedNeRF, which modify the images drastically, resulting in inconsistencies and blurry results. - Our final trick is that, instead of training a NeRF from scratch, we only fine-tune the originally predicted fully-fledged NeRF, which is already very similar in geometry and texture to the denoised images, thus making sure that the final NeRF is not distorted and retains the initial geometric qualities while showcasing improved texture and fine geometry refinements (like smoothened edges). **Computation Metrics (inference time, parameters, and FLOPS), rebuttal PDF, Table 2**. In the HyP-NeRF inference time, $m$ denotes the number of images used for the denoise and finetune process. Although we have the largest number of parameters (2$\times$ and 61$\times$ compared to PixelNeRF and CodeNeRF, respectively), our FLOPS are significantly lower than both baselines (26$\times$ and 28$\times$ less compared to PixelNeRF and CodeNeRF, respectively). 
This is primarily because, given N query points in a scene, the forward pass through the hypernetwork (computationally expensive) happens only once for the scene. Only the NeRF predicted by the hypernetwork (the less computationally expensive part) is run for each query point. Pdf: /pdf/ccf337b883518a95763b5fd3b5395400f6387a12.pdf
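The argument above (one expensive hypernetwork pass per scene, amortized over many cheap per-point NeRF evaluations) can be illustrated with a back-of-the-envelope count; every number below is hypothetical and purely for illustration, not a measurement from the paper:

```python
# Back-of-the-envelope FLOP accounting; all numbers here are hypothetical.
hypernet_flops = 5e9          # one expensive hypernetwork forward pass per scene
nerf_flops_per_point = 1e5    # cheap predicted-NeRF evaluation per query point
n_points = 1_000_000          # query points in the scene

total = hypernet_flops + n_points * nerf_flops_per_point
per_point = total / n_points

# The hypernetwork cost is amortized across all query points,
# so the effective per-point cost stays close to the cheap NeRF cost.
assert per_point < 2 * nerf_flops_per_point
```

With these illustrative values, the amortized per-point cost is dominated by the cheap NeRF evaluation rather than the hypernetwork pass.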
NeurIPS_2023_submissions_huggingface
2,023
TensorNet: Cartesian Tensor Representations for Efficient Learning of Molecular Potentials
Accept (poster)
Summary: The paper presents TensorNet, a novel O(3)-equivariant message-passing neural network for efficient representation of molecular systems in scientific research. This model utilizes Cartesian tensor atomic embeddings to simplify feature mixing via matrix product operations. By decomposing tensors into rotation group irreducible representations, it enables independent processing of scalars, vectors, and tensors when required. TensorNet outperforms higher-rank spherical tensor models while utilizing fewer parameters, even with a single interaction layer for small-molecule potential energies. Additionally, TensorNet can accurately predict vector and tensor molecular quantities on top of potential energies and forces, greatly reducing the model's computational cost. Therefore, TensorNet provides a promising framework for developing state-of-the-art equivariant models with enhanced efficiency and computational affordability. Strengths: 1. This paper adeptly presents TensorNet, a new learning model that not only establishes state-of-the-art performance but does so with a remarkable reduction in the number of parameters utilized. This marks a significant leap in model efficiency without sacrificing performance quality, setting a new benchmark in the field. 2. The majority of the empirical outcomes display a marked enhancement over existing methods, with the implementation of TensorNet consistently yielding superior results. The experimental evidence provided substantiates the model's efficacy, reinforcing the robustness and applicability of this innovative approach in real-world scenarios. Weaknesses: The paper's exposition of the model architecture can be somewhat challenging to comprehend due to its complex nature. A potential improvement would be the inclusion of intuitive diagrams or visual aids within the main body of the text, not just in the Appendix. 
Simplified illustrations, possibly even a step-by-step visual guide, could greatly enhance the reader's understanding of the architecture and make the methodology more accessible to a broader audience. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The proposed architecture (illustrated in Fig. A1) appears quite intricate. Could you provide more insight or intuitive reasoning behind the specific design choices in the model? What fundamental principles or considerations have influenced the complexity and uniqueness of this architecture? 2. Could you delve deeper into the primary constraints or drawbacks of the TensorNet model compared to the existing methods in the field? What are the potential areas where other models might still hold an advantage, and how does TensorNet aim to address these challenges in future iterations or improvements? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on the model. We now address the questions raised. **The proposed architecture (illustrated in Fig. A1) appears quite intricate. Could you provide more insight or intuitive reasoning behind the specific design choices in the model? What fundamental principles or considerations have influenced the complexity and uniqueness of this architecture?** We will improve the clarity of both the figure and the model by adding an additional figure (see the general rebuttal’s PDF). One can justify the design choices by taking into account the precise operations that allow TensorNet to be O(3) equivariant and taking them as the fundamental building blocks: 1) Linear combinations of irreducible components: independently mixing features of different rank (different linear layers for I, A and S) is presumed to give more flexibility to the model when compared to just applying a linear layer to X (= I+A+S). 2) Multiplication with invariant quantities: again, we assume it is preferable to modify I, A, S independently compared to modifying X alone. We can distinguish subcases: - In the case of the embedding module: we need to generate scalars, vectors and tensors from relative position vectors, that is, edge-wise. Assuming more model flexibility when modifying I, A, S independently (compared to just modifying X) by multiplying invariant quantities, it makes sense to encode the edge interatomic distance with different learnable functions $f^{(ij)}_I$, $f^{(ij)}_A$, $f^{(ij)}_S$. After aggregation of edge-wise features into node-wise features, we again use modification with invariants by taking the (normalized) norm and obtaining new invariants through an MLP that will modify the node’s I, A, S, enabling further processing of the direct neighborhood that has been aggregated. 
- In interaction layers, when considering the message-passing framework, we find that encoding the interatomic distance (weighted with a cutoff) separately for the incoming I, A, S features from neighbors is in line with previous message-passing models for molecular potentials while exploiting the flexibility of independent modifications of different-rank features. 3) Interaction via matrix product: we prove that using a particular sum of products the model exhibits O(3) equivariance. Importantly, these products are performed at the node level, after aggregation, which is more efficient. There is an infinite number of expressions in terms of matrix products that allow O(3) equivariance: for example, w * YM + w * MY, where w can be a learnable vector of weights with hidden-channels length and * is the element-wise product. The simplest one, 1 * YM + 1 * MY, is the one currently used. 4) We can consider separately the normalization operations that take place. There are two normalizations: - Using LayerNorm: it can be expected that when taking the norms of I, A and S, their magnitudes can be quite different. LayerNorm would equilibrate the norms coming from I, A and S. In fact, the LayerNorm used in the output MLP giving energies gave a substantial benefit in terms of accuracy. - Normalizing X tensors by means of X -> X / (||X||+1): we found this to stabilize training (without this normalization, for some molecules the loss became NaN); the +1 ensures numerical stability (though any other number could be used). This normalization also enforces a non-linearity. Since ||X|| = || I + A + S || is used (as opposed to using || I ||, ||A||, ||S|| separately), the components I, A, S receive non-linear contributions from the other components. We hope this explanation helps in clarifying the architectural choices, given that the use of the different components I, A, S along the architecture makes the diagram convoluted. 
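The O(3) equivariance of the matrix-product interaction described in point 3 can be checked numerically. The following NumPy sketch (not the authors' code) verifies that the simplest product, YM + MY, commutes with an orthogonal transformation of the features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random rank-2 Cartesian tensor features Y, M and a random orthogonal matrix R.
Y = rng.standard_normal((3, 3))
M = rng.standard_normal((3, 3))
R, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # Q factor is orthogonal (an O(3) element)

def interact(Y, M):
    """The simplest equivariant interaction discussed above: YM + MY."""
    return Y @ M + M @ Y

# Rank-2 features transform as X -> R X R^T under O(3); since R^T R = Id,
# the interaction commutes with this transformation, i.e. it is equivariant.
lhs = interact(R @ Y @ R.T, R @ M @ R.T)
rhs = R @ interact(Y, M) @ R.T
assert np.allclose(lhs, rhs)
```

The same cancellation (R^T R = Id between the two factors) is what makes any weighted sum w * YM + w * MY equivariant as well.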
**Could you delve deeper into the primary constraints or drawbacks of the TensorNet model compared to the existing methods in the field? What are the potential areas where other models might still hold an advantage, and how does TensorNet aim to address these challenges in future iterations or improvements?** The main limitation comes from the prediction of quantities built on top of higher-rank tensors, like functions expanded in terms of spherical harmonics, e.g. electronic densities. However, TensorNet can still predict the density as a scalar at each point; the limitation comes from the representation in terms of higher-rank tensors (bearing in mind that our ultimate intention is to predict energies and forces in a fast and accurate way, and therefore this is not a requirement). The prediction of these quantities of rank higher than two is very uncommon. Examples of physical quantities that can correctly be predicted up to rank 2, apart from energies and forces, are molecular or atomic dipoles, polarizability tensors, nuclear-shielding tensors, and quadrupole moments, which account for the vast majority of quantities that are used in molecular settings. We do think that TensorNet will be primarily used to predict neural network potentials (molecular energies and forces) and molecular properties, which is what it has been designed for and where it holds its main advantages. e3nn-based models will still be used where speed is less important or the higher-rank tensors are somehow required, e.g. for representing functions in terms of spherical harmonics. For the future, we are currently exploiting the simple matrix operations of TensorNet further to achieve even greater performance speed-ups, mainly using CUDA graphs and other standard techniques. Integration with a widely used biomolecular dynamics package is already in progress, with the final aim of being used in practical applications. 
--- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my concern and for promising the improvement of the paper.
Summary: This paper proposes TensorNet, an O(3)-equivariant neural network architecture for molecules. Using the decomposition of a 3x3 matrix into a scalar, vector, and matrix shown in Eq. (2), TensorNet efficiently computes the interaction of O(3)-equivariant features up to l=2, where l is the degree (frequency) of the O(3) representation. The performance is evaluated on several standard benchmarks, such as qm9, which show that TensorNet achieves equivalent or better performance than baselines. Strengths: 1. TensorNet is a novel method. I have never seen the construction of an equivariant net based on the decomposition (2). 2. The performance is evaluated with several different datasets and in terms of different metrics (e.g., prediction error, computation speed). 3. The performance is comparable to or better than existing approaches. Weaknesses: 1. The paper has room to improve in terms of presentation. I'm unfamiliar with the chemistry (molecule) domain, and some parts seem challenging to understand without expertise. For example, a vector r_ij is defined on line 175, but the mathematical definition is not described. Also, the meaning of the cutoff radius is unexplained. Another point is that there is no reference nor citation to Eq. (2), the core equation of this paper. These are not well known in the machine learning community, and it could be better to introduce them in plain words. 2. The limitations of the proposed method are not explicitly discussed. One limitation is that TensorNet cannot capture higher degree (l>2) information of O(3). 3. It is argued that some existing methods have a downside: "the computation of tensor products in most of these models containing higher-rank tensors and pseudotensors can be expensive" (lines 111-112). However, only one method (ET) is compared with the proposed method in terms of computational cost. Also, there is no theoretical evaluation of the complexity e.g., using big-O notation. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: n/a Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not explicitly addressed. No particular concern for potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback on the novelty of the approach, completeness of the validation, and results. We here answer the further points raised. **The paper has room to improve in terms of presentation. I'm unfamiliar with the chemistry (molecule) domain, and some parts seem challenging to understand without expertise. For example, a vector r_ij is defined on line 175, but the mathematical definition is not described. Also, the meaning of the cutoff radius is unexplained. Another point is that there is no reference nor citation to Eq. (2), the core equation of this paper. These are not well known in the machine learning community, and it could be better to introduce them in plain words.** We understand the concern that the reviewer mentions about some definitions not being explicitly addressed for researchers not familiar with the molecular domain. We will address this and improve it in the new version of the manuscript. It will also be clarified in the additional figure that will be incorporated into the manuscript (see the general rebuttal’s PDF). The edges between nodes (atoms) are built by defining a cutoff radius, where atoms $i$ and $j$ are connected by an edge $ij$ if they are at a distance smaller than the cutoff radius. At the same time, for every edge, we can consider a relative position vector $r_{ij}$, computed from 3D coordinates of atoms $r_i$ and $r_j$, as $r_{ij}$ = $r_j$ - $r_i$. Regarding the decomposition (2), the reference is [21] (page 3, first and second paragraphs), even though it is true that it is found two or three lines below the expression and it may not be immediately clear that it refers to (2). Therefore, we will add an additional reference and citation to clarify. **The limitations of the proposed method are not explicitly discussed. 
One limitation is that TensorNet cannot capture higher degree (l>2) information of O(3).** Regarding limitations, we will include them as an independent section at the end of the manuscript. TensorNet does not use tensors of rank higher than 2, in order to favor computational efficiency. In terms of accuracy for lower-rank quantities, this does not seem to be a problem, as results show that TensorNet performs competitively or better compared to models using higher ranks. TensorNet does not allow choosing the maximum rank of the tensors being used, in contrast to spherical models, where the rank can be seen as a hyperparameter and the architectures are built in a general way that can consistently include higher ranks. The main limitation comes from the prediction of quantities built on top of higher-rank tensors, like functions expanded in terms of spherical harmonics, e.g. electronic densities. However, TensorNet can still predict the density as a scalar at each point; the limitation comes from the representation in terms of higher-rank tensors (bearing in mind that our ultimate intention is to predict energies and forces in a fast and accurate way, and therefore this is not a requirement). The prediction of these quantities of rank higher than two is uncommon. Examples of physical quantities that can correctly be predicted by TensorNet up to rank 2 are energies, forces, molecular or atomic dipoles, polarizability tensors, nuclear-shielding tensors, and quadrupole moments, which account for the majority of quantities that are used in molecular systems. **It is argued that some existing methods have a downside: "the computation of tensor products in most of these models containing higher-rank tensors and pseudotensors can be expensive" (lines 111-112). However, only one method (ET) is compared with the proposed method in terms of computational cost. 
Also, there is no theoretical evaluation of the complexity e.g., using big-O notation.** The computational cost of models based on spherical tensors is well established, and we will add more comparisons in the manuscript. For example, in reference [20] (MACE), a fast model is built that explicitly addresses this issue, and in that reference (Table 2, at the end of page 8) times to compute energy and forces are compared for NequIP [17], BOTNet [19] and MACE on the 3BPA molecule with 27 atoms. MACE is 4x faster than previous models when considering the full model (L=2, the SOTA model for rMD17). TensorNet 2L (our SOTA model for rMD17) with a batch size of 32 is ~4x faster than MACE on the same molecule and batch size, on an NVIDIA RTX 2080 Ti, and therefore it is by far faster than NequIP and BOTNet. We compared our speed to the ET [16], a Cartesian vector equivariant model, as opposed to the other higher-rank spherical models, because Cartesian vector representations are found to be fast (also, notice that the SOTA ET model had 6 layers, and we compare to 4 and 5 layers). Our intention is to emphasize that TensorNet, with accuracies comparable to SOTA spherical models, is as computationally efficient as (or in some cases even more efficient than) a Cartesian vector model for small molecules. Finally, the complexity of TensorNet is linear in the number of atoms N (O(N)), regardless of the rank; N is the main scaling parameter. The number of neighbors per atom, M, is controlled by the cutoff radius and the density, which are usually fixed across models and systems; therefore it is not usually a scaling factor to take into account, but rather a constant factor that is included in the overall speed. In any case, since matrix products are performed after aggregation over neighbors (we compute matrix products node-wise, not edge-wise), these do not scale with M. 
This is in contrast to spherical models, where tensor products are computed on edges, and therefore display a worse scaling with the number of neighbors M, that is, a worse scaling when increasing the cutoff radius at fixed density. We will add this to the manuscript. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the response. The rebuttal comments make sense to me (especially the reason of rank 2) and mostly resolve my concerns. I will raise my score.
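The edge construction referenced throughout this exchange (atoms i and j share an edge when their distance is below the cutoff radius, with relative position vectors r_ij = r_j - r_i) can be sketched in a few lines of NumPy; the coordinates and cutoff value below are illustrative only, not from the paper:

```python
import numpy as np

cutoff = 1.5                      # cutoff radius (illustrative value)
r = np.array([[0.0, 0.0, 0.0],    # toy 3D coordinates for four atoms
              [1.0, 0.0, 0.0],
              [0.0, 1.2, 0.0],
              [3.0, 3.0, 3.0]])

edges, rel = [], []
for i in range(len(r)):
    for j in range(len(r)):
        if i == j:
            continue
        r_ij = r[j] - r[i]                  # relative position vector
        if np.linalg.norm(r_ij) < cutoff:   # neighbors within the cutoff share an edge
            edges.append((i, j))
            rel.append(r_ij)

# Edges are directed, so each neighboring pair appears twice; atom 3 is isolated.
assert (0, 1) in edges and (1, 0) in edges
assert all(3 not in e for e in edges)
```

Edge-wise quantities (such as the distance-dependent functions discussed above) live on these `edges`, while TensorNet's matrix products are computed only after aggregating edge features onto nodes, which is why they scale with the number of atoms rather than the number of edges.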
Summary: This paper proposes a cartesian tensor representation for efficient learning of molecular potentials. It enables the feature mixing process to be a simple matrix product operation. In addition, the matrix product operation is simplified by cost-effective decomposition techniques. Experimental results demonstrate that the proposed method can effectively reach a comparable performance with a much smaller number of parameters. Strengths: (1) The proposed method is very technically solid and this reviewer also thinks the efficient computation of equivariant architectures becomes increasingly important; (2) The extension of torchMD-net with efficient decomposition techniques is reasonably motivated, just like low-rank decomposition techniques for large language models; (3) Experimental settings are very solid, covering enough number of experimental settings, which fully support the effectiveness of the proposed method. Weaknesses: (1) The presentation is not that easy to follow and the notation is a little bit complicated, which brings additional hardness for readers to understand the core idea. In addition, although the paper is more about mathematical techniques utilization, no figure illustration provided is still very tough for readers to quickly capture the core idea; (2) The proposed method is more like an application of efficient tensor decomposition techniques to the existing equivariant network architectures. Without molecular domain-specific insights somehow lowers the significance of the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It seems the proposed architecture is an extension on TorchMD-net. Is it possible to apply this refinement to other equivariant models? If not, then this limitation will significantly lower its significance. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the nice feedback on the strengths of TensorNet. We clarify here the other points raised. **The presentation is not that easy to follow and the notation is a little bit complicated, which brings additional hardness for readers to understand the core idea. In addition, although the paper is more about mathematical techniques utilization, no figure illustration provided is still very tough for readers to quickly capture the core idea** While the mathematics can be a bit overwhelming, we tried to make it as simple as possible using the supplementary information for the more technical parts. We do provide a figure illustration in supplementary, however, we will introduce another more illustrative figure (see general rebuttal’s pdf) and, space permitting, include the one found in supplementary in the main manuscript. Also, we will use the extra space to further clarify the explanation of the mathematical techniques. **The proposed method is more like an application of efficient tensor decomposition techniques to the existing equivariant network architectures. Without molecular domain-specific insights somehow lowers the significance of the proposed method** To our understanding, TensorNet, apart from being a mathematical framework, contains physical inductive biases which are meaningful in molecular domain-specific settings. A common approach in molecular physics is to expand some interaction energy in terms of charges (scalars), dipoles (vectors), quadrupoles (rank-2 tensors) and so on, see for example reference [13] (sections 2 and 3). In TensorNet, atomic features are learnable full tensors that depend on neighboring atoms and that can be decomposed precisely into scalars, vectors, and tensors which interact with other atoms’ features by means of matrix products, giving rise to new scalar, vector and tensor features. 
These products can be regarded as computing all possible combinations (interactions) between scalars, vectors and tensors, akin to scalar/dipole, dipole/dipole, dipole/quadrupole and quadrupole/quadrupole interactions (and so on) in a unified way, contributing to the predicted potential energy of the system. In fact, we also show that for ethanol in a vacuum, TensorNet can simultaneously and accurately predict physical quantities of different geometrical nature from shared atomic features, pointing to the fact that these features are physically meaningful. These facts will be reinforced in the manuscript. **It seems the proposed architecture is an extension on TorchMD-net. Is it possible to apply this refinement to other equivariant models? If not, then this limitation will significantly lower its significance.** TensorNet is not an extension of the Equivariant Transformer, but a completely new model built on the same codebase. TorchMD-NET is a highly optimized PyTorch-based library for neural network potentials, in which the Equivariant Transformer (ET) [16] (probably the model the reviewer refers to when mentioning TorchMD-NET) is one of several existing models. Other models include a graph neural network similar to SchNet, an invariant transformer, and currently TensorNet. In fact, any equivariant model based on Cartesian vectors (such as PaiNN [15] and the ET [16]), can be rewritten using TensorNet’s formalism, by identifying scalar features with tensor features (matrix features) proportional to the identity matrix, and vector features with skew-symmetric tensors. As the reviewer mentions, these models lack the ‘refinement’ of the incorporation of rank-2 features. Given the current implementation of Cartesian vector equivariant models, separated into scalar and vector pathways, one could consider the incorporation of a rank-2 feature pathway. 
In this regard, TensorNet provides a framework that allows working in a unified way with ‘general’ geometrical objects (full tensors) and their decomposition into scalars, vectors, and tensors, as opposed to designing differentiated pathways for different-rank features.
Summary: The paper introduces TensorNet, a message-passing neural network architecture designed for molecular systems representation. TensorNet leverages rank-2 Cartesian tensor representations and O(3)-equivariance. The tensors are decomposed into rotation group irreducible representations, enabling separate processing of scalars, vectors, and tensors when necessary. Strengths: The paper is well-written and the idea of building an O(3) equivariant model with Cartesian tensors is valid, given the computational complexity of models such as e3nn, based on spherical harmonics. Weaknesses: The one main weakness is that the authors only model rank-2 tensors, which is relatively arbitrary. What should be done is to introduce a network with general-rank Cartesian tensors (these will then be 3 x 3 x 3 x ... x 3 tensors). I see the value of adding the l=2 components in a Cartesian way as too marginal for publication at NeurIPS. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: The authors do not discuss the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback on the model in terms of its validity compared to e3nn. Here we address the main point raised. **The one main weakness is that the authors only model rank-2 tensors, which is relatively arbitrary. What should be done is to introduce a network with general-rank Cartesian tensors (these will then be 3 x 3 x 3 x ... x 3 tensors). I see the value of adding the l=2 components in a Cartesian way as too marginal for publication at NeurIPS.** TensorNet proposes a novel idea on how to achieve O(3)-equivariance using simple and fast matrix operations. These operations are commonly executed in neural networks and are fast on GPUs. Even though we are limiting the operations to rank-2 Cartesian tensors, the model achieves state-of-the-art accuracies or performs better than models based on e3nn or using up to rank-3 spherical tensors (and pseudotensors). The use of rank-3 spherical tensors renders those models more computationally expensive, and in fact, no neural network potentials have used a rank higher than 3, while they have the possibility of using arbitrarily higher ranks (of course, at extra cost). TensorNet achieves the same accuracy as models using higher-rank tensors (and pseudotensors). The results are important because if we want to enable molecular dynamics simulations using neural network potentials at a quantum level of accuracy, we need to compute forces for millions and even billions of time steps, and fast potentials are critical for that. We foresee TensorNet’s approach to be important in the quest to replace the molecular mechanics potentials commonly used in bio-molecular dynamics. In fact, any equivariant model based on Cartesian vectors (such as PaiNN [15] and the ET [16]) can benefit from the idea and be rewritten using TensorNet’s formalism, by identifying scalar features with tensor features (matrix features) proportional to the identity matrix, and vector features with skew-symmetric tensors. 
Given the current implementation of Cartesian vector equivariant models, separated into scalar and vector pathways, one could consider the incorporation of a rank-2 feature pathway. In this regard, TensorNet provides a framework that allows working in a unified way with ‘general’ geometrical objects (full tensors) and their decomposition into scalars, vectors, and tensors, as opposed to designing differentiated pathways for different-rank features. The decomposition of arbitrary-rank Cartesian tensors into irreducible representations of the rotation group is highly non-trivial. For the case 3x3x3 (rank 3), it can be shown (see for example the paper *‘Decomposition of third-order constitutive tensors’* by Y. Itin and S. Reches, available on arXiv, subsection 4.2, and Figure 1 on page 24) that the 3x3x3 = 27-dimensional representation can be decomposed into 1 + 3 + 3 + 3 + 5 + 5 + 7, that is, a scalar, three vectors, two quadrupoles, and an octupole (as opposed to the rank-2 decomposition, 3x3 = 9 = 1 + 3 + 5). The decomposition is very complex (subsection 4.2) and involves a significant number of operations. These decompositions for arbitrary ranks would be infeasible in terms of memory and computational efficiency. In contrast, for rank-2 Cartesian tensors, the decomposition only requires computing a trace and a matrix transpose and already achieves SOTA. In terms of limitations, constraining TensorNet to rank 2 restricts the prediction of quantities built on top of higher-rank tensors, like functions expanded in terms of spherical harmonics, e.g. electronic densities. However, TensorNet can still predict the density as a scalar at each point; the limitation comes from the expansion in terms of higher-rank tensors. We want to emphasize that our ultimate intention is to predict energies and forces in a fast and accurate way, and therefore this would not strictly be a requirement. Also, the prediction of these quantities of rank higher than two is uncommon. 
Examples of physical quantities that can correctly be predicted up to rank 2, apart from energies and forces, are molecular or atomic dipoles, polarizability tensors, nuclear-shielding tensors, and quadrupole moments, which account for the vast majority of quantities used in molecular settings. In summary, we believe that TensorNet deserves publication here because it is a novel idea that can be usefully integrated into any model aiming for O(3) equivariance with fast operations. The rank-2 restriction does not limit the applicability of the model, given its SOTA accuracy and the wide range of physical quantities to which it can be applied.
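The rank-2 case the rebuttal describes, where the 9-dimensional Cartesian tensor splits as 9 = 1 + 3 + 5 using only a trace and a transpose, can be sketched in a few lines; this is a generic NumPy illustration of that decomposition, not TensorNet's actual code:

```python
import numpy as np

def decompose_rank2(X):
    """Split a 3x3 Cartesian tensor into its irreducible parts:
    isotropic (scalar, 1 d.o.f.), antisymmetric (vector-like, 3 d.o.f.),
    and symmetric traceless (quadrupole-like, 5 d.o.f.), so 9 = 1 + 3 + 5.
    Only a trace and a transpose are needed."""
    I = (np.trace(X) / 3.0) * np.eye(3)  # isotropic part
    A = 0.5 * (X - X.T)                  # antisymmetric part
    S = 0.5 * (X + X.T) - I              # symmetric traceless part
    return I, A, S
```

Under a rotation R (acting as X -> R X R^T), each of the three parts transforms within its own subspace, which is what makes this split useful for building equivariant features.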
Rebuttal 1: Rebuttal: The questions raised by the reviewers have been addressed in their individual rebuttals. However, since some of them expressed concerns about the clarity of exposition, we use this general rebuttal to attach a figure that will be added to the manuscript, which we hope provides clarification on both the notation and the methods. Pdf: /pdf/e0f8d633c3bb325c339753d79e5633c7bd8517ee.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Multi-Object Representation Learning via Feature Connectivity and Object-Centric Regularization
Accept (spotlight)
Summary: This paper presents a novel approach towards an unsupervised object-centric representation learning framework. Diverging from previous methodologies that prioritize an input reconstruction scheme, this method emphasizes feature connectivity to cluster neighboring pixels, employing two object-centric regularization losses. The underlying insight for these two losses is to learn a distinct feature for each object while repelling features between objects by constraining the covariance matrix of the predicted labels. To validate their approach, the authors execute their experiments on a variety of image types, ranging from simulated and real-world images to those with complex textures. The results reveal a marked performance improvement when compared to alternative methods, demonstrating the efficacy of the proposed framework. Strengths: 1. The innovative approach proposed in this paper provides food for thought. It suggests that the customary input reconstruction paradigm utilized in prior methods can be substituted with the two object-centric regularization terms introduced in this study. The basic concept underpinning these two loss functions is simple, yet it proves to be highly efficacious. 2. The experimental outcomes presented in the paper are encouraging. The proposed framework demonstrates significant improvement when compared with baseline methods. Moreover, the authors effectively showcase the applicability of the proposed design even when the scale of the dataset diminishes. An ablation study further underscores the relevance and importance of the proposed design in this context. Weaknesses: 1. The scalability of this method to real-world scenes of higher complexity remains a point of ambiguity. Further testing and discussion are necessary to ascertain the effectiveness of this approach in more complex, real-world scenarios. 2. Here is one typo: Line 113, to be. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Beyond the L2 distance employed for feature embeddings, exploring the impact of other choices for feature embeddings on model performance and threshold selection could be beneficial. Various methods such as cosine similarity or Mahalanobis distance, among others, could potentially impact the model performance and threshold choice differently. 2. The mask transformation matrix A's initialization strategy is an area that requires further investigation. An exploration into whether a more intricate network structure could enhance the performance would provide more insight into the system's capabilities and potential limitations. This exploration could open up new possibilities for performance optimization. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
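The two covariance-based regularizers this review summarizes (raising the diagonal of the object-feature covariance while suppressing its off-diagonal entries) can be sketched in the style of a VICReg-like loss; the function name, hinge target, and exact loss forms below are assumptions for illustration, not the paper's actual objective:

```python
import numpy as np

def object_centric_reg(features, eps=1e-4):
    """Sketch of covariance-based object-centric regularization.
    features: (num_objects, dim) array of object feature vectors.
    - variance term: hinge on the per-dimension std, pushing the
      covariance diagonal up so object features stay distinct;
    - covariance term: penalizes off-diagonal entries, decorrelating
      (repelling) features across objects."""
    n, d = features.shape
    centered = features - features.mean(axis=0)
    cov = centered.T @ centered / (n - 1)
    std = np.sqrt(np.diag(cov) + eps)
    var_loss = np.mean(np.maximum(0.0, 1.0 - std))  # raise the diagonal
    off_diag = cov - np.diag(np.diag(cov))
    cov_loss = np.sum(off_diag ** 2) / d            # shrink off-diagonal
    return var_loss + cov_loss
```

Well-spread, decorrelated object features drive both terms to zero, while collapsed (near-identical) features incur a positive variance penalty.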
Rebuttal 1: Rebuttal: Thank you for your detailed review and for the encouraging comments. We provide some responses to the questions as follows:

**Q1: The scalability of this method to real-world scenes of higher complexity remains a point of ambiguity. Further testing and discussion are necessary to ascertain the effectiveness of this approach in more complex, real-world scenarios.**

We have conducted new experiments for further evaluation on the additional Flowers, Birds, and COCO [4a] datasets, which contain objects that have more diverse colors and shapes. We have also added two more recent baselines, BO-QSA [21] and SLASH [4b], for comparison as suggested. The results in the table below show that OC-Net is robust and remains the top performer:

| | Flowers | Flowers | Birds | Birds | COCO | COCO |
|---|---|---|---|---|---|---|
| Metric | Dice | IoU | Dice | IoU | mDice | mIoU |
| SLASH | 39.91 $\pm$1.48 | 25.98 $\pm$1.15 | 43.45$\pm$1.10 | 28.24$\pm$1.08 | 22.34$\pm$1.10 | 13.37$\pm$1.07 |
| BO-QSA | 65.77 $\pm$1.90 | 51.70 $\pm$1.93 | 44.61$\pm$1.65 | 30.29$\pm$1.50 | 34.90$\pm$1.14 | 23.57$\pm$0.94 |
| OC-Net | **67.21 $\pm$0.21** | **54.42 $\pm$0.23** | **47.80$\pm$0.19** | **33.50$\pm$0.17** | **48.18$\pm$0.17** | **35.58$\pm$0.15** |

The objects discovered by the various methods on sample images can be viewed in the PDF attached in our general response to all reviewers. We see that OC-Net is able to segment out the objects in a fine-grained manner.

**Q2: Beyond the L2 distance employed for feature embeddings, exploring the impact of other choices for feature embeddings on model performance and threshold selection could be beneficial. Various methods such as cosine similarity or Mahalanobis distance, among others, could potentially impact the model performance and threshold choice differently.**

We will include these comparisons in the camera-ready paper.

**Q3: The mask transformation matrix A's initialization strategy is an area that requires further investigation. 
An exploration into whether a more intricate network structure could enhance the performance would provide more insight into the system's capabilities and potential limitations. This exploration could open up new possibilities for performance optimization.** We will include additional description and further studies for the mask transformation matrix in the camera ready paper. [4a] Yang, Yafei, and Bo Yang. "Promising or elusive? unsupervised object segmentation from real-world single images.", NeurIPS 2022. [4b] Kim, Jinwoo, et al., "Shepherding Slots to Objects: Towards Stable and Robust Object-Centric Learning", CVPR 2023.
Summary: The paper presents a novel approach to object-centric (multi-object) representation learning. In contrast to conventional sequential or iterative attention-based methods, the proposed method employs a pixel-wise shortest-distance-based clustering technique. Moreover, the paper emphasizes the use of regularization-based representation learning as an alternative to traditional reconstruction-based approaches. The inclusion of theoretical proofs in the appendix, combined with extensive empirical studies, serves to substantiate the paper's claims and highlight the superior and robust performance achieved by the proposed methodology. Strengths: 1. The paper is well-written with clear structure, logical flow, and informative figures that effectively convey the research findings. 1. The proposed method introduces novel concepts, such as feature connectivity and regularization-based training losses, which lead to impressive results, not only in terms of high scores but also sample efficiency, on six diverse datasets. These datasets encompass synthetic (simulated), real-world, and complex-texture images, and the evaluation covers two fundamental tasks in object-centric learning: object discovery and property prediction. 1. The paper offers theoretical support through a mathematical proof presented in the appendix. This proof establishes an upper bound for the downstream generalization error, enhancing our understanding of the proposed method's performance and its capacity to generalize effectively in downstream tasks. Weaknesses: 1. The primary concern raised by the reviewer revolves around the scalability of the proposed method to other datasets. - The reviewer questions the difference between using the original RGB input image (with proper normalization) and the feature map generated by the single 1x1 convolution layer, which acts as an injective function from a pixel to a representation vector. 
It is important to address this aspect and provide a clear distinction between the two approaches. - The minimal receptive field (1x1) of the encoder (a single-layer 1x1 convolution) limits the ability to consider local or global context during clustering. This issue is not adequately resolved by the empirical studies since the datasets used lack objects with diverse colors or shapes. - The reviewer acknowledges that the positional encoding helps to compensate for this limitation. However, there is a concern about the naive utilization of positional encoding and its ability to capture complex object shapes. Additionally, there is a worry that the resulting clusters might primarily focus on object colors without incorporating positional information. - In contrast to the reviewer's concerns, the authors claim in section G.1 that the 1x1 convolution contributes to fine-grained object-centric learning. This conflict needs to be addressed and supported by the authors' analysis and empirical evidence. - The reviewer suggests conducting empirical studies on additional datasets with more complex object shapes and color combinations, such as PTR [1], MSN [2], and MOVi [3], as demonstrated in previous works [4,5]. - Additionally, considering the results obtained by other encoders used in previous methods, such as Slot Attention [6], would further support the authors' claims. 2. Another concern pertains to the rigidity of the feature aggregation process due to the use of a hard clustering algorithm. This rigidity may hinder the proposed method's ability to perform well on downstream tasks and more complex real-world datasets. [1] Hong, Yining, et al., "PTR: A Benchmark for Part-based Conceptual, Relational, and Physical Reasoning", NeurIPS 2021. [2] Stelzner, Karl, et al., "Decomposing 3D Scenes into Objects via Unsupervised Volume Segmentation", ICLR 2022. [3] Greff, Klaus, et al., "Kubric: A Scalable Dataset Generator", CVPR 2022. 
[4] Biza, Ondrej, et al., "Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames", ICML 2023. [5] Kim, Jinwoo, et al., "Shepherding Slots to Objects: Towards Stable and Robust Object-Centric Learning", CVPR 2023. [6] Locatello, Francesco, et al., "Object-Centric Learning with Slot Attention", NeurIPS 2020. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In terms of the training schedule, it is necessary to provide a clear description regarding any distinction between "steps" (L157) and "iterations" (L165). This will help readers better understand the experimental details and avoid any potential confusion. 1. In the figures depicting qualitative results (Figure 2, 6-13), there seems to be a discrepancy in how the authors classify backgrounds and visualize them as black. While the majority of backgrounds are depicted as black, there are cases where backgrounds are shown in different colors. This raises a question about whether there is any meaningful distinction between black-colored backgrounds and those with different colors. Clarification is needed to understand the criteria used for classifying and representing backgrounds in the visualizations. 1. The paper utilizes a simplistic approach for positional encoding, which differs from other possible options such as the soft positional embedding used in Slot Attention [1] or fixed sinusoidal positional encoding. Since the chosen method deviates from the conventional approaches, it is important to provide a clear explanation regarding the design and rationale behind the positional encoding used in the paper. This will help readers understand the unique choices made and the implications of the selected approach for the overall methodology. [1] Locatello, Francesco, et al., "Object-Centric Learning with Slot Attention", NeurIPS 2020. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: 1. To facilitate a comprehensive and fair comparison, it is crucial to include an analysis of the model's speed in the paper. Given that the proposed method involves iterative updates of pixel distances, which may pose challenges for parallelization, there could potentially be a significant speed sacrifice compared to other methods. However, the paper currently lacks any analysis or discussion regarding the speed performance of the proposed method. 1. To enhance the persuasiveness of the paper, it would be beneficial to include some more experiments, such as a) recent studies such as ISA [1] and SLASH [2] for the object discovery task, and b) property prediction results over the complex datasets such as CLEVRTEX, in the final version. 1. The explanation regarding the mask transformation matrix (denoted as A) lacks clarity. It is crucial to provide additional details regarding the initialization and training of the matrix, going beyond the information provided in section A.4. Furthermore, the purpose and necessity of the matrix should be clearly articulated, supported by a thorough description of the procedural intricacies and empirical studies demonstrating its effectiveness. [1] Biza, Ondrej, et al., "Invariant Slot Attention: Object Discovery with Slot-Centric Reference Frames", ICML 2023. [2] Kim, Jinwoo, et al., "Shepherding Slots to Objects: Towards Stable and Robust Object-Centric Learning", CVPR 2023. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and for the insightful comments, especially on scalability. We provide some responses to the questions as follows: **Q1: The reviewer questions the difference between using the original RGB input image (with proper normalization) and the feature map generated by the single 1x1 convolution layer** Using RGB values would treat each image on its own, while applying a convolutional layer enables the method to learn appropriate feature maps based on a dataset of images in order to optimize for downstream generalization. **Q2: The minimal receptive field of the encoder limits the ability to consider local or global context during clustering. This issue is not adequately resolved by the empirical studies since the datasets used lack objects with diverse colors or shapes. The reviewer suggests conducting empirical studies on additional datasets with more complex object shapes and color combinations such as PTR, MSN, and MOVi, as demonstrated in previous works.** We have conducted new experiments for further evaluation on the additional Flowers, Birds, and COCO [3a] datasets, which contain objects that are more complex than or comparable to those in PTR, MSN, and MOVi. We have also added two more recent baselines, BO-QSA [21] and SLASH, for comparison as suggested. 
The results in the table below show that OC-Net is robust and remains the top performer:

| | Flowers | Flowers | Birds | Birds | COCO | COCO |
|---|---|---|---|---|---|---|
| Metric | Dice | IoU | Dice | IoU | mDice | mIoU |
| SLASH | 39.91 $\pm$1.48 | 25.98 $\pm$1.15 | 43.45$\pm$1.10 | 28.24$\pm$1.08 | 22.34$\pm$1.10 | 13.37$\pm$1.07 |
| BO-QSA | 65.77 $\pm$1.90 | 51.70 $\pm$1.93 | 44.61$\pm$1.65 | 30.29$\pm$1.50 | 34.90$\pm$1.14 | 23.57$\pm$0.94 |
| OC-Net | **67.21 $\pm$0.21** | **54.42 $\pm$0.23** | **47.80$\pm$0.19** | **33.50$\pm$0.17** | **48.18$\pm$0.17** | **35.58$\pm$0.15** |

The objects discovered by the various methods on sample images can be viewed in the PDF attached in our general response to all reviewers. We see that OC-Net is able to segment out the objects in a fine-grained manner.

**Q3: Another concern pertains to the rigidity of the feature aggregation process due to the use of a hard clustering algorithm. This rigidity may hinder the proposed method's ability to perform well on downstream tasks and more complex real-world datasets.**

Our proposed method can be modified to use soft clustering by simply changing the inclusion criteria (Algorithm 1, line 21) and the terminating condition of the object discovery process (Algorithm 1, line 26).

**Q4: In terms of the training schedule, it is necessary to provide a clear description regarding any distinction between "steps" and "iterations".**

We will change "steps" to "iterations" to avoid confusion.

**Q5: In the figures depicting qualitative results (Figure 2, 6-13), there seems to be a discrepancy in how the authors classify backgrounds and visualize them as black.**

Following past work such as Slot Attention [3b], we assign the best-matching regions produced by the various methods the same color as the objects in the ground truths. Our method does not separately model backgrounds and treats all backgrounds of different appearances the same. 
**Q6: The paper utilizes a simplistic approach for positional encoding, which differs from other possible options such as the soft positional embedding used in Slot Attention or fixed sinusoidal positional encoding.**

We found that using linear positional encodings helps attain better performance, especially for downstream property prediction. We have conducted additional experiments to compare alternative positional encodings as suggested, and show the results on Multi-dSprites below:

| Property | Color | Position | Shape |
|---|---|---|---|
| Sinusoidal | 97.8$\pm$0.8 | 75.2$\pm$4.8 | 43.7$\pm$0.0 |
| Linear (Ours) | **98.0$\pm$0.6** | **98.3$\pm$0.1** | **78.1$\pm$0.0** |

**Q7: To facilitate a comprehensive and fair comparison, it is crucial to include an analysis of the model's speed in the paper.**

We implemented a novel batch-wise parallel version of Dijkstra's algorithm to ensure fast training and evaluation of our model. The code has been provided in the supplementary material. We will include an analysis of the model's speed in comparison to all other methods in the camera-ready paper.

**Q8: To enhance the persuasiveness of the paper, it would be beneficial to include some more experiments, such as a) recent studies such as ISA [1] and SLASH [2] for the object discovery task, and b) property prediction results over the complex datasets such as CLEVRTEX, in the final version.**

Code for ISA is currently unavailable; hence we refer to the table in Q2 for additional experiments with SLASH for object discovery on new datasets. 
We also show the updated results of property prediction with CLEVRTEX below:

| Property | Position | Shape |
|---|---|---|
| Slot Attention | 47.5$\pm$19.6 | 30.6 |
| EfficientMORL | 21.8$\pm$2.0 | 18.5 |
| GENESIS-V2 | 79.8$\pm$8.0 | 35.2 |
| SLATE | 62.0$\pm$8.0 | 30.5 |
| SysBinder | 38.2$\pm$3.1 | 29.6 |
| BO-QSA | 66.5$\pm$0.3 | 28.7 |
| OC-Net | **80.7$\pm$2.4** | **36.1** |

**Q9: The explanation regarding the mask transformation matrix (denoted as A) lacks clarity. It is crucial to provide additional details regarding the initialization and training of the matrix, going beyond the information provided in section A.4.**

We will include additional description and further studies for the mask transformation matrix in the camera-ready paper.

[3a] Yang, Yafei, and Bo Yang. "Promising or elusive? unsupervised object segmentation from real-world single images.", NeurIPS 2022. [3b] Locatello, Francesco, et al., "Object-Centric Learning with Slot Attention", NeurIPS 2020.

---

Rebuttal Comment 1.1: Title: Official Comment by Reviewer T4Hn Comment: The reviewer expresses appreciation for the authors' response, acknowledging that some concerns have been addressed through supplementary experiment results. However, the reviewer remains concerned about the scalability of the proposed method due to its simple concept, which, at the same time, diverges significantly from previous object-centric learning approaches. While the simplicity can be innovative, the reviewer emphasizes the need for rigorous validation to establish the method as a credible baseline framework for future research. In this context, the reviewer wishes to note the importance of comprehensive details in the paper, regarding model architectures of both proposed and baseline models, as well as experimental settings and results. Specific concerns include the absence of detailed results for claims such as the clustering threshold ε and the impact of hyperparameters on performance. 
Additionally, the effectiveness of utilizing 1x1 convolutions needs empirical support, especially since it influences model robustness across diverse datasets. The reviewer also raises questions about the sensitivity of the proposed method to hyperparameters, like positional encoding. It seems that positional encodings other than the proposed linear encoding do not perform well, suggesting that this choice may affect performance across various datasets. The reviewer expresses concern regarding the potential performance decline of the proposed method when applied to different datasets with distinct model settings. The authors have not demonstrated how the method fares under these conditions, raising questions about its adaptability and generalization. Furthermore, the reviewer inquires about the proposed method's performance on object discovery tasks involving backgrounds. As claimed by the SLASH authors, understanding backgrounds is one of the important abilities for object-centric learning models. The reviewer also suggests the use of "FG-" metrics to clarify foreground-only benchmarks in the original paper. Lastly, the reviewer seeks more information on the real-world dataset experiments, including model results (does the model solely detect the target objects without any unexpected object discovery?) and dataset preprocessing details (where are the other parts, including backgrounds, in COCO?). While the reviewer acknowledges the paper's novelty and robust performance, the reviewer also asserts that certain core details are lacking. The reviewer looks forward to the authors' response addressing the raised concerns, as it would contribute to establishing greater confidence in the validity and robustness of this paper. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful and helpful follow-up feedback. We are happy to hear that we were able to address some of your concerns. 
**Q10: The reviewer wishes to note the importance of comprehensive details in the paper, regarding model architectures of both proposed and baseline models, as well as experimental settings and results.**

We used the official setup and published code for the training of all baseline models (including BO-QSA [21], SLASH [5b], AST-Seg [3c]). Experimental settings were also followed from previous works when the same datasets are used. We will include full details for the new models and experiments in the camera-ready paper.

**Q11: Specific concerns include the absence of detailed results for claims such as the clustering threshold $\epsilon$ and the impact of hyperparameters on performance.**

The only hyperparameter of OC-Net that varies is the threshold $\epsilon$, and we have shown in Appendix A.3 that $\epsilon$ is able to take a wide range of values without affecting performance. This insensitivity to changes in $\epsilon$ holds for the new datasets as shown in the table below, where performance remains robust for a wide range of $\epsilon$ values:

| | Flowers | Birds | COCO |
|---|---|---|---|
| Metric | IoU-FG | IoU-FG | mIoU-FG |
| OC-Net ($\epsilon = 0.7$) | 53.89$\pm$0.22 | 30.17$\pm$0.14 | 33.88$\pm$0.13 |
| OC-Net ($\epsilon = 1.6$) | 54.41$\pm$0.22 | 31.77$\pm$0.15 | 35.57$\pm$0.14 |
| OC-Net ($\epsilon = 2.3$) | **54.42 $\pm$0.23** | **33.50$\pm$0.17** | **35.58$\pm$0.15** |

**Q12: The effectiveness of utilizing 1x1 convolutions needs empirical support, especially since it influences model robustness across diverse datasets.**

As suggested, we conduct an additional comparison with a new variant of OC-Net where the 1x1 convolutions are replaced with 5x5 convolutions. 
The mIoU results in the table below show that OC-Net with 1x1 convolutions produces more fine-grained segmentations, especially for datasets with small objects:

| | SVHN | IDRiD | CLEVRTEX | CLEVRTEX-OOD | Flowers | Birds | COCO |
|---|---|---|---|---|---|---|---|
| Metric | mIoU-FG | mIoU-FG | mIoU | mIoU | mIoU-FG | mIoU-FG | mIoU-FG |
| OC-Net (5x5) | 42.1$\pm$0.2 | 19.1$\pm$0.1 | 30.3$\pm$0.7 | 31.2$\pm$0.2 | 53.6$\pm$0.2 | 32.5$\pm$0.2 | 27.5$\pm$0.1 |
| OC-Net | **49.9$\pm$0.1** | **31.2$\pm$0.2** | **34.4$\pm$0.9** | **32.3$\pm$0.5** | **54.4$\pm$0.2** | **33.5$\pm$0.2** | **35.6$\pm$0.2** |

**Q13: It seems that positional encodings other than the proposed linear encoding do not perform well, showing that this choice may affect performance across various datasets.**

While using sinusoidal positional encoding affects property prediction results after training the extracted object representations with a gradient boosted tree, performance for object discovery remains robust to changes in positional encoding (PE), as seen from the results in the table below on Multi-dSprites:

| Metric | ARI-FG | Dice-FG | mIoU-FG |
|---|---|---|---|
| OC-Net (Sinusoidal PE) | 99.7$\pm$0.0 | 99.3$\pm$0.1 | 98.9$\pm$0.1 |
| OC-Net (Linear PE) | **99.8$\pm$0.0** | **99.5$\pm$0.0** | **99.1$\pm$0.0** |

**Q14: The reviewer inquires about the proposed method's performance on object discovery tasks involving backgrounds. As claimed by the SLASH authors, understanding backgrounds is one of the important abilities for object-centric learning models.**

For rigorous evaluation of fine-grained object segmentation, we use foreground-only (FG) metrics for the simulated (Multi-dSprites, Tetrominoes) and real-world datasets (SVHN, IDRiD, Flowers, Birds, COCO). We use foreground and background metrics for the complex-textures datasets (CLEVRTEX, CLEVRTEX-OOD), where results in Table 2(c) demonstrate that OC-Net is robust in segmenting complex-textured objects and backgrounds. 
**Q15: The reviewer also suggests the use of "FG-" metrics to clarify foreground-only benchmarks in the original paper.** We will use "FG-" metrics to clarify foreground-only benchmarks in the camera-ready paper. **Q16: Lastly, the reviewer seeks more information on the real-world dataset experiments, including model results (does the model soley detect the target objects without any unexpected object discovery?).** Following previous works, OC-Net fully segments each image before matching the ground-truth regions with the predicted regions for evaluation. **Q17: On dataset preprocessing details: where are the other parts including backgrounds in COCO?** We used the exact preprocessing steps as presented in [3d] and BO-QSA [21] for the Flowers, Birds and COCO datasets. [3c] Sauvalle and Fortelle. "Unsupervised multi-object segmentation using attention and soft-argmax.", WACV 2023. [3d] Yang, Yafei, and Bo Yang. "Promising or elusive? unsupervised object segmentation from real-world single images.", NeurIPS 2022.
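The linear positional encoding compared against sinusoidal PE in this thread can be sketched as appending linearly scaled pixel coordinates as extra channels of each pixel's feature vector; the exact form used by OC-Net is an assumption here:

```python
import numpy as np

def add_linear_pe(feat):
    """Append each pixel's (row, col) position, scaled linearly to
    [-1, 1], as two extra channels of its feature vector.
    feat: (H, W, D) array -> returns (H, W, D + 2)."""
    H, W, _ = feat.shape
    ys = np.linspace(-1.0, 1.0, H)
    xs = np.linspace(-1.0, 1.0, W)
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    return np.concatenate([feat, yy[..., None], xx[..., None]], axis=-1)
```

Unlike sinusoidal encodings, the mapping from encoding back to position is linear, which may be one reason the position-prediction results with linear PE are stronger.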
Summary: The paper presents a method for learning pixel representations for multi-object segmentation. The method works by projecting pixels using $1 \times 1$ convolution into a feature space. The graph is defined on top of pixels using 8-connectivity. A modified Dijkstra's algorithm assigns pixels to objects based on a distance threshold, where the distance between pixels is measured as the Euclidean distance between normalised feature embeddings. Object representation is formed by averaging the pixel embeddings and adding a learnable mask projection. Two losses are used to learn the pixel embeddings, which aim to increase the diagonal and minimise the off-diagonal entries of the object-feature covariance matrix. The method is evaluated on a series of 2D sprite datasets, SVHN and IDRiD datasets and CLEVRTEX, where the method shows strong segmentation performance. Strengths: The presented formulation is quite simple. This leads to short training time and high-sample efficiency. The method shows strong performance on benchmarks to the point of saturating or "solving" simpler cases. The writing is clear. Weaknesses: The description of the method is lacking critical information and details. - In particular, it is not clear how final masks are obtained. Description of the method in Algorithm 1 suggests that the object sets $\mathcal{O}_c$ are not disjoint and may contain the same pixels as other objects (a consequence of revisiting all pixels starting from the empty set (Lines 11 and 24 of the algorithm)). How are the masks obtained in that case, e.g. for evaluation? - It is also not clear why the object discovery procedure converges, as it relies upon having learned an already appropriate pixel embedding to always guarantee that the condition in line 21 of the algorithm can be satisfied. What happens if that is not the case? 
As there is no information exchange between pixels in the same object, it is not clear whether semantic or instance information can be learned and extracted. This is somewhat evidenced by the model not performing well on Tetrominoes in the case where the model is required to learn object shapes. As evaluation is limited to simpler or simulated datasets, it is not clear whether this would scale to complex real-world data. Given performance on simpler datasets, it would be interesting to show the method working on Birds, COCO, and Flowers in comparison to prior work [21, A]. There are related works that explore image segmentation by modelling a graph on top of learned or hand-crafted features [B, C, D]. It would be beneficial to discuss and compare to such works, e.g. the MaskCut method from [C]. Comparison to prior work is limited to older methods, whereas newer methods are not compared against. In particular, a comparison to [21, A, E] is needed given the strong performance on some of the same datasets. Furthermore, methods that were previously shown to obtain good results on CLEVRTEX are not included [E, F, G] in comparisons. Additionally, the object property prediction is limited to simple, nearly monochrome datasets, whereas prior work [41] has considered more complex cases in CLEVRTEX already. ### References [A] Seitzer et al. "Bridging the Gap to Real-World Object-Centric Learning" [B] Wang et al. "Self-supervised Transformers for Unsupervised Object Discovery using Normalized Cut" [C] Wang et al. "Cut and learn for unsupervised object detection and instance segmentation" [D] Shi and Malik "Normalized cuts and image segmentation" [E] Sauvalle and Fortelle "Unsupervised multi-object segmentation using attention and soft-argmax" [F] Jiang and Ahn "Generative Neurosymbolic Machines" [G] Monnier et al. 
"Unsupervised layered image decomposition into object prototypes" Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Is it correct to say that the LayerNorm is applied _without_ a learned affine transformation, as outputs of this layer (or functions thereof) are not included in the loss calculation? If it is, how is it trained, as the membership of a pixel to an object set does not appear to be differentiable? How is the Figure 2 visualisation obtained? It seems that the object discovery procedure requires pixels to be connected; however, there are several "floating" pixels coloured the same colour as the objects. On L161, $\epsilon$ is described as being set such that the normalised similarity should be 50%. How are the similarities normalised? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: There seems to be an unstated assumption that objects need to be up to a certain size. I.e. the threshold implicitly defines a maximum distance between the sampled initial pixel and the furthest pixel that can be included. For example, considering an image of solid color, the resulting pixels should have the same features (close to zero due to LayerNorm) save for the added positional embedding. Thus, only pixels within $\epsilon$ of the starting pixel will be added to the object set $\mathcal{O}_c$. It also seems that the assignment of pixels to a particular object might be heavily dependent on the sampled initial pixel. That is, a pixel $j$ might be included if the initial sampled pixel $i$ is close to it, but not if it is further away, as it cannot satisfy the condition on Line 21 of Algorithm 1 due to too many hops. It will then become part of a different object. 
Or conversely, pixels of two different objects along a shared boundary would be included in the same object if one of them was sampled as the initial pixel. Training would further enforce this. Considering the situation above, if the initial sampled pixel was in the corner, would this not potentially create several components? Similarly, could it be that the model cannot handle an object split into two regions by another object, such as e.g. a dog behind a fence post? Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
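For concreteness, the thresholded region-growing procedure discussed in this review (a Dijkstra-style expansion on an 8-connected pixel graph that stops once the shortest-path distance reaches the threshold $\epsilon$) might look roughly as follows. This is an illustrative sketch only; the function name and all details are ours, not taken from the paper's Algorithm 1:

```python
import heapq
import numpy as np

def grow_object(emb, start, eps):
    """Threshold-limited Dijkstra expansion over an 8-connected pixel grid:
    include every pixel whose shortest-path distance from `start` (summing
    per-hop Euclidean embedding distances) stays below `eps`."""
    h, w, _ = emb.shape
    dist = {start: 0.0}
    heap = [(0.0, start)]
    obj = set()
    while heap:
        d, (i, j) = heapq.heappop(heap)
        if d > dist.get((i, j), float("inf")):
            continue  # stale heap entry
        obj.add((i, j))
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) == (0, 0) or not (0 <= ni < h and 0 <= nj < w):
                    continue
                nd = d + float(np.linalg.norm(emb[i, j] - emb[ni, nj]))
                if nd < eps and nd < dist.get((ni, nj), float("inf")):
                    dist[(ni, nj)] = nd
                    heapq.heappush(heap, (nd, (ni, nj)))
    return obj
```

Under this reading, a pixel far from the start in embedding space is excluded even if spatially adjacent, which is exactly the source of the reviewer's concerns about hop count and initial-pixel dependence.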
Rebuttal 1: Rebuttal: Thank you for your detailed review and for the extensive pointers to related works. We provide some responses as follows: **Q1: Algorithm 1 suggests that the object sets are not disjoint and may contain the same pixels as other objects. How are the masks obtained in that case, e.g. for evaluation?** If a pixel is assigned to multiple objects, we assign it to the mask of the first object in that list and ignore its membership in other objects. **Q2: It is not clear why the object discovery procedure converges, as it relies upon having learned an already appropriate pixel embedding to always guarantee that the condition in line 21 of the algorithm can be satisfied. What happens if that is not the case?** There is always at least one pixel (the starting pixel $\mathbf{p}_i$) that satisfies the condition in line 21, thus guaranteeing that the algorithm will terminate. **Q3: As there is no information exchange between pixels in the same object, it is not clear whether semantic information can be learned and extracted.** Relational information between pixels in the same object is included in object representations in the form of positional encodings and the processed object mask (Eqn. 2). This enables our model to outperform other baselines in downstream property prediction, demonstrating that semantic information can be extracted. **Q4: Given performance on simpler datasets, it would be interesting to show the method working on Birds, COCO, and Flowers** We have conducted additional experiments for further evaluation on Flowers, Birds, and COCO as suggested. We have also added two more recent baselines, BO-QSA [21] and SLASH [5b], for comparison. 
The results in the table below show that OC-Net is robust and remains the top performer:

| | Flowers | Flowers | Birds | Birds | COCO | COCO |
|---|---|---|---|---|---|---|
| Metric | Dice | IoU | Dice | IoU | mDice | mIoU |
| SLASH | 39.9$\pm$1.5 | 26.0$\pm$1.2 | 43.5$\pm$1.1 | 28.2$\pm$1.1 | 22.3$\pm$1.1 | 13.4$\pm$1.1 |
| BO-QSA | 65.8$\pm$1.9 | 51.7$\pm$1.9 | 44.6$\pm$1.7 | 30.3$\pm$1.5 | 34.9$\pm$1.1 | 23.6$\pm$0.9 |
| OC-Net | **67.2$\pm$0.2** | **54.4$\pm$0.2** | **47.8$\pm$0.2** | **33.5$\pm$0.2** | **48.2$\pm$0.2** | **35.6$\pm$0.2** |

The objects discovered by the various methods on sample images can be viewed in the PDF attached in our general response to all reviewers. We see that OC-Net is able to segment out the objects in a fine-grained manner. **Q5: There are related works that explore image segmentation by modelling a graph on top of learned or hand-crafted features [B, C, D]. It would be beneficial to discuss and compare to such works.** DINOSAUR [A], TokenCut [B], and MaskCut [C] belong to a class of methods that rely on the DINO model [2a], which was pre-trained on the ImageNet dataset. For fair comparison, we excluded methods that rely on pre-trained models and trained all methods from random initialization. Nonetheless, we will include a section comparing these works in the updated paper as suggested. We have carried out additional experiments for comparison with Ncut [D] as suggested. The mIoU results in our general response (3) show that OC-Net outperforms other graph-based methods. **Q6: A comparison to [21,A,E] is needed given the strong performance on some of the same datasets. Furthermore, methods that were previously shown to obtain good results on CLEVRTEX are not included [E,F,G] in comparisons.** We conduct an additional comparison with the most recently published BO-QSA [21], which attains scores that outperform or are comparable with AST-Seg [E], GNM [F], and DTI [G]. The mIoU results in our general response (4) show that OC-Net is superior. 
**Q7: The object property prediction is limited to simple, nearly monochrome datasets, whereas prior work has already considered more complex cases on CLEVRTEX.** We conducted additional experiments to further evaluate object property prediction on CLEVRTEX as suggested. Results are in our general response (2). **Q8: Is it correct to say that the LayerNorm is applied without a learned affine transformation, as outputs of this layer (or functions thereof) are not included in the loss calculation?** Yes, LayerNorm is applied without an affine transformation. The membership of a pixel to an object set is used to control the flow of gradients such that gradients only flow towards the pixel embeddings belonging to each object. **Q9: It seems that the object discovery procedure requires pixels to be connected; however, there are several "floating" pixels coloured the same colour as the objects.** As clarified in Q1, pixels may be assigned to multiple objects, and during evaluation we assign such pixels to the mask of the first object. This may result in some floating pixels. **Q10: $\epsilon$ is described as being set such that the normalised similarity should be 50\%. How are the similarities normalised?** We normalise all similarity values to be between $0$ and $1$ by taking the negative exponent. Details are provided in Appendix A.3. **Q11: There seems to be an unstated assumption that objects need to be up to a certain size. It also seems that the assignment of pixels to a particular object might be heavily dependent on the sampled initial pixel.** Objects can include any pixel as long as the shortest-path distance from that pixel to the sampled pixel is less than $\epsilon$. We have found that our trained models are robust to size and sampling sequence, as demonstrated by $\epsilon$ being able to take a wide range of values without affecting performance (see Appendix A.3). 
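A plausible reading of the Q10 normalisation (our assumption; the exact scheme is in the paper's Appendix A.3): a non-negative distance $d$ is mapped to a similarity $e^{-d} \in (0, 1]$, so a 50% normalised similarity corresponds to a threshold $\epsilon = -\ln(0.5) \approx 0.693$:

```python
import math

def normalised_similarity(distance):
    """Map a non-negative distance to a similarity in (0, 1]
    via the negative exponent: sim = exp(-distance)."""
    return math.exp(-distance)

# Under this reading, the eps yielding a 50% normalised similarity:
eps = -math.log(0.5)  # about 0.693
```

A distance of zero gives a similarity of exactly 1, and larger distances decay monotonically towards 0, which is consistent with values being "between $0$ and $1$".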
**Q12: Could it be that the model cannot handle an object split into two regions by another object?** Yes, as stated in the limitations on occlusion, if an object is split into two disjoint regions by another object, the model would consider them as separate objects and output three object masks. [2a] Caron et al. "Emerging properties in self-supervised vision transformers", ICCV 2021. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I believe some of my concerns have been addressed. It is particularly encouraging to see the results on the more varied real-world datasets, which strengthen the paper's message greatly. I still have some concerns regarding the presented algorithmic procedure. Mainly, the necessity to ignore potential multiple object assignments, the potential randomness resulting from the uniform sampling of the initial pixel, and other limitations need discussion and more focus in the paper. Details such as the answer to Q1 are critical to understanding the method, so I would encourage incorporating them into the methods section. Also, statements such as "BO-QSA [21] which attain scores that outperform or are comparable with AST-Seg [E], GNM [F], DTI [G]" need to be supported with experimental results, given that such comparisons were not presented in [21] (except for [G]). Finally, it would be good to include (space/time permitting) the descriptions of the baseline models used to make the new comparisons in the paper. For example, is BO-QSA a transformer or mixture decoder-based model? The results in Tab. 11 (additional PDF) seem to be better than the mixture version from [21] but much lower than the transformer version. Was the model also altered as SLATE/SysBinder (Appendix C) to have 1x1 inputs? I will see if other reviewers have any new questions that develop into a discussion but will be updating my score to reflect the new results and clarifications presented in the rebuttal afterward. 
--- Reply to Comment 1.1.1: Comment: Thank you for your constructive follow-up feedback. We are happy to hear that we were able to address some of your concerns. **Q13: I still have some concerns regarding the presented algorithmic procedure. Mainly the necessity to ignore potential multiple object assignments, the potential randomness resulting from the uniform sampling of the initial pixel** During the evaluation of 2D images where each pixel can only belong to one ground-truth object, we assign pixels that have been assigned to multiple objects to the mask of the first object. When pixels could belong to multiple ground-truth objects, we retain the multiple object assignments. The sampling-based algorithm gives OC-Net the key benefit of being able to segment any image without the need to fix the number of objects beforehand, as required in most state-of-the-art methods. **Q14: Details such as the answer to Q1 are critical to understanding the method, so I would encourage incorporating them into the methods section.** We will include these details in the updated paper. **Q15: Statements such as 'BO-QSA [21] which attain scores that outperform or are comparable with AST-Seg [E], GNM [F], DTI [G]' need to be supported with experimental results, given that such comparisons were not presented in [21] (except for [G]).** Since AST-Seg [E] presented scores which outperform GNM [F] and DTI [G], we conduct additional comparison with AST-Seg as suggested. We train all models end-to-end for fair comparison. 
The table below shows the results on CLEVRTEX and CLEVRTEX-OOD with $64\times 64$ images; we see that OC-Net remains competitive and is robust, especially for fine-grained segmentation of foreground objects (mIoU-FG):

| | CLEVRTEX | CLEVRTEX | CLEVRTEX-OOD | CLEVRTEX-OOD |
|---|---|---|---|---|
| Metric | mIoU-FG | mIoU | mIoU-FG | mIoU |
| BO-QSA | 32.72 $\pm$1.65 | 34.70 $\pm$1.33 | 31.91 $\pm$1.40 | 33.92 $\pm$1.13 |
| AST-Seg-B3-BT | 31.77 $\pm$1.72 | **38.63 $\pm$1.81** | 25.93 $\pm$1.87 | 32.41 $\pm$2.17 |
| OC-Net | **35.73 $\pm$0.07** | **37.49 $\pm$0.07** | **33.93 $\pm$0.13** | **34.98 $\pm$0.14** |

**Q16: It would be good to include (space/time permitting) the descriptions of the baseline models used to make the new comparisons in the paper. For example, is BO-QSA a transformer or mixture decoder-based model? Was the model also altered as SLATE/SysBinder (Appendix C) to have 1x1 inputs?** We will include full details for the new experiments in the updated paper. We use the unaltered transformer version from [21] with latent size $64$. Results in Table 11 were computed only with respect to the foreground for the Flowers and Birds datasets to evaluate fine-grained segmentation. We present the mIoU and mDice scores for foreground and background in the table below, which show that OC-Net remains robust:

| | Flowers | Flowers | Birds | Birds |
|---|---|---|---|---|
| Metric | mDice | mIoU | mDice | mIoU |
| SLASH | 62.18 $\pm$1.04 | 50.15 $\pm$1.99 | 67.20$\pm$1.45 | 55.93$\pm$1.32 |
| BO-QSA | 75.11 $\pm$0.47 | 63.01 $\pm$0.37 | 67.79$\pm$0.76 | 56.95$\pm$0.23 |
| OC-Net | **75.47 $\pm$0.09** | **63.84 $\pm$0.10** | **68.09$\pm$0.08** | **57.32$\pm$0.08** |
Summary: The paper discusses a novel method for learning object-centric representations from images. The new approach uses feature connectivity to group together pixels likely to be part of the same object and employs two object-centric regularization terms for refining the representations. The method has been tested on various types of images and shows significant improvements in object discovery quality, sample efficiency, and generalizability compared to prior work. It has the potential to improve many downstream tasks in predicting object properties, but it also has limitations in handling objects with complex part-whole hierarchies and object occlusions. Strengths: 1. Contributed a novel method for learning object-centric representations from images; the Object Discovery process and Object-Centric Regularization are carefully designed for object-centric learning. 2. Comprehensive mathematical proof and reasoning for the proposed embeddings and regularization terms. 3. Comprehensive comparison between the proposed method and prior work, and on various datasets. 4. Promising results on sample efficiency, accuracy, and generalizability compared to prior work. 5. Paper writing is clear and succinct. Weaknesses: 1. Lack of an example of incorporating the method into an actual downstream task and showing the advantage of the method using the proposed terms. 2. Lack of results on more challenging datasets with complex scenes, object types, object shapes, and object occlusions. 3. Designed to be object-centric, but, as mentioned in the limitations, it struggles with objects that have complex part-whole hierarchies. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: How does the proposed object entanglement compare to the attention mechanism? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors mentioned that the method has limitations in handling objects with complex part-whole hierarchies and object occlusions. It would still be interesting to see how the method works on more realistic images, such as a picture of a complex kitchen scene with complex part-whole cabinetry and a human holding objects, with large object occlusions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
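The object-centric regularization terms discussed in the reviews above (increasing the diagonal and suppressing the off-diagonal entries of the object-feature covariance matrix) follow a pattern common to covariance-based objectives. The sketch below is one generic way such terms are often written, not the paper's exact losses; the function name and the hinge form of the variance term are our assumptions:

```python
import numpy as np

def covariance_regularizers(obj_feats):
    """Toy covariance-style objectives over object features
    (rows = objects, cols = feature dims): encourage large diagonal
    (per-dimension variance) and small off-diagonal (decorrelation)
    entries of the feature covariance matrix. Illustrative only."""
    z = obj_feats - obj_feats.mean(axis=0, keepdims=True)
    cov = z.T @ z / (len(obj_feats) - 1)
    diag = np.diag(cov)
    off = cov - np.diag(diag)
    variance_term = np.mean(np.maximum(0.0, 1.0 - np.sqrt(diag + 1e-8)))  # push diagonal up
    decorrelation_term = np.sum(off ** 2) / cov.shape[0]                  # push off-diagonal down
    return variance_term, decorrelation_term
```

Minimizing the sum of the two terms spreads object features across dimensions while keeping the dimensions decorrelated, which is the stated intent of the paper's two losses.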
Rebuttal 1: Rebuttal: Thank you for your detailed review and for the encouraging comments. We provide some responses to the questions as follows: **Q1: Lack of results on more challenging datasets with complex scenes, object types, object shapes, and object occlusions.** We have conducted additional experiments on the Birds, COCO and Flowers datasets with more complex scenes as suggested. We have also added two more recent baselines BO-QSA [21] and SLASH [1a] for comparison. The table below shows the results:

| | Flowers | Flowers | Birds | Birds | COCO | COCO |
|---|---|---|---|---|---|---|
| Metric | Dice | IoU | Dice | IoU | mDice | mIoU |
| SLASH | 39.91 $\pm$1.48 | 25.98 $\pm$1.15 | 43.45$\pm$1.10 | 28.24$\pm$1.08 | 22.34$\pm$1.10 | 13.37$\pm$1.07 |
| BO-QSA | 65.77 $\pm$1.90 | 51.70 $\pm$1.93 | 44.61$\pm$1.65 | 30.29$\pm$1.50 | 34.90$\pm$1.14 | 23.57$\pm$0.94 |
| OC-Net | **67.21 $\pm$0.21** | **54.42 $\pm$0.23** | **47.80$\pm$0.19** | **33.50$\pm$0.17** | **48.18$\pm$0.17** | **35.58$\pm$0.15** |

The objects discovered for the various methods on sample images can be viewed in the PDF attached in our general response to all reviewers. **Q2: The authors mentioned the method has limitations to handle objects with complex part-whole hierarchies and object occlusions. It would still be interesting to see how the method works on more realistic images, such as a picture of a complex kitchen scene, with part-whole complex cabinetry and a human holding objects and having big object occlusions.** We will include some of these samples in the camera ready paper. **Q3: How does the proposed object entanglement compare to the attention mechanism?** The Slot Attention baseline [1b] used in the paper relies on an adapted version of the attention mechanism to perform object-centric learning. This attention mechanism does not consider feature connectivity whereas our method does, enabling better fine-grained object discovery and sample efficiency. 
[1a] Kim, Jinwoo, et al., "Shepherding Slots to Objects: Towards Stable and Robust Object-Centric Learning", CVPR 2023. [1b] Locatello, Francesco, et al., "Object-Centric Learning with Slot Attention", NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: Thank you for providing further experimental results and addressing my questions. All my concerns have been satisfactorily addressed, and I continue my rating as strong accept. Tackling multiple object learning, especially in complex real-world scenarios is challenging. The paper introduces a novel and promising approach to multiple object representation learning, making a valuable contribution to this field.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for the helpful feedback and insightful comments. We are encouraged that our contribution of an unsupervised object discovery network, which leverages feature connectivity and designed object-centric regularization terms, is described as "*novel*" (TJh8, T4Hn), "*innovative*" (2W4b), "*simple*" (GCAa, 2W4b) and "*yet it proves to be highly efficacious.*" (2W4b). All reviewers highlight the "*sample efficiency*" (TJh8, T4Hn, GCAa) and "*applicability of the proposed design even when the scale of the dataset diminishes.*" (2W4b). We are also heartened that our efforts to design a solution on theoretical grounds have been described as "*comprehensive*" (TJh8), "*enhancing our understanding of the proposed method's performance and its capacity to generalize effectively in downstream tasks*" (T4Hn). Finally, reviewers comment that our experiments "*on six diverse datasets*" (TJh8) are "*comprehensive*" (TJh8), "*encouraging*" (2W4b) and "*shows strong performance on benchmarks to the point of saturating or 'solving' simpler cases*" (GCAa). We further thank the reviewers for the very constructive feedback, which helps us refine our work. In response to this feedback, we have conducted new experiments, which we summarize below: **1. Application of OC-Net to more complex scenes** We have conducted new experiments for further evaluation on the additional Flowers, Birds, and COCO [5a] datasets, which contain objects that have more diverse colors and shapes. We have also added two more recent baselines, BO-QSA [21] and SLASH [5b], for comparison as suggested. 
The results in the table below show that OC-Net is robust and remains the top performer:

| | Flowers | Flowers | Birds | Birds | COCO | COCO |
|---|---|---|---|---|---|---|
| Metric | Dice | IoU | Dice | IoU | mDice | mIoU |
| SLASH | 39.91$\pm$1.48 | 25.98$\pm$1.15 | 43.45$\pm$1.10 | 28.24$\pm$1.08 | 22.34$\pm$1.10 | 13.37$\pm$1.07 |
| BO-QSA | 65.77$\pm$1.90 | 51.70$\pm$1.93 | 44.61$\pm$1.65 | 30.29$\pm$1.50 | 34.90$\pm$1.14 | 23.57$\pm$0.94 |
| OC-Net | **67.21$\pm$0.21** | **54.42$\pm$0.23** | **47.80$\pm$0.19** | **33.50$\pm$0.17** | **48.18$\pm$0.17** | **35.58$\pm$0.15** |

The objects discovered for the various methods on sample images can be viewed in the attached PDF. We see that OC-Net is able to segment out the objects in a fine-grained manner. **2. Object property prediction evaluation on CLEVRTEX** We conducted additional experiments to further evaluate the object property prediction on CLEVRTEX as suggested. The table below shows the results:

| Property | Position | Shape |
|---|---|---|
| Slot Attention | 47.5$\pm$19.6 | 30.6 |
| EfficientMORL | 21.8$\pm$2.0 | 18.5 |
| GENESIS-V2 | 79.8$\pm$8.0 | 35.2 |
| SLATE | 62.0$\pm$8.0 | 30.5 |
| SysBinder | 38.2$\pm$3.1 | 29.6 |
| BO-QSA | 66.5$\pm$0.3 | 28.7 |
| OC-Net | **80.7$\pm$2.4** | **36.1** |

**3. Additional comparison with new unsupervised segmentation algorithm** We have carried out additional experiments for comparison with Ncut [5c] as suggested. The mIoU results in the table below show that OC-Net significantly outperforms other graph-based methods:

| Method | Multi-dSprites | Tetrominoes | SVHN | IDRiD | CLEVRTEX | CLEVRTEX-OOD |
|---|---|---|---|---|---|---|
| Felzenszwalb | 95.0$\pm$0.0 | 96.9$\pm$0.0 | 39.8$\pm$0.0 | 15.4$\pm$0.0 | 26.8$\pm$0.0 | 23.4$\pm$0.0 |
| Ncut | 58.9$\pm$0.0 | 57.4$\pm$0.0 | 32.2$\pm$0.0 | 4.3$\pm$0.0 | 22.9$\pm$0.0 | 18.6$\pm$0.0 |
| OC-Net | **99.1$\pm$0.0** | **100.0$\pm$0.0** | **49.9$\pm$0.1** | **31.2$\pm$0.2** | **34.4$\pm$0.9** | **32.3$\pm$0.5** |

**4. 
Additional comparison with new state-of-the-art model** We conduct an additional comparison with the most recently published BO-QSA model [21]. The mIoU results in the table below show that OC-Net is superior:

| Method | Multi-dSprites | Tetrominoes | SVHN | IDRiD | CLEVRTEX | CLEVRTEX-OOD |
|---|---|---|---|---|---|---|
| BO-QSA | 88.0$\pm$1.2 | 25.8$\pm$1.2 | 48.3$\pm$1.3 | 4.5$\pm$1.7 | 31.9$\pm$1.7 | 31.2$\pm$1.0 |
| OC-Net | **99.1$\pm$0.0** | **100.0$\pm$0.0** | **49.9$\pm$0.1** | **31.2$\pm$0.2** | **34.4$\pm$0.9** | **32.3$\pm$0.5** |

[5a] Yang, Yafei, and Bo Yang. "Promising or elusive? Unsupervised object segmentation from real-world single images.", NeurIPS 2022. [5b] Kim, Jinwoo, et al., "Shepherding Slots to Objects: Towards Stable and Robust Object-Centric Learning", CVPR 2023. [5c] Shi, Jianbo, and Jitendra Malik. "Normalized cuts and image segmentation." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000. Pdf: /pdf/c7ee3c44c1d7361c741f09899f2fde370a8a755f.pdf
NeurIPS_2023_submissions_huggingface
2023
Selective Amnesia: A Continual Learning Approach to Forgetting in Deep Generative Models
Accept (spotlight)
Summary: This paper draws inspiration from continual learning and proposes a Selective Amnesia method to selectively forget concepts in pre-trained generative models. The proposed method is implemented in a widely-used framework of Bayesian continual learning, together with generative replay to optimize the objective. Experiments on several datasets demonstrate the effectiveness of the proposed method. Strengths: 1. The paper is well-organized and easy to follow. 2. The reverse application of the continual learning process to forget certain concepts is an interesting idea. 3. The experimental results, especially for the case study on Stable Diffusion, seem to be remarkable. This may contribute to realistic applications in erasing harmful concepts and protecting privacy. Weaknesses: 1. The idea of selective or graceful forgetting has been explored in continual learning. In particular, AFEC [1] also considers a similar Bayesian-based framework and controls a forgetting rate to incorporate a non-informative prior $p(\theta)$. Interestingly, I find that Eq.(2) also derives the prior $p(\theta)$. Therefore, a comparison of the proposed Selective Amnesia and AFEC is necessary. 2. The experiments only compare Selective Amnesia and Original. Although the results of selective forgetting seem to be remarkable, is it possible to compare with representative baselines of continual learning and/or machine unlearning? 3. A primary motivation of this work is to avoid the computational overhead of retraining on all old data. However, the proposed method relies on strong generative replay, which potentially deviates from the motivation. I encourage the authors to compare the computational overhead of generative replay, and also consider it when comparing with other baselines. [1] AFEC: Active Forgetting of Negative Transfer in Continual Learning. NeurIPS 2021. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: My major concerns include the comparison with AFEC, more advanced baselines, and the computational cost of generative replay. Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations and potential negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
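For context on the Bayesian continual-learning framework discussed in this review, its core is an Elastic Weight Consolidation (EWC) style quadratic penalty that discourages parameters from drifting away from the pre-trained weights $\theta^*$, weighted by a diagonal Fisher information estimate. The sketch below is a generic textbook form of that penalty, not the paper's exact objective (Selective Amnesia adapts such terms to promote, rather than prevent, forgetting of the targeted concept):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """Generic EWC regulariser: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    where `fisher` is a diagonal Fisher information estimate at theta_star.
    Illustrative sketch, not the paper's loss."""
    return 0.5 * lam * float(np.sum(fisher * (theta - theta_star) ** 2))
```

Parameters with large Fisher values (important for previously learned behaviour) are anchored strongly, while unimportant parameters remain free to move, which is why such terms pair naturally with generative replay on the data to be remembered.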
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the quality, strong experimental results, and interesting approach of our work. Below we provide our responses to the concerns raised by the reviewer. > Therefore, a comparison of the proposed Selective Amnesia and AFEC is necessary. Thank you for bringing AFEC to our attention. We will include a discussion of AFEC in the revised manuscript. The major difference between AFEC and our work SA is that AFEC tackles the traditional setting of sequential task learning. AFEC proposes an adaptive posterior over prior tasks when learning a new task in a traditional continual learning setting, which allows conflicting weights from old tasks to be forgotten if they would negatively affect the new tasks. In our work, we would like complete forgetting of $D_f$. Applied to AFEC, this means setting $\beta=1$ in Eq 6, i.e., completely ignoring the posterior over old tasks. This reduces to relying solely on catastrophic forgetting of $D_f$, which we tried in early experiments and found did not lead to sufficient forgetting of $D_f$ due to the strong conditioning of conditional generative models. Regarding the prior term in Eq 6, $p(\theta)$ appears from applying Bayes' rule over the posterior of the task to remember. This resembles the prior in Eq 2 of our work, which arises from Bayes' rule over $D_f$. Similar to us, it seems that the prior term in AFEC is left out of the optimization. > ...is it possible to compare with representative baselines of continual learning and/or machine unlearning. We were unable to find suitable continual learning baselines, as continual learning is concerned with *preventing* catastrophic forgetting, while we are interested in promoting forgetting. As mentioned above, relying solely on catastrophic forgetting of $D_f$ led to suboptimal results. We also could not find suitable machine unlearning baselines for our specific setting of forgetting classes in conditional variational generative models. 
Based on a recent machine unlearning review [1], the methods covered were applied mainly to discriminative models. To our knowledge, there are no machine unlearning works targeted at generative models. Next, we considered several representative methods in machine unlearning and sought to investigate if they could be extended to generative models. As discussed in Sec 2.3, several prior works had limiting assumptions that prevented them from being applied to our setting. For instance, [2] requires altering the original training process of the model via dataset sharding. [3] requires a variational posterior over the original model weights, whereas we only have a MLE point estimate. [4] assumes that the weights of the model that has forgotten $D_f$ only differ from the original model by a small Gaussian error, and proposes to directly modify the weights of the network. As the method is demonstrated only on classification, it is unclear if the assumption holds in the generative case. Regardless, we ran preliminary experiments of [4] on the MNIST VAE, with the simplified Fisher forgetting method of Sec 4 of the paper (as SA also utilizes the FIM). We found this to be unstable due to the inversion of the FIM ($F^{-1/4}$ term). We also considered [5], which we will include in Sec 2.3 of the revised manuscript. [5] proposes to tune a model to remove the influence of certain datapoints but requires the gradients during the original training process to be cached, which we do not have. In general, it appears that methods in the machine unlearning literature [1] often have limiting assumptions and are more suitable for discriminative settings. In considering other avenues for baselines, we also considered other methods designed for generative models (which were pointed out by reviewer 3). However, we found that these methods will not work in our specific setting of conditional variational models, or were not appropriate for forgetting. 
Extending these methods to our setting would constitute major separate research. For additional details regarding these methods, please refer to our response to reviewer 3. > the proposed method relies on strong generative replay, which potentially deviates from the motivation. I encourage the authors to compare the computational overhead of generative replay, and also consider it when comparing with other baselines. We thank the reviewer for the suggestion on the overhead of GR, which we will include in our revised manuscript. The cost of GR comes from generating $D_r$. In the case of SD, we use a representative set of 5000 images generated from prompts by ChatGPT, which takes approximately 6 hours on a single GPU. As these prompts are general, we reuse the same $D_r$ across all SD experiments. The GR overhead is far smaller than the computational resources needed to retrain SD from scratch (reported to require 256 A100s for 150,000 GPU hours). Also, the cost of generating $D_r$ can be amortized over many runs, making it negligible compared to the cost of training SA or baselines like ESD (which we discuss further in our response to reviewer 3). Thank you again for your review and feedback. We hope our response has addressed your concerns. If so, we hope you will consider updating your review and score. References [1] Nguyen, T., et al. "A survey of machine unlearning." [2] Bourtoule, L., et al. "Machine unlearning." [3] Nguyen, Q.P., Low, B.K.H., and Jaillet, P. "Variational Bayesian unlearning." [4] Golatkar, A., Achille, A., and Soatto, S. "Eternal sunshine of the spotless net: Selective forgetting in deep networks." [5] Wu, Y., Dobriban, E., and Davidson, S. "DeltaGrad: Rapid retraining of machine learning models." --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thank you for replying to my comments and questions. This rebuttal has addressed my concerns. 
I appreciate the idea of introducing graceful forgetting in generative models, which is important for realistic applications. The use of ChatGPT for generative replay is also an innovative idea for this technical route (especially for Diffusion Models). I am increasing my score and am inclined to accept this paper. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback and we appreciate that you have raised your score!
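The $F^{-1/4}$ instability noted in the rebuttal above (when adapting Fisher forgetting to the MNIST VAE) can be illustrated numerically. The sketch below is our own toy illustration under a diagonal-Fisher assumption, not the paper's code: near-zero Fisher entries, common for parameters that barely influence the loss, make the $F^{-1/4}$ noise scale blow up unless a damping term is added.

```python
import numpy as np

rng = np.random.default_rng(0)
# Diagonal approximation of the Fisher information for a large model:
# many entries are near zero (here: spread over 12 orders of magnitude).
fisher_diag = 10.0 ** rng.uniform(-12, 0, size=10_000)

# Fisher forgetting scrubs weights with noise scaled by F^{-1/4};
# near-zero Fisher entries make that scale explode.
noise_scale = fisher_diag ** (-0.25)
print(noise_scale.max())

# A damping term is the usual stabilizer, at the cost of biasing the update:
# the scale is now bounded above by eps**(-0.25).
eps = 1e-8
damped_scale = (fisher_diag + eps) ** (-0.25)
print(damped_scale.max())
```

The damping constant `eps` here is an illustrative choice, not a value from either paper.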
Summary: The paper aims to forget selected concepts from deep generative models trained with variational inference, with a focus on trustworthiness concerns such as malicious prompts. The paper introduces a continual learning approach that combines EWC and GR to maximize the likelihood (or ELBO) on the remembered set. Experimentally, the paper evaluates forgetting quality for simple models on standard datasets and for large-scale Stable Diffusion on real-world datasets. Strengths: The forgetting task is very meaningful and important, as we now face such trustworthiness challenges in the large-model era. The continual learning approach that combines EWC and GR is a good idea and solves several practical difficulties, as discussed in the methodology section. There is theory that guarantees improved likelihood after using the surrogate loss. The framework applies to several types of models trained with likelihood or ELBO. The results on forgetting identities and malicious content from Stable Diffusion are interesting and potentially useful. Weaknesses: The selection of $q$ is not well studied in experiments and seems arbitrary. There should be a more detailed explanation of the selection of $q$. The paper lacks analysis of the computational resources needed for the continual learning approach. The experimental results for simple models on standard datasets are not compared to baseline methods. There are a few methods with similar goals in the wild, including data redaction, feature unlearning, model rewriting, model taming, etc. The experimental results for Stable Diffusion do not seem to be better than baseline methods, especially "Erasing Concepts from Diffusion Models". Some minor points: I recommend the authors discuss more related research (mentioned above) in the related work section so that readers can get a better sense of the literature. Please check typos and unbalanced parentheses in the math equations. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What are the criteria for selecting $q$? Why does the selection of $q$ lead to mapping of concepts (line 262)? How much computational resource (memory and time) do you need to continually learn Stable Diffusion? Do you fine-tune a subset of layers or train all layers? What is the advantage of the proposed method compared to baseline methods for Stable Diffusion? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The paper addressed limitations in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the motivations and usefulness of our work. Below we provide our responses to the reviewer's concerns. > ...detailed explanation on the selection of q The choice of $q$ can be understood theoretically from Corollary 1: a greater difference between $q$ and the distribution over $D_f$ means a lower likelihood over $D_f$. In general, we want $q$ to be far from $D_f$. For instance, in the experiments of Table 1, $q$ is chosen to be uniform noise as it is intuitively far from the distribution of natural images. However, we stress that Corollary 1 serves as a heuristic, and users are free to choose the $q$ that is most relevant for their use case. In our SD experiments, we attempt to forget certain celebrities, and a semantically relevant choice is to have the model generate unrecognizable persons. > Why does the selection of q lead to mapping of concepts? SA trains the model to forget $D_f$ by instead generating concepts from $q$ when conditioned on $c_f$. For instance, in the experiments in Sec 4.2, as we chose $q$ to represent images of middle-aged men, the model generates unrecognizable men when prompted for Brad Pitt. > Do you fine-tune a subset of layers or train all layers? Our method does not make assumptions about network architecture; thus we tune all layers, except in the nudity experiments, where we tune only the unconditional layers of SD. One could also tune only the cross-attention layers to minimize interference with other concepts (explored in appendix E). > ...analysis on the computational resources needed Appendix B mentions computational resources, which we will expand to include more details. In brief, we used 2 RTX A5000s for DDPM experiments and 4 A6000s for SD experiments. Training with SA takes 4-5 hours on DDPM and 20 hours on SD. Memory usage in SD is approximately 40GB. 
Note that we did not optimize for computational efficiency in our reported experiments; by tuning hyperparameters in preliminary experiments, we could achieve similar performance with 2 GPUs and 6 hours of training. For comparison, the ESD baseline also uses 2 GPUs and around 2 hours of training. We believe additional performance gains can be achieved with further tuning. > ...results for simple models on standard datasets are not compared to baseline methods… including data redaction, feature unlearning, model rewriting, model taming, etc *and* ...discuss more related research (mentioned above) in the related work section Thank you for the suggested keywords. We surveyed representative works, but the methods are not applicable to forgetting concepts in conditional variational generative models, or require major extensions. We elaborate below, and will include these discussions in the revised manuscript: [1] and [2] pertain to GANs and Normalizing Flows, respectively. The former requires the discriminator as feedback for the generator for data redaction. The latter [2] employs the exact likelihood computation of NFs to reduce the likelihood over $D_f$. Variational models lack access to exact likelihoods, and Sec 3.2 shows that minimizing the ELBO over $D_f$ leads to poor results. [3] implicitly assumes that the generator's latent space has disentangled features, which does not apply to our conditional models. For instance, a given latent $z$ can generate all ten digits of MNIST, $x_i=G_\theta(z,c_i), i=0, \dots, 9$, by changing the conditioning signal. Hence, it is unclear how one would apply [3] to conditional models. [4] is work on image editing, such as removing watermarks from images, by directly modifying a single layer's weights in a generator. The paper's experiments focused only on GANs and required finding the best layer to tune for specific applications. 
This method was not designed for forgetting concepts in conditional variational models, and we are unsure whether it qualifies as a suitable baseline. Preliminary experiments with the provided code (on GANs), applied to the more drastic changes that forgetting necessitates, did not yield the desired results; e.g., altering entire roofs of churches into domes led to severe visual artifacts. > The experimental results for stable diffusion do not seem to be better than baseline methods, especially the "erasing concepts from diffusion models" We do not claim that our method is strictly better than the strong baselines, at least purely in terms of the classifier metrics. We emphasize that the metrics only provide a partial view of the results; the compared methods have qualitatively different behaviors, as discussed in Sec. 4.2 and App. C. ESD (and SLD) steers the model in arbitrary, uncontrollable directions away from the concept, which results in generated images lacking semantic relevance. For instance, ESD frequently produces images without faces or persons, such as houses or cars when prompted for Angelina Jolie (Fig 13 of app.), or mountains and forests in the nudity experiments (Fig 5). This is reflected in the high proportion of generated images without faces (over 30%) in Table 2 of the appendix. By choosing $q$ to be images of unrecognizable persons, SA produces fewer images without faces (about 5%). SLD, in comparison, tends to produce distorted faces with visual artifacts. We believe these aspects should be considered when comparing methods. > Please check typos and unbalanced parentheses in the math equations. We thank the reviewer for pointing out the typos. We have corrected the unbalanced parentheses in Thm 1 and Corollary 1. [1] Kong, Z., and Chaudhuri, K. "Data redaction from pre-trained GANs." [2] Malnick, S., Avidan, S., and Fried, O. "Taming a Generative Model." [3] Moon, S., Cho, S., and Kim, D. "Feature unlearning for generative models via implicit feedback." 
[4] Bau, D., et al. "Rewriting a deep generative model." We hope that we have addressed your concerns. If so, we kindly request that you consider updating your review and score. --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks to the authors for the reply. The response addressed some of my questions. Regarding baseline models, please include the discussion in the related work section. I like the idea of more "semantic relevance" in the proposed model. Table 2 is good. However, I still feel there lacks a systematic evaluation if you are trying to show this is a general phenomenon that is not restricted to faces or some specific examples. I'm increasing my score to 6. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for raising their score and for providing additional feedback. We agree with the reviewer that a systematic evaluation of forgetting a variety of concept types is desirable. However, this would require a general framework (vis-a-vis our classifier-based approach), such as evaluation of the model's likelihoods, and benchmark datasets, which is an interesting avenue for future research. Within the context of this paper, we have conducted additional experiments on forgetting artist styles (e.g., forgetting "van Gogh style" by setting $q$ to images of "pop art style") and obtained positive results. We will include these in the revised appendix. Taken together with the main quantitative and qualitative results, we believe this shows that Selective Amnesia works on a variety of concepts.
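The Corollary-1 intuition discussed in this thread, that steering the model toward a surrogate $q$ far from $D_f$ lowers the likelihood over $D_f$, can be sketched in a one-dimensional Gaussian toy model. This is our own illustration; the closed-form "training" step and all numbers are assumptions, not the paper's actual objective:

```python
import numpy as np

rng = np.random.default_rng(0)
d_f = rng.normal(loc=0.0, scale=1.0, size=1000)        # data to forget
q_samples = rng.normal(loc=5.0, scale=1.0, size=1000)  # surrogate q, far from D_f

def gaussian_loglik(x, mu):
    # Average log-likelihood of samples x under N(mu, 1).
    return -0.5 * np.mean((x - mu) ** 2) - 0.5 * np.log(2 * np.pi)

# "Training" = fitting mu to the surrogate samples (closed form for a Gaussian,
# standing in for maximizing the ELBO on q-samples conditioned on c_f).
mu_new = q_samples.mean()

# Likelihood on D_f drops as the model moves toward q.
print(gaussian_loglik(d_f, mu=0.0), gaussian_loglik(d_f, mu=mu_new))
```

The further `q_samples` is placed from `d_f`, the larger the likelihood drop, which mirrors the heuristic role of Corollary 1 described above.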
Summary: In this work, authors introduce Selective Amnesia - a new method for selective forgetting of particular concepts in generative modeling without access to the training data. Authors propose to use Continual Learning methods - namely, Elastic Weights Consolidation and Generative Replay to retrain the base model with carefully selected additional data that promote forgetting of examples conditioned on particular conditional values without affecting the other ones. In particular, to enforce forgetting, authors introduce a surrogate objective that, instead of just minimizing the probability of generating examples from a distribution that needs to be forgotten, substitutes it with a different one e.g., random noise. Strengths: - This submission tackles an important problem crucial for the reliable use of recent ML applications. - The main contribution is clear, seems to be well thought and is well presented both in terms of intuition and details. - The proposed evaluation is extensive, convincing, and well-structured. The experiment results conducted on several datasets with multiple methods are impressive. Simple experiments were implemented with Variational Autoencoders, while the more extensive ones with high-quality datasets employed diffusion models. - The work is well-written and easy to follow. - Additionally to the main contribution, this work introduces an interesting reformulation of the EWC method. Weaknesses: The proposed method does not really enforce forgetting the particular concept but rather promotes replacing it with a different one. This might lead to fake associations between conditional values and generated outputs. For example, as presented in this work, forgetting “Brad Pitt” generations by learning to generate a “male clown” instead might also affect other generations with prompts similar to “Brad Pitt” - e.g. other actors. 
This problem should be more evident when trying to forget more general concepts by replacing them with different ones. To make the method more general, it would be interesting to find a technique that automatically finds a suitable surrogate dataset. Small: - Small detail: "most prior work on continual learning for generative models are applied to GANs [19, 20], while our work is primarily concerned with variational models." - This is not true; see, for example, (1,2,3,4,5). Those methods tackle the problem of continual learning of variational models, so the fact that they already exist does not affect the novelty of this work. However, I think it would be beneficial to mention at least [1], which first introduced EWC for Variational Autoencoders. I don't believe the rest should be cited, but I just wanted to point those out to the authors as potentially interesting for any future works. 1. Nguyen, Cuong V., et al. "Variational Continual Learning." International Conference on Learning Representations. 2. Egorov, Evgenii, Anna Kuzina, and Evgeny Burnaev. "BooVAE: Boosting approach for continual learning of VAE." Advances in Neural Information Processing Systems 34 (2021): 17889-17901. 3. Achille, Alessandro, et al. "Life-long disentangled representation learning with cross-domain latent homologies." Advances in Neural Information Processing Systems 31 (2018). 4. Mundt, M., Pliushch, I., Majumder, S., Hong, Y., & Ramesh, V. (2022). Unified probabilistic deep continual learning through generative replay and open set recognition. Journal of Imaging, 8(4), 93. 5. Deja, Kamil, et al. "Multiband VAE: Latent Space Alignment for Knowledge Consolidation in Continual Learning." IJCAI 2022. Limitation: how precise is the forgetting presented in this method? What happens if the concept we want to forget is highly correlated with another one? E.g., if we take the CelebA dataset, I think it might be very hard to forget the "wearing lipstick" class without forgetting the "heavy makeup" one. 
From a more precise point of view, I believe this question relates to the problem of how adequate the approximation of $p(x|c)p_f(c)$ is with generations of concepts that we want to forget/remember. What would happen if we simply skip the EWC regularisation term and continue model retraining with only two ELBOs with generative replay? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - How precise is the forgetting presented in this method? What happens if the concept we want to forget is highly correlated with another one? E.g., if we take the CelebA dataset, I think it might be very hard to forget the "wearing lipstick" class without forgetting the "heavy makeup" one. From a more precise point of view, I believe this question relates to the problem of how adequate the approximation of $p(x|c)p_f(c)$ is with generations of concepts that we want to forget/remember. - What would happen if we simply skip the EWC regularisation term and continue model retraining with only two ELBOs with generative replay? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Authors address the main limitations of the work except for the one described in weaknesses/questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for acknowledging the strong motivation and contributions of our work, as well as our extensive experimental results. Below we provide our responses to the concerns raised by the reviewer. > What happens if the concept we want to forget is highly correlated with another one? We thank the reviewer for raising this issue of "leakage" to correlated concepts. We have documented this effect in appendix E, where we show that forgetting "Angelina Jolie" leads to slight changes when generating "Jennifer Aniston". We discussed a way to mitigate this issue, which is to tune only the cross-attention layers in Stable Diffusion. We view this "leakage" issue as a potential double-edged sword that could be beneficial in situations where we would like to forget highly correlated concepts. For instance, in the nudity experiments in Sec 4.2, one would prefer the remapping of clothed persons to generalize to related prompts, such as when the prompt includes artist styles that typically contain nudity. > ...it would be interesting to find a technique that automatically finds the suitable surrogate dataset This is an interesting idea. In this work, our approach was to enable the user to pick $q$ as the surrogate distribution, as its choice may be context-dependent (e.g., due to cultural differences). That said, the system can suggest potential $q$'s that are commonly accepted to be benign yet retain the generation capabilities of the model. We will add this discussion to the revised manuscript as future work. > ...see, for example (1,2,3,4,5)... I think it would be beneficial to mention at least [1], that first introduced EWC for Variational Autoencoders Thank you for pointing out these related works; we will revise our discussion in Sec 2.2. As the reviewer notes, [1] addresses the continual learning problem and was applied to a VAE to sequentially learn MNIST classes. 
The method uses a variational approximation to calculate the posterior over the parameters of a model for the $T$-th learning task, given the parameters for the $(T-1)$-th learning task. Our setting here is different in that we work on forgetting specific concepts rather than continual learning; we will amend our related works section to discuss the relationship. > What would happen if we simply skip the EWC regularisation term and continue model retraining with only two ELBOs with generative replay? As suggested by the reviewer, we ran our method on DDPM to forget the airplanes class in CIFAR10 with EWC turned off completely. This is achieved by setting $\lambda=0$. We kept all other hyperparameters identical to the experiments in Table 1. After training, we evaluated the image quality on the remaining 9 classes. We obtained an FID of 45.7, a precision of 0.078 and a recall of 0.803, which suggests that image fidelity significantly deteriorated without the EWC term. This result is overall in line with the ablations in Table 1, which show that image fidelity decreases as we decrease the strength of the EWC term. [1] Nguyen, Cuong V., et al. "Variational continual learning." arXiv preprint arXiv:1710.10628 (2017). Thank you for your positive review and we hope that we have addressed your remaining concerns. If there are any further issues, please let us know. --- Rebuttal Comment 1.1: Title: Rebuttal comment Comment: Thank you for addressing all of my comments and questions and for pointing out the discussion on the concept leakage in the appendix. I agree that this might be a double-edged sword, although this is a significant limitation of this work if someone wants to apply it in practice. Maybe the careful selection of the surrogate distribution could help in preventing changes in the correlated concepts? I am satisfied with the reply, and I am strongly convinced that this work should be accepted for NeurIPS. --- Reply to Comment 1.1.1: Comment: Thank you for your positive remarks! 
Regarding the selection of $q$ to further prevent concept leakage, this may be possible in conjunction with other schemes, e.g., training only the cross-attention in Stable Diffusion. Another approach might be to automatically remap highly-correlated concepts to related $q$'s, but this remains future work.
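The $\lambda$-weighted EWC anchoring discussed in this thread (and ablated above by setting $\lambda=0$) can be sketched with the standard diagonal-Fisher quadratic penalty. This is a generic EWC sketch with made-up numbers, not the paper's implementation:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher_diag, lam):
    """Standard diagonal-Fisher EWC penalty: (lam/2) * sum_i F_i (theta_i - theta*_i)^2."""
    return 0.5 * lam * np.sum(fisher_diag * (theta - theta_star) ** 2)

theta_star = np.zeros(4)                   # original (pre-forgetting) weights
theta = np.array([0.5, -0.5, 1.0, 0.0])    # weights after some forgetting updates
fisher = np.array([10.0, 0.1, 5.0, 0.0])   # high entries = weights important to D_r

# Drift on high-Fisher weights dominates the penalty...
print(ewc_penalty(theta, theta_star, fisher, lam=100.0))
# ...and lam = 0 removes the anchor entirely, as in the ablation.
print(ewc_penalty(theta, theta_star, fisher, lam=0.0))
```

With $\lambda=0$ nothing restrains drift on weights important to the remembered data, which is consistent with the FID degradation reported in the ablation.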
Summary: This paper presents a technique called Selective Amnesia, which enables controllable forgetting of concepts in pretrained deep generative models. The authors show that this technique can be applied to a variety of models, including text-to-image and VAEs, and can be framed from the perspective of continual learning. The paper discusses two popular approaches used in their work: Elastic Weight Consolidation and Generative Replay. The authors argue that this technique can be used to prevent the generation of harmful, misleading, and inappropriate content by deep generative models. Strengths: 1. The manuscript ventures to address a vital and emergent challenge — ensuring content safety within the ambit of generative artificial intelligence. This issue is of critical importance, given the increasing prevalence and capabilities of generative models, coupled with the potential risks and implications associated with their misuse. The authors' focus on such a topical concern is commendable and lends significant relevance and timeliness to their work. 2. The authors introduce a model inspired by continual learning — a concept that, while established in the broader field of machine learning, presents a novel approach in the specific context of generative model safety. This inventive application of continual learning principles adds a fresh dimension to the discourse on generative model safety, thereby enhancing the manuscript's contribution to this field of study. 3. The selection of datasets for the empirical evaluation is impressively diverse. The authors have conducted experiments on a broad spectrum of datasets, ranging from simpler, well-established ones such as MNIST to more complex, real-life image datasets. This range serves to validate the robustness and versatility of their model across different levels of complexity and in varied contexts, adding significant credibility to their findings. 
Furthermore, this choice reflects a meticulous approach to experimental design, allowing the model's performance to be tested and evaluated under a wide array of conditions. 4. The quality of exposition is commendable. The paper presents a clear narrative, beginning with the intuitive premise, followed by a comprehensive description of the model, including a highly useful algorithmic representation. The logical sequencing of these sections facilitates reader comprehension. The authors have succeeded in presenting a complex topic in an accessible and cogent manner. Weaknesses: The manuscript is of high quality and is presented competently, with no major flaws readily apparent. However, my primary concern pertains predominantly to the scope and rigor of the quantitative evaluation. 1. The current quantitative evaluation is exclusively conducted on relatively simple datasets. While this serves as a valid starting point, extending the quantitative evaluation to more challenging datasets would provide a more comprehensive perspective on the model's performance. Although this might present additional difficulties, I would encourage the authors to devise some form of quantitative measure, even if it requires human assessment for accuracy. Such a quantitative comparison would help to mitigate potential selection bias and lend greater credence to the results. 2. A more thorough discussion on the chosen baseline models and the current state-of-the-art models would be greatly beneficial. By elucidating how these models relate to the proposed model, and where they diverge, the authors could provide a clearer context for their work. Such discourse would enhance the readers' understanding of the field's landscape and allow them to appreciate the distinctiveness and the value of the authors' contribution more deeply. 3. The comparative analysis currently conducted is primarily qualitative in nature. 
However, this leaves a gap in the evaluation, as quantitative comparisons can provide unique insights that are not always captured by qualitative analysis. Quantitative comparisons can offer more objective, replicable, and measurable findings, which are highly valuable in establishing the proposed model's superiority over existing models. Addressing these concerns would add depth to the study, providing a more rigorous and holistic evaluation of the proposed model. Such improvements would be of significant value, helping to fortify the authors' claims and making a more compelling case for the paper's contributions to the field. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the above sections for detailed discussion. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, there is a section at the end of the paper discussing limitations and future research opportunities. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for acknowledging the strong motivations, novelty and quality of our work. Below we provide our responses to the concerns raised by the reviewer. > ...extending the quantitative evaluation to more challenging datasets would provide a more comprehensive perspective on the model's performance *and* ..even if it requires human assessment for accuracy Quantitative results for the Stable Diffusion experiments are shown in Tables 2, 3 and 4 in Appendix C (due to space constraints). We will make this clearer in the main text. In brief, we evaluated these experiments with well-established pretrained classifiers: the GIPHY Celebrity Detector (GCD) and NudeNet nudity classifier; the latter was employed in prior works on SLD and ESD. We agree with the reviewer that additional quantitative measures, such as human evaluations, would provide a more holistic evaluation due to the many factors one can consider when forgetting concepts in Stable Diffusion. For instance, we have noted in Sec 4.2 of the paper that our model produces more semantically-relevant images (far fewer faceless images), and the faces that our model generates are of higher quality (particularly compared to SLD), despite having a higher GCD score (where lower is better) than ESD and SLD. We will add to Sec. 5 that conducting human evaluations is part of our future work. > A more thorough discussion on the chosen baseline models and the current state-of-the-art models would be greatly beneficial. We included discussion and comparisons with works from the machine unlearning literature as well as baselines in Stable Diffusion (ESD and SLD) in Sec. 2.3 and 2.4, respectively. Discussions on qualitative and quantitative results from experiments between our method and ESD and SLD are also included in Sec 4.2. We will include additional discussions in the related works section on feature unlearning, data redaction and model rewriting in deep generative models. 
We believe this expanded discussion will be reasonably comprehensive given space constraints. If the reviewer has specific suggestions for relevant works or comparisons, please let us know. We hope that we have addressed your concerns. If so, we kindly request that you consider raising your score. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It has addressed most of my concerns. With everything considered, I'd raise my rating to 6. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their positive feedback and for raising their score!
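The classifier-based evaluation referenced in this thread (GCD / NudeNet scores) boils down to the fraction of generations still recognized as the forgotten concept. A generic sketch, with hypothetical labels standing in for real classifier outputs (the classifier wrappers themselves are not shown):

```python
def forgetting_score(predicted_labels, forgotten_concept):
    """Fraction of generations still classified as the forgotten concept
    (lower is better), in the spirit of the GCD-style evaluation above."""
    hits = sum(1 for label in predicted_labels if label == forgotten_concept)
    return hits / len(predicted_labels)

# Hypothetical labels from a pretrained celebrity classifier on images
# generated with the forgotten prompt, before and after forgetting.
before = ["brad_pitt"] * 9 + ["other"]
after = ["brad_pitt"] * 1 + ["other"] * 9

print(forgetting_score(before, "brad_pitt"))  # 0.9
print(forgetting_score(after, "brad_pitt"))   # 0.1
```

As the rebuttal notes, such a score is only a partial view: it cannot distinguish semantically relevant replacements (unrecognizable faces) from degenerate ones (no faces at all), which is why the faceless-image proportion is reported separately.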
Rebuttal 1: Rebuttal: Thank you to the reviewers for their comments and feedback. We appreciate the positive comments and that the reviewers find our continual learning approach to concept forgetting to be meaningful and technically interesting. Based on the comments, we have revised the paper to include (i) additional comparisons to related work pointed out by the reviewers, (ii) details on computational cost, and (iii) text to clearly point out relevant portions of the Appendix (e.g., on concept leakage and additional quantitative results). Please see below for detailed responses to each reviewer.
NeurIPS_2023_submissions_huggingface
2023
Subspace Identification for Multi-Source Domain Adaptation
Accept (spotlight)
Summary: This paper considers multi-source domain adaptation problems in the context of causal representation learning. Specifically, the paper considers quite general scenarios in domain adaptation by distinguishing four different types of representations: - Domain-specific and label-irrelevant - Domain-specific and label-relevant - Domain-invariant and label-irrelevant - Domain-invariant and label-relevant Notably, the paper derives theoretical assumptions illustrating when these four representations (or subspaces) can be identified. A VAE-based deep learning approach is further proposed to identify the subspaces. Empirical results are validated on synthetic data and benchmark domain adaptation datasets (including the challenging DomainNet dataset). Strengths: Overall this reviewer feels that this paper makes a solid contribution to the theoretical aspects of causality-based domain adaptation. Extensive experiments further validate its practical utility. - **Significance & Originality** This paper proposes a very solid theoretical justification for the identifiability issue in multi-source DA, which is a fundamental problem. More importantly, this paper studies the general setting (arguably the most difficult setting) in multi-source DA. Besides, the theoretical assumptions proposed are generally non-trivial. - **Quality** The theoretical assumptions and proofs seem valid and reasonable to me. The proposed practical method is also reasonable. Empirical results on toy data and real data demonstrate that the proposed method can better identify the different representations. - **Clarity** In general, this paper is clearly written and well explained. Since this paper is a bit theoretical, there are still several theoretical points that need to be clarified (see the weaknesses part). Weaknesses: Since this paper covers several fundamental proofs, this reviewer feels uncertain about several points within the paper. 1. Theorem 1 and its theoretical assumptions. 
In general, I could follow the proof, but the intuitions behind these assumptions could be better explained. I take Theorem 1 as an example. - About conditional independence. If I understand correctly, this implies that the representation variable satisfies $P(Z|U) = \prod_i P(z_i|U)$ (a sort of disentanglement or mean-field approximation assumption)? If we write it as a log probability term, it will be $\log P(Z|U) = \sum_i \log P(z_i|U)$, which is consistent with your paper… - About linear independence. This assumption seems to illustrate that the gradients of the probability density over each component should be linearly independent. That implies that if we use a gradient-based approach to learn each component, each component will converge to a distinct direction, so we could possibly identify the different independent components. - I have a general question about Theorem 1 (for example). Is it possible to extend this assumption to blockwise vectors, i.e., $P(Z|U) = P(Z_1|U)P(Z_2|U)P(Z_3|U)P(Z_4|U)$, where $Z_1$ is a subspace vector rather than a scalar? 2. About explanations of the toy data. It seems that the proposed method works quite well on the toy data. However, the experimental details (such as the data generation) could be better clarified. For example, is there some sort of spurious correlation? 3. In Equation (1) Line 3, I would think this holds, given Figure 1, by conditional independence. 4. I like the example explaining the different subspaces in lines 82-92. It could be better to place this example in more real-world scenarios such as drug discovery and healthcare. For example, data collected from different hospitals may create different $Z_1$; this could help readers better understand the importance of considering the general scenario. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See comments in the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your valuable comments, helpful suggestions, and encouragement. Below please see our point-by-point responses to your comments and suggestions. >**Q1**: Theorem 1 about theoretical assumptions. In general, I could follow the proof, where intuitions within these assumptions could be better explained. I just take Theorem 1 as an example. >1) About conditional independence. If I understand correctly, this implies, the representation variable $P(z|u)=\prod_i P(z_i|u)$? (sort of disentanglement or mean-field approximation assumption)? If we write it as a log probability term, it will be $\log P(z|u)=\sum_i \log P(z_i|u)$, which is consistent with your paper… >2) About linear independence. This assumption seems to illustrate that the gradient of the probability density over each component should be linearly independent. That implies if we use a gradient-based approach to learn each component, each component will converge in a distinct direction. Thus we could possibly identify different independent components. >3) I have a general question about Theorem 1 (for example). Is it possible to extend this assumption to blockwise vectors? I.e., $P(z|u)=P(z_1|u)P(z_2|u)P(z_3|u)P(z_4|u)$, where $z_1$ is a subspace vector rather than a scalar. **A1**: We would like to express our sincere gratitude for your meticulous review. We answer these questions point by point. 1) As for conditional independence, we think that the reviewer's understanding, where the representation variables follow a mean-field approximation assumption, is correct. The intuition behind conditional independence is that the changing factors are independent of each other. For example, in the image classification scenario, the light directions of images and the resolution of images change independently. 2) We think there may be some confusion between the derivative with respect to the latent variables and the derivative with respect to the model parameters. 
On one hand, the linear independence assumption implies that the vectors $w(z,u_j)-w(z,u_0)$ are linearly independent, and each element in $w(z,u_j)=(\frac{\partial q_1(z_1,u_j)}{\partial z_1},\cdots, \frac{\partial q_i(z_i,u_j)}{\partial z_i},\cdots, \frac{\partial q_{n_s}(z_{n_s},u_j)}{\partial z_{n_s}})$ denotes the first-order derivative of the ground-truth log probability density $q_i(z_i,u_j)$ with respect to the ground-truth latent variable $z_i$. On the other hand, the gradient used in gradient-based approaches is the derivative of the loss with respect to the parameters of the models. These two derivatives are different. 3) Existing disentanglement methods based on nonlinear ICA usually assume that the dimensions of the latent variables are independent of each other. However, there are cases where the independence assumption cannot be met. So disentanglement under dependent latent variables ($p(z|u)=P(z_1|u)P(z_2|u)P(z_3|u)P(z_4|u)$ is a special case of dependent latent variables) is a future direction. In this scenario, homogeneous linear equations are hard to develop since the latent variables are not independent, but the property that latent variables are independent given their Markov blanket can be used to develop the homogeneous linear equations. >**Q2**: About explanations of the toy data. It seems that the proposed method works quite significantly on toy data. However, the experimental details (such as the data generation) could be better clarified. For example, is there some sort of spurious correlation? **A2**: In light of your valuable suggestions, we have reorganized the description of the causal generation process. In detail, we generate simulation data for binary classification with 8 domains, following the data generation process shown in Figure 2, which includes two types of latent variables, i.e., domain-specific latent variables $z_s$ and domain-invariant latent variables $z_c$. 
To better evaluate the subspace identification results, we let the label distribution be the same across different domains, so there is no spurious correlation. The domain-specific latent variables $z_s$ are sampled from 8 different mixtures of Gaussians, and $z_c$ is sampled from a factorized Gaussian distribution. As for the nonlinear generation process, we let the observed variables be generated via an MLP with the Tanh activation function. >**Q3**: In Equation (1) Line 3, I would think this holds when we have Figure 1 by considering conditional independence. **A3**: Thank you for your reminder. We totally agree with you, and we leverage the property of the causal generation process when deriving Equation (1). In light of your suggestion, we have provided a clear explanation of how to derive Equation (1). In detail, the derivation in Equation (1) can be separated into three steps. 1) We introduce the latent variables $z_1,z_2,z_3$, and $z_4$, which are mentioned in Section 2.1. 2) We factorize the joint distribution in Equation (1) into $p_{x,z_1,z_2,z_3,z_4|y,u_{\mathcal{T}}}$ and $p_{y|u_{\mathcal{T}}}$ with the help of Bayes' rule. 3) We further use Bayes' rule to factorize $p_{x,z_1,z_2,z_3,z_4|y,u_{\mathcal{T}}}$. Since $x$ is conditionally independent of $u, y$ given $z_1,z_2,z_3,z_4$, we can obtain $p_{x|z_1,z_2,z_3,z_4}$. >**Q4**: ...It could be better to consider this example in more real-world scenarios such as drug discovery and health. For example, .... **A4**: Thank you for your affirmation. In the health scenario, we provide an example of malaria detection: $z_1$ denotes the domain-specific information, for example, the children's hospital versus the contagious-disease hospital; $z_2$ denotes the latent variables, such as age, that can be relevant to both domains and labels. 
For example, infants are more likely to suffer from malaria, and the age of patients is also influenced by the hospital; $z_3$ denotes the symptoms of patients; and $z_4$ denotes the label-irrelevant latent variables like gender. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I think it has addressed my confusion about the theoretical parts. I think it could be useful to include a short discussion about the intuition behind the different assumptions. After further checking the other reviews, I recommend acceptance. --- Reply to Comment 1.1.1: Comment: We are glad to have addressed your confusion about the theory, and thank you for the valuable suggestions. We will provide a short discussion about the intuition behind the assumptions. With best wishes, Authors of submission #4882
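The toy-data generation described in A2 (8 domains, domain-specific $z_s$ drawn from per-domain Gaussian mixtures, domain-invariant $z_c$ from a factorized Gaussian, observations produced by a Tanh MLP, identical label distribution across domains) might be sketched as follows; the latent dimensions, mixture component count, noise scales, and labeling rule are illustrative assumptions rather than the authors' exact configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_domains, n_per_domain = 8, 500
dim_s, dim_c = 2, 2  # latent dimensions (assumed for illustration)

# Fixed nonlinear mixing: a random two-layer MLP with Tanh activation,
# shared by all domains.
W1 = rng.standard_normal((dim_s + dim_c, 8))
W2 = rng.standard_normal((8, dim_s + dim_c))

def gen_domain(u):
    # Domain-specific z_s: a Gaussian mixture whose component means depend on u.
    means = u + rng.standard_normal((3, dim_s))          # 3 components per domain
    comp = rng.integers(0, 3, size=n_per_domain)
    z_s = means[comp] + 0.3 * rng.standard_normal((n_per_domain, dim_s))
    # Domain-invariant z_c: a factorized (diagonal) standard Gaussian.
    z_c = rng.standard_normal((n_per_domain, dim_c))
    x = np.tanh(np.concatenate([z_s, z_c], axis=1) @ W1) @ W2
    # Labels depend only on z_c, so the label distribution is the same
    # across domains (no spurious correlation).
    y = (z_c[:, 0] > 0).astype(int)
    return x, y

data = [gen_domain(u) for u in range(n_domains)]
```

Keeping the mixing MLP fixed across domains ensures that only the latent distribution of $z_s$ changes with the domain index, matching the generation process in the rebuttal.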
Summary: This paper studies the problem of multi-source domain adaptation, where we have access to multiple labeled source domains and an unlabeled target domain. The authors consider a novel data generation process by modeling it through 4 new latent variables (i.e., combinations of domain-specific/domain-invariant and label-relevant/label-irrelevant). This is done to relax the assumptions made by previous theoretical analyses. The authors provide sufficient theoretical analysis in this context by providing subspace identification guarantees and using them to improve the results. They conduct multiple experiments, including synthetic experiments, and the results are satisfactory. Strengths: * The paper is well-presented. * The paper provides a new framework through its modeling of the data generation process, introducing the 4 latent variables and considering how they are influenced by the domain index and the target labels. * The theoretical analysis is convincing and the experiments are impressive. Weaknesses: * This framework is somewhat similar to an earlier work on domain generalization (https://arxiv.org/pdf/2102.11436.pdf). I agree that your data generation process further expands on their structural causal model, but noting the similarities, it should be cited, with the differences expanded on. * Importantly, since we model label-relevant attributes separately, should we still make the assumption that source and target contain the same labels? * My question specifically is: can you comment on whether this framework naturally supports open-set/partial MSDAs? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please comment on the weaknesses I raised above; and if you could provide some results even on one dataset where there is either a lesser number of classes in the target than in the source (https://arxiv.org/abs/1808.04205), or as in other open-set DA problems, that would be great. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for your valuable comments, helpful suggestions, and encouragement. Below please see our point-by-point responses to your comments and suggestions. >**Q1**: This framework is somewhat similar to earlier work on domain generalization (https://arxiv.org/pdf/2102.11436.pdf). I agree that your data generation process further expands on their structural causal model but noting the similarities it should be cited and expand on the differences **A1**: We sincerely appreciate your recommendation of these methods, which helps clarify the difference between our contributions and other works. In light of your suggestions, we have included the discussions and comparisons in the revised related work. Several methods [1][2][3] employ a prior causal structure to address distribution shift challenges. Although these methods leverage causal graphs to investigate how the distributions change, the proposed method employs a different causal generation process, which considers different types of domain shifts and is more general. Moreover, our method considers different types of latent variables and provides subspace identification guarantees. [1] Model-Based Domain Generalization, Alexander Robey, George J. Pappas, Hamed Hassani, NeurIPS 2021 [2] Learning Disentangled Semantic Representation for Domain Adaptation, Ruichu Cai, Zijian Li, Pengfei Wei, Jie Qiao, Kun Zhang, Zhifeng Hao, IJCAI 2019 [3] Domain Adaptation under Target and Conditional Shift, Kun Zhang, Bernhard Schölkopf, Krikamol Muandet, Zhikun Wang, ICML 2013 >**Q2, Q3, Q4:** Importantly, since we model label-relevant attributes separately, should we still make the assumption that source and target contain the same labels? My question specifically is can you comment on if this framework naturally supports open-set/partial MSDAs. 
Please comment on the weakness I raised above and if you could provide some results even on one dataset where there is either a lesser number of classes in target than in source (https://arxiv.org/abs/1808.04205) or like in other open set DA problems that would be great **A**: Thanks for your valuable suggestions. We would like to highlight that the proposed SIG is a general framework for multi-source domain adaptation. In this paper, we employ the standard multi-source domain adaptation setting to evaluate our method, where the label spaces of the source and target domains are assumed to be the same. Generally speaking, our method can be extended to other scenarios, such as open-set MSDA and partial MSDA. As you mentioned, this is because we model label-relevant attributes separately and hence can allow settings where the source and target domains contain different labels. For example, in partial MSDA [4], the source label space is a superset of the target label space. One of the challenges of partial MSDA is to mitigate the influence of source labeled data in the outlier label space. In light of your suggestion, we have extended our method to the partial MSDA setting, following the paradigm of PADA [4] and using pseudo labels to estimate the target label space. Specifically, we apply class-aware conditional alignment on the estimated target label space, in which the source-specific labels have low confidence. Moreover, we have provided the experimental results for partial MSDA in the following table, where five labels are removed from the target domain to satisfy the partial MSDA setting. We find that our SIG model achieves superior performance. 
| | Art | Clipart | Product | RealWorld | Average |
|-------|-------|---------|---------|-----------|---------|
| SIG | 76.0 | 62.7 | 85.8 | 85.6 | 77.5 |
| PSDA | 75.5 | 60.4 | 83.3 | 83.8 | 75.7 |

[4] Partial Adversarial Domain Adaptation, Zhangjie Cao, Lijia Ma, Mingsheng Long, Jianmin Wang, ECCV 2018 --- Rebuttal Comment 1.1: Title: Answers my questions. Comment: Dear Authors, Thank you for taking the time and running some more experiments. I appreciate the effort and I note the improvements in MSDA settings too. I am happy to increase my score. --- Reply to Comment 1.1.1: Comment: We are very happy that you found that the response addressed your concerns well. Thank you once again for your valuable comments and suggestions and for championing our submission. With best wishes, Authors of submission #4882
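A PADA-style estimate of the target label space, as referenced in the response above, might look like the following sketch; the function name, the mean-probability weighting, and the threshold value are illustrative assumptions, not the authors' exact procedure. The idea is to down-weight source classes whose average pseudo-label probability on the unlabeled target data is low, treating them as source-specific outlier classes.

```python
import numpy as np

def estimate_target_label_space(target_probs, threshold=0.1):
    """Estimate the target label space from pseudo-labels, PADA-style.

    target_probs: (n_target, n_source_classes) softmax outputs on the
    unlabeled target data. Classes with low average predicted probability
    are treated as source-specific outlier classes.
    """
    class_weights = target_probs.mean(axis=0)
    class_weights = class_weights / class_weights.max()  # normalize to [0, 1]
    kept = np.where(class_weights > threshold)[0]        # estimated target classes
    return kept, class_weights

# Toy check: target data concentrated on classes 0 and 2 of a 4-class source.
probs = np.array([[0.60, 0.01, 0.38, 0.01],
                  [0.55, 0.02, 0.41, 0.02],
                  [0.10, 0.01, 0.88, 0.01]])
kept, w = estimate_target_label_space(probs)
# kept -> array([0, 2]): classes 1 and 3 are flagged as source-specific
```

The resulting class weights can then be used to restrict the class-aware conditional alignment to the estimated target label space.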
Summary: In this paper, the authors investigate the problem of multi-source domain adaptation. In detail, the authors first devise a causal generation process, which considers domain-specific and label-irrelevant, domain-specific and label-relevant, domain-invariant and label-relevant, and domain-invariant and label-irrelevant variables. Based on the aforementioned generation process, the authors establish identification guarantees for the latent variables. The authors also evaluate the proposed method on several benchmarks and achieve good performance. Strengths: 1. The authors provide an interesting perspective for multi-source domain adaptation, which uses nonlinear ICA to identify the latent variables. Compared with the existing methods, the proposed method relaxes the assumptions. 2. The authors evaluate the proposed method on several datasets and achieve good results. Weaknesses: 1. According to the theoretical results, the authors give three assumptions which are similar to those in the nonlinear ICA literature. Why are these existing assumptions applicable in real-world scenarios? Moreover, according to the subspace identification results, the dimension of the latent variables is small and restricted by the number of domains, which is hard to meet in practice. 2. For real-world scenarios, the number of ground-truth latent variables is unknown; is it reasonable to assume that their number is determined by the observed domains? 3. The authors equip a cross-attention module to the ResNet101 for the DomainNet dataset; it is suggested that the authors provide experimental results on other datasets, e.g., Office-Home, with the proposed cross-attention module. Minor comments: There are some typos in this paper. For example, the illustrations of Lemma 2 in the main text and appendix are inconsistent. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NO Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our deep gratitude for the valuable feedback and helpful suggestions provided on our paper, as well as for the time you devoted to reviewing it. Below, we have addressed each of your comments and suggestions. >**Q1**: According to the theoretical results, the authors give three assumptions that are similar to those of the nonlinear ICA literature. Why are these existing assumptions applicable in real-world scenarios? Moreover, according to the subspace identification results, the dimension of latent variables is small and restricted by the number of domains, which is hard to be met in practice. **A1**: Thanks for your great question. The assumptions used in this paper are reasonable and mild. We explain them one by one. 1) Assumption 1 implies that $p(z|u)$ is smooth and positive, i.e., the domains change continuously; for example, the light directions of images change continuously. 2) Assumption 2 means that $p(z|u)=\prod_i p(z_i|u)$, i.e., the changing factors are independent of each other; for example, the light directions of images and the resolution of images are independent. 3) Assumption 3 indicates that the latent variables can be identified when the number of domains is sufficient. Since it is easy to access data with different distributions, these assumptions are applicable in real-world scenarios. In practice, the changes between different domains can be described via a small number of dimensions; for example, the change of light directions of images can be described via a one-dimensional variable. This is the common minimal change assumption in causality [1]. Therefore, the small dimension of the latent variables can be met in practice. We will add a discussion about the reasonableness of the assumptions in the final version. [1] Domain Adaptation with Invariant Representation Learning: What Transformations to Learn? 
Petar Stojanov, Zijian Li, Mingming Gong, Ruichu Cai, Jaime Carbonell, Kun Zhang, NeurIPS 2021 >**Q2**: For the real-world scenario, the number of ground-truth latent variables is unknown, is it reasonable to assume that their number is according to the observed domains? **A2**: Thank you very much for this profound question, which helps us clarify the implementation of our work. Due to the minimal change assumption mentioned above, the dimension of the changing variables is usually small. Besides, since the latent variables $z_2$ are caused by the domains $u$ and $y$, the dimension of $z_2$ is bounded by $|u|\times|y|-1$, which is large enough and hence easy to satisfy in real-world scenarios. For example, the DomainNet dataset contains 6 domains and 345 categories of objects, indicating that the dimension of the latent variables can be up to $6\times 345-1=2069$. Therefore, it is reasonable to assume that their number depends on the observed domains. >**Q3**: The authors equip a cross-attention module to the ResNet101 for the DomainNet dataset, it is suggested the authors provide the experiment results of other datasets, e.g., Office-Home, with the proposed cross-attention module. **A3**: We are grateful for your careful review and constructive suggestion to improve the completeness of our experiments. In light of your suggestion, we have provided the experimental results on the Office-Home dataset with the cross-attention module, named SIG+CA, which are shown in the table below. We find that SIG+CA also achieves comparable results, showing the effectiveness of the cross-attention module. 
| | Art | Clipart | Product | RealWorld | Average |
|-------|-------|---------|---------|-----------|---------|
| Source Only | 64.5 | 52.3 | 77.6 | 80.7 | 68.8 |
| DANN | 64.2 | 58.0 | 76.4 | 78.8 | 69.3 |
| DAN | 68.2 | 57.9 | 78.4 | 81.9 | 71.6 |
| DCTN | 66.9 | 61.8 | 79.2 | 77.7 | 71.4 |
| MFSAN | 72.1 | 62.0 | 80.3 | 81.8 | 74.1 |
| WADN | 75.2 | 61.0 | 83.5 | 84.4 | 76.1 |
| iMSDA | 75.4 | 61.4 | 83.5 | 84.4 | 76.2 |
| SIG | 76.4 | 63.9 | 85.4 | 85.8 | 77.8 |
| SIG+CA | 75.8 | 61.8 | 84.0 | 84.7 | 76.6 |

>**Q4**: There are some typos in this paper. For example, the illustrations of Lemma 2 in the main text and appendix are inconsistent. **A4**: Thanks for noticing it and kindly letting us know! We have read the paper carefully and corrected the typos. --- Rebuttal Comment 1.1: Comment: Greatly appreciate the responses from the authors.
Summary: This paper proposes a novel framework to tackle the problem of multi-source unsupervised domain adaptation. Existing methods often have stringent requirements, such as a large number of domains and invariant label distributions. To address these limitations, this paper proposes a subspace identification theory that disentangles domain-invariant and domain-specific variables under less restrictive constraints. The proposed Subspace Identification Guarantee (SIG) model is based on variational inference and additionally incorporates class-aware conditional alignment to handle domain shifts. Experimental results demonstrate that the SIG model outperforms existing SOTA techniques on four benchmark datasets (OfficeHome, ImageCLEF, PACS, DomainNet), showcasing its effectiveness in real-world applications. Strengths: - The paper identifies three major drawbacks of [a]: invariant label distribution, requiring a large number of domains, and monotonic transformation between latent variables. These assumptions are relaxed by the *Subspace Identification Guarantee* (SIG) model proposed in this paper. Most interestingly, while Kong et al. claimed that for an $n$-dimensional latent space, $2n+1$ domains would be necessary, the authors here have shown that $n+1$ domains are sufficient. - The core ideas on subspace identifiability are theoretically grounded. The paper defines a new data generative process based on a 4-way split of the latent space and builds upon the theoretical results to define a new multi-source unsupervised domain adaptation framework using a VAE-based architecture. - Results are presented on 4 benchmark datasets, with the proposed model outperforming all other SOTA approaches. [a] Kong et al., Partial Identifiability for Domain Adaptation, ICML 2022 Weaknesses: - In the data generation process, the paper introduces 4 latent variables from the product of two sets {domain-specific, domain-invariant} $\times$ {label-relevant, label-irrelevant}. 
I am unsure what the intuitive reasoning behind the "domain-invariant label-irrelevant" latent variable $\mathbf{z}_4$ is. - There are no individual constraints on $\mathbf{z}_1$ and $\mathbf{z}_4$ during the learning process. Are they identifiable individually? - While the paper claims that the proposed SIG model can handle all three types of shifts: covariate, target, and conditional, experiments focus primarily on covariate shift (based on the datasets used). It’s unclear how the datasets, which contain mostly covariate shifts, were used for studying the two other shifts. - The clarity could be improved, especially when introducing the SIG model. For example, in Section 4.2, $\hat{\mathbf{z}}_{3,\mathcal{S}}^{(i)}$ is denoted as "..latent variables of i-th class from source". How is this estimated? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see weaknesses. Also, the derivation of the ELBO in Equation 5 should be provided to enhance the completeness of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discuss the limitations adequately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate the valuable comments and helpful suggestions on our paper and the time dedicated to reviewing it. Below please see our point-by-point responses to your comments and suggestions. >**Q1**: I am unsure what the intuitive reasoning behind the "domain-invariant label-irrelevant" latent variable $z_4$ is. **A1**: Thanks for your valuable questions, which have improved the readability of our paper. We have provided the intuitive reasoning behind the domain-invariant and label-irrelevant latent variables $z_4$ and will include it in the final version. Although $z_4$ is unrelated to domains and labels, it is an important component of the causal generation process. Moreover, reasoning about $z_4$ with the block-wise identification guarantee can disentangle $z_4$ from the other latent variables, i.e., $z_1, z_2, z_3$, which makes it convenient to precisely identify the other latent variables with the subspace identification guarantee. >**Q2**: There are no individual constraints on $z_1$ and $z_4$ during the learning process. Are they identifiable individually? **A2**: We deeply value your careful review of the preciseness of our work. Actually, $z_1$ and $z_4$ are not identified individually. According to the ELBO in the equation below, we can employ $\textcolor{blue}{\ln p_{{u}|{z}}({{u}|{z}})= \ln p_{{u}|{z}} ({u}| z_1,z_2,z_3)}$ as a constraint on $z_1$. It indicates that $z_1$ is identified with domain labels. Moreover, we employ $\textcolor{green}{\ln{p_{x|{z}}}({x}|{z})=\ln{p_{x|{z}}}(x|{z}_1,{z}_2,{z}_3,{z}_4)}$ as a constraint on $z_4$, indicating that $z_4$ is identified block-wise. 
$ELBO = E_{q_{z|x}(z|x)}[\textcolor{green}{\ln p_{x|z}(x|z)}+\ln p_{y|u,z_{2},z_{3}}(y|u,z_{2},z_{3})+\textcolor{blue}{\ln p_{u|z}(u|z)}]-D_{KL}(q_{z|x}(z|x)||p_{z}(z))$ In the implementation, since the reconstruction of $u$ is not the optimization goal and might bring extra optimization complexity, we remove the reconstruction of $u$, as mentioned in Line 200 of Page 5. In light of your suggestions, we further conduct an experiment to evaluate the effectiveness of the reconstruction of $u$. Specifically, we add a domain classifier to our SIG, named SIG+D. Experimental results are shown in the following table. According to the experimental results, we find that SIG+D achieves comparable performance, and the standard SIG achieves ideal performance without introducing extra complexity.

||Art|Clipart|Product|RealWorld|Average|
|-|-|-|-|-|-|
|SIG|76.4|63.9|85.4|85.8|77.8|
|SIG+D|76.1|63.2|85.6|85.8|77.7|

>**Q3**: It’s unclear how the datasets, which contain mostly covariate shifts, were used for studying the two other shifts. **A3**: Thanks a lot for this question. We have tried our best to include the most mainstream benchmark datasets for multi-source domain adaptation. While most of them follow covariate shift, some of them contain other types of domain shift like conditional shift and target shift. For example, in the Office-Home dataset, pictures of monitors, computers, and laptops are tagged with the same label, meaning that the covariate shift assumption ($p_{\mathcal{S}}(y|x)=p_{\mathcal{T}}(y|x)$) does not always hold, but conditional shift ($p_{\mathcal{S}}(x|y) \neq p_{\mathcal{T}}(x|y)$) holds. Moreover, $p(y)$ also slightly changes across domains. Therefore, these datasets contain a mixture of the three types of shifts rather than only covariate shift. In light of your suggestion, we design an experimental setting where the label distribution changes across domains (target shift). Experiment results on SIG, iMSDA, and WADN are shown in the table below. 
We find that our method achieves the best performance, demonstrating its effectiveness in the target shift scenario.

||Art|Clipart|Product|RealWorld|Average|
|-|-|-|-|-|-|
|SIG|75.4|62.0|84.9|85.2|77.8|
|iMSDA|73.2|57.2|82.6|84.2|74.3|
|WADN|74.8|60.6|84.1|84.2|75.9|

>**Q4**: ...For example, in Section 4.2, $\hat{z}_{3,\mathcal{S}}^{(i)}$ is denoted as "..latent variables of $i$-th class from source". How is this estimated? **A4**: We sincerely appreciate your suggestions for the better clarity of our paper. We estimate $\hat{z}_{3,\mathcal{S}}^{(i)}$ via a moving-average method as follows: 1) For each class, we initialize $\hat{z}_{3,\mathcal{S}}^{(i,0)}$ in the first training step. 2) In the $\tau$-th training step, we randomly sample a batch of samples $(x_k, y_k)_{k=1}^{B}$ of size $B$ from the source domain $\mathcal{S}$. 3) We estimate $z_3$ for each sample via the pre-trained backbone networks and the bottleneck networks, obtaining $(z_{3,k}, y_k)_{k=1}^{B}$. 4) Given the $i$-th class, we calculate $c_{3,\mathcal{S}}^{(i)}=\frac{1}{B_i} \sum \limits_{(z_{3,k},y_k=i)} z_{3,k}$, where $B_i$ is the number of samples of the $i$-th class. 5) Sequentially, we update the estimated variables via $\hat{z}_{3,\mathcal{S}}^{(i,\tau)}=\gamma c_{3,\mathcal{S}}^{(i)} + (1-\gamma) \hat{z}_{3,\mathcal{S}}^{(i,\tau-1)}$, where $\gamma$ is a hyper-parameter. *(We also provided a PDF version of Q4 due to some issues related to the Markdown formatting and LaTeX equations in OpenReview)* In light of your valuable advice, we have added an algorithm in the Appendix to introduce how to estimate $\hat{z}_{3,\mathcal{S}}^{(i)}$ and will include it in the final version. >**Q5**: The derivation of the ELBO in Equation 5 should be provided to enhance the completeness of the paper. **A5**: Thank you very much for this suggestion to improve the completeness of our paper. In light of your suggestion, we have provided the derivation of the ELBO in the Appendix, which is shown as follows. 
$\ln p(x, y, u) = \ln \frac{p(x, y, u, z)}{p(z|x, y, u)}=\ln \frac{p(x|z)p(u,y,z)}{p(z|x, y, u)}=\ln \frac{p(x|z)p(y|u,z_2,z_3)p(u|z)p(z)}{p(z|x, y, u)}$ $=E_{q(z|x)}\ln\frac{p(x|z)p(y|u,z_2,z_3)p(u|z)p(z)}{q(z|x)} + D_{KL}(q(z|x)||p(z|x,y,u))$ $\geq E_{q(z|x)}\ln p(x|z) + E_{q(z|x)} \ln p(y|u,z_2,z_3) + E_{q(z|x)} \ln p(u|z) - D_{KL}(q(z|x)||p(z)) = ELBO$
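The five-step moving-average estimation of the class prototypes $\hat{z}_{3,\mathcal{S}}^{(i)}$ described in A4 can be sketched as follows; the dictionary representation, the 2-D latent space, and the $\gamma$ value are illustrative assumptions, while the update rule itself follows steps 4) and 5) of the rebuttal.

```python
import numpy as np

def update_prototypes(prototypes, z3_batch, y_batch, gamma=0.1):
    """One step of the moving-average estimate of per-class z_3 prototypes.

    prototypes: dict mapping class index i -> current prototype estimate
    z3_batch:   (B, d) encoded z_3 values for a sampled source batch
    y_batch:    (B,) class labels of the batch
    gamma:      moving-average rate (a hyper-parameter)
    """
    for i in np.unique(y_batch):
        c_i = z3_batch[y_batch == i].mean(axis=0)   # batch class mean c_{3,S}^{(i)}
        prototypes[i] = gamma * c_i + (1 - gamma) * prototypes[i]
    return prototypes

# Toy usage with a 2-D z_3 space and two classes, prototypes initialized at zero.
protos = {0: np.zeros(2), 1: np.zeros(2)}
z3 = np.array([[1.0, 1.0], [3.0, 1.0], [0.0, 2.0]])
y = np.array([0, 0, 1])
protos = update_prototypes(protos, z3, y, gamma=0.5)
# protos[0] -> [1.0, 0.5]  (class-0 batch mean [2.0, 1.0], halved by gamma=0.5)
# protos[1] -> [0.0, 1.0]
```

Repeating this update over training steps yields the exponentially weighted class means used by the conditional alignment.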
Rebuttal 1: Rebuttal: Dear Reviewers Yx4K, nEWG, pec6, and bTZb: Thanks for the thoughtful and constructive reviews. It is encouraging that the reviewers think SIG is novel (Reviewers Yx4K and pec6), interesting (Reviewers Yx4K and nEWG), and solid (Reviewer bTZb). We here provide a general response to summarize the modifications of the paper. - To Reviewer Yx4K, we have clarified the intuition of the "domain-invariant and label-irrelevant" latent variables $z_4$. - To Reviewer Yx4K, we have explained the identifiability of $z_1$ and $z_4$, and provided experimental results according to your suggestions. - To Reviewer Yx4K, we have clarified how the datasets are used to study the conditional shift and target shift. We have also added experimental results in light of your suggestions. - To Reviewer Yx4K, we have added the derivation of the ELBO in Equation (5). - To Reviewer nEWG, we have explained the reasonableness of the assumptions. - To Reviewer nEWG, we have explained the assumptions and the reasonableness of the number of latent variables. - To Reviewer nEWG, we have added the experimental results of our SIG with the cross-attention module on the OfficeHome dataset. - To Reviewer nEWG, we have read the paper carefully and corrected the typos. - To Reviewer pec6, we have clarified the difference between our contributions and other works, as you recommended. - To Reviewer pec6, we have discussed the assumption of the same label space of the source and target domains. - To Reviewer pec6, we have commented on the extension of our method to open-set/partial MSDA and added experimental results for partial MSDA. - To Reviewer bTZb, we have clarified the intuition of the assumptions. - To Reviewer bTZb, we have discussed the conditional independence in Equation (1). - To Reviewer bTZb, we have provided an example in the health scenario. Thanks again for the time dedicated to carefully reviewing this paper. We hope that our response properly addresses your concerns. 
With best regards, Authors of submission 4882 Pdf: /pdf/f932efda652b2528d69db7ad130596d012869e1a.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Sampling from Structured Log-Concave Distributions via a Soft-Threshold Dikin Walk
Accept (poster)
Summary: This paper studies the problem of sampling from a log-concave distribution, parametrized as $\exp(-f)$ where $f$ is $L$-Lipschitz, over a $d$-dimensional polytope specified by $m$ halfspaces. Assuming the polytope is contained in a ball of radius $R$, the authors develop an algorithm that generates a sample from a distribution at most $\delta$ TV distance away from the target distribution, in time $O((md+dL^2R^2)\cdot md^{\omega-1}\log(w/\delta))$, where $\omega\approx 2.37$ and the initial point is drawn from a $w$-warm start. Their approach is an extension of the Dikin walk, obtained by introducing a soft-threshold regularization term. This result has a wide range of applications in differentially private optimization. It is worth noting that the Dikin walk was previously adapted for sampling from a uniform distribution over a polytope, and there the iteration count and arithmetic operations per iteration are nearly tight [LLV20]. For sampling from a log-concave distribution, hit-and-run has a runtime that depends on the dimension $d$, but under certain regimes it is slower than the algorithm proposed in this paper. [LLV20]: Laddha, Lee and Vempala. Strong self-concordance and sampling. STOC'20. Strengths: Under many regimes, this paper provides the state-of-the-art result for sampling from a log-concave distribution over a polytope. Moreover, their algorithm is conceptually easy to understand and should be easy to implement, as it is a variant of the standard Dikin walk for sampling from a uniform distribution. The soft-threshold regularization term is not completely novel, but prior work gets a highly sub-optimal dependence on the self-concordance parameter $\nu$, at least for the log-barrier. Consequently, this algorithm also leads to many applications, such as differentially private empirical risk minimization and Bayesian Lasso logit regression. Weaknesses: There are several weaknesses of this paper to highlight. 1. Restriction to the log-barrier. 
Note that the $md$ iteration bound comes from the fact that the log-barrier is $m$-self-concordant. It is well-known that if $m\gg d$, barrier functions with better parameters exist, such as the hybrid barrier and the Lewis weights barrier. However, if one carefully examines the techniques employed in this paper, they are tuned very specifically towards the log-barrier. In particular, the authors prove that, in the limit, sampling from the regularized Gaussian distribution over a polytope whose constraint set contains infinitely many copies of identities converges to sampling from the uniform distribution over another polytope. This enables them to utilize prior rich work on the Dikin walk for uniform sampling over polytopes to conclude the proof. Unfortunately, this approach does not scale to more complicated barrier functions. Take the volumetric function as an example: its Hessian reweights all constraints based on their leverage scores, and duplicating a constraint infinitely many times will make its leverage score go to 0, invalidating the construction completely. I think it would be much more interesting if the proposed framework could generalize to other barrier functions. 2. Tightness of arithmetic operations. The algorithm takes $O(md^{\omega-1})$ to form the Hessian matrix. This does not seem to be tight. I think the results in this paper would be much stronger if, at least for the log-barrier, the arithmetic operations per iteration were tight (say, nearly linear in $md$). 3. Dependence on $L$ and $R$. The Dikin walk is inherently a second-order method, so one would expect a better dependence on $L$ and $R$, hopefully polylog dependence. Notably, the Dikin walk for sampling from a uniform distribution over a polytope does not depend on $R$, as the walk exploits the local geometry of the Dikin ellipsoid. Hit-and-run can achieve such a result only if the body is in isotropic position [CE22]. The quadratic dependence on $R$ weakens the result significantly in my opinion. 
Authors should discuss whether the dependence on $R$ can be improved. Despite the above concerns, I think this paper is technically solid and has many interesting applications. [CE22]: Yuansi Chen and Ronen Eldan. Hit-and-run via localization schemes. 2022. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weakness. I'm willing to raise score if authors can address my concerns. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
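The reviewer's remark that duplicating a constraint drives its leverage score to zero can be illustrated with ordinary least-squares leverage scores. The matrix below is made up and this is a generic sketch, not the paper's volumetric barrier (whose leverage scores are additionally slack-weighted):

```python
import numpy as np

# Generic illustration (made-up matrix, plain least-squares leverage) of the
# point above: duplicating a row k times sends that row's leverage score
# sigma to sigma / (1 + (k-1) * sigma), i.e. to 0 as k grows
# (a consequence of the Sherman-Morrison formula).
def leverage_scores(A):
    # sigma_i = a_i^T (A^T A)^{-1} a_i, the diagonal of the hat matrix
    G_inv = np.linalg.inv(A.T @ A)
    return np.einsum('ij,jk,ik->i', A, G_inv, A)

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))   # 6 "constraints" in R^3, purely synthetic
sigma0 = leverage_scores(A)[0]
for k in (1, 10, 1000):
    A_dup = np.vstack([np.repeat(A[:1], k, axis=0), A[1:]])
    print(k, leverage_scores(A_dup)[0])  # shrinks toward 0 as k grows
```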
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We are glad that you appreciate our state-of-the-art results and their many applications to training differentially private and Bayesian ML models, and we thank you for supporting our paper. We answer your specific questions below. **“…generalize to other barrier functions”** It is an interesting question whether one can extend our results to more general barrier functions. In Appendix $F$ we show that, if any $\nu'$-self-concordant barrier function for the polytope $K$ is used in place of the log-barrier function in our algorithm, then our regularized barrier function is self-concordant with parameter $\nu = \nu' + L^2 R^2$ (in, e.g., the setting where $f$ is $L$-Lipschitz). In the special case where $f$ is constant, [Laddha, Lee, Vempala, STOC 2020] show that their Dikin walk Markov chain has mixing time $O(\bar{\nu} d)$, but they require the barrier function to satisfy a stronger condition, “strong self-concordance with symmetric self-concordance parameter” $\bar{\nu}$ (in particular, they show that the Lewis weights barrier satisfies this stronger condition with parameter $\bar{\nu}=d$). Showing that our regularized barrier function is *strongly* self-concordant with symmetric self-concordance parameter $\bar{\nu} = d + L^2 R^2$ when, e.g., the Lewis weights barrier is used, is thus a natural direction for future work. **"The algorithm takes $O(m d^{\omega-1})$ to form the Hessian matrix. This does not seem to be tight.”** Investigating whether one can reduce the cost of computing the log-barrier Hessian matrix at each step of our algorithm is an interesting open problem. 
In particular, we note that [Laddha, Lee, Vempala, STOC 2020] show, in the special case where $f$ is constant, that the (average) cost of computing the Hessian matrix of the log-barrier of the polytope $K:= $ {$\theta \in \mathbb{R}^d : A \theta \leq b$} at each step of their Dikin walk can be improved to roughly $O(d^2 + \textrm{nnz}(A))$ arithmetic operations, where $\textrm{nnz}(A)$ denotes the number of non-zero entries of $A$. Whether their result can be extended to the problem of computing the regularized barrier functions used in our algorithms, in the more general setting where $f$ is $L$-Lipschitz or $\beta$-smooth, is an interesting direction for future work. We will discuss this in the Conclusions, limitations, and future work section. **“Dependence on $L$ and $R$...”** It may be possible to eliminate the polynomial dependence on $L$ and $R$, but it is outside the scope of the current paper. This is a challenging problem, and we discuss it in the Conclusions, limitations, and future work section, and in more detail in Appendix $F$. One challenge in obtaining bounds which are independent of $L$ and $R$ is that the isoperimetric inequality used (in our paper, and in many prior works on the Dikin walk) to bound the mixing time of Dikin walk Markov chains relies on a metric—the cross-ratio distance for the polytope $K$— which, roughly speaking, defines the distances between Markov chain steps by how quickly these steps approach the boundary of the polytope $K$. 
Measured in the cross-ratio distance, the steps proposed by our Dikin walk (or, more generally, by versions of the Dikin walk which take steps that are small enough such that the term $e^{f(z)-f(\theta)}$ in the Metropolis acceptance probability is $\Omega(1)$ when $f$ is $L$-Lipschitz on a polytope contained in a ball of radius $R$) may be as small as roughly $O(\frac{1}{RL})$ with respect to the cross-ratio distance metric, and the mixing time bounds one would obtain via the aforementioned isoperimetric inequality would depend polynomially on $RL$. --- Rebuttal Comment 1.1: Comment: Thank you for your response! Overall, I think the results and quality of this submission are above the bar of NeurIPS, and I'll raise my score to 7.
Summary: The paper proposes a sampling method for log-concave distributions over a bounded polytope. The method can be used in privacy preservation. The theoretical results provide guarantees for the sampling method up to an accepted error magnitude. Detailed comparisons with other methods are also given, showing the improvement of the proposed one. Strengths: Pros: 1. The work focuses on an important problem in the privacy field--sampling--and also provides some ways for us to adopt their methods. 2. The theoretical results show the effectiveness of the proposed methods in magnitude. 3. Some interesting discussion and a delicate application in the appendix are provided for better practical usage. Weaknesses: Cons: 1. The main text is not friendly to readers. Maybe the authors can use more paragraphs to present different pieces of information; the current version is not so easy for me to read, at least. The more important lemmas/corollaries could also be better organized, I believe. I think the design of the barrier function should also be clearly stated up front, not only in the Algorithms. 2. Some technical contributions could also be highlighted to better differentiate this work from the Dikin walk with simply a new regularizer applied. 3. Some numerical studies could be provided to better show the effectiveness of the method. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: 1. Can you provide some insights into how your algorithm speeds up the sampling? I suspect it is because, compared with the Dikin walk, the designed regularizer supports fast computation, but I am not sure. 2. Also, I cannot figure out how much improvement in the dimension $d$ there is in real problems. Some small examples could be provided to show both the constants in the magnitude and the improvement in $d$ in some cases. 3. I am not sure whether the adjustment of the line height is allowed. 4. Maybe we also need to consider the time cost of computing the Lipschitz/smoothness constant of $f$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We are glad you appreciate our theoretical results and the applications to differential privacy, and thank you for supporting our paper. We are sorry for any difficulty understanding our presentation. We answer your specific questions below. **“technical contributions can also be provided…”** Our work makes the following novel technical contributions (discussed in Section 3). **(1)** We introduce self-concordant barrier functions which simultaneously take into account the geometry of both the constraint polytope $K$ and the Lipschitz or smoothness property of the target log-density $f$. **(2)** The main technical challenge in bounding the mixing time of our Dikin walk is to prove that the determinantal term in the Metropolis acceptance probability $\frac{\textrm{det}\Phi(z)}{\textrm{det}\Phi (\theta)}$ is $\Omega(1)$ with high probability (w.h.p.), where $\Phi$ is the Hessian of our regularized log-barrier function. In previous works on the Dikin walk, which use the log-barrier without regularizer, the determinantal term can be bounded using the following inequality of [Vaidya, Atkinson, 1993] which holds for the Hessian $H$ of the log-barrier: $$(\nabla V(\theta))^\top[H(\theta)]^{-1}\nabla V(\theta)\leq O(d)\qquad\forall\theta\in\textrm{int}(K),$$ where $V(\theta):=\log\textrm{det}H(\theta)$. Unfortunately, this inequality does not hold for every self-concordant barrier function. We prove that it *does* however hold for our regularized barrier functions. To see why, we first show that the Hessian of our regularized barrier function, $\Phi(\theta)=\alpha^{-1}H(\theta)+\eta^{-1}I_d$, can be viewed as the limit of an infinite sequence {$H_j(\theta)$}$_{j=1}^\infty$ of matrices, where each $H_j$ is the Hessian of a log-barrier obtained by representing $K$ by an increasing set of (redundant) inequalities. 
Roughly, this allows us to show that the above inequality, which holds for any log-barrier, must also hold for our regularized barrier function (if we replace $H(\theta)$ with $\Phi(\theta)$ in the above inequality and $V(\theta)$ definition). **“…insights about how your algorithm can speed up the sampling…”** The regularizer in our algorithm speeds up the runtime by allowing the Dikin walk to take larger steps, while still ensuring these steps are accepted w.h.p. by the Metropolis accept/reject rule for $f$. Taking larger steps allows our Dikin walk to converge more quickly to the target distribution $\pi\propto e^{-f}$. To see why, note that from any point $\theta$, the original Dikin walk proposes updates $z=\theta+y$ where $y$ is normally distributed with covariance matrix $\alpha H(\theta)^{-1}$, where $H$ is the Hessian of, e.g., the log-barrier function for $K$ and $\alpha>0$ is a hyperparameter. If one applies the Dikin walk to sample from a non-constant distribution $\pi\propto e^{-f}$, one needs to ensure the stationary distribution it converges to is equal to $\pi$. This can be done by accepting each proposed step with probability proportional to the Metropolis rule $e^{f(z)-f(\theta)}$ and rejecting it otherwise. However, the acceptance probability may be very low (e.g., if half the eigenvalues of $\alpha H(\theta)^{-1}$ are $>\frac{c}{dL^2}$ for some $c>\Omega(1)$, the acceptance probability may be exponentially small in $c$). One approach (used in [Narayanan, Rakhlin, JMLR 2017]) to ensuring the acceptance probability is high is to choose a smaller $\alpha$ such that all the eigenvalues of $\alpha H(\theta)^{-1}$ are $\leq\frac{1}{dL^2}$, which ensures the acceptance probability $e^{f(z)-f(\theta)}=\Omega(1)$ w.h.p. Unfortunately, this approach can lead the Dikin walk to propose steps with covariance matrix that has many of its eigenvalues unnecessarily small. 
This is because some eigenvalues of $H(\theta)^{-1}$ may be much larger than other eigenvalues (e.g., if $K$ is much wider in some directions than in others). To overcome this, our Dikin walk proposes steps with covariance $(\alpha^{-1}H(\theta)+\eta^{-1}I_d)^{-1}$, where we set $\eta=\frac{1}{dL^2}$ (and set $\alpha$ to the same value $\frac{1}{d}$ used in prior works which apply in the special case where $f$ is constant). This ensures the largest eigenvalues of the covariance matrix are no larger than $\frac{1}{dL^2}$, *without* reducing (by more than a constant factor) the eigenvalues which were already $\leq\frac{1}{dL^2}$. We will add this discussion to Section 3. **"…Some small examples…"** Thank you for the suggestion. We will add one or two concrete examples, in addition to the examples given in Section 2 and in the last two columns of Table 1, to better illustrate the runtime improvement. **“…computation of the Lip/smooth constant”** Oftentimes, a bound on the Lipschitz or smoothness constant can be calculated analytically. This includes, e.g., applications to training Bayesian or differentially private logistic regression models (or other generalized linear models such as support vector machines (SVM)). In these applications, $f(\theta)=\sum_{i=1}^n\ell(\theta^\top x_i)$ where $\ell:\mathbb{R}\rightarrow\mathbb{R}$ is a convex loss and {$x_1,…,x_n$} $\subset\mathbb{R}^d$ is a dataset. The loss $\ell$ may be $O(1)$-Lipschitz (e.g., if $\ell$ is the logistic loss $\ell(s)=\log(1+e^{-s})$, or the loss $\ell(s)=\max(0,s)$ used to train SVMs) or may be $O(1)$-smooth (in e.g. logistic regression). When the Lipschitz or smoothness constant is not known, one can in practice set our algorithm's hyperparameters by hand such that the average acceptance probability is, e.g., $>\frac{1}{2}$. 
One can then run the Markov chain until an easily-computed heuristic convergence metric (e.g., the autocorrelation time) is lower than some desired value (see, e.g., [Durmus, Moulines, “High-dimensional Bayesian inference…” Bernoulli 2019], who use a similar approach to choose hyperparameters of a different Markov chain). We will add a remark in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I believe the authors do very good job and I will raise my score to 7
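The rebuttal's point that Lipschitz and smoothness constants of common losses are available analytically is easy to confirm numerically for the logistic loss mentioned above (the grid below is our own choice):

```python
import numpy as np

# Quick numeric confirmation (grid is our own choice) that the logistic
# loss l(s) = log(1 + exp(-s)) from the rebuttal is O(1)-Lipschitz and
# O(1)-smooth: l'(s) = sigma(s) - 1, so |l'| <= 1, and
# l''(s) = sigma(s) * (1 - sigma(s)) <= 1/4.
s = np.linspace(-30.0, 30.0, 10001)
sigma = 1.0 / (1.0 + np.exp(-s))
grad = sigma - 1.0            # first derivative of log(1 + exp(-s))
hess = sigma * (1.0 - sigma)  # second derivative
print(np.abs(grad).max() <= 1.0, hess.max() <= 0.25)  # True True
```

For $f(\theta)=\sum_i \ell(\theta^\top x_i)$ these per-sample bounds then combine with the data norms $\|x_i\|$ to give Lipschitz/smoothness constants for $f$ itself.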
Summary: This paper studies the problem of sampling from a distribution of the form $\pi(\theta) \propto e^{-f(\theta)}$, restricted to a polytope $K$. Here, $f$ is either Lipschitz or smooth and convex. To this end, the authors propose to use the _Dikin walk_ Markov chain (Kannan and Narayanan, 2012), which was originally proposed as a sampler for the uniform distribution on polytopes. Given that $K = \{\theta : A\theta \leq B\}$ with $A=(a_1,\ldots, a_m), B=(b_1,\ldots,b_m)$, the Dikin walk proposes the update $z = \theta + \sqrt{\alpha H^{-1}(\theta)}\xi$ where $\xi\sim N(0,I_d)$, $H$ is the Hessian matrix of the log-barrier function $\varphi(\theta) = -\sum_{j=1}^m \log(b_j-a_j^{\top}\theta)$, and $\alpha$ is a hyperparameter. If the proposed update is in the interior of $K$, it is accepted with a certain probability; otherwise it is rejected. To ensure that the acceptance probability is $\Omega(1)$ while allowing the Dikin walk to make sufficiently large steps, the authors propose to augment the scaled Hessian $\alpha H(\theta)$ with a regularizing term $\eta^{-1}I_d$, in order to "round up" the set of proposals; then, we can adjust $\alpha$ and $\eta$ so that the set of proposals fits in $K$ with high probability. The authors have done a runtime analysis and show that their method can be run in $\tilde{O}(md^{\omega+1})$ in order to sample with small TV error, where $m$ is the number of halfspaces defining the polytope and $\omega$ is the matrix multiplication constant. In particular, when $m = O(d)$ (e.g. $A$ is full rank), it can be run in $\tilde{O}(d^{\omega+2})$. Strengths: - The authors did an excellent job on the literature review, with a complete runtime analysis of previous methods. The time for the warm start and the evaluation of $f$ are also taken into consideration. - The authors provide a detailed explanation of the Dikin walk and its limitations. - The algorithm seems simple enough to implement with a few lines of code. 
- The authors provide some motivations for sampling on polytopes in the area of differential privacy. - I appreciate the overview of the proof of the sampling guarantee in Section 3. Weaknesses: - The authors mention that the hyperparameter $\alpha$ alone is insufficient for the Dikin walk to make large steps due to the differences in the geometry of the proposals at each point $\theta$. So I think it would be better to add a regularizer that depends on $\theta$, which would allow us to remove the hyperparameter $\eta$. Have the authors considered such an approach? (In other words, can we somehow make $\eta$ depend on $\theta$?) - The comparison of the runtimes is nice, but I think some experiments on simulated data would make it more convincing that the proposed method is better than the previous ones. - The authors might want to also discuss the following paper by Chalkis, Fisikopoulos, Papachristou and Tsigaridas: "Truncated Log-concave Sampling for Convex Bodies with Reflective Hamiltonian Monte Carlo". - In the introduction, the authors need to be more explicit about the matrix multiplication constant $\omega$. Outside readers might not know that $\omega \leq 2.373$, making it harder to compare the numbers in Table 1. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - There are some notations that should be introduced in the Notation section. For example, $a_j$ and $b_j$. $\Phi(\theta)$ and $\Phi(z)$ are used at the beginning of Section 3, but I could not find their definitions anywhere before this section. - Maybe I have missed it, but there should be a mention at the beginning of a lower bound on $R/r$ so that it is easier to compare the numbers in Table 1. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors have mentioned a limitation of the proposed method: it can only be applied to Lipschitz or $\beta$-smooth $f$. Finding a better regularizing term is also a possible research direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
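The proposal-and-accept step summarized in this review can be sketched roughly as follows, using the hyperparameter choices $\alpha = 1/d$, $\eta = 1/(dL^2)$ quoted in the authors' rebuttals. This is a minimal illustration on an assumed toy polytope, not the authors' implementation; all names below are ours:

```python
import numpy as np

# Rough sketch (not the authors' implementation) of one step of a
# soft-threshold Dikin walk for pi ∝ exp(-f) on K = {theta : A theta <= b},
# with the hyperparameter choices alpha = 1/d, eta = 1/(d L^2) quoted in
# the rebuttals; the polytope, the target f, and all names are assumptions.
def barrier_hessian(A, b, theta):
    # Hessian of the log-barrier  -sum_j log(b_j - a_j^T theta)
    s = b - A @ theta                      # slacks, all > 0 in int(K)
    return (A / s[:, None] ** 2).T @ A

def soft_threshold_dikin_step(theta, f, A, b, L, rng):
    d = theta.size
    alpha, eta = 1.0 / d, 1.0 / (d * L ** 2)
    Phi = lambda t: barrier_hessian(A, b, t) / alpha + np.eye(d) / eta
    P_theta = Phi(theta)
    z = rng.multivariate_normal(theta, np.linalg.inv(P_theta))
    if np.any(b - A @ z <= 0):             # proposal left the polytope
        return theta
    P_z = Phi(z)
    diff = z - theta
    # full Metropolis-Hastings log-ratio for position-dependent Gaussian
    # proposals N(., Phi(.)^{-1}) and target density proportional to exp(-f)
    log_r = (f(theta) - f(z)
             + 0.5 * (np.linalg.slogdet(P_z)[1] - np.linalg.slogdet(P_theta)[1])
             - 0.5 * (diff @ P_z @ diff - diff @ P_theta @ diff))
    return z if np.log(rng.uniform()) < log_r else theta

# demo: a short chain on the box [-1, 1]^2 with target ∝ exp(-theta_1)
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
theta = np.zeros(2)
for _ in range(200):
    theta = soft_threshold_dikin_step(theta, lambda t: t[0], A, b, 1.0, rng)
print(np.all(b - A @ theta > 0))  # True: the chain never leaves K
```

Because out-of-polytope proposals are rejected outright, every iterate stays in the interior of $K$; the $\eta^{-1}I_d$ term caps the proposal covariance eigenvalues at roughly $\eta$, which is what keeps the $e^{f(\theta)-f(z)}$ factor from collapsing.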
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. We are glad that you appreciate our proof overview, and thank you for supporting our paper. We answer your specific questions below. **"regularizer that depends on $\theta$..."** Thank you for the suggestion. While there are settings where a regularizer that depends on the position $\theta$ may lead to a faster runtime, this would likely require additional access to the function $f,$ or different assumptions on the structure of $f$, beyond what we assume in our paper. For the class of functions considered in our paper, we conjecture that an $\ell_2$ regularizer which does not depend on $\theta$ is optimal. This is because we consider the class of functions $f$ which are $L$-Lipschitz or $\beta$-smooth with respect to the $\ell_2$ norm, and our bound on $L$ or $\beta$ does not depend on $\theta$. Moreover, we only have access to the function $f$ through an oracle which returns the value of $f$ at any given point $\theta$, but does not tell us how $f$ changes at nearby points. We will add a remark about this in the final version. **“The authors might want to also discuss the following paper…”** Thank you for pointing us to this reference, we will discuss it in the related work section. In particular, we note that the bounds in [Chalkis, Fisikopoulos, Papachristou, Tsigaridas, ACM Transactions on Mathematical Software, 2023] assume that $f$ is $M$-strongly convex for some $M>0$ (we assume $f$ is convex, but not necessarily $M$-strongly convex for $M>0$), and thus are not directly comparable to our bounds. **“matrix multiplication constant $\omega$.”** We will add the value of $\omega$ to the introduction. **“notations…”** Thank you for pointing this out. We will add these definitions to the notation section. 
**“lower bound of $R/r$...”** Any convex body satisfies the lower bound $R/r \geq 1$ ($R/r =1$ for a ball, and one can construct a polytope, which approximates a ball, for which $R/r$ is arbitrarily close to $1$). Moreover, for any convex body $K$, there always exists a linear transformation $T$ for which $TK$ is contained in a ball of radius $R$ and contains a ball of radius $r$ such that $R/r$ satisfies the upper bound $R/r \leq O(\sqrt{d})$ ($TK$ is referred to as a “well-rounded” convex body). We will add this information to the table caption. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for addressing my concerns. I have raised the score by one. Regarding the experiment, can the authors perform some quick experiment on e.g. high-dimensional truncated Dirichlet distribution (in which case $f$ should be convex on a bounded polytope) and add the results to the supplementary? --- Reply to Comment 1.1.1: Comment: Thank you for the suggestion. We will add an experiment evaluating the runtime and accuracy of our algorithm (and of prior algorithms) when sampling from a log-concave distribution like the one you suggested, in the final version of the paper.
Summary: The paper studies the question of efficiently sampling from a log-concave distribution over a constrained polytope. The main idea is to use the well-known Dikin random walk with a soft threshold. The threshold is important to ensure far more efficient convergence to the desired distribution. This soft threshold ensures that the acceptance ratio is high while also ensuring that the walk remains inside the polytope. Strengths: The paper gives a definite improvement over the best-known algorithm in the run time for both Lipschitz and smooth loss functions. This has direct implications in making DP algorithms for convex optimization more efficient. Just on the strength of the result and the algorithm, I favor acceptance. I have read the proof and did not find any issue with it. Weaknesses: The only weakness of the paper is that the writing can be improved. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments, and for taking the time to review our paper. We are glad that you appreciate the strength of our results and we thank you for supporting our paper.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CamoPatch: An Evolutionary Strategy for Generating Camoflauged Adversarial Patches
Accept (poster)
Summary: Existing adversarial patch methods often produce clearly visible distortions since they do not consider the visibility of the patch. To address this, the authors propose a novel method for constructing adversarial patches that approximate the appearance of the area they cover. They achieve this by using a set of semi-transparent, RGB-valued circles, drawing inspiration from the computational art community. They utilize an evolutionary strategy to optimize the properties of each shape, and employ a simulated annealing approach to optimize the patch's location. Strengths: 1. The research content of the paper is meaningful and important 2. The paper is well written 3. Experiments are relatively abundant Weaknesses: Although the method proposed in this paper offers some practical applications, its level of innovation is relatively limited, resulting in incremental improvements. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The use of the $l_2$ distance as an evaluation index may not be suitable for non-patch attacks, as it primarily focuses on measuring the pixel-level differences between the original and perturbed images. As for the experimental comparison algorithms, it would be valuable to investigate whether they include attacks that introduce overall image disturbance, such as global modifications or image-level perturbations. This would provide a more comprehensive evaluation of the algorithm's robustness against different types of attacks. 2. The number of images chosen for the ablation experiment is relatively small, consisting of only a hundred images. This limited sample size may impact the reliability and generalizability of the experiment's results, especially if the observed effects are not clearly evident. A larger and more diverse set of images would provide a more robust evaluation of the proposed method. 3. 
The observation that random methods yield comparable results to the proposed method in this paper raises questions about the effectiveness and uniqueness of the proposed approach. Further analysis is needed to understand the reasons behind this phenomenon. Please do more experiments to compare the random method with the proposed method and explain why they are similar in attacking. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method presented in this paper can be considered as a heuristic black box attack algorithm that incorporates concealment during the attack process. However, it should be noted that this approach is not entirely novel and has been explored to some extent in previous research. The authors can refer to the following papers: [1] Zhong Y, Liu X, Zhai D, et al. Shadows can be dangerous: Stealthy and effective physical-world adversarial attack by natural phenomenon[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 15345-15354. [2] Jia X, Wei X, Cao X, et al. Adv-watermark: A novel watermark perturbation for adversarial examples[C]//Proceedings of the 28th ACM International Conference on Multimedia. 2020: 1579-1587. [3] Wei X, Guo Y, Yu J. Adversarial sticker: A stealthy attack method in the physical world[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 45(3): 2711-2725. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W#1:** We argue the innovation of our work lies in its consideration of visibility in adversarial patches as well as its methodology for generating adversarial patches. Specifically, previous works do not consider the visibility of a patch, resulting in clear distortions to the image as shown in Figure 1 of our manuscript. We argue that this not only questions the practicality of the attack but also the accuracy of their use for DNN robustness evaluation (lines 92 to 94). To construct the pattern of the patch, many works use previously proposed non-patch attack methods (references [12, 34, 7]) which involve the modification of pixel values through either gradient or heuristic methods. Alternatively, other methods make use of pre-defined textures and colors (references [48, 16]). Different to existing methods, we construct the patch using a set of semi-transparent RGB circles, which to our knowledge has not been previously proposed. Our novel method of patch construction not only results in more effective attacks (Table 1 in our manuscript, **Tables 1 and 2 of the global 'author rebuttal PDF'**) but also generates adversarial patches with low visibility (Figures 1 and 3). Furthermore, whereas random search has been shown to be the go-to approach for patch location optimization (lines 169 to 170), we demonstrate that our use of simulated annealing is a more effective method for location optimization (Table 2 of our manuscript). Finally, as our method minimizes the $l_2$ distance to reduce visibility, our patch construction method can be applied to the non-patch minimum-norm attack scenario. **Q#1:** We address the reviewer's concern from the following two aspects. 
- The $l_2$ distance has been extensively used to promote or ensure low visibility of adversarial perturbations within non-patch adversarial attacks (lines 27 to 28; references [41, 18, 27, 4, 2]), either by its use for constraining the size of the perturbation (lines 58 to 60) or through its minimization (lines 100 to 102; references [33, 29, 25, 8, 19]). - This work focuses on the generation of adversarial patches. Non-patch attacks consider different assumptions and constraints applied to the perturbations (lines 27 to 29; lines 58 to 61); therefore, for the sake of fairness we only compare our method to patch adversarial attacks. This is also reflected in previous works (references [2, 12, 4, 48]) that compare methods with similar assumptions and constraints. **Q#2:** As requested by the reviewer, we have amended our ablation study to 1,000 correctly classified images from the ImageNet validation set. The results of the top $4$ performing configurations are reported in **Table 3 of the global 'author rebuttal PDF'**. From our experiments, we find the observations are similar to our previous setting. **Q#3:** We argue the uniqueness of our work lies in its consideration of the visibility of adversarial patches as well as its methodology for generating adversarial patches. Specifically, previous works do not consider the visibility of a patch, resulting in clear distortions to the image (Figure 1). We argue that this not only questions the practicality of the attack but also the accuracy of their use for DNN robustness evaluation (lines 92 to 94). To construct the pattern of the patch, many works use previously proposed non-patch attack methods (references [12, 34, 7]) which involve the modification of pixel values through either gradient-based or heuristic methods. Alternatively, other methods make use of pre-defined textures and colors (references [48, 16]).
Different from existing methods, we construct the patch using a set of semi-transparent RGB circles, which to our knowledge has not been previously proposed. This novel method of patch construction not only results in more effective attacks (Table 1 in our manuscript, as well as **Tables 1 and 2 of the global 'author rebuttal PDF'**) but also generates adversarial patches with low visibility (Figures 1 and 3 of our manuscript). Furthermore, whereas random search has been shown to be the go-to approach for patch location optimization (lines 169 to 170), we demonstrate that our use of simulated annealing results in a more effective method for location optimization (Table 2 of our manuscript). **Limitations.** We thank the reviewer for providing these three references. However, we believe our proposed method is unique in the following three aspects. - We focus on generating adversarial patches, whereas the works raised by the reviewer aim to generate physical changes which change the scene of the target image (i.e. changes in shadows [1] or stickers placed on a sign [2]). - [1] to [3] do not consider the visibility of their changes; instead, their changes reflect plausible realistic changes to the image. In contrast, we aim to reduce the visibility of the adversarial patch that is placed upon the target image. - [1] to [3] are constrained by the object chosen to be placed upon the target image (i.e. a shadow, a sticker or a watermark). Differently, our proposed method constructs adversarial patches by optimizing the characteristics of a set of semi-transparent circles, allowing the patch to approximate any image it is placed upon. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their careful reply. The information provided by the authors has effectively clarified most of my inquiries. After a thorough evaluation of the provided content and the feedback provided by other reviewers, I have decided to revise my initial rating.
Summary: This paper explores black-box score-based patch attacks on image classification. Previous patch attacks are relatively visible to the naked eye, so the authors aim for a camouflaged adversarial patch. Therefore, the authors propose a novel method for constructing adversarial patches that approximates the appearance of the area it covers. This camouflaged patch is composed of multiple translucent RGB circles. With the help of evolutionary algorithm optimization, the proposed attack can optimize the shape and location of patches. The proposed method achieves better or comparable performance to state-of-the-art methods and is stealthy. Strengths: 1. Camouflaged adversarial patches are a more significant threat to security than normal adversarial patches. 2. The proposed method is visually more imperceptible. Weaknesses: 1. Experimentation is not sufficient. The authors only performed robustness evaluation on two CNNs, and did not test on more CNNs or newer ViTs, which limits the generalization of the method. The authors do not compare with HPA [1], MPA [1], and Adv-watermark [2]; the imperceptibility performance of these methods is unknown, and they are all score-based patch attacks. The authors did not test against patch defenses, including LGS [3], DW [4], and PatchGuard [5]. 2. The idea of a translucent RGB patch is not very novel in patch attacks and has been discussed in [6] and [7]. The differences between this work and their patch forms are not discussed. 3. In the absence of experiments on targeted attacks, we believe that the color prior cannot achieve targeted attacks. 4. There is no curve of attack performance changing with the number of queries, which is of great significance to the analysis of algorithm convergence.
[1] PatchAttack: A Black-Box Texture-Based Attack with Reinforcement Learning, ECCV 2020 [2] Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples, ACM MM 2020 [3] Local gradients smoothing: Defense against localized adversarial attacks, WACV 2019 [4] On visible adversarial perturbations & digital watermarking, CVPRW 2018 [5] PatchGuard: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking, S&P 2021 [6] Adversarial camera stickers: A physical camera-based attack on deep learning systems, ICML 2019 [7] The Translucent Patch: A Physical and Universal Attack on Object Detectors, CVPR 2021 Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W#1:** We address the reviewer's concerns from the following two aspects. - In fact, we evaluate the proposed method by conducting adversarial attacks on four classifiers, including two adversarially trained and two non-adversarially trained ones. - In this work we compare the proposed method with the TPA algorithm of PatchAttack (reference [1] provided by the reviewer). We additionally make comparisons with the HPA algorithm used within PatchAttack but refer to it as OPA (reference [16] of our manuscript). We do not make comparisons with the MPA algorithm of PatchAttack due to its similarities with the HPA algorithm as well as their similar performance. Finally, our decision was also made by considering the use of MPA from reference [1] provided by the reviewer as motivation for the development of the TPA texture-based attack (subsection 3.3 of [48]). - At the request of the reviewer, we show the results of conducted non-targeted attacks on a Transformer-based DNN classifier (Dosovitskiy et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR'21) in **Table 1 of the global 'author rebuttal PDF'**. We additionally amend our experiments to include the Adv-watermark algorithm whilst also attacking a PatchGuard (reviewer reference [5]) defended model, due to it being a more recent defence mechanism. **W#2:** Despite the use of translucent RGB shapes not being novel, we argue that our method for generating adversarial patches is. Specifically, the work 'adversarial camera stickers' (reference [6] provided by the reviewer) focuses on making physical changes by applying semi-transparent stickers over the camera, resulting in modifications to the entire image. Similarly, the 'translucent patch' (reference [7] provided by the reviewer) applies a similar technique to object detection networks by simulating translucent stickers placed upon the lens of a camera.
Differently, our work only modifies a small localized region of the image (the patch), using translucent RGB circles to construct its pattern. Furthermore, the changes made by [6] and [7] to the target image are clearly visible, whereas our primary goal is to minimize the visibility of changes made to the target image. We will revise our original manuscript to include the differences between the works [6, 7] and ours. **W#3:** As requested by the reviewer, we amend our experiments to include targeted attacks on both adversarially and non-adversarially trained classifiers. The results of these experiments can be found in **Table 2 of the global 'author rebuttal PDF'**. **W#4:** As requested by the reviewer, we include the performance curves of the proposed and compared algorithms in **Figure 1 of the global 'author rebuttal PDF'**. --- Rebuttal Comment 1.1: Comment: The authors' explanations basically answer our questions, but there are still two questions we are unsure about: 1. How are the classes of targeted attacks chosen? We think this is important for the fairness of the experiment. 2. Why is the non-normalized residual introduced? There seems to be a lack of relevant description. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for providing these comments. **Q1:** To conduct targeted attacks we follow the same setup as previous works (reference 12 within the manuscript) and select a random target class for each image. **Q2:** Following the suggestion made by **reviewer tLps (W#2)**, we amend our evaluation to include the "non-normalised residual" metric, which measures the absolute difference between the pixel values of the constructed patch and the area of the original image it covers. Following the original experimental setup, we report the mean and variance over 10 independent runs, and apply the Wilcoxon signed-rank test (reference 47 within the manuscript).
We will amend our evaluation to include this metric within the final version of the manuscript in addition to a definition of the metric.
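The two visibility measures discussed in this rebuttal (the non-normalised residual and the $l_2$ distance over the patch region) can be sketched in a few lines. This is an illustrative reconstruction under the stated definitions, assuming the patch is an axis-aligned square placed at integer coordinates on a NumPy image array; the function name and argument layout are assumptions, not the authors' code.

```python
import numpy as np

def patch_metrics(original, adversarial, x, y, size):
    """Compare a size x size patch region of two H x W x C images.

    Returns (non-normalised residual, l2 distance): the residual is the sum
    of absolute per-pixel differences in the patch region; the l2 distance
    is the Euclidean norm of the same difference.
    """
    orig_region = original[y:y + size, x:x + size].astype(np.float64)
    adv_region = adversarial[y:y + size, x:x + size].astype(np.float64)
    diff = adv_region - orig_region
    residual = float(np.abs(diff).sum())
    l2 = float(np.sqrt((diff ** 2).sum()))
    return residual, l2
```

Minimising the second quantity is what drives the patch to approximate the region it covers; the first is the metric requested by the reviewer.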
Summary: Adversarial examples for a DNN that are not visible to humans are generated with Evolutionary Strategies. The examples are semi-transparent RGB-valued circles. The shape and position are optimized. Strengths: - ES and SA are gradient free methods - Patch detectability is taken into account Weaknesses: - How is visibility evaluated? Is L2 always working for the distance - No defense to new attack provided - Limited presentation of Evolutionary Computation methods for adversarial examples in the related works - Limited discussion of computational cost Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Questions: - Is \sigma used in Algorithm 1? - What post-hoc adjustment did you use? - Why only 10 runs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W#1:** The norm of a perturbation, including $l_2$, has proven to be a reliable metric to evaluate the visibility of a perturbation, with previous works either constraining (lines 58-60; reference [2]) or minimizing (section 2.2; references [45, 33, 29, 35, 8]) its value to ensure or promote the imperceptibility of the perturbation. We additionally make use of the SSIM metric to measure the structural impact of the patch on the overall image. **W#2:** The purpose of this work is to further highlight the vulnerability of DNN image classifiers to adversarial patches by proposing a novel attack algorithm that generates adversarial patches with lower visibility compared to current state-of-the-art methods, i.e. camouflaged. Although this work does not propose a defense mechanism against our attack, we discuss possible directions to address the vulnerabilities of DNNs demonstrated by this work in lines 311 to 320 of our manuscript. In particular, incorporating camouflaged adversarial patches in the training dataset, a.k.a. adversarial training, is a promising way to defend against our proposed method. **W#3:** To the best of our knowledge, the use of evolutionary strategies (ES) has not been explored for the construction of adversarial patches; therefore our discussion of ES is limited. However, within the black-box scenario, the use of meta-heuristics such as ES has become a popular approach due to their independence from gradient information. Allowing all pixels of an image to be modified, particular works include that of Alzantot et al. (GenAttack, GECCO'19), who make use of an evolutionary algorithm to evolve a population of adversarial images. Additionally, the work of Qiu et al. (Black-box adversarial attacks using evolution strategies, GECCO'21) applies the popular CMA-ES algorithm to generate adversarial images. Similarly, the work of Li et al.
(https://doi.org/10.1109/TEVC.2022.3151373) makes use of the differential evolution optimizer to generate adversarial images. Alternatively, the use of ES has also been applied to the sparse adversarial attack scenario. Specifically, the work of Su et al. also makes use of differential evolution to generate adversarial images by modifying a single pixel of an image. We will include this discussion within our revisions. **W#4:** As shown in Table 2 of our manuscript, we compare the runtime of different hyper-parameter configurations. This partially covers the evaluation of the computational cost of our proposed method. On the other hand, for the sake of fair comparisons, we fixed the computational budget to be 10,000 queries (line 208; lines 212 to 213) for all peer algorithms in our experiments. In particular, we believe the querying of the attacked deep neural network is the most computationally demanding part of an adversarial attack. **Q#1:** $\sigma$ is a parameter used within the patch optimization stage of the proposed method. It controls the trade-off between exploration and exploitation (lines 154 to 161). We describe this process in Algorithm 3 in the appendix (lines 166 to 167), which also describes the use of $\sigma$. In the revised version, we will include the arguments of both the location and patch optimization processes within Algorithm 1 to improve the clarity of $\sigma$ and other parameters. **Q#2:** In our experiments, we have conducted parameter tuning upon the four parameters $l_i, N, \sigma$ and $t$ involved in our proposed method (lines 272 to 276). In particular, we used the VGG-16 ImageNet model as the victim (please refer to the ablation study, Section 4.3 of our manuscript). **Q#3:** We follow the settings used in other works (references [2,12]), as cited in lines 218 to 222. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the additional efforts to improve the experimentation and answer my questions. My review will remain unchanged.
--- Rebuttal Comment 1.2: Comment: **To W#3:** Recent works [1] and [2] both use evolutionary algorithms to implement black-box patch attacks. Therefore, the authors may need further discussion. [1] Efficient Decision-based Black-box Patch Attacks on Video Recognition, ICCV 2023 [2] Query-Efficient Decision-based Black-Box Patch Attack, IEEE Transactions on Information Forensics and Security --- Reply to Comment 1.2.1: Comment: We thank the reviewer for their recommended references. **[2]:** This work addresses the decision-only setting by placing localised regions of an image from the target class onto the attacked image. Under the constraint of the patch causing misclassification, the coordinates of the patch are optimised using an adapted differential evolution algorithm to minimise the size of the patch (through its $l_0$ norm). Whereas the imperceptibility of that patch occurs when its size becomes extremely small, we keep the size of the patch constant and reduce its visibility by approximating the region of the image it is placed upon (by minimising the $l_2$ norm). **[1]:** This work applies a similar technique to **[2]** but for conducting patch-based adversarial attacks on video recognition networks. Specifically, they apply an adapted differential evolution algorithm to optimise the coordinates of the patch in addition to the video frame it is placed upon. Similar to [2], they construct an adversarial example by placing local regions of a frame from a target-class video onto the attacked video. We will include this discussion within the final version of the manuscript, highlighting the differences with our work in terms of the addressed scenarios and methodologies.
Summary: This paper proposes an evolutionary-strategy-based framework for generating imperceptible adversarial patches. Compared with existing methods, the proposed method can generate invisible adversarial patches by iteratively optimizing both the patch and its position on the image. Strengths: 1. The proposed method is novel and performs well in terms of ASR and invisibility. 2. The method analysis is clear and the algorithm pipeline is helpful. 3. The experiments are solid and comprehensive. Weaknesses: 1. Lack of efficiency comparison with other methods. 2. Lack of comparison between different evolutionary strategies. 3. Lack of performance results on transformer-based vision models. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I am curious about the computation cost of the proposed method. 2. I am curious about the performance of the proposed method with different evolutionary strategies. 3. How is the performance of the proposed method on transformer-based vision models? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: 1. Test the computation time or memory cost of the proposed method. 2. Test the proposed method with different evolutionary strategies. 3. Test the proposed method on vision transformers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W#1, Q#1, L#1:** As shown in Table 2 of our manuscript, we compare the runtime of different hyper-parameter configurations. This partially covers the evaluation of the efficiency of our proposed method. On the other hand, for the sake of fair comparisons, we fixed the computational budget to be 10,000 queries (line 208; lines 212 to 213) for all peer algorithms in our experiments. In particular, we believe the querying of the attacked deep neural network is the most computationally demanding part of an adversarial attack. **W#2, Q#2, L#2:** In this work we make use of a classic (1+1)-evolutionary strategy (ES) for patch optimization (lines 153 to 154; reference [30]). One of the most important concerns with using other ES, such as CMA-ES and PGPE with ClipUp, for patch optimization is that they usually require a large number of function evaluations to achieve optimal image approximations. This is supported by previous works in the computational creativity community (modern ES for creativity: fitting concrete images and abstract concepts; reference [44]). Unfortunately, our context is restricted to a fixed query budget. As flagged in the conclusion section, we envisage that surrogate-model-assisted approaches such as Bayesian optimization can be explored for patch optimization to further enhance the efficiency of our proposed method (lines 332 to 336). **W#3, Q#3, L#3:** At the request of the reviewer, we show the results of conducted non-targeted attacks on a Transformer-based DNN classifier (Dosovitskiy et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale, ICLR'21) in **Table 1 of the global 'author rebuttal PDF'**. --- Rebuttal Comment 1.1: Comment: The author answers most of my questions and I will keep my rating.
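For context, the classic (1+1)-ES mentioned in this rebuttal can be sketched generically as follows. This is a minimal illustration on a toy objective using the common 1/5th success-rule step-size adaptation, not the authors' patch optimizer (which mutates circle parameters under a query budget); the function name and hyper-parameters are assumptions.

```python
import numpy as np

def one_plus_one_es(f, x0, sigma=0.5, iters=500, seed=0):
    """Minimize f with a classic (1+1)-evolutionary strategy.

    Each iteration mutates the parent with isotropic Gaussian noise of
    scale sigma and keeps the offspring only if it is no worse; sigma is
    adapted with the 1/5th success rule, which controls the trade-off
    between exploration (large sigma) and exploitation (small sigma).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(iters):
        y = x + sigma * rng.standard_normal(x.shape)
        fy = f(y)
        if fy <= fx:                 # offspring no worse: accept it
            x, fx = y, fy
            sigma *= 1.5             # success: widen the search
        else:
            sigma *= 1.5 ** -0.25    # failure: contract toward the parent
    return x, fx
```

Unlike population-based ES such as CMA-ES, each iteration costs a single function evaluation, which is why this variant suits a fixed query budget.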
Rebuttal 1: Rebuttal: Attached is the manuscript containing the tables and figures of reviewer requested experiments and plots. Pdf: /pdf/ff405f80ec68960de0dee1e0d2ca7c3c048753a1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a novel method for generating adversarial patches that approximates the appearance of the area it covers, achieving better or comparable performance to state-of-the-art methods while minimizing the visibility of the patch. The method uses semi-transparent, RGB-valued circles that blend into the original image with an evolutionary strategy to fine-tune each circle's properties and employs simulated annealing for optimal location selection. This paper also provides an empirical evaluation of the proposed method's effectiveness by attacking classifiers trained on the ImageNet dataset and comparing it with state-of-the-art adversarial patch methods. The contributions of this paper include a new approach to generating adversarial patches that are more difficult to detect, a comparative analysis of state-of-the-art methods, and insights into the limitations and future directions of adversarial attacks on DNNs. Strengths: 1. This paper is well-written and organized, with clear explanations of the proposed method and the experimental setup. The authors provide detailed descriptions of the algorithms used and the evaluation metrics employed, making it easy for readers to understand the research. 2. The proposed CamoPatch method is a novel approach to generating adversarial patches while minimizing the visibility of the patch. The use of semi-transparent, RGB-valued circles and the evolutionary strategy is a unique contribution to the field of adversarial attacks on DNNs. 3. This paper provides a thorough empirical evaluation of the proposed method's effectiveness. The authors also provide insights into the limitations and future directions of adversarial attacks on DNNs, which adds to the quality of the research. The supplementary material is adequate as well. 
Weaknesses: It’s appreciated that the authors provide clear and detailed descriptions of the empirical study, though the work could be more credible if more experiments and evaluations were included. 1. This paper only uses attack success rate (ASR) to evaluate the attack performance of the method, which is a rather limited evaluation. More metrics should be involved, such as non-normalized residual (nq), original accuracy and after-attack accuracy. It would also be appreciated if experiments on targeted attacks were included. 2. When it comes to evaluating the visibility of the adversarial patches, the SSIM metric seems to be unnecessary because the difference between the proposed method and other methods in this metric is not obvious. The numerical gap is not greater than 0.02. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. As the core idea of this paper, ‘camouflage’ could be presented in more forms, such as visual comparison and evaluation of the camouflage level. 2. The simulated annealing method, as the important approach to optimize the patch’s location, should be improved or replaced by a novel approach so it can be more consistent with the idea of ‘evolutionary strategy’. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: It’s appreciated that the authors are up front about the limitations of their work in the paper and introduce possible solutions in the future work. The authors may draw lessons from weakly supervised learning methods to explore adapting the proposed method to scenarios where only the predicted label of the input is available.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W#1:** As requested by the reviewer, we report the original accuracy, after-attack accuracy, non-normalized residual (nq) and $l_2$ norm metrics of conducted targeted and non-targeted attacks in **Table 2 and Table 1 of the global 'author rebuttal PDF'**. **W#2:** The reason for the small numerical gap is the small $40 \times 40$ patch size, which is roughly 3.2% of the total number of pixels within the image (lines 207-208). Therefore, a numerical gap of 0.02 is significant, considering the small number of changed pixels. **Q#1:** To evaluate the level of camouflage, we employ SSIM and the $l_2$ norm as the performance metrics in our experiments. Moreover, we provide visual comparisons of adversarial images constructed by the proposed method compared to Patch-RS (Figure 1 of our manuscript) and other state-of-the-art algorithms (Figure 3 of our manuscript). Furthermore, to facilitate a visual comparison, we provide an example of an adversarial image with the patch magnified (Figure 2 of our manuscript). **Q#2:** The selection of simulated annealing (SA) as our location optimizer is based on its proven ability to effectively explore discrete spaces and successfully avoid local optima (lines 169-170). In addition to the performance of the proposed method compared to state-of-the-art methods (Table 1 of our manuscript), we further demonstrate the superior performance achieved when using SA compared to pure random search (Table 3 of our manuscript). In future work, alternative methods can be explored to improve the optimization of the patch location, e.g. the use of surrogates (lines 327-331). **Limitations:** To enhance and advance the proposed method, we recognize the potential benefits of incorporating techniques from weakly supervised learning, as suggested by the reviewer, to adapt to the label-only scenario effectively.
We believe there are two particularly fruitful directions for incorporating techniques from weakly supervised learning to adapt to the label-only scenario. Firstly, by using estimation techniques (lines 321-326) to label constructed adversarial images, we believe the use of weakly supervised image classification models (Hu et al., Weakly Supervised Image Classification through Noise Regularization, CVPR'19) as surrogates would improve the efficiency of the proposed method, in addition to providing direction for the search in the label-only setting. Secondly, utilizing the stochastic nature of the proposed method to generate a set of non-evaluated candidate solutions, weakly supervised techniques such as semi-supervised learning could be applied (Kim et al., Density Ratio Estimation-based Bayesian Optimization with Semi-Supervised Learning, CoRR'23) to train a surrogate model on both evaluated and non-evaluated adversarial images, thereby using the surrogate to select predicted-optimal solutions for evaluation.
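The simulated annealing location search defended in this rebuttal can be illustrated with a short generic sketch. This is not the authors' implementation: it assumes an arbitrary loss function over integer (x, y) grid locations, a ±1 neighbourhood move, and a geometric cooling schedule; all names and hyper-parameters are illustrative.

```python
import math
import random

def anneal_location(loss, w, h, steps=2000, t0=1.0, seed=0):
    """Optimize a patch location (x, y) on a w x h grid with simulated annealing.

    A neighbouring location is always accepted if it lowers the loss, and
    accepted with probability exp(-delta / T) otherwise, which lets the
    search escape local optima; the temperature T is cooled geometrically.
    """
    rng = random.Random(seed)
    x, y = rng.randrange(w), rng.randrange(h)
    cur = loss(x, y)
    best, best_loss = (x, y), cur
    t = t0
    for _ in range(steps):
        nx = min(max(x + rng.choice((-1, 0, 1)), 0), w - 1)
        ny = min(max(y + rng.choice((-1, 0, 1)), 0), h - 1)
        cand = loss(nx, ny)
        delta = cand - cur
        if delta <= 0 or rng.random() < math.exp(-delta / max(t, 1e-12)):
            x, y, cur = nx, ny, cand
            if cur < best_loss:
                best, best_loss = (x, y), cur
        t *= 0.995  # geometric cooling schedule
    return best, best_loss
```

The occasional uphill acceptance early in the run is what distinguishes SA from the pure random search baseline compared in Table 3 of the manuscript.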
Optimization and Bayes: A Trade-off for Overparameterized Neural Networks
Accept (poster)
Summary: The paper presents a novel approach that bridges the gap between optimization and Bayesian learning problems. It introduces an interpolation technique that connects optimization and Bayesian learning. Additionally, the author proposes a generalization bound for infinitely wide neural networks and establishes a connection between generalization and sampling efficiency. Strengths: The primary strength of this work lies in its successful formulation of a generalization bound for infinitely wide neural networks. The author also provides valuable insights into the relationship between generalization and sampling efficiency, shedding light on an important aspect of the problem. Weaknesses: **Technical comments:** - It would be beneficial to elucidate the motivation behind the sampling efficiency, denoted as $\text{eff}_{\lambda}$ in Equation (3). - In the related works section, the author should consider mentioning alternative approaches to over-parametrized neural networks, such as random features and mean-field regimes, as suggested in [1]. - Corollary 6 is based on a surrogate loss function. Are there any assumptions on the main loss function necessary to derive an upper bound on the expected loss? **Presentation comments:** - In the problem setup section (Line 88), the input and label spaces are denoted as $\mathcal{X}$ and $\mathcal{Y}$, respectively. However, in Line 90, the training dataset is denoted as $(s,t)$. For consistency, it would be advisable to use $(x,y)$ to represent each data sample throughout the manuscript. - In the equation following Line 127, the right-hand side of the second equality represents an expectation with respect to $\theta \sim p(f(\theta))$. However, the current notation is unclear and could be misinterpreted as $\theta \sim p(\theta)$. This should be clarified. - It is recommended to provide a reference for Donsker and Varadhan, which is used to derive Equation (4).
- Please provide definitions for effective sample size and sample size before Equation (3). - Please provide a clear definition of generalization and generalization bound within the context of this paper. Is the bound on true risk? - The theorems should be self-contained. In Line 295, the author mentions $\alpha$ without prior explanation in Theorem 5. **Minor** - The author employs $t_a$ and $t$ to represent label and time, respectively. It would be preferable to use alternative symbols to avoid confusion. - Inline equations in Lines 637 and 641 should be displayed as outlined equations for improved readability. **References:** - [1]: Fang, Cong, Hanze Dong, and Tong Zhang. "Mathematical models of overparameterized neural networks." Proceedings of the IEEE 109.5 (2021): 683-703. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - Please elaborate more on the sentence in lines 178-179: “This is also the gap between expected loss bound obtained by some training methods and the optimal one.” - For binary classification, there are different surrogate loss functions. Please provide some examples of surrogate loss function in multi-classification. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 1 poor Limitations: - The concept of importance sampling is not new, and the author proposes importance sampling algorithms to solve Bayesian learning problems. However, the paper does not explicitly discuss the variance issues associated with importance sampling, which can be observed in the experiment results (Figure 3, left). - The analysis for over-parametrized neural networks may not be applicable to networks with activation functions like ReLU or other functions with unbounded first derivatives. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We appreciate the time and effort you've spent in reviewing our work. Below, we've addressed your points in the order presented. > It would be beneficial to elucidate the motivation behind the sampling efficiency, denoted as $\text{eff}_{\lambda}$ in Equation (3). Thanks for the clear question. We are happy to explain the sampling efficiency in more detail. We've defined the sampling efficiency as $(\mathbb{E}[w])^2/\mathbb{E}[w^2]$, which is the ratio of the effective sample size $n_e^*=\frac{n(\mathbb{E} [w])^2}{\mathbb{E} [w^2]}$ to the actual sample size $n$. The effective sample size is a key concept in importance sampling, controlling the variance of the final estimator. To give a brief overview of the concept, we now provide an elementary derivation of the sampling efficiency and the effective sample size in the importance sampling context. In importance sampling, $w$ represents the importance weights. Suppose we have $n$ independent samples $Z_i$ with importance weights $w_i$. The final estimate is $S=\frac{\sum_{i=1}^n w_i Z_i}{\sum_{i=1}^n w_i}$. If the variance of $Z_i$ itself is $\sigma^2$, the variance of $S$ is $\frac{\sum_{i=1}^n w_i^2 \sigma^2}{(\sum_{i=1}^n w_i)^2}$. We know that the variance of the mean of $n_e$ independent identically distributed variables is $\frac{\sigma^2}{n_e}$. Thus, the final variance in importance sampling is **equivalent** to the variance when sampling $n_e=\frac{(\sum_{i=1}^n w_i)^2}{\sum_{i=1}^n w_i^2}$ independent samples. Therefore, the effective sample size is defined as $n_e=\frac{(\sum_{i=1}^n w_i)^2}{\sum_{i=1}^n w_i^2}=\frac{n\overline{w}^2}{\overline{w^2}}$. Since $w_i$ is also a random variable, we can derive the population version of $n_e$ by replacing the sums with expectations. Thus, we have $n_e^*=\frac{n(\mathbb{E} [w])^2}{\mathbb{E} [w^2]}$. 
The effective sample size is generally less than the true sample size, and this decrease factor is the sampling efficiency, which leads to the definition in Eq. (3). We hope this clarifies your question. More details can be found in Owen, Art B. "Monte Carlo theory, methods and examples." (2013). (Section 9.3) > the author should consider mentioning alternative approaches to over-parametrized neural networks, such as random features and mean-field regimes, as suggested in [1]. Thank you for the suggestion. We will add references to these alternative approaches in the related works section. > Corollary 6 is based on a surrogate loss function. Are there any assumptions on the main loss function necessary to derive an upper bound on expected loss? The only requirement we have for the main loss function is that its output is in $[0,1]$, as stated in line 91. > For consistency, it would be advisable to use $(x,y)$ to represent each data sample throughout the manuscript. Thank you for your suggestion. We understand the value of consistent notation. However, in Section 7, 'x' and 'y' are used to represent activations in neural networks. To prevent confusion, we opted to use 's' and 't' to denote inputs and outputs in the manuscript. > In the equation following Line 127, the right-hand side of the second equality represents an expectation with respect to $\theta \sim p(f(\theta))$. However, the current notation is unclear and could be misinterpreted as $\theta \sim p(\theta)$. This should be clarified. It appears that your understanding of this equation is incorrect. We confirm that we intended to write $\theta \sim p(\theta)$, not $\theta \sim p(f(\theta))$. In fact, the latter notation would not make sense as $p(f(\theta))$ cannot be regarded as a distribution. If you could explain why you believe it should be $\theta \sim p(f(\theta))$, perhaps we can better assist in clarifying this misunderstanding. 
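The effective-sample-size formula derived in this response can also be checked numerically; here is a minimal sketch (the function name is ours, not from the paper):

```python
import random

# Empirical effective sample size n_e = (sum w)^2 / (sum w^2);
# the sampling efficiency is n_e / n = (mean w)^2 / mean(w^2).
def effective_sample_size(weights):
    s1 = sum(weights)
    s2 = sum(w * w for w in weights)
    return s1 * s1 / s2

random.seed(0)
n = 1000
# Uniform weights: every sample counts fully, so n_e equals n.
assert abs(effective_sample_size([1.0] * n) - n) < 1e-9
# Heavy-tailed (e.g. log-normal) weights shrink the effective sample
# size strictly below n; this is exactly the variance penalty of
# importance sampling that the efficiency n_e / n quantifies.
skewed = [random.lognormvariate(0.0, 2.0) for _ in range(n)]
assert effective_sample_size(skewed) < n
```

The second assertion holds by the Cauchy–Schwarz inequality, with equality only when all weights are equal.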
> It is recommended to provide a reference for Donsker and Varadhan, which is used to derive Equation (4). We'll add the following citation: "Donsker, Monroe D., and S. R. Srinivasa Varadhan. "Asymptotic evaluation of certain Markov process expectations for large time—III." Communications on Pure and Applied Mathematics 29.4 (1976): 389-461." > Please provide definitions for effective sample size and sample size before Equation (3). We will add the definition of the effective sample size to the manuscript. For more details about the effective sample size, please kindly refer to our explanation at the beginning of this response. > Please provide a clear definition of generalization and generalization bound within the context of this paper. Is the bound on true risk? Generalization refers to the phenomenon that minimizing the empirical loss on a training dataset leads to reduced expected loss on new, unseen data. A generalization bound is simply an upper bound on the difference between the expected loss and the empirical loss. For a precise definition, please refer to lines 88-97. We use $R(h)$ to denote the true risk, specifically, the expected prediction error of a model on the unknown real data distribution. So, yes, the upper bound is on the true risk. > In Line 295, the author mentions $\alpha$ without prior explanation in Theorem 5. We agree with your observation. We will clarify this by stating "for all $\alpha=1,\dots,j$" in the theorem. > It would be preferable to use alternative symbols to avoid confusion. Thank you for your suggestion. We will change the symbol $t$ to $z$ in the label notation. > Inline equations in Lines 637 and 641 should be displayed as outlined equations for improved readability. Thank you for pointing this out. We will make the necessary adjustments to improve readability. 
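For completeness, Donsker and Varadhan's variational formula cited here can be stated as follows; this is the standard form with $X$ a generic bounded function of $\theta$, and it may differ cosmetically from Equation (4) of the paper:

$$\lambda\,\mathbb{E}_{\theta\sim q}[X(\theta)] - D_{KL}(q\|p) \;=\; \log \mathbb{E}_{\theta\sim p}\big[e^{\lambda X(\theta)}\big] - D_{KL}(q\|p_\lambda), \qquad p_\lambda(\theta) \propto p(\theta)\,e^{\lambda X(\theta)}.$$

The first term on the right-hand side does not depend on $q$, so the left-hand side is maximized exactly when $q$ equals the Gibbs distribution $p_\lambda$, and the gap from the optimum is the KL divergence $D_{KL}(q\|p_\lambda)$.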
--- Rebuttal Comment 1.1: Title: Additional Response by Authors Comment: > Please elaborate more on the sentence in lines 178-179: “This is also the gap between expected loss bound obtained by some training methods and the optimal one.” That sentence refers to the second term on the right side of Donsker and Varadhan’s variational formula in equation (4). Donsker and Varadhan’s variational formula, proposed in the last century, plays a significant role in PAC-Bayesian analysis, and its proof is quite simple: one can derive it by unfolding the definitions. The second term on the right side of the formula measures the difference between the distribution $q$ and the Gibbs distribution $p_\lambda$. Only this second term depends on $q$, while the first term does not. The left side of the equation represents the part of the PAC-Bayesian bound related to $q$. We aim to optimize this equation to obtain the best bound on the expected loss. According to Donsker and Varadhan’s variational formula, if $q$ is the optimal Gibbs distribution $p_\lambda$, the whole equation is minimized. If not, a sub-optimal bound results, and the gap between this sub-optimal bound and the optimal bound can be measured by a KL divergence. We hope this clarifies the concept for you. > For binary classification, there are different surrogate loss functions. Please provide some examples of surrogate loss function in multi-classification. Certainly, here are several examples of surrogate loss functions that are commonly used in multi-class classification tasks: 1. Logistic Loss: $$L(y, z) = -\sum_{i=1}^{C} z_i \log \left( \frac{e^{y_i}}{\sum_{j=1}^{C} e^{y_j}} \right)$$ 2. Hinge Loss: $$L(y, z) = \sum_{j \neq z} \max(0, y_j - y_z + \Delta)$$ 3. Squared Loss: $$L(y, z) = \sum_{i=1}^{C} (y_i - z_i)^2$$ 4. Exponential Loss: $$L(y, z) = \sum_{i=1}^{C} \exp(-y_i z_i)$$ We hope that answers your question. 
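As a hedged illustration, the first three losses listed in this response can be computed as below; these are our own minimal implementations, not code from the paper, with $y$ the score vector and $z$ a one-hot label (a class index for the hinge loss):

```python
import math

# Minimal sketches of the multi-class surrogate losses listed above.
# y: vector of C class scores; z: one-hot label; z_idx: class index.
def logistic_loss(y, z):
    log_norm = math.log(sum(math.exp(yj) for yj in y))  # log-sum-exp
    return -sum(zi * (yi - log_norm) for zi, yi in zip(z, y))

def hinge_loss(y, z_idx, delta=1.0):
    # Multi-class hinge: penalize classes scoring within delta of the
    # true class score.
    return sum(max(0.0, yj - y[z_idx] + delta)
               for j, yj in enumerate(y) if j != z_idx)

def squared_loss(y, z):
    return sum((yi - zi) ** 2 for yi, zi in zip(y, z))

# Two equally scored classes give logistic loss log(2).
assert abs(logistic_loss([0.0, 0.0], [1.0, 0.0]) - math.log(2.0)) < 1e-9
# A prediction correct by a large margin incurs zero hinge loss.
assert hinge_loss([3.0, 0.5, -1.0], 0) == 0.0
```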
> The concept of importance sampling is not new, and the author proposes importance sampling algorithms to solve Bayesian learning problems. However, the paper does not explicitly discuss the variance issues associated with importance sampling, which can be observed in the experiment results (Figure 3, left). First, we concur that importance sampling is a classic method, and our method is a class of importance sampling algorithms. Second, we respectfully disagree with the claim that "the paper does not explicitly discuss the variance issues". We do address the variance issue of importance sampling through the concept of effective sample size. The effective sample size and the variance of importance sampling are inversely related: as the effective sample size increases, the variance decreases. Indeed, the concept of effective sample size was derived from the study of the variance of importance sampling. Regarding your previous questions about the effective sample size, we hope our explanation at the beginning of the response clarifies its definition and its impact on the variance of importance sampling. > The analysis for over-parametrized neural networks may not be applicable to networks with activation functions like ReLU or other functions with unbounded first derivatives. We believe our paper still provides valuable insights, even though the analysis may not directly transfer to activation functions like ReLU. Lastly, we appreciate your detailed comments and questions. We hope our reply has addressed your concerns and provided clarifying context where it was needed. Please feel free to ask if you have further questions or need additional information.
Summary: The paper provides a perspective interpolating optimization and Bayesian inference. The gradient-based optimization procedure gives an output distribution $q(x)$, and the Bayesian inference procedure gives a posterior $p_\lambda(x)$; the two distributions are connected by the weight $w_\lambda$. The paper then proposes to find an interpolation of gradient-based optimization and Bayesian inference by using a different weight $w_\lambda^\beta$. Strengths: 1. The paper is very well written; even for a reader without much background in optimization, I found the paper very easy to follow and the main results clearly presented. 2. Although I am not familiar with the optimization literature, I feel the interpolation of gradient-based optimization and Bayesian inference is an important contribution to the community. 3. The paper proposes the entropy term and the energy term as the two major terms governing the weight, and hence measures the sampling efficiency and the generalization error bound. For finite fully connected layers, the analytic form of the energy and entropy is provided and also empirically studied. The paper provides a promising perspective for future analysis of other types of deep neural networks like CNNs and transformers. Weaknesses: Only some minor ones. Minor weaknesses: 1. line 275, equation 9 is too long 2. line 265, Equation (27) is not properly referenced. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Some of the main theoretical results still follow from the basic assumption of the infinite-width limit of fully connected neural networks, which is a commonly used approach in the theoretical analysis of deep neural networks. However, it is well known that in practice this assumption is generally not true, so I am curious to know what can be improved to generalize the theoretical setting to finite-width neural networks. 2. 
Why does the paper consider specifically a simple clipping weight, rather than the actual power function $w_\lambda^\beta$? Does the proof require that specific form of weight to hold true? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your constructive feedback and recognition of our work. Your comments have helped us to improve our manuscript. > Weaknesses Thank you for the suggestion. We will improve the presentation of the mentioned equations for clarity. > Some of the main theoretical results still follow from the basic assumption of the infinite-width limit of fully connected neural networks, which is a commonly used approach in the theoretical analysis of deep neural networks. However, it is well known that in practice this assumption is generally not true, so I am curious to know what can be improved to generalize the theoretical setting to finite-width neural networks. Indeed, this is a critical question. The assumption of infinite width made the theoretical analysis in Section 7 feasible. However, such an assumption is not always valid, and this is a common challenge in many works analyzing infinitely wide networks. To address this, some researchers [1,2,3] have started to explore the extension of infinite-width network analysis to finite widths. We believe that an important direction for future work would be to examine whether similar techniques can be extended to our setup, to provide estimation errors for the entropy term and energy term under finite-width networks. > Why does the paper consider specifically a simple clipping weight, rather than the actual power function $w_\lambda^\beta$? Does the proof require that specific form of weight to hold true? This is a great insight. In our paper, we considered weight clipping due to the monotonicity guarantee provided in Theorem 3. However, this is not the only possible choice. Following your recommendation, we found that the power function $v_\beta(w_\lambda) = w_\lambda^\beta$ for $\beta \in [0,1]$ can also create an interpolation between optimization and Bayesian learning. When $\beta = 0$, the modified weights are constant at 1, which turns the distribution into a deep ensemble. 
When $\beta = 1$, the weights remain unchanged, preserving the Gibbs distribution. This power function also maintains a monotonicity property similar to that in Theorem 3. Therefore, it is indeed feasible, and we appreciate your suggestion. We express our gratitude once again for your thoughtful comments and questions. Your insightful suggestions have inspired us to explore further. [1] Hanin, Boris, and Mihai Nica. "Finite depth and width corrections to the neural tangent kernel." arXiv preprint arXiv:1909.05989 (2019). [2] Littwin, Etai, Tomer Galanti, and Lior Wolf. "On random kernels of residual architectures." Uncertainty in Artificial Intelligence. PMLR, 2021. [3] Dyer, Ethan, and Guy Gur-Ari. "Asymptotics of wide networks from feynman diagrams." arXiv preprint arXiv:1909.11304 (2019). --- Rebuttal Comment 1.1: Comment: Thank you for your response. After reading the author's rebuttal and other fellow reviewers' comments, I do not see any explicit reason for me to change my score and hence I still recommend acceptance.
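The clipping and power-function weight modifications discussed in this exchange can be sketched numerically; this is a hedged toy example, and the function names are ours:

```python
# Two weight-modification schemes discussed above: clipping
# v_c(w) = min(w, c) and the power function v_beta(w) = w**beta.
# Both interpolate between uniform weights (a deep ensemble) and the
# unmodified importance weights (the Gibbs posterior).
def clip_weight(w, c):
    return min(w, c)

def power_weight(w, beta):
    return w ** beta

weights = [0.1, 1.0, 10.0]
# beta = 0 -> all weights 1 (ensemble); beta = 1 -> unchanged (Gibbs).
assert [power_weight(w, 0.0) for w in weights] == [1.0, 1.0, 1.0]
assert [power_weight(w, 1.0) for w in weights] == weights
# Intermediate beta shrinks the spread of the weights monotonically,
# matching the monotonicity property mentioned for Theorem 3.
spread = lambda ws: max(ws) / min(ws)
s = [spread([power_weight(w, b) for w in weights]) for b in (0.25, 0.5, 1.0)]
assert s[0] < s[1] < s[2]
```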
Summary: This paper proposes a framework to bridge the gap between ERM and Bayesian learning problems. The authors derive an algorithm-dependent PAC-Bayesian generalization bound for infinitely wide networks, in which the KL divergence is taken between the posterior distribution obtained by infinitesimal-step-size gradient descent and a Gaussian prior. Since direct theoretical analysis of the KL divergence is intractable, they used the well-used simplification that the output distribution of neural networks drawn from the prior can be approximated by a Gaussian distribution in the infinite-width limit. From the analysis, they provided an interpolation method for the accuracy-computation trade-off. In addition, as a byproduct, this paper analyzes the dynamics of the Hessian trace. Strengths: This paper presents new insights into the relationship between ERM and Bayesian learning. The most interesting aspects of this paper are the analysis of the KL divergence using variable transformations and its relation to the change in Helmholtz free energy in isothermal processes. Moreover, the dynamics of the Hessian trace, which is typically related to flatness of the loss landscape, are analyzed. Weaknesses: Although the motivation is interesting and the derivations of the equations are interesting, it is difficult to understand what novel PAC-Bayes bound was finally obtained. It is also difficult to understand how it differs from previous PAC-Bayes bounds and what kind of findings are obtained. Many of these problems seem to be presentation issues rather than technical issues. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q.1 Would you write explicitly what the final novel PAC-Bayes bound is? Since most of the analysis concerns the dynamics of the KL divergence, the paper gradually analyzes the KL term. However, this makes it difficult to understand the final results obtained. 
In particular, I would like to know the final result of the PAC-Bayes bound formulated as a function of training time and training data. Q.2 I would like to know if it is possible to compare the results of Gaussian processes with those in the limit of infinitely wide neural networks. A neural network with infinite width can be regarded as a Gaussian process, and the PAC-Bayes bound of the Gaussian process was obtained by Seeger using the Radon-Nikodym derivative. Very naively, it seems that the PAC-Bayes bound can be obtained by plugging the NTK derived from the infinite-width limit of the neural network into the PAC-Bayes bound of the Gaussian process. If this Gaussian-process-based analysis can be an answer to Q1 in the paper, I would like to know what differences can be observed compared to this case. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The structure of the paper is difficult to understand. The theory that is ultimately obtained is unclear compared to the existing PAC-Bayes bounds. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work and for your insightful comments and questions. We appreciate your recognition of the novel insights and contributions of our work in bridging the gap between ERM and Bayesian learning. > Would you write explicitly what the final novel PAC-Bayes bound is? ... I would like to know the final result of the PAC-Bayes bound formulated as a function of training time and training data. The final result of the PAC-Bayes bound includes several parts: a main bound, a decomposition of the KL divergence, and the dynamics of the energy and entropy change. The main PAC-Bayes bound is: $$\mathbb{E}_{\theta\sim q_t}[R(\theta)]\le\Phi^{-1}_{\frac{\lambda}{m}}\left(\mathbb{E}_{\theta\sim q_t}[r(\theta)]+\frac{D_{KL}(q_t\|p)+\log\frac{1}{\delta}}{\lambda}\right)$$ where the KL divergence $D_{KL}(q_t\|p)$ is given by: $$D_{KL}(q_t\|p)=\mathbb{E}_{\{(Y^{(i)}_0,\Gamma^{(i)}_0)\}_{i=1}^d\sim\Sigma}[V_t-S_t]$$ The dynamics of $V_t$ and $S_t$ are given by the differential equations: $$\begin{align*} V_0&=0,\quad S_0=0\\ \frac{d}{dt}V_t &= -\nabla \mathcal{L}(Y_t)\sum_{i=1}^d Y^{(i)}_t\\ \frac{d}{dt}S_t &= -\text{Tr}(\nabla^2\mathcal{L}(Y_t)\Theta^{(d)})-\nabla\mathcal{L}(Y_t)\sum_{j=1}^d\sum_{i=1}^j \Gamma^{(i)}\odot\Xi^{(i,j)}\\ \frac{d}{dt}Y_t&=-\Theta^{(i)}\nabla \mathcal{L}(Y_t)^\top\\ \frac{d}{dt}\Gamma_t&=-\Phi^{(i)}\nabla \mathcal{L}(Y_t)^\top \end{align*}$$ The closed-form solution of the KL divergence is hard to obtain, but the numerical solution can be computed efficiently on a computer (as shown in Fig 4 of our paper) by solving the above ODE. > If this Gaussian-process-based analysis can be an answer to Q1 in the paper, I would like to know what differences can be observed compared to this case. Thank you for this thought-provoking question. 
You're correct in stating that a neural network with infinite width can be regarded as a Gaussian process, but this relationship only holds at initialization. This correspondence breaks down once the network is trained via gradient descent, leading to a distribution that differs from a Gaussian process. While one can indeed construct a Gaussian process using the conjugate kernel $\Sigma$ (as opposed to the NTK $\Theta$) to answer questions about the Bayesian learning of neural networks, it cannot directly answer questions about the generalization of neural networks trained by gradient descent; hence it cannot be used to answer Q1. Thank you once again for your valuable feedback and insightful questions. We hope this response has addressed your concerns and questions.
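One point from the rebuttal above, that quantities defined through ODEs (like $V_t$ and $S_t$) can be computed numerically even without a closed form, can be illustrated with a generic forward-Euler sketch on a toy one-dimensional system; this does not reproduce the NTK quantities from the rebuttal, and the variable names are illustrative only:

```python
import math

# Generic forward-Euler integrator for a coupled ODE system x' = f(x).
def euler(f, x0, t_end, steps):
    x = list(x0)
    dt = t_end / steps
    for _ in range(steps):
        dx = f(x)
        x = [xi + dt * dxi for xi, dxi in zip(x, dx)]
    return x

# Toy stand-in for the (Y_t, V_t) dynamics: gradient flow y' = -y on a
# quadratic loss, with an accumulated quantity v' = y^2 integrated
# alongside it. Closed forms: y(t) = e^{-t}, v(t) = (1 - e^{-2t}) / 2.
y_t, v_t = euler(lambda x: [-x[0], x[0] ** 2], [1.0, 0.0], 5.0, 100_000)
assert abs(y_t - math.exp(-5.0)) < 1e-3
assert abs(v_t - 0.5 * (1.0 - math.exp(-10.0))) < 1e-3
```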
Summary: The paper introduces a novel learning algorithm, Transformative Bayesian Learning (TransBL), which aims to navigate the balance between empirical risk minimization (ERM) and Bayesian learning in the context of overparameterized neural networks. The authors establish a PAC-Bayesian bound for these networks and provide a theoretical discourse on the relationship between generalization and sampling efficiency. Their primary goal is to explore an intermediary distribution that bridges the gap between ERM and Bayesian learning. The paper presents a sound theoretical argument, which I found to be interesting and of potentially moderate impact. The main issue I currently see is the presentation of the results. For example, the paper lacks buildup, intuitive explanations, and some clarifications before the formal treatment. Besides that, the order and flow of the paper leave a lot of ambiguities that are only cleared up much later. Perhaps the authors assumed a reader intimately familiar with this exact research question who will "mentally fill in the gaps", but I don't think this should be the default assumption. I've encountered some of these ambiguities as I read the manuscript and added them in the "weaknesses" section. Due to my lack of expertise on some related literature, I defer the judgement about the novelty of these ideas to other reviewers. Strengths: - Sound theoretical analysis: The paper provides a theoretical analysis of the relationship between ERM and Bayesian learning, specifically in the context of overparameterized neural networks, that may be novel. - The introduction and justification of TransBL is innovative and backed by comprehensive derivations. The concept of transforming gradient-based optimization into importance sampling could also be interesting. 
- Interpolation Mechanism: The introduction of an interpolation mechanism by modifying weights presents an innovative solution to the trade-off between computational efficiency and generalization error. - Usage of PAC-Bayesian Bounds: The derivation of algorithm-dependent generalization bounds through the use of PAC-Bayesian theory and infinitely wide neural networks adds depth to the theoretical contribution. - Analysis of Energy and Entropy Change: The authors provide a comprehensive understanding of the dynamics of energy and entropy changes, which play a pivotal role in their algorithm. This exploration of these dynamics could pave the way for further developments in deep Bayesian learning and PAC-Bayesian bounds. Weaknesses: - While the presented theoretical results are relatively well stated, there is a lack of high-level intuition and buildup that makes the paper rather hard to understand for anyone who is not an expert on all the related topics. This can be alleviated by using one or a few relevant examples (toy examples that everyone can grasp and that convey some key points), as well as by giving a more intuitive and high-level explanation of the ideas before the formal description. - It would be helpful for the authors to clarify some of the paper's notation and variables so that a reader can follow the terms without needing to know the cited literature or read the appendix. I've added several questions in the "Questions" section to raise some of these points. As another example, in section 7.3 there seems to be a clarification of what $f$ entails by the equation $\theta(t)= f(t,\theta(0))$. Some earlier mention/clarification of this would have helped me and possibly other readers. In line with the previous comment, more concrete explanations, as opposed to merely abstract terms, will help clarify the ideas. - The assumptions such as infinitely wide networks could limit the applicability of the results. 
Discussion of these assumptions and how they might affect the applicability of the method to practical settings with finite width would be helpful to the reader. While it is understandable that the limit of infinite width is necessary for the theoretical derivations, perhaps the authors can add more discussion of the approximation errors this implies in practice, i.e., the discrepancy between the infinite width used in theory and the finite width used in practice. - Lack of sufficient empirical validations and baselines: While the paper can be viewed as a mere theoretical contribution, the central proposition of the paper, that it bridges the gap between ERM and Bayesian learning, is a claim of an empirical nature. In other words, more empirical evidence could substantially strengthen the validity of the work and broaden its applicability. While the authors do compare their method to ERM and Bayesian learning, there is a distinct lack of comparison with other more recent methods aiming to achieve similar objectives. These comparisons would help inform the reader about the practical implications of this study and broaden the paper's impact. If the authors do not believe further experiments are necessary, perhaps they can be more upfront about their contribution being of a theoretical nature. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In section 3 the terms output distribution $q$ and optimisation flow $f:\Theta\to\Theta$ are introduced but are left somewhat loosely defined. My impression from reading section 3 multiple times is the following: $p(\theta)$ is the prior, namely just the weight and bias distributions of the model. The flow $f:\Theta\to\Theta$ corresponds to a single step of the gradient-based optimisation procedure (based on its domain and range sets, and especially its name), such that $f(\theta)$ corresponds to the next step of the optimisation procedure. 
And $q(\theta)$ is the probability of the parameters after one step of the optimization process. This interpretation makes intuitive sense: if we assume the gradient updates $\theta_{t+1} = \theta_t + \epsilon \nabla f(\theta)$ for some infinitesimal $\epsilon$, then the volume factor $|\nabla f(\theta)|$ appears as a multiplicative term in $q(\theta)$. I understand that I may be incorrect in this interpretation, since an alternative interpretation is that $q$ and $f$ refer to the "final" distribution over parameters (i.e., after many steps of gradient descent). Can the authors clarify these points? - The definitions given between lines 129-130 for energy and entropy seem rather interesting but somewhat ambiguous. My question is twofold: 1) It would be good to be clear about whether these are contributions of this paper. It would be helpful for readers to know which parts are and are not contributions of this work. 2) What is the intuition behind calling the log-determinant term "entropy"? It would be helpful if the authors could provide a concrete example explaining these concepts to familiarize the readers. - In line 155 the authors proclaim that equation (2) doesn't "involve any more training." - Later in the text, after line 261, the authors compute $d\theta(t)/dt$ and refer to it as "gradient flow". Is this gradient flow the same as the "f" introduced earlier in section 3? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed and constructive feedback. > Strengths We are grateful for your recognition of the strengths of our paper, especially our theoretical contributions in areas including the interpolation mechanism, usage of PAC-Bayesian bounds, and analysis of energy and entropy change. > While the presented theoretical results are relatively well stated, there is a lack of high-level intuition and buildup that makes the paper rather hard to understand for anyone who is not an expert on all the related topics. This can be alleviated by using one or a few relevant examples (toy examples that everyone can grasp and that convey some key points), as well as by giving a more intuitive and high-level explanation of the ideas before the formal description. Thank you for this valuable suggestion about providing intuitive explanations alongside our theoretical findings. We plan to include a motivating example, which contains a single-variable loss function with two global minima, showing how our TransBL approach intuitively assigns smaller weights to the sharp minimum due to the entropy change. This example will be further discussed with references to PAC-Bayesian methods to give readers a clearer understanding. > As another example, in section 7.3 there seems to be a clarification of what $f$ entails by the equation $\theta(t)= f(t,\theta(0))$. Some earlier mention/clarification of this would have helped me and possibly other readers. Thank you for pointing this out. We will ensure that the function $f$ is introduced and explained earlier in the manuscript. > While it is understandable that the limit of infinite width is necessary for the theoretical derivations, perhaps the authors can add more discussion of the approximation errors this implies in practice, i.e., the discrepancy between the infinite width used in theory and the finite width used in practice. We concur that infinite width is a strong assumption. 
Such an assumption made the theoretical analysis in Section 7 feasible, and it is prevalent in many theoretical discussions, including the prior work mentioned in lines 43-44. There are some works attempting to analyze finite-width corrections. However, these often complicate the theoretical analysis, and we hope the reviewer can understand our choice to leave such discussions for future work. Nevertheless, we appreciate your suggestion and will enhance our discussion on the necessity and limitations of the infinite-width assumption. > Lack of sufficient empirical validations and baselines Thank you for raising this important point. Our main contribution is indeed theoretical, bridging ERM and Bayesian learning through a parameter $\beta$. This paradigm is a departure from previous works, making direct comparisons challenging in some aspects. Existing variational approximation methods can also be viewed as a compromise between ERM and Bayesian learning. However, our method significantly differs from such variational approximations. As demonstrated in our experiments, especially in Fig 3.a, we achieve continuous interpolation. On the other hand, the bias of a variational approximation depends on the expressiveness of the function class, which means that a theoretical guarantee of connecting ERM and Bayesian learning, and continuous control as in Fig 3.a, are not possible. This makes direct comparisons with these variational approximations difficult. Apart from our main theoretical contribution, we offer a practical contribution: we've developed a novel method for estimating the entropy change, as detailed in Sec 8.1. Our method proves to be notably more efficient than prior approaches based on the Hutchinson method. > And $q(\theta)$ is the probability of the parameters after one step of the optimization process > alternative interpretation is that $q$ and $f$ refer to the "final" distribution over parameters (i.e., after many steps of gradient descent). 
Can authors clarify these points? You are correct in the explanation of $q(\theta)$. Both "one step" and "many steps" interpretations are valid and do not affect the analyses within sections 3-6. $f$ can represent any invertible function with an existing Jacobian determinant, encompassing single-step gradient descent (GD), multi-step GD, and gradient flow. Only in Section 7 do we focus on gradient flow to discuss the specific forms of energy and entropy change. > it would be good to be clear about whether these are the contributions of this paper? It would be helpful for readers to know which parts are and are not the contributions of this work The energy and entropy decomposition of the KL divergence in lines 129-130 is based on a straightforward analogy. We do not regard this definition itself as a contribution of this work. Our four main contributions are detailed in lines 68-84. > What is the intuition behind calling the log-determinant term "entropy"? It would be helpful if authors could provide a concrete example for explaining these concepts to familiarize the readers. Consider the one-dimensional function f(x)=2x. The Jacobian here is 2. If the input x is uniformly distributed within [0,1], the output f(x) will be uniformly distributed within [0,2]. The entropy increases by ln(2), which corresponds to the log-determinant. We'll include this example in our revisions to make our definitions more intuitive. > In line 155 the authors proclaim that the equation (2) doesn't "involve any more training". Yes, equation (2) doesn't involve any additional training of the parameter $\theta$ beyond the optimization flow $f$ itself. > Is this gradient flow the same as "f" introduced earlier in section 3? Yes, the gradient flow discussed in line 261 is the same as the abstract "f" discussed in sections 3-6. The analyses in Sections 3-6 deal with "f" in an abstract, general sense. Only in Section 7 do we narrow our analysis to the gradient flow. 
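The ln(2) entropy example in the response above can be checked against the closed-form differential entropy of a uniform distribution, $H(\mathrm{Uniform}(a,b)) = \ln(b-a)$. The snippet below is a generic illustration of the change-of-variables identity, not code from the paper:

```python
import math

# Differential entropy of Uniform(a, b) is ln(b - a).
def uniform_entropy(a, b):
    return math.log(b - a)

# f(x) = 2x maps Uniform(0, 1) onto Uniform(0, 2); its Jacobian is the constant 2.
entropy_in = uniform_entropy(0.0, 1.0)    # ln(1) = 0
entropy_out = uniform_entropy(0.0, 2.0)   # ln(2)
log_det_jacobian = math.log(2.0)

# The entropy gain of the pushforward equals the log-determinant of the Jacobian.
assert abs((entropy_out - entropy_in) - log_det_jacobian) < 1e-12
```

The same identity holds for any invertible map with a well-defined Jacobian determinant, which is the abstract setting of Sections 3-6 discussed above.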
Thank you again for your time and insightful comments. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications and taking the time to answer my questions. Since most of the points raised were with regard to presentation, I hope that authors will take care of them if the paper is accepted. While I do understand the reasoning of authors for studying the infinite width settings, merely tending width to infinity can lead to highly simplified dynamics (kernel gradient descent) that authors rely on to derive their results, which severely limits how predictive and interesting these results are for real settings. For this reason I will keep my score as borderline accept.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper derives an algorithm-dependent PAC-Bayesian generalization bound for infinitely wide neural networks that is based on the KL divergence between the posterior obtained by gradient flow and a Gaussian prior. The work shows how to transform optimization into importance sampling through interpolation, coined TransBL, to trade off generalization and sample efficiency. It also provides a proof of a non-diminishing Hessian trace under certain conditions. Strengths: - The paper is clearly structured and the questions it aims to answer are well motivated. - Interesting function space perspective of neural network training dynamics through analysis of the KL divergence under infinite width limit assumptions. - Bridges the gap between optimization and Bayesian learning problems to allow interpolation between computational efficiency and predictive accuracy. - Additional results on the Hessian trace that could be independently useful. Weaknesses: - Experimental setup sometimes hard to follow It was sometimes hard to follow the setup of the experimental section (maybe dedicate a separate section in the appendices for details). Having a better picture of the details (especially how inference for TransBL is performed) would allow reproducibility but also spotting details that could cause misalignment between theory and empirical results. - Limited experiments The results are promising and clearly show that the theory is well aligned with learning in practice. But it would be interesting to extend this to a slightly wider set of problems or configurations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: a) There seems to be a slight drop in accuracy for TransBL in Figure 3.a. Is it expected that this is due to noise? b) The work seems to rely on gradient flow with infinitesimal step size, whereas regular neural networks are typically trained with finite step sizes. To what extent can we expect similar results still? 
c) What happens empirically for different $\beta$? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - experimental setup could be more clear The experimental set-up is not very clear to me. This makes it harder to judge how well the empirical results align with the theoretical derivations. - discussion on current limitations The paper compares learning in practice with theoretical results. It would be helpful if the paper could also assess in what cases we would expect current results not to hold anymore. - typos In the abstract, the method is called 'TansBL' (twice), whereas the paper uses 'TransBL'. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate your recognition of the clarity, motivation, and contributions of our work. > It was sometimes hard to follow the setup of the experimental section (maybe dedicate a separate section in the appendices for details). Having a better picture of the details (especially how inference for TransBL is performed) would allow reproducibility but also spot details that could cause misalignment between theory and empirical results. Thank you for the suggestion. We will incorporate a more detailed description of the experimental setup in the appendices, providing specifics like the ensemble sampling used and an explanation for the shading in Fig 3.a. To address reproducibility concerns, we've provided the code with seeds in the supplementary material to ensure consistent replication of our results. > it would be interesting to extend this to a slightly wider set of problems or configurations We appreciate your recommendation. One of the main challenges in expanding the experiments is our focus on the distribution obtained from optimization, rather than sampling a few solutions as done in many deep learning studies. For instance, to produce Fig 3.b, we ran over 100,000 optimization processes. This complexity makes it challenging to quickly expand to other configurations. However, we do recognize the importance and potential of broader applicability, and we will explore this in future work. > There seems to be a slight drop in accuracy for TransBL in Figure 3.a. Is it expected that this is due to noise? We believe this is likely due to statistical noise. As indicated by the broad shading (which represents a 3σ interval), the variance is quite high in the early training stages. At this phase, because the initial parameter distribution does not align with the Gibbs distribution, the effective sample size is relatively small, resulting in larger variance. Consequently, the estimated mean can be noisy. 
> The work seems to rely on gradient-flow with infinitesimal step size, whereas regular neural networks are typically trained with finite step sizes. To what extent can we expect similar results still? Thank you for raising this important point. We opted for the infinitesimal step size primarily to simplify the analysis, as it allows us to describe neural network training dynamics with an ODE. We believe the core ideas remain applicable with finite step sizes, although the theoretical analysis would be more challenging. Moreover, in our experiments with actual neural networks, we indeed use finite step sizes and have observed that the empirical results closely align with our theoretical findings, as shown in Fig 2. > What happens empirically for different $\beta$? Empirically, β serves as a control between generalization and sample efficiency. Larger values tend to exhibit behavior closer to a Bayesian posterior, with a rapid decrease in effective sample size but improved generalization, as illustrated in Fig 3.c. Smaller values, on the other hand, show behavior closer to standard deep ensembles. > experimental setup could be more clear We appreciate your suggestion. Due to space limitations, we had to omit certain details from the main paper, like the aforementioned experimental parameters and figure explanations. However, in light of your feedback, we'll ensure that they're included in the appendices to enhance the clarity. > It would be helpful if the paper could also assess in what cases we would expect current results not to hold anymore. This is a pertinent point. Our current analysis relies heavily on overparameterization. Real-world neural networks often do not operate in this regime, creating a discrepancy between the theoretical entropy estimation and reality. However, if an estimate for this can be obtained, our analysis in Sections 3-6 remains valid, and methods like TransBL and weight clipping can still be applied. 
> In the abstract, the method is called 'TansBL' (twice), whereas the paper uses 'TransBL'. Thank you for spotting this oversight. We apologize for the inconsistency and will correct this typo. Again, we appreciate your feedback, which will undoubtedly strengthen our manuscript.
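The "effective sample size" referenced in the responses above is the standard importance-sampling diagnostic $(\sum_i w_i)^2 / \sum_i w_i^2$. The sketch below is a generic implementation of that textbook formula (not the paper's exact estimator), computed from log-weights for numerical stability:

```python
import numpy as np

def effective_sample_size(log_weights):
    """Standard importance-sampling ESS: (sum w)^2 / sum w^2,
    computed from log-weights to avoid overflow/underflow."""
    lw = np.asarray(log_weights, dtype=float)
    lw = lw - lw.max()          # shift cancels in the ratio; stabilizes exp
    w = np.exp(lw)
    return w.sum() ** 2 / (w * w).sum()

# Equal weights: ESS equals the number of samples.
print(effective_sample_size(np.zeros(100)))   # 100.0
# Highly skewed weights: ESS collapses toward 1,
# matching the "rapid decrease in effective sample size" for large beta.
print(effective_sample_size([0.0, -10.0, -10.0, -10.0]))
```

This diagnostic quantifies the generalization/sample-efficiency trade-off controlled by β: the closer the optimization distribution is to the target Gibbs distribution, the more uniform the weights and the larger the ESS.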
null
null
null
null
null
null
An Alternative to Variance: Gini Deviation for Risk-averse Policy Gradient
Accept (poster)
Summary: The paper points to limitations of the mean-variance criterion for risk-averse RL, and of the methods that optimize it. Instead, the paper proposes an alternative risk measure relying on Gini Deviation. The paper shows how to derive the PG for this metric, and demonstrates that it learns risk-averse policies more stably than methods that optimize the mean-variance. Strengths: 1. The idea of using Gini Deviation for risk-averse RL is original as far as I know. 2. The metric is demonstrated as more stable than mean-variance. Weaknesses: Risk-aversion in RL is often handled by optimizing risk measures. Many risk measures have been studied, e.g., mean-variance, VaR, CVaR, entropic, distortion measures (some of them are briefly mentioned in the paper). There are studies about both the properties of different measures and how to optimize them. In particular, commonly desired properties of the metric are captured by the notion of coherent risk measures. For example, mean-variance is not a coherent risk measure, but CVaR is, and many works have recently studied it. A paper that proposes a new risk measure and claims its benefits cannot ignore these works and the notion of coherence. Considering the discussion above: 1. The presentation of the paper's contribution lacks a lot of relevant context, hence the novelty and contribution are difficult to judge. 2. Relevant literature is hardly covered. There is no organized literature survey, and relevant works are not mentioned. 3. The experiments mostly demonstrate the weakness of the mean-variance risk measure, which is already known and cannot justify the suggested metric just by itself. Since the paper's justification is essentially empirical, this is quite a critical issue. Finally, the maze benchmark used in Sec 2.2, Sec 5.1 and Fig 1 is taken directly from [Greenberg et al. 2022], without crediting the original source or citing the corresponding work. 
Notice that using benchmarks and code (even if you modify them) without explicitly citing them in both paper and code may be considered plagiarism. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the issues discussed above under Weaknesses. Additional question: As specified in the paper, the current method uses multiple trajectories for a single policy gradient. However, quantile regression (as in QRDQN) allows to learn quantile value representation without this limitation. Could such quantile regression be used to overcome the multi-trajectory limitation in mean-GD as well? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Algorithmic limitations are fairly discussed in Section 6. Flag For Ethics Review: ['Ethics review needed: Research Integrity Issues (e.g., plagiarism)'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review. We address main concerns below. **For Weaknesses** 1. **Regarding the notion of coherence.** Thanks for pointing out the coherent risk measure. We are aware of this category of risk as mentioned in Section 1. However, as indicated by our title and abstract, we are aiming to address some limitations of variance-related risk measures in this paper, instead of comparing with other coherent risk measures. Both variance and Gini deviation are measures of variability, while CVaR (a coherent risk measure) is a measure for extreme outcomes. They target different application scenarios and users. Gini deviation is known as a **coherent measure of variability** (Line 161), e.g., see [22] (Gini-type measures of risk and variability: Gini shortfall, capital allocations, and heavy-tailed risks). The properties of a coherent measure of variability are different from the properties of a coherent measure of risk, e.g., see Section 2.3 of [22]. We will briefly compare them in the revised version. 2. **Regarding the lack of relevant context, and relevant literature hardly being covered.** Since our paper focuses on variance-related risk measures, i.e., variability, we give a detailed analysis of the mainstream mean-variance methods in Section 2. 3. **The experiments mostly demonstrate the weakness of the mean-variance risk measure, which is already known and cannot justify the suggested metric just by itself.** Thanks for raising this concern. We think there may be a misunderstanding. It is already known that variance is not a coherent risk measure, but this is not the weakness that we demonstrate nor try to resolve. First we would like to invite the reviewer to read Section 2 of [22] (Gini-type measures of risk and variability: Gini shortfall, capital allocations, and heavy-tailed risks). We focus on measures of variability (instead of measures of risk such as CVaR) as formally defined in [22]. 
Measures of variability are location invariant and consider the entire distribution of outcome instead of the tails. They include variance and Gini-deviation, but not CVaR. The confusion stems from the fact that the term "risk" is used casually in RL to refer to both measures of risk and variability. However, in this work we demonstrate that variance is not positive homogeneous (Line 161) and then propose to use Gini-deviation as a replacement since it is positive homogeneous and a coherent measure of variability (but not a coherent measure of risk). We will add text to that effect in the paper. 4. **Regarding the missing citation of [Greenberg et al. 2022]** We acknowledge that the maze problem that we described in Figure 1 is a modified version of the Guarded Maze problem in [Greenberg et al. 2022] and that we failed to cite their work in our paper and our code. We sincerely regret this omission. While we had no intention of misappropriating their work, we now realize that this mistake may be seen as a form of plagiarism. We are taking this seriously and we have resolved this issue directly with the authors. The authors contacted us separately and independently of the NeurIPS reviews to point out the missing citation and work attribution. We exchanged several emails with the authors and quickly fixed this mistake. We ran by the authors a revised version of the manuscript where we did the following changes: 1) In Section 1, second paragraph, we added a citation to [Greenberg et al., 2022] 2) In Section 1, last paragraph, we changed "we created several domains" to "we modified several domains (Guarded Maze [7], Lunar Lander [15], Mujoco [16])". 3) In Figure 1: we indicated that the environment is a modified version of Guarded Maze and added a citation to [Greenberg et al., 2022] 4) In Section 5.1, we added the following explanation. 
"The original Guarded Maze problem is asymmetric with two openings to reach the top path (in contrast to a single opening for the bottom path). In addition, paths via the top tend to be longer than paths via the bottom. We modified the maze to be more symmetric in order to reduce preferences arising from certain exploration strategies that might be biased towards shorter paths or greater openings, which may confound risk aversion." 5) In the appendix, we added a discussion about additional related work in which we cited [Greenberg et al., 2022] After inspecting those changes, Greenberg wrote: "Thank you for your recent messages and for revising the manuscript. I am glad to see that our paper could help you in your very interesting work. The attribution in the revised paper that you sent seems entirely proper to me. I will be happy to see you in a conference." Hence, if our paper is accepted, we will incorporate the changes summarized above that have been approved by Greenberg. **For Questions** 1. **Regarding using QRDQN to learn a quantile value representation** Thanks for this good question. QRDQN is a TD learning method for risk-neutral decision making. Since most risk terms are not time consistent, i.e., optimizing risk at each time step is usually not the same as optimizing the total return risk at the initial state, incorporating quantile TD learning is challenging. We postpone this part to future work. --- Rebuttal Comment 1.1: Comment: I thank the authors and appreciate the detailed response. I accept the responses, and increase my score accordingly. I believe that it will indeed be helpful to specify more clearly the notion of coherent variability and its importance. Regarding the literature: note that the title specifies "risk-averse PG" as the objective, and note that very few of the cited related works are from the last 5 years. 
A paragraph about alternative risk-averse approaches may both help clarifying the context of this paper in the literature, and prevent confusion between different notions of coherence. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and the suggestion! As mentioned in the rebuttal, we will definitely add more explanations to clarify variability and risk and discuss more related work in terms of other risk measures.
Summary: This paper considers the problem of risk-sensitive reinforcement learning in the policy optimization setting, under the mean-variance risk criteria. They provide background on the problem and demonstrate the issues with learning an optimal policy due to the estimation of the variance requiring double samples. They demonstrate that previous approaches to this problem which use the immediate reward variance as a proxy are not desirable (Section 2). They introduce the Gini deviation as a surrogate for the variance, and demonstrate that through a Choquet integral characterization, it can be estimated without the double-sampling issue by leveraging quantile distributional RL. They incorporate this into a PPO-style algorithm and compare its performance against others across a range of risk-sensitive benchmarks. Strengths: - Well presented motivation and contribution. - Novel idea, which achieves strong empirical performance. Weaknesses: - There is little theoretical support for the algorithm, which would strengthen the contribution. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - What is the space $\mathcal{M}^\infty$ introduced in Definition 1? It is never defined. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors discuss limitations of their work, and propose directions for future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. We address main concerns below. + **What is the space $M^\infty$ introduced in Definition 1?** We apologize for this typo. It should be $L^\infty$ to represent the set of bounded random variables, e.g., see original definition in Equation 1 of the paper [Characterization, Robustness and Aggregation of Signed Choquet Integrals]. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for clarifying my confusion, and after reading the other reviews I am inclined to maintain my score.
Summary: The paper proposes to restrict the Gini deviation of the return for risk-averse reinforcement learning (RARL). This is motivated by a discussion of the shortcomings of variance-based RARL methods, such as scale dependence and instability due to large squared returns. A derivation of an expression for the gradient of the Gini deviation of the return serves as a basis for a practical algorithm, mean-GD (MG). It adds a penalty proportional to the Gini deviation to the reinforcement learning objective. The distribution of returns, which is required for the gradient of the Gini deviation of the return, is approximated with a finite number of sampled trajectories. An empirical evaluation of MG in the tabular and function approximation settings shows advantages of MG in learning risk-averse policies over return and reward-based baselines. Strengths: The use of the Gini deviation as a new risk measure for RARL is motivated well by a concise discussion of the limitations of variance-based methods and is, to the best of my knowledge, new. The shortcomings of existing methods are illustrated with the help of a simple maze environment. The development of methods that are less sensitive to hyperparameters and environment properties such as reward scale is furthermore highly relevant for applications. The derivation of a tractable expression for the gradient of the Gini deviation is done carefully and required assumptions are stated clearly. The experimental evaluation furthermore considers a sufficiently broad range of environments (tabular maze over lunar lander to MuJoCo environments) and baselines. Weaknesses: While the derivation of equations (15) and (16) are sufficiently clear, I would appreciate some intuition about the nature of the contribution of the Gini deviation penalty to the parameter update. The current version of the text does not really discuss the proposed algorithm but jumps right into the experimental evaluation. 
I think that for adoption of the algorithm by the community an intuitive understanding would be helpful. There is furthermore a gap between the assumptions for the derivation and the actual environments used for the experiments and the illustration. The maze environment, for example, uses a shortest path objective which implies a discrete return. Many real-world environments will have complicated return distributions with atoms etc. It would be interesting to learn whether this poses a problem or if the assumptions are technical in nature and only required for the proof. The significance of the experimental results is difficult to judge without further information. While I found the used hyperparameters in the supplementary material, I did not find information on how they were chosen. Was a hyperparameter optimization performed and, if yes, was it done manually, with a grid search or with some other algorithm? Without this information the frequent failures of the baselines are difficult to place. I would furthermore suggest to put vertical lines indicating the performance of MVPI in the figures (to avoid the problem of incompatible x axes). Right now it looks a bit like the most competitive baseline is only shown in the supplementary material where it is hard to compare it to the rest. Related to this point: Part of the motivation for the proposed algorithm was to mitigate the need to extensively tune the hyperparameter $\lambda$ for different environments/reward functions. It would therefore be interesting to see how sensitive MG as well as the baselines are to the hyperparameters controlling the risk-averseness. Typos: * In line 42 “We show that our method can learn risk-averse policy […]”, there is an “a” missing. * In line 69 the expression for the gradient of $M(\theta)$ contains an $a$ and a $u(\theta)$ which have not been introduced. Together with a different expression given in line 88, this looks like a typo. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * It seems to me that the equality in line 234 is incorrect as the probability density of sampling a trajectory is not necessarily the same as the probability density of obtaining its reward. An extreme counter example would be an environment with a reward identical to zero but many possible trajectories. I do not think this poses a problem for the further derivation of MG as the set of trajectories that result in the same reward is independent of the policy parameters. However, it would still make sense to correct this formulation for better readability. * As the policy is defined to output values in $[0, 1]$ (line 48), does this mean discrete actions are assumed? * I would suggest to better relate the paragraph on the baselines (starting in line 281) to the discussion of prior methods in section 2. With the current formulation, it is not clear to the reader which baselines correspond to which of the methods discussed in section 2. Adding some clarification here would make interpreting the results a lot easier. * I would be interested in learning why most of the environments were specifically designed for this paper. Are no standard environments for RARL available that would have been useful? * Would it be possible to use the standard deviation instead of the variance to get scale invariance? * Is the variance of the estimate for the CDF in equation (16) a problem? * Related to the last question: Would it be possible to learn an approximation of the return distribution to reduce variance and get better convergence of the policy? The return distribution is clearly non-stationary but tracking non-stationary functions with estimates is not uncommon in reinforcement learning. Maybe this could help with asymptotic performance. * Why is MG performing poorly on HalfCheetah while MVPI seems to do well? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of the MG algorithm, mainly low sample efficiency, have been addressed in a separate paragraph. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. We address main concerns below. **For Weaknesses** 1.**The current version does not really discuss the proposed algorithm but jumps right into the experiments.** The full algorithms are in Section 9 of the appendix due to the page limit. The procedure of the algorithm is: (1) collect a group of trajectories; (2) calculate the risk-neutral policy gradient; (3) sort the trajectories by return and calculate the Gini deviation policy gradient; (4) update the policy by adding them together with a trade-off $\lambda$. 2.**Regarding the gap between the assumptions and the actual environments.** This gap can be easily closed since we can always add a truncated Gaussian noise to the reward to make the return continuous without changing the optimal solution. This assumption of continuity of the return variable is also common in the literature, e.g., see Section 2 of [Optimizing the CVaR via Sampling]. 3.**Regarding the hyperparameters.** Since most of the compared methods are also compared in the MVPI paper, e.g., Tamar, MVP, MVPI, TD3, we take the parameter search range in MVPI as a reference and search manually ourselves. We ensure each algorithm sweeps over the same number of hyperparameter combinations to make a fair comparison. We will add more details in the revised version. 4.**Right now it looks a bit like the most competitive baseline is only shown in the supplementary material.** Thanks for raising this concern. MVPI is a TD learning method, i.e., it updates parameters at each environment step. Thus, it is not straightforward to compare with episode-based methods. In addition, the MVPI plots were in the appendix mainly due to the page limit. If the paper is accepted, we will move the figure to the main content using the extra page for accepted papers. 
5.**Regarding the sensitivity to lambda** We show the learning curves of total-return-based methods, e.g., MVO, Tamar, MVP, MG with different lambdas in the Maze domain in the uploaded PDF file. **For Questions** 1.**Regarding the equation in line 234. An extreme counter example would be an environment with a reward identical to zero but many possible trajectories.** We think this one is closely related to the previous question on the continuity assumption of the return variable. The counter example here conflicts with this assumption. 2.**As the policy is defined in [0, 1] (line 48), do you assume discrete actions?** We apologize for this typo. We do not assume the actions to be discrete. We will correct this in the next version. 3.**Regarding the environment design, and standard environments for RARL?** The reason for the specific design is that the evaluations in previous mean-variance papers are either done (a) in small domains but without a clear understanding of why a particular type of algorithm can succeed or fail (we did this in Maze) or (b) in large domains but where verifying the risk-aversion is not very straightforward (e.g., MVPI adds action noise to Mujoco. It reports the mean-variance score but it is unclear what a risk-averse policy is or whether risk-aversion is achieved). Thus we modify several domains where the risk-aversion can be clearly defined to better evaluate each algorithm. For standard environments for RARL, as far as we know, there is no common benchmark across the previous mean-variance papers in Section 2. For example, Tamar's paper used portfolio management. MVP's paper used portfolio management, American-style options, and optimal stopping. MVPI's paper used Mujoco with noisy actions. 4.**Regarding using standard deviation to get scale invariance** First, though Std is related to signed Choquet integrals, it is a supremum over a family of signed Choquet integrals, e.g., see Example 3 in the paper [Characterization, Robustness and Aggregation of Signed Choquet Integrals]. 
The gradient calculation is in general hard. Second, directly computing the gradient of Std is possible: since $Std = Var^{\frac{1}{2}}$, we have $\nabla Std = \frac{1}{2}\frac{1}{\sqrt{Var}}\nabla Var$. This gradient may still suffer from numerical scale issues due to the $\nabla Var$ term. 5. **Regarding the variance of the estimator** To get an intuitive understanding of the variance, we can compare our updating rule with REINFORCE. In REINFORCE, the return term is multiplied by the sum of log pi. The variance of $\eta_i$ depends on the variance of ordered samples, which may not have a closed form. However, compared with the return in REINFORCE, $\eta_i$ is calculated based on the differences between ordered returns times a value in (0,1), which has a much smaller numerical scale, and thus smaller variability. 6. **Would it be possible to learn an approximation of the return distribution to reduce variance and get better convergence?** Thanks for this suggestion. Learning a return distribution function can leverage distributional RL. However, due to the time inconsistency of the risk term, incorporating TD methods (distributional RL uses TD learning) is challenging, since optimizing the risk at each time step is usually not the same as optimizing the total return risk at the initial state. We leave this part as future work. 7. **Why is MG performing poorly on HalfCheetah while MVPI seems to do well?** We designed two HalfCheetah domains (the second in the Appendix). One adds a noisy reward based on X-location (X<-3). In this domain, MVPI performs well since (a) MVPI is built on top of TD3, while MG is built on top of PPO, and TD3 learns faster and better than PPO in this domain as shown in Figure 5 of the TD3 paper [Addressing Function Approximation Error in Actor-Critic Methods]; (b) MVPI's reward modification strategy naturally prevents it from visiting the X<-3 region. The other adds a linearly decayed noisy reward based on the distance the agent has covered. 
The distance that the agents covered is shown in Figure 19 (Section 11). It is clear that MVPI gets stuck in this domain, i.e., its final X location is close to the origin. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the detailed response. I appreciate their explanations on how to close the gap between Assumptions 1 to 3 and the benchmarking environments, on the choice of benchmarking environments, on possible downsides of substituting the standard deviation for the Gini deviation, on the variance of the estimator for the GD gradient, on the challenges associated with leveraging distributional RL for RARL, and on the performance of MVPI on two versions of the HalfCheetah environment. It is a useful addition to the paper that the authors will discuss the process of hyperparameter tuning in the revised paper, and will integrate the MVPI results better into the main text. I would encourage them to also include the new experiments on the sensitivity with respect to the lambda parameter. Regarding the equation in line 234, I thank the authors for pointing out that the counter example I gave violates Assumption 1. The example was intended to illustrate that, in general, the probability (density) of obtaining a specific trajectory is not the same as the probability (density) of obtaining its return, as multiple trajectories could lead to the same return. For this reason line 234 appears imprecise to me. This is only a minor point, however, and does not affect the validity of the results. I would leave it to the authors to decide whether to change this formulation. I furthermore appreciate that the issue of a missing citation, pointed out by another reviewer, was resolved. I consider my questions and concerns sufficiently addressed, thank you. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and the suggestion! We will add more explanation to Line 234 for better readability.
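The four-step procedure summarized in point 1 of the rebuttal above can be sketched in code. This is a minimal illustration under our own assumptions (a REINFORCE-style gradient, order-statistic weights $\eta_i$ with the sum taken over $j = i, \dots, n-1$); all names are ours, and this is not the authors' implementation.

```python
# Minimal sketch (our illustration, not the authors' implementation) of the
# four-step mean-Gini-deviation update: collect trajectories, compute the
# risk-neutral gradient, sort by return to form the Gini deviation gradient,
# and combine the two with a trade-off lambda.

def scale(vec, c):
    return [c * x for x in vec]

def vec_mean(vecs):
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def mean_gd_gradient(trajectories, lam):
    """trajectories: list of (return G_k, summed grad-log-prob vector) pairs."""
    n = len(trajectories)
    # Steps 1-2: risk-neutral policy gradient, averaging G_k * sum_t grad log pi.
    risk_neutral = vec_mean([scale(g, G) for G, g in trajectories])
    # Step 3: sort by return; weight each trajectory by eta_i, built from
    # differences of ordered returns (assumed index convention: j = i..n-1).
    ordered = sorted(trajectories, key=lambda t: t[0])
    returns = [G for G, _ in ordered]
    gd_terms = []
    for i in range(n - 1):
        eta = sum(2 * (j + 1) / n * (returns[j + 1] - returns[j])
                  for j in range(i, n - 1)) - (returns[-1] - returns[i])
        gd_terms.append(scale(ordered[i][1], eta))
    gd_grad = vec_mean(gd_terms)
    # Step 4: ascend the mean, penalize the deviation with trade-off lambda.
    return [rn - lam * gd for rn, gd in zip(risk_neutral, gd_grad)]
```

With `lam = 0` this reduces to the plain REINFORCE estimate, which makes the trade-off role of $\lambda$ explicit.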
Summary: This paper presents a new approach to measuring risk for risk-averse reinforcement learning problems. To be more precise, traditional methods usually use the variance of the total return or of the per-step reward as the measurement of risk, since it is known that the variance is easy to calculate. Though easy to calculate, variance-based methods are observed to be sensitive to numerical scale, making the policy hard to optimize. Based on this observation, the authors propose to use the Gini deviation as an alternative to variance for measuring risk. The authors study the properties of the new measurement and present how to estimate the Gini deviation using empirical samples. Empirical experiments demonstrate that the newly proposed methods can mitigate the limitations of variance-based methods while still learning high-reward returns, compared with previous methods. Strengths: - The paper is well written, and the authors did an excellent job of presenting the limitations of variance-based methods before introducing the Gini deviation. As a result, readers can get a sense of the motivation for using the Gini deviation as an alternative. - The authors thoroughly introduce the properties of the Gini deviation and present how to estimate this quantity using empirical samples. Besides, the authors also introduce how to estimate this quantity in the setting of importance sampling and PPO. I think through this, readers can get a better understanding of how to use the quantity in different variants of policy gradient methods. - The authors also demonstrate the advantage of the Gini deviation via empirical experiments, for tabular, discrete, and continuous control settings. 
Weaknesses: - Though the authors show that the Gini deviation can be estimated via empirical samples, my primary concern is that it does not have a closed form, so its estimation depends entirely on the quality of the samples; if we do not have enough samples, then the variance or the bias of the estimate can lead to undesired optimization behavior. - Since the Gini deviation requires estimating quantiles, it may introduce additional complexity compared with variance-based methods. - The authors claim that variance-based methods might be sensitive to numerical scales, but I have not seen any ablation study directly comparing this with the Gini deviation-based policy gradient method. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Can you clarify what the drawback would be if we use empirical samples to estimate the Gini deviation? For example, what are the variance and bias of the new estimator based on empirical samples, if any? - Would you mind explaining the complexity of the newly proposed method? - Also, is it possible to demonstrate that, with the new proposed measurement, the estimator would be robust to numerical scales? (Maybe no need for empirical experiments; an intuitive explanation would also be okay). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. We find the questions in "Weaknesses" and "Questions" are closely related. We combine the related ones and address the main concerns below. 1. **Regarding the drawback, bias, and variance of the sample-based estimator** As discussed in the limitation section (Section 6), the drawback of our sample-based estimator is that we need a sufficient number of samples for each update due to estimating the CDF. However, we would also like to highlight that using multiple trajectory samples to estimate the CDF or inverse CDF is also common in static CVaR-related policy gradient methods, e.g., see the papers [Optimizing the CVaR via sampling; Efficient risk-averse reinforcement learning]. Generally, rigorously characterizing the bias and variance of $\eta_i$ in Equation 16 requires the true CDF, which is unknown. Also, the bias and variance of $\eta_i$ involve the bias and variance of ordered samples, which are also complex. We give an intuitive analysis first and then demonstrate by simulation. The bias of $\eta_i$ comes from the difference between the empirical CDF and the true CDF. This bias will decrease when the sample size increases, as the empirical CDF will be closer to the true CDF. For the variance, we can compare our updating rule with REINFORCE. In REINFORCE, the return term is multiplied by the sum of log pi, while in the GD policy gradient, $\eta_i$ is multiplied by the sum of log pi. Compared with this return term, $\eta_i$ is calculated based on the differences between ordered returns times a value in (0,1), which has a much smaller numerical scale, and thus smaller variability. We demonstrate the bias and variance by simulating from a truncated Gaussian. Suppose ordered samples $x_{(1)}, x_{(2)},...,x_{(n)}$ (i.e., the samples are sorted in increasing order) are from a truncated Gaussian on the range [-25, 25] with mean 0 and variance $10^2$. 
For $\eta_i=\sum_{j=i}^{n-1} \frac{2j}{n} (x_{(j+1)} - x_{(j)})-(x_{(n)}-x_{(i)})$, the bias is $\delta_i = \eta_i - \int_{x_{(i)}}^{25} 2F(t)dt + (25 - x_{(i)})$. We can estimate the bias as $B=\mathbb{E}[\frac{1}{n-1}\sum_{i=1}^{n-1}\delta_i]$. When $n=10$, $B\approx -9.32$. When $n=50$, $B\approx -4.53$. When $n=100$, $B\approx -2.97$. The bias decreases as the sample size increases. We can estimate the variance as $V=\mathbb{E}[\frac{1}{n-1}\sum_{i=1}^{n-1}(\eta_i - \bar{\eta})^2]$, where $\bar{\eta}=\frac{1}{n-1}\sum_{i=1}^{n-1}\eta_i$. When $n=10$, $V\approx 6.96$. When $n=50$, $V\approx 10.79$. When $n=100$, $V\approx 11.66$. This variance is much smaller than the variance of the sampling distribution, i.e., 100. Here we can treat $x_{(i)}$ as the return term used in REINFORCE and $\eta_i$ as the term in the GD gradient. This indicates that the variance of $\eta_i$ in the GD gradient is relatively small. 2. **Regarding the complexity of the newly proposed method** Since we are updating gradients based on episodes, we suppose that computing the gradient for one trajectory takes one time unit. Suppose the total number of training trajectories is $N$. Consider the mean-GD gradient built on top of PPO: based on importance sampling, the gradient for each trajectory is updated a fixed number of times $m$. Then the total complexity is $mN$ time units. When the mean-GD gradient is built on top of the REINFORCE baseline, the complexity is in general between $N$ and $mN$ time units, since the importance sampling update is terminated if the ratio is too extreme. 3. **Regarding the intuition for GD being more robust to numerical scales than variance** Thanks for this good question. The intuition directly comes from the definitions of Gini deviation and variance, as shown in Equation 6 and in Line 149. Thus $\mathbb{V}[cX]=c^2\mathbb{V}[X]$, while $\mathbb{D}[cX]=c\mathbb{D}[X]$ for $c>0$. We also highlight this property in Line 159. 
As a result, scaling the reward or the return will scale the Gini deviation linearly, but the variance quadratically. 4. **Regarding comparing GD and variance with respect to the sensitivity to numerical scales** We directly compare mean-variance with mean-Gini deviation in Section 5.1. Following Line 299 in the paper, i.e., the sentence "The failure of variance-based baselines under simple reward manipulation", we provide an analysis of our experiments by modifying the numerical scale of the goal reward of the maze environment. This simple manipulation affects the mean-variance methods, but makes little difference to mean-Gini deviation, as shown in Figure 2. --- Rebuttal Comment 1.1: Title: Response Comment: Thank the authors for answering my questions. I will keep my score and vote for acceptance. Please incorporate the related discussion in your next version.
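The truncated-Gaussian bias/variance simulation described in point 1 of the rebuttal above can be reproduced along the following lines. This is our own hedged reconstruction: the trial count, the midpoint quadrature used for the true-CDF integral, and the index convention $j = i, \dots, n-1$ inside $\eta_i$ are our choices, not the authors' script.

```python
import math
import random
import statistics

# Hedged reconstruction (not the authors' script) of the rebuttal's simulation
# for the order-statistic estimator eta_i over a truncated Gaussian on
# [-25, 25] with mean 0 and standard deviation 10.

A, B_LIM, MU, SIGMA = -25.0, 25.0, 0.0, 10.0
ND = statistics.NormalDist(MU, SIGMA)
FA, FB = ND.cdf(A), ND.cdf(B_LIM)

def F(t):
    """CDF of the truncated Gaussian."""
    return (ND.cdf(t) - FA) / (FB - FA)

def sample(rng):
    """Inverse-transform sampling from the truncated Gaussian."""
    return ND.inv_cdf(FA + rng.random() * (FB - FA))

def eta(xs, i, n):
    """eta_i from ordered samples xs (0-based i in [0, n-2])."""
    s = sum(2 * (j + 1) / n * (xs[j + 1] - xs[j]) for j in range(i, n - 1))
    return s - (xs[-1] - xs[i])

def target(x, steps=100):
    """Midpoint quadrature of int_x^25 2 F(t) dt, minus (25 - x)."""
    h = (B_LIM - x) / steps
    integral = 2 * h * sum(F(x + (k + 0.5) * h) for k in range(steps))
    return integral - (B_LIM - x)

def bias_and_variance(n, trials=100, seed=0):
    rng = random.Random(seed)
    biases, variances = [], []
    for _ in range(trials):
        xs = sorted(sample(rng) for _ in range(n))
        etas = [eta(xs, i, n) for i in range(n - 1)]
        deltas = [etas[i] - target(xs[i]) for i in range(n - 1)]
        biases.append(sum(deltas) / (n - 1))
        m = sum(etas) / (n - 1)
        variances.append(sum((e - m) ** 2 for e in etas) / (n - 1))
    return sum(biases) / trials, sum(variances) / trials
```

Under this setup the empirical bias shrinks toward zero as $n$ grows, and the variance of $\eta_i$ stays far below the sampling variance of $10^2$, matching the trend the authors report.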
Rebuttal 1: Rebuttal: We include a PDF file which contains additional results requested by reviewer cQhS, regarding the sensitivity to $\lambda$ Pdf: /pdf/c71d4827f335a541ccb794d3c3c105e769180bd8.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper discusses the challenges of mean-variance RARL methods, and proposes the Gini deviation as a substitute to variance penalization. Strengths: The paper is clearly written and well-motivated (by the disadvantages of return and instantaneous reward variance). The proposed method is evaluated in several experiments with comparator baselines, which seem convincing. Weaknesses: It seems the proposed algorithm has to sort its samples in every iteration (L245-248) which seems computationally expensive, and which may be sensitive to the parameters chosen for $n$? While I recognize that this is a largely empirical paper, I suppose the paper could have been made more complete by providing some convergence guarantees for policy gradient, e.g., to a stationary point. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Included in “weaknesses”. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive review. We address the main concerns below. + **It seems the proposed algorithm has to sort its samples in every iteration (L245-248) which seems computationally expensive, and which may be sensitive to the parameters chosen for $n$?** Each sample here corresponds to a trajectory's return. In practice, sorting the samples in our experiments is in general not very time consuming, since the maximum number of trajectory returns to sort is only 50 (in the maze problem in Section 5.1). Regarding the sensitivity, we did not find it difficult to choose $n$ in the experiments. For instance, $n$=10 in HalfCheetah and Swimmer (in Section 5.3) is reasonably good to achieve a low chance of visiting the high variance region in our experiments.
null
null
null
null
null
null
POMDP Planning for Object Search in Partially Unknown Environment
Accept (poster)
Summary: This paper addresses the problem of finding efficient policies for a robot to search for a known target object in a physical environment. The robot can find the target object either by moving around to change its field of view or by moving objects that occlude the target object to reveal it. While the robot has access to 3D point clouds and occupancy grid maps of the environment and included furniture, objects on the furniture are not included in the occupancy grid maps. The authors frame this as a POMDP as the state of the environment is partially observed. The authors propose a novel POMDP solver – Growing Partially Observable Monte-Carlo Planner (GPOMCP) – which reuses the previous belief tree under very specific conditions as defined in Theorem 1 and Corollaries 1 and 2 – where the optimal action obtained by reusing the tree will be the same as with direct sampling. Overall this is an important practical problem for robots and a new technique for addressing the infeasibility of exactly solving the resulting POMDP is welcome. I appreciate the responses of the authors during the rebuttal phase and have raised my rating to Accept. Strengths: The authors clearly explain their approach for action execution using the ROS interfaces move-base, ros_control and moveit, perception for object detection using point cloud segmentation, point cloud re-projection, sub-image fusion, YOLO detection and SIFT matching, and an estimation of each object's move-ability using the Grasp Pose Detection toolbox, k-means clustering and moveit. The authors explain their planning method, GPOMCP, including their use of the belief tree, and provide appropriate steps, equations, theorems, and corollaries. 
The authors present simulation results using Gazebo across four environment scenarios based on whether the target object is loose, hidden, or covered, and compare their proposed method, with the Bellman update (method 1) and the Monte Carlo update (method 2), against the primary POMCP method without the fake object (method 3) and with the fake object (method 4). The authors also introduce the concept of a “fake object” (last paragraph of Section 1), which represents a fully occluded object to facilitate creating a distribution for the pose of this object. The authors present results from a Gazebo simulation system comparing their algorithm against prior baselines in 4 levels of difficulty based on the visibility of the target object. Table 1 indicates that the GPOMCP planner more successfully locates the target object and with fewer steps. Weaknesses: This paper addresses an interesting problem but the Prior Work section misses important precedents. In line 105, the authors state that when the target object is fully occluded, this presents a new challenge to the best of their knowledge. However, this problem – where the target object is fully occluded – is well known in the robotics literature, where it is often referred to as “Mechanical Search”. The problem has also been posed as a POMDP and addressed with Monte-Carlo Tree Search methods (see references below; 1 uses MC Trees). The authors did not include a Limitations section; this must be fixed if the paper is to be accepted. The paper would be improved with more careful proofreading to make the English clearer; for example, the term “fake object” seems inappropriate. A better term might be “hidden object”, as there is nothing “fake” about this, and a more detailed description of how the distribution evolves in the experiments under the “Covered” condition would help. Potentially Relevant Prior Work: Huang, H. et al. (2023). Mechanical Search on Shelves with Efficient Stacking and Destacking of Objects. 
In: Billard, A., Asfour, T., Khatib, O. (eds) Robotics Research. ISRR 2022. Springer Proceedings in Advanced Robotics, vol 27. Springer, Cham. https://doi.org/10.1007/978-3-031-25555-7_14 Danielczuk, M., et al.: Mechanical search: multi-step retrieval of a target object occluded by clutter. In: 2019 International Conference on Robotics and Automation (ICRA), pp. 1614–1621. IEEE (2019) Kurenkov, A., et al.: Visuomotor mechanical search: learning to retrieve target objects in clutter. In: 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 8408–8414. IEEE (2020) Yang, Y., Liang, H., Choi, C.: A deep learning approach to grasping the invisible. IEEE Robot. Autom. Lett. 5(2), 2232–2239 (2020) Bejjani, W., Agboh, W.C., Dogar, M.R., Leonetti, M.: Occlusion-aware search for object retrieval in clutter. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 4678–4685. IEEE (2021) Gupta, M., Rühr, T., Beetz, M., Sukhatme, G.S.: Interactive environment exploration in clutter. In: 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 5265–5272. IEEE (2013) Huang, H., et al.: Mechanical search on shelves using lateral access x-ray. In: 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 2045–2052. IEEE (2021) Chen, L.Y., Huang, H., Danielczuk, M., Ichnowski, J., Goldberg, K.: Optimal shelf arrangement to minimize robot retrieval time. In: 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE), pp. 993–1000. IEEE (2022) Nakhimovich, D., Miao, Y. and Bekris, K.E., 2023. Resolution Complete In-Place Object Retrieval given Known Object Models. arXiv preprint arXiv:2303.14562. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In lines 53-54, the authors claim that the method will always converge to a delta distribution at the correct target pose. 
It would be helpful to justify with a proof, or consider conditions where the method may not converge or may converge to the incorrect object pose. The paper would also be improved with more details on the specific experiments as Figure 4 is difficult to follow (is the blue box the target in all 4 examples? This is never stated). On a minor note, the paper has some typos or style choices that need correction, such as: Line 68: ‘an action’ not ‘a action.’ Line 145: ‘shows’ not ‘to show .’ Line 147: ‘and’ not ‘and but .’ Line 290: ‘rebuilt’ not ‘rebuild’ The authors did not include a Limitations section, this must be fixed if the paper is to be accepted. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not include a Limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response about weakness}$: W1. This paper... $\textbf{Answer}$: We appreciate the reviewer's insightful feedback regarding the Prior Work section. Apologies for overlooking the concept of 'Mechanical Search' in our initial version. We have revised the paper and connected our framework with “Mechanical Search”. Some related papers have been added in the revised version, especially the concept-related paper (Mechanical search: multi-step retrieval of a target object occluded by clutter) and the MCTS-based paper (Mechanical Search on Shelves with Efficient Stacking and Destacking of Objects). W2. The authors... and Q4. The authors... $\textbf{Answer}$: Thanks. Please refer to the “Limitation” section of the global comments file for more details. W3. The paper... $\textbf{Answer}$: Thanks. Please refer to the “Fake object” section. The distribution evolution in the $Covered_1$ case follows a similar pattern to other scenarios. Initially, a uniform distribution spreads across the grid (the odds are 1 for each grid cell), followed by odds updates using Eq. 1, inspired by occupancy grid map updates, adapting to field of view (FOV) changes: • In FOV areas where no objects are detected, the log-odds value is incremented by a negative value, indicating that the grid cell is unlikely to contain the target object. • In FOV areas where non-target objects are detected, the log-odds value is augmented by a negative value corresponding to the object detection confidence (<0.5), signifying that the grid cell is less likely to contain the target object. • In FOV areas where the target object is detected, the log-odds rise positively with confidence (>0.5), signifying a higher target likelihood. 
To provide a clearer understanding of the odds updating process in the $Covered_1$ case, we manually executed a designated sequence of actions: “move_head_5-move_base_0-move_base_1-move_base_2-move_base_3-move_lift_2-move_base_0-move_base_1-move_base_2-move_base_3-remove_object_4-move_head_8-move_base_0-move_base_1-move_base_2-move_base_3” for our method with 2 cm (we are here to correct a typo in our previous submission: all the experiments are implemented at 2 cm resolution instead of 5 cm) and 10 cm resolution, and Fig. 3 shows the resulting grid world probability distribution changes. In the last two images, note the two green dotted circles. For the one with a 10 cm resolution, the updated probability corresponding to the target object is not predicted well (a near-0 value) after all actions. Our 2 cm method offers a better prediction (~1), benefiting from the finer object resolution. This detail aids the reliability of the updates, especially for small targets. This advantage becomes particularly pronounced when dealing with smaller target objects, at the cost of slightly higher computation (but odds updating is commonly very cheap). The salient observation is that when the resolution is relatively large, grid probability updates lose precision, particularly when the target object is situated close to an obstacle. The use of a larger resolution not only fails to enhance our method's performance but also introduces potential distortions in the information of the fake object. As evidence, we present statistical results for the covered scenario using a 10 cm resolution: 24803.5±3792.3|39.9±6.9|60%. It is worse than POMCP without the fake object in success rate. Recommendation: resolution < object size for accurate odds updating. $\textbf{Response to questions}$: Q1. In lines 53-54... $\textbf{Answer}$: Thanks. Apologies for the seemingly overly strong claim, especially in the context of the delta distribution. 
For instance, when dealing with multiple target objects situated at different positions, the resulting distribution may not exhibit the characteristics of a delta distribution. I've adjusted the claim: “As new accurate observations are perceived, the robot refines its belief about the fake objects, and eventually the belief has a high chance to converge to the correct target object positions based on suitable parameter settings.” However, it is our intention to demonstrate that, in general, the probability tends to behave in a certain manner. Assume all grids are updated at least $N$ times in the whole action sequence with $N_1 \ge N$ observations and that, in each observation $z_t$, any $t\in[0, N_1]$, the values $\log Odd(g_i|z_t), i=1,2,3,4$ corresponding to the different cases (a. the grid $g_1$ in the FOV where no objects are detected, b. $g_2$ in the FOV area where non-target objects are detected, c. $g_3$ in the FOV where the target object is detected, d. $g_4$ outside the FOV area) are accurate enough to satisfy $\log Odd(g_i|z_t) < -\Delta < 0, i = 1,2$, $\log Odd(g_i|z_t) > \Delta > 0, i=3$, and $\log Odd(g_i|z_t) = 0, i=4$. We have: $\log Odd(g_i|z_{1:N_1}) \le \log Odd(g_i|z_{1:N}) < -N\Delta, i=1,2$ and $\log Odd(g_i|z_{1:N_1}) \ge \log Odd(g_i|z_{1:N}) > N\Delta, i=3$, so we have: $Odd(g_i|z_{1:N_1}) < \exp(-N\Delta), i=1,2$, and $Odd(g_i|z_{1:N_1}) > \exp(N\Delta), i=3$. As $N\to+\infty$, $Odd(g_i|z_{1:N_1})\to 0$ and $P(g_i|z_{1:N_1}) = Odd(g_i|z_{1:N_1})/(1+Odd(g_i|z_{1:N_1}))\to 0$ for $i=1,2$, while $Odd(g_i|z_{1:N_1})\to+\infty$ and $P(g_i|z_{1:N_1})\to 1$ for $i=3$. Normalized probabilities (0-1 range) across the grid world lead to convergence towards the actual target object grid cells. This progressive refinement of probabilities contributes to a more precise understanding. Q2. The paper... $\textbf{Answer}$: Thanks. 
We've added sentences clarifying the target object and other specifics, including “The comparison is implemented based on 4 different scenarios with different object numbers, including LOOSE_1 (4 objects in 1 workspace), LOOSE_2 (6 objects in 2 workspaces), Hidden_1 (7 objects in 1 workspace), and Covered_1 (7 objects in 1 workspace), as shown in Fig. 4. The target object is the blue snack box. Only the black headset box and the dish rack are not movable, and others are all movable.” Q3. On a... $\textbf{Answer}$: Thanks for your thorough review. Typos have been corrected in the revision. --- Rebuttal Comment 1.1: Title: Thank you for Authors Response to my Review Comment: I appreciate the authors' response to my review. I have also carefully read the other reviews and responses. I appreciate the clarifications and thoughtful Limitations sections that the authors will add to the final paper. I am willing to increase my rating from Borderline Accept to Accept but am very open to the discussion with all reviewers in the next phase. --- Reply to Comment 1.1.1: Title: Appreciation for Your Responses and Possible Score Update Comment: Thanks for your kind feedback on possible score updating. We will continue to improve our final version and try to include all the discussed content in our last version, especially for the Limitation Section (main document) and ablation study (supplementary material). Please feel free to let us know if you have any questions or further suggestions.
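The log-odds update rule sketched in the answers above (negative increments for empty or non-target FOV cells, positive increments for target detections, no change outside the FOV) can be illustrated with a small sketch; the cell labels and detection confidences here are our illustrative assumptions, not the paper's parameters.

```python
import math

# Illustrative sketch (our assumptions, not the paper's exact parameters) of
# the per-observation log-odds belief update over grid cells.

def confidence_to_log_odds(c):
    """log-odds for a detection confidence c in (0, 1): negative if c < 0.5."""
    return math.log(c / (1.0 - c))

def update(log_odds, increments):
    """One observation: add each cell's log-odds increment in place."""
    for cell, inc in increments.items():
        log_odds[cell] += inc
    return log_odds

def prob(log_odd):
    """Convert log-odds back to a probability."""
    odd = math.exp(log_odd)
    return odd / (1.0 + odd)

# Uniform prior: odds 1 per cell, i.e. log-odds 0.
grid = {"empty_fov": 0.0, "non_target_fov": 0.0, "target_fov": 0.0, "outside": 0.0}
per_obs = {
    "empty_fov": confidence_to_log_odds(0.2),       # negative: nothing detected
    "non_target_fov": confidence_to_log_odds(0.3),  # negative: confidence < 0.5
    "target_fov": confidence_to_log_odds(0.9),      # positive: confidence > 0.5
    "outside": 0.0,                                 # no update outside the FOV
}
for _ in range(20):  # N repeated, consistent observations
    update(grid, per_obs)
```

After enough consistent observations, $P$ at the target cell approaches 1 while the other in-FOV cells decay toward 0, which is the convergence behavior argued in the reply above.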
Summary: This paper presents a POMDP based approach for finding objects in partially unknown environments. They present a planning algorithm that reuses the belief tree and uses a fake object to guide the search. The system is tested in four simulated scenes in Gazebo and outperforms a POMCP baseline. Strengths: - The proposed system achieves better performance than the baseline in several experiments. Weaknesses: - The main experiment of the paper is a set of comparisons to POMCP [13], which was published in 2010. The related works section of the proposed work mentions several other more modern approaches. It is not clear why these works were not also compared to. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Are the object positions the same in the 20 trials for each scenario or are they randomized? If they are the same, only testing on four different configurations of objects is too few, especially when the experiments are done in simulation. - I am confused about how the fake object is used and what the intuition behind it is. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are only very briefly discussed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response about weakness}$: W1. The main experiment of the paper is a set of comparisons to POMCP [13], which was published in 2010. The related works section of the proposed work mentions several other more modern approaches. It is not clear why these works were not also compared to. $\textbf{Answer}$: Thank you very much for this comment. Please refer to the “Baseline methods” section of the global comments file for more details. $\textbf{Response to questions}$: Q1. Are the object positions the same in the 20 trials for each scenario or are they randomized? If they are the same, only testing on four different configurations of objects is too few, especially when the experiments are done in simulation. $\textbf{Answer}$: Thanks for your comments. Yes, they are the same, not randomized. However, we would like to point out that in the area of this paper's main contribution (online POMDP solving and its application in robotics systems), such evaluation is common and acceptable, the main reason being that the methods under this approach have little (if not minimal) reliance on data. Our method does fall under the category of randomized algorithms, and for this purpose, we did evaluate each scenario 20x with different seeds for the random number generator and presented the results with their 95% confidence intervals. Moreover, our presented scenario settings are specifically designed to be more difficult/challenging than randomized ones, especially for the covered and hidden cases. Randomized cases often lead to simple occlusion relationships (easy to observe in one or multiple poses, similar to the “loose” cases). Last but not least, we do recognize the importance of understanding the effects of the different components and parameters of the method on its overall performance. To this end, we have taken the initiative to introduce an ablation study that systematically spans an escalating range of object quantities, ranging from 2 to 10 at intervals of 2. 
The scenarios, as depicted in Figure 1 of the attached pdf file, comprise 20 trials each, all adhering to a consistent planning time limit of 60 seconds per step. The results of these trials are documented in Figure 2 of the attached pdf file, which we hope provides comprehensive statistical insight into the performance of our method. Our method shows more benefit when the number of objects is relatively large (blue box areas in Figure 2), because we reuse the useful belief tree and avoid the branch-cutting caused by newly detected objects, which is more common in scenarios with more objects. Q2. I am confused about how the fake object is used and what the intuition behind it is. $\textbf{Answer}$: Thanks for pointing this out. Please refer to the “Fake object” section of the global comments file for more details. $\textbf{Response to limitation}$: L1. Limitations are only very briefly discussed in the conclusion. $\textbf{Answer}$: Thank you for the constructive question. Please refer to the “Limitation” section of the global comments file for more details.
Summary: This paper proposes a method for mobile robots to efficiently search for objects in complex environments. They use a Partially Observable Markov Decision Process (POMDP) formulation and a planning algorithm called Growing Partially Observable Monte-Carlo Planning (GPOMCP) to improve the robot's success rate and speed in finding target objects. Strengths: * A Monte-Carlo planning method to perform object search in the indoor environment based on the grid-type 3D representation. * A systematic framework and experiments demonstrate the effectiveness of the whole sense-plan-act pipeline. * A novel design for growing state space using a belief tree. Weaknesses: * The whole framework seems complex and requires careful manual design of parameters (e.g. $R_{max}$ and $R_{min}$). It is unclear how the sim-to-real gap will impact these parameters. * It seems the baseline methods are POMCP and a variant of the proposed approach with replaced modules. It would be good if the authors could discuss or compare some other related object search methods. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * L123: the object pose is usually regarded as 6D or SE(3), although represented using a quaternion. * A concern regarding the growing state space and tree search is the problem complexity as the object number increases. What is the computational time of the planner with a varying number of objects? * How does the resolution of the grid representation impact the performance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response about weakness}$: W1. The whole... $\textbf{Answer}$: Thank you for providing valuable comments. While some parameters require manual configuration, most of them do not significantly affect the final performance of the framework. As an example, performance changes little as long as $R_{max}$ and $R_{min}$ differ by orders of magnitude, i.e., $R_{max} \gg R_{min}$. For instance, varying $R_{min}$ from -1 to -20 does not exert a substantial influence on performance outcomes. To elucidate this, we present a comparison between the results using $R_{min}=-1$ (the case in our paper) and $R_{min}=-20$ for the $Hidden_1$ case in Table 1 of the attached pdf file. A few parameters, however, can affect performance considerably. For example, before a successful declaration, we update the belief over the log-odds of the 8 grid values and then complete the declaring action by comparing the mean of the minimal $n_{odds} = 2$ log-odds values with the thresholds $C_d^o$ for an obstacle object and $C_t^o$ for the target object. A smaller $n_{odds}$ makes this condition harder to satisfy, so we need to observe the objects from more directions. As $n_{odds}$ decreases, the removing and declaration actions therefore become more dependable, owing to the larger array of diverse observations obtained from different orientations. Our method retains more useful branches after receiving observations, and it shows clear advantages when the target belief is harder to reach. With this goal, we increase $n_{odds}$ to 4 and 6 and show the statistical results of the scenario with 6 objects (Fig. 1) in Table 2 under the 20-trial setting.
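A minimal sketch of the declaration rule described above (an editorial illustration with hypothetical function names and values, not the authors' code): each of the 8 grid cells tracks occupancy as a log-odds value, and declaration requires the mean of the $n_{odds}$ smallest log-odds values to clear the class threshold.

```python
import math

def log_odds_update(l_prev, p_obs):
    """Standard Bayes log-odds update of one grid cell,
    given P(cell occupied | current observation) = p_obs."""
    return l_prev + math.log(p_obs / (1.0 - p_obs))

def can_declare(cell_log_odds, n_odds, threshold):
    """Declare only if the mean of the n_odds smallest log-odds values
    exceeds the threshold, i.e. even the least-supported directions of
    the object agree with the declaration; a smaller n_odds is stricter."""
    worst = sorted(cell_log_odds)[:n_odds]
    return sum(worst) / n_odds > threshold
```

In this form, increasing $n_{odds}$ averages in better-observed cells and relaxes the condition, which is consistent with the ablation over $n_{odds} \in \{2, 4, 6\}$ described above.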
In our opinion, the primary challenge of deploying our system on real-world robots lies in addressing errors and failures in the perception and navigation parts, rather than in the planning part we focus on, which constitutes our main contribution. If the considered parameters work well in simulation and the perception and navigation methods work well, they should be directly applicable to real-world cases. This relatively direct sim-to-real parameter “transfer” is common in applications of non-learning-based POMDP solvers to physical robots. W2. It seems... $\textbf{Answer}$: We appreciate the suggestion to consider more baselines for comparison. Please refer to the “Baseline methods” section of the global comments. $\textbf{Response to questions}$: Q1. L123: the... $\textbf{Answer}$: Sorry, and thanks for pointing it out. 7D in our original formulation means 7 dimensions rather than degrees of freedom. We have revised it to a 6D (degrees of freedom) pose. Q2. A concern... $\textbf{Answer}$: Thanks for your insightful comments. Indeed, the computational complexity of our framework mainly increases with the number of objects being updated, since these objects play a crucial role in both the transition and visual observation functions and cause fewer sampled particles. Once objects are declared, their computational cost becomes relatively cheap, as exhaustively testing occlusion relationships among all other objects is no longer necessary. Fortunately, the number of objects being updated is usually limited by the declaring action. To facilitate a clearer understanding of how our method performs under varying object counts, we have conducted an ablation study comprising a comprehensive set of experiments with an increasing number of objects, ranging from 2 to 10 in increments of 2.
The scenarios, as depicted in Fig.1 of the attached pdf file, comprise 20 trials each, all adhering to a consistent planning time limit of 60 seconds per step. The results are documented in Fig.2. Our method performs noticeably better when the number of objects is relatively large (blue box areas), because we reuse the useful belief tree and avoid the branch-cutting caused by newly detected objects, which is more common in scenarios with more objects. Q3. How does... $\textbf{Answer}$: This question presents a fascinating insight. A finer grid-world resolution enhances the reliability of probability updating within the grid, an advantage that becomes particularly pronounced when dealing with smaller target objects. It is important to acknowledge that opting for a finer resolution does entail slightly higher computational complexity (although odds updating itself is very cheap, even for large grids). Our tests indicate that a good grid resolution should be smaller than the minimum dimensions of the target object along both the x and y axes. We also correct a typo from our previous version: all the experiments are implemented at 2 cm resolution instead of 5 cm. For comparison, we present statistical results for the $Covered_1$ scenario using 10 cm resolution: 24803.5±3792.3|39.9±6.9|60%. In success rate, this result is poorer than the POMCP method without the fake object. To dissect this phenomenon, we manually executed a designated sequence of actions: “move_head_5-move_base_0-move_base_1-move_base_2-move_base_3-move_lift_2-move_base_0-move_base_1-move_base_2-move_base_3-remove_object_4-move_head_8-move_base_0-move_base_1-move_base_2-move_base_3” for our method at 2 cm and 10 cm resolution, and the resulting changes in probability distributions across the grid world are visually depicted in Fig.3.
The salient observation is that when the resolution is relatively coarse, grid probability updates lose precision, particularly when the target object is situated close to an obstacle. A coarser resolution not only fails to enhance our method's performance but also introduces potential distortions in the information of the fake object. In short, we recommend that the resolution used be smaller than the object dimensions, to improve the accuracy of the odds-value updates. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response and will maintain my ratings. --- Reply to Comment 1.1.1: Title: Appreciation for Your Responses Comment: Thank you for giving us the previous comments about the ablation study. They are valuable. --- Rebuttal 2: Comment: Thank you for your review. The authors have provided a detailed response to your review. Please be sure to read it and reply indicating the extent to which the authors have addressed your initial questions and concerns. Best,\ AC
Summary: The paper presents a full object search pipeline based on an MCTS solver for POMDPs. The objects are represented by an octo-grid containing the odds of the object being the target or an obstacle. A 3D point cloud map of the environment *without any object* is given to the algorithm, allowing for easier object detection and segmentation. The authors propose a more efficient MCTS algorithm for their problem to allow tree reuse when a new state variable is added (i.e. a new object is detected). They test their method in Gazebo environments using a simulated Fetch robot, and their method performs better than POMCP, with nearly always 100% success rate. Strengths: This is a very strong paper, systems-wise. The amount of work to create a full object search system such as this, even if just tested in simulation, is significant. The scientific contributions, for me, are 1: the minor (but still valid and significant) modification to POMCP to allow for a growing state-space. 2: the POMDP modeling of the object search problem, which was a very nice textbook-like example of how to model a challenging problem as a POMDP and actually solve it. Weaknesses: There are not many other baselines. There exist many other POMDP solvers than POMCP, and probably other non-MCTS approaches to object search. I had trouble understanding the terminology "fake object"; it's just the belief about the target object, which can of course be wildly inaccurate at first. The existence of the point cloud map without any object "is assumed to be available before planning, which is a reasonable and realistic assumption achieved by mapping the environment prior to planning.". I disagree: it is not a realistic assumption to have an object-free map prior to doing the task. Imagine deploying this in a home, a cluttered home at that. You would need to clear the whole house before mapping it! Please discuss alternatives for future work.
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What limitations do you think would pop up when deploying in the real world? You touch on this subject, but how would you handle unsuccessful execution of primitives like move or lift? These will happen in the real world. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: There's no dedicated limitations paragraph or section, and it's hard to grasp from the paper. I think the object-free pre-existing map is a significant limitation, but I'd like the authors to do the mental exercise of writing down what would need to be done for this to be deployed in a home on a real robot, and infer the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: $\textbf{Response about weakness}$: W1. There are not many other baselines...... $\textbf{Answer}$: Thank you for your valuable comments. We appreciate the suggestion to consider more baselines for comparison. Please refer to the “Baseline methods” section of the global comments file for more details. W2. I had trouble understanding the terminology...... $\textbf{Answer}$: Thanks for pointing this out. Please refer to the “Fake object” section of the global comments file for more details. W3. The existence of the point cloud map without any object...... $\textbf{Answer}$: Thank you for questioning the reasonableness of this assumption. We partially agree with this point. A more complicated and realistic scenario is one in which the map of the environment is missing, and navigation and perception errors then make the object search problem even more challenging. However, even with given maps, object search within a partially observable environment is still very challenging, and this is our considered problem. In this paper, we focus more on the planning component of the system, while the perception is based on existing methods. Other planning-focused object search papers, such as [1] and [2], make the classical assumption of given maps and even given object information, such as object size, position, and orientation. The point cloud of the surrounding environment can be collected once, and our method can reuse the map repeatedly, which is commonly not very expensive. Of course, it would be better to have stronger navigation in a fully unknown environment. In the future, we will apply online semantic SLAM methods [3] to build the navigation map online, identify the workspaces, and complete the task without manual settings, for example using an autonomous continuous action domain instead of a discrete one. We have revised the statement about the reasonableness of the assumption and state this in future work.
Revise “In our framework, the point cloud map ${M}$ of the robot environment with some furniture and known objects is assumed to be available before planning, which is a reasonable and realistic assumption achieved by mapping the environment prior to planning.” into “To simplify the scenario and reduce the navigation effort, we assume the availability of a pre-built point cloud map (${M}$) of the robot's environment, including furniture and known objects, which can be created beforehand through environment mapping and reused during planning.” Add "Our future directions involve reducing manual settings, such as discrete pose candidates, dropping prior information, such as the pre-built point cloud map $\mathcal{M}$, and exploring continuous action domains." [1] A. Wandzel, Y. Oh, M. Fishman, N. Kumar, L. L. S. Wong and S. Tellex, "Multi-Object Search using Object-Oriented POMDPs". [2] Zheng, K., Sung, Y., Konidaris, G. and Tellex, S., 2021, September. Multi-resolution POMDP planning for multi-object search. [3] Rosinol, Antoni, Andrew Violette, Marcus Abate, Nathan Hughes, Yun Chang, Jingnan Shi, Arjun Gupta, and Luca Carlone. "Kimera: From SLAM to spatial perception with 3D dynamic scene graphs." $\textbf{Response to questions}$: Q1. What limitations... $\textbf{Answer}$: Thank you for the constructive question. Please refer to the “Limitation” section of the global comments file for more details. Q2. You touch on this subject... $\textbf{Answer}$: This is a very good question for further improving the practicality of the whole framework. Unsuccessful executions may pose quite different challenges to our framework. Some of them can be handled within our current framework. For example, the robot may fail to pick up a detected object and instead push it down, which will cause the data association to stop updating the standing object and to introduce a newly detected object in our framework.
This may increase the number of steps needed to find the target objects, but the final task can usually still be completed. However, some failures in primitive actions may affect the final performance. There are several ideas for dealing with poor primitives. (1) Based on prior experiments, we can identify the failure rate of different actions in the given scenario and incorporate the resulting state transition probability into the POMDP transition function, which means accounting for failures in planning and reducing their effect. (2) Another way is to improve the success rate of the primitives in the following possible ways. Action execution monitoring: implement a robust action execution monitoring system that can detect and handle failures during primitive action execution. This could involve incorporating sensors or feedback mechanisms to validate the outcome of each action; if a failure is detected, the system can trigger a reattempt of the action or take corrective measures to ensure successful execution. Better action implementation methods: enhance action implementation with more robust and advanced methods to mitigate the impact of uncertainties on primitive execution. For example, the move_base and MoveIt toolboxes may not be robust enough in complicated environments under their default settings; we are testing Locomotor, an extensible path planning coordination engine that replaces move_base, and we are adjusting the MoveIt configurations to increase the success rate of removal actions. Robust perception and sensing: enhance perception and sensing capabilities to mitigate the impact of uncertainties on primitive execution. By using more robust and diverse sensory inputs, the system can better adapt to changes in the environment and make informed decisions even in the presence of imperfect or incomplete information. $\textbf{Response to Limitation}$: $\textbf{Answer}$: Thanks for your comments.
Please refer to W3 and the “Limitation” section of the global comments file for more details. --- Rebuttal Comment 1.1: Title: Further comment about map assumption Comment: Thank you for the detailed reply which addresses most of my comments. I thought about the map assumption some more, and the rephrasing saying "the map includes *known* objects" rather than being devoid of objects makes sense to me. Thank you. I think my concerns have been addressed and would like to reiterate that I feel pretty positive about this paper. --- Reply to Comment 1.1.1: Title: Appreciation for Your Responses and Positive View Comment: Thank you very much for your valuable comments. Your input has highlighted an important research direction, particularly in relation to eliminating prior information from the map. Moving forward, we are committed to further developing this area and conducting additional real-world robot tests to enhance the practicality and robustness of our proposed framework. --- Rebuttal 2: Comment: Thank you for your review. The authors have provided a detailed response to your review. I realize that your review is favorable, but please be sure to read their response and reply indicating the extent to which the authors have addressed your initial questions and concerns. Best,\ AC
Rebuttal 1: Rebuttal: $\textbf{Baseline methods}$ $\textbf{Answer}$: We appreciate the suggestion to consider more baselines for comparison. However, as a system-level application, the comparison is generally fair because many object search approaches are based on the pure POMCP method with different problem settings, e.g., [1], [2]. Our GPOMCP solver is designed specifically for problems with growing state space, using belief tree reuse and a modified upper confidence bound for improved upper bound estimation. This method can be easily applied to other online POMDP solvers that use particle representations with belief trees, such as ABT [3] and DESPOT [4], offering a competitive advantage when dealing with problems involving a growing state space. Other approaches, which are mainly learning-based methods, e.g., [5], often require substantial training datasets and may achieve a general success rate of 40%-60%, which is lower than our results. [1] Zheng, Kaiyu, et al. "A System for Generalized 3D Multi-Object Search." [2] Wandzel, Arthur, et al. "Multi-object search using object-oriented POMDPs" [3] Kurniawati, Hanna, et al. "An online POMDP solver for uncertainty planning in dynamic environment." [4] Somani, Adhiraj, et al. "DESPOT: Online POMDP planning with regularization." [5] Druon, Raphael, et al. "Visual object search by learning spatial context." $\textbf{Fake object}$ $\textbf{Answer}$: Our fake object is essentially a state variable that represents a “guess” about the target object. It is not a real object or a future detected object; the real target object will be included in one of the later sub-state vectors $s_{o_1}$ to $s_{o_n}$. In our problem, we initially do not know the objects in the environment (neither the number of objects nor their positions), which means the whereabouts of the target object are also initially unknown. To capture uncertainty about the target object compactly, we propose to “guess” the target object.
It can be considered an additional visible object, related to the target object, in the transition and visual observation functions of the POMDP framework, but it is not detectable in the real world. It uses the same structure as that used to represent an object in the system: an 8-cell occupancy grid map where each cell maintains the probability that it intersects with the target object. Under the POMDP framework, a belief over the fake object variable represents multiple guesses, with different probabilities, about the target object. Initially, the uncertainty about the target object can be quite large, but over time the belief about the position of the fake object tends to converge to a probability mass whose mode is at the true target object. The distribution of the fake object (its grid world) stores the belief about the target object. However, in POMDP problems, we need to predict the future probability of all grids in MCTS by going through the transition and observation functions, which would be time- and memory-consuming. The fake object becomes meaningful because it carries the useful information (it represents the grid world), is very cheap computationally (fast transition and observation functions) and in memory (a low-dimensional vector) during exploration and rollout, and is friendly for coding (it uses the same transition and visual observation functions as the other detected real objects). We have added some sentences for better explanation in the revision. Last but not least, we apologize for the seemingly odd “fake object” terminology and will revise the name to “guessed target object”. The way the fake object is used is shown in Fig.4 of the attached file.
Its grid world is updated based on real-world observations from the camera; its position is then sampled, and the other terms of the fake object are re-initialized for use in the transition and observation functions of the Monte Carlo tree search. This iterative process persists until the task is completed. We will add more explanations in the supplementary material. $\textbf{Limitation}$ $\textbf{Answer}$: When deploying the system in the physical world, our limitations mainly stem from errors and failures in the perception and navigation components, rather than in the planning part we focus on. Our planning method should be directly applicable to real-world cases given reliable support from perception and navigation methods. Under the POMDP framework, some errors and failures of primitive actions can be handled. However, large failures with multiple knock-on effects would be difficult to handle; an example is when the robot's failed grasping results in the object falling under the table, or falling and causing many other objects to fall too. Another limitation is that our framework relies on point cloud segmentation to identify different objects. It may generate incorrect bounding boxes for objects with extensive contact areas, and this misinterpretation can result in erroneous data association, impacting belief updates and removal actions. Furthermore, the YOLO and SIFT object detection techniques may struggle in scenarios with limited SIFT features and YOLOv5 toolbox coverage, especially in low-light environments. However, we believe that addressing these limitations is possible by incorporating more advanced perception and navigation methods. In terms of the algorithm itself, scalability to problems with very many unknown objects in the environment can be difficult.
In such scenarios, one could cluster objects hierarchically, and apply the proposed method to clusters of objects, rather than single objects, in the higher level of the hierarchy. Application at the single object level can then be limited to a medium-sized area after the system has a better understanding of the target object. We have added a new paragraph to emphasize the limitations of our method in the revised paper by summarizing the above points. Pdf: /pdf/b804a67936bbc8e5a58c39ffb0186424d9c337ec.pdf
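The fake-object mechanism described in the “Fake object” answer above can be sketched roughly as follows (an editorial illustration with hypothetical names, not the authors' released code): the guess keeps an 8-cell log-odds grid that is updated from real camera observations, and during tree search a cell is sampled from that belief while the remaining attributes of the guess would be re-initialized.

```python
import math
import random

class FakeObject:
    """Minimal sketch of a 'guessed target object' belief."""

    def __init__(self):
        # Log-odds of each of the 8 grid cells intersecting the target;
        # 0.0 corresponds to a uniform prior P = 0.5 per cell.
        self.log_odds = [0.0] * 8

    def update_from_observation(self, cell_probs):
        """Bayes log-odds update from one real-world camera observation,
        where cell_probs[i] = P(cell i intersects target | observation)."""
        for i, p in enumerate(cell_probs):
            self.log_odds[i] += math.log(p / (1.0 - p))

    def sample_for_planning(self, rng=random):
        """Sample a cell index in proportion to the current belief; in the
        full system the other attributes of the guess would be
        re-initialized here before use in the transition/observation
        functions of the tree search."""
        probs = [1.0 / (1.0 + math.exp(-l)) for l in self.log_odds]
        r = rng.random() * sum(probs)
        acc = 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1
```

Because the guess reuses the same grid structure as a real detected object, it stays cheap inside rollouts while still steering the search toward the most likely target location.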
NeurIPS_2023_submissions_huggingface
2023
A Batch-to-Online Transformation under Random-Order Model
Accept (poster)
Summary: This is a paper about online learning in the random arrival/order model, i.e., the adversary chooses samples but they are randomly permuted before being presented to an online algorithm. The authors propose a general tool that establishes a reduction to the offline/batch setting. The bounds that one can obtain with that tool are in terms of approximate regret, i.e., on top of the standard regret one also loses an epsilon fraction of the optimum.  The key idea behind the reduction is to use a coreset to reduce the sensitivity of the offline algorithm; the rest is fairly straightforward. After presenting the general tools, the authors show how to apply it to three specific problems: clustering, low-rank matrix approximation, regression. Strengths: The main result is a general tool that potentially can be applied to different problems. Obtaining this result required a certain level of technical mastery, and was not completely trivial. The paper is well written and easy to read. Weaknesses: Technical novelty is very limited. The paper combines known results in a very natural (though technically difficult) way. One could argue that the hard (and novel) part of the job was choosing the right pieces to put them together – but e.g. the idea of using coresets in a very similar context appears already e.g. in [Cohen-Addad et al., 2021] (the authors do not mention that). There are no experiments – even though one of the applications is a clustering problem studied by Cohen-Addad et al. [2021], who provided some preliminary experimental results, so it would be fairly easy to compare to them. It would be particularly interesting because the theoretical bounds in the two papers are not directly comparable. Even though the paper provides new algorithms for three problems previously studied in the literature, because of the specific setup, the theoretical bounds are, as far as I understand, incomparable to the previous ones. 
The results allow only for approximate regret bounds (and only under random arrivals). It seems to me to be a big limitation, and the authors did not manage to convince me that this is not the case. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I’d be curious to see an experimental comparison with [Cohen-Addad et al., 2021]. Definition 3.1 (approximate regret) seems to be a standard one, so you could provide a reference and a bit more of a context. Minor remarks: Line 116: Is F supposed to be A? Lines 123, 235: \citet -> \citep Line 166: data -> data point Line 191: logarithmic -> polylogarithmic Algorithm 3, Input: “approximation ratio eps \in (0, 1)” -> “approximation ratio 1+eps, eps \in (0, 1)” Line 221: square -> squares Lines 227, 259: assumptions are also assumed -> assumptions are also made Line 253: outermost brackets should be bigger, there should not be one O inside of another Line 270: missing closing bracket Line 276: a -> an Line 288: \citep -> \citet Line 295: “an offline approximation algorithm” -> “offline approximation algorithms” Line 364: the second to last inequality requires some justification Line 395: insensitivity -> insensitive, algorithm 6 -> Algorithm 6 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The results yield only approximate regret bounds, which seems to me somewhat limiting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
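For context, the $\epsilon$-approximate regret notion at issue here (consistent with the rearrangement given in the authors' rebuttal; the paper's own Definition 3.1 is not reproduced here) discounts a $(1+\epsilon)$ multiple of the hindsight optimum rather than the optimum itself:

```latex
\mathrm{Regret}_\epsilon(n)
  = \mathbb{E}_{\mathcal{A},\{x_t\}}\Big[\sum_{t=1}^{n}\ell(\theta_t,x_t)\Big]
    - (1+\epsilon)\,\mathrm{OPT},
\qquad
\mathrm{OPT} = \min_{\theta}\sum_{t=1}^{n}\ell(\theta,x_t).
```

With $\epsilon = 0$ this recovers the standard (exact) regret; the extra $\epsilon\,\mathrm{OPT}$ slack is what makes sublinear bounds attainable via approximation oracles.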
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. In the following, we address the raised concerns point by point. Minor comments and typos have already been addressed and corrected in our manuscript. # 1. Technical Novelty We respectfully **disagree** with the reviewer's comment suggesting that our results are merely a combination of known results. The example from [Cohen-Addad et al., 2021], which uses coresets for $k$-means clustering, does not diminish the originality and novelty of our approach. It is important to note that while both works employ coresets, they serve different purposes in entirely distinct contexts. In [Cohen-Addad et al., 2021], coresets are used to reduce the number of experts in the MWUA algorithm, whereas, in our work, coresets are used to achieve a small average sensitivity. Our main technical contribution is the general framework for batch-to-online transformation via the notion of average sensitivity, which applies to a range of applications. Our results for $(k,z)$-clustering are only one example of an application of the general transformation framework. On the $(k,z)$-clustering problem, we also want to emphasize that the scope of our results extends beyond the $k$-means clustering discussed in [Cohen-Addad et al., 2021]. Our findings are applicable to the general $(k, z)$-clustering problem, of which $k$-means is the specific case $z = 2$. It is also worth noting that using a dimension-independent coreset method for general clustering with the algorithm proposed by [Cohen-Addad et al., 2021] does not automatically lead to sublinear and dimension-independent regret. This is primarily because other components of their algorithm still rely on the assumption that $z = 2$, and they do not permit dimension-independent results. # 2. Experiments We first would like to remark that Cohen-Addad et al. [2021] only provided an example of FTL and FTL-MWUA in $k$-means clustering to demonstrate their poor performance.
Their proposed algorithm (Algorithm 1, $\epsilon$-regret minimization) is not empirically evaluated. This, together with the absence of released example code, prevents us from conducting a direct comparison with their approach. It is also important to highlight that our work is primarily theoretical in nature, as our focus lies in exploring and analyzing the theoretical foundations of the proposed general framework. Nonetheless, we provide a preliminary empirical evaluation of our framework in the context of online $k$-means clustering and online linear regression (see the additional file in the general rebuttal), with various approximation ratios and experimental setups ($\epsilon = 0.1, 0.01, 0.001$, with $k=3$ or $k=5$ clusters). We compare the performance of the proposed algorithm to the hindsight optimal solution. For $k$-means clustering, we obtain the hindsight optimal solution by applying $k$-means++ to all the data. In the context of regression, we use the least squares formula to compute the hindsight optimal solution. Our experimental results demonstrate that the proposed algorithm is highly effective, and its performance aligns with our theoretical findings. # 3. Exact regret bound and comparison to previous bounds In the case where the problem is NP-complete (for example, $(k,z)$-clustering), no efficient algorithm can attain sublinear exact regret, and thus our framework can only obtain approximate regret. Using the definition of approximate regret, we can rearrange the terms and obtain $$\mathbb{E} _{\mathcal{A},\\{x _t \\}}\left[\sum^n _{t=1} \ell(\theta _t, x _t) - \mathrm{OPT}\right] = \epsilon \mathrm{OPT} + \mathrm{Regret} _\epsilon(n) .$$ Then, by applying the (generalized) AM-GM inequality to the right-hand side and tuning $\epsilon$ so that the inequality holds with equality, we obtain an exact regret bound.
For example, the approximate regret of our online matrix approximation result translates to $O(\mathrm{OPT}^{2/3} (k \log n \log(\epsilon^{-1} kn))^{1/3})$, and the approximate regret of our online regression result translates to $O(\mathrm{OPT}^{2/3} (d \log n \log(\epsilon^{-1} dn))^{1/3})$. We also want to highlight that, in the case of $k$-means ($z = 2$), our Theorem 5.1 gives a logarithmic (in $n$) regret bound, which is also dimension independent. In comparison, the bound of [Cohen-Addad et al., 2021] depends on the dimension $d$ and is of order $O(\sqrt{n})$. For online regression, [Garber et al., 2020] give an $O(\sqrt{n})$-type regret bound when the matrix $\boldsymbol{A}$ has a small condition number. In comparison, our result attains polylogarithmic $\epsilon$-approximate regret with no requirement on the loss function or the condition number. # 4. Line 364, Lemma 4.2, second-to-last inequality Let $\mathcal{D}$ be the coupling over pairs of outputs that attains $\mathrm{TV}(\mathcal{A}(X), \mathcal{A}(X^{(i)}))$. Then, we have $$\mathbb{E} _{\mathcal{A}}[\ell(\mathcal{A}(X),x_i)] - \mathbb{E} _{\mathcal{A}} [\ell(\mathcal{A}(X^{(i)}), x_i)] = \mathbb{E} _{(\theta,\theta^{(i)}) \sim \mathcal{D}}[\ell(\theta,x_i) - \ell(\theta^{(i)}, x_i)] $$ $$\leq \mathrm{TV}(\mathcal{A}(X), \mathcal{A}(X^{(i)})) \cdot \max _{\theta'} \ell(\theta', x_i) \leq \mathrm{TV}(\mathcal{A}(X), \mathcal{A}(X^{(i)})) , $$ where the last inequality is by the assumption that $\ell(\cdot,\cdot) \leq 1$ (if this does not hold, we can normalize the losses to satisfy it). --- Rebuttal Comment 1.1: Comment: Thank you for the response. Could you please elaborate a bit more on point 3? How exactly do you apply AM-GM and tune epsilon to get such a bound for online matrix approximation?
--- Reply to Comment 1.1.1: Title: Using AM-GM to get an exact regret bound Comment: From the definition of regret, we can obtain $\mathbb{E} _{\mathcal{A},\{x _t \}}\left[\sum^n _{t=1} \ell(\theta _t, x _t) - \mathrm{OPT}\right] = \epsilon \mathrm{OPT} + \mathrm{Regret} _\epsilon(n) $. In the case of online matrix approximation and online regression, $ \mathrm{Regret} _\epsilon(n) = O \left(\epsilon^{-2}k \log n\log(\epsilon^{-1}kn)\right)$ and $\mathrm{Regret} _\epsilon(n) = O\left(\epsilon^{-2}d \log n \log (\epsilon^{-1}dn)\right)$, respectively. Let us shorthand this term as $\mathrm{Regret} _\epsilon(n) = O (B / \epsilon^{2})$. We now use a generalized AM-GM inequality, $\epsilon \mathrm{OPT} + B / \epsilon^{2} \geq \left( \epsilon^2 \mathrm{OPT}^2 \cdot B/\epsilon^2 \right)^{1/3}$. We then pick $\epsilon = O \left( \frac{B ^{1/3}}{\mathrm{OPT}^{1/3}} \right)$, which balances the two terms so that the inequality is tight. Plugging this value of $\epsilon$ in gives a regret bound of $O \left( B^{1/3} \mathrm{OPT}^{2/3}\right)$, which is $O(\mathrm{OPT}^{2/3} (k \log n \log(\epsilon^{-1} kn))^{1/3})$ in the case of matrix approximation.
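The tuning above can be sanity-checked numerically (a minimal sketch; `B` and `OPT` below are arbitrary placeholder values, not quantities from the paper):

```python
# Total bound as a function of epsilon: f(eps) = eps * OPT + B / eps^2,
# where B shorthands the epsilon-independent part of Regret_eps(n).
# Balancing the two terms (the equality condition of AM-GM) gives the
# minimizer eps* = (2 * B / OPT) ** (1 / 3), so that
# f(eps*) = O(B^(1/3) * OPT^(2/3)).
OPT, B = 1000.0, 8.0  # arbitrary placeholder values

def f(eps):
    return eps * OPT + B / eps ** 2

eps_star = (2 * B / OPT) ** (1 / 3)
best = f(eps_star)
print(eps_star, best)
```

On this toy instance, `eps_star` beats every point of a fine grid of alternative choices of $\epsilon$, and `best` stays within a constant factor of $B^{1/3}\mathrm{OPT}^{2/3}$.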
Summary: This paper provides a general framework for using good offline methods to construct online methods under the random-order setting. It points out the importance of average sensitivity in the transformation from good offline methods to good online methods. Strengths: 1. The framework is very general for the design of online algorithms based on offline algorithms. 2. The connection between regret and average sensitivity is beautifully characterized. Weaknesses: 1. It looks like the method cannot cover the setting with $\epsilon=0$. If it can, please clarify this and I will further raise my rating. 2. It would also be better to have some numerical experiments on the performance for some specific problems. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Can this method also solve the case with $\epsilon=0$? At least for the online linear programming problem, offline LPs can be solved exactly, and the regret ($\epsilon=0$) can also be small under the random-order setting. Can this setting also be covered by your paper? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and positive comments! In the following, we address the raised concerns point by point. # 1. Recovering $\epsilon = 0$ (exact regret) We can obtain exact regret bounds for our algorithms. Using the definition of approximate regret, we can rearrange the terms and obtain $\mathbb{E} _{\mathcal{A},\\{x_t\\}}\left[\sum^n _{t=1} \ell(\theta_t, x_t) - \mathrm{OPT}\right] = \epsilon \mathrm{OPT} + \mathrm{Regret} _\epsilon(n) $. Then, by applying the (generalized) AM-GM inequality to the right-hand side and tuning $\epsilon$ so that the inequality holds with equality, we obtain an exact regret bound. The approximate regret of our online matrix approximation result translates to $O(\mathrm{OPT}^{2/3} (k \log n \log(\epsilon^{-1} kn))^{1/3}) $, and the approximate regret of our online regression result translates to $O(\mathrm{OPT}^{2/3} (d \log n \log(\epsilon^{-1} dn))^{1/3})$. # 2. Online LP We believe that our general transformation framework can provide insights into new algorithms for online LPs in the random-order setting. Let us consider the following online integer LP (the binary version of which is also studied in [Li et al., 2020]): $ \max \ r^{\top} x , \text { s.t. } A x \leq b$, where $r =\left(r_1, \ldots, r_n\right)^{\top} \in \mathbb{R}^n$, $\boldsymbol{A}=\left(a_1, \ldots, a_n\right) \in \mathbb{R}^{m \times n}$, and $b =\left(b_1, \ldots, b_m\right)^{\top} \in \mathbb{R}^m$. Here $a_j=\left(a_{1 j}, \ldots, a_{m j}\right)^{\top}$ denotes the $j$-th column of the constraint matrix $\boldsymbol{A}$. At each time step $t$, we receive $r_t, a_t$ and are asked to compute $x_t$. In the offline case, this can be solved exactly by the cutting-plane or ellipsoid methods. In the online case, we first want to choose a subset of columns $a'_1,\ldots,a'_k$ and weights $w_1,\ldots,w_k$ such that $Ax \leq b$ (approximately) holds whenever $\sum_{i=1}^k w_i a'_i x_i \leq b$ holds.
This step is similar to lines $6$-$9$ of Algorithm 5. Then one can use an exact method at each step on the sketched matrix. Using a similar analysis to that of Theorem 8.1, one should be able to obtain a regret bound for online linear programming. We leave the detailed derivation of the algorithm and its analysis for future work. Li, X., Sun, C., & Ye, Y. (2020). Simple and fast algorithm for binary integer and online linear programming. Advances in Neural Information Processing Systems, 33, 9412-9421. # 3. Experiments We would first like to highlight that our work is primarily theoretical in nature, as our focus lies in exploring and analyzing the theoretical foundations of the proposed general framework. Moreover, the related works are mostly theoretical as well, such as [Cohen-Addad et al., 2021] (for online clustering) and [Garber et al., 2020] (for online regression and online matrix approximation in PCA). Due to the scarcity of available code for baseline algorithms, it is hard to conduct a fair empirical evaluation of our approach in the limited time of the author response period. Nonetheless, we here provide a preliminary empirical evaluation of our framework in the context of online $k$-means clustering and online linear regression (see the additional file in the overall rebuttal file), with various approximation ratios and experimental setups ($\epsilon = 0.1, 0.01, 0.001$, with $k=3$ or $k=5$ clusters). We compare the performance of the proposed algorithm to the hindsight optimal solution. For $k$-means clustering, we obtain the hindsight optimal solution by applying $k$-means++ to all the data. In the context of regression, we use the least squares formula to compute the hindsight optimal solution. Our experimental results demonstrate that the proposed algorithm is highly effective, and its performance aligns with our theoretical findings. --- Rebuttal Comment 1.1: Comment: Thank you for these clarifications and answers.
I have raised my rating, in particular for the answer to the first question. --- Reply to Comment 1.1.1: Comment: Thank you for championing our paper!
Summary: The paper presents a general method for converting an offline approximation to a learning problem into an online algorithm for the random-order problem which enjoys low (epsilon-approximate) regret. Specifically, the regret of this method depends on the sensitivity of the offline approximation; intuitively, this is the amount by which the distribution of outputs for the algorithm changes upon a small change to the input. First, the paper presents the construction for converting offline approximation algorithms to online algorithms for the random-order model. This construction is quite simple: at any step, we run the offline approximation on the part of the input that has already arrived, and use the output as the choice of the online algorithm for the next step. This is shown to yield low epsilon-approximate regret when the offline algorithm has low sensitivity. Next, the paper presents/analyzes low-sensitivity offline algorithms for a few online problems (online clustering, online matrix approximation and online regression); these algorithms can then be plugged into the aforementioned framework in order to yield regret bounds for these online problems. To my understanding, the main algorithmic concept used in designing low-sensitivity offline approximation is using coresets, which are subsets of the input that accurately represent the overall input. The paper analyzes a method for constructing such coresets, and shows that it has low-sensitivity. This coreset construction is then used to choose stable subsets of the input, yielding online algorithms with low regret and low inconsistency (which is a measure of changes in the online algorithm's output over time). Strengths: The algorithmic framework introduced is general, and could be applied to additional online learning problems. 
The paper introduces the goal of having low-sensitivity offline approximations for the sake of this offline-to-online conversion; this could be used to guide future research efforts. The algorithmic technique used for designing low-sensitivity algorithms in the paper is nice. Weaknesses: The offline-to-online conversion itself is quite simple, and does not introduce any algorithmic techniques (arguably, this is the first conversion that comes to mind). Technical Quality: 3 good Clarity: 3 good Questions for Authors: none Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments! We hope to address the concerns raised in the following response. > The offline-to-online conversion itself is quite simple and does not introduce any algorithmic techniques (arguably, this is the first conversion that comes to mind). We believe that the online-to-offline reduction through low average sensitivity, while seemingly straightforward, can provide insights into developing new online learning algorithms. Specifically, the general transformation framework provides a simple recipe for designing low-regret algorithms for a range of online learning problems with off-the-shelf offline algorithms. We would also like to note that although the overall conversion might seem straightforward, many additional algorithmic techniques and insights are required to design appropriate coreset construction methods and obtain the theoretical guarantees. For example, in order to obtain a coreset construction method with low average sensitivity, Algorithm 2 is proposed with a careful perturbation of the coreset weights. The specific algorithms used for deriving the results for the applications, such as clustering, require even more design techniques to obtain dimension-independent results. --- Rebuttal Comment 1.1: Comment: Thanks for your response.
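The conversion under discussion (run the offline algorithm on the observed prefix and play its output at the next round) can be written in a few lines. This is an illustrative sketch in the spirit of the paper's conversion, not its verbatim Algorithm 1; the toy instantiation (mean estimation under squared loss) and all names are placeholders:

```python
def batch_to_online(offline_alg, loss, stream, theta0):
    """At each round, play the output of the offline algorithm run on the
    points observed so far (the batch-to-online conversion, schematically)."""
    total, prefix, theta = 0.0, [], theta0
    for x in stream:
        total += loss(theta, x)      # loss is incurred before x is revealed
        prefix.append(x)
        theta = offline_alg(prefix)  # retrain on the prefix for the next round
    return total

# Toy instantiation: 1-D mean estimation under squared loss, where the exact
# offline algorithm is simply the empirical mean.
sq_loss = lambda theta, x: (theta - x) ** 2
empirical_mean = lambda xs: sum(xs) / len(xs)
stream = [1.0, 2.0, 3.0, 2.0, 1.0, 2.0]
total = batch_to_online(empirical_mean, sq_loss, stream, theta0=0.0)
print(total)
```

When `offline_alg` has low average sensitivity and the stream is in random order, the gap between `total` and the hindsight optimum is what Lemma 4.2 controls.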
Summary: The authors present a framework for reducing online to offline learning. More precisely, they consider random-order online learning, where at each round a random point of some domain $X$ is presented to the learner (as opposed to some adversary choosing the example). They first show that if an offline algorithm has low average sensitivity, i.e., the output of the algorithm does not depend too much on any single point, then it can be effectively used to solve the corresponding online problem via a standard "follow-the-leader" type reduction. In particular, at round $t$, having observed the points $X_{t-1} = x_1,\ldots, x_{t-1}$ in the online learning setting, the learner runs the offline algorithm on $X_{t-1}$ and uses its answer to play at the current round. They next show how to obtain offline algorithms with low average sensitivity by constructing a coreset via sensitivity sampling. They apply their framework to various problems such as online clustering and online linear regression. Strengths: The batch-to-online problem considered in this work is well-motivated and interesting. The authors provide a unified approach to obtain online algorithms for random-order online learning tasks and show some non-trivial results for popular online problems such as online regression. I believe that those results are going to be of interest to the NeurIPS community. The applications of the framework proposed in this work yield improved regret bounds compared to prior work. In the case of the online matrix approximation application, this work provides a regret bound that is only logarithmic in the horizon $n$, while in prior work the regret was roughly $\sqrt{n}$ and also depended on a "condition number" of the observed matrices. The paper is very carefully written and the main results and proofs were easy to follow. Weaknesses: This work considers the weaker online learning model of random order (instead of the more standard adversarial model).
In Remark 6.1 (online clustering) the authors compare with online algorithms that dealt with the adversarial setting. I think a table with precise comparisons with prior work would help this paper. The online-to-offline reduction in the random-order setting, assuming low average sensitivity, is not very surprising, and the proof (see proof of Lemma 4.2) is rather straightforward. Also, it seems to crucially rely on the fact that the online learning is random order. Would this or a similar approach based on low average sensitivity yield any guarantee for online learning in the adversarial setting? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
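The sensitivity-sampling coreset construction summarized in this review can be sketched as follows. This is a schematic toy for 1-D $1$-means with a standard textbook sensitivity upper bound; the sensitivity formula, sample sizes, and data are illustrative placeholders, not the constants or the perturbation scheme of the paper's Algorithm 2:

```python
import random

random.seed(1)

# Toy data: 2000 points from a standard Gaussian; the clustering cost of a
# single center c is cost(c, X) = sum_i (x_i - c)^2.
X = [random.gauss(0.0, 1.0) for _ in range(2000)]
mean = sum(X) / len(X)

# A standard sensitivity upper bound for 1-means:
# s_i proportional to 1/n + (x_i - mean)^2 / total_cost.
total_cost = sum((x - mean) ** 2 for x in X)
s = [1.0 / len(X) + (x - mean) ** 2 / total_cost for x in X]
S = sum(s)
p = [si / S for si in s]

m = 400  # coreset size (placeholder)
idx = random.choices(range(len(X)), weights=p, k=m)
coreset = [(X[i], 1.0 / (m * p[i])) for i in idx]  # (point, importance weight)

def cost(c, pts):    # exact cost on the full data
    return sum((x - c) ** 2 for x in pts)

def wcost(c, wpts):  # weighted cost on the coreset (unbiased estimator)
    return sum(w * (x - c) ** 2 for x, w in wpts)

# The weighted coreset cost approximates the true cost at several centers.
for c in [-1.0, 0.0, 1.0]:
    print(c, cost(c, X), wcost(c, coreset))
```

Since each sampled point carries weight $1/(m p_i)$, the weighted coreset cost is an unbiased estimator of the true cost, and the sensitivity-proportional sampling keeps its relative variance small.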
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments! # Applicability in the adversarial setting and comparison to previous bounds We thank the reviewer for the constructive advice; we have added a comparison table for our results to the manuscript. We believe that the online-to-offline reduction through low average sensitivity, while seemingly straightforward, can provide insights into developing new online learning algorithms. Specifically, the general transformation framework provides a recipe for designing low-regret algorithms for a range of online learning problems with off-the-shelf offline algorithms. We also want to remark that in Section 5, we show that the low average sensitivity assumption can be realized via insensitive coreset construction methods. Regarding the applicability of our framework in the adversarial setting, we do not think the argument applies there, as $\sum_{t} \ell(A(X_t), x_{t+1}) - \ell(A(X_{t+1}), x_{t+1}) \leq \sum_t \beta(t)$ (Lemma 4.2) does not hold in the adversarial setting. This is because we can only guarantee that, on average, removing a point does not change the output on $X_{t+1}$; we cannot guarantee the worst case, so for every $t$, removing $x_{t+1}$ might be the worst change to $X_{t+1}$. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I would like to thank the authors for their response. I remain in favor of acceptance of this work. --- Reply to Comment 1.1.1: Comment: Thank you for championing our work!
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and constructive advice! We would first like to highlight that our work is primarily theoretical in nature, as our focus lies in exploring and analyzing the theoretical foundations of the proposed general framework. Moreover, the related works are mostly theoretical as well, such as [Cohen-Addad et al., 2021] (for online clustering) and [Garber et al., 2020] (for online regression and online matrix approximation in PCA). Due to the scarcity of available code for baseline algorithms, it is hard to conduct a fair empirical evaluation of our approach in the limited time of the author response period. We here provide a preliminary empirical evaluation of our framework in the context of online $k$-means clustering and online linear regression (see the additional file), with various approximation ratios and experimental setups ($\epsilon = 0.1, 0.01, 0.001$, with $k=3$ or $k=5$ clusters). We compare the performance of the proposed algorithm to the hindsight optimal solution. For $k$-means clustering, we obtain the hindsight optimal solution by applying $k$-means++ to all the data. In the context of regression, we use the least squares formula to compute the hindsight optimal solution. Our experimental results demonstrate that the proposed algorithm is highly effective, and its performance aligns with our theoretical findings. Pdf: /pdf/763b41a7a74376d11140c8d63cd5343dc57dc637.pdf
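The regression evaluation protocol described above can be illustrated with a minimal synthetic sketch: an online learner that refits on the observed prefix, compared against the hindsight least-squares optimum. The model, data, and sizes here are placeholders, not our actual experimental configuration:

```python
import random

# Toy 1-D regression stream: y is roughly 2*x plus bounded noise;
# the loss of parameter theta on (x, y) is (theta * x - y)^2.
random.seed(0)
data = [(x / 10, 2.0 * (x / 10) + random.uniform(-0.1, 0.1)) for x in range(1, 51)]

def loss(theta, point):
    x, y = point
    return (theta * x - y) ** 2

def least_squares(points):
    # Closed-form hindsight optimum for the 1-D model y ~ theta * x.
    return sum(x * y for x, y in points) / sum(x * x for x, y in points)

theta, online_loss = 0.0, 0.0
for t, point in enumerate(data):
    online_loss += loss(theta, point)      # incur loss before updating
    theta = least_squares(data[: t + 1])   # refit on the observed prefix

opt = least_squares(data)
opt_loss = sum(loss(opt, p) for p in data)
print(online_loss, opt_loss, opt)
```

The gap `online_loss - opt_loss` is the (empirical) regret plotted against the hindsight baseline in this kind of experiment.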
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper considers the random-order model originally proposed by Garber et al. '20, and presents a generic framework to take any (approximate) offline algorithm and, via the use of coresets, turn it into an online algorithm with low average sensitivity, which can then be used to obtain an (approximate) regret bound. Some specific cases are considered where this framework is applied, including online clustering, online low-rank matrix approximation, and online regression. Strengths: * The high-level idea is elegant, and the paper is coherent and well written. * The proposed method offers a novel algorithmic approach for batch-to-online reductions. Weaknesses: * I was missing some context that would allow a better understanding of the approach presented in the paper; specifically, a comparison to the approaches considered in [Garber et al., 2020, Sherman et al., 2021], the role of uniform convergence in coreset construction vs. running the offline algorithm directly on the cumulative loss, and average sensitivity vs. algorithmic stability. See some of my questions below. * Perhaps this is not a weakness per se, but (please correct me if I'm wrong - this was not entirely clear to me) the rates obtained do not strictly improve any prior art excluding Section 6, which improves Cohen-Addad et al., '21 but applies less generally. The rest of the results pertain to $\epsilon$-approximate regret while the previous works mentioned obtain standard regret bounds. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * **Coresets vs direct uniform convergence of the cumulative loss.** At a high level, is it correct to say that the coreset construction is based on uniform concentration of the loss, and that in some specific cases this uniform concentration comes cheap? If so, could we alternatively take the offline algorithm and run it on the cumulative loss as is, then claim a regret bound owed to uniform concentration of the cumulative loss directly?
Some more questions related to this point: * Does big-$O$ hide a dimension factor in Lemma 5.2? * Section 5 presents a coreset construction method, but then in the section that follows (6.2) a different method (that of Huang and Vishnoi '20) is used. Why? * **Average sensitivity vs algorithmic stability.** TV is the infimum (over all couplings of the two RVs) of the expected 0-1 distance. This is an upper bound on the Wasserstein distance, which is the infimum (over all couplings of the two RVs) of the expected euclidean distance. This Wasserstein distance is basically average algorithmic stability. Is it correct that average sensitivity is a strictly stronger notion? * **Applicability in the full adversarial setup.** It seems to me the method you propose should also work in a fully adversarial setup. Average sensitivity is akin to uniform stability; the output of your offline algorithm "stabilized" via coresets does not care about stochasticity in the examples. If the output of an approximate ERM does not change much between rounds this should lead to a regret bound, perhaps? Roughly: $\sum_{t} \ell(A(X_t), x_{t+1}) - \ell(A(X_{t+1}), x_{t+1}) \leq \sum_t \beta(t)$ and by the "Be-the-Leader" lemma (Kalai & Vempala '03), $\sum_{t} \ell(A(X_{t+1}), x_{t+1}) \leq \sum_{t} \ell(\theta^\star, x_{t+1})$. * **$\epsilon$-regret vs competitive ratio.** How does $\epsilon$-regret compare to the competitive ratio from competitive analysis? * Lines 36-37 "... and when combined with the approximation algorithm ..." - to my understanding, whether the offline algorithm that operates on the coreset is approximate or exact is irrelevant to the point you are trying to make, here and in the paper more generally, no? ## Minor comments * "Although the stochastic setting is not often satisfied in real applications, the performance and guarantees of online algorithms in the adversarial case are considerably compromised" - the meaning of this sentence is not clear to me.
* Lemma 4.2: I didn't see the notation "$x = r \pm \beta$" defined anywhere (I gather it signifies $r-\beta \leq x\leq r+\beta$). * What does it mean to "run $\mathcal A$ with approximation ratio of $(1-\epsilon')$..." (line 4 of Algorithm 3)? Theorem 6.1 does not explicitly mention any requirements on $\mathcal A$. Looking into the detailed version of the algorithm (Algorithm 6), it seems $\mathcal A$ should be a PTAS. Perhaps putting this in the main text could make this a bit clearer. * "We remark that the importance of sampling steps in the framework is similar to the ones described in Section 5, which thus allows us to analyze its average sensitivity." --> "the importance sampling steps"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments! Due to the space limit, we address the major comments here. # 1. Comparison to previous works. It is possible to convert our results to standard regret bounds. Using the definition of approximate regret, we can rearrange the terms and obtain $\mathbb{E} _{\mathcal{A},\\{x_t\\}}\left[\sum^n _{t=1} \ell(\theta_t, x_t) - \mathrm{OPT}\right] = \epsilon \mathrm{OPT} + \mathrm{Regret} _\epsilon(n) $. Then, by applying the (generalized) AM-GM inequality to the right-hand side and tuning $\epsilon$ so that the inequality holds with equality, we obtain an exact regret bound. The approximate regret of our online matrix approximation result translates to $O(\mathrm{OPT}^{2/3} (k \log n \log(\epsilon^{-1} kn))^{1/3}) $, and the approximate regret of our online regression result translates to $O(\mathrm{OPT}^{2/3} (d \log n \log(\epsilon^{-1} dn))^{1/3})$. We also want to highlight that our results may be more general in some applications. For online matrix approximation in the random-order setting, the previous result by [Garber et al., 2020] is $O \left(\zeta^{-1} \sqrt{kn}\right)$, where $\zeta$ is the smallest difference between two eigenvalues of $\boldsymbol{A} \boldsymbol{A}^\top$. In contrast, our result does not depend on this $\zeta$. For online regression, [Garber et al., 2020] give an $O(\sqrt{n})$-type regret bound when the matrix $\boldsymbol{A}$ has a small condition number. In comparison, our result has no requirement on the condition number. # 2. Coresets vs direct uniform convergence of the cumulative loss. We agree with the reviewer's comment at a high level. Yet the coreset construction method still differs from uniform concentration of the loss in its technical details, because the former guarantees that the multiplicative error is small, whereas the latter usually refers to guarantees on the additive error.
In the case where the offline algorithm is approximate, optimizing the cumulative loss at each step may not lead to small regret. The following shows that at least the be-the-leader lemma does not work. Suppose that at each time step, we run an offline algorithm to obtain a $(1+\epsilon)$-approximate solution on the cumulative loss. Let $\theta^\ast = \operatorname{argmin} _\theta \sum^T _{i=1} \ell(\theta, x_i)$, and suppose at each time $t$ we obtain $\theta_t$ such that $\sum^t_{i=1} \ell(\theta_t, x_i) \leq (1 + \epsilon)\min _\theta \sum^t_{i=1} \ell(\theta, x_i)$. Note that here we even assume that one has access to $x_t$ before making the decision at time $t$; in our setting, one does not have this. Then the regret is $$\left(\ell(\theta_1, x_1) + \ldots + \ell(\theta_T, x_T)\right) - (1+\epsilon)\left(\ell(\theta^\ast, x_1) + \ldots + \ell(\theta^\ast, x_T)\right) \leq \left(\ell(\theta_1, x_1) + \ldots + \ell(\theta_T, x_T)\right) - \left(\ell(\theta_T, x_1) + \ldots + \ell(\theta_T, x_T)\right) .$$ Because $\{\theta_t\}^T_{t=1}$ are only approximate solutions, we cannot proceed with the be-the-leader argument, as we cannot say $\left(\ell(\theta_T, x_1) + \ldots + \ell(\theta_T, x_{T-1})\right) \geq \left(\ell(\theta_{T-1}, x_1) + \ldots + \ell(\theta_{T-1}, x_{T-1})\right) $. In the case where the offline algorithm is exact, optimizing with respect to the cumulative loss at each step (follow the leader) works in the stochastic setting. Yet it remains unclear to us whether this result still holds in the random-order setting. Regarding the related questions: no, the big-$O$ in Lemma 5.2 does not hide a dimension factor; Lemma 5.2 is independent of the dimension $d$. The coreset construction method required to derive Theorem 5.1 is obtained by applying Algorithm 2 together with the coreset construction proposed by Huang and Vishnoi '20 to obtain dimension-independent results. The overall algorithm is given in the appendix (Algorithm 7). We will revise this in the final version to make it more explicit. # 3. Average sensitivity vs algorithmic stability.
The average sensitivity cannot be directly compared to algorithmic stability. Algorithmic stability describes the change in the loss caused by removing or replacing a data point, while average sensitivity describes the change in the algorithm's output caused by removing a data point. For more discussion on this, we refer to Varma, N., & Yoshida, Y. (2021). Average sensitivity of graph algorithms. SODA. # 4. Applicability in the full adversarial setup We do not think the argument applies to the adversarial setting, as $\sum_{t} \ell(A(X_t), x_{t+1}) - \ell(A(X_{t+1}), x_{t+1}) \leq \sum_t \beta(t)$ does not hold in the adversarial setting. This is because we can only guarantee that, on average, removing a point does not change the output on $X_{t+1}$. Yet for every $t$, removing $x_{t+1}$ might be the worst change to $X_{t+1}$. # 5. $\epsilon$-regret vs competitive ratio The $\epsilon$-regret quantifies the cumulative discrepancy between the losses incurred by an online algorithm and the $\epsilon$-approximation of the losses obtained by the optimal hindsight solution. The competitive ratio determines the maximum factor by which the online algorithm's cost can exceed the cost of an optimal offline solution in the worst case. The optimal solution is also defined differently: in regret analysis, we consider $\min_\theta \sum_t \ell(\theta,x_t)$ as the optimal value, but in competitive analysis, we consider $\sum_t \min_\theta \ell(\theta,x_t)$ as the optimal value. In general, $\epsilon$-regret and the competitive ratio are not interchangeable. # 6. Lines 36-37 The approximation ratio of the coreset is irrelevant to the point we are trying to make, but we do need to run the offline algorithm on the obtained coreset (which enjoys low average sensitivity) for the overall algorithm to enjoy low average sensitivity, and hence low regret.
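The difference between the two benchmarks in point 5 can be seen on a toy loss table (illustrative values only):

```python
# losses[t][j]: loss of candidate parameter j at step t.
losses = [
    [1.0, 0.0],
    [0.0, 1.0],
    [1.0, 0.0],
]

# Regret benchmark: the best *single* parameter in hindsight.
regret_benchmark = min(sum(row[j] for row in losses) for j in range(2))

# Competitive-analysis benchmark: the best parameter at *every* step.
competitive_benchmark = sum(min(row) for row in losses)

print(regret_benchmark, competitive_benchmark)
```

Here the per-step benchmark is 0 while the best fixed parameter still incurs loss 1, so the two notions measure an algorithm against genuinely different baselines.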
# Misc "the importance sampling steps": We are referring to the importance sampling (based on sensitivity scores) mentioned in Section 5 of the overall coreset construction algorithm for clustering (which is in the appendix, Algorithm 7). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their rebuttal. ### 2. Coresets vs UC Could you please clarify the relation between Algorithm 2 and Algorithm 7? ### 3. average stability vs algorithmic stability Whether a datapoint is removed or replaced is not important in the definition of algorithmic stability, and a stronger notion (and the one mostly used in practice), average stability, measures the change in the output, not in the loss function. The difference pointed out in Varma, N., & Yoshida, Y. (2021) is with regard to *uniform* stability. I don't see how it applies to average stability (or to uniform stability where the datapoints are averaged over; in average stability, both the input sets and datapoints are averaged over). --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the comments. We address the raised questions in the following. # 1. Algorithm 2 and Algorithm 7 Algorithm 2 provides a template for designing coreset construction algorithms with small average sensitivity by perturbing the weights (Lines 5-10). Algorithm 7 applies this idea to Huang and Vishnoi [2020]'s algorithm. Specifically, Lines 6 and 11 describe the perturbations to the weights. # 2. Average sensitivity vs algorithmic stability We first want to remark that average sensitivity does not take the average over the input set, but average stability does (we follow Definition 4 from Lei, Y., & Ying, Y. (2020) here). If the average sensitivity is bounded by $\beta$, then the average stability is bounded by $\beta$ times the maximum distance between $\theta$'s, which is the algorithm parameter (or times the loss difference incurred by them, if the average stability is defined with respect to the loss). 
As average sensitivity considers the worst case (in terms of the input set), we do not believe that average stability is sufficient to derive our results. In the stochastic setting, it is not clear whether average stability would be sufficient. In our current analysis, we crucially used the fact that the total variation (TV) distance is bounded (so we can say that the outputs are the same with high probability). For average stability to be sufficient, the loss has to be Lipschitz, which may not always be true (for example, when the matrix has a high condition number in linear regression). We are also a bit confused about why TV upper bounds the Wasserstein distance (we believe this is true if the data are assumed to lie in a unit ball, but we do not think it is true in the general case). Lei, Y., & Ying, Y. (2020). Fine-grained analysis of stability and generalization for stochastic gradient descent. In International Conference on Machine Learning (pp. 5809-5819). PMLR. --- Reply to Comment 1.1.2: Comment: We thank the reviewer again for the detailed comments. We hope our response answered your questions, and we will be more than happy to answer any additional questions.
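As a toy illustration of the average-sensitivity notion discussed in this thread (our own example, not from the paper): for a deterministic algorithm such as $A(X) = \mathrm{mean}(X)$, the average sensitivity with respect to removing a uniformly random point reduces to the expected distance between $A(X)$ and $A$ run on the dataset with one point deleted.

```python
import numpy as np

def average_sensitivity(A, X):
    # Expected change in A's output when one uniformly random point is removed.
    # For randomized algorithms this distance would be a distributional one
    # (e.g. TV distance); for a deterministic A it is just the absolute difference.
    base = A(X)
    n = len(X)
    return sum(abs(A(np.delete(X, i)) - base) for i in range(n)) / n

X = np.array([0.0, 1.0, 2.0, 3.0, 10.0])  # toy dataset with one outlier
print(average_sensitivity(np.mean, X))    # the outlier dominates the sensitivity
```

Note how a single outlier drives the average: removing the point `10.0` shifts the mean far more than removing any other point, which is the kind of per-point effect the notion averages over.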
Universal Gradient Descent Ascent Method for Nonconvex-Nonconcave Minimax Optimization
Accept (poster)
Summary: The paper proposes the doubly smoothed gradient descent ascent method and studies its convergence under nonconvex-KL and nonconvex-concave minimax problems. Strengths: This paper proposes a novel model to address the limiting cycle phenomena in nonconvex-nonconcave minimax optimization. The theoretical analysis matches the best-known results for other methods. Weaknesses: The algorithm proposed in this paper comes naturally. From my experience, I can expect that DSGDA performs well in practice. My main concern is the theoretical guarantee of DSGDA. However, it seems that the results given by this paper do not show any superiority of DSGDA compared with other methods. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Are there any differences between the convergence of Doubly Smoothed GDA and Smoothed GDA (Zhang et al. 2020) / Smoothed PLDA (Li et al. 2022) in theory? From the theoretical viewpoint, why is Doubly Smoothed GDA better than Smoothed GDA? Can we show that the Doubly Smoothed GDA provably gets rid of limiting cycles? The authors claimed that "our work demonstrates, for the first time, the possibility of having a simple and unified single-loop algorithm for solving nonconvex-nonconcave, nonconvex-concave, and convex-nonconcave minimax problems." Can the authors explain why the Smoothed GDA cannot have such a "unified analysis"? By the way, I think it is better to use "nonconvex-KL" instead of "nonconvex-nonconcave" since the difficulties of these two problems seem to be very different from my point of view. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. We hope the clarification addresses your concerns and that you may re-evaluate our contribution. **Q1: Superiority of DSGDA compared with other methods.** DSGDA can directly achieve both $\epsilon$-GS and $\epsilon$-OS points at the same rate. Furthermore, its convergence rate stands out as the fastest among the current body of literature, aligning with the results of SGDA. For the superiority of DSGDA compared with SGDA, please refer to the global response to Q2. **Q2: Differences between the convergence of DSGDA and SGDA (Zhang et al. 2020)/Smoothed PLDA (Li et al. 2022) in theory. Why is DSGDA better than SGDA? Can we show that the DSGDA provably gets rid of limiting cycles?** - The introduction of double extrapolation in DSGDA fundamentally differentiates it from SGDA. Several key aspects highlight these differences. 1. The previously used Lyapunov function in SGDA is no longer applicable, and a novel Lyapunov function tailored for DSGDA is constructed (see line 198). 2. To establish the sufficient decrease property of this newly proposed Lyapunov function, the derivation of a new proximal error bound is required (refer to Proposition 1). 3. Conceptually, due to the two extrapolation steps, extending the proof technique from SGDA to DSGDA presents significant challenges in primal-dual balancing. To be specific, in our theoretical analysis (as shown in Theorem 1), the two additional parameters $r_2$ and $\mu$ make it difficult to provide explicit bounds for all hyperparameters. In SGDA, since there is only one-sided extrapolation, the relationship between different parameters is rather simple: the step sizes can be sufficiently small. In DSGDA, by contrast, due to the double extrapolation, their relationship is complicated and the step sizes have non-trivial lower bounds. 
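To make the double-extrapolation structure discussed above concrete, here is a minimal numpy sketch of the update pattern; this is our own illustrative reconstruction on a toy bilinear game $f(x,y)=xy$ over $[-1,1]^2$, not the paper's implementation. It assumes the doubly smoothed surrogate $f(x,y)+\frac{r_1}{2}\|x-z\|^2-\frac{r_2}{2}\|y-v\|^2$, with the two extrapolation steps letting $z$ and $v$ slowly track $x$ and $y$; the parameters $c$, $\alpha$, $r_1$, $r_2$, $\beta$, $\mu$ borrow the notation of Theorem 1 but their values here are chosen for illustration only.

```python
import numpy as np

def clip(u):
    # Projection onto the constraint box [-1, 1].
    return float(np.clip(u, -1.0, 1.0))

# Toy bilinear game f(x, y) = x * y with saddle point (0, 0).
def grad_x(x, y): return y   # df/dx
def grad_y(x, y): return x   # df/dy

def ds_gda(steps=5000, c=0.1, alpha=0.1, r1=1.0, r2=1.0, beta=0.05, mu=0.05):
    x = y = z = v = 0.5
    for _ in range(steps):
        # Descent/ascent on the doubly smoothed surrogate
        # f(x, y) + (r1/2)(x - z)^2 - (r2/2)(y - v)^2.
        x_new = clip(x - c * (grad_x(x, y) + r1 * (x - z)))
        y_new = clip(y + alpha * (grad_y(x, y) - r2 * (y - v)))
        # The two extrapolation steps: z and v slowly track x and y.
        z += beta * (x_new - z)
        v += mu * (y_new - v)
        x, y = x_new, y_new
    return x, y

def plain_gda(steps=5000, c=0.1, alpha=0.1):
    x = y = 0.5
    for _ in range(steps):
        x, y = clip(x - c * grad_x(x, y)), clip(y + alpha * grad_y(x, y))
    return x, y

xd, yd = ds_gda()      # approaches the saddle (0, 0)
xp, yp = plain_gda()   # spirals outward and keeps cycling near the box boundary
```

On this toy game, plain simultaneous GDA increases the distance to the saddle at every unconstrained step and ends up cycling near the boundary, while the doubly smoothed iterates contract toward $(0,0)$; this is only meant to visualize the smoothing mechanism, not to reproduce the paper's experiments.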
Although the algorithmic extension of DSGDA may seem intuitive and straightforward, we firmly believe that substantial effort and theoretical development are required to establish the global convergence of our proposed DSGDA method. The complexities introduced by double extrapolation demand rigorous investigation and are the focus of our research efforts. - From the theoretical viewpoint, DSGDA is the first simple and universal single-loop algorithm for solving NC-NC, NC-C, and C-NC minimax problems. SGDA does not have such a "unified analysis". Please refer to the global response to Q1. - Yes, we believe DSGDA can be proven to get rid of limit cycles. Regarding the gap between the practical claim and the theoretical claim, please refer to the global response to Q3. **Q3: Can the authors explain why the SGDA cannot have such a "unified analysis"? It is better to use "NC-KL" instead of "NC-NC" since the difficulties of these two problems seem to be very different.** - The SGDA proposed in [1] and [2] can only solve NC-concave (or NC-KL) problems. Further algorithmic changes are needed for solving KL-NC problems. Please refer to the global response to Q1 for details. - Intuitively, DSGDA is designed to address general NC-NC problems. Although our theoretical analysis currently focuses on problems satisfying a one-sided KL (or convexity/concavity) condition, we observe that DSGDA effectively escapes the limit cycle in numerous challenging NC-NC examples without any regularity condition (see Section 2). Moreover, our algorithm can also be applied to NC-C problems, which may not satisfy the KL property (see counterexamples in Corollary 4 of [3]). It may be better to use "a class of Constrained Nonconvex-Nonconcave Minimax Optimization" instead. [1]. Zhang J, Xiao P, Sun R, et al. A single-loop smoothed gradient descent-ascent algorithm for nonconvex-concave min-max problems. [2]. Li J, Zhu L, So A M C. Nonsmooth Composite Nonconvex-Concave Minimax Optimization. 
[3]. Bolte J, Pauwels E. Curiosities and counterexamples in smooth convex optimization. --- Rebuttal Comment 1.1: Title: Thanks a lot for helping me understand this paper Comment: The authors' rebuttal helps me further understand the contribution and technical challenges in this paper. I now think that this article has made a sufficient theoretical contribution and sheds light on nonconvex-nonconcave minimax problems. I think many useful insights provided in this paper would be of great help for future research on nonconvex-nonconcave minimax problems. I now like this paper very much. I will be glad if I see this paper published in NeurIPS 2023. I decide to raise my score to 6. By the way, I agree with reviewer gc6W that it would be better to use “S-GDA” / “DS-GDA” instead of "SGDA" / "DSGDA" to avoid possible confusion. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks for your response. We are delighted to hear you are willing to re-evaluate our work and happy to see that you appreciate it. For the abbreviation of S-GDA/DS-GDA, we will revise it in the updated version.
Summary: This paper addresses the nontrivial general minimax problem by proposing a novel algorithm called Doubly Smoothed Gradient Descent Ascent (DSGDA). The authors provide both experimental and theoretical evidence to support their claims. Strengths: 1. Overview I enjoyed reading this work. It's well written and organized, although some references and comparisons are missing (see below). 2. The proposed algorithm is not complicated and the intuition is sound. 3. Convergence results are sound. Weaknesses: While I acknowledge the soundness of the algorithm and the obtained results, I do have several concerns about this paper. ### W1: The theoretical convergence results demonstrating that the proposed method converges to a stationary point do not necessarily showcase its effectiveness in addressing the issue of **general** minimax problems. Although it is generally difficult for first-order methods to derive other convergence properties, such as local minimax points, for nonconvex-nonconcave problems, could the authors show the superiority of the algorithm under other settings? ### W2: The authors claim that their work demonstrates, for the first time, the possibility of having a simple and unified single-loop algorithm for solving nonconvex-nonconcave, nonconvex-concave, and convex-nonconcave minimax problems. However, to my knowledge, GDA-AM (ICLR 2022) also demonstrated such theoretical results (Thm C.4 and C.9) and showed convergent experiments. Additionally, the fast extragradient (NeurIPS 2021) made similar claims, as mentioned in Remark 1. Could the authors discuss the differences in contributions between their work and these existing papers? ### W3: Experiments The authors only compared their algorithm with SGDA. However, what about other methods? For example, how does it compare to fast extragradient or GDA-AM on these problems? 1. He et al. GDA-AM: On the effectiveness of solving minimax optimization via Anderson Acceleration, ICLR 2022 2. Lee et al. 
Fast Extra Gradient Methods for Smooth Structured Nonconvex-Nonconcave Minimax Problems, NeurIPS 2021 Setting aside the several unaddressed concerns above, I tend to accept. I look forward to the authors' response. Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Explanation of Theorem 1 Could the authors make additional comments on the conditions of Theorem 1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Authors discussed limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. We are happy that the reviewer enjoys our paper and would like to thank you for your insightful comments. Here are some clarifications based on your questions/suggestions. **Q1: Superiority of the algorithm under other settings.** We kindly ask the reviewer to confirm whether we have misunderstood the question; we answer it along the following two possible directions. We'd be happy to clarify and answer your follow-up questions during the discussion period. * For general C-NC and NC-C problems, we have shown our algorithm can achieve both $\epsilon$-game stationary (GS) and $\epsilon$-optimization stationary (OS) points. They are the best points we can get in these settings, and similar results can be obtained by SGDA. For C-C problems, we believe DSGDA can achieve a Nash equilibrium point, and we intend to explore it in future research. * For the superiority of DSGDA on C-NC and NC-C problems, our convergence rate matches the results of SGDA, which is the state of the art. For the superiority compared with SGDA, please refer to the global response to Q1. **Q2: Differences with GDA-AM (ICLR 2022) and fast extragradient (NeurIPS 2021).** Thanks for pointing out two useful references. We believe our proposed DSGDA is fundamentally different from GDA-AM and fast extra-gradient. Here are detailed comparisons: **For GDA-AM:** GDA-AM only enjoys a global convergence guarantee for **unconstrained** C-C and C-NC problems. We have gone through the proof details; to the best of our knowledge, GDA-AM cannot handle the constrained case, and its extension to the constrained scenario is highly non-trivial. Even under the unconstrained setting, the convergence for NC-NC problems is unresolved. As far as we know, no first-order method can converge for unconstrained NC-NC problems. 
We conduct a new experiment for the ``sixth-order polynomial'' example with initialization $(x,y)=(15,15)$; none of the first-order methods (SimGDA, AltGDA, OMD, EG, DSGDA, AltGDA-RAM, SimGDA-RAM) converge (refer to Figure (h) in **PDF**). For constrained NC-NC problems that we studied in the paper, the applicability of GDA-AM remains uncertain. In contrast, we have demonstrated that our algorithm effectively eliminates limit cycles in challenging constrained NC-NC scenarios, accompanied by theoretical guarantees for KL-NC and NC-KL settings. **For fast extra-gradient:** Firstly, it is only shown to converge under the negative comonotone condition, which is a stronger assumption than weak MVI. Such an assumption is restrictive, and general C-NC/NC-C problems can easily violate it. For example, the violation of this condition for the C-NC problem $\min_{x\in \mathcal{X}}\max_{y\in \mathcal{Y}}f(x,y)=2x-y^2+4xy^6$ with $\mathcal{X}=\mathcal{Y}=\\{z:-1\leq z\leq 1\\}$ can be checked. Secondly, the convergence for C-NC/NC-C problems is unaddressed in that paper (i.e., NC-C/C-NC problems do not necessarily satisfy the negative comonotone condition), while DSGDA enjoys global convergence with an iteration complexity of $\mathcal{O}(\epsilon^{-4})$. Finally, CurvatureEG+, an algorithm that converges under the weak MVI condition, is compared with DSGDA in Section 2. We can observe that CurvatureEG+ diverges or falls into a limit cycle for many examples. Given that weak MVI represents the weakest VI condition, we can reasonably anticipate similar outcomes for the fast extra-gradient method. **Q3: Experiments for GDA-AM and fast extragradient.** GDA-AM cannot be directly applied to constrained problems, and the fast extra-gradient will have similar performance to CurvatureEG+. Please refer to the response to Q2 for details. **Q4: Explanation of Theorem 1: Could the authors comment on the conditions of Theorem 1?** Please refer to the global response to Q2. 
**Missing reference** Thanks for your advice. We will add the missing reference in the updated version. --- Rebuttal Comment 1.1: Title: Thanks for the clarification Comment: Thank the authors for their comprehensive response. It helped me to understand the contributions of DSGDA. I will read the two references again and more closely. Will come back later.
Summary: This paper introduces a novel single-loop algorithm called the doubly smoothed gradient descent ascent method (DSGDA). DSGDA effectively balances the primal and dual updates, eliminating limit cycles in various challenging nonconvex-nonconcave scenarios in the literature. The paper establishes that under a one-sided Kurdyka-Łojasiewicz condition, with an exponent $\theta\in (0,1)$ (or for convex primal/concave dual functions), DSGDA can discover a game-stationary point with an iteration complexity of $O(\epsilon^{-2\max\{2\theta, 1\}})$ (or $O(\epsilon^{-4})$), respectively. These complexity results match the best outcomes achieved by single-loop algorithms for solving nonconvex-concave or convex-nonconcave minimax problems, as well as problems satisfying the restrictive one-sided Polyak-Łojasiewicz condition. Strengths: **Originality**: The paper provides a comprehensive discussion of existing algorithms for mini-max optimization problems and introduces a novel algorithm aimed at avoiding limit cycles. The proposed algorithm is supported by both theoretical results and empirical validation, strengthening its efficacy. **Quality and Clarity**: The paper exhibits a high level of writing quality, featuring a well-structured presentation that is easy to follow. It effectively employs examples and explanations to enhance comprehension of both the algorithm and the underlying theory. Weaknesses: **Significance**: While I acknowledge the valuable contributions made by this paper in terms of algorithmic and theoretical aspects, it is important to note that the current theory relies on one-sided KL or one-sided convex/concave conditions. Considering the paper's objective of addressing limit cycles, it would be beneficial to establish a clearer connection between the existing theory and the role of DSGDA in avoiding such cycles. Elaborating on this relationship would significantly enhance the paper's overall strength and clarity. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Main Questions** 1. Relation between OS and GS: I would suggest including further discussions in the paper regarding the relationship between the OS and GS notions. It appears that there exists a translation method between these two notions, as mentioned in [1]. Elaborating on this relationship can greatly enhance the paper's quality and comprehensibility. 2. Odd constants in Theorem 1: The extrapolation step in the algorithm introduces exceedingly large constants, which might seem counter-intuitive. Although the calculations on Page 23 provide some insight into their derivation, further explanation is required to clarify the reasoning behind these values and make them more understandable to readers. 3. Connection between theory and practice: Considering the paper's objective of addressing limit cycles, it would be beneficial to establish a clearer connection between the existing theory and the role of DSGDA in avoiding such cycles. Elaborating on this relationship would significantly enhance the paper's overall strength and clarity. 4. Unconstrained setting: While the paper focuses on the two-sided constrained setting, it is worth noting that some existing papers tackle the scenario where only the domain of the max variable is bounded, as demonstrated in [1]. To enhance the paper, it could be valuable to discuss the disparities and challenges encountered in the unconstrained setting, providing a comparative analysis between the two scenarios. 5. Stochastic setting: It would be interesting to see if similar theoretical results can be obtained in the stochastic setting as provided in [2]. **Minor issues** 6. I would recommend not directly using “nonconvex-nonconcave” in the title as the paper does not consider the general nonconvex-nonconcave setting. 7. I recommend not using “SGDA” to represent the smoothed GDA as SGDA is well-known for stochastic gradient descent ascent algorithms. 
Using “S-GDA” and “DS-GDA” instead would avoid confusion. 8. Line 77: The size of the right parenthesis **Reference** [1] Lin, Tianyi, Chi Jin, and Michael Jordan. "On gradient descent ascent for nonconvex-concave minimax problems." International Conference on Machine Learning. PMLR, 2020. [2] Yang, Junchi, et al. "Faster single-loop algorithms for minimax optimization without strong concavity." International Conference on Artificial Intelligence and Statistics. PMLR, 2022. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper is theoretical and does not have any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are happy that the reviewer enjoys our paper and would like to thank you for your insightful comments. Below we provide a point-by-point response to your comments and questions. **Q1: Connection between theory and practice: It would be beneficial to establish a clearer connection between the existing theory and the role of DSGDA in avoiding such cycles.** Thanks for your advice. We will add additional discussion in our updated version. * All existing work for NC-NC problems relies on restricting the function class to achieve convergence. There are mainly three types of regularity conditions in the literature: the PL condition, VI-related conditions, and $\alpha$-interaction dominance (see Appendix B). With the PL condition imposed on the dual function, the inner max function $\phi(\cdot)=\max_{y\in \mathcal{Y}} f(\cdot,y)$ is smooth, and we can regard the minimax problem as a purely smooth minimization problem. VI conditions regard the primal and dual variables as a single entity and update them together, which does not follow the nature of sequential games. The one-sided $\alpha$-dominance condition designates the dominant player before the game. Instead, we address the NC-NC minimax problem by achieving primal-dual balance through algorithmic developments. We believe the double smoothing technique is an efficient manner to get rid of the limit cycle. * Extensive experimental results also validate the power of DSGDA (see Section 2). To the best of our knowledge, DSGDA is the first algorithm that converges on all four difficult examples in the literature. We admit the theoretical result of DSGDA is restricted to one-sided KL (convex/concave) cases. Although there is a gap between practice and theory, we hope it opens a new path for studying and developing new theoretical frameworks. **Q2: Relation between OS and GS** Thanks for the good question. 
We do have such a GS and OS translation result: **If $(x,y)\in \mathcal{X}\times \mathcal{Y}$ is an $\epsilon$-GS, then it is an $\mathcal{O}(\epsilon^{\min\\{1,1/2\theta\\}})$-OS.** It should be noted that our algorithm-dependent result (Theorem 2 for the proposed DSGDA) achieves the GS and OS at the same rate, which is stronger than directly applying the translation result when $\theta\in(\frac{1}{2},1)$. We will add the proof in our updated version; here is a proof sketch. **Proof Sketch** Building on Proposition 2 in the appendix, we have the following bound on the measure of OS: $$\\|x^*(x)-x\\|\leq \\|x^*(x)-x(x,y)\\|+\\|x(x,y)-x\\|\leq \omega_2\\|y-y(x,y)\\|^{\frac{1}{2\theta}}+\\|x(y,x,v)-x\\|+\sigma_1\\|y-y(x,y)\\|.$$ By the nonexpansiveness of the projection operator and the error bounds in Lemmas 2 and 3, we can further bound $\\|y-y(z,v)\\|$ as follows: $$\\|y-y(z,v)\\|\leq \sigma_8 \\|y-y_{+}(z,v)\\|\leq L_y\alpha\sigma_8\\|x-x(y,z,v)\\|+\left(2+\alpha L_y+\alpha r_2\right)\sigma_8\\|y-y(x,z,v)\\|.$$ Next, we explore the relationship between $\\|x-x(y,x,v)\\|$ and $dist(z, \nabla_{x}f(x,y)+\partial1_{\mathcal{X}}(x))$. Let $x_{+}(y,x,v):=proj_{\mathcal{X}}(x-c\nabla_{x}F(x,y,x,v))$; then from the primal error bound (see [3]), we know that $$\\|x-x(y,x,v)\\| \leq \frac{cL_x+cr_1+1}{cr_1-cL_x}\\|x-x_{+}(y,x,v)\\|.$$ Moreover, since $\nabla_{x}F(x,y,x,v)=\nabla_{x}f(x,y)$, then following Lemma 4.1 of [4], we get $$\\|x-x(y,x,v)\\| \leq \frac{cL_x+cr_1+1}{cr_1-cL_x}\\|x-proj_{\mathcal{X}}(x-c\nabla_{x} f(x,y))\\|\leq \frac{cL_x+cr_1+1}{r_1-L_x} dist(z, \nabla_{x}f(x,y)+\partial1_{\mathcal{X}}(x)).$$ A similar analysis can be applied to derive the bounds for $\\|y-y(x,z,y)\\|$. Thus, if $(x,y)$ is an $\epsilon$-GS point, then it is also an $\mathcal{O}(\epsilon^{\min\\{1,1/2\theta\\}})$-OS. **Q3: Odd constants in Theorem 1** We kindly ask the reviewer to confirm whether we have misunderstood the question. 
From our side, the extrapolation step $\beta$ should decrease to zero w.r.t. iteration $T$ when $\theta\in (\frac{1}{2},1)$ (see Theorem 2). This implies that the sequence $\{z_t\}$ will undergo a decreasing rate of change, gradually approaching convergence. Conceptually, you can regard $z$ as an approximated proximal mapping of the max function $\max_{y\in\mathcal{Y}} f(\cdot,y)$. For further insights into parameter selection, please refer to the global response to Q2. **Q4: Unconstrained setting** For the NC-C problem, the primal boundedness requirement can be removed and only the lower boundedness of the max function is enough; this allows the unbounded case and matches the results in [1]. For the NC-NC case, however, the boundedness requirement is needed to ensure the lower boundedness of the Lyapunov function. **Q5: Stochastic setting** Yes, it can be done. There are no intrinsic difficulties. However, since the paper is already 32 pages, it may dilute the current focus of the paper if the proof for the stochastic setting is included. **Q6: Minor issues** Thanks for your advice. We will modify our title to "Doubly Smoothed GDA for a class of Constrained Nonconvex-Nonconcave Minimax Optimization". For the other suggestions on the abbreviation and the right parenthesis, we will revise them accordingly. [3]. Pang J S. A posteriori error bounds for the linearly-constrained variational inequality problem. [4]. Li G, Pong T K. Calculus of the exponent of Kurdyka–Łojasiewicz inequality and its applications to linear convergence of first-order methods. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I am looking forward to seeing the upcoming revision, particularly regarding the relation between OS and GS. I would be much happier if the authors can provide more theoretical and fundamental analyses on *"We address the NC-NC minimax problem by achieving primal-dual balance through algorithmic developments. 
We believe the double smoothing technique is an efficient manner to get rid of the limit cycle."*
Summary: Constructing a method that finds an optimal point of nonconvex-nonconcave problems is of interest. Built upon the "one-sided" smoothed GDA (SGDA) that exploits the Moreau-Yosida smoothing technique, this paper studies the "doubly" smoothed GDA (DSGDA), applying the smoothing to both the primal and dual variables. As an illustration, this paper considers four challenging nonconvex-nonconcave minimax problems that do not satisfy any of the regularity conditions considered in the literature and on which none of the existing methods work. It turns out that the DSGDA is the first known method that works for all four difficult problems. To complement this empirical success, the authors theoretically show that the DSGDA converges to a stationary point under a one-sided KL condition. (The aforementioned four problems do not satisfy the one-sided KL condition.) The authors also show that the DSGDA works for a wider range of hyperparameters than the SGDA for nonconvex-nonconcave problems. Strengths: - The proposed DSGDA method empirically converges to stationary points of four notoriously difficult minimax problems, while other existing methods do not, which is impressive. - The authors show that the DSGDA converges to a stationary point of nonconvex-nonconcave problems with the one-sided KL property. The method achieves the best known rate for the more restrictive setting in which the dual function satisfies either concavity or the PL condition. Weaknesses: * The DSGDA does not outperform SGDA under the one-sided KL setting, where the proposed DSGDA is theoretically shown to work well. Would it be possible to show that the SGDA also converges under the one-sided KL setting? Or instead, would it be possible to show that it does not converge? Are there other methods that work under the one-sided KL setting? * As mentioned by the authors, there remains a gap between the theoretical claim and the practical claim. 
* The robustness claim is only based on one nonconvex-nonconcave experiment. In addition, the region of convergence of DSGDA for the nonconvex-nonconcave case is relatively small, so it does not seem to be sufficient to claim that DSGDA is robust (even though it is more robust than the SGDA). * The choice of hyperparameters that guarantees convergence in Theorem 1 is complicated, so it is not easy to find an optimal choice in terms of the rate, making it difficult for the users to choose. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * Line 33: Why does no player inherently dominate the other in nonconvex-nonconcave problems? Later in line 50, the authors claim that the primal player becomes dominant for the nonconvex-concave problem. Then, consider a nonconvex-nonconcave problem that is locally nonconvex-concave at some stationary points. Then will the primal player suddenly become dominant locally? * Line 34: How do you achieve a good balance between primal and dual updates in this paper? Of course, applying smoothing to both variables improves the balance, as claimed by the authors. However, the choice of regularization parameters $r_1$ and $r_2$ and step sizes $\alpha$, $c$, $\beta$, $\mu$ is not the same for both variables. At this point, I am not following what is really meant by a good balance. * line 83: In what sense is this work the best known convergence analysis? * Are there practical examples that satisfy the one-sided KL property but not the one-sided PL property? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This work has potential, but the gap between the practical claim and the theoretical claim does not seem negligible, and I think the paper could have been better if this gap were further reduced. For example, an experiment with one-sided KL where DSGDA outperforms SGDA could have better supported the paper's claim. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments. We hope the clarifications we emphasize below will help convey a better understanding of our contributions. **Q1: Does SGDA converge under the one-sided KL setting? Are there other methods that work under the one-sided KL setting?** Recent work [1] has demonstrated the convergence of SGDA under the one-sided KL condition, and it is currently the sole method with a global convergence guarantee under this setting. Despite this, DSGDA still outperforms SGDA due to its universality. Please refer to the global response to Q1. **Q2: Gap between theory and practice.** Please refer to the global response to Q3. **Q3: The region of convergence of DSGDA for the nonconvex-nonconcave case is small, so it is not sufficient to claim that DSGDA is robust.** The NC-NC example ($a=11$) we evaluate in the paper (Figure 3(c)) is already an extremely challenging case, as SGDA fails to converge at $a=10$. The challenge stems from the difficulty in identifying the dominant side. Thus, the problem becomes easier and the range of parameters becomes wider when $a$ becomes larger (see Figures (e)-(f) in the **PDF**). To further corroborate the robustness of DSGDA, we provide the parameter ranges for several less challenging problems with $a=11.6$ and $a=13$. It is evident that both the robustness (see Figures (c)-(d) in the **PDF**) and the fast convergence (see Figures (a)-(b) in the **PDF**) of DSGDA persist across these NC-NC problems. **Q4: It is not easy to find an optimal choice in terms of the rate.** The choice of parameters does not affect the convergence rate. In fact, all parameters relate only to the Lipschitz constants of the gradients $L_x, L_y$. Please refer to the global response to Q2. **Q5: Will the primal player suddenly become dominant locally for a locally NC-C problem?** Thanks for pointing this out. We would like to clarify that dominance is a global concept. 
One player is said to dominate the other if its decisions determine the convergence no matter how good the other player's actions are. For NC-NC problems, no player inherently dominates the other, and the local structure has no effect on global convergence. **Q6: How do you achieve a good balance between primal and dual updates in this paper? The choice of parameters is not the same.** The imbalance between primal and dual updates in minimax problems arises from the different optimization directions and the related changing quantities of the two players. That is, one aims to minimize the function value, while the other aims to maximize it, making it hard to guarantee the sufficient decrease property of a function. Thus, achieving a good balance means that we can construct a novel Lyapunov function that possesses the "sufficient decrease" property (see Theorem 1). Here, the parameters are not the same in general. They are precisely controlled to ensure the "sufficient decrease" property, achieving a good balance between primal and dual updates. **Q7: In what sense is this work the best-known convergence analysis?** Thanks for your questions. We will clarify this point in our updated version. * **NC-C:** DSGDA attains both the $\epsilon$-GS (game-stationary point) and $\epsilon$-OS (optimization-stationary point) with a complexity of $\mathcal{O}(\epsilon^{-4})$, matching the sharpest rate among single-loop algorithms for NC-C minimax problems. * **NC-PL/NC-SC:** DSGDA attains both the $\epsilon$-GS and $\epsilon$-OS with a complexity of $\mathcal{O}(\epsilon^{-2})$, which is already optimal. **Q8: Are there practical examples that satisfy the one-sided KL property but not the one-sided PL property?** The PL property is only defined for unconstrained problems, while the KL property is more general and can be applied to constrained (nonsmooth) problems. Additionally, even in unconstrained cases, the PL condition is a special case of KL with exponent $\theta=\frac{1}{2}$. 
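For concreteness, the standard statements of the two one-sided conditions, written here in our own notation for the dual function $y \mapsto f(x,y)$ (this block is an illustration we add, not taken from the paper), are:

```latex
% One-sided KL property with exponent \theta \in [0,1): there exists c > 0 with
\operatorname{dist}\Bigl(0,\ \partial_y\bigl(-f(x,\cdot)\bigr)(y)\Bigr)
  \;\ge\; c \Bigl(\max_{y'} f(x,y') - f(x,y)\Bigr)^{\theta}.

% One-sided PL property (unconstrained y): there exists \mu > 0 with
\bigl\|\nabla_y f(x,y)\bigr\|^{2}
  \;\ge\; 2\mu \Bigl(\max_{y'} f(x,y') - f(x,y)\Bigr),
% i.e., exactly the KL inequality with \theta = \tfrac{1}{2} and c = \sqrt{2\mu}.
```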
Hence, KL functions are considerably broader than PL functions, and many constrained minimax problems fall into this case. More specifically, we may consider the widely studied max-structured problem $\min_{x\in\mathcal{X}} \max_{y\in\Delta} y^\top G(x)$, where $\Delta =\\{y \in \mathbb{R}^d:\sum_{i=1}^d y_i = 1, \ y \ge 0\\}$ is the standard simplex. Such a problem arises frequently in machine learning applications, including distributionally robust optimization, adversarial training and fairness training. It can be shown that this problem possesses the KL property with exponent $\theta=0$ for the dual problem under the mild condition that there exists $\delta>0$ such that $\max_{i\in [d]} G_i(x^*)\ge G_j(x^*)+\delta$ for all $j\in[d]$ satisfying $y_j^*=0$; see [1] for details. Moreover, constrained NC-SC and NC-PL problems all satisfy the one-sided KL property. For example, the nonconvex-regularized variant of the DRO problem and the multi-class classification problems mentioned in [2] possess the KL property with $\theta=\frac{1}{2}$ for the dual problem. **Q9: An experiment with one-sided KL where DSGDA outperforms SGDA could have better supported the paper's claim.** Thank you for your suggestions. To further demonstrate the superiority of DSGDA over SGDA for the NC-KL case, we have included an additional experiment; see Figure (g) in the **PDF** (the example we test in the global response to Q1). As we have mentioned, the universal applicability of DSGDA has been clarified in the global response to Q1. We further support it with the new experiment illustrated in Figure (g). Notably, DSGDA, even without prior knowledge, demonstrates faster convergence upon entering the local region of the stationary point. This efficiency contrasts with SGDA, even when equipped with the correct smoothing side. [1] Li J, Zhu L, So A M C. Nonsmooth nonconvex-nonconcave minimax optimization: primal-dual balancing and iteration complexity analysis. [2] Zhang X, Aybat N S, Gurbuzbalaban M. 
Sapd+: An accelerated stochastic method for nonconvex-concave minimax problems. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I am still reading other reviews and rebuttals, so I will finalize my decision shortly. In the meantime (as the discussion period is nearing the end), I have a question about the universality of the proposed method. The authors claim that the DSGDA is universal due to its symmetry of extrapolation, unlike the SGDA that extrapolates only one side. I agree with that. However, doesn't the DSGDA still choose the hyperparameters asymmetrically? In the paper, I was not able to find any formal mathematical statement that DSGDA with the chosen asymmetric hyperparameters can converge across NC-NC, NC-C, and C-NC problems. If you need to adjust the hyperparameters manually, then I think the universality claim should be weakened. Let me know what I am misunderstanding here. --- Reply to Comment 1.1.1: Comment: Thank you for your response and follow-up questions. We would like to emphasize and clarify that the symmetric ability to extrapolate offered by DSGDA does not imply a symmetric selection of primal-dual parameters; DSGDA is indeed theoretically ensured to converge across various scenarios (NC-C/C-NC/NC-KL/KL-NC) without the need for manual parameter adjustments. In standard optimization problems, an algorithm is typically considered **universally effective** in ensuring a performance guarantee when there exists a well-defined parameter prescription, based on local problem characteristics and the global Lipschitz constant, that attains the desired performance guarantee across a broad class of problems. In our more complex setting involving min-max optimization, this is what DSGDA achieves. Note that every parameter's choice depends only on Lipschitz constants and (importantly) not on knowledge of the one-sided KL exponent/convexity/concavity. 
Perhaps the referee can provide some insight into why symmetric parameter selection is something that is reasonable to expect, or even desirable, in this setting, especially when the nature of the problem and algorithm here is very non-symmetric in that the roles of the variables are non-exchangeable. We now offer a comprehensive explanation of why DSGDA does not require manual tuning of hyperparameters: all hyperparameters are based only on the Lipschitz constants. To start, we want to emphasize that only two factors contribute to the non-symmetric parameter selection within DSGDA; we discuss the two factors separately. 1. [Theorem 1] The asymmetric parameter selection in DSGDA stems only from which variable we choose to update first (i.e., whether to update $x$ followed by $y$, or vice versa), **which is not related to the inherent asymmetry of NC-C/C-NC/NC-KL/KL-NC at all.** As you can observe from Theorem 1 (basic descent estimate), all hyperparameters are based solely on the Lipschitz constants $L_x, L_y$. We do not rely on any convexity/concavity or one-sided KL exponent. We can conclude that the symmetric ability offered by DSGDA does not imply a symmetric selection of primal-dual parameters. 2. [Theorem 2] Moreover, building upon Theorem 1, we use Proposition 1 (error bound condition) to quantitatively control the negative term in the basic descent estimate (line 216) in Theorem 1 and establish the main theorem (see Theorem 2). The remaining asymmetry arises only from the selection of the parameters $\beta$ and $\mu$, which is attributed to the side on which we apply Proposition 1 (error bound condition). Thus, we can always choose the smallest extrapolation steps between these two sides with KL exponent $\theta =1$, e.g., $\mu$ and $\beta$ of the same order $\mathcal{O}(T^{-\frac{1}{2}})$, to guarantee the convergence of DSGDA. However, it is worth noting that this universality results in a suboptimal rate of $\mathcal{O}(\epsilon^{-4})$. 
In cases where additional information is available regarding the KL exponent $\theta$ (**what we discussed in Theorem 2**), a better choice of $\beta$ or $\mu$ can be selected to achieve a sharper convergence rate of $\mathcal{O}(\epsilon^{-2\max\{2\theta,1\}})$. In summary, we have explained why our parameter selection depends only on the global Lipschitz constants, and how the additional information supplied by the KL exponent yields a sharper convergence rate. We will add this detailed discussion to our updated version. We believe our response addresses your concerns about the universality issue. We'd be happy to take more follow-up questions.
Rebuttal 1: Rebuttal: We appreciate and thank the reviewers for their instructive comments. Below we provide our responses to some common comments and questions. We believe that our responses will effectively address your major concerns, leading to a re-evaluation of our paper's contribution and quality. **Q1: Is there any superiority compared with SGDA?** Yes. One of the key advantages, the universality of DSGDA across NC-NC, NC-C, and C-NC problems, might have been overlooked by the reviewers. In practice, verifying the convexity/KL property of the primal or the concavity/KL property of the dual is a considerably challenging task. Our proposed DSGDA can be applied without knowing this prior information, owing to its inherent symmetry. In contrast, prior to implementing SGDA, we have to choose the side on which we would like to employ the extrapolation; if we choose the wrong side, it will result in slow convergence or even divergence. To validate this, we conduct a new experiment on the KL-NC problem $\min_{x\in \mathcal{X}} \max_{y \in \mathcal{Y}} f(x,y)=2x^2 - y^2 + 4xy^6 + 4y^3/3-y^4/4$ with $\mathcal{X}=\mathcal{Y}=\\{z:-1\leq z\leq 1\\}$. With the wrong smoothing side, SGDA (with primal smoothing) exhibits slower convergence compared with dual smoothing; see Figure (g) in the **PDF**. We kindly request all reviewers to re-evaluate the universal applicability of DSGDA. We will provide more details on this universality in our updated version. On the theoretical side, the convergence rate matches the results of SGDA under the same conditions. On the practical side, DSGDA has better empirical performance than SGDA and stands out as the only algorithm capable of overcoming the limit cycle phenomenon in all four challenging NC-NC examples (see Section 2). Moreover, DSGDA is more robust in parameter selection (see Section 5). **Q2: Explanation of Theorem 1: Could the authors make additional comments on the conditions of Theorem 1?** Thank you for bringing up this matter. 
We would like to emphasize that we only offer a single set of workable upper and lower bounds for all hyperparameters. However, there are numerous alternative selections, which is essentially not a big deal. The principle behind selecting those hyperparameters is to ensure the sufficient decrease property of the novel Lyapunov function; see Theorem 1. In practice, as we have shown, the range of viable parameter values can be quite large. **Q3: There remains a gap between theory and practice.** Regarding the gap between the practical claim and the theoretical claim, it is important to emphasize that achieving a game stationary point through first-order oracles for smooth constrained separable NC-NC optimization problems (where the constraint sets of the primal and dual variables are independent) in polynomial time has remained an open question for an extended period. Recently, the pioneering work [1] demonstrated that finding a game stationary point in smooth constrained **non-separable** NC-NC optimization problems is already a PPAD-complete problem. We believe our paper marks an initial stride towards bridging this gap and aims to encourage further engagement from researchers in tackling this issue. [1] Daskalakis C, Skoulakis S, Zampetakis M. The complexity of constrained min-max optimization. Pdf: /pdf/315c011a99b561defae12ec0ce90902e889628a7.pdf
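As a concrete illustration of the double smoothing idea discussed in these responses, below is a hypothetical sketch of a doubly smoothed projected GDA on the KL-NC test function from the global response. This is our own reconstruction under stated assumptions, not the paper's exact scheme: the update order and the proximal-center extrapolation rule are assumptions, and the hyperparameter names `r1`, `r2`, `alpha`, `c`, `beta`, `mu` simply follow the review's notation.

```python
import numpy as np

# Hypothetical sketch (our reconstruction, NOT the paper's exact scheme) of a
# doubly smoothed projected GDA on the KL-NC test function
#   f(x, y) = 2x^2 - y^2 + 4x*y^6 + 4y^3/3 - y^4/4,  X = Y = [-1, 1].
# z and v are Moreau-Yosida proximal centers for the primal and dual variables.

def grad_x(x, y):
    # df/dx of the test function
    return 4.0 * x + 4.0 * y ** 6

def grad_y(x, y):
    # df/dy of the test function
    return -2.0 * y + 24.0 * x * y ** 5 + 4.0 * y ** 2 - y ** 3

def dsgda(x, y, r1=2.0, r2=2.0, alpha=0.05, c=0.05, beta=0.1, mu=0.1, T=500):
    proj = lambda t: np.clip(t, -1.0, 1.0)   # projection onto the box constraint
    z, v = x, y                              # proximal centers
    for _ in range(T):
        # descend the primal-smoothed objective f + (r1/2)(x - z)^2
        x = proj(x - alpha * (grad_x(x, y) + r1 * (x - z)))
        # ascend the dual-smoothed objective f - (r2/2)(y - v)^2
        y = proj(y + c * (grad_y(x, y) - r2 * (y - v)))
        z = z + beta * (x - z)               # extrapolate primal center
        v = v + mu * (y - v)                 # extrapolate dual center
    return x, y

x_T, y_T = dsgda(0.9, 0.9)
```

The symmetry of the scheme (smoothing plus extrapolation on both sides) is what the rebuttal's universality argument rests on; the particular step-size values above are illustrative only.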
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Communication-Efficient Federated Bilevel Optimization with Global and Local Lower Level Problems
Accept (poster)
Summary: The paper studies federated bilevel optimization problems in which the upper level losses are nonconvex and the lower level losses are strongly convex. Two algorithms, named FedBiOAcc and FedBiOAcc-Local, are proposed, depending on whether or not the lower level problems require consensus. The authors provide both communication and computation complexity bounds for the proposed algorithms. Two real-world tasks are adopted to validate the theoretical results. Strengths: The authors study bilevel optimization problems under the federated learning framework and provide sufficient research motivation and literature review. They propose recovering the hyper-gradient via solving a federated optimization problem, which is novel. Weaknesses: There have been many existing works on federated/distributed bilevel optimization, so the contribution of this paper might not be significant. Also, the numerical comparison needs to be fairer. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * It is a good idea to estimate the hyper-gradient via solving the federated quadratic problem. * However, during the updating iterations, both $x_t$ and $y_t$ will be changing, and thus so will $l$. This leaves me concerned about whether $u_t$ would converge. * Also, how would the initialization of $u_t$ affect the algorithm's convergence? * In line 161, would $u^*$ depend on $t$? * BTW, could you remind me of the meaning of the subscript $2$ in $\nabla l^{(m)}$ in line 149? It looks like the definition is missing. * The paper leveraged the STORM type of variance reduction to accelerate the convergence. However, in the original STORM [1], as well as its variant SUSTAIN [2], the initial batch size is a fixed number. Why does it need an $O(M^{-1}\epsilon^{-0.5})$ initial batch size? * The numerical studies look somewhat unfair to me. * In terms of communication, most of the other algorithms only transmit two vectors $x_t$ and $y_t$, but FedBiOAcc needs three vectors $x_t$, $y_t$ and $u_t$. 
Instead of looking at communication rounds, would the communicated bytes be a fairer metric to measure the communication load? * The authors also didn't report the computation cost. Rather than the simple convergence rate (which omits lots of key factors), wouldn't the computation time / number of gradient/Hessian computations be a better metric? * BTW, what is FedBiO? It doesn't have any description. What is the difference between FedBiO and FedBiOAcc? Some minor questions: * Why doesn't the paper also include local-upper-shared-lower level problems? * Would the results or algorithms extend to the case with a strongly convex / convex upper level problem? [1] Momentum-based variance reduction in non-convex sgd [2] A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors didn't mention any limitations or potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for spending time reviewing our manuscript. Below are our responses to your concerns and questions: Response to Weakness: Our main contributions are threefold: Firstly, we view the estimation of the hyper-gradient as solving a federated quadratic problem and can therefore obtain an unbiased estimate of the hyper-gradient in a communication-efficient way; Secondly, by combining this with momentum-based variance reduction, we obtain the optimal convergence rate (both the sample complexity and the communication complexity) and achieve linear speed-up w.r.t. the number of clients. The analysis is non-trivial, requiring a careful trade-off among various types of estimation errors. Finally, we also consider the special case of local lower level problems for the first time; this type of federated bilevel problem has wide application in the personalized FL setting. Response to Questions: **Q1**: The estimation errors of $x_t$, $y_t$ and $u_t$ are intertwined with each other (in particular, please refer to Lemma C.6 for a bound on the estimation error of $u_t$), and each of them is chasing a moving target. However, as shown by Theorem 3.6, $x_t$ converges to a stationary point of $h(x)$; then, naturally, both $y_t$ and $u_t$ also converge as $x_t$ stops updating. The initial value of $u_t$ affects the convergence, and it appears in the constant factor of the bound in Theorem 3.6. Please refer to Line 728 in the manuscript for its explicit dependence. $u^*$ depends on $t$, as it is a function of the current state of the upper level variable $x_t$, as defined in Eq. 4. In line 149, we use the subscripts to differentiate the samples that are used to estimate different quantities. For example, in line 149, $\mathcal{B}_{g,2}$ represents the samples used for the estimation of the Hessian-vector product. **Q2**: The large initial batch size is needed to mitigate the effect of the client-drift error, which is generated when clients perform local updates. 
Note that the STEM [1] method, which applies STORM to the single-level FL problem, also requires a large initial batch size to reach the optimal convergence rate (see Corollary 1 in [1]). **Q3**: The metric of communication rounds is widely used in Federated Optimization [1] and Federated Bilevel Optimization [2,3] works for the comparison of different methods. In particular, the state-of-the-art Federated Bilevel Optimization method FedNest [2] also uses communication rounds for comparison, and we follow the same experimental setting as their work. In fact, the communication-round metric is consistent with the theoretical analysis, where we hide the constant factors. We use FedBiO to refer to the algorithm without momentum-based variance reduction; its update rule is shown in Eq. 6 of the manuscript, where we update the lower variable, upper variable and hyper-gradient estimation variable alternately. **Minor points**: The case of a local upper level problem is also interesting, and our FedBiO/FedBiOAcc can be extended to this case by not averaging the upper level variable; we leave a more detailed discussion of this case as future work. We can follow the same analysis framework to analyze the simpler strongly-convex/convex upper level problem case, and we also leave this as future work. References [1] Khanduri, Prashant, et al. "Stem: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning." Advances in Neural Information Processing Systems 34 (2021): 6050-6061. [2] Tarzanagh, Davoud Ataee, et al. "Fednest: Federated bilevel, minimax, and compositional optimization." International Conference on Machine Learning. PMLR, 2022. [3] Yang, Yifan, Peiyao Xiao, and Kaiyi Ji. "SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning." arXiv preprint arXiv:2305.19442 (2023). 
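For reference, the STORM-type estimator discussed in Q2 follows the standard recursion $d_t = \nabla f(x_t;\xi_t) + (1-a)\,(d_{t-1} - \nabla f(x_{t-1};\xi_t))$ from the cited STORM paper. The sketch below is a generic, noiseless illustration (not the FedBiOAcc update itself) of why only the *initial* estimate needs high accuracy: with exact gradients the estimation error contracts by $(1-a)$ per step, so an exact initial estimate stays exact.

```python
import numpy as np

# Generic STORM gradient estimator along an iterate path `xs`:
#   d_t = g(x_t) + (1 - a) * (d_{t-1} - g(x_{t-1})).
# Noiseless case: the error e_t = d_t - g(x_t) satisfies e_t = (1 - a) * e_{t-1},
# so an accurate initial estimate (the large initial batch) is preserved.
def storm_estimate(grad, xs, a=0.5):
    d = grad(xs[0])                       # accurate initial estimate
    for x_prev, x_cur in zip(xs, xs[1:]):
        d = grad(x_cur) + (1.0 - a) * (d - grad(x_prev))
    return d

grad = lambda x: 4.0 * x + 1.0            # exact gradient of the toy objective 2x^2 + x
xs = np.linspace(1.0, 0.0, 20)            # an arbitrary iterate path
d_final = storm_estimate(grad, xs)        # equals grad(xs[-1]) in the noiseless case
```

With stochastic gradients the same recursion keeps the variance small without re-sampling large batches at every step, which is the role it plays in momentum-based variance reduction.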
--- Rebuttal Comment 1.1: Title: Additional Discussion about the communication and computational complexity Comment: In this response, we want to further discuss the **communication and computation cost of our FedBiO/FedBiOAcc compared with other baselines**. First, we want to clarify the **DEFINITION** of one communication round: **performing one round of global update to the upper level variable $x_t$**. (**Communication cost**): **FedAvg** has the lowest communication and computational cost; however, it does not perform any data cleaning and fails to converge under high heterogeneity levels. **FedNest** (similarly for its variants FedMBO, AggITD and CommFedBiO) first performs multiple rounds of inner updates (the FedINN algorithm in FedNest), then evaluates the hyper-gradient based on the Neumann series (the FedIHGP algorithm in FedNest). Note that every round of inner updates requires communication, and evaluating the Neumann series needs to transmit intermediate states (whose dimension is of the order of the model parameters) for multiple rounds. **This means that FedNest and its variants do not transfer only $x_t$ and $y_t$ per communication round.** In comparison, our method transfers $u_t$, $y_t$ and $x_t$ only once. **Local-BSGVR** only transfers the local and upper variables at each communication round; however, it uses the local hyper-gradient to estimate the global hyper-gradient and thus only works under the homogeneity assumption. **In summary**, the communication round is a fair metric for comparing the communication cost; in particular, FedNest and its variants have **HIGHER** per-round communication cost than our method. (**Computation cost**): We compare the computation cost of our method with the FedNest algorithm (similar arguments hold for its variants such as AggITD). 
Theoretically, our FedBiOAcc needs $I = O(M^{-1}\kappa^{5/3}\epsilon^{-0.5})$ gradient queries/Hessian-vector product queries per communication round, while FedNest needs $O(\kappa^4)$. Our FedBiOAcc takes more local steps to obtain the optimal $O(\epsilon^{-1})$ communication rate, which is consistent with the idea of FL that we use more local computation to get a lower communication cost. However, in the experiments, we set $I=5$ (number of local steps) for our FedBiOAcc, and set $T=5$ (number of inner update rounds) and $N=5$ (number of hyper-gradient evaluation rounds) for FedNest, so the overall computation cost is similar for both methods, yet our FedBiOAcc has a faster convergence rate.
Summary: This paper proposes an algorithm named FedBiOAcc that solves the federated bilevel optimization problem. In particular, the hyper-gradient of the bilevel optimization problem is evaluated by solving a reformulated quadratic federated optimization problem. The upper-level problem, the lower-level problem and the hyper-gradient estimation are optimized in an alternating manner. A rigorous convergence analysis of the algorithm is provided. Also, the paper introduces a new problem of bilevel optimization with local lower-level problems and proves the same convergence rate for a FedBiOAcc-Local algorithm adapted to this setting. Strengths: 1. The reformulation of general federated bilevel optimization into an easier-to-solve federated quadratic optimization problem is interesting and sound. 2. The convergence analysis of both algorithms in the heterogeneous case is complex and sound. 3. The paper introduces a new problem setting with local lower-level problems, which could be even more challenging to solve. The algorithm and analysis are adapted to this new setting. 4. The formulations of the data cleaning task and the hyper-representation learning task as federated bilevel optimization problems are reasonable. Weaknesses: 1. The motivation of the proposed problem could be better instantiated with actual needs or real-life examples. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. While bilevel federated optimization is technically challenging, what is the motivation to solve bilevel problems in a federated manner? More concrete motivation and a clear, practical application or usage are needed. Is there a need for distributed computing resources in such problems? Is there a requirement to ensure data privacy in practical bilevel optimization problems? 2. The convergence analysis of the paper claims linear convergence speed-ups with the number of clients $M$. 
Does the convergence always become faster with the number of clients? For example, it is possible that the clients are heterogeneous. Also, does the dataset size of each client affect the convergence? It is hard to believe that, given a fixed total number of samples, distributing them among more clients makes learning easier/faster. 3. [Minor] Some notation used in the paper is not explicitly explained before its first appearance, for example, the $\tau$ and $\mathcal{B}$ in Line 149. Also, what do the subscripts of $\mathcal{B}$ mean? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not find any particular limitations in the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for spending time reviewing our manuscript and providing insightful comments. Below are our responses to your questions: Response to Weakness: Many real-world applications in FL exhibit a nested structure and can therefore be formulated as federated bilevel problems. Firstly, the data cleaning task in the FL setting can be formulated as a bilevel problem, as we discuss in Section 5.1 of the manuscript. Next, various formulations of Personalized FL have a bilevel structure: in our manuscript, we consider the hyper-representation approach in Section 5.2, where all clients jointly learn a backbone network, and each client individually learns a linear classifier with its local data. Furthermore, another form of Personalized FL trains a subnetwork on each client, and meanwhile we need to train a mask for each client to do the selection. Response to Questions: **Q1**: Please refer to our response to the weakness above for some real-world FL applications that are bilevel. **Q2**: The linear speed-up w.r.t. the number of clients is achieved under certain conditions. Firstly, by observing the second term of the convergence bound in Theorem 3.6, we can see that the number of iterations actually scales with the **product of the batch size and the number of clients**, which means that if the total number of samples is fixed, distributing them to more clients might not improve the convergence speed, as the batch size might be decreased. Secondly, the linear speed-up conclusion still holds when heterogeneity exists; however, note that the heterogeneity coefficients appear in the constant factors of the convergence bound in Theorem 3.6. If we write this constant factor explicitly, we have $ \frac{1}{T}\sum_{t = 1}^{T-1} \mathbb{E} \bigg[ \|\nabla h(\bar{x}_t) \|^2 \bigg] = O(\frac{\zeta^2\kappa^{16/3}}{(bMT)^{2/3}})$, where $\zeta$ denotes the heterogeneity coefficient. 
This equation shows that a larger heterogeneity level leads to a worse convergence bound, although increasing the number of clients $M$ still decreases the bound. **Q3**: $\tau$ is the step size/learning rate used to solve the quadratic problem (its value can be found in Theorem 3.6), and $\mathcal{B}$ represents the mini-batch of samples used at each iteration; furthermore, we use the subscripts to differentiate the samples that are used to estimate different quantities. For example, in line 149, $\mathcal{B}_{g,2}$ represents the samples used for the estimation of the Hessian-vector product. Note that, to guarantee convergence, it is essential to use independent samples to estimate the different quantities (gradient, Hessian-vector product, etc.) at each iteration. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the clarifications above. W1 & Q1: While the data cleaning task definitely presents a nested structure, my question is more on the need to perform this task in a federated manner. Are the advantages (e.g., distributed computing, data privacy) really necessary for these tasks (e.g., data cleaning)? If not, the marriage between bilevel optimization and FL could appear forced. --- Reply to Comment 1.1.1: Comment: Thanks for your comments. We want to respond to your concerns about the motivation of Federated Data Cleaning as follows: **Motivation of Federated Data Cleaning.** Recall that we perform Federated Learning because of two major constraints: **insufficient local data** and **privacy concerns**. This motivation also applies to the Federated Data Cleaning task. Consider the following scenario: suppose we have a client A with a large amount of noisy data, and a client B with a well-annotated clean dataset. Since annotation is expensive and takes a lot of time and effort, client B won't share its clean data with A directly. 
However, client B can provide a service based on our Federated Data Cleaning formulation to help A clean its data and meanwhile get some return. Besides the Federated Data Cleaning task, Personalized FL [1,2] is also a very important application of Federated Bilevel Optimization, motivated by the data heterogeneity in FL: since clients have different local data distributions, a personalized model is desired on top of the jointly learned global model. References: [1] Fallah, Alireza, Aryan Mokhtari, and Asuman Ozdaglar. "Personalized federated learning: A meta-learning approach." arXiv preprint arXiv:2002.07948 (2020). [2] Collins, Liam, et al. "Exploiting shared representations for personalized federated learning." International conference on machine learning. PMLR, 2021.
Summary: This paper focuses on the problem of bilevel optimization in a federated learning or FL environment. Bilevel optimization has various FL applications, and a few recent works proposed versions of bilevel optimization schemes for FL. A challenging step in bilevel optimization is the computation of the "hypergradient", and the existing schemes are able to obtain an estimate of the hypergradient, albeit a biased estimate, with a substantial communication overhead (multiple rounds of communication per server-side update). The paper reformulates the hypergradient computation as a quadratic federated optimization problem that can provide an unbiased estimate of the hypergradient, while requiring only a single round of communication for every server-side update. Based on this insight, the paper proposes a federated bilevel optimization algorithm that updates the two variables in the bilevel problem, and the hypergradient estimate alternately on each client, with an intermittent averaging step across all clients. To improve the convergence with a constant batch size, the paper utilizes momentum based variance reduction techniques for all the three variables. The theoretical analysis establishes convergence with an improved iteration complexity compared to existing federated bilevel schemes, with a significantly improved communication complexity. Next, the paper studies an alternate form of a federated bilevel problem where all clients have local lower level variables, with applications in personalized FL. This setup allows for the local hypergradient computation without any need for communication across clients. The paper presents a modification of the previous algorithm for this problem and establishes convergence. The empirical evaluation considers two applications and compares the proposed scheme against various baselines, highlighting the improved communication complexity of the proposed scheme across various problem settings. 
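The summary above describes the hypergradient reformulation only in words; as background, here is a sketch of the standard implicit-differentiation identity and its quadratic restatement (generic notation, not taken verbatim from the paper):

```latex
\[
  h(x) = f\bigl(x, y^*(x)\bigr), \qquad y^*(x) = \arg\min_y g(x, y),
\]
\[
  \nabla h(x) = \nabla_x f(x, y^*) - \nabla^2_{xy} g(x, y^*)\, u^*,
  \qquad
  u^* = \bigl[\nabla^2_{yy} g(x, y^*)\bigr]^{-1} \nabla_y f(x, y^*),
\]
\[
  u^* = \arg\min_u \; \tfrac{1}{2}\, u^\top \nabla^2_{yy} g(x, y^*)\, u
        \;-\; u^\top \nabla_y f(x, y^*).
\]
```

Since $f$ and $g$ are averages of per-client functions $f^{(m)}$ and $g^{(m)}$, the quadratic objective in the last line is itself an average of per-client quadratics, which is the decomposability property the reformulation exploits.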
Strengths: **Critical federated hypergradient estimation as a federated optimization.** A key strength of this paper is a simple (yet practically impactful) reformulation of the hypergradient estimation as a standard quadratic program. A key property that the authors leverage is the fact that the global quadratic objective can be decomposed into per-client quadratic objectives, which is not true of the global hypergradient (which cannot be decomposed into per-client hypergradients). This simple yet powerful insight is then utilized to obtain an estimate which, upon proper solution of the global least-squares problem, is unbiased, and can be efficiently updated alongside the upper and lower level variables in the bilevel problem. While this global least-squares reformulation does facilitate an intuitive communication-efficient algorithm, the paper also proposes a momentum-based variance reduction scheme, and establishes a very favorable convergence rate both in terms of iterations (and thus sample complexity) and communication. The overall algorithm makes the solution of federated bilevel problems significantly more practical. **Favorable convergence rate.** Along with the novel federated hypergradient estimation, this paper is also able to achieve a favorable convergence rate across various dimensions. First, the results provide convergence with a constant batch size, and establish a sample complexity of $O(\epsilon^{-1.5})$, which is very strong for bilevel optimization. Second, the analysis establishes a $(1/M)$-dependence of the sample complexity, indicating that having more parties in the FL setup allows for a smaller per-client sample complexity. Finally, as discussed earlier, the analysis establishes an $O(\epsilon^{-1})$ communication complexity, which matches the rate of the best single-level federated optimization algorithms.
**Bilevel personalized FL.** The paper explicitly studies a form of the federated bilevel optimization where each client has its own lower level problem, and I believe this general problem would encompass various personalized FL problems and applications. While the problem is easier in terms of the computation of the hypergradient, it is still an interesting problem, and the paper establishes convergence again with favorable iteration and communication complexities. **Positioning against existing literature.** In addition to the section on related works, the authors do a great job at discussing their proposed scheme in comparison to existing literature during the algorithm development. For example, after presenting the main idea behind the communication efficient hypergradient computation, the paper positions this work in comparison to FedNest. **Intuitive presentation of the theoretical results.** In addition to establishing favorable theoretical guarantees, the authors also present the theoretical results in an intuitive manner. First, they clearly present the Lyapunov function, highlighting the different terms in it corresponding to the different variable iterates. Then the relevant parts of the Appendix are referenced which makes it easy to verify the results while reading. After the main theorem is presented, the authors also do a good job of highlighting what it implies, and how it compares to other federated algorithms. Weaknesses: **Increased hyperparameter space.** The proposed framework utilizes various hyperparameters such as the initial learning rates $\gamma, \eta, \tau$, the learning rate schedule $\lbrace \alpha_t, t \in [T] \rbrace$, the momentum parameters $c_{\omega / \nu / u}$, the aggregation frequency $I$. As per the theoretical analyses, it can be seen that the best convergence rate of any execution will critically depend on an appropriate setting of these problem-dependent hyperparameters. 
Since these hyperparameters often depend on quantities that cannot be efficiently estimated (such as Lipschitz constants), practical bilevel implementations usually rely on some form of hyperparameter search. Hyperparameter optimization is known to be a hard, unsolved problem in FL because of the overall communication overhead. This makes it hard to see how the proposed federated bilevel framework can live up to its practical potential -- one can view it as having shifted the communication overhead from the model training stage to the hyperparameter optimization stage, without reducing the overall communication necessary for good training convergence (which involves trying various hyperparameters and training with them). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Line 2 in Algorithm 1: Should it be $q_1^{(m)}$, since it is computed using $g^{(m)}$ and $f^{(m)}$? - Lines 176-177: The ordering of the momentum terms and the corresponding variables in the "... respectively" needs to be updated. - In Theorem 1, the results provide a precise dependence on $\kappa$, but I am unable to find the definition of $\kappa$ in the theorem statement or anywhere else in the main paper. Can you please clarify what this $\kappa$ is supposed to signify? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I did not find any discussion of limitations in the main paper or the supplement. However, I do not anticipate any potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your detailed and insightful comments. Response to Weakness: We agree with the reviewer about the increased hyper-parameter space. In fact, the optimal convergence rate of the momentum-based variance reduction technique is achieved by carefully balancing the learning rates and momentum coefficients. Next, since we view the Federated Bilevel Optimization problem as three intertwined federated optimization problems, the number of hyper-parameters is naturally roughly three times that of the non-distributed single-level counterpart. In practice, we can first set the base learning rate $\frac{\delta}{u^{1/3}}$ to be around 0.1, then choose the learning rate coefficients $\gamma$, $\eta$, $\tau$ according to the structure of each problem, and finally choose the momentum coefficients $c_{\omega}$, $c_{\nu}$ and $c_u$ such that the initial momentum is around 0.9/0.99. Response to Questions: **Q1**: Yes, there should be a superscript, i.e., $q_1^{(m)}$; **Q2**: Yes, the order of $\omega^{(m)}_t$ and $\nu^{(m)}_t$ should be adjusted; **Q3**: $\kappa = \frac{L}{\mu}$ denotes the condition number, where $L$ is the smoothness coefficient and $\mu$ is the strong-convexity coefficient of the lower-level problem. --- Rebuttal Comment 1.1: Title: Thank you for the author response Comment: Thanks for the response, for the description of the hyperparameter selection process, and for the definition of $\kappa$. It seems that there will still be some parameters that need to be tuned (the momentum parameters probably don't need to be tuned too much). It would be good to add this discussion in the appendix, with a broader discussion of how one might approach a practical implementation of this algorithm. I will continue to keep my score of 7 for this paper. --- Reply to Comment 1.1.1: Comment: Thanks for your comment.
We will add this discussion of hyper-parameter selection in the appendix; furthermore, we will also add pseudocode to illustrate the practical implementation of our algorithm.
Summary: This paper proposes new algorithms for federated bilevel optimization. The authors apply the idea of [1] to the federated setting, which views the hypergradient computation as solving a quadratic subproblem. Combining this idea with the STORM algorithm, the authors propose FedBiOAcc with a convergence guarantee, and then extend the results to a simpler case with local lower-level problems. Numerical experiments on data cleaning and hyper-representation are reported. [1] Dagréou et al. "A framework for bilevel optimization that enables stochastic and global variance reduction algorithms" Strengths: 1. The proposed algorithms are pure single-loop algorithms, in contrast with other global federated bilevel algorithms. 2. The authors provide convergence guarantees for both proposed algorithms. Weaknesses: 1. One of my primary concerns is that single-loop algorithms in bilevel optimization perform worse than double-loop algorithms because of the hypergradient estimation error introduced by the inexactness of the lower-level solution. This is different from the minimax problem because the hypergradient in bilevel optimization involves Jacobian and Hessian-inverse computations, and is thus more complex. In practice, people use double-loop algorithms to reduce this hypergradient estimation error. 2. The contribution of this paper looks trivial to me; the main ideas come from STORM and [Dagréou et al]. The proposed algorithm is very complex and thus difficult to implement and tune in practice. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: see weakness Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for spending time reviewing our paper and providing comments. Below are our responses to your concerns: **W1**: We agree with the reviewer that bilevel optimization is harder than minimax optimization problems. However, our three-level perspective on bilevel optimization eliminates the difficulty of dealing with the Hessian inverse: we estimate the hyper-gradient by solving a quadratic problem, and then perform alternating updates among the upper-level variable, the lower-level variable, and the hyper-gradient estimation variable. In fact, we theoretically prove the convergence of our single-loop algorithm and empirically verify its efficacy through the Federated Data Cleaning task and the Federated Hyper-representation task. **W2**: Our main contributions are threefold. Firstly, we view the estimation of the hyper-gradient as solving a federated quadratic problem and can therefore obtain an unbiased estimate of the hyper-gradient in a communication-efficient way. Secondly, by combining this with momentum-based variance reduction, we obtain the optimal convergence rate (in both sample complexity and communication complexity) and achieve linear speedup w.r.t. the number of clients. It is non-trivial to perform this analysis, which requires a careful trade-off among various types of estimation errors. Finally, we also consider the special case of local lower-level problems for the first time; this type of federated bilevel problem has wide application in the personalized FL setting. As for the complexity, our algorithm is well-structured, and for each of the three problems (upper level, lower level, and the quadratic problem) we perform the same type of acceleration: momentum-based variance reduction. As a result, our algorithm is straightforward to implement.
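The momentum-based variance reduction invoked in this response is the STORM-style recursive gradient estimator. A minimal single-variable sketch is below; the toy objective, constants, and function names are illustrative and not taken from the paper:

```python
import numpy as np

def storm_step(grad_fn, x, x_prev, d_prev, a, lr, rng):
    """One STORM-style variance-reduced step.

    The same noise sample (standing in for a shared minibatch) is used to
    evaluate the gradient at both the current and the previous iterate;
    this correction term is what reduces the variance of the estimator d.
    """
    xi = rng.standard_normal()  # shared noise sample
    d = grad_fn(x, xi) + (1.0 - a) * (d_prev - grad_fn(x_prev, xi))
    x_new = x - lr * d
    return x_new, x, d

# Toy problem: minimize E[(x - 1 + 0.1*xi)^2], optimum at x = 1.
def noisy_grad(x, xi):
    return 2.0 * (x - 1.0 + 0.1 * xi)

rng = np.random.default_rng(0)
x, x_prev = 5.0, 5.0
d = noisy_grad(x, rng.standard_normal())  # initialize with one sample
for _ in range(500):
    x, x_prev, d = storm_step(noisy_grad, x, x_prev, d, a=0.1, lr=0.05, rng=rng)
```

In the paper's federated setting this update is applied, per the response above, to three estimators at once (upper-level, lower-level, and hyper-gradient variables), with client averaging interleaved.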
Furthermore, in terms of hyper-parameter tuning, we can first set the base learning rate $\frac{\delta}{u^{1/3}}$ to be around 0.1, then choose the learning rate coefficients $\gamma$, $\eta$, $\tau$ according to the structure of each problem, and finally choose the momentum coefficients $c_{\omega}$, $c_{\nu}$ and $c_u$ such that the initial momentum is around 0.9/0.99. --- Rebuttal Comment 1.1: Title: Thanks for your review Comment: Dear reviewer: Thanks for spending time reviewing our paper again! Since the author response deadline is approaching, we want to send this message to check whether you have any further concerns that we can address.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a communication-efficient federated bilevel algorithm, in both the global and local lower-level problem settings, that achieves the best theoretical communication complexity. Extensive numerical experiments are provided to test the effectiveness of the proposed algorithm. Strengths: They provide convergence analysis for the proposed algorithm. The numerical experiments are abundant and solid in verifying the effectiveness of the proposed algorithm. Also, the federated bilevel setting with local lower-level problems is intriguing. Weaknesses: 1. The analysis technique for bilevel methods that treats the Hessian-vector product as the solution of a quadratic problem was also utilized in [R1]-[R5]. I'm afraid this detracts somewhat from the novelty of this paper. Also, some relevant literature on fully single-loop bilevel algorithms like [R1]-[R2] and communication-efficient federated bilevel methods [R4]-[R5] is missing. In particular, [R4] allows for arbitrary heterogeneity without Assumption 3.5, and [R5] achieves the same communication complexity as this work; could you elaborate on the pros and cons of your work versus [R4]-[R5]? [R1] Junyi Li, Bin Gu, and Heng Huang. A fully single loop algorithm for bilevel optimization without hessian inverse. In Proceedings of the AAAI Conference on Artificial Intelligence, 2022. [R2] Michael Arbel and Julien Mairal. Amortized implicit differentiation for stochastic bilevel optimization. arXiv preprint arXiv:2111.14580, 2021. [R3] Mathieu Dagreou, Pierre Ablin, Samuel Vaiter, and Thomas Moreau. A framework for bilevel optimization that enables stochastic and global variance reduction algorithms. In Proceedings of the Advances in Neural Information Processing Systems, 2022. [R4] Quan Xiao, Han Shen, Wotao Yin, and Tianyi Chen. "Alternating Implicit Projected SGD and Its Efficient Variants for Equality-constrained Bilevel Optimization." arXiv preprint arXiv:2211.07096, 2022. [R5] Yifan Yang, Peiyao Xiao, and Kaiyi Ji.
"SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning." arXiv preprint arXiv:2305.19442, 2023. 2. The role of heterogeneity is not expressly delineated in Theorem 3.6. It would be beneficial to see a how heterogeneity influences the convergence rate. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The three-level optimization perspective appears to be applicable to the local lower-level bilevel problem as well. Is there a specific reason for selecting the Neumann series approximation for the local lower-level problem, apart from the warm start strategy? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments. Below are our responses to your questions and concerns: Response to weakness: **W1**: Although the idea of viewing hyper-gradient estimation as solving a quadratic problem has been investigated in several papers, these papers only consider the non-distributed setting, and we are the first to apply this idea to the federated setting. In fact, this idea is even more favorable in the FL setting than in the non-distributed setting. Since the global quadratic problem is a linear combination of the client-wise quadratic problems, we can apply the standard FedAvg algorithm to obtain an unbiased estimate of the true hyper-gradient in a communication-efficient way; furthermore, its optimization can also be efficiently integrated with the updates of the upper and lower level variables to yield a three-level federated optimization problem. Furthermore, we achieve the optimal convergence rate by combining our three-level federated optimization formulation with the momentum-based variance reduction technique. **Comparison with [4]**: This work adopts a similar strategy to FedNest, where the lower-level variable is first updated for multiple steps and then the upper-level variable is updated for multiple steps; in contrast, our FedBiOAcc algorithm performs simple alternating updates among the lower-level variable, the upper-level variable, and the hyper-gradient estimation variable. Furthermore, [4] adopts the recently proposed ProxSkip for the update of the lower-level variable and thus does not need the heterogeneity assumption; however, the best algorithm E2-AiPOD in [4] only achieves a sub-optimal communication complexity of $O(\epsilon^{-1.5})$, while our FedBiOAcc achieves the optimal $O(\epsilon^{-1})$ rate.
**Comparison with [5]**: Similar to our algorithm, SimFBO also considers a three-level optimization problem; however, it only achieves a sample complexity of $O(\epsilon^{-2})$, which is sub-optimal, while our FedBiOAcc achieves the optimal $O(\epsilon^{-1.5})$ sample complexity. Finally, the convergence analysis in both [4] and [5] does not include the dependence on the condition number $\kappa$, which is an important factor affecting the performance of the algorithm in practice, and they do not consider the special case of local lower-level problems, which can incorporate various Personalized FL formulations. **W2**: The terms related to the heterogeneity coefficients are part of the constant factors absorbed by the big-O notation. Please refer to Lines 715 and 728 in the manuscript for the precise form of the dependence; approximately, we have $\frac{1}{T}\sum_{t = 1}^{T-1} \mathbb{E} \big[ \|\nabla h(\bar{x}_t) \|^2 \big] = O(\zeta^2)$, where $\zeta$ denotes the maximum of the heterogeneity coefficients defined in Assumption 3.5. **Q1**: For the case of the local lower-level bilevel problem, the local hyper-gradient is an unbiased estimate of the global hyper-gradient; therefore, we don't need to solve an extra quadratic federated optimization problem to estimate the global hyper-gradient as in the case of the global lower-level problem. To estimate the local hyper-gradient, we can either use the Neumann series or solve a quadratic problem, and neither needs extra communication. We choose the Neumann series approach for the following reasons: the Neumann series does not need to keep an extra variable $u$ as the quadratic approach does; furthermore, the Neumann series approach leads to a good approximation of the hyper-gradient with only a small number of Hessian-vector products in practice, so the quadratic approach does not significantly outperform the Neumann series approach in computational efficiency. References [1]. Quan Xiao, Han Shen, Wotao Yin, and Tianyi Chen.
"Alternating Implicit Projected SGD and Its Efficient Variants for Equality-constrained Bilevel Optimization." arXiv preprint arXiv:2211.07096, 2022. [2]. Yifan Yang, Peiyao Xiao, and Kaiyi Ji. "SimFBO: Towards Simple, Flexible and Communication-efficient Federated Bilevel Learning." arXiv preprint arXiv:2305.19442, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for your careful response and I decided to keep my score. --- Reply to Comment 1.1.1: Comment: Thanks for your comment! For a more detailed discussion about our motivation, contribution and challenges of our method, please check our most recent response to Reviewer Cbyn. --- Rebuttal 2: Comment: Hi Reviewer rgL1, The author-reviewer discussion period will end very soon. Could you please respond to the author's rebuttals? At the minimum, please reply by acknowledging that you have read them. Thanks, AC
Online Nonstochastic Model-Free Reinforcement Learning
Accept (poster)
Summary: This paper introduces an extension to model-free RL that has its roots in non-stochastic control. The main idea is to adjust the nominal control of a base policy (from, e.g., an RL algorithm) by a control signal that can be computed from estimates of the process noise. As a main contribution, the authors present three distinct methods to compute those noise estimates. Strengths: - Control theory provides us with many ideas that can be exploited and possibly transferred to model-free RL algorithms. Thus, I think it is important to think about how such ideas can be utilized and adapted to make RL algorithms more reliable, sample-efficient, etc. - The ideas are formally justified and the derivations appear to be mostly correct. For LDS, the authors provide convergence proofs of their method. - The paper is well structured and well written. The contributions are clearly set out and the structure of the paper allows the reader to follow along easily. - The authors provide example code for some of their experiments. Weaknesses: - The results suggest that PD1 produces similar results to the baseline and that PD2 was only able to really outperform its baseline on one task (noisy hopper). The only PD-estimator that consistently beat its baseline was PD3, which uses a simulator to compute the disturbance signal. The question arises whether this is still a model-free algorithm, since we would need this model also during inference. The paper would benefit from more evidence that at least PD2 provides stronger performance than the baseline. Furthermore, it would be interesting to see if you could use a learned model to compute the disturbance and how this would affect performance. - In the definition of PD(1), in Line 169, the term on the right-hand side is called the "gradient of a TD error." I do not think that this is an accurate statement.
Recall that the TD error is defined as $$c(x_t, u_t) + \gamma V_\pi(x_{t+1}) - V_\pi(x_t).$$ From the definitions of $Q$, $V$ and $\mathbf{\hat w_t}$, we can derive $$ \begin{align*} &\gamma V_\pi(f(x_t, u) + w_t) - (Q_\pi(x_t, u) - c(x_t, u)) \\ &=\gamma V_\pi(\hat x_{t+1}) - (\underbrace{c(x_t, u) + \gamma \mathbb E_{x_{t+1}}[V_\pi(x_{t+1})]}_{=Q(x_t, u)} - c(x_t, u)) \\ &= \gamma (V_\pi(\hat x_{t+1}) - \mathbb E_{x_{t+1}}[V_\pi(x_{t+1})]), \end{align*} $$ which I would not call a TD error (it does not measure the consistency of the value function across timesteps). It appears to me to be more of a measure of the difference between the expected value of the next state and the value of the predicted next state. - In the evaluation, the plots are somewhat hard to read. I would recommend increasing the font size of the axes/legends, especially in Fig. 3. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - TD-Error: (See above) - Why does PD2 not improve the performance in noisy walker and only marginally in noisy ant? - Line 173: Here the authors state that the gradient of $V_\pi$ can be efficiently estimated online. This needs more explanation in my opinion, since it is not clear how this is done and why we do not have this gradient term in equations (1) and (2). - Code: You provided code for the LDS experiments. Why not also for the robotics environments? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors did not discuss the limitations of their approach.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful review. We address the main points below: Weaknesses: Our contribution is mostly conceptual and theoretical, so we focused on methods with provable guarantees. Using a learned model to compute disturbances could be interesting, but we don't see how to develop theory based upon such a learned model at this time, especially if it is nonlinear. PD3 indeed performs best among our methods, but having a simulator is common in many applications, thus we think our method can be potentially practical. Sorry for the confusion regarding PD1: what we mean by "TD error" is the update/error in expected SARSA (Sutton and Barto, 2018), which derives from the TD error. We recognize that typically the TD error is defined on state value functions (instead of state-action ones). We will clarify this in the final version. Briefly, our "TD error" is designed so that it is zero in expectation in the absence of disturbances; thus deviations of it (on average) signal the presence of unmodeled disturbances. A central result we point out is that the expected value of its gradients can be used to recover the disturbances for linear systems. We are happy to increase font sizes as you suggest. Questions: PD2: We are not sure why PD2 does not improve the performance of noisy walker and only marginally improves noisy ant. This is an interesting question for future investigation. Gradient formula: We can explain this in more depth in the paper. The gradient term is not in (1) and (2) because we use sampling to estimate this gradient. We are using a single-point zeroth-order gradient estimator (see equation (1) in https://arxiv.org/pdf/2006.06224.pdf). In our case, with quadratic value functions there is no smoothing error, as can be seen in the proof in Appendix D.1 of the supplement.
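For context, the single-point zeroth-order estimator mentioned in this rebuttal has the generic form sketched below (an illustrative toy, not the paper's implementation). In expectation it equals the gradient of the sphere-smoothed function $f_\delta(x) = \mathbb{E}[f(x + \delta v)]$; for a quadratic $f$, smoothing only shifts $f$ by a constant, so the estimator is unbiased for the true gradient, consistent with the authors' no-smoothing-error claim:

```python
import numpy as np

def single_point_grad_estimate(f, x, delta, rng):
    """Single-point zeroth-order gradient estimator:
        g_hat = (d / delta) * f(x + delta * u) * u,
    with u drawn uniformly from the unit sphere. Only one function
    evaluation is needed per estimate."""
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the sphere
    return (d / delta) * f(x + delta * u) * u

# Toy check: average many estimates of the gradient of f(x) = ||x||^2.
rng = np.random.default_rng(0)
x0 = np.array([1.0, -2.0, 0.5])
f = lambda z: float(z @ z)
est = np.mean([single_point_grad_estimate(f, x0, 0.1, rng)
               for _ in range(200000)], axis=0)
# analytically, the true gradient is 2 * x0
```

The estimator is unbiased here but high-variance, which is why many samples (or an online averaging scheme, as the rebuttal suggests) are needed in practice.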
Code: We could not release the code for the larger-scale experiments because these were implemented and tested within a proprietary benchmarking suite and there are restrictions on release. We plan to reimplement the experiments in open source and release them. More importantly, we are very happy to release the code for the algorithm itself very soon. Limitations: We disagree that we did not discuss the limitations of our methods. We discuss the limitations of each signal in Section 3.4, as is attested by Reviewer MnP5. We agree that the theory being restricted to linear dynamical systems is a limitation, though we never claim provable guarantees beyond this setting. Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to concerns raised during my review. Furthermore, I want to apologize for overlooking the section where the limitations were discussed. Most of my concerns were addressed. However, my main point of criticism (weak performance of PD1&2, access to a simulator for PD3) remains. > ... having a simulator is common in many applications, thus we think our method can be potentially practical. Yes, that is true. But isn't (model-free) RL a technique that is mainly applied when we do not have a model? Access to a model would allow the use of planning algorithms, which do not suffer from typical model-free RL problems. I think that including PD3 in the paper is still insightful for comparison to PD1 and PD2. However, I would like to see more evidence that the general approach and the truly model-free disturbance signals are indeed more robust. --- Reply to Comment 1.1.1: Comment: Thank you for the quick response. We are glad we were able to address many of your questions. We would like to highlight three points in relation to concerns regarding the generality of our approach: 1.
We had included (at the end of the rebuttal period) new figures in a PDF file on a common thread demonstrating faster training on sinusoidal and adversarial sign-gradient disturbances for the pendulum environment using PD2. While we were limited in our ability to do massive experiments during this short rebuttal period, we hope that this adds to the body of evidence that underscores the generality of our approach. 2. While PD2 does not perform as well as PD3, it is fairly domain-agnostic and does improve performance in settings of interest. For example, in all our experiments, we use the coordinates of the state for each auxiliary reward. This suggests simple, domain-agnostic auxiliary reward signals can deliver significant utility for PD2. 3. We agree that PD3 is limited in that it requires access to a potentially inaccurate simulator; inaccurate in that it doesn't capture the perturbations present in the real world. Even with access to an exact simulator, model-free methods are also used for planning (which one can query at arbitrary states, see e.g., [Guez et al.](http://proceedings.mlr.press/v97/guez19a/guez19a.pdf)), due to standard planning algorithms like iLQR, RRT, and analytical policy gradient (a.k.a. direct backprop) exhibiting poor performance in the presence of contact forces or high dimensionality, respectively. Similarly, in our setting, where we permit rollouts in the real world to deviate from the given simulator, this suggests that model-free methods (like DDPG) might offer better performance than classical planning approaches. In this context, our main contribution is developing model-free algorithms whose behavior degrades gracefully the larger the gap between the real world and the simulator. Finally, we hope that the points above address some of the concerns from the reviewer.
Summary: This work introduces the notion of disturbance-based policies for model-free reinforcement learning, as opposed to traditional state-based policies. The disturbances capture unmodeled deviations from observed dynamics. In the model-free setting, since these disturbances are not known to the learner, the paper proposes three signals which can be used as pseudo-disturbances. These include the gradient of the TD error, the difference between auxiliary value functions of consecutive states, and the difference between the observed state and a simulated state. These three signals each require different assumptions and have their own advantages and disadvantages. They can recover the true perturbations up to linear transformations, assuming time-invariant linear dynamical systems and a linear policy. A new algorithm, MF-GPC, is introduced which can adapt existing RL methods to this framework. Notably, this approach has a sublinear regret bound under certain assumptions. Experiments on noisy versions of OpenAI Gym environments demonstrate the effectiveness of the proposed method. Strengths: The paper presents a novel paradigm for model-free reinforcement learning based on unmodeled disturbances. There is an existing line of work on disturbance-based techniques which employ model-based control; however, this paper is the first work to extend this approach to the model-free setting, to the best of my knowledge. The use of various signals as pseudo-disturbances is an original idea and the three proposed variants seem sound. The mathematical guarantees that these pseudo-disturbances are linear transformations of the true disturbance for linear systems, as well as the regret bound for MF-GPC, are valuable contributions and strengthen the quality of the paper. The concepts introduced in the paper are presented with sufficient clarity. Weaknesses: The practical applicability of the proposed framework raises some concerns.
Among the three proposed pseudo-disturbances, PD3 requires an accurate simulator, which is in most cases not available. PD2 can be applied to specific systems where there are additional signals available from the environment. PD1 seems to be the one most generally applicable to typical RL environments and dynamical systems; however, it is unreliable and the empirical results demonstrate the ineffectiveness of this signal. See questions for detailed comments. Another consideration is that the analysis considers linear dynamical systems, and the regret bounds and the guarantees require further assumptions. While such constraints are typical when performing rigorous analysis, it does raise the question of how generally applicable this framework is to various other types of systems. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. For PD1, both the state-value and the action-value function are learned estimates. If so, calculating the pseudo-disturbances based on the error between two functions which are being learned online can be unreliable, especially in the early stages of training. This is acknowledged in the paper and the results do not offer much improvement. This is concerning because among the three proposed variants, PD1 seems to be the most easily applicable to general RL setups. 2. For PD2, the experiments section mentions that the first few units of the last critic layer are used as V and Q. The reason for this choice is not clear to me; these units would not, in general, correspond to any cumulant function. An explanation from the authors would be helpful. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The paper does not address the limitations of the proposed framework. As mentioned in my review, the main limitation seems to be the usefulness of the pseudo-disturbances to generic environments and systems. In addition, the analysis is limited to linear dynamical systems, and it is not clear how this framework extends to other types of systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful review. We address the main points below: We note that PD2 is actually quite general, and does not require specific domain-engineered signals from the environment. For example, in all our experiments, we use the coordinates of the state for each auxiliary reward. This suggests simple, domain-agnostic auxiliary reward signals can deliver significant utility for PD2. We acknowledge that PD3 generally requires a simulator, though it fits in the framework and can be useful when a simulator is available. We acknowledge PD1 did not perform well, despite this being our initial idea and the nicest one in theory. We suspect this is due to the PD1 estimator's high variance. We leave improving this via variance reduction to future work. Regret minimization in general online MDPs with adversarial dynamics is known to be computationally hard (Yadkori et al., 2013), so restricting the setting is necessary to make progress. We will add discussion of this in the paper. We believe our setting is still meaningful, and sheds light on general principles that apply to nonlinear settings. Questions: 1. Good question! While it’s true the Q,V functions are learned estimates, as long as the Bellman (backup) equation holds, this works. Generally, one can increase the number of gradient-based update steps for the critic per episode to make sure this error is small. We will add a short discussion of this to the paper. 2. We were a bit imprecise when describing this and will clarify in the paper. We use the same architecture for critic networks but with a wider final layer (since the output is no longer a scalar) to produce values for each auxiliary reward. We are not reusing the representations of the original critic. Limitations: We disagree that we did not discuss limitations of our methods. We discuss limitations of each signal in Section 3.4, as attested by Reviewer MnP5. 
We agree that theory being restricted to linear dynamical systems is a limitation, though we never claim provable guarantees beyond this setting. [1] Yasin Abbasi Yadkori, Peter L Bartlett, Varun Kanade, Yevgeny Seldin, and Csaba Szepesvári. Online learning in Markov decision processes with adversarially chosen transition probability distributions. Advances in Neural Information Processing Systems, 26, 2013. --- Rebuttal Comment 1.1: Comment: We thank the reviewer again for their time. In our rebuttal, we have (a) substantiated our claim of PD2’s generality, (b) listed the fundamental difficulty in extending any of our results to the non-linear setting, and (c) addressed the questions from the reviewer (e.g., regarding Q/V functions that are learned online). Are there any other concerns that we can address before the discussion period ends?
Summary: This work considers model-free RL in the setting with additive disturbances in the environments' forward dynamics. In particular, it focuses on "disturbance-based policy" which adds a correction policy to the vanilla state-based policy. Because the disturbances are unknown in model-free RL, the correction policy instead conditions on "pseudo-disturbance" which are proxies of the actual disturbance. The paper proposes and analyzes 3 pseudo-disturbances that are feasible to compute in a model-free RL setting. The main algorithm trains the disturbance-based policy in a typical model-free RL training loop and is independent of the base RL algorithm. The authors provide theoretic guarantees of the method under linear settings. Empirical results show that the algorithm brings substantial improvement in a collection of noisy control tasks. Strengths: 1. The problem setting is well-motivated and clearly formulated. 2. New concepts are nicely explained. 3. The different choices of pseudo-disturbance make sense intuitively and are well supported for their properties in the linear case. 4. Empirical results show significant benefits from the algorithm. Weaknesses: 1. The authors mention potential adversarial disturbance but empirical results are shown in environments with uniform additive noise. The paper would be stronger with experiments containing adversarial disturbances. 2. The gym experiments are done with only one baseline method. Max-entropy RL algorithms like Soft Actor-critic might be more robust to noises in the environment. I would be more convinced if the method is compared with a few more state-of-the-art methods. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: This links back to weakness #2. I appreciate the fact that disturbance-based policies provide theoretic guarantees in linear settings. Does this formulation provide other benefits? How does the proposed method compare to other model-free RL methods? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors cover the limitation of each pseudo-disturbance. They are also upfront about PD1 not performing well in RL. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and thoughtful review. We address the main points below: Weaknesses: We acknowledge that the paper would be stronger with non-uniform disturbances. We will add detailed experiments for sinusoidal disturbances and adversarial sign gradient disturbances. Benchmarking against other more state-of-the-art methods like SAC would be useful, and we will add this. We note that our algorithm is an augmentation or modification on top of a base algorithm, so we can apply our method on top of other baselines (SAC included) as well. We wanted to show an improvement on the base algorithm with our experiments, so this is a proof of concept for now. We have not completed this for the rebuttal because we did not have the time to run the appropriate hyperparameter sweeps, which are needed even for new baselines in our modified noisy environments. Questions: Beyond our theory for linear dynamical systems, we note that, crucially, MF-GPC policies are non-Markovian (they take more than the immediate state into account), which could potentially help in non-Markovian environments. This point is also highlighted by our theory for linear systems. In contrast, most RL algorithms are designed for Markovian settings. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns and the new experiment results in particular. I agree that using some sign gradient attack in experiments provides a stronger narrative. PD3 could be useful in settings where people try to train RL agents on real robot hardware directly. In most cases, they have access to some imperfect simulator (imperfect because it is hard to recreate the exact task scenes in simulation, but the dynamics model of the robot itself is available). Experiments along this line might provide better justifications for this work. --- Reply to Comment 1.1.1: Comment: We are glad our additional experiment results were helpful to you. 
Indeed, we agree that PD3 has great potential in robotics settings involving an imperfect simulator, as you described. Our principal contribution here is the pseudo-disturbance framework and the principled design of pseudo-disturbance signals, backed by theoretical guarantees. Our experimental evaluations, in our view, serve as proof-of-concept exercises to demonstrate the potential for its real-world use; hence, we did not intend them to be exhaustive.
Rebuttal 1: Rebuttal: **Please see attached pdf file.** Dear reviewers, we thank you for your time and effort in reviewing our paper! The main weakness jointly pointed out by the reviewers is the lack of experimentation with more varied noise patterns, as opposed to stochastic noise. We attach below in pdf format two experiments that show faster training for PD2 with: 1. sinusoidal disturbances 2. adversarial sign gradient attack disturbances (similar to [this](https://arxiv.org/pdf/1412.6572.pdf)) We will expand on these at the earliest chance of revision. More detailed responses are addressed below to each reviewer separately. Thanks again, we hope this will alleviate your concerns. Pdf: /pdf/258baa6adf3cd6e658369167c97432a35c47d7a0.pdf
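For concreteness, the two added disturbance patterns mentioned in this global rebuttal can be sketched as follows (amplitudes, periods, and function names are our illustrative assumptions, not the paper's exact experimental settings):

```python
import numpy as np

def sinusoidal_disturbance(t, dim, amplitude=0.5, period=50.0):
    """Time-varying sinusoidal disturbance added to the dynamics."""
    return amplitude * np.sin(2.0 * np.pi * t / period) * np.ones(dim)

def sign_gradient_disturbance(grad_value_wrt_state, eps=0.1):
    """FGSM-style adversarial disturbance (cf. Goodfellow et al.):
    push the state in the direction that decreases the agent's value."""
    return -eps * np.sign(grad_value_wrt_state)

print(sinusoidal_disturbance(12.5, 3))                        # peak of the sine
print(sign_gradient_disturbance(np.array([0.3, -1.2, 0.0])))
```

The sign-gradient variant is adversarial in the FGSM sense: each coordinate of the perturbation is a fixed-magnitude push against the gradient of the learned value with respect to the state.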
NeurIPS_2023_submissions_huggingface
2023
Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model
Accept (poster)
Summary: This work considers the classical statistical risk minimization framework with stochastic gradients and proposes new lower bounds on the optimal time complexity. It also proposes a new method, Rennala SGD, that attains the lower bound. This work makes the case for asynchronous methods. In particular, this work proposes a new oracle-based lower bound on the optimal convergence rate in time, rather than iterations, and develops an algorithm (essentially straggler-based SGD) that achieves this bound in the shared-system/parameter-server setting. Strengths: **Significance** This work takes a step towards developing more practical asynchronous optimization methods, and providing convergence rates and complexity bounds that more closely reflect practical considerations of convergence time, rather than iterations. **Originality** The proposed method is not entirely novel (e.g., see Dutta et al., 2018 or related works on straggler-based SGD methods that ignore computation from stragglers); however, the new lower bounds, problem formulation, and analysis are of sufficient novelty, to the best of my knowledge. **Clarity** The work is well written and easy to follow. Weaknesses: Lacking empirical evaluations, the only such experiments are on quadratics, buried deep inside the appendix. While these plots are compelling, I would encourage the authors to consider small-scale numerical results on other machine learning workloads. Another weakness of this work is assuming a fixed computation time for each worker. Such models are highly restrictive and impractical. I would encourage the authors to consider ways for extending their framework to arbitrary, but bounded worker delays. I would appreciate a discussion on how such lower bounds (or the optimal method) may change if these bounds were overly conservative and long-tailed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please see my questions provided in-line above, in the weaknesses section. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations of the work are not adequately addressed. There are for instance, many assumptions that are taken for granted in the analysis that may limit the applicability of the findings to practical settings... namely the fixed delay model per worker, and the relationship between the batch-size S and the number of workers n. I am curious how findings would change in settings where S >> n or S << n. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the interesting comments! > Lacking empirical evaluations, the only such experiments are on quadratics, buried deep inside the appendix. While these plots are compelling, I would encourage the authors to consider small-scale numerical results on other machine learning workloads. Our work is theoretical. Thus we seek to understand the theoretical properties of asynchronous methods and their limits. One can see that even on simple quadratic optimization tasks, Asynchronous SGD can have bad convergence properties. We expect such behavior on other machine learning tasks. Having said that, we will follow your advice and include a few more small-scale experiments with different tasks. We expect the results to be similar. > Another weakness of this work is on assuming a fixed computation time for each worker. Such models are highly restrictive and impractical. I would encourage the authors to consider ways for extending their framework to arbitrary, but bounded worker delays. I would appreciate a discussion on how such lower-bounds (or the optimal method), may change if these bounds were overly conservative and long-tailed. Please note that unlike much of existing asynchronous optimization/SGD literature, which works with the formalism of **(iteration) delays** (which does not refer to time but to the difference in iteration counters between when a model was read and an update by a "delayed" worker written/applied), we directly work with the notion of **time** it takes for each worker to perform its computation. This is a different paradigm. The classical iteration delays are not first-class citizens in our formalism, i.e., they are not assumed. 
Instead, they are the result of the algorithm and the problem (e.g., larger noise $\sigma^2$ and/or smaller error tolerance $\varepsilon$ lead to larger batch size in Thm 7.4, and in classical asynchronous SGD, this would lead to larger iteration delays since such an approach counts every stochastic gradient as an iteration), and an indirect function of the speeds $\tau_i$, which are the first-class citizens in our work. We believe our approach is more reasonable - we assign processing times to all workers, and the delays are a function of the algorithm. Not assumed, but observed. We say this to make it more clear that our constants $\tau_i$ should not be mistaken for the iteration delays from the majority of previous works on asynchronous SGD (we are not saying the reviewer made such a mistake - we just want to make this very clear since the reviewer refers to delays in his/her question). These prior works sometimes indeed work with non-constant and even unbounded delays. Due to the different nature of iteration delays and compute times $\tau_i$, the fact that they do so, and that we work with fixed times $\tau_i$, should therefore not be seen as a relative shortcoming of our approach. We therefore interpret the question as follows: "I would encourage the authors to consider ways for extending their framework to arbitrary, but bounded worker computation times $\tau_i$." Having said that, it absolutely does make sense to extend our setup to non-constant processing times $\tau_i$, e.g., - processing times belonging to some interval, - or random processing times following some distribution (with mean $\tau_i$ and variance $\sigma_i^2$), - or processing times which depend on the stochastic gradient sampled. We believe these are all good ideas for future study, which should be simpler given our foundational work in the constant $\tau_i$ setting. We believe that more complicated computation models deserve a separate research effort and analysis. 
We will add a future work section where we will comment on this question. We remark that Reviewer 563Q also asked us a similar question, too. > Limitations of the work are not adequately addressed. There are for instance, many assumptions that are taken for granted in the analysis that may limit the applicability of the findings to practical settings... namely the fixed delay model per worker, and the relationship between the batch-size S and the number of workers n. I am curious how findings would change in settings where S >> n or S << n. The assumption "the fixed delay model per worker" (here delays mean computation/processing times $\tau_i$, and not iteration delays) was never hidden. It is immediately considered in the introduction. Even in the title of the paper, we say that we get theoretical complexities "Under a Fixed Computation Model." All our findings will hold in both regimes $S \gg n$ or $S \ll n$. Let us clarify it. If $S \gg n$, then $\frac{\sigma^2}{\varepsilon} \gg n.$ It is a "high noise/small # of workers" regime. Then one can show that (11) is minimized when $m \approx n.$ In the "high noise/small # of workers" regime, we need the contribution of all workers. However, when $S \ll n,$ then $\frac{\sigma^2}{\varepsilon} \ll n.$ It is a "low noise/large # of workers" regime. In this case, the optimal $m$ can be much smaller than $n$ (see Lines 199-205). We will be happy to add this clarification to the camera-ready paper. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: I acknowledge having read the author response, which addressed my minor comments. I have no major issues with this work, and will maintain my recommendation.
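As a toy illustration of the batching rule discussed in this thread (collect $S$ fresh gradients, discard any gradient started under an older iterate), here is an event-driven sketch of a Rennala-style step on a simple quadratic with fixed per-worker compute times $\tau_i$; all names, constants, and the objective are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def rennala_sgd_quadratic(taus, S, gamma, T_max, d=10, noise=1.0, seed=0):
    """Event-driven sketch of a Rennala-style rule on f(x) = 0.5 * ||x||^2:
    collect S gradients computed at the current iterate, drop any gradient
    that was started under an older iterate, then take one averaged step."""
    rng = np.random.default_rng(seed)
    x = np.ones(d)
    n = len(taus)
    t, version = 0.0, 0
    finish = np.array(taus, dtype=float)   # all workers start at time 0
    started = np.zeros(n, dtype=int)       # iterate version each worker read
    batch, collected = np.zeros(d), 0
    while t < T_max:
        k = int(np.argmin(finish))         # next gradient to arrive
        t = finish[k]
        if started[k] == version:          # fresh gradient: accept it
            batch += x + noise * rng.standard_normal(d)  # grad f(x) = x, plus noise
            collected += 1
            if collected == S:             # batch complete: take a step
                x = x - gamma * batch / S
                version += 1
                batch, collected = np.zeros(d), 0
        # a stale gradient is simply dropped; the worker restarts either way
        started[k] = version
        finish[k] = t + taus[k]
    return x

x_final = rennala_sgd_quadratic([1.0, 2.0, 3.0, 4.0], S=4, gamma=0.1, T_max=200.0)
```

With $S = 1$ this degenerates to SGD driven by the fastest worker; larger $S$ trades per-batch wall-clock time for lower gradient variance, which is exactly the trade-off behind the choice of $m$ and $S$ discussed above.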
Summary: This paper proposes a protocol that generalizes the classical oracle framework approach. Using this protocol, it establishes minimax complexities for parallel optimization methods that have access to an unbiased stochastic gradient oracle with bounded variance. Strengths: - The motivation of this paper is clear. - The theoretical analysis seems sound. Weaknesses: I am not an expert on the topic of this paper. 1. The challenge of theoretical analysis should be made more clear, which highly relates to the contribution of this paper. 2. The empirical study lacks. 3. The organization of this paper is strange. E.g., related work is located in 6.1, and conclusion section lacks, etc. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please cf. weaknesses part. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review! > The challenge of theoretical analysis should be made more clear, which highly relates to the contribution of this paper. Let us briefly explain the main challenges of the paper: 1. Even before we started proving the lower bounds, the main challenge was to understand how to analyze parallel optimization methods. In Section 3, we explain that previous papers were counting the number of oracle calls. We proposed a completely different paradigm, where instead of counting *oracle calls*, we count *time*. We believe that this is the first challenge. 2. The next challenge comes from analyzing the lower bound for the complexity (6) (lines 113-144 in the paper). Nobody before us analyzed this complexity. The analysis required us to develop new non-trivial proof steps. For example, we discuss these non-trivialities in Section D (e.g., lines 571-573, 596-605, 670-674). There are many technical details that were not considered before. 3. The next challenge was to develop the new optimal methods, Rennala SGD and Malenia SGD. These are new methods that improve the complexity of the celebrated Asynchronous SGD! For a long time, the community *believed* that Asynchronous SGD is the best asynchronous method. Our work shows that Asynchronous SGD is suboptimal. Instead of it, we designed new optimal methods. We believe that these are non-trivial solved challenges. **Summing up, our goal was to understand the theoretical limits of the celebrated stochastic gradient descent (SGD) method in the asynchronous setting with $n$ parallel workers. We believe that we took a nontrivial step towards a better understanding of parallel optimization methods. Further, we believe our work is a fundamental contribution to an important subfield of machine learning, resolving a long-standing open problem. As such, we think that a borderline/weak score is not appropriate. We hope we can convince the reviewer about this.** > The empirical study lacks. 
Note that in Section J, we consider experiments. There we show that even on simple quadratic optimization tasks, Asynchronous SGD can have bad convergence rates. Our new method significantly outperforms the previous baseline. However, our work is theoretical and we would want it to be judged as such. > The organization of this paper is strange. E.g., related work is located in 6.1, and conclusion section lacks, etc. We understand that related work sections are typically written at the beginning of papers. In our paper, we put related work later because it makes the explanation of the Time Oracle Protocol easier. We can add a future work or conclusion section. We followed an organization that best fits our paper and its contents, rather than the other way around: fitting the paper into a pre-conceived, static organization structure. We believe the paper benefitted from this. We hope that our comments have addressed the reviewer's concerns. If the reviewer has other questions, we will be happy to continue the discussion. --- Rebuttal 2: Title: A question Comment: Dear reviewer, Please can you let us know if the other reviews and our rebuttal changed your mind about our paper? Thanks in advance for taking the time to reply during Summer/vacation season. Cheers, authors
Summary: This paper studies time complexity of parallel stochastic optimization, establishes lower bounds, and proposes algorithms achieving matching complexity results. Strengths: Based on my reading of the main paper (I haven't read the proofs), this is a solid theoretical work. The theoretical results improve widely used asynchronous SGD, and are even a bit surprising, especially the fact that ignoring some computations actually helps. Weaknesses: General comments: - The organization of the paper is a bit reader unfriendly. The assumptions are mentioned at multiple places, starting with page 1, but are formally introduced almost at the end (page 8). This makes the reading tedious. Minor comments: - line 16, shouldn't the codomain of $f$ be $\mathbb R$ rather than $\mathbb R^d$? - line 39: $L, \Delta$ have not been introduced so far in the paper - Async. SGD (after line 52): step 2 gives the impression that only homogeneous data distribution across devices is considered in the paper. This is true for the main paper, but since the authors do have heterogeneous case results as well, they might consider reflecting that here. - Lines 98-99: "this approach is not convenient" Maybe giving a short intuition why (which becomes clear on the next page), would help the reader. - Typo in line 212 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Doubts: - In (6), S_t is the set of indices that started computing gradients before time t. Shouldn't we consider k's which "finished" computation before t? - Line 205: what does it mean by the freedom to interrupt oracles? Questions: - As the authors themselves remark in line 237, Rennala SGD goes contradictory to the idea of using all stochastic gradients. In that case, have the authors studied how it performs in practice in comparison to async SGD, especially with heterogeneity? - In Theorem 7.4, we need a large batch size in the presence of sizable noise. 
Since in a lot of analyses, the ratio of learning rate and batch size is the crucial quantity, is it possible to choose $S=1$, and $\gamma = \Theta (\epsilon)$, and still get convergence? One would expect this to be closer to asynch SGD while achieving better rates, and performing better in practice. - I haven't looked at Appendix A, but looking at the heterogeneous result in Table 1, the lower bound and Malenia SGD complexity have no dependence on the heterogeneity term. How come? - The bound in (12) for when the workers start simultaneously: What is the explanation for this approach being slower? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments! > The organization of the paper is a bit reader unfriendly. The assumptions are mentioned at multiple places, starting with page 1, but are formally introduced almost at the end (page 8). This makes the reading tedious. We define assumptions about our main problem (1) only in Section 7.1. In the introduction (Section 1), we give the reader a brief description of the class of optimization problems that we consider. If we delete Lines 21-22, then it would be more difficult to explain the history of previous results in Section 1.1. > line 16, ... > line 39, ... Thank you, indeed, we will fix it. > Async. SGD (after line 52): step 2 gives the impression that only homogeneous data distribution across devices is considered in the paper. This is true for the main paper, but since the authors do have heterogeneous case results as well, they might consider reflecting that here. We will add a comment near line 52. Note that we have a long discussion in Sections A.2 and A.4. We do not avoid the fact that Async. SGD was considered in the heterogeneous case. > Lines 98-99: "this approach is not convenient" Maybe giving a short intuition why (which becomes clear on the next page), would help the reader. We tried to explain it in Lines 108-116. Unfortunately, we could not find a nice way to explain why "this approach is not convenient" without introducing our new protocol first. > In (6), S_t is the set of indices that started computing gradients before time t. Shouldn't we consider k's which "finished" computation before t? We consider all k's which "finished" computation before t. Indeed, let us fix an index $k$ and a time $t.$ Let us consider the gradient $g^k.$ From line 5 in Protocol 2, we know that the time of calculation of $g^k$ is $t^k.$ So if $t^k \leq t,$ then by definition of $S_t,$ we have that $k \in S_t.$ > Line 205: what does it mean by the freedom to interrupt oracles? 
In Section F, we generalize our new protocol and assume that an algorithm can stop the calculations of workers at any time. In Theorem 6.4, we analyze the oracle from (7). Note that an algorithm should wait till the end of every calculation and cannot stop/interrupt calculations in (7). In Section F, we answer the question: "How will the complexity change if an algorithm can stop calculations at any time?" We show that it does not change the complexity. > As the authors themselves remark in line 237, Rennala SGD goes contradictory to the idea of using all stochastic gradients. In that case, have the authors studied how it performs in practice in comparison to async SGD, especially with heterogeneity? In Section J, we have experiments that support our theory. We show that Rennala SGD (which ignores previous stoch. gradients) has better time complexity than Async. SGD. > In Theorem 7.4, we need a large batch size in the presence of sizable noise. ... is it possible to choose $S = 1$, and $\gamma = O(\epsilon)$, and still get convergence? Yes, it is possible. With this choice of parameters, Rennala SGD reduces to the classical SGD method with the fastest worker. However, it is better to choose $S = n$; then one can show that Rennala SGD will have the suboptimal time complexity of Async. SGD (we didn't add this to the paper because the choice of the parameter is suboptimal). But this is a good question. We should probably add this clarification to the paper. > I haven't looked at Appendix A, but looking at the heterogeneous result in Table 1, the lower bound and Malenia SGD complexity have no dependence on the heterogeneity term. How come? Please take a look at the proof of Theorem A.3 for the heterogeneous case. It is very short and gives the answer to why Malenia SGD complexity has no dependence on the heterogeneity term. The main idea is that Malenia SGD calculates unbiased stochastic gradients of the function $f$ (not of the function $f_i$). 
So Malenia SGD can work in an arbitrary heterogeneous setting! > The bound in (12) for when the workers start simultaneously: What is the explanation for this approach being slower? One should compare (12) and (11). First, the harmonic mean $\left(\frac{1}{m}\sum_{i=1}^m \frac{1}{\tau_i}\right)^{-1} \leq \tau_m,$ so (11) $\leq$ (12). One can show that (11) $\ll$ (12) by taking $\tau_i = i$ (as a theoretical example to show the gap). Since $\frac{1}{m}\sum_{i=1}^m \frac{1}{i} \approx \frac{\ln m}{m},$ then $$(11) \approx \min_{m \in [n]} \left(\frac{m}{\ln m} \left(\frac{L \Delta}{\varepsilon} + \frac{\sigma^2 L \Delta}{\varepsilon^2 m} \right)\right) \leq \frac{\sigma^2 L \Delta}{\varepsilon^2} \frac{1}{\ln \frac{\sigma^2}{\varepsilon}},$$ where we take $m \approx \frac{\sigma^2}{\varepsilon},$ and $$(12) = \min_{m \in [n]} \left(m \left(\frac{L \Delta}{\varepsilon} + \frac{\sigma^2 L \Delta}{\varepsilon^2 m} \right)\right) \geq \frac{\sigma^2 L \Delta}{\varepsilon^2}.$$ Since $\frac{\sigma^2 L \Delta}{\varepsilon^2} \frac{1}{\ln \frac{\sigma^2}{\varepsilon}} \leq \frac{\sigma^2 L \Delta}{\varepsilon^2},$ we have (11) $\ll$ (12). Thank you for the comments! --- Rebuttal Comment 1.1: Comment: Thanks for the response. I have no further questions. I maintain my score (7) and support the paper's acceptance.
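The (11)-vs-(12) gap above can be checked numerically. A small sketch with illustrative constants ($L\Delta = \sigma^2 = 1$, so the bracket becomes $1/\varepsilon + 1/(\varepsilon^2 m)$) and $\tau_i = i$:

```python
import math

def t11(taus, eps, sigma2=1.0, L_delta=1.0):
    """Harmonic-mean bound, cf. (11): min over m of
    (1/m * sum_{i<=m} 1/tau_i)^(-1) * (L*Delta/eps + sigma2*L*Delta/(eps^2 * m))."""
    best, inv_sum = math.inf, 0.0
    for m, tau in enumerate(taus, start=1):
        inv_sum += 1.0 / tau
        hmean = m / inv_sum
        best = min(best, hmean * (L_delta / eps + sigma2 * L_delta / (eps**2 * m)))
    return best

def t12(taus, eps, sigma2=1.0, L_delta=1.0):
    """Slowest-of-the-first-m bound, cf. (12); taus sorted ascending,
    so tau at position m is the max over the first m workers."""
    best = math.inf
    for m, tau in enumerate(taus, start=1):
        best = min(best, tau * (L_delta / eps + sigma2 * L_delta / (eps**2 * m)))
    return best

taus = [float(i) for i in range(1, 10001)]  # tau_i = i, as in the example
eps = 1e-2
assert t11(taus, eps) <= t12(taus, eps)     # harmonic mean <= max
assert t11(taus, eps) < t12(taus, eps) / 2  # roughly a log-factor gap
```

For these values, (12) is minimized at $m = 1$ (the fastest worker alone), while (11) benefits from pooling roughly $m \approx \sigma^2/\varepsilon$ workers, reproducing the $1/\ln(\sigma^2/\varepsilon)$ savings derived above.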
Summary: This paper studies the minimax complexities of distributed asynchronous stochastic optimization methods. By extending the oracle framework previously used in the literature, it establishes new lower bounds for parallel optimization methods. Based on the insights from their proof, they propose a new algorithm meeting these bounds. Notably, it is provably faster than previous synchronous and asynchronous methods in the homogeneous case, which is verified experimentally. Strengths: * This paper naturally extends the oracle protocol of previous work to cater for analysing parallel methods instead of sequential ones, which allows for a finer analysis of distributed methods. * Efforts have been made in the writing to explain the introduced novelties step by step, which greatly helps understand and position the contributions. * The insight taken from their new complexity proof that “ignoring stale gradients might actually help to converge faster” is interesting and highly relevant to the community, as it led to the development of a new state-of-the-art optimal method, which has provably and experimentally faster convergence rates than previous asynchronous algorithms in the homogeneous case. Weaknesses: * Many notations are introduced, which makes the paper a bit cluttered and cumbersome to read at times. * No experiments comparing Rennala & Asynchronous SGD to minibatch (synchronized) SGD are made to confirm the “provably better rates” claimed in Section 8. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Your model assumes fixed delays per worker: could your analysis also hold with stochastic ones? * You experimented with a fixed delay of $\sqrt i$ for worker $i$ (line 1163). Did you also experiment with real-life delays experienced naturally in the distributed asynchronous setting (as in Mishchenko 2022 [1])?
(For example, this would hint that your method also works with stochastic delays) * Did you experiment with convex functions other than the one introduced in line 1159, and did you consistently observe that Rennala leads to better results than Asynchronous SGD, or is this highly dependent on the convex functions used? * Why is Goyal et al. 2017 [2] cited in line 45? **Typos :** * Line 212: our framework *is*. **References :** [1] Mishchenko, K., Bach, F., Even, M., and Woodworth, B., *Asynchronous SGD beats minibatch SGD under arbitrary delays*, In NeurIPS, 2022. [2] Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., and He, K., *Accurate, large minibatch SGD: Training ImageNet in 1 hour.* Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: * Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review! > Your model assumes fixed delays per worker: could your analysis also hold with stochastic ones? > You experimented with a fixed delay of ... Did you also experiment with real-life delays experienced naturally in the distributed asynchronous setting... Great question. We were thinking about this setup when we were writing the paper. However, we decided to stick with the fixed computation model and understand it first, rather than get ahead of ourselves. Note that the Rennala and Malenia SGD methods can work even with stochastic delays. See Theorem 7.4 and Theorem A.3. These theorems do not assume that the delays are constants. We believe that stochastic delays should be analyzed in future work. We will add a future work section where we will comment on this question. We believe that experiments with stochastic delays will be more appropriate in a paper that considers such delays. > Did you experiment with convex functions other than the one introduced in line 1159... In this paper, we only experimented with quadratic optimization tasks. In Figure 3, we show that Asynchronous SGD is not robust to slow workers. We expect such behavior in other practical optimization problems as well. Quadratic optimization tasks are the simplest and best-understood problems in optimization, and even on these problems, Asynchronous SGD can show bad performance. > No experiments comparing Rennala & Asynchronous SGD to minibatch (synchronized) SGD are made to confirm the “provably better rates” claimed in Section 8. Note that [1, Fig. 1] showed that Asynchronous SGD is better than Minibatch SGD in their experiments. Unlike [1], we provide *theoretical* evidence that asynchronous methods are strictly better. [1]: Mishchenko K. et al.
Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays --- Rebuttal Comment 1.1: Comment: * *"Unlike [1], we provide theoretical evidence that asynchronous methods are strictly better."* [[1]](https://proceedings.neurips.cc/paper_files/paper/2022/file/029df12a9363313c3e41047844ecad94-Supplemental-Conference.pdf) also provides *theoretical evidence*: this is the bulk of their paper (see for example page 2: *"we prove guarantees for Asynchronous SGD that match the guarantee for Minibatch SGD using exactly M times fewer updates, meaning that our Asynchronous SGD guarantees are strictly better than the Minibatch SGD guarantees in terms of runtime."*) However, in addition to them, they also verify that it leads to faster convergence in *practice*, which is common practice in optimization. Thus, it would strengthen your claims to verify that your theoretical analysis also leads to faster convergence in practice. --- Reply to Comment 1.1.1: Comment: We agree that [1] shows better time complexity guarantees. We emphasize this in Lines 57-63 of our paper. However, [1] compares Asynchronous SGD and Minibatch SGD only, while our paper provides the lower bound for any method that starts synchronous calculations in Section 8. We compare the *lower bound* of any such method and the *upper bound* (which is simultaneously the lower bound) of Rennala SGD. In fact, Minibatch SGD is not the best method that starts synchronous calculations. An optimal method is $m$-Minibatch SGD (see details in Section G.1). > However, in addition to them, they also verify that it leads to faster convergence in practice, which is common practice in optimization. We agree; we will follow your advice and add experiments with Minibatch SGD to Section H. We will also add small-scale experiments with real-life delays on standard machine learning tasks. [1]: Mishchenko K. et al. Asynchronous SGD Beats Minibatch SGD Under Arbitrary Delays
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Grammar Prompting for Domain-Specific Language Generation with Large Language Models
Accept (poster)
Summary: This paper studies generating strings from highly structured languages for large language models (LLMs). The authors propose grammar prompting to enable LLMs to use external knowledge and domain-specific constraints, expressed through a grammar in Backus–Naur Form. Serving as intermediate reasoning steps, these grammars (metalanguage) enhance the model's performance in generating highly structured languages in domains such as semantic parsing and molecule generation. The idea is simple and effective, and the experimental results well support the claim. Strengths: + The paper is well-written and easy to follow. + A simple and effective method was presented, leading to improved downstream performance compared with simple prompting. Weaknesses: - The idea is not that novel, which uses metalanguage as the bridge that leads to the ultimate structured language. The method is an intuitive extension of the standard prompting (or chain-of-thought prompting) method. - Improvements over simple prompting methods are validated, but whether the method can beat more sophisticated algorithms (e.g., the recent tree-of-thought reasoning [3]) is unknown. However, as the authors claimed, the focus of this paper is not to achieve SOTA in downstream tasks. - The proposed constrained generation method is applied at the sub-sentence level, instead of the token level. Besides, the idea of constrained decoding has been explored in previous works [e.g., 1,2], which in turn challenges the novelty of this paper. [1] Hokamp, Chris, and Qun Liu. "Lexically constrained decoding for sequence generation using grid beam search." arXiv preprint arXiv:1704.07138 (2017). [2] Huang, Wenlong, et al. "Grounded decoding: Guiding text generation with grounded models for robot control." arXiv preprint arXiv:2303.00855 (2023). [3] Yao, Shunyu, et al. "Tree of thoughts: Deliberate problem solving with large language models." arXiv preprint arXiv:2305.10601 (2023).
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and helpful comments. We would like to emphasize the contribution and novelty of our work: (1) We focus on enabling LLMs to generate highly structured languages from just a few exemplars, a task where neither standard nor chain-of-thought prompting proves sufficient. (2) Our method, grammar prompting along with the constrained decoding algorithm, is a novel and practical method that efficiently solves the grammar and program generation problem. (3) Our experiments on tool use, molecule generation, and PDDL planning are the first extensive set of empirical studies on the general capabilities of LLMs for data-efficient generation across a broad range of structured languages. We will make the novelty and contributions clearer in the new version of this paper. > The idea is not that novel, which uses metalanguage as the bridge that leads to the ultimate structured language. The method is an intuitive extension of the standard prompting (or chain-of-thought prompting) method. While our method can be viewed as “an intuitive extension of the standard prompting”, our motivation, problem (as mentioned above), and ideas are very different from those of standard prompting. To solve the grammar and program generation problem, we drew inspiration from programming language design, where BNF has been a standard protocol, but its use as an intermediary form for DSL generation was never explored in the machine learning and natural language processing literature (to the best of our knowledge). Moreover, by generating specialized grammars and programs constrained to the grammar, we can effectively detect the compliance of the program to the grammar, and potentially provide a diagnosis of the validity of the generated program. While chain-of-thought prompting generates the intermediate “thought”, the validity of the answer cannot be directly verified from the “thought” process.
> Improvements over simple prompting methods are validated, but whether the method can beat more sophisticated algorithms (e.g., the recent tree-of-thought reasoning [3]) is unknown. The problems that our grammar prompting method targets are those with a combinatorial output space characterized by domain-specific grammars. Although the "tree-of-thought" approach gestures toward broad problem-solving capabilities, instantiating it for more complex search challenges is nontrivial. For instance, grammar prompting employs the Earley algorithm for efficient traversal and potential backtracking within the search space. In comparison, naive usage of BFS or DFS in the tree-of-thought would require a significantly higher number of LLM calls for exploration and backtracking. Please also kindly note that the preprint of the tree-of-thought reasoning paper was released after the NeurIPS deadline. > The proposed constrained generation method is applied at the sub-sentence level, instead of the token level. Besides, the idea of constrained decoding has been explored in previous works [e.g., 1,2], which in turn challenges the novelty of this paper. Generating specialized grammars and constrained programs at the sub-sentence level is a major strength of our work, instead of a limitation. Prior studies typically assume unrestricted access to smaller models (e.g., NMT models in [1]), whereas our approach focuses on scenarios with limited and costly access to much larger blackbox models like GPT-3.5 and Codex. Constraining at the sub-sentence level can significantly reduce the number of LLM calls, thereby reducing the associated costs of employing LLMs. The constraints discussed in [2] are much simpler than the DSL constraints we specify using context-free grammars. Unfortunately, their constrained decoding cannot generalize to the problems examined in this paper. Thank you again for bringing to our attention that we can make our contributions and novelty clearer.
We will make sure to include the above discussions in the paper. We also really appreciate the pointers to the related work and will cite them in the paper.
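The compliance check described in the rebuttal above — verifying that a generated program conforms to a (specialized) grammar — can be illustrated in miniature. The sketch below is a standard CYK membership test on an invented two-rule grammar in Chomsky normal form; it is far simpler than the paper's BNF DSL grammars and is not the authors' Earley-based decoder:

```python
# Toy CYK membership test: does a token sequence belong to the
# language of a small CFG in Chomsky normal form?
# Hypothetical grammar: S -> F A, F -> 'call', A -> 'arg'
binary = {("F", "A"): "S"}
terminal = {"call": {"F"}, "arg": {"A"}}

def in_language(tokens, start="S"):
    n = len(tokens)
    # table[i][j] = set of nonterminals deriving tokens[i..j]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i] = set(terminal.get(tok, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):
                for b in table[i][k]:
                    for c in table[k + 1][j]:
                        if (b, c) in binary:
                            table[i][j].add(binary[(b, c)])
    return start in table[0][n - 1]

assert in_language(["call", "arg"])       # conforms to the grammar
assert not in_language(["arg", "call"])   # violates it
```

A chart-based check like this is what makes grammar compliance decidable for context-free DSLs, in contrast to free-form chain-of-thought "thoughts", which cannot be mechanically verified.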
Summary: The authors investigate the effectiveness of grammar prompting as a simple approach to help LLMs utilize external knowledge and domain-specific constraints. This approach is motivated by the goal of enabling LLMs to generate DSL outputs that differ significantly from those encountered during pretraining. The authors achieve this by using a BNF grammar during in-context learning. In their framework, the LLM first predicts a BNF grammar based on a test input and then generates output constrained by the rules of the grammar. The experiments conducted demonstrate that grammar prompting enables LLMs to perform competitively on a diverse range of DSL generation tasks, such as semantic parsing, PDDL planning, and molecule generation. Strengths: - Novel method for prompting and constraining LLM generation using BNF grammar. - Strong experimental results. - The method is described clearly and is sound. - The method is relatively simple yet effective. The framework is refreshing and is expected to have a significant impact on the semantic parsing community. Weaknesses: Minor: The contributions and novelty should be stated at the end of the introduction section. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Have you explored other grammar forms? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: No negative societal impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and constructive suggestions. > Minor: The contributions and novelty should be stated at the end of the introduction section. We will make the contributions/novelty clearer in the introduction. > Have you explored other grammar forms? We explored only the BNF meta-syntax formalism and did not consider alternatives (e.g., Wirth syntax notation), which are less commonly used in practice to describe syntaxes. However, this is a very interesting avenue and we hope to explore this direction in our future work. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I will keep the current ratings based on the responses and other reviews
Summary: This paper presents an approach for improving few-shot prompting of LLMs for tasks that produce structured outputs, where the structure can be described by a grammar in BNF form. The approach is similar to chain-of-thought prompting: for a given input x, the LLM is prompted to generate the minimal subset of grammar rules that can produce the corresponding output G[y], then conditions on x and G[y] to produce y. Decoding can be done in the standard way (e.g. greedy decoding), but the paper also presents an improved version that uses a modified Earley parsing algorithm to ensure that the outputs y conform to the predicted grammar G[y]. All variants of the approach are effective, providing substantial benefits over reasonable baselines (e.g. standard few-shot prompting and constrained decoding) across a range of tasks, including realistic semantic parsing benchmarks, SMILES molecule generation, and plan initialization for PDDL planning. Strengths: The approach is elegant and seems simple to implement, treating the LLM as a black box and needing only knowledge of the output grammar. While the constrained generation procedure is a bit more complex (and requires multiple LLM calls), it's also well-motivated and elegant, and isn't required for strong performance. I could definitely see the overall approach being practically useful in tasks that require DSLs that were low-resource in pre-training, such as tool use. The experiments are thorough, with strong results across a range of tasks and datasets. The compositional generalization results on GeoQuery were especially interesting to me, as they indicate that the grammar examples in the prompt don't need to fully cover the grammar rules used in the output. The paper is very clearly written, especially given the number of experiments presented and space constraints (but see below for a few questions). 
Weaknesses: One minor weakness is that it would benefit the paper to also show results on other LLMs to give evidence that the approach is broadly applicable, especially since of the three models evaluated, GPT-3.5 derives from Codex (I believe, if the paper is using code-davinci-002) and GPT-4's training data is unknown. StarCoder-15B could be one such model, although it may not be few-shot promptable. A few details could be made clearer, if space is available, see below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Q1) How are the examples being retrieved in the retrieval-based ICL experiments? Do they use similarity of the generated grammar as well, or just the inputs? Q2) Which Codex model is being used in the experiments here? Minor clarification questions/points (don't need to address in author response): - Be more consistent with the bolding of numbers in results tables, e.g. in Table 5 - The limitations section mentions a lack of improvement for regexes; were these from experiments not reported in this paper? - I didn't fully understand the Macro experiments on the PDDL tasks (although I did not check the appendix carefully). - spelling of "conditioned" in Fig 2 caption - while the semantic parsing experiments are well-motivated by tool use, they aren't really tool use IMO, so renaming 4.1 might be appropriate Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The paper's limitation section did a good job, I think. One additional potential limitation, though, is that, like in chain-of-thought prompting, the approach effectively introduces a pipeline: the model needs to first predict the grammar rules for an example, then condition on these rules to predict the output.
This could lead to error propagation if the first step is wrong. However, the results are strong enough to indicate this probably isn't happening, and if it did it could be addressed with self-consistency / consensus decoding or another MBR-like approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and the recognition of the strengths of our work. > “One minor weakness is that it would benefit the paper to also show results on other LLMs …” In our recent experiments with PaLM2-Large using grammar prompting, we utilized the same 100 examples mentioned in Table 3. Our results indicate that grammar prompting outperforms the standard prompting baseline in two of the three benchmarks, though the performance improvement is smaller than with the GPT family of models. | | GeoQuery | SMCalFlow | Overnight-Blk | |:------:|:-----:| :-----:|:-----:| | Standard Prompting | 90 | 14 | 59 | | Grammar Prompting | 87 | 17 | 62 | > Q1) How are the examples being retrieved in the retrieval-based ICL experiments? Do they use similarity of the generated grammar as well, or just the inputs? Following prior work (e.g., Qiu et al. ’22), examples are retrieved based on their BM25 scores, using only the input natural language (NL) questions to calculate these scores. Although employing the generated grammar for retrieval might enhance the quality of the examples obtained, it necessitates a minimum of two LLM calls per instance. In comparison, relying on NL questions for retrieval usually demands just a single LLM call. > Q2) Which Codex model is being used in the experiments here? We used code-davinci-002 in the experiments. > Minor clarification questions Thank you for pointing these out; we will clarify them in our revised version. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response! I still feel very positively about this paper after reading it and the other reviews and responses.
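The BM25-based retrieval described in the answer to Q1 can be sketched as follows. This is a minimal textbook BM25 scorer over whitespace-tokenized questions with an invented toy corpus — not the retrieval code used in the paper — using the common defaults $k_1 = 1.5$, $b = 0.75$:

```python
import math

# Minimal BM25 scorer: rank candidate in-context examples by
# similarity of their natural-language questions to the query.
def bm25_scores(query, docs, k1=1.5, b=0.75):
    docs = [d.lower().split() for d in docs]
    terms = query.lower().split()
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = {}  # document frequency of each term
    for d in docs:
        for t in set(d):
            df[t] = df.get(t, 0) + 1
    scores = []
    for d in docs:
        s = 0.0
        for t in terms:
            f = d.count(t)
            if f == 0:
                continue
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            s += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores

corpus = ["what rivers run through texas",          # toy retrieval pool
          "book a meeting with alice tomorrow",
          "which states border texas"]
scores = bm25_scores("rivers in texas", corpus)
assert scores.index(max(scores)) == 0  # most similar question wins
```

The retrieved top-scoring examples (with their specialized grammars) would then be placed in the prompt; only one LLM call per test instance is needed since scoring uses the NL question alone.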
Summary: This work explores grammar prompting as a simple approach for enabling LLMs to use external knowledge and domain-specific constraints, expressed through a grammar expressed in Backus–Naur Form (BNF), during in-context learning. Grammar prompting augments each demonstration example with a specialized grammar that is minimally sufficient for generating the particular output example, where the specialized grammar is a subset of the full DSL grammar. The authors apply grammar prompting to various domain specific languages for semantic parsing (SMCalFlow, Overnight, GeoQuery), AI planning (PDDL), and molecule generation (SMILES), and find that it can meaningfully improve upon standard prompting baselines in the few-shot setting. Strengths: This paper is interesting and well-written. This work is an efficient prompting generation algorithm for LLM. Experimental results show that the proposed method seems effective in generating sequences with grammars. Weaknesses: In experiment part, how do you ensure the results are enough to verify the proposal? For example, the superiority in molecule generation is not well established compared with the other graph-based generation methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and the recognition of the strengths of our work. > “In experiment part, how do you ensure the results are enough to verify the proposal? For example, the superiority in molecule generation is not well established compared with the other graph-based generation methods.” In molecule experiments, we focus on low-data class-specific molecule generation, as opposed to general molecule generation with massive amounts of data. In the low-data regime, recent work [29] has verified that current graph-based models fall short significantly compared with their grammar-based model, in the same settings as our molecule experiments. Our grammar prompting further improves upon [29] and thereby outperforms graph-based models in the specific low-data regime. Furthermore, our research reveals the initial success of applying LLMs in molecule generation tasks, suggesting a potentially fruitful avenue for future exploration.
Rebuttal 1: Rebuttal: We appreciate all reviewers' time and efforts in reviewing our paper. We are glad to find that reviewers generally recognized our key contributions and clear presentation of our paper: **Method**: “The approach is elegant and seems simple to implement” [rfkj], “Novel method for prompting and constraining LLM generation using BNF grammar.” [pwxh], “This work is an efficient prompting generation algorithm for LLM.” [qhz7], “A simple and effective method was presented,” [8y4i] **Experiment**: “The experiments are thorough, with strong results across a range of tasks and datasets.”[rfkj], “Strong experimental results.” [pwxh] **Presentation**: “The paper is very clearly written” [rfkj], “The method is described clearly” [pwxh], “The paper is well-written and easy to follow.” [8y4i], “This paper is interesting and well-written.” [qhz7] Also, we thank all reviewers for their valuable and constructive suggestions. We reply to each reviewer individually below.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Constrained Proximal Policy Optimization
Reject
Summary: This paper introduces the constrained version of PPO that is devised for solving constrained MDP problems in the discounted reward setting with discounted cost constraints. Strengths: The paper presents the constrained PPO. It also gives some analytical results and presents a heuristic algorithm that is seen to perform well numerically. Weaknesses: 1. Inadequate literature survey. The authors should broaden the scope of their survey - in fact constrained actor critic was proposed in the paper "V. S. Borkar, An actor-critic algorithm for constrained Markov decision processes, Systems and Control Letters, 54(3):207-213, 2005", for the full state case, and "S. Bhatnagar, An actor–critic algorithm with function approximation for discounted cost constrained Markov decision processes, Systems and Control Letters, 59(12):760-766, 2010", in the function approximation setting. 2. The authors point to the fact that they have not provided a convergence analysis for their algorithm even though other works in the literature such as the ones listed above do possess an asymptotic analysis of convergence. 3. The results given in the text carry imprecise statements, for instance, Prop. 3.1 says "if there are a sufficient number of sampled $v$, then $E[v]=1$ and $E[v\log v] \leq \mathrm{var}(v-1)$". This has to be made precise. Does it mean that in the limit that the number of samples goes to infinity, the statement is valid? Then, if so, the question will be what will be the form of the statement in terms of the number of samples N. 4. The recovery update problem in Sec 3.2 is not clear - how it has been arrived at? 5. Subsequently a heuristic procedure has been proposed and the authors say that an optimal solution to (5) is provably obtained. But if it is provably obtained, how is the procedure a heuristic procedure? 6. After (4), it is said that (4) can be directly solved through existing convex optimization techniques. This is not clear since one needs $A(s,a)$, $A_c(s,a)$, etc.
to be known in order to solve it directly. But that is not known and needs to be estimated. So it is not clear what the authors mean. **** 7. On further reading of the paper and supplementary material after the authors' response, I feel the technical results are imprecise and flawed. For instance, there is no way to verify Assumption 3.5. Moreover, the statement of Proposition 3.1 seems to suggest that it depends on a "sufficient number of sampled $v$" but gives a bound on the true expectation. Why should the true expectation depend on the number of samples of $v$? Also, when you say "sufficient number of sampled $v$", what does it mean? How many $v$ is a sufficient number? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. On page 2, is it $\pi(a|s)$ instead of $\pi(s|a)$? 2. Beginning of page 3: the definitions of $Q(s_t,a_t)$, $V(s_t)$, $Q_c(s_t,a_t)$ and $V_c(s_t)$ are confusing. This is because you have a summation over $t$ from $0$ to $\infty$, and then you condition on $s_0=s_t$, $a_0=a_t$, etc. What does it mean? It is too confusing. 3. After (1), how is $d^\pi$ defined? 4. In (3), is there a relationship between $d$ and $\delta$? Can one be found from the other? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The major limitation is in terms of a lack of credible analysis. Even the theoretical results presented are not precise, so nothing much can be said about the algorithm. In addition there are several typos and grammatical errors throughout the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to you for your valuable and constructive comments. Our responses are as follows.** Some comments from reviewers are not displayed in full but are replaced by ... to save space. **W1**: Inadequate literature survey, ... , in the function approximation setting. **R1**: Thank you for your comment. We have carefully reviewed the suggested papers and incorporated them into our revised paper's citation list. **W2**: The authors point to the fact that ... listed above do possess an asymptotic analysis of convergence. **R2**: Thank you for your comment. The convergence analyses presented in the aforementioned papers rely on treating the Lagrangian multiplier as a **constant** and demonstrate the convergence of the V-value, as defined under this constant multiplier, using the contraction mapping theorem. It is important to note that these methods are **specifically tailored for the Lagrangian-based approach**. However, feasible-region methods like CPO, CVPO, and the proposed CPPO operate without the necessity of a global Lagrangian multiplier for optimization. Consequently, the convergence analyses put forth in the mentioned papers cannot be directly applied to prove the convergence of our approach. **W3**: The results given in the text carry imprecise statements, ..., in terms of the number of samples N. **R3**: Thank you for your comment. It is evident that $v$, being a probability ratio, has an expected value of $1$. This implies that if we were to sample a large number of instances of $v$, the mean value of these samples would converge to 1. Consequently, we believe that the statements presented in Proposition 3.1 are precise. However, in practice, we can observe that even when only hundreds of $v$ samples are considered, the mean value of $v$ can still be very close to 1.
This observation is particularly evident in the PPO method, where one can calculate the mean value of $v$ during each optimization step. As an on-policy RL method, it is common to obtain thousands or even tens of thousands of sampled $v$ in a single rollout, thereby affirming the validity of Proposition 3.1 for the proposed method. **W4**: The recovery update problem in Sec 3.2 is not clear - how it has been arrived at? **R4**: Thank you for your comment. As mentioned in Sec. 3.2.2, the purpose of the recovery update is to bring the current policy back from the infeasible region. This involves reducing the episode cost of the current policy by finding an alternative policy that *minimizes costs while either preserving or minimizing any reduction in the overall reward return*. In other words, we seek to minimize the episode cost return without negatively impacting the episode reward return. Consequently, obtaining the three recovery cases shown in Fig. 1 is a straightforward process. **W5**: Subsequently a heuristic procedure has been proposed, ... , how is the procedure a heuristic procedure? **R5**: Thank you for your comment. We acknowledge that the paper does not explicitly state that we have *proved* that the optimal solution is attained. In Section 3.3, we refer to the algorithm as a *heuristic* procedure because it relies on **Assumption 3.5**, as stated in **Line 258**. If this assumption is validated, the algorithm can be shown to yield the optimal solution. Presently, the assumption remains unproven; however, we firmly believe in its correctness based on our verification of the solution against the results obtained from Matlab's *fmincon* function. **W6**: After (4), ... So not clear what the authors mean? **R6**: Thank you for your comment. Before proceeding with the optimization in equation (4), we performed an estimation of both $A(s,a)$ and $A_c(s,a)$ using the GAE technique [1].
This approach is widely adopted in various RL algorithms, such as PPO and MPO. It's important to note that we obtained the advantage-value/q-value **prior** to initiating the optimization process. **Q1**: On page 2, is it \pi(a|s) instead of \pi(s|a)? **A1**: Thank you for your comment; we have reviewed and corrected the typo in the paper accordingly. **Q2**: Beginning of page 3: ... It is too confusing. **A2**: Thank you for your comment. We acknowledge that the definitions of $Q$, $V$, $Q_c$, and $V_c$ in our paper are consistent with those used in other works within the CRL field. Nevertheless, we understand the importance of clarity and will make revisions to enhance the definitions for better understanding. **Q3**: After (1), how is d^\pi defined? **A3**: Thank you for your comment. As defined in **Line 162**, $d^{\pi}$ represents the state distribution under the current policy $\pi$. In practical implementations, this state distribution is implicitly represented by the states of the trajectories sampled from policy $\pi$, following the same representation method used in PPO and TRPO. **Q4**: In (3), is there a relationship between d and \delta? Can one be found from the other? **A4**: Thank you for your comment; the definition of $d$ can be found in **Line 165**, where it represents the cost constraint. On the other hand, $\delta$ corresponds to the reverse KL constraint, as defined in **Line 177**. It is essential to note that these two parameters are independent of each other, each serving a distinct purpose in our study. **Lastly, we would like to express our gratitude for your patience in reviewing our response, and for your invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!** \[1\] Schulman, John, et al. \"High-dimensional continuous control using generalized advantage estimation.\" arXiv preprint arXiv:1506.02438 (2015).
--- Rebuttal Comment 1.1: Title: response to rebuttal for referee ChuP Comment: R2: No, the Lagrange multiplier in the mentioned references is not constant but those papers have an update rule and they prove the convergence of even the Lagrange multiplier. Please take a closer look. I have read through all the responses of the authors. While I appreciate the responses provided by the authors, I am convinced the paper lacks concrete analysis. In particular, Assumption 3.5 makes no sense. In Proposition 3.1, the statement reads that if there are a sufficient number of sampled v, then E[v]=1 and E[v log v] \leq var(v-1). Why should such a result depend on the number of samples of v when E[.] and var(.) are meant to be the true expectation and variance respectively? These are the imprecisions I am pointing to in the review. Since the results are technically flawed, I am reducing my rating of the paper to 3. I have also updated my review. --- Reply to Comment 1.1.1: Comment: Thank you for the further comments! For R2, when they prove the convergence of the **value function**, they treat the Lagrange multiplier as a constant. This assumption doesn't affect their method's Lagrange multiplier updates. However, it is crucial to highlight that this approach is **not suitable for proving the convergence of the trust region method**. This is because the trust region method doesn't have a global Lagrange multiplier. For Assumption 3.5, we **partly confirmed this hypothesis by testing it using a Matlab function**. If you have doubts about this assumption, it could be helpful to find counterexamples to support your point. Also, note that E[] and var() represent the average and variance of **sampled** probability ratios. These are different from the actual expected values and variances of probability ratios, which don't change based on the number of samples used. **Lastly, we would like to express our gratitude for your patience in reviewing our response.**
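The point disputed in this thread, that importance ratios sampled under the behavior policy have unit mean, can be checked numerically. The following is an illustrative Monte Carlo sketch using hypothetical one-dimensional Gaussian "policies" (not the paper's actual setup): since the ratio integrates the new density to 1 under the old density, the true expectation is exactly 1, and the sample mean concentrates around 1 as the number of samples grows.

```python
import math
import random

random.seed(0)

def ratio(x, mu_new=0.2, mu_old=0.0):
    # v = pi_new(x) / pi_old(x) for two unit-variance Gaussian "policies";
    # E_{x ~ pi_old}[v] = integral of pi_new = 1, independent of sample count.
    return math.exp(-(x - mu_new) ** 2 / 2 + (x - mu_old) ** 2 / 2)

# Draw actions from the old policy and average the sampled ratios.
samples = [ratio(random.gauss(0.0, 1.0)) for _ in range(200_000)]
mean_v = sum(samples) / len(samples)
print(mean_v)  # close to 1; the deviation shrinks as the sample size grows
```

This separates the two quantities the thread conflates: the true expectation E[v] is 1 regardless of how many samples are drawn, while the *sample* mean the authors refer to only approaches 1 with enough samples.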
Summary: The paper introduces Constrained Proximal Policy Optimization (CPPO) for Constrained Reinforcement Learning (CRL). The CPPO method is designed to overcome the limitations of existing methods by offering a first-order feasible region method that doesn't require dual variables or second-order optimization, and it is an incremental extension of the CVPO algorithm (Constrained Variational Policy Optimization for Safe Reinforcement Learning). The improvement seems incremental, improving the computational efficiency of CVPO. The method is evaluated in different environments, showing comparable or even superior performance to other baseline methods. However, the paper does not provide a direct comparison between CPPO and CVPO. Strengths: - The authors propose a new first-order method for constrained RL. The method seems to be designed to be simple, and overcome limitations of existing methods such as CPO and CVPO (e.g., the authors do not require the usage of a dual variable). - The proposed method demonstrates comparable or even superior performance compared to other baseline methods. Weaknesses: - Clarity: the paper could benefit from improvements in clarity and readability. The presentation of their ideas is somewhat dense and could be difficult for readers to follow. A concise description of the algorithm is missing. - Novelty and results: The method builds directly upon CVPO. The authors need to clearly highlight this fact when introducing their method, and clearly explain what the differences are. As of now, the changes seem incremental, and therefore the paper seems to lack novelty. Furthermore, the paper does not provide a direct comparison between CPPO and CVPO, especially in terms of computational complexity - No theoretical results are provided, therefore I'd have expected more extensive numerical results. - Typos and notation: there are several typos and errors in notation throughout the paper.
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See above Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Authors briefly discuss limitations and broader impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to you for your valuable and constructive comments. Our responses are as follows.** **W1**: Clarity: the paper could benefit from improvements in clarity and readability. The presentation of their ideas is somewhat dense and could be difficult for readers to follow. A concise description of the algorithm is missing. **R1**: Thank you for your comment; we have reviewed and revised the paper according to your comments. **W2**: Novelty and results: The method builds directly upon CVPO. The authors need to clearly highlight this fact when introducing their method, and clearly explain what the differences are. As of now, the changes seem incremental, and therefore the paper seems to lack novelty. Furthermore, the paper does not provide a direct comparison between CPPO and CVPO, especially in terms of computational complexity **R2**: Thanks for your comment. We believe the proposed CPPO method is an extension of PPO in the constrained RL field, rather than a straightforward on-policy CVPO approach. The only thing these two methods have in common is that they share the same idea of finding an optimal policy within a trust region and progressively pushing the current policy towards the optimal one in an EM manner. Similar ideas have been successfully applied in CPO, MPO, and V-MPO as well. The most important contribution of our work is using the **probability ratio** instead of the **probability density** to represent the optimal distribution. Previous MPO-based algorithms (MPO, CVPO, V-MPO) endeavour to directly derive the probability density $\psi$ according to $\int\psi(a|s)da=1$. However, note that this formulation does not inherently yield that $\sum\psi(a|s)=1$, **an error that persists across all three aforementioned algorithms**, leading to the **incorrect normalization** of $\psi^*$.
In contrast, our work addresses this issue by employing the probability ratio $v$, which allows for a more straightforward calculation of the distribution while ensuring $E(v)=1$. Furthermore, in contrast to the KL divergence, the utilization of the $l_2$-norm ($\chi^2$ divergence) during the E-step offers a distinct **geometric interpretation** of the feasible region. This perspective facilitates the formulation of the **recovery update method**, which effectively minimizes costs while maintaining rewards. In the M-step, the proposed CPPO algorithm conducts a policy update process akin to that of PPO, thereby obviating the necessity for additional hyperparameters as seen in other MPO-based approaches. Regarding computational complexity, based on our experiments, with the same number of sampled states, CVPO requires approximately 5s for one epoch update, whereas CPPO achieves the same update in less than 2s. **W3**: No theoretical results are provided, therefore I'd have expected more extensive numerical results. **R3**: Thank you for your comment. We acknowledge that due to limited resources, we could only conduct tests on a restricted number of benchmark environments. In our future work, we will make an effort to include additional numerical experiments. **W4**: Typos and notation: there are several typos and errors in notation throughout the paper. **R4**: Thanks for your comment. We have reviewed the entire paper and made the necessary revisions to address typos and notation errors according to your comments. **Lastly, we would like to express our gratitude for your patience in reviewing our response, and for your invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!** --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for taking the time to answer my concerns.
Although I acknowledge the contributions, an effort needs to be made to improve the readability of the paper (especially to better emphasize the difference w.r.t. CVPO) and the technical analysis. I'm grateful for the authors' dedication to responding to the reviewers, and I'm eager to observe the enhancements in future versions. --- Reply to Comment 1.1.1: Comment: Thank you for the further comments! We will continue to revise and polish our work according to your advice. Your feedback is truly valuable to us!
Summary: This paper proposes a novel first-order feasible method, CPPO, for efficient constrained reinforcement learning. The proposed approach integrates the Expectation-Maximization (EM) framework to solve the policy optimization problem by treating CRL as probabilistic inference. In the E-step, CPPO calculates the optimal policy distribution within the feasible region. In the M-step, CPPO conducts a first-order update for policy optimization. The authors also propose an iterative heuristic algorithm from a geometric perspective to efficiently solve the E-step and a recovery update strategy to improve constraint satisfaction performance. They evaluate their algorithm in several benchmark environments. The reported results show comparable performance to other baselines in complex environments. Strengths: (1) The proposed algorithm converts the CRL problem into a convex optimization problem with a clear geometric interpretation, which mitigates the impact of approximation errors and strengthens the capability of the proposed method to satisfy constraints. (2) Since the proposed method does not require second-order optimization techniques or the use of the primal-dual framework, the policy optimization process is largely simplified. Weaknesses: (1) I suggest further polishing and improving the mathematical formulation and notations to make them more rigorous. For example, the objectives and constraints in (5) (6) are represented by the dot product of two vectors. In my understanding, it should be easier to understand with a transpose on top of the first vector. (2) The policy update strategy looks overly conservative. Figure 1 shows that only in case 3 can the policy be updated toward seeking a higher reward. Does this strategy result in over-conservativeness? I am also wondering in cases 1 and 3 why the solution is not at the feasible boundary (that is, to find the feasible distribution that maximizes A).
(3) It is very impressive that CPPO can work well in the AntCircle and Push tasks. However, the experiment lacks sufficient baselines and environments for comparison. For example, the recovery update in Section 3.2.2 is similar to the idea in Yang et al. [1, 2], where an additional projection step is introduced to recover the safe policy, so I am wondering how the authors compare CPPO against these methods, both in theory and empirically. In addition, the results only present a subset of tasks in SafetyGym, so I wonder how the algorithm performs in other tasks. [1] Yang, Tsung-Yen, et al. "Projection-based constrained policy optimization." arXiv preprint arXiv:2010.03152 (2020). [2] Yang, Tsung-Yen, et al. "Accelerating safe reinforcement learning with constraint-mismatched baseline policies." International Conference on Machine Learning. PMLR, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) This work seems like an on-policy version of CVPO, so I am wondering how the proposed approach compares against CVPO in the experiments? (2) Could the authors provide more details about baseline implementations, such as the PPO-Lag and TRPO-lag? I am not able to find them in the supplementary material. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to you for your valuable and constructive comments. Our responses are as follows.** **W1**: I suggest further polishing and improving the mathematical formulation and notations to make them more rigorous. For example, the objectives and constraints in (5) (6) are represented by the dot product of two vectors. In my understanding, it should be easier to understand with a transpose on top of the first vector. **R1**: Thank you for your comment; we have reviewed and revised the paper according to your comments. **W2**: The policy update strategy looks overly conservative. Figure 1 shows that only in case 3 can the policy be updated toward seeking a higher reward. Does this strategy result in over-conservativeness? I am also wondering in cases 1 and 3 why the solution is not at the feasible boundary (that is, to find the feasible distribution that maximizes A). **R2**: Thank you for your comment. The recovery update strategy is selectively employed when the current policy **violates the cost constraint**. It is essential to consider that the distribution obtained from the E-step might not adhere strictly to a Gaussian distribution, potentially hindering the actor policy's ability to reach the optimal policy. Therefore, we intentionally introduce some over-conservativeness to guide the current policy back to the feasible region. As a result, in cases 1 and 3, our proposed solution lies on the boundary of the trust region, rather than solely on the boundary of the feasible region. **W3**: It is very impressive that CPPO can work well in the AntCircle and Push tasks. However, the experiment lacks sufficient baselines and environments for comparison. For example, the recovery update in Section 3.2.2 is similar to the idea in Yang et.
al [1, 2], where an additional projection step is introduced to recover the safe policy, so I am wondering how the authors compare CPPO against these methods, both in theory and empirically. In addition, the results only present a subset of tasks in SafetyGym, so I wonder how the algorithm performs in other tasks. **R3**: Thank you for your comment. We believe PCPO and SPACE are representative approaches for efficiently solving CRL problems through the use of projection methods. The incorporation of projection allows for a substantial reduction in the number of constraint violations, enhancing their overall performance. However, it's important to acknowledge that these projection methods still rely on first/second-order optimization to estimate the cost, which may lead to cost violations in complex environments, as exemplified by CPO's performance in Safety Gym. In contrast, the recovery update strategy is designed to maximize cost reduction while preserving the reward during the recovery phase. This approach helps mitigate the side effects of approximation errors, making it a promising alternative to traditional projection methods in theory. Due to limited computing resources, we were only able to test a few environments. In future work, we plan to expand the scope and incorporate additional test scenarios to further validate and enhance the findings of our research. **Q1**: This work seems like an on-policy version of CVPO, so I am wondering how the proposed approach compares against CVPO in the experiments? **A1**: Thank you for your comment. Considering that CVPO is an off-policy algorithm and due to the limited computing resources within our team, we have not included this algorithm as a benchmark in our current study. Nevertheless, we acknowledge its significance and will include it for comparison in our future work. **Q2**: Could the authors provide more details about baseline implementations, such as the PPO-Lag and TRPO-lag?
I am not able to find them in the supplementary material. **A2**: Thank you for your comment. The baseline implementations in our paper are built upon the code provided by Safety Gym [1]. The code can be accessed at <https://github.com/openai/safety-starter-agents> **Lastly, we would like to express our gratitude for your patience in reviewing our response, and for your invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!** \[1\] Ray, Alex, Joshua Achiam, and Dario Amodei. \"Benchmarking safe exploration in deep reinforcement learning.\" arXiv preprint arXiv:1910.01708 7.1 (2019): 2. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Having reviewed the other reviews and the corresponding rebuttals, I have decided to retain my original score. While I acknowledge the contributions of the paper, I believe that conducting more comprehensive comparison experiments and providing deeper theoretical analysis could significantly enhance the quality of the work in future versions. I appreciate the authors' efforts in addressing the feedback and look forward to seeing potential improvements in subsequent iterations. --- Reply to Comment 1.1.1: Comment: Thank you for the further comments! We will continue to revise and polish our work according to your advice. Your feedback is truly valuable to us!
Summary: This paper proposes a novel CPPO method to solve the constrained RL (CRL) problem. Specifically, CPPO leverages probabilistic inference and converts the CRL problem formulation based on the probability ratio, resulting in a first-order optimization solution. CPPO also develops the recovery update method to safely optimize the policy when there are inaccurate cost evaluations and infeasible solutions. The resulting EM-based framework of CPPO shows its effectiveness across various Safety Gym scenarios compared to baselines. Strengths: 1. CPPO solves constrained RL with first-order optimization and does not use dual variables or second-order optimization, resulting in a simpler, intuitive (i.e., geometric perspective), and computationally efficient method. 2. The recovery update method is developed thanks to CPPO's first-order optimization/geometric perspectives and shows its effectiveness (as shown in Figure 4). Weaknesses: 1. As stated in Section 3.2.1, CPPO builds on CVPO and has two main differences (using the advantage instead of q and using the probability ratio instead of directly calculating q). Some readers may view these as limited novelty. Possibly adding comparisons against CVPO in the evaluation section can convey the importance of these differences better. 2. While I agree that CPPO reduces computational complexity, there are no empirical results in the evaluation section. Possibly, adding computation time results in the evaluation section can help. 3. I understand the benefits of converting the CRL problem into first-order optimization, but it is unclear what potential limitations/disadvantages the first-order optimization may have. Could there be an approximation error compared to second-order optimization? Related to this concern, could the authors clarify further why Assumption 3.5 is a fair assumption? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1.
Could the CPO baseline or other baselines also apply a similar recovery update strategy (i.e., cases 1-3 described in Section 3.2.2)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to you for your valuable and constructive comments. Our responses are as follows.** **W1**: As stated in Section 3.2.1, CPPO builds on CVPO and has two main differences (using the advantage instead of q and using the probability ratio instead of directly calculating q). Some readers may view these as limited novelty. Possibly adding comparisons against CVPO in the evaluation section can convey the importance of these differences better. **R1**: Thanks for your comment. We believe the proposed CPPO method is an extension of PPO in the constrained RL field, rather than a straightforward on-policy CVPO approach. The only thing these two methods have in common is that they share the same idea of finding an optimal policy within a trust region and progressively pushing the current policy towards the optimal one in an EM manner. The most important contribution of our work is using the **probability ratio** instead of the **probability density** to represent the optimal distribution. Previous MPO-based algorithms (MPO, CVPO, V-MPO) endeavour to directly derive the probability density $\psi$ according to $\int\psi(a|s)da=1$. However, note that this formulation does not inherently yield that $\sum\psi(a|s)=1$, **an error that persists across all three aforementioned algorithms**, leading to the **incorrect normalization** of $\psi^*$. In contrast, our work addresses this issue by employing the probability ratio $v$, which allows for a more straightforward calculation of the distribution while ensuring $E(v)=1$. Furthermore, in contrast to the KL divergence, the utilization of the $l_2$-norm ($\chi^2$ divergence) during the E-step offers a distinct **geometric interpretation** of the feasible region. This perspective facilitates the formulation of the **recovery update method**, which effectively minimizes costs while maintaining rewards.
In the M-step, the proposed CPPO algorithm conducts a policy update process akin to that of PPO, thereby obviating the necessity for additional hyperparameters as seen in other MPO-based approaches. Considering that CVPO is an off-policy algorithm, we have not included it as a benchmark in our current study. Nonetheless, we acknowledge its significance and plan to incorporate it for comparison in our future work. **W2**: While I agree that CPPO reduces computational complexity, there are no empirical results in the evaluation section. Possibly, adding computation time results in the evaluation section can help. **R2**: Thank you for your comment. Based on our experiment, when using the same number of sampled states, the CVPO algorithm requires approximately 5 seconds for one epoch update, whereas CPPO only takes less than 2 seconds for one epoch update. In our future work, we plan to include further computation time comparisons to gain a more comprehensive understanding of the algorithms' performance. **W3**: I understand the benefits of converting the CRL problem into first-order optimization, but it is unclear what potential limitations/disadvantages the first-order optimization may have. Could there be an approximation error compared to second-order optimization? Related to this concern, could the authors clarify further why Assumption 3.5 is a fair assumption? **R3**: Thanks for your comment. You are correct in pointing out that first-order optimization can introduce errors, primarily due to the fact that the optimal distribution calculated in the E-step may not align perfectly with a Gaussian distribution. As a result, the actor policy may not precisely reach the optimal policy. This limitation also necessitates the implementation of a recovery update strategy. For Assumption 3.5, this assumption is based on the geometric intuition that the optimal solution of equation (5) will consistently lie on the boundary of the feasible region.
To validate our hypothesis, we conducted a comparison between our solution and the results obtained using Matlab's *fmincon* function, affirming the accuracy of our approach. Thus, we firmly believe in the validity of this hypothesis. **Q1**: Could the CPO baseline or other baselines also apply a similar recovery update strategy (i.e., cases 1-3 described in Section 3.2.2)? **A1**: Thanks for your comment. The recovery update strategy is specifically tailored for the feasible region method, indicating its theoretical applicability to CPO. However, the practical implementation of this recovery process on CPO could be challenging. Primal-dual approaches do not necessarily require this strategy, as their convergence relies on the convergence of the Lagrange multiplier. **Lastly, we would like to express our gratitude for your patience in reviewing our response, and for your invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!** --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarifications. They answer my questions and concerns about this paper. After reading other reviewers' comments, I would like to retain my rating, as I am unsure about the acceptance. While I acknowledge the paper's technical contributions, the story of the paper may need to be entirely re-written to highlight the difference against CVPO (e.g., highlighting the issue of incorrect normalization and why this incorrect normalization is problematic in practice) as most of the reviewers are concerned about the novelty aspect. I also understand that directly comparing against CVPO is not straightforward, but having formal comparisons (e.g., computation time, correct normalization) against the baseline is important. Thank you again for clarifying my questions. --- Reply to Comment 1.1.1: Comment: Thank you for the further comments!
We will continue to revise and polish our work according to your advice. Your feedback is truly valuable to us!
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to the reviewers for their valuable and constructive comments.** Many reviewers have raised questions regarding the novelty of our work. It is important to emphasize that the proposed CPPO method is an extension of PPO in the constrained RL field, rather than a straightforward on-policy CVPO approach, even though they share the same expectation-maximization principle. The most important difference and contribution lies in the utilization of the **probability ratio** instead of the **probability density** to represent the optimal distribution. This modification not only simplifies the computation of the optimal distribution but also lays the foundation for the subsequent geometric interpretations and the development of the recovery update strategy. **Lastly, we would like to express our gratitude for the reviewers' patience in reviewing our response, and for their invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!**
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper studies constrained reinforcement learning problems. It proposes a new EM-type algorithm and designs a heuristic version for practical use. The authors also conduct some numerical experiments to validate the performance of the algorithm. Strengths: 1. The algorithm is first-order and thus computationally efficient in practice. 2. The numerical results look convincing. Weaknesses: 1. The algorithm looks very similar to CVPO and only has some small modifications. 2. This paper does not have convincing theoretical analysis. For example, the authors claim that the heuristic algorithm in Section 3.3 will stop in just a few iterations (Remark 3.6), but no theoretical proofs are given. It would be better if the paper could provide some theoretical study, even only for simple cases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to you for your valuable and constructive comments; thank you for your support. Our responses are as follows.** **W1**: The algorithm looks very similar to CVPO and only has some small modifications. **R1**: Thanks for your comment. We believe the proposed CPPO method is an extension of PPO to the constrained RL field, rather than a straightforward on-policy CVPO approach. The only thing these two methods have in common is that they share the same idea of finding an optimal policy within a trust region and progressively pushing the current policy towards the optimal one in an EM manner. Similar ideas have been successfully applied in CPO, MPO, and V-MPO as well. The most important contribution of our work is using the **probability ratio** instead of the **probability density** to represent the optimal distribution. Previous MPO-based algorithms (MPO, CVPO, V-MPO) endeavour to directly derive the probability density $\psi$ according to $\int\psi(a|s)da=1$. However, note that this formulation does not inherently yield $\sum\psi(a|s)=1$, **which is an error that persists across all three aforementioned algorithms**, leading to the **incorrect normalization** of $\psi^*$. In contrast, our work addresses this issue by employing the probability ratio $v$, which allows for a more straightforward calculation of the distribution while ensuring $E(v)=1$. Furthermore, in contrast to the KL divergence, the utilization of the $l_2$-norm ($\chi^2$ divergence) during the E-step offers a distinct **geometric interpretation** of the feasible region. This perspective facilitates the formulation of the **recovery update method**, which effectively minimizes costs while maintaining rewards. In the M-step, the proposed CPPO algorithm conducts a policy update process akin to that of PPO, thereby obviating the necessity for additional hyperparameters as seen in other MPO-based approaches. 
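As an aside, the ratio normalization described in R1 can be illustrated with a small, hypothetical sketch; the temperature `eta` and the exponential-of-advantage form are assumptions in the spirit of MPO-style E-steps, not the paper's exact formulas:

```python
import numpy as np

def optimal_ratio(advantages, eta=1.0):
    """v_i stands for psi*(a_i|s) / pi_old(a_i|s) on samples a_i ~ pi_old.
    Normalizing by the empirical mean enforces E_{pi_old}[v] = 1, the
    constraint the rebuttal contrasts with the density normalization."""
    v = np.exp(np.asarray(advantages, dtype=float) / eta)
    return v / v.mean()
```

Because the samples are drawn from the current policy itself, dividing by the sample mean is all that is needed; no partition function over the action space has to be computed.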
**W2**: This paper does not have convincing theoretical analysis. For example, the authors claim that the heuristic algorithm in Section 3.3 will stop in just a few iterations (Remark 3.6), but there is no theoretical proof. It would be better if the paper could provide some theoretical study, even only for simple cases. **R2**: Thanks for your comment. The heuristic algorithm presented in Section 3.3 is a two-step recursion that effectively addresses the optimization problem in (5). When the solution comprises a set of $N$ elements, the algorithm can be expressed as follows: * Step 1: We neglect the lower bound constraint in (5) and derive an optimal solution following Theorem 3.4. If this solution satisfies the lower bound constraint, it is valid, and we **output the solution**. If not, we proceed to Step 2. * Step 2: In the case where $k$ elements violate the lower bound constraint, based on Assumption 3.5, we set the final solution for these $k$ elements to the lower bound. Subsequently, we transform the remaining $N-k$ elements, creating **a new optimization problem of the same form as (5)**. We can then solve this problem by repeating Steps 1 and 2 iteratively. Step 1 attains the optimal solution of the unconstrained problem, and the definition of Assumption 3.5 lets us deduce that the final solution will always be the optimal solution of (5) whenever the assumption holds. **Lastly, we would like to express our gratitude for your patience in reviewing our response, and for your invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!** --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! However, I am not sure whether it is reasonable to make Assumption 3.5. In addition, I still feel the novelty of the algorithms is not very significant. Therefore, I will retain my score. 
--- Reply to Comment 1.1.1: Comment: Thank you for the further comments! We will continue to revise and polish our work according to your advice. Your feedback is truly valuable to us!
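The two-step recursion described in R2 above can be sketched as follows. Since problem (5) is not reproduced in the thread, the quadratic objective below (minimize $\|v - t\|^2$ subject to $\mathrm{mean}(v)=1$ and $v \ge lb$) is a hypothetical stand-in with the same structure: an equality-constrained problem with a closed-form unconstrained solution, plus a lower bound:

```python
import numpy as np

def solve_with_lower_bound(target, lb=0.0):
    """min ||v - target||^2  s.t.  mean(v) = 1 and v >= lb,
    via the two-step recursion: solve with the equality constraint only,
    pin the violators to the lower bound, and recurse on the rest."""
    target = np.asarray(target, dtype=float)
    v = np.empty_like(target)
    free = np.ones(len(target), dtype=bool)
    budget = float(len(target))          # sum the free entries must still reach
    while True:
        t = target[free]
        # Step 1: the equality-only optimum is a uniform shift of the targets
        cand = t + (budget / free.sum() - t.mean())
        bad = cand < lb
        if not bad.any():
            v[free] = cand
            return v
        # Step 2: fix violators at the lower bound; the rest form a problem
        # of the same form with a reduced budget
        idx = np.flatnonzero(free)
        v[idx[bad]] = lb
        free[idx[bad]] = False
        budget -= lb * bad.sum()
```

Each pass either terminates or removes at least one element, so the loop runs at most $N$ times, which matches the "stops in just a few iterations" behaviour claimed in Remark 3.6.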
Summary: This paper focuses on constrained reinforcement learning and proposes a method called Constrained Proximal Policy Optimization (CPPO). Experimental results demonstrate improved performance in terms of episodic return and episodic cost. Strengths: + The proposed CPPO achieves an improved balance between return and cost empirically. Weaknesses: - The novelty and contribution of this work are not clear. The difference between the proposed CPPO and CVPO does not seem to be very significant. Adopting the advantage value instead of the Q-value is a natural extension. Besides, it is unclear how much sample complexity can be reduced by replacing $q$ with $v$. After all, $v$ should still satisfy the expectation constraint. - The paper does not have any theoretical characterization of the proposed CPPO algorithm. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - It is unclear why replacing the Lagrangian-based approach with the feasible-region-based approach can reduce the computational complexity. Solving a hard-constrained optimization problem seems to be more challenging. Please clarify. - Can an off-policy version be developed for CPPO to improve its sampling efficiency? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper mentions that the CPPO method is an on-policy constrained RL algorithm, which suffers from lower sampling efficiency compared to off-policy algorithms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **We would like to express our sincere gratitude to you for your valuable and constructive comments; thank you for your support. Our responses are as follows.** **W1**: The novelty and contribution of this work are not clear. The difference between the proposed CPPO and CVPO does not seem to be very significant. Adopting the advantage value instead of the Q-value is a natural extension. Besides, it is unclear how much sample complexity can be reduced by replacing $q$ with $v$. After all, $v$ should still satisfy the expectation constraint. **R1**: Thanks for your comment. We believe the proposed CPPO method is an extension of PPO to the constrained RL field, rather than a straightforward on-policy CVPO approach. The only thing these two methods have in common is that they share the same idea of finding an optimal policy within a trust region and progressively pushing the current policy towards the optimal one in an EM manner. Similar ideas have been successfully applied in CPO, MPO, and V-MPO as well. The most important contribution of our work is using the **probability ratio** instead of the **probability density** to represent the optimal distribution. Previous MPO-based algorithms (MPO, CVPO, V-MPO) endeavour to directly derive the probability density $\psi$ according to $\int\psi(a|s)da=1$. However, note that this formulation does not inherently yield $\sum\psi(a|s)=1$, **which is an error that persists across all three aforementioned algorithms**, leading to the **incorrect normalization** of $\psi^*$. In contrast, our work addresses this issue by employing the probability ratio $v$, which allows for a more straightforward calculation of the distribution while ensuring $E(v)=1$. Furthermore, in contrast to the KL divergence, the utilization of the $l_2$-norm ($\chi^2$ divergence) during the E-step offers a distinct **geometric interpretation** of the feasible region. 
This perspective facilitates the formulation of the **recovery update method**, which effectively minimizes costs while maintaining rewards. In the M-step, the proposed CPPO algorithm conducts a policy update process akin to that of PPO, thereby obviating the necessity for additional hyperparameters as seen in other MPO-based approaches. Regarding sample complexity, CPPO efficiently estimates the cost return by combining the surrogate cost objective (calculated from the **cost advantage value**) with the **cost return of the current policy**. Unlike CVPO, which necessitates sampling several actions under the same state, CPPO avoids this requirement, reducing its sample complexity. **W2**: The paper does not have any theoretical characterization of the proposed CPPO algorithm. **R2**: Thanks for your comment. Conducting a convergence analysis of the CPPO method is challenging, and it is hard to address this issue in the current paper. We will look into adding a convergence analysis for the CPPO algorithm in our future work, if possible. **Q1**: It is unclear why replacing the Lagrangian-based approach with the feasible-region-based approach can reduce the computational complexity. Solving a hard-constrained optimization problem seems to be more challenging. Please clarify. **A1**: Thanks for your comment. As mentioned in the Introduction, the advantage of the feasible-region-based approach over the primal-dual method is its superior **convergence speed**. The feasible region method implicitly determines the Lagrange multiplier, eliminating the need to update it through the gradient. The experimental results in the Circle environment and the comparison[1] between CPO and PDO[2] also further validate this advantage. **Q2**: Can an off-policy version be developed for CPPO to improve its sampling efficiency? **A2**: Thanks for your comment. 
We are in the process of developing the off-policy CPPO algorithm, wherein we leverage sampled trajectories from the trajectory buffer. This method bears similarities to ACER[3]. We believe that this approach holds promise and can potentially address certain challenges in our research domain. **Lastly, we would like to express our gratitude for your patience in reviewing our response, and for your invaluable assistance in enhancing our paper thus far! Please let us know if you have any further questions. We are actively available!** [1] Achiam, J., Held, D., Tamar, A., & Abbeel, P. (2017, July). "Constrained policy optimization". In International Conference on Machine Learning (pp. 22-31). PMLR. [2] Chow, Yinlam, et al. "Risk-constrained reinforcement learning with percentile risk criteria." The Journal of Machine Learning Research 18.1 (2017): 6070-6120. [3] Wang, Ziyu, et al. "Sample efficient actor-critic with experience replay." arXiv preprint arXiv:1611.01224 (2016). --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. After going through the reviews and rebuttals, I feel the novelty and contribution of this work need to be strengthened in order to get it accepted. I have decided to keep my original rating. --- Reply to Comment 1.1.1: Comment: Thank you for the further comments! We will continue to revise and polish our work according to your advice. Your feedback is truly valuable to us!
Fast Approximation of Similarity Graphs with Kernel Density Estimation
Accept (spotlight)
Summary: Given a set of n points and a pairwise kernel k(x,y), the fully connected similarity graph K is the weighted graph that has an edge between every pair x,y with weight k(x,y). The paper presents an algorithm for computing a sparse graph G that approximates K in a certain sense that is useful for spectral clustering. The algorithm is based on black-box calls to an efficient kernel density estimation (KDE) algorithm. For some kernels such black-boxes are available and have been the focus of much work recently, and they lead to efficient graph construction in this paper. The result is given in the form of a formal theorem and empirical evaluation. Strengths: The paper is generally clearly written and contains nice and convincing theoretical and empirical findings that contribute to the literature on the topic and could be useful in practice. It is a nice application of the recent progress on efficient KDE to graph clustering. Even though it may not be as novel and general as advertised (see below), I still find it above the bar for acceptance. Weaknesses: (1) Novelty is somewhat limited, and seems overstated. The intro (lines 61-62) claims "a novel connection between the KDE and the fast construction of similarity graphs". This connection is quite well-known already and has been the focus of some recent works, including [1], [6] and "Spectral Sparsification of Metrics and Kernels" (Quanrud, SODA 2021). Anyway this connection between efficient KDE and kernel similarity graphs is very natural and straightforward, and it is unsurprising that all these works appeared shortly after the recent progress on efficient KDE. These works instantiate the connection in possibly different ways, but the general idea is similar. The novelty in this work seems to me restricted to the specific technical application to the SZ clustering framework, and not so much a conceptual point about kernels and graphs. 
(2) A more specific concern about novelty is that the main sampling method at the heart of the algorithm (lines 195-204, Algorithm 1) seems to have appeared already in [6] (see their "sample random neighbor" primitive). The work is cited only for a certain conditional hardness result, while its very similar algorithmic ideas are not mentioned. How do these techniques relate to each other? (3) The generality of the result itself is also overstated, particularly in the abstract, which claims the result is "applicable to arbitrary kernels". I suppose this alludes to the fact that the algorithm uses KDE as a black-box, but this is rather misleading, since there are relatively few and specialized instances of known black-boxes of the type needed for this algorithm, and for a relatively limited set of kernels. For arbitrary kernels the black-box does not exist and the result of the paper is not applicable. This affects the writing in the intro. Lines 35-40 formulate the main question as "constructing a sparse graph that preserves the cluster structure", focusing on sparsity and eschewing running time, possibly in order to circumvent said limitations on generality. However, this renders the main question moot, since trivially the full graph can be constructed and then spectrally sparsified. It would probably have been preferable to promptly discuss running times with their inevitable limitations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please clarify connection to sampling in [6] (see above) 2. The paragraph about LSH in lines 90-94 (related work) sounds odd. It says it's unclear how to use LSH for approximating geometric similarity graphs. However, much of the work you cite for efficient KDE builds directly on LSH (and consequently, so do some instantiations of your algorithm, as well as similar graph algorithms from prior work mentioned earlier). Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No major concerns, though some limitations on the generality of the results may not be sufficiently discussed, as mentioned above in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your positive and detailed report, and valuable comments. We will take these into account when preparing the final version of our paper. Here are our responses to your questions and comments. **Question 1.** We agree that there is some similarity between our algorithm and the one in [6]. We remark that our algorithm involves sampling a random neighbour of *every* vertex in a single procedure, whereas the one in [6] samples a neighbour of one vertex. Due to this difference, the analysis of our algorithm requires a careful consideration of each row of the recursion tree in Figure 4 with respect to the sets $S_{i, j}$ and $X_{i, j}$. Furthermore, our application of these samples to construct a cluster-preserving sparsifier is novel and requires a much more involved analysis than the analysis of SZ [21] due to the sampling method based on KDE. **Question 2.** We will revise this paragraph in the final version. **Weakness 1.** We agree that the novelty in our work is the application of efficient KDE to the construction of a cluster-preserving sparsifier of the complete kernel similarity graph. It was not our intention to claim more than this, and following your comment we will make it clearer in the final version. **Weakness 2.** See the answer to Question 1. **Weakness 3.** Based on our black-box reduction, our algorithm is applicable to any kernel with an efficient KDE algorithm. We will clarify this and remove the claim that our algorithm is applicable to arbitrary kernels. As you point out, our goal is to develop an algorithm with fast running time for constructing a sparse similarity graph, and we will make this clearer in the introduction. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. About [6], I understand that your application of this sampling method (to many vertices simultaneously) and the analysis around it are different from [6], since you use it to a different end. 
But nonetheless, the sampling method that you present as the main idea in the beginning of section 4 is the same---calling it "some similarity" seems a gross understatement given how identical it is to [6]---and is presented without mention nor acknowledgement that it has appeared in prior work. This does not seem acceptable to me. Reiterating my original review, this submission is okay for acceptance in terms of its technical content, but the substantial issues with its claims to generality and novelty and lack of reference to prior work need to be addressed and not minimized or dismissed.
Summary: This work presents an algorithm framework for creating a (weighted) sparse similarity graph, which is an important object for a variety of ML problems. The algorithm samples pairs of points to include in the graph using a sampler based on kernel density estimation (KDE). The idea is to recursively split the data into subsets, compute the KDE on each subset, and select a subset proportional to the density. This process is done for every point in the dataset (node in the graph), resulting in a collection of edges in the sparse graph. A nice feature of this method is that it provably preserves the spectral clustering properties of the graph. To my knowledge, this is the only sub-quadratic (in space / time) algorithm that does this. The method is also agnostic to the choice of KDE solver, leading to interesting practical algorithms. When paired with a classical KDE method (the Fast Gauss Transform), the method outperforms a number of reasonable baselines for graph construction + spectral clustering in low dimensions. Strengths: **S1: Novel and interesting reduction,** from KDE to similarity graph construction. The sampling mechanism (Algorithm 1) is particularly nice, and it likely has further applications to other settings where we want to sample proportional to kernel sums. The proposed framework also benefits from and motivates the growing body of work on the KDE problem. This leads to exciting possibilities for practical algorithms, depending on which KDE method is used in the framework. **S2: Good theoretical results.** It is great to see rigorous guarantees on the quality of the subsequent partitioning of the similarity graph as well as the time / space complexity. Many studies only perform the latter, as it is difficult to prove this kind of result. **S3: Well-engineered implementation,** with reproducible experiments. I was able to fully reproduce all of the experiments in this paper. I think the implementation will be of use to the community. 
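The recursive sampling idea summarized above can be sketched as follows; the brute-force `kernel_sum` below is a stand-in for the fast KDE oracle (the real algorithm's speedup comes entirely from replacing it with a sub-linear KDE query), and the function names are illustrative, not the paper's:

```python
import numpy as np

def kernel_sum(x, X, sigma=1.0):
    # Stand-in KDE oracle: exact Gaussian kernel sum of query x over point set X.
    return float(np.exp(-np.sum((X - x) ** 2, axis=1) / sigma**2).sum())

def sample_neighbor(x, X, rng, sigma=1.0):
    """Sample index j with P(j) proportional to k(x, x_j): halve the candidate
    set, query the KDE on each half, descend into one half with probability
    proportional to its kernel sum, until a single point remains."""
    idx = np.arange(len(X))
    while len(idx) > 1:
        mid = len(idx) // 2
        left, right = idx[:mid], idx[mid:]
        wl = kernel_sum(x, X[left], sigma)
        wr = kernel_sum(x, X[right], sigma)
        idx = left if rng.random() < wl / (wl + wr) else right
    return int(idx[0])
```

Repeating this for every point (with a few samples each) yields the edge set of the sparse graph; the recursion depth is O(log n), which is where the near-linear running time comes from once the KDE calls are fast.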
Weaknesses: **W1: Notation.** The notation in Section 2 is understandable but would benefit from some polishing. For example, $w_G(u, v)$ is the edge weight function over node inputs and is defined only for those $(u,v)$ that are in the graph. But there is a $w_G(S, V\setminus S)$ (with a different-script $G$ and set-of-node inputs), which is the sum of outbound edge weights from within $S$ to other parts of the graph. There is also a $w(u,v)$ used in the definition of the node degree, which isn't defined anywhere. A graph is defined as $G = (V, E)$ (with a separately-specified weight function) but the similarity graph is given as $K = (V, E, \mathrm{weights})$. The kernel density sum KDE is introduced with a subscript $g_{[a,b]}$, but the analysis doesn't use this notation and later introduces $g_X$ as the KDE sum over a set of points $X$. This is all a bit confusing. One possible fix is to define everything in terms of a weighted graph $G = (V, E, w)$. Here, $w$ is the weighting function from $(u,v)\to \mathbb{R}^{+}$ where $w(u,v) = $ edge weight if $(u,v) \in G$ and 0 otherwise. This would give a much simpler definition for the adjacency matrix ($A_{ij} = w(v_i, v_j)$), the degree ($d(v) = \sum_{u \in V} w(v, u)$), and the outbound edge weight sum, which can be written as a unary function $w_o(S) = \sum_{u \in S, v \not \in S} w(u,v)$ to further reduce confusion with $w(u,v)$. The meanings of $\rho_K$ and $\lambda$ should also be introduced before they are used in the main theorem, even if the full definition is deferred until later. **W2: Literature positioning / context.** The flexibility of the framework is a major strength of the paper, so it would be nice to highlight the (growing) number of KDE algorithms. For example: - Hashing-Based Estimators (HBE): Uses LSH tables to do KDE, where we only compute the KDE over the points that collide in the LSH table. 
The original paper was in [[FOCS 2017]](https://arxiv.org/abs/1808.10530), with a practical version in [[ICML 2019]](http://proceedings.mlr.press/v97/siminelakis19a) and some incremental progress in [[NeurIPS 2019]](https://openreview.net/forum?id=H1xEABrgIH). (only provably works for some kernels, fast in high dimensions, good in practice). - Interpolation KDE: [[AISTAT 2021]](http://proceedings.mlr.press/v130/turner21a/turner21a.pdf) performs KDE via polynomial interpolation (for arbitrary smooth kernels). - RACE: [[WWW 2020]](https://dl.acm.org/doi/fullHtml/10.1145/3366423.3380244) performs fast, online KDE, using a few MB of space (only for a limited set of kernels, but extremely space / time efficient). - Discrepancy coresets: [[COLT 2019]](https://proceedings.mlr.press/v99/karnin19a.html) performs KDE using online coresets (a re-weighted sub-sample of the dataset). In practice, this requires slower preprocessing than other methods but is more general / can be more accurate. Each of these methods will present unique tradeoffs when incorporated into this framework. While a full comparison of various KDE subroutines is probably out-of-scope, it would be nice to discuss some of these tradeoffs / make recommendations. A few similarity graph construction methods have also been developed since the review paper by Luxburg in 2007, which should be mentioned: - NN Descent: [[WWW 2011]](https://dl.acm.org/doi/abs/10.1145/1963405.1963487) This is an iterative technique that progressively refines a random guess for the initial k-NN graph. - FLASH: [[SIGMOD 2018]](https://dl.acm.org/doi/abs/10.1145/3183713.3196925) This is an LSH-based method that constructs a similarity graph (i.e. finds all pairs $(u,v)$ such that $w(u,v)$ exceeds a threshold). This is much faster than other LSH methods because it uses clever counting tricks to avoid doing any explicit distance / kernel calculations. 
Finally, the method in the paper, "Learning space partitions for nearest neighbor search" [[ICLR 2020]](https://openreview.net/forum?id=rkenmREFDr), might be relevant. This paper provides a framework for approximate near neighbor search that starts with a similarity graph, then partitions the data via a balanced min-cut partitioning of the graph. These partitions are then used to do a partition-based (FAISS IVF-style) similarity search, with provable guarantees (first theory for this type of algorithm). From a practical perspective, the hard part of this framework is to get the initial similarity graph, so your algorithm might help. **W3: Experiments.** A couple of highly competitive baselines are not represented in the experimental evaluation, notably FLASH (which is $O(N \mathrm{polylog}(N))$) and NN-Descent (which scales about $O(N^{1.15})$, based on empirical results). For example, the C++ implementation of FLASH can construct a similarity graph for the webspam dataset (N = 350k) in under 10 seconds and for the friendster dataset (N = 65 million) in 1578 seconds. This is 3-10x faster than the result of extrapolating Figure 5, and these datasets are of much higher dimension than two moons. The datasets in the experiments are also all low-dimensional: even the BSDS task is only 5-dimensional. Many clustering applications (e.g. those that handle text embeddings) have inputs in the hundreds of dimensions. It would be nice to understand how this method performs in the higher-dimensional setting. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. Does T_KDE include preprocessing + query time? Most of the recent work in fast KDE requires a preprocessing algorithm (typically $O(N)$) followed by a query algorithm (typically much faster than $O(N)$). It looks like this is the case based on the T_KDE reported for FGT and the LSH method, but it wasn't clear from the definition. 
This is important because Algorithm 1 calls the KDE subroutine on different subsets of X. 2. What is the state of the art result (using e.g. neural network models) on the BSDS image segmentation dataset? Is it close to the result for k-NN / similarity graph algorithms or is it much better? **Minor items:** To get this to compile under MacOS arm64 (M1 chips), I did the following. 1. Pass an up-to-date clang compiler to cmake (via -D CMAKE_CXX_COMPILER). 2. Add the flag "-I/usr/local/include" to the compile_args in setup.py. 3. Break the function declarations in stag_lib/KMeansRex/mersenneTwister2002.c into a header. 4. pip install stag (stag isn't in requirements.txt, for some reason) Nitpick on Line 92: "Despite extensive research on LSH-based algorithms, it remains unclear whether such techniques can be employed to approximately construct a similarity graph in nearly-linear time." This has been done at scale (albeit without theoretical guarantees on the quality of the similarity graph), see section 4 of [the FLASH paper](https://arxiv.org/pdf/1709.01190.pdf). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: I do not foresee any negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
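The unified notation proposed in W1 can be made concrete with a small sketch (variable names hypothetical): represent the weight function as a dense matrix `W` with `W[i, j] = w(v_i, v_j)` and 0 for absent edges, so the degree and the outbound cut weight become one-liners:

```python
import numpy as np

def degree(W, v):
    # d(v) = sum_{u in V} w(v, u)
    return W[v].sum()

def cut_weight(W, S):
    # w_o(S) = sum_{u in S, v not in S} w(u, v)
    mask = np.zeros(len(W), dtype=bool)
    mask[list(S)] = True
    return W[np.ix_(mask, ~mask)].sum()
```

With this convention the adjacency matrix is simply `W` itself, which is the simplification the reviewer's suggested $G = (V, E, w)$ notation buys.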
Rebuttal 1: Rebuttal: Many thanks for your positive and detailed report, and valuable comments. We will take these into account when preparing the final version of our paper. Here are our responses to your questions and comments. **Question 1: KDE preprocessing and query time.** In our paper, $T_{\mathrm{KDE}}$ includes both the preprocessing and query time of the KDE algorithm. As you correctly point out, our algorithm applies the KDE algorithm with different subsets of the data points, and the preprocessing step is required each time. We will clarify this in the final version of the paper. **Question 2: State-of-the-art on the BSDS dataset.** The state-of-the-art for the (specific) BSDS dataset is recorded in [this paper](https://ieeexplore.ieee.org/document/5557884), and this achieves a Rand Index of 0.85. However, to the best of our knowledge, the Segment Anything Model (SAM) available [here](https://segment-anything.com/) presents the state-of-the-art for most image segmentation datasets. SAM has been trained on a dataset of 11 million images, and produces much better segmentation results than most unsupervised approaches. We emphasise that our algorithm is a general clustering algorithm, and is not developed specifically for image segmentation. **Weakness 1.** We agree with your comment. We'll take this into account, and improve our use of notation in the final version of the paper. **Weakness 2.** We're pleased to see that you consider the flexibility of our developed framework as a major strength of the paper. Following your suggestion, we will spend more effort discussing recent work on KDE algorithms, and their tradeoffs when incorporated into our developed framework. **Weakness 3.** We'll discuss the application of our algorithm on higher-dimensional datasets, and compare it against the algorithms (FLASH and NN-Descent) that you mentioned in the final version. 
Conducting such experimental studies is difficult at the moment, due to the limited time of the rebuttal stage. **Minor items.** We are pleased that you were able to reproduce our experimental results with the provided code and for your comments on compiling and running the code. We will update our README based on your feedback. --- Rebuttal Comment 1.1: Comment: Thanks for the response! I am still not fully convinced on the experiments, but I am willing to accept the segmentation results as evidence that the algorithm scales to N = 150k (though not to high dimensions). Today's most interesting applications of clustering (in NLP, vision, search, recommendation, etc) are in much higher dimensions, and it is not clear from these experiments whether this algorithm will yield SOTA or even scale all that well. However, I do not think this is a fatal issue. The main contribution of the paper lies in proposing a general framework, and the theoretical contribution is strong enough to place this paper above the bar for acceptance. Furthermore, the experimental evaluation is on just one instantiation of the framework - given the recent work in the KDE area, this is likely not even the strongest configuration, so the method might actually be a lot better in practice than it seems from these results. I'll be very interested to see the comparison with FLASH / NN-Descent (there are also likely to be some other, more recent similarity graph construction methods that would need to be considered if this were primarily an empirical paper). I do understand the difficulty of turning around experiments in the short rebuttal time frame, though, and issues with experiments should not count against the submission.
Summary: The paper proposes an efficient approximation of the similarity graph using the Kernel Density Estimation (KDE) method as a black box. The proposed framework can preserve the clustering structure and can be run in nearly linear time. Lastly, experiments on synthetic and image segmentation datasets are provided to support these claims. Strengths: 1. [Originality] The authors propose a novel algorithm by combining the ideas of subset/subspace dividing (somewhat related to LSH) with the help of the probability measure estimated from KDE. 1. [Clarity] An excellently written paper that provides clear insights into motivations and formulation. Great overview of how the algorithm works in Figures 3-4 and Algorithms 1-2. The fast runtime is achieved by the $O(\log(n))$ recursive calls to the KDE as well as the superadditive property of $T_{KDE}$. 1. [Quality] The authors compared their proposed algorithm with publicly available nearest neighbor implementations (FAISS, sklearn) and show drastic speedup in the large sample size regime. 1. [Significance] I believe this paper makes a significant contribution to the unsupervised learning domain. The graph sparsification/Laplacian construction is usually done in a two-step approach, i.e., calculate pairwise distances and compute the similarity graph (with an additional sparsification step). The proposed method can do the aforementioned steps in a single step, thus drastically reducing the run time. Weaknesses: 1. The authors have shown theoretically that the proposed algorithm is a *cluster-preserving sparsifier*. This claim can be further supported by showing a comparison between the eigenvalues (or eigengaps) of the true graph, graph sparsifiers (e.g., from [13, 14, 19]), and the proposed method. 1. There is another kind of similarity graph, called the $\varepsilon$-radius graph. The graph $G(V, E)$ is constructed with $E = \{ (x_i, x_j) \in V^2 : \|x_i - x_j\| \leq \varepsilon \}$. 
I think it will be informative to discuss this in the Introduction (L27-34). 1. It is quite confusing to see Theorem 1 without the full definition, or even a high-level description, of what $\rho$, $\lambda$, $N_K$ are in Section 1. Readers can somewhat guess what they are based on notational convention, but to improve readability, I would suggest either including the full definition or (even better) simplifying the statement to make it less rigorous when it is first introduced in Section 1 (and then providing a detailed version later in Section 4 or the Appendix if there is no space). 1. For the runtime of the FGT (L148), it might be more informative to write out all the terms inside the log rather than writing $\widetilde O(m+n)$, because the error term $\epsilon$ is hidden in the log. 1. It would be interesting to see experiments on higher-dimensional (real or synthetic) datasets and compare the runtime correspondingly. The experiments are run on (relatively) low-dimensional and straightforward examples, e.g., synthetic datasets (d=2) and image data (d=5). Some potential high-dimensional real datasets to consider are, e.g., [A, B]. --- [A] Mahmoud, Eman, Ali Takey, and Amin Shoukry. “Spectral Clustering for Optical Confirmation and Redshift Estimation of X-Ray Selected Galaxy Cluster Candidates in the SDSS Stripe 82.” Astronomy and Computing 16 (2016): 174–84. [B] Chmiela, Stefan, Alexandre Tkatchenko, Huziel E. Sauceda, Igor Poltavsky, Kristof T. Schütt, and Klaus-Robert Müller. “Machine Learning of Accurate Energy-Conserving Molecular Force Fields.” Science Advances 3, no. 5 (2017): e1603015. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Related to Weakness #2, in practice, people usually use an exponentially decaying kernel $k$ for the fully connected graph.
You can approximate the fully connected graph with an $\varepsilon$-radius graph by choosing the radius $\varepsilon$ and the bandwidth $\sigma$ appropriately; e.g., if using the exponential kernel as per L132, you can choose $\varepsilon = 3\cdot \sigma$; that way, sufficiently distant pairs will have small kernel values, i.e., $\exp(-9) \sim 10^{-4}$, which is negligible. I am curious whether your method can be extended to the $\varepsilon$-radius graph under this scenario as well? 1. I am curious how difficult it is to extend this framework to the scenario of higher-order (neighborhood) simplicial complexes and/or the k-Laplacian. There are several works trying to use this information in data analysis [A-D], as well as algorithms [E] to sparsify higher-order simplicial complexes with a higher-order Cheeger constant. The reason I am asking is that the higher-order k-Laplacian has a dependency on $O(n_k)$, where $n_k$ is the cardinality of the k-complex (e.g., the number of edges is $n_1 = |E|$), so if we can successfully sparsify the simplicial complex and/or k-Laplacian, it can reduce the runtime drastically for those studies. 1. How can this framework be extended to the manifold learning or diffusion map setting? In this scenario, we care not only about the approximation of the null space but also about the first few eigenvalues. Can we have similar guarantees for these scenarios? 1. [Minor language usage] L21-23 (“Thanks to its out-performance over traditional clustering algorithms like k-means, this approach has…”) might be improved by something like “Due to its superior performance compared to conventional clustering algorithms such as k-means, this approach has….” 1. [Minor notational consistency] For the equation between L194-195, you might be able to improve the clarity by changing the $z \in X_1$ to $x_j \in X_1$ and the $x_j$ in the summation to another variable. This is because you used $x_i$ and $x_j$ in the discussion above (L192).
Similar to L13 in Algorithm 1, you might want to reconsider the use of $y_i$ there to make the notation more consistent. --- [A] Dey, Tamal K., Jian Sun, and Yusu Wang. “Approximating Loops in a Shortest Homology Basis from Point Data.” In Proceedings of the Twenty-Sixth Annual Symposium on Computational Geometry, 166–75, 2010. [B] Chazal, Frédéric, Brittany Fasy, Fabrizio Lecci, Bertrand Michel, Alessandro Rinaldo, and Larry Wasserman. “Robust Topological Inference: Distance to a Measure and Kernel Distance.” The Journal of Machine Learning Research 18, no. 1 (2017): 5845–84. [C] Chen, Yu-Chia, and Marina Meila. “The Decomposition of the Higher-Order Homology Embedding Constructed from the k-Laplacian.” Advances in Neural Information Processing Systems 34 (2021). [D] Keros, Alexandros D., Vidit Nanda, and Kartic Subr. “Dist2cycle: A Simplicial Neural Network for Homology Localization.” In Proceedings of the AAAI Conference on Artificial Intelligence, 36:7133–42, 2022. [E] Osting, Braxton, Sourabh Palande, and Bei Wang. “Spectral Sparsification of Simplicial Complexes for Clustering and Label Propagation.” ArXiv:1708.08436 [Cs], February 1, 2019. http://arxiv.org/abs/1708.08436. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: I believe that the authors have covered all the limitations of the algorithm (e.g., it is only cluster-preserving, not a spectral sparsification; the runtime will depend on the implementation of $T_{KDE}$; etc.) in the current version, but collecting and summarizing them in a dedicated section would be beneficial. Negative social impact is not applicable in this case, as this work primarily constitutes a theoretical contribution.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
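The ε-radius approximation argument raised in Question 1 of the review above can be checked numerically. The sketch below is not from the paper; the function name `eps_radius_graph` and the dense NumPy construction are illustrative, assuming the exponential kernel $k(x, y) = \exp(-\|x - y\|^2 / \sigma^2)$ that the reviewer's L132 reference suggests:

```python
import numpy as np

def eps_radius_graph(X, sigma, c=3.0):
    """Dense epsilon-radius similarity graph with eps = c * sigma.

    For the exponential kernel k(x, y) = exp(-||x - y||^2 / sigma^2),
    any pair farther apart than eps = 3 * sigma has kernel value at
    most exp(-9) ~ 1.2e-4, so truncating those entries changes the
    graph negligibly -- the reviewer's point.
    """
    diff = X[:, None, :] - X[None, :, :]
    d2 = (diff ** 2).sum(axis=-1)        # pairwise squared distances
    W = np.exp(-d2 / sigma ** 2)
    W[d2 > (c * sigma) ** 2] = 0.0       # epsilon-radius truncation
    np.fill_diagonal(W, 0.0)             # no self-loops
    return W
```

With `sigma = 1.0`, a pair at distance 10 is truncated to weight 0 while a pair at distance 0.1 keeps weight $\exp(-0.01) \approx 0.99$; each dropped edge had weight at most $\exp(-9) \approx 1.2 \times 10^{-4}$. Note this dense construction is only for illustration at small $n$; the paper's whole point is to avoid the $O(n^2)$ pairwise computation.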
Rebuttal 1: Rebuttal: Many thanks for your positive and detailed report, and valuable comments. We will take these into account when preparing the final version of our paper. Here are our responses to your questions and comments. **Question 1.** Our method constructs a sparsifier which has the same cluster structure as the fully-connected kernel graph. As you point out, for an exponentially-decaying kernel the $\epsilon$-radius graph will have a very similar structure to the kernel graph. Hence our constructed graph could also be viewed as a cluster-preserving sparsifier of an $\epsilon$-radius graph. **Questions 2 and 3.** Thank you for the questions on generalising our approach to higher-order structures, and to manifold learning. These are excellent directions for future work, but we have not considered them so far. We will add some discussion in the final version of the paper. **Questions 4 and 5.** We will make the necessary changes based on your suggestions. **Weakness 1.** Classical algorithms for constructing spectral sparsifiers [13, 14, 19] are based either on a complicated graph decomposition framework or on Laplacian solvers. Despite their nearly-linear running time proven in theory, implementing these algorithms is a very challenging task. In fact, as far as we know, there is no publicly available implementation of nearly-linear time algorithms for spectral sparsifiers that works for a large-scale graph, and hence comparing the eigenvalues (or eigen-gap) between our constructed graph and spectral sparsifiers isn't feasible for a large-scale graph. Taking this into account, we compare the eigen-gap of our sparsifier with those of a fully connected graph and an SZ sparsifier. We study the two moons dataset, and in Table 1 in the attached PDF we report the eigengap $\lambda_3 / \lambda_2$, where $\lambda_i$ is the $i$th smallest eigenvalue of the normalised Laplacian matrix of each constructed graph.
We find that both the SZ algorithm and our proposed algorithm preserve a large eigen-gap between $\lambda_2$ and $\lambda_3$, which implies that the cluster structure of the graph is preserved. **Weaknesses 2 - 4.** We will incorporate these editorial suggestions into the final version of the paper. **Weakness 5.** We will add some discussion on applying our algorithm to high-dimensional data in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your answers. The additional experiment (from Table 1) looks good; a larger eigengap than SZ seems to imply a better sparsification result. I have no further questions.
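The eigen-gap diagnostic reported in the rebuttal's Table 1 can be reproduced on any weighted adjacency matrix. The rebuttal does not specify an implementation, so the sketch below is a hedged stand-in (the helper name `normalised_laplacian_eigengap` and the dense NumPy linear algebra are ours):

```python
import numpy as np

def normalised_laplacian_eigengap(W):
    """Return lambda_3 / lambda_2 for the normalised Laplacian
    L = I - D^{-1/2} W D^{-1/2} of a weighted adjacency matrix W.

    A large ratio means lambda_2 is tiny relative to lambda_3,
    i.e. the graph splits cleanly into two clusters, as in the
    two moons experiment described in the rebuttal.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    L = np.eye(len(W)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    lam = np.sort(np.linalg.eigvalsh(L))   # eigenvalues, ascending
    return lam[2] / lam[1]                 # lambda_3 / lambda_2
```

Dense `eigvalsh` is $O(n^3)$, so this is only a validation tool at moderate $n$; on a large sparsified graph one would instead compute the few smallest eigenvalues with a sparse eigensolver such as `scipy.sparse.linalg.eigsh`.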
Summary: The authors propose a fast approximation method for constructing sparse similarity graphs from data for spectral clustering. They leverage Sun and Zanetti (2019)'s sampling-based approach to find a "cluster-preserving sparsifier" for similarity graphs. This is combined with black-box KDE and binary search algorithms to generate a fast and memory-efficient approximation of SZ's algorithm. The experimental results show the method to be fast and accurate compared to the Scikit-learn and FAISS implementations. Strengths: This paper combines existing ideas (e.g. the SZ algorithm, KDE algorithms, binary search) to create a novel approach to kernel graph construction that has the potential to be used by many researchers in the community. The experimental results, including the runtime comparison and image segmentation (e.g. Figure 2), are quite impressive. It was also well written and clearly presented. Weaknesses: Please see questions below. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Given how it was used as a major motivation in this paper, I expected to see a comparison with the output of the SZ algorithm in the experiments section. Have the authors compared the SZ and proposed methods experimentally for clustering? 2. What do the calculated probabilities and sampled sparse graphs look like? Can the authors show any examples on a small graph or using a synthetic dataset? And how does the graph actually compare to that from SZ? 3. It's missing a discussion on the limitations of this work. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Not really.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your very positive report. Here are our responses to your questions. **Question 1.** In Figure 1 of the attached PDF file, we provide an updated version of the experiment on the two moons dataset to include the results of the SZ algorithm. The SZ algorithm iterates over every edge weight in the fully connected similarity graph in order to construct a sparsifier, and so its running time has a quadratic dependency on the number of data points. **Question 2.** Figure 2 of the attached PDF file illustrates the structure of the constructed graph on a small toy dataset with 100 data points. We observe that both graphs preserve the cluster structure of a fully connected similarity graph. We will include an illustration demonstrating the sparse graph structure in the final version of the paper. **Question 3.** We will add more discussion on the limitations of our work in the final version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the comments and additional experimental results. I acknowledge that I've read comments from the authors and other reviewers, and have no further questions.
Rebuttal 1: Rebuttal: We thank all reviewers for their positive and detailed reviews. We respond to their specific questions individually, with reference to some additional figures and table in the attached PDF. Pdf: /pdf/17fc5a9b4d9bcc04ab397d481f3c85f7d9cf248a.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Recurrent Temporal Revision Graph Networks
Accept (poster)
Summary: Temporal graphs are more accurate for modeling real-world scenarios compared to static graphs, but the current approach of extending neighbor aggregation from static graphs to temporal graphs is computationally expensive when considering all historical neighbors. To address this, the authors propose a novel framework that uses recurrent neural networks with node-wise hidden states to integrate information from all historical neighbors for each node, ensuring complete neighbor information without subsampling biases. Strengths: 1. The model proposed by the authors looks reasonable; at least, their experiments suggest so. 2. The authors do a good job of describing the model. They propose two measures of expressiveness for temporal graph models, specifically in the context of temporal link prediction and temporal graph isomorphism tests. They establish a theoretical connection between these measures and demonstrate the superior expressiveness of their framework, while also presenting new findings on the expressiveness of other baseline methods. 3. The ablation studies display the positive effect of each mechanism. Weaknesses: Some concerns about this paper should be discussed more precisely. 1. In the introduction, the heterogeneity in temporal revision is not explained. 2. Figure 1 lacks a specific description. The other baselines and the framework in this paper could be specified in the legend. 3. The paper lacks a description of Table 1; the authors should add one. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Eqs. 3, 4 and 5 all refer to an arbitrary learnable function; could you please specify what function is used in this paper? 2. In the experiment shown in Table 3, is it possible to display the results of all datasets? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback which we will consider seriously in our revised paper. **Regarding your comment on weakness 1.** Thank you for this valuable feedback. We will add a simple explanation of what heterogeneity means in the context of temporal revision. In simple terms, it means we will treat self-probing nodes differently in the revision calculation. **Regarding your comment on weakness 2.** In our revised paper, we will refine the legend and caption to more clearly indicate what each subfigure illustrates. **Regarding your comment on weakness 3.** Thank you for this valuable feedback. We will provide a summary of the results and the main insights therein in the caption of Table 1. **Regarding your question 1.** We would like to refer you to Appendix A.2, where we have provided detailed implementations of all learnable functions we mentioned in the main paper. We will mention the implementation ideas in the main paper and refer readers to the Appendix for more details. **Regarding your question 2.** We selected 4 out of 8 datasets, due to limited computational resources, as representatives which cover different characteristics such as bipartite/non-bipartite and with/without edge features. We now have the full experiment results on all 8 datasets, which are provided in the uploaded PDF in the global rebuttal. As we can observe, the basic trend remains the same as before. Baseline methods show limited improvement when transitioning from the 1-layer to the 2-layer setting. By contrast, our proposed RTRGN exhibits significant improvements in the 2-layer setting (except for Wikipedia and Reddit, where the precision of the 1-layer model is already as high as 98\% and the 2-layer model may suffer from overfitting) and clearly outperforms all baselines. We sincerely appreciate your valuable feedback and hope that our responses adequately address your concerns. --- Rebuttal Comment 1.1: Title: Thanks a lot!
Comment: Thanks for your response and the additional experiments. I would like to keep my score.
Summary: The paper studies the problem of temporal graph learning. The authors propose the recurrent temporal revision (RTR) layer, which involves learning a hidden state for each node and utilizing recurrent neural networks to integrate information from all neighbors. Additionally, the authors introduce heterogeneity in RTR and theoretically demonstrate that this heterogeneous revision enhances expressiveness beyond Temporal-1WL. Strengths: S1. The majority of the paper is easy to follow. S2. Experimental results show that RTR outperforms state-of-the-art methods. Weaknesses: W1. My main concern is with the motivation of the paper. Why do we need to make each node embrace complete neighbor information? In real-world recommendation system data, the browsing/purchasing tendencies of users are often more closely related to their short-term interests. The inclusion of excessive and outdated information may introduce more noise to the model instead. W2. The proposed RTR can be understood as weight learning and important-neighbor selection based on complete historical information. Could changing the sampling strategy from selecting the k nearest neighbors to selecting the k most important historical neighbors for methods like TGN also result in similar performance improvements? W3. The paper lacks an analysis of time complexity as well as a comparison of the models' parameter counts. Minor comments M1. Page 1, Line 16: social network, recommender system -> social networks, recommender systems M2. Page 4, Line 160: an learnable function -> a learnable function M3. Page 8, Figure 4 caption: TGN and GRU-GCN fails to -> TGN and GRU-GCN fail to Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to W1-W3. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have some concerns about the scalability of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback which we will consider seriously in our revised paper. **Regarding your comment W1.** We would like to emphasize that long-term user behavior has been a major focus in recommender systems. Numerous studies, such as [R1, R2, R3, R4, R5], have demonstrated its effectiveness in improving the accuracy of user behavior modeling. While it is true that short-term behaviors can have a significant impact on a user's near-future actions, they come with limited volume and thus may not fully capture the preferences of users over time, which may impair the performance in more general scenarios. The hidden state in our proposed RTRGN will be fed with the entire interaction sequence under a recurrent update scheme, which enables it to flexibly focus on either long-term or short-term behaviors based on the characteristic of the problems/datasets at hand. We will discuss more about the long-term and short-term trade-off and further explain the motivation behind our approach in the revised paper. **Regarding your question in W2.** Switching from k nearest neighbors to k most important historical neighbors could potentially benefit methods like TGN. However, 1) a fixed k can limit its performance. By contrast, RTRGN with its RNN and hidden state could potentially take a broader range of historical information into account; 2) accurately and effectively calculating/maintaining the k most important neighbors of dynamically evolving nodes can be time-consuming and challenging. In comparison, RTRGN with RNN can use simple nearest sampling to efficiently model the entire interaction history. **Regarding your question W3 and your concern about the scalability.** We would like to note that we have a thorough theoretical analysis of both the time and space complexity in Appendix A.4. Furthermore, we have practical run-time comparisons on real-world datasets, which can be found in Appendix D.6. 
To ensure easy access to this information, we will highlight the references to Appendix A.4 and D.6 in the revised paper. We will also include a comparison of the parameter counts of the models. It is important to note that a substantial portion ($>$90\%) of the parameters in temporal graph models are the base embeddings (the node-wise hidden states of the base RNN, which are non-trainable and updated by the base RNN). An RTR layer has roughly 75\% more parameters than a TGN layer of the same configuration, where the increase in trainable parameters is mainly attributed to the additional state transition functions (Eq. (5)) and message function (Eq. (4)) introduced in RTRGN. When considering the whole model, further including the parameters of the base GRU and the time encoders, the growth ratio of trainable parameters of RTRGN compared with TGN would be smaller. As an example, when the hidden state dimension is 172, a 2-layer RTRGN model has about 2M trainable parameters, while a classic 2-layer TGN model with the same setting contains 1.57M trainable parameters. We sincerely appreciate your valuable feedback and hope our responses can largely address your concerns. [R1] User behavior retrieval for click-through rate prediction, SIGIR 2020. [R2] Search-based user interest modeling with lifelong sequential behavior data for click-through rate prediction, CIKM 2020. [R3] Adversarial filtering modeling on long-term user behavior sequences for click-through rate prediction, SIGIR 2022. [R4] Sampling is all you need on modeling long-term user behaviors for CTR prediction, CIKM 2022. [R5] TWIN: TWo-stage Interest Network for Lifelong User Behavior Modeling in CTR Prediction at Kuaishou, arXiv 2023.
Summary: In this paper, the authors propose a novel aggregation layer based on a recurrent neural network, dubbed recurrent temporal revision (RTR), for temporal graph networks to solve the dynamic temporal graph embedding problem. Specifically, a new aggregation function is proposed, which encodes neighbor state information and event information. This paper also proposes two expressiveness measures and uses them to demonstrate theoretically the difference from previous work. Empirical results on several well-known datasets show the effectiveness of the proposed method. Strengths: 1. The explanations of the components of the approach are clear and detailed. 2. This work uses the two proposed measures and theoretically justifies the difference from previous methods. It also includes algorithm time complexity and running time analyses. 3. Ablation settings are very detailed. Weaknesses: 1. It is unclear which part of the ablation experiment corresponds to which of the two proposed measures. It would be good to clarify that. 2. It would be better to have an Algorithm, line by line, clearly corresponding to the specific formulae and modules of Figure 1b. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Some symbols and formulations need to be clarified: a) Eq. (4) should be accurately formulated. b) $r_t^0(t)$ in Line 148 should be a vector. 2. Which variable or function in Eq. 3 and Eq. 4 reflects the two proposed measures? 3. The organization of Section 3 should be improved. Each subsection ends abruptly without a clear takeaway message. 4. Table 1 is hard to read. 5. The differentiation between the current work and previous works (especially TGAT and TGS) needs to be explained more clearly. 6. Why do the 2-layer experiments only run on selected datasets? What's the selection criterion? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback which we will consider seriously in our revised paper. **Regarding your confusion about the correspondence between the ablation experiments and the two proposed measurements.** We would like to clarify that while the first two experiment sections (5.1 and 5.2) are dedicated to studying the expressiveness of temporal link prediction and the temporal graph isomorphism test respectively, the entire ablation experiment section (5.3) is focused on the temporal link prediction setting: all the ablation experiments therein are conducted on the temporal link prediction task and measured by the average precision of temporal link prediction. Note that our theoretical analysis has suggested that the heterogeneous revision (HR) is the essence of the improved expressiveness in the temporal graph isomorphism test, which is also verified by the results in Table 4 (w/o HR). This is the main reason why our ablation studies were primarily focused on temporal link prediction. Another reason is that most real-world problems revolve around link prediction and most datasets are also predominantly suited for the link prediction task. **Regarding your suggestion to enhance the illustration with the help of algorithm pseudocode.** We appreciate this constructive feedback, and will provide such an algorithm alongside Figure 1b and make the necessary improvements to the figure in our revised paper. **Regarding your question 1.** For a more accurately formulated Eq. (4), we would like to refer you to Appendix A.2, where we have provided detailed implementations of all learnable functions we mentioned in the main paper. We intended to present our framework from a general perspective in the main paper, deliberately separating the implementation details into the appendix, to encourage the exploration of potentially improved implementations in the future.
Considering the importance of implementation details to a full understanding as suggested, we will mention the implementation ideas in the main paper and refer readers to the Appendix for more details. And yes, $r_t^0(t)$ in Line 148 should be a vector. We will enhance the notation, as well as other related ones, to make it clear. **Regarding your question 2.** Strictly speaking, the proposed expressiveness measurements are general and independent of the proposed model. We assume you mean which formula of the proposed model contributes to the improved expressiveness. In fact, the heterogeneous revision (Eq. (6)) is the key to the theoretical improvement of expressiveness, when assuming the full neighborhoods are involved in aggregation -- which is the common practice in theoretical analysis. While practically, as baseline methods typically only involve subsampled neighbors, our proposed Recurrent Temporal Revision framework (Eq. (3) - (5)) contributes to the majority of practical improvements of performance, due to its better modeling of neighborhood information. **Regarding your question 3.** Thank you for this valuable feedback. We will add takeaway messages to each subsection. **Regarding your question 4.** We will briefly describe the main message of Table 1 in its caption. We will also try to improve its coloring and layout for better readability. **Regarding your question 5.** We assume by TGS, you were referring to TGN. Despite other minor differences, the most important distinction between our proposed model and baseline models like TGN and TGAT lies in the aggregation strategy. We adopt the proposed Recurrent Temporal Revision, while TGN and TGAT employ the classic temporal aggregation. As illustrated in Figure 1a, classic temporal aggregation typically involves subsampled neighbors in each step, which leads to incomplete and biased neighbor information. By contrast, the proposed RTRGN, ideally, can integrate the whole information of neighbors. 
We will clarify this in the revised paper. **Regarding your question 6.** We selected 4 out of 8 datasets, due to limited computational resources, as representatives which cover different characteristics such as bipartite/non-bipartite and with/without edge features. We now have the full experiment results on all 8 datasets, which are provided in the uploaded PDF in the global rebuttal. As we can observe, the basic trend remains the same as before. Baseline methods show limited improvement when transitioning from the 1-layer to the 2-layer setting. By contrast, our proposed RTRGN exhibits significant improvements in the 2-layer setting (except for Wikipedia and Reddit, where the precision of the 1-layer model is already as high as 98\% and the 2-layer model may suffer from overfitting) and clearly outperforms all baselines.
Summary: The paper introduces a novel framework called Recurrent Temporal Revision (RTR) to address the challenge of neighbor subsampling in temporal graphs, an issue commonly encountered in real-world applications like social networking, recommender systems, traffic forecasting, and crime analysis. RTR serves as a standard building block for any temporal graph network, utilizing a recurrent neural network to integrate all neighbor information through a hidden node state. This hidden state mitigates the problem caused by neighbor subsampling and aims to capture complete neighbor information. The paper also introduces a new concept of "temporal revision" to update the hidden state by capturing the state changes of neighbors. To further boost the theoretical expressiveness of this technique, the authors propose incorporating heterogeneity into temporal revision by recursively identifying and marking certain nodes as specialties in the revision calculation process. The authors suggest that this new approach offers superior expressiveness in terms of temporal link prediction and temporal graph isomorphism test, a claim that is backed by theoretical proofs and experimental results. Additionally, they provide a new Ecommerce dataset for evaluating temporal graph models in real-world settings. Their experimental results indicate the proposed framework's significant improvement over state-of-the-art methods, particularly as the number of aggregation layers increases. Strengths: - The authors introduce innovative ways of evaluating the expressiveness of graph algorithms by defining two novel metrics: the temporal graph isomorphism test and the temporal graph link prediction. They apply these new measures to several state-of-the-art models, successfully establishing an equivalence between Temporal-1WL and these baseline models which implies the shared drawbacks of these models. 
- The paper presents a unique temporal learning model grounded on two inventive ideas: 1) utilizing hidden states to retain aggregated information, which is subsequently updated through a temporal revision layer, and 2) embedding heterogeneity by giving special treatment to the central node during the recursive revision process. The authors compellingly argue that their proposed method outperforms state-of-the-art models in terms of expressiveness, as defined within the context of this paper. - The proposed method demonstrates promising performance in real-world temporal graph tasks, particularly in the temporal link prediction task. Additionally, it excels in the graph isomorphism test examples, reinforcing its potential for practical applications in various fields. Weaknesses: - The rationale behind the new definition of expressiveness isn't effectively conveyed in the paper. While it's true that there isn't a universally accepted definition for the expressiveness of a temporal graph model at present, the paper falls short in clarifying why its proposed isomorphism test improves upon existing ones. The lack of a clear explanation leaves a gap in understanding the superiority of the proposed metrics. Technical Quality: 3 good Clarity: 3 good Questions for Authors: From the proof of Proposition 3, the extra expressiveness of the proposed model comes from the heterogeneous revision, which could in principle be added to other IMP-TGN representation models. Can we conclude that heterogeneity helps improve the expressiveness of temporal graph models? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 4 excellent
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful feedback, which we will consider seriously in our revised paper. **Regarding your consideration of the rationale behind our new definition of expressiveness.** We develop the set of new expressiveness measures mainly for the following two reasons. First, as you have noted, there is currently no universally accepted definition for the expressiveness of a temporal graph model, especially for temporal link prediction. Our definition is an effort towards addressing this gap, providing a consistent definition for the expressiveness of both the temporal graph isomorphism test and temporal link prediction. Second, existing definitions of expressiveness are relatively complex. By contrast, our proposed ones are concise and straightforward, avoiding the introduction of the Identifiable Set as in [5] and the Temporal Computation Tree as in [33]. We will reorganize the related text to make the underlying rationale clearer. **Regarding your question about heterogeneity.** Indeed, previous studies [43], as mentioned in our related work, have shown that incorporating heterogeneity is a principal approach to enhancing the expressiveness of static graph models. Our work has, in a sense, extended this finding to temporal graph models. We believe that it could potentially also lead to improvements in expressiveness if integrated into other IMP-TGN models. However, this requires further verification. --- Rebuttal Comment 1.1: Title: Thanks for rebuttal. Comment: Hi authors, thanks for your rebuttal, which resolved my concerns. I will retain my score.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their constructive feedback. We provide the extended Table 3 in the attached PDF, while all other concerns reviewers raised are addressed in their respective rebuttals. Pdf: /pdf/bdf7fd885b9edf50ce797a9df9523e84a47f580b.pdf
NeurIPS_2023_submissions_huggingface
2023
Counterfactually Comparing Abstaining Classifiers
Accept (poster)
Summary: This paper outlines a method for estimating the counterfactual performance of an abstaining classifier on cases where it abstained (what would have happened if it had not abstained?). Using tools from causal inference and the potential outcomes framework in particular, they cast this as a missing data problem, and provide estimators for the relevant quantity, proving identifiability under standard conditions and showing convergence. Experimentally, they use semi-synthetic setups to show that this estimator produces good confidence intervals.

Strengths:
- This paper is clearly presented and technically sound
- the notion of connecting abstaining classifiers to missing data problems is interesting and I believe novel
- the demonstration of the applicability of the missing at random (conditional on X) assumption to this problem is insightful and helpful
- experiments are clear and I believe the practical effectiveness of this method

Weaknesses:
- Motivation: I'm not convinced that the problem this paper solves is a realistic one - the examples seem fairly contrived to me. I'm not sure why any API would be provided in the form described in 1.1 (with a free abstaining tier), and examples in the appendix all assume an importance of the "hidden" predictions which doesn't seem super realistic. For instance, in A.3 it doesn't seem well motivated to want to evaluate a classifier's fairness based on its hidden predictions, as those do not actually result in impacts or harms on downstream populations.
- Novelty: I think that the connection of this problem to causality + missing data is novel, but I'm not sure there's much novel insight here. I do think the argument around the missing at random assumption is good (Assumption 2.1) as stated previously, but there isn't much else in this paper which is specific to the abstaining classifier problem.

Smaller notes:
- line 4: "stake" -> stakes
- line 50: unclear what "provably unavoidable" means
- line 213: is \mu_0 defined anywhere?
line 225: the word "either" here is confusing - probably want to use "each" instead line 324: the word "nuisance" is used many times in this section - is what it refers to defined? line 397: might be worth defining the Gini impurity more clearly and explaining why this is a good metric Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - what are some other ways that missing data/causality tools can be adapted to this problem specifically? for instance, are there implications around confidence-based rejection that can be leveraged for estimation methods? - in some cases, the underlying prediction may not even be well defined when R = 1 - for instance in cascading models where computation is only proceeded with after the rejection decision is made (if necessary). how do these models fit into the conceptualization of this paper? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We sincerely hope that our point-by-point responses below address your concerns. --- > Motivation Please see our common response to all reviewers for this point. --- > Novelty First, highlighting and formalizing the connection between abstaining classifier evaluation and causal inference is a central contribution of our work. Given that the connection is novel, it was imperative that we first clearly establish the correspondences across the topics, such as how the MAR condition translates to having independent evaluation data, and how the positivity condition reveals a need for a policy-level approach, before jumping onto new methods. We also emphasize that, while the proof techniques are adapted from the causal inference literature, the framework and the results are new to the abstaining classifier literature, and the theorems are also not direct corollaries of any previously stated result in a different setup (e.g., for the ATE). Aside from advocating for a causal point-of-view to the abstaining classifiers literature, our work presents a shift of focus from the _learning_ perspective (how to train), which represents the view of most papers in the literature, to the _(black-box) evaluation_ perspective (how to compare), which represents many users and regulators of the classifiers. The evaluation view reveals that there are cases when we want the hidden predictions to be good nevertheless, as opposed to the hidden predictions being bad in the learning view. In App. A.2, we make this contrast explicit by comparing our score with Condessa et al. (2017)’s score, which is an evaluation metric that rewards having low accuracy on abstentions. We state there that our method can also estimate this score, as it still involves a counterfactual (of the expected score under abstentions). 
Note that Condessa et al.’s paper still largely takes the learning view, as their approach requires white-box access to the underlying model and assumes that 'good' abstaining classifiers would have bad performance on their abstentions. Yet another insight that we view as novel and important is in our discussion (Sec. 5 & App. D) relating the positivity condition to policy-level approaches in safety-critical contexts, in which abstaining classifiers are popularly used. The concrete suggestion we make about requiring an upper bound on the abstention rate is quite specific to abstaining classifiers. We believe that these insights add substantially to the literature on evaluating black-box abstaining classifiers, and we hope that you reconsider your assessment of the novelty of our work. --- > What are some other ways that missing data/causality tools can be adapted to this problem specifically? One example in our paper is to use methods that handle positivity violations, such as sample trimming. These may yield valid inference under certain deterministic abstentions, at the cost of restricting inference to a subpopulation. Other adaptable tools include diagnostics for positivity violations (Petersen et al., 2012; Lei et al., 2021) and sensitivity analysis for MAR violations (e.g., measuring how much of the evaluation data is "contaminated" by training data; see Bonvini & Kennedy, 2020). Our framework opens up these avenues for future work (noted in the revision). > for instance, are there implications around confidence-based rejection that can be leveraged for estimation methods? If a classifier makes confidence-based rejections, then the abstention mechanism $\pi$ should reflect the confidence model across inputs. This means that our estimated abstention mechanism $\hat\pi$ can reveal the confidence regions of the classifier. Of course, this is just a learned binary classifier, and it is a byproduct of our overall estimation procedure. 
--- > In some cases, the underlying prediction may not even be well defined when R = 1 - for instance in cascading models where computation is only proceeded with after the rejection decision is made (if necessary). how do these models fit into the conceptualization of this paper? The broad answer here would be that the counterfactual score can always be defined whenever a supposed prediction on the classifier’s abstentions/rejections can be meaningful to the evaluator. If there are no supposed predictions on the rejections, then the counterfactual score would not make much sense as a metric. For cascading models specifically, the counterfactual framework can assess the performance of cascading classifiers in a unique way. Imagine a two-stage cascading classifier that, in its first stage, uses a small model $f_0$ and then determines whether it is necessary to go through the expensive computation of the larger model $f_1$. Let $\pi$ be the mechanism with which the model defers its predictions. The counterfactual score of the pair $(f_0, \pi)$ can be defined straightforwardly as the expected score had only the small model been used, and it can be estimated as long as we know when the large model was invoked. For the large model, we can assess the counterfactual score of the pair $(f_1, \bar\pi)$, where $\bar\pi = 1 - \pi$, and this would correspond to the expected score had only the large model been used. These ideas can be extended to 3+ stage cascading classifiers if we are interested in estimating the score of a classifier in each stage individually. We note that the two-stage setup is relevant to the learning-to-defer setup, which Reviewer 9p41 pointed out, where the large model is an (imperfect) "expert" on the task. In such a setup, we can devise a target that combines (i) the selective score of the small model and (ii) the relative improvement by switching to the expert. 
As we point out in our response to R#9p41, our approach can also estimate this new target using essentially the same tools. --- > Smaller notes We incorporated many of these in our revised draft. We defined $\mu_0$ in line 212 and nuisance functions in line 245. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the rebuttal. - I appreciate the points in the main response re: motivation and I think that the more fully fleshed out argument is more compelling than what's in the paper currently. I think it's probably important to include a more in-depth motivation section in a paper like this. - I agree that there is novelty in the abstaining classifier-missing data connection and it's possible I underrated its value in my original review. - I agree that the counterfactual score is defined in the cascading case - my fault for stating it poorly in the original review. In general, on re-reading my review and scanning the paper again, I'm inclined to re-consider my score, and will do so as I discuss with the other reviewers. --- Reply to Comment 1.1.1: Title: Author Response Comment: Thank you for your encouraging response. We will further flesh out the motivation section of the paper by incorporating the additional points from our rebuttal. We also appreciate your bringing up the cascading model example, and we will incorporate our response in the paper. We hope that these responses help you reconsider your assessment of our work.
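The two cascade targets described in the rebuttal above — the counterfactual score of the pair $(f_0, \pi)$ (the expected score had only the small model been used) and of $(f_1, \bar\pi)$ (had only the large model been used) — can be illustrated with a toy Monte Carlo sketch. The score functions `s0`, `s1` and deferral rate `pi_defer` below are hypothetical, invented purely for illustration and not taken from the paper:

```python
import random
random.seed(1)

# Two-stage cascade: small model f0; with probability pi_defer(x) its input is
# deferred to a large model f1. s0(x), s1(x) give P(correct | x) for each model.
pi_defer = lambda x: 0.7 * x          # defers more often on "hard" inputs
s0 = lambda x: 0.95 - 0.5 * x         # small model degrades on hard inputs
s1 = lambda x: 0.9                    # large model uniformly strong

N = 100_000
xs = [random.random() for _ in range(N)]  # X ~ Uniform(0, 1)

# Counterfactual score of (f0, pi): expected score had only f0 been used.
cf_small = sum(s0(x) for x in xs) / N          # ~ 0.95 - 0.25 = 0.70
# Counterfactual score of (f1, 1 - pi): expected score had only f1 been used.
cf_large = sum(s1(x) for x in xs) / N          # ~ 0.90
# Realized score of the deployed cascade, for comparison.
realized = sum(pi_defer(x) * s1(x) + (1 - pi_defer(x)) * s0(x) for x in xs) / N
print(round(cf_small, 2), round(cf_large, 2), round(realized, 2))
```

The deployed cascade sits between the two counterfactual targets here, which is exactly the kind of comparison the per-stage scores are meant to support.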
Summary: This paper proposes to compare black-box abstaining classifiers (neither the base classifier nor the abstention mechanism is known to the evaluator) by the counterfactual score, that is, the expected score of the abstaining classifier had it not been given the option to abstain. They prove that this quantity is identifiable under two standard conditions, missing at random and positivity, and then develop nonparametric and doubly robust methods to estimate it. Experiments on simulated data and CIFAR-100 are conducted.

Strengths: This paper is well-organized and clearly written. Comparing abstaining classifiers is an important problem in machine learning, and the counterfactual score proposed in this paper is a new evaluation metric that is relevant in practice.

Weaknesses:
1. As stated in the paper, the counterfactual score is unidentifiable if abstentions are deterministic, which includes a fair amount of abstention methods in the literature.
2. It would be better to briefly discuss some related works on abstention instead of just referring to surveys.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. In the learning-to-defer setting, where instead of rejecting an example, the example can be deferred to an expert who then makes the prediction, does the counterfactual score still make sense?

Minor:
1. There is a redundant 'is' at the end of line 420.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback on our work and for acknowledging the practical importance of our work and the problem. See below for our responses to each of your concerns and questions. ---- > As stated in the paper, the counterfactual score is unidentifiable if abstentions are deterministic which includes a fair amount of abstention methods in the literature. While we acknowledge the concern from an applicability standpoint, we emphasize that this should not be considered a weakness from a methodology standpoint, given that the score is never identifiable under deterministic abstentions (lines 194–196). As we state in lines 424–428, if we are interested in the counterfactual score of an abstaining classifier, then no other method can estimate it under deterministic abstentions without resorting to restrictive modeling assumptions. This is why we first mention a policy-level approach that may require vendors to meet the positivity requirement in order to enable buyers to compare the products before choosing one (lines 419–423 and Appendix D). In our opinion, the fact that our paper does not apply to all abstaining classifiers is perhaps ok, as long as we provide a solution for stochastically abstaining ones and we are upfront about this restriction. We’d also like to reiterate that there are recent papers that show the superiority of stochastically abstaining classifiers (lines 201–210), similar to the effectiveness of randomized classifiers in the fairness literature. We believe that the counterfactual framework can be more relevant in the future when more abstaining classifiers adopt stochastic mechanisms. ---- > It would be better to briefly discuss some related works on abstention instead of just referring to surveys. We have now added a few more references and details, but given that our approach is largely agnostic to the specific learning algorithms for abstaining classifiers, we put our emphasis on the evaluation side. 
We do explicitly mention directly relevant papers on abstention throughout the paper, including Chow (1970) and El-Yaniv and Wiener (2010) for their discussion of the selective score & coverage formulations, as well as the recent works that develop methods using stochastic abstentions (Kalai and Kanade, 2021; Schreuder and Chzhen, 2021). ---- > In the learning-to-defer setting, where instead of rejecting an example, the example can be deferred to an expert and let that expert make the prediction, does the counterfactual score still make sense? Yes, the counterfactual score makes sense. In the learning-to-defer setting involving an expert, the counterfactual score would refer to the expected score of the overall system had the classifier not deferred at all. The counterfactual score is thus an evaluation metric primarily for the classifier, and it is independent of the expert’s predictions, even when the classifier is adaptive to the expert’s tendencies. In the case where the goal is to assess the _joint_ performance of the algorithm and the expert, then it may be useful to estimate a variation of Condessa et al. (2017)’s score, which we summarize in both Section 2 and Appendix A.2. If we denote the expert’s score as $E$, then equation (1) in the Appendix can further be generalized to $$ \theta^E := \mathbb{E}[S \mid R=0] \mathbb{P}(R=0) + \mathbb{E}[E - S \mid R=1] \mathbb{P}(R=1). $$ For each rejection ($R=1$), we assess the system by the difference in the quality of expert prediction and the model prediction ($E-S$). If the expert is an oracle ($E=1$), then this recovers Condessa et al.’s score. Note that Condessa et al. primarily focused on defining and justifying $\theta$ in the “white-box” setting, where the score on rejections is known, and it does not discuss the counterfactual estimation problem that arises in the black-box setting. 
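The generalized score $\theta^E$ defined above is straightforward to compute once its four ingredients are known. A minimal sketch with hypothetical numbers (60% coverage, a 0.9 selective score, an expert who is correct 95% of the time on deferrals, and a would-be model score of 0.7 on deferrals — none of these figures come from the paper):

```python
def theta_E(sel_score, coverage, expert_score, model_score_on_abst):
    """theta^E from the rebuttal: E[S|R=0]P(R=0) + E[E - S|R=1]P(R=1)."""
    return sel_score * coverage + (expert_score - model_score_on_abst) * (1 - coverage)

# Hypothetical numbers: 60% coverage, 0.9 selective score, expert correct 95% of
# the time on deferrals, model would have scored 0.7 on those same deferrals.
print(round(theta_E(0.9, 0.6, 0.95, 0.7), 2))  # 0.64
# An oracle expert (E = 1) recovers Condessa et al.'s score:
print(round(theta_E(0.9, 0.6, 1.0, 0.7), 2))   # 0.66
```

As the rebuttal notes, only the $\mathbb{E}[S \mid R=1]$ term is counterfactual in the black-box setting; the other quantities are directly observable.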
When it comes to _estimation_, as we state in Appendix A.2, our method can analogously estimate Condessa et al.’s score $\theta$. Thus, the approach can further estimate $\theta^E$, whenever we can estimate $\mathbb{E}[E \mid R=1]$. If we are estimating the performance of a joint system, possibly learned using a learning-to-defer method, then we can estimate $\theta^E$, as long as either the expert’s performance is known (e.g., $E=1$ for an oracle) or the deferral decision $R$ is recorded. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It addressed some of my concerns. I have also read the other reviews and rebuttals and would like to keep my initial rating.
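The estimation strategies discussed in this thread — plug-in, IPW, and the doubly robust estimator — can be sketched on simulated data. Everything below is a hypothetical toy setup: histogram binning stands in for the paper's nuisance regressors, and the data-generating functions are invented for illustration, so this is a sketch of the general technique rather than the authors' implementation:

```python
import random
random.seed(0)
N = 200_000

# Simulated evaluation data for one abstaining classifier.
# X: input feature; R = 1 means the classifier abstained;
# S: correctness score, observed only when R == 0 (MAR given X; positivity holds).
def pi(x):   # abstention mechanism P(R = 1 | X = x), bounded away from 1
    return 0.2 + 0.6 * x

def mu(x):   # P(S = 1 | X = x), the same whether or not the classifier abstains
    return 0.9 - 0.4 * x

data = []
for _ in range(N):
    x = random.random()
    r = 1 if random.random() < pi(x) else 0
    s = (1 if random.random() < mu(x) else 0) if r == 0 else None
    data.append((x, r, s))

# Nuisance estimates via histogram binning on X (a stand-in for any regressor).
BINS = 20
def bin_of(x):
    return min(int(x * BINS), BINS - 1)

cnt = [0] * BINS; abst = [0] * BINS; pred = [0] * BINS; correct = [0] * BINS
for x, r, s in data:
    b = bin_of(x)
    cnt[b] += 1
    if r == 1:
        abst[b] += 1
    else:
        pred[b] += 1
        correct[b] += s

def pi_hat(x):
    b = bin_of(x); return abst[b] / cnt[b]

def mu_hat(x):
    b = bin_of(x); return correct[b] / max(pred[b], 1)

# Three estimators of the counterfactual score psi = E[mu(X)] = 0.7 here.
plug_in = sum(mu_hat(x) for x, _, _ in data) / N
ipw = sum(s / (1 - pi_hat(x)) for x, r, s in data if r == 0) / N
dr = sum(mu_hat(x) + ((s - mu_hat(x)) / (1 - pi_hat(x)) if r == 0 else 0.0)
         for x, r, s in data) / N
print(round(plug_in, 3), round(ipw, 3), round(dr, 3))  # all near the true 0.7
```

The doubly robust estimator combines the other two correction terms, so it remains consistent if either nuisance estimate (here `mu_hat` or `pi_hat`) is misspecified, which is the property the paper's confidence intervals rely on.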
Summary: The authors focus on the problem of evaluating abstaining classifiers while also taking into account their counterfactual predictions had they not abstained on certain data points. As an evaluation metric, they consider the expected accuracy of the classifiers over both their actual and counterfactual predictions, which they define as the counterfactual score. They provide the conditions under which the counterfactual score is identifiable and also propose a doubly robust estimator to compute the counterfactual score of an abstaining classifier. Finally, they evaluate their estimator with simulations and experiments on real data.

Strengths: The idea of considering, for evaluation, the counterfactual predictions of the classifier had it not abstained is conceptually quite interesting and could have a significant impact in scenarios where, for example, the expert who predicts when the classifier abstains is also uncertain and would still choose the classifier's prediction. The paper is very nicely written, well structured and organised, well motivated and convincing. The experimental evaluation includes both simulations and real data experiments, the setup and the results are clearly explained, and the code is provided for reproducibility.

Weaknesses: There seem to be no major weaknesses in general. The pointer to Table 1 in line 86 is a bit confusing, as the notation has not yet been introduced at that point. It would perhaps have been nicer not to include notation there, or to explain the notation in the caption. Moreover, the captions of the tables should have been above the tables according to the author instructions.

Typos:
- Line 87: 'of significant interest'
- Line 367: there seems to be a missing verb
- Line 420: the sentence does not make sense; it seems that some part is missing.

Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: N/A.
Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors adequately address the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your positive and comprehensive feedback on our work. In our revision, we incorporated all of your editorial notes, including moving Table 1 to the beginning of Section 2 and having all table captions appear before the tables themselves.
Summary: The paper proposes a new way to counterfactually evaluate abstaining classifiers. By considering the task of selective classification, the authors reformulate the problem as a missing data problem, which can thus be seen as a causal inference (counterfactual) problem. The authors also state the corresponding assumptions needed for the above to be identifiable and propose a doubly robust estimator for their proposed metric.

Strengths:
- The paper proposes a new view on the evaluation of abstaining classifiers through the lens of Rubin's counterfactuals
- The paper clearly states the assumptions for the problem to be identifiable
- The paper clearly formulates the approach of doubly robust estimation and how it can be used in their setting for abstaining classifier comparisons.

Weaknesses:
- My biggest concern with the paper is the motivation. The example given in 1.1 is quite contrived and not really applicable in most cases. I believe much stronger motivating examples would be useful. If the authors could clearly give me better examples (in addition to the appendix ones), that would greatly help me position this paper.
- Secondly, I believe the experiments are not clearly written out to me. Could the authors tell me how exactly this metric is preferred over simply looking at the accuracy of the selected labels and the inaccuracy of the non-selected labels? I might have misunderstood this part, so if the authors could help me clarify this part it would be very helpful. The authors mention that this is an "inverse" problem, but I am not sure I understood this part.
- Lastly, I wonder if the authors have thought of implementing the second point mentioned in this review. I am confused about why there are no other baselines. I would have thought that the second point would constitute a simple baseline.
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors: See the weaknesses above. I am more than happy to raise my score if they are clarified.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 2 fair
Contribution: 2 fair
Limitations: Yes, the authors have addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our work and for acknowledging the novelty and clarity of our exposition. See below for our responses to each of your concerns. ---- > My biggest concern of the paper is the motivation. As stated in their example 1.1 the example they have given is quite contrived and not really applicable in most cases. I believe much stronger motivating examples would be useful. If the authors would clearly give me better examples (in addition to the appendix ones) that would greatly help me position this paper. Please see our common response to all reviewers which includes our response to this point. ---- > Secondly, I believe the experiments are not clearly written out to me. Could the authors tell me how exactly this metric is preferred over simply looking at the accuracy of the selected labels and the inaccuracy of the non-selected labels? I might have misunderstood this part, so if the authors could help me clarify this part it would be very helpful. The authors mention that this is an "inverse" problem, but not sure if I understood this part. In brief, if two classifiers make predictions on different subsets of the data, have different frequencies and patterns of abstaining, and have different accuracies on the ones they do predict, it is actually not so obvious how to combine that information to compare them in a sensible and fair manner. Our examples (1.1 and A.1-A.3) motivate cases in which it _hurts_ to be inaccurate on abstentions (non-selected labels): * If a free-trial classifier from Example 1.1 is highly inaccurate in its abstentions, and we (the evaluator) are impressed by its performance on non-abstentions and purchase the full non-abstaining service, then we will suffer a decrease in the total accuracy from the free trial phase to the paid use phase. * If the self-driving car from Ex. A.1 or the hospital from Ex. 
A.2 attempts to use the abstaining classifiers’ hidden predictions in a failure mode, then it will suffer more when the classifier is less accurate on its abstentions. This is the “inverse” sense from a typical _learning_ scenario for abstaining classifiers: whereas learning objectives for abstaining classifiers often reward abstaining on inaccurate predictions, the counterfactual score is adequate for evaluation setups where such abstentions can be hurtful. The metric that you discuss here roughly corresponds to Condessa et al. (2017)’s score, which we formally defined in Appendix A.2. We state there that: * The two metrics assess abstaining classifiers from different viewpoints, depending on whether it is good or bad for a classifier to perform poorly on its abstentions. * Our estimation approach can be applied to estimate Condessa et al.’s score in the black-box setting, in which their score is also a counterfactual (we do not know how well or poorly the classifier performs on its abstentions). Note that Condessa et al. primarily focused on defining their score in the “white-box” setting, in which the classifier’s accuracy on its abstentions is known, but we do not assume this. Finally, as for the experiments, given that our primary method is a statistical inference approach (a confidence interval), we focus on empirically examining its validity/coverage, that is, whether a 95% CI correctly covers the true parameter (fully known in simulated experiments) approximately 95% of the time across repeated simulations. The complementary metric to coverage is the power, as measured by how tight the CI is (a tighter CI would give more certainty to the user). Our real data experiment validates how the CI estimates a number that we expect to see in each scenario (zero if the base classifiers were the same; nonzero otherwise). 
These experiments are NOT meant to provide further justifications on why the counterfactual score is a viable alternative _as a metric_ to existing ones; they are simply meant to show that our method for estimating the counterfactual score is sensible and works well in practice. In our revision, we clarified the main goals of our experiments up front (at the beginning of the section). ---- > Lastly, I wonder if the authors have thought of implementing the second point mentioned in this review. I am confused about why there are no other baselines. I would have thought that the second point would constitute a simple baseline. As per our previous response, our primary baselines are other _estimation_ approaches, namely the plug-in and IPW estimators, and not other _metrics_. The metrics can be computed straightforwardly, but they do not reveal insights about whether our methods work with simulated and real data, and it is hard to compare different methods that are meant for different metrics. Having said that, we already include the most commonly used metric for evaluating abstaining classifiers: a combination of the selective score and coverage. In our simulated experiments, these numbers are shown in the first paragraph of Sec. 4.1. Note that both of these quantities are just sample averages, and a standard CI can be computed for estimation. Another existing metric would be Condessa et al.'s score, which we recap in App. A.2 and in the above. For our simulated setup, this can also be computed straightforwardly: classifier A obtains 0.4145, while classifier B obtains 0.4345 (they both abstain quite a bit, and A is heavily penalized for abstaining on correct predictions). ---- We sincerely hope that our responses address your concerns. --- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: First of all, I would like to thank the authors for the reply.
The motivation part has been partially justified for me; even though I remain skeptical, I believe there is a chance this might become useful in the future. In terms of the metric, I do not understand exactly why the metric by Condessa et al. is not applicable for "evaluating" abstaining classifiers? To me it seems like the most natural and basic way to evaluate a selective classifier. Could the authors please let me know what the proposed score does that current methods can't, exactly. I would like to hear an exact and precise example please. Also I don't understand this sentence. So what is the point then? I am really confused now, could you please clarify this part. "These experiments are NOT meant to provide further justifications on why the counterfactual score is a viable alternative as a metric to existing ones; they are simply meant to show that our method for estimating the counterfactual score is sensible and works well in practice". As it stands, I cannot raise my score and hope that the authors can convince me otherwise. --- Reply to Comment 1.1.1: Title: Further clarification Comment: > In terms of the metric, I do not understand exactly why the metric by Condessa et al. is not applicable for "evaluating" abstaining classifiers? To me it seems like the most natural and basic way to evaluate a selective classifier. Could the authors please let me know what the proposed score does that current methods can't, exactly.
We elaborate on when the counterfactual score is preferred over Condessa et al.'s score in lines 89–103 in our paper, including the following line: _While [Condessa et al.'s] view is relevant to the training of abstention rules, it is at odds with black-box settings where the underlying predictions may still be executed even when the method abstains, motivating the counterfactual score._ The questions of (1) whether a score is natural and (2) what "current methods can't do" are separate. 1. Our whole intro is devoted to justifying that the counterfactual score, as we defined it in our paper, can be useful (rather than Condessa et al.'s). Both metrics can be viable for evaluating abstaining classifiers, but they simply evaluate the classifiers in a different manner. The two scores can be useful in different scenarios, and we focused on scenarios in which our proposed metric can be useful. 2. Condessa et al. (or any other work, to our knowledge) do not discuss the black-box setting that requires a _statistical estimation_ approach, as they already assume full knowledge of whether the classifier would have classified each input correctly or not. But this is not known in any of the black-box settings we describe. Both Condessa et al.'s score and the counterfactual score require knowing how accurate the classifier _would have been_ on its abstentions, and we do not know this in black-box scenarios like Example 1.1 (we have no way of knowing whether an API would achieve 100% or 0% accuracy on the inputs it chose to abstain from). This unknown quantity is really the "counterfactual," and thus we devote Sections 3 and 4 of our paper to discussing how we can _estimate_ the counterfactual score in the black-box setting. No other "current method" can estimate the counterfactual score (because it was never defined formally), and we further mention in Appendix A.3 that our method can also estimate Condessa et al.'s score in the black-box setting.
> I would like to hear an exact and precise example, please. To give a simplified example, suppose that we compare two abstaining classifiers on 100 data points, whose inputs are sampled uniformly on $[-1, 1] \times [-1, 1]$. Suppose that classifier A achieves a 1.0 accuracy on the left half of the input space ($x_1 < 0$) but a 0.8 accuracy on the right half ($x_1 \geq 0$). It abstains at an 80% rate on the right half, while it does not abstain at all on the left half. Concretely, the classifier makes 50/50 correct predictions on the left half; on the right half, it makes 8/10 correct predictions and 40 abstentions (for which it would have been correct 80% of the time). Recall the definitions from Appendix A.3. The counterfactual score of classifier A is 58/60 * 0.6 + 32/40 * 0.4 = 0.9, whereas Condessa et al.'s score for this classifier is 58/60 * 0.6 + (1 - 32/40) * 0.4 = 0.66. Note that the classifier's selective score is 58/60 = 0.97 while its coverage is 60/100 = 0.6. Next, suppose that classifier B is the same as classifier A, except that it achieves a 0.6 accuracy on the right half. Then, classifier B's counterfactual score would be 0.8, lower than classifier A's, whereas Condessa et al.'s score for B would be 0.82. If the evaluator needs access to the classifier's hidden predictions on the right half, they would prefer classifier A, which has the higher accuracy had no abstentions been made. In Example 1.1, the right half would correspond to the inputs that the free-trial API chose to abstain on. > Also, I don't understand this sentence. So what is the point then? I am really confused now; could you please clarify this part? "These experiments are NOT meant to provide further justifications on why the counterfactual score is a viable alternative as a metric to existing ones; they are simply meant to show that our method for estimating the counterfactual score is sensible and works well in practice".
Because we developed an estimation approach, we need experiments to validate whether the estimator actually estimates the target quantity well on simulated (and real) data. We note that it is _not_ trivial whether this score can be estimated well in practice, given that it is an unknown counterfactual quantity. The real data experiments illustrate different comparison scenarios and what the estimated counterfactual score would be. We hope that these responses clarify your concerns.
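The arithmetic in the worked example above can be reproduced with a short script. This is our own illustrative sketch of the two score formulas as they are used in the rebuttal's arithmetic; the helper names are ours, not code from the paper:

```python
# Illustrative sketch (our notation, not the paper's code) of the two
# scores as used in the rebuttal's worked example: both weight the
# selective (non-abstained) accuracy by coverage, but they treat the
# accuracy on abstained inputs in opposite ways.

def counterfactual_score(sel_acc, coverage, abstained_acc):
    # Rewards predictions that *would have been* correct on the
    # inputs the classifier abstained from.
    return sel_acc * coverage + abstained_acc * (1 - coverage)

def condessa_score(sel_acc, coverage, abstained_acc):
    # Rewards abstaining on inputs that *would have been* misclassified.
    return sel_acc * coverage + (1 - abstained_acc) * (1 - coverage)

# Classifier A from the example: 58/60 correct among the 60 non-abstained
# points (coverage 0.6), and 32/40 would-have-been-correct among the 40
# abstentions.
sel_acc, coverage, abstained_acc = 58 / 60, 60 / 100, 32 / 40
print(round(counterfactual_score(sel_acc, coverage, abstained_acc), 2))  # 0.9
print(round(condessa_score(sel_acc, coverage, abstained_acc), 2))        # 0.66
```

The same classifier thus receives very different scores under the two metrics, which is the point the rebuttal's example is making.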
Rebuttal 1: Rebuttal: We appreciate all of our reviewers for their feedback on our work. We are particularly thankful to them for acknowledging the problem's significance as well as the novelty of the connections we elucidate between abstaining classifiers, black-box evaluation, and causal inference. A main concern shared by Reviewers paQd and 9d5v is about the practicality of our motivating examples, so we give a common response here. Given the fairly recent advances in learning algorithms for abstaining classifiers, we find it imperative to lay out the tools with which we can evaluate and compare the resulting models in imaginable settings. We argue in the paper that existing metrics, such as Chow's score and other combinations of selective score and coverage, are not adequate in many such scenarios. Thus, we take the first step toward a different type of formal evaluation of black-box abstaining classifiers, by taking a counterfactual point of view. When it comes to practicality, we believe that not all works need to be directly motivated by an existing practical problem. One can anticipate conceptual problems and work to address them before they show up in practice. As abstaining classifiers become more common, so will the desire to compare them, and especially to ask counterfactual questions about their performance had they not abstained. We specifically elaborate on two of our examples, in response to Reviewer 9d5v. * Ex. 1.1: Closed-source ML APIs, like those offered by Azure, AWS, Google, and many startups, are becoming ever more popular, and the predictions they provide are clearly important assets of theirs. It is not difficult to imagine that these services will deem certain predictions as more valuable than others (think of classifying CT scans).
This can incentivize them to allow users to try a free version of the software that abstains, so that potential buyers can get some sense of the quality of the predictions while the provider avoids giving away all of its valuable predictions. * Ex. A.3: This is one example in which the hidden predictions are not directly used; rather, it illustrates a different use case in which the hidden predictions may explain the inner workings of an otherwise black-box prediction model. For example, the auditor may notice that certain demographic groups receive more abstentions from the model than others. In that case, the counterfactual score may reveal the extent to which the predictions on those groups are bad. If they are actually much worse, then better training data or fairer classifiers may be needed; if not, then only a fairer abstaining mechanism may be needed. In response to Reviewer paQd: we think that the clarifications above may address the motivation aspect more clearly than coming up with additional examples. Nevertheless, we are happy to discuss these and other examples further during the discussion period. Aside from the motivation, the reviewers also raised interesting questions about our paper, including connections to the learning-to-defer settings and cascading models. Please see our point-by-point responses to each reviewer's comments below.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Parameterizing Context: Unleashing the Power of Parameter-Efficient Fine-Tuning and In-Context Tuning for Continual Table Semantic Parsing
Accept (poster)
Summary: The authors present a novel approach that combines PEFT and ICT to train a continual table semantic parser. The proposed solution is based on a teacher-student framework. Two task streams are developed from the WikiSQL and Spider semantic parsing benchmarks. Experiments using C3 and baselines are conducted on these task streams. The results indicate that their method outperforms existing competitors, achieving state-of-the-art performance across multiple metrics. Strengths: - The fusion of ICT and PEFT in a continual framework is novel and effective on the proposed continual table semantic parsing task. - The description of the framework and method is clearly written. Weaknesses: - The authors did not clearly state whether the Problem Formulation in Section 2.2 has been studied previously. - The continual table semantic parsing problem is not well motivated. For example, the authors claimed `After training on a new task, the performance of the parser on the previous task may plummet attributed to parameter updates.`, but there is no empirical evidence of this claim as a motivating example. - The authors stated the "invisibility of past demonstrations" for ICT, which seems unrealistic even under data privacy concerns. ICT typically only requires a few examples, which can be obtained from public databases or synthetic/obfuscated database content. - The authors could add an additional baseline: train on a reasonably sized set of public text2sql data with a mixture of domains and then evaluate zero-shot on the stream datasets. The public data will not have privacy concerns. This would show the accuracy gap and the rationale behind the continual training. It seems line 196 is similar to this; what is the training set size? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Is the Problem Formulation in Section 2.2 novel? Have other papers examined and studied this problem? If not, the problem formulation and its motivation should be part of the paper's contribution.
- In Table 1, what are the PEFT results on T5-large? - In Figure 1, what are the blue arrows going into PLM in the ICT portion? - In Figure 1, why are past demonstrations invisible? This seems to be an unrealistic setting, even under data privacy concerns. We can certainly have a diverse pool of demonstrations from public databases or synthetic/obfuscated database content. - In Figure 3, what is the x-axis label? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No negative societal impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful to you for providing us with valuable feedback and suggestions for our paper. We provide explanations and clarifications for each weakness and question below. #### For Weakness 1: Before us, the problem formulation in Section 2.2 was proposed by [7], so we did not include it as one of our contributions. However, their scenario is also slightly different from ours: they guarantee a portion of unsupervised data for each task on the stream, while this guarantee is removed in our scenario, making it more challenging. #### For Weakness 2: We have in fact shown the claim that "the parser's performance on the previous task may drop drastically due to parameter updates" experimentally. Specifically, in Figure 4, the fine-tuned parser exhibits a significant performance degradation of about 20% in terms of TA (Task Accuracy) and EA (Example Accuracy) on both datasets as the number of tasks increases. The dramatic drop in the MD (Memory Decay) metric also shows that the model indeed suffers catastrophic forgetting of previous tasks. #### For Weakness 3: Public or synthetic/obfuscated databases are indeed a possible way to obtain demonstrations. However, obtaining a valid demonstration may not be easy, for two reasons: 1. First, a demonstration is an NLQ-SQL pair, and with database content alone it is still expensive to construct, at scale, natural language queries that match human expressions, especially for some small domains. 2. Second, recent studies [23, 24] have shown that in-context learning maximizes performance gains only when the demonstrations are similar to the test samples. Unfortunately, for domain-specific databases, there is no guarantee that similar demonstrations can be found in public data resources. To verify this, we added an experiment: we assume that $D^0_{\mathrm{train}}$, the training set of task_0, is an always-visible public demonstration pool from which all demos for subsequent tasks are sampled.
The following table shows the average accuracy of the Teacher Parser when it encounters each task for the first time.

| **Method** | Spider-Stream (%) | Combined-Stream (%) |
| ---------------------------------- | :---------------: | :-----------------: |
| Teacher Parser (Demos from task_0) | 76.3 | 70.6 |
| C3 Teacher Parser (Ours) | 78.2 | 72.4 |

From the results, when sampling the demonstrations from $D^0_{\mathrm{train}}$, the performance of the Teacher Parser shows a significant drop on both datasets. This again shows that the quality of the demonstrations has a great impact on the performance of ICT. #### For Weakness 4: Yes, our setup at line 196 tries to simulate this scenario. We randomly choose k (6 for Spider-Stream and 5 for Combined-Stream) tasks to merge into an initial task (task_0) with mixed domains. For Spider-Stream, the size of the training set for task_0 is 2082; for Combined-Stream, it is 1373. #### For Question 1: Due to space limitations, please see the response to Weakness #1 for details. #### For Question 2: We added an experiment to evaluate the performance of PEFT when using T5-Large. The results are shown in the following table (the two column groups correspond to Spider-Stream and Combined-Stream, respectively).

| **Method** | TA (%) | EA (%) | MD (%) | TA (%) | EA (%) | MD (%) |
| -------------------- | :----: | :----: | :----: | :----: | :----: | :----: |
| T5-Large + PEFT | 69.7 | 67.4 | 0 | 67.3 | 70.0 | 0 |
| T5-Large + C3 (Ours) | 70.7 | 68.9 | 0 | 69.0 | 71.2 | 0 |

When using T5-Large as the backbone, our C3 still performs better than PEFT. If this paper is accepted, we will add these results to the camera-ready version. #### For Question 3: Here the blue arrow indicates that the model should maintain performance on inputs from the previous task even when it encounters subsequent tasks. To avoid misunderstandings, we will add an explanation to the caption in a subsequent release.
#### For Question 4: Due to space limitations, please see the response to Weakness #3 for details. #### For Question 5: The x-axis represents the ID of the segmented task. Note that this is just the initial slicing of the tasks (independent of the order); as we mentioned before, when the methods are actually run, a random $k$ of them constitute the initial task, and the remaining $n - k$ tasks are shuffled as tasks 2 to $n - k + 1$. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have updated my rating
Summary: The paper introduces a method combining parameter-efficient fine-tuning (PEFT) and in-context tuning (ICT) to address the issues of overfitting and catastrophic forgetting in training a continual table semantic parser with limited training examples. Through a task-adaptive PEFT framework and a teacher-student setup that utilizes ICT, the method demonstrates enhanced performance in comparison to established baselines, as validated by experiments on two benchmarks. Strengths: 1. This paper is overall well written and easy to follow. 2. This paper proposes a method that fuses PEFT with ICT to resolve the overfitting and catastrophic forgetting problems. 3. In addition to PEFT + ICT, the authors also propose a teacher-student framework that distills ICT results from the teacher to a student model. Weaknesses: My main concern about this paper is its technical novelty. 1. Combining PEFT and ICT (demonstration) is not new. e.g., [Gao et al.](https://arxiv.org/abs/2012.15723) 2. Distilling soft prompts, on the other hand, is not new either. https://arxiv.org/abs/2212.10670 https://arxiv.org/abs/2304.08467 3. I found it unconvincing to exclude results from [7]. C3 also uses demonstration retrieval, which indicates the same pool of supervised data is used. 4. Following 3, the claim that the method is few-shot is inaccurate. Since the exemplars are selected from a pool of examples, comparing to methods that only use a few examples is unfair. Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Questions 1. Why did the authors choose tabular semantic parsing? Is this method generalizable to other tasks (e.g., general NLU/NLG tasks)? 2. Why use different backbones for other methods in Table 1? How do those methods perform on T5? ### Typos Ln. 23: "finance [3], and ?" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful to you for providing us with valuable feedback and suggestions for our paper. We provide explanations and clarifications for each weakness and question below. #### For Weakness 1: Our method is fundamentally different from that of [Gao et al.] for the following two reasons: 1. [Gao et al.] is mainly based on hard prompt + ICT, which still requires fine-tuning the whole model rather than a subset of parameters during training, so it cannot be considered a PEFT approach. In contrast, our method only updates the soft prompt while freezing the whole PLM during the training of the teacher model, which reflects the parameter efficiency. 2. [Gao et al.] focuses on how to improve the few-shot learning capability of smaller pre-trained models on static datasets, whereas we focus on a dynamic task-stream scenario, exploring how to avoid catastrophic forgetting while guaranteeing few-shot learning capability. * Gao, Tianyu, Adam Fisch, and Danqi Chen. "Making pre-trained language models better few-shot learners." arXiv preprint arXiv:2012.15723 (2020). #### For Weakness 2: 1. Although [Huang et al.] also proposed in-context learning distillation, their student model is essentially different from ours in terms of motivation. Their student model focuses only on the few-shot learning capability and still employs in-context learning for prediction without soft prompts. In contrast, our student model needs to handle dynamic task-stream scenarios. The invisibility of the past task context forces C3 to use PEFT to inject in-context learning capabilities into the soft prompts, avoiding catastrophic forgetting while improving few-shot learning. 2. The goal of [Mu et al.] is also not to solve the few-shot or continual learning problem, but to compress the prompt to improve LLM inference efficiency and save storage space. This is completely different from the dynamic task-stream scenarios we have focused on. * Huang, Yukun, et al.
"In-context Learning Distillation: Transferring Few-shot Learning Ability of Pre-trained Language Models." arXiv preprint arXiv:2212.10670 (2022). * Mu, Jesse, Xiang Lisa Li, and Noah Goodman. "Learning to compress prompts with gist tokens." arXiv preprint arXiv:2304.08467 (2023). #### For Weakness 3: We did not compare with [7] because [7] defines a slightly different scenario than the one we are concerned with. They first assume that each task in the stream has a corresponding unlabeled training set $D^i_{\mathrm{unsup}}$, which contains only NLQs without gold SQL queries. Their method, SFNet, then performs semi-supervised learning on the labeled training set $D^i_{\mathrm{train}}$ in combination with $D^i_{\mathrm{unsup}}$, whereas our method uses only $D^i_{\mathrm{train}}$. To ensure that all methods use the same training data, we did not compare with [7]. However, to make the comparison more convincing, we added an experiment running [7] on Combined-Stream. The results are shown in the table below.

| **Method** | TA (%) | EA (%) | MD (%) |
| ------------------- | :----: | :----: | :----: |
| SFNet [7] | 61.9 | 59.6 | -3.2 |
| T5-Base + C3 (Ours) | 67.7 | 66.7 | 0 |

We used only Combined-Stream and not Spider-Stream here because Combined-Stream randomly selects only a portion of the samples from the original Spider and WikiSQL datasets, and the rest can be used as unsupervised data to fulfill the scenario requirements of [7]. Despite utilizing additional unsupervised data for semi-supervised learning, [7] still does not perform as well as our proposed C3. Our method achieves better results under weaker assumptions. #### For Weakness 4: There may be a misunderstanding of our method here. For the $i$-th task, all the demonstrations of our C3 are sampled from the training set $D^i_{\mathrm{train}}$ (line 153), so the training set of C3 is still $D^i_{\mathrm{train}}$, consistent with all compared methods. The comparisons are therefore fair.
Our few-shot learning scenario setup follows existing work [8, 9]. We adopt the standard $N$-way $K$-shot definition: "way" denotes the database and "shot" denotes the training examples corresponding to the database. * For Spider-stream/Combined-Stream, there are an average of 337.8/292.7 training examples per task and 10.1/52.5 databases, i.e., 10-way 34-shot / 53-way 6-shot. We will include this setting in a future release. #### For Question 1: Yes. Theoretically, our proposed method is generalizable and has the potential to be adapted to other NLP tasks, because it is not limited to specific backbone models and does not involve task-specific modules. We chose TSP for several reasons: 1. Databases are one of the most widely used information carriers, playing a fundamental role in various domains. As a key technology for natural language interfaces to databases, the study of TSP is of great significance for many fields. In addition, the natural, continuous updating of databases leads to the task stream. 2. TSP is a difficult task. Unlike general NLU tasks (e.g., NER or RE), it requires the model to understand all aspects of the natural language query, including the entities (column names, table names) and the overall logical structure, and to complete the mapping to the corresponding structured tables. Accordingly, supervised data for this task is more difficult to obtain, resulting in the few-shot challenge. 3. Unlike NLG tasks (such as summarization or machine translation), TSP has more objective evaluation metrics: any small error in the SQL leads to completely incorrect results. We believe this task can more rigorously reflect the true performance of our method. In future work, we will evaluate our method on more types of NLP tasks. #### For Question 2: Due to space limitations, please see the response to Weakness #1 mentioned by reviewer vuh7 for details.
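As a quick sanity check of the $N$-way $K$-shot figures quoted above (our own arithmetic; the rounding-up convention is an assumption, not stated in the rebuttal):

```python
import math

# Average shots per "way" (database) implied by the per-task statistics
# quoted in the rebuttal; rounding up reproduces the reported K values.
spider_k = math.ceil(337.8 / 10.1)    # Spider-stream: 34-shot
combined_k = math.ceil(292.7 / 52.5)  # Combined-stream: 6-shot
print(spider_k, combined_k)  # 34 6
```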
Summary: This paper introduces a novel method for training a continual table semantic parser, which aims to translate natural language into SQL based on task-specific tables with limited training examples. The proposed method integrates parameter-efficient fine-tuning (PEFT) and in-context tuning (ICT) to overcome the challenges of overfitting and catastrophic forgetting. The paper presents a task-adaptive PEFT framework and a teacher-student framework-based solution. Experimental evaluations demonstrate the superiority of the proposed method over existing baselines. Strengths: - The paper addresses an important problem in the field of table semantic parsing, namely the challenge of training a parser on a sequence of tasks with limited supervision. - The proposed method integrates two existing techniques, PEFT and ICT, to overcome overfitting and catastrophic forgetting, respectively. This combination of methodologies is neat and straightforward. - This paper provides a well-structured review of relevant literature, discussing prior work on table semantic parsing and related topics. - The analysis is comprehensive and gives more insight on the future research. Weaknesses: - The baseline comparison appears somewhat unfair. While I appreciate the inclusion of multiple baselines by this paper, I noticed that most of them were implemented on GRAPPA to highlight their inferior performance, whereas this paper's proposed method was applied to T5 (base and large). Additionally, there is a lack of reported MD performance for both PEFT and C3, which should be the key metric to reflect the forgetting degree. This inconsistency in reporting the experimental setup leaves me somewhat confused about the evaluation process and the methodology employed in this paper. - The results in Table 1 show that PEFT is dominant for catastrophic forgetting, while C3 is less dominant. in-context tuning does not play a prominent role here. 
Moreover, PEFT is already well known for its ability to avoid catastrophic forgetting, with the limitation that it cannot quickly learn very different tasks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I notice "Demo 2" appears twice in Figure 1. Is it a typo? - I noticed that the paper utilizes the first dataset to initialize the model parameters for the entire model. This approach raises a natural concern: does it imply that the proposed method is only applicable to scenarios where the logical form is similar, but not when the logical form differs completely? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful to you for providing us with valuable feedback and suggestions for our paper. We provide explanations and clarifications for each weakness and question below. #### For Weakness 1: We chose GRAPPA as the backbone PLM for the baselines for two main reasons: 1. Unlike T5, GRAPPA is pre-trained specifically for the TSP task (including Spider and WikiSQL), so we intuitively thought it would have stronger performance in our scenario. 2. Some of the baselines we used predominantly applied encoder-only PLMs in their original papers, e.g., [8] (RoBERTa) and [11] (BERT), so we followed them and used an encoder-only PLM (GRAPPA) to maintain consistency with the original papers. In fact, by comparing the fine-tuning results in Table 1 and Table 2, it can be seen that GRAPPA performs better on Spider-stream. In contrast, on Combined-stream, T5 exhibits considerably less forgetting than GRAPPA. We hypothesize that this is because T5's text-to-text transfer learning capability makes it less sensitive to the significant differences in logical form across the Spider and WikiSQL tasks. To ensure a fairer comparison, we added experiments that apply the best-performing EMR and EMAR (in Table 1 of our paper) to T5 as new baselines. The experimental results are shown below:

| Method | | Spider-Stream | | | Combined-Stream | |
| ------------------- | :--------: | :-----------: | :--------: | :--------: | :-------------: | :--------: |
| | **TA (%)** | **EA (%)** | **MD (%)** | **TA (%)** | **EA (%)** | **MD (%)** |
| T5-Base + EMR | 59.1 | 56.7 | -13.4 | 62.5 | 62.4 | -5.7 |
| T5-Base + EMAR | 57.8 | 53.6 | -15.7 | 62.0 | 59.7 | -6.5 |
| T5-Base + C3 (Ours) | 67.7 | 66.7 | 0 | 66.4 | 67.7 | 0 |

From the results, C3 still shows a significant improvement over the strong T5 baselines on both datasets.
If this paper is accepted, we will add these results to the camera-ready version. Note that the reason we did not add the MD performance of PEFT and C3 in Table 1 and Table 2 is that they have no forgetting, i.e., MD = 0. Freezing the PLM parameters ensures they are not updated; simply loading the per-task checkpoint (soft prompt) exactly reproduces the performance on previous tasks. We initially thought we should just omit these cells because they would be constant (zero). We regret that this omission misled you, and we will make a special note of it in the caption in a subsequent version. #### For Weakness 2: In our scenario, catastrophic forgetting is more challenging than the few-shot problem. We add the ICT teacher model to PEFT precisely to compensate for its slower learning on new tasks. Although ICT's contribution to the overall performance is not as significant as PEFT's, C3 consistently achieves about a 2% improvement over PEFT on both benchmarks across multiple runs. #### For Question 1: Sorry, this is a typo; it should actually be `Demo 3`. #### For Question 2: In fact, we built Combined-Stream with a similar motivation to the one you mention. In Combined-Stream, the first task is based on Spider, the second on WikiSQL, and the two then alternate in turn. Spider has complex SQL structures with syntax such as `JOIN`, `Nested Query`, `GROUP BY`, etc., whereas WikiSQL has only simple single-table SQL without complex syntax. To some extent, this setting can reflect the impact of the initial task on subsequent tasks. The three plots in Figure 4 show performance oscillations in the baselines caused by this stronger impact, which are effectively mitigated by our proposed method.
Summary: The paper proposes a new continual learning method, C3, for table semantic parsing, which combines parameter-efficient fine-tuning (PEFT) and in-context tuning (ICT). The C3 framework contains a teacher network (ICT) that extracts contextual information from demonstrations and a student network (PEFT) that learns the teacher's output distribution. ICT aims to enhance the few-shot learning ability, and PEFT aims to reduce catastrophic forgetting. The authors construct two stream datasets from WikiSQL and Spider to evaluate their proposed method. C3 outperforms few-shot learning and continual learning baselines on these stream datasets. Strengths: 1. The paper combines parameter-efficient fine-tuning (PEFT) and in-context tuning (ICT) with a teacher-student framework to leverage the unique advantages of each approach. The proposed methods are effective on the continual table semantic parsing task. 2. The proposed framework is model-agnostic and seems generalizable to other continual learning tasks, further enhancing its value and applicability. Weaknesses: I don't see significant weaknesses, but please address my concerns in Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Table 3, when employing a T5-base model as the student model, it is observed that T5-large outperforms GPT-3 as the teacher model. However, interestingly, when the student model is upgraded to T5-large, the choice of teacher model seems less important. I am curious about the performance implications if the teacher model is completely removed when utilizing T5-large as the student model. I don't see this ablation study in Table 2. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful to you for providing us with valuable feedback and suggestions for our paper. We will provide explanations and clarifications for each question.

#### For Question 1:

We hypothesize that when the teacher model and student model share the same architecture (T5), the greater the difference in scale between the two, the more pronounced the distillation effect is. We explored the performance of C3 when removing the teacher model and using only T5-Large as the student model. The experimental results are shown in the following table.

| Method | Spider-Stream | | | Combined-Stream | | |
| -------------------- | :--------: | :-----------: | :--------: | :-------------: | :--------: | :--------: |
| | **TA (%)** | **EA (%)** | **MD (%)** | **TA (%)** | **EA (%)** | **MD (%)** |
| T5-Large + PEFT | 69.7 | 67.4 | 0 | 67.3 | 70.0 | 0 |
| T5-Large + C3 (Ours) | 70.7 | 68.9 | 0 | 69.0 | 71.2 | 0 |

When using T5-Large as a backbone, removing the teacher model still results in a performance degradation compared to the entire C3. If this paper is accepted, we'll add these results to the camera-ready version.

---

Rebuttal 2: Comment: Thank you for your response!
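For reference, a minimal Python sketch (ours, not part of the rebuttal) that tabulates the per-metric gains of C3 over the PEFT-only ablation, using the numbers reported in the table above:

```python
# Reported ablation numbers (T5-Large backbone), copied from the rebuttal table;
# this script only computes the gain of C3 over the PEFT-only variant per metric.
results = {
    "Spider-Stream":   {"PEFT": {"TA": 69.7, "EA": 67.4}, "C3": {"TA": 70.7, "EA": 68.9}},
    "Combined-Stream": {"PEFT": {"TA": 67.3, "EA": 70.0}, "C3": {"TA": 69.0, "EA": 71.2}},
}

# Gain (in percentage points) of C3 over PEFT for TA and EA; MD is 0 for both.
gains = {
    stream: {m: round(methods["C3"][m] - methods["PEFT"][m], 1) for m in ("TA", "EA")}
    for stream, methods in results.items()
}

for stream, g in gains.items():
    print(stream, g)
```

The computed gains range from 1.0 to 1.7 percentage points, consistent with the rebuttal's claim that removing the teacher model degrades performance.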
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Partial Counterfactual Identification of Continuous Outcomes with a Curvature Sensitivity Model
Accept (spotlight)
Summary: This paper studies the partial identification of counterfactual probabilities from observational data. More specifically, the authors consider the standard bandit model containing a treatment A and a reward Y; exogenous variables U exist that affect both A and Y. The treatment A is a binary variable, while the reward Y could take any arbitrary real value bounded in an interval [l, u]. The goal is to evaluate the counterfactual probability P(y_a | a’, y’) from the observational data drawn from P(Y, A). The authors first show that this learning setting is ill-posed without any additional assumptions. That is, the learner could only obtain the non-informative bound [l, u] from the observational data. This observation suggests that one should explore additional parametric assumptions to obtain informative bounds. More specifically, the authors propose a sensitivity model called the Curvature Sensitivity Model. They further propose an implementation of the Curvature Sensitivity Model in the form of a novel deep generative model, called the Augmented Pseudo-Invertible Decoder. Strengths: - This paper is well written and well organized. The authors provide extensive examples and graphical illustrations. This is much appreciated. - Bounding counterfactual probabilities over a continuous outcome from observational data is a challenging problem in causal inference. The authors study partial identification in multi-armed bandit models, which are widely applied in the reinforcement learning literature. The target query P(y_a | a’, y’) is called the expected counterfactual outcome of the (un)treated (ECOU); its applications include fairness analysis in machine learning (Kushner et al., 2018). Since the ECOU is generally non-identifiable and existing bounding algorithms focus on discrete domains, this paper could have an impact on the causal inference community. - The non-informative bounding result in Theorem 1 is interesting. As far as I am aware, this result is novel.
The authors also provide a novel sensitivity model, the Curvature Sensitivity Model (CSM), which permits one to obtain informative bounds by bounding the curvature of the level sets of the functions. They further show that CSMs generalize existing parametric assumptions for point counterfactual identification methods when the bound on the curvature is set to zero. Weaknesses: - This paper studies the bounding of a specific counterfactual probability, called the ECOU, in canonical bandit models. However, some statements in the abstract sound more general than they are. For instance, the paper states "We prove that, in general, the ignorance interval of the counterfactual queries has non-informative bounds, already when functions of structural causal models are continuously differentiable." Based on my reading, the non-informative bounds in Theorem 1 only apply to bounding the ECOU in bandit models. However, if we consider bounding a causal effect P(y_a) in the bandit model, Manski's bound (1990) has been shown to be informative and applicable to continuous reward signals. - The simulation plots could be improved. For instance, it is unclear what the ground-truth ECOUs are in Figure 7. Without that, it is hard to evaluate the validity of the proposed bounding strategy. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What is the actual ECOU in Figure 7? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Assumptions behind the proposed algorithms are explicitly stated. This paper is theoretical and its long-term societal impact is not immediate to see.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the review. It is great to hear that you found our paper well-written and well-organized, and that you think that the paper will have an impact on the causal inference community. We would like to address the mentioned weaknesses and open questions.

### Response to weaknesses

We want to stress that our setting is different from (causal) bandits in reinforcement learning, as we do **not** have access to the environment. As such, we do **not** have the ability to perform experiments and, therefore, do not aim to maximize some reward. Instead, we have a setting based on observational/experimental data (also called logged data) with treatment and outcome (e.g., from a randomized controlled trial), and we aim to infer the expectation of the counterfactual outcome. Manski's bound (1990) for the causal effect P(y_a) is fundamentally **different** from our counterfactual query (see examples in [1]), namely ECOU [ECOT]. As such, it is **not** applicable to our setting. The query P(y_a) lies only at the interventional (second) layer of Pearl’s hierarchy of causation [2] (see Table 1 in Appendix A.1). Generally, the inference on different layers almost never coincides (this is known as the *Causal Hierarchy Theorem* [2]). In our setting, ground-truth bounds would be desirable but are *intractable*. The reason is the following: for a given value of $\kappa$, it is computationally infeasible to derive ground-truth bounds for ECOU [ECOT], even when the ground-truth observational density is known. The ground-truth bounds would require solving a constrained optimization task including partial derivatives and Hausdorff integration. This is intractable, even for such simple distributions as the standard normal.

### Response to questions

Unfortunately, while ground-truth bounds are desirable, they are *intractable*. As we stated above, the reason is the following.
For a given value of $\kappa$, it is computationally infeasible to derive ground-truth bounds for ECOU [ECOT], even when the ground-truth observational density is known. The ground-truth bounds would require solving a constrained optimization task including partial derivatives and Hausdorff integration. This is *intractable*, even for such simple distributions as the standard normal.

References:
- [1] Alexander Balke and Judea Pearl. “Counterfactual probabilities: computational methods, bounds and applications”. In: Uncertainty Proceedings. Elsevier, 1994, pp. 46–54.
- [2] Elias Bareinboim et al. “On Pearl’s hierarchy and the foundations of causal inference”. In: Probabilistic and Causal Inference: The Works of Judea Pearl. Association for Computing Machinery, 2022, pp. 507–556.
Summary: The authors study the problem of counterfactual identification of a continuous outcome variable with a specific causal graph. They show that the expected counterfactual outcome of the (un)treated has non-informative bounds when the function class is arbitrarily smooth. Then, they introduce a new model, called the CSM, that gives informative bounds. Finally, they propose a deep-generative-based method for partial counterfactual inference in the CSM. Strengths: They study an important problem. The paper is well-written and the Appendix is educational. The analysis is quite interesting as it combines differential geometry and causal inference. This opens a new door to improve the current results in causal inference and likely to develop a general framework for counterfactual analysis and more. The presented examples in both the main text and the appendix are informative. Theoretical results are quite interesting, in particular Theorems 1 and 2. Weaknesses: The main question is about the generalizability of this method to more complex causal graphs. It is not hard to see that as the causal diagram becomes a bit more complex, applying a similar analysis might be quite challenging. It is a bit confusing that CSMs with arbitrary $\kappa$ lead to informative bounds, i.e., $l(\kappa,d)>l_a$, while, for instance, $\mathcal{B}(C^2,d)$ is not informative. Because, as it is also mentioned in the text, as $\kappa\rightarrow\infty$, CSM$(\kappa)\rightarrow \mathcal{B}(C^2,d)$. What is the interpretation of this? The proof of Theorem 2 is of existence type; that is, the form of the informative bounds and how they depend on $\kappa$ and $d$ are not given. But is it still possible to say something about their dependencies, e.g., that $l(\kappa,d)$ is decreasing with respect to $\kappa$? Although the Augmented Pseudo-Invertible Decoder method is a fine application of deep generative methods, it is not clear how sensitive it is with respect to the model hyperparameters and the sample size.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! We appreciate that you found our paper interesting, important, and well-written. We would like to respond to the mentioned weaknesses. - Regarding the generalizability of our method to more complex graphs: Our CSM generalizes in theory to other Markovian SCMs (see discussion in Appendix E). This applies to settings where we observe other parents of the outcome, independent of the latent noise. We would like to stress that, to the best of our knowledge, our paper is the first to address partial counterfactual inference for continuous outcomes (therefore, we considered a rather simple causal diagram). For this, we are the first to provide a measure-theoretic formulation aimed at the partial identification task. Further, we are the first to formulate the results of non-informative bounds for smooth functions. We think that our theoretical results are an essential foundation for future work aimed at studying more general causal graphs. - Our CSM with $\kappa = \infty$ (or equivalently $\lambda_\kappa = 0$) should lead to non-informative bounds. However, the heuristic implementation, i.e., APID, sometimes encounters computational and optimization issues that may lead to informative bounds (e.g., in Figure 13 in Appendix H). For example, the gradual “bending” of the level sets is sometimes numerically unstable during training, and some runs omit “bending” altogether, as it would require passing through a high-loss region. Therefore, the computed bounds for ECOU [ECOT] may be too tight. As reviewer 8Zs8 nicely summarized, “In an ideal world, the computational method (i.e., the Augmented Pseudo-Invertible Decoder in Section 5) would take a given value of $\kappa$ and return the [exact] bounds as defined in Definition 2. However, somewhat understandably, returning the exact bounds is not necessarily numerically feasible.” Instead, our APID offers numerical estimates for them.
**Action**: We will expand the discussion of our experiments and the above limitations. - By construction, it is guaranteed that, with decreasing $\kappa$, the bounds for ECOU [ECOT] will shrink, as we decrease the class of permissible functions in the constraints of the optimization problem (Definition 2). **Action**: We will add a formal statement as a new Corollary after Theorem 2. - Our APID should be seen as a proof of concept rather than a fully-fledged out-of-the-box method. Nevertheless, we identified a specific set of hyperparameters (incl. hidden dimensionalities, regularization terms, etc.) that works well in all the experiments (see Appendix G.2), including the newly added real-world study. In particular, we found that the goodness-of-fit on the held-out subset (e.g., test log-likelihood) was consistently favorable after the burn-in stage of training for all the experiments. Regarding the sample size, our APID works well as long as it can fit the univariate density of the outcome well. In our newly added case study, the total number of observations was only $n = n_0 + n_1 = 136 + 124$, for which we still obtained robust results.
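The monotonicity claimed in the rebuttal above (bounds shrink as $\kappa$ decreases) admits a compact statement. The following is a hypothetical form in our own notation ($\underline{Q}(\kappa)$, $\overline{Q}(\kappa)$ denote the lower/upper bounds of Definition 2 under CSM($\kappa$)); it is a sketch, not the paper's actual corollary:

```latex
% Hypothetical sketch (our notation). Since CSM($\kappa_1$) defines a smaller
% feasible set of SCMs than CSM($\kappa_2$) whenever $\kappa_1 \le \kappa_2$,
% the supremum/infimum over the smaller set is tighter:
\begin{equation}
  \kappa_1 \le \kappa_2
  \;\Longrightarrow\;
  \underline{Q}(\kappa_2) \le \underline{Q}(\kappa_1)
  \le \overline{Q}(\kappa_1) \le \overline{Q}(\kappa_2).
\end{equation}
```

This follows directly from monotonicity of suprema and infima over nested constraint sets.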
Summary: This paper studies the partial identifiability of counterfactual queries and proposes an interesting proof of general non-identifiability resulting in non-informative bounds. The authors build upon the intuition used for this proof to propose a new class of structural causal models using principal curvatures. Under this assumption, the authors can achieve informative partial counterfactual identification. The authors then propose an architecture that leverages this assumption. Strengths: This paper proposes a systematic and rigorous study of the partial counterfactual estimation problem. This is exemplified by the breadth and depth of the literature search and by the organization of the related works. While the general non-identifiability of counterfactuals is known, Theorem 1 shows that constraints on the smoothness of the response function do not improve identifiability, which I think is novel. The proposed Curvature Sensitivity Model, motivated by Theorem 1, is shown to benefit from identifiability properties which are theoretically demonstrated. This causal model structure seems to be fairly general and appears to generalize concepts such as non-intersection of response surfaces (which is pointed at in the experiments). The authors further show that the theoretical properties of the CSM can be leveraged empirically by proposing a new architecture. Weaknesses: While there is significant effort in bringing intuition regarding how to construct counterfactuals (bending of the counterfactual level sets), I believe more intuition could be given regarding the general applicability of the CSM in practice. It is unclear when it seems reasonable to assume this model would be valid, and hence the partial counterfactual estimation correct. I appreciate that this is a general problem in counterfactual estimation, but more insight into the applicability would be welcome.
While there is strong theoretical work behind this paper, it is more frugal in terms of experiments. I would encourage the authors to provide more evaluations of their approach, with different types of response surfaces and noise levels. The design of more experiments could potentially lead to more intuition about the assumptions of the method. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could the authors motivate the CSM assumption from a more practical perspective? Extending the experiments section would help strengthen the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Limitations and assumptions are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation of our work. We are happy that you consider our paper a novel, systematic, and rigorous study. We would like to discuss the mentioned weaknesses and questions in the following. - We agree that the $\kappa$ in our CSM may be perceived — in parts — as an abstract sensitivity parameter. Nevertheless, the sensitivity parameter $\kappa$ lends itself to a natural interpretation. It can be interpreted as the *level of non-linearity between the outcome and its latent noises that interact with treatment*. To illustrate this, let us consider the following two scenarios: - (i) When the treatment does not interact with any of the latent noise variables, then we can assume w.l.o.g. a BGM (thus: deterministic counterfactual outcome). There is no loss of generality, as all the other SCMs with $d > 1$ are equivalent to this BGM (i.e., their level set bundles coincide for both factual and counterfactual transformation). - (ii) When the treatment interacts with some latent noise variables, then the counterfactual outcome becomes random, and we cannot assume a BGM but have to use our CSM. In this case, $d$ corresponds to the number of latent noise variables which interact with the treatment. Hence, $\kappa$ bounds the level of non-linearity between the outcome and the noise variables. More formally, $\kappa$ and the principal curvatures can be seen as the largest coefficient of the second-order term in the Taylor expansion of the level set (see Eq. 15 in the Appendix). This interpretation of $\kappa$ goes along with human intuition (see [1]): when we try to imagine counterfactual outcomes, we tend to “traverse” all the possible scenarios which could lead to a certain value. If we allow for highly non-linear scenarios, which interact with treatment, we also allow for more extreme counterfactuals, e.g., interactions between treatment and rare genetic conditions.
Importantly, in order to verify the particular value of $\kappa$, we would have to measure all the latent noises which interact with treatment. Verifying counterfactual inference is a very complex task, and can be achieved by, e.g., combining several datasets (known as the causal marginal problem [2]; yet this has been done only for discrete variables and not for continuous variables as in our work). Still, we are confident that our paper provides a valuable theoretical foundation for future research and can help practical applications (see next bullet item). **Action**: We will add the interpretation of $\kappa$ and its intuition to our revised paper. - We experimented with different coefficients for $\lambda_\kappa$ (which correspond to different bounds on the curvature of level sets). Our CSM considers *all the possible response surfaces* with a certain max curvature of the level sets, even when the dimensionality of the latent noise is set to $d=2$ (see Corollary 4 in Appendix D). To address the issue of applicability and to strengthen the intuition behind our CSM, we performed an additional experiment with a real-world case study (see **PDF**). Therein, we adopt our CSM to answer “what if” questions related to whether the lockdown was effective during the COVID-19 pandemic. Specifically, we ask what the impact on COVID-19 cases in Sweden would have been, had Sweden implemented a stay-home order (which Sweden did not do but which is under scrutiny due to post-hoc legal controversies). Our results imply that Sweden could have had significantly fewer cases had it implemented a stay-home order, even when allowing for some level of non-linearity between outcome and independent latent noise variables that interact with the treatment, e.g., cultural differences, level of trust in government, etc. **Action**: We will add our case study with real-world data to our revised paper. References: - [1] Celar, Lenart, and Ruth MJ Byrne.
"How people reason with counterfactual and causal explanations for Artificial Intelligence decisions in familiar and unfamiliar domains." Memory & Cognition (2023): 1-16. - [2] Luigi Gresele et al. “Causal inference through the structural causal marginal problem”. In: International Conference on Machine Learning. 2022.
Summary: This paper presents a method for obtaining bounds on continuous counterfactual outcomes. In general, level-3 counterfactual outcomes (what would have happened under action $a'$, given knowledge of what did happen under action $a$) are not identifiable from either interventional or observational data. Prior work makes structural assumptions on the underlying structural causal model (e.g., monotonicity with respect to a single unobserved noise variable) to obtain identification, while this work instead makes an assumption on the principal curvature of level sets with respect to two unobserved noise variables. A computational approach is provided to obtain bounds based on this principal curvature assumption, and is demonstrated on two simple synthetic cases. Strengths: In terms of originality and clarity, this paper presents a novel approach to obtaining bounds on continuous counterfactual outcomes, and the contribution is clearly placed in the context of related work. Having seen a few papers on the general topic of partial identification, I found Section A (Extended Related Work) in the Appendix to be a well-written, well-researched, and generally helpful overview of how the current work differs from prior approaches, primarily in the problem considered. Moreover, in terms of clarity and quality, I found Example 1 (Figure 3) to be a very helpful example of counterfactual non-identifiability, and a good explanation of the intuition for how "bending" the counterfactual level sets around the factual level set leads to a different distribution of counterfactual outcomes. Weaknesses: ## Summary I see two major weaknesses in this paper, and look forward to discussion with the authors during the response period if I've misunderstood anything below. Both weaknesses relate to the significance of the approach. 1. How should domain experts choose the parameter $\kappa$? 
Choosing the right bound on the principal curvature of level sets doesn't seem like something a domain expert would plausibly be able to do. 2. Even if we knew the right $\kappa$, can we guarantee that the computational approach will give bounds that are at least conservative (e.g., outer bounds on the true bounds implied by $\kappa$) if not tight? Overall, my current score incorporates my view that these two weaknesses are genuinely challenging problems, so I am somewhat sympathetic to the paper in its current state, but I would at least like to see more discussion of these challenges in the main paper. I'll discuss these weaknesses in more detail below. I am basing my score primarily on these points, though I also had an important clarifying question regarding the ability of the approach to "cover the entire identifiability spectrum". ## Details of major concerns (1) **Domain Knowledge for $\kappa$?** One of the appealing properties of the given approach is that it can interpolate between the extremes of identifiability (though see my clarifying question (3) below). However, there are many trivial ways to interpolate between e.g., point-identification and the non-informative bounds (e.g., by simply choosing a point between them). The value of sensitivity analysis approaches (in my view) lies in the ability to translate domain knowledge into informative bounds. In other areas of sensitivity analysis (for e.g., interventional queries in the presence of unobserved confounding) that rely on domain knowledge to derive bounds, there is an emphasis on creating additional tools to help end-users calibrate their choices of these hyperparameters by e.g., benchmarking (see Figure 2 of Cinelli and Hazlett 2019, full citation below). 
Informally, this type of benchmarking allows one to say "an unobserved confounder would have to be 3x as strong as the variable 'age' in order to change our conclusions", a statement that is a bit easier for domain experts to engage with versus reasoning about the value of an abstract hyperparameter. I'm aware that this example deals with a different type of causal query (i.e., counterfactuals vs interventional queries), and that the lack of identifiability in the current paper is fundamentally different from unobserved confounding - my point is that for any partial identification approach that derives informative bounds via "domain knowledge", like the current approach, a bit more effort is required to make it plausible for domain experts to convert their domain knowledge into the required (abstract) hyperparameters, like bounding the principal curvature of counterfactual level sets. I would be interested in any comments from the authors on this point. Citation: Carlos Cinelli and Chad Hazlett, Making Sense of Sensitivity: Extending Omitted Variable Bias, Journal of the Royal Statistical Society Series B: Statistical Methodology, Volume 82, Issue 1, February 2020, Pages 39–67, https://doi.org/10.1111/rssb.12348 (2) **Correspondence between computational bounds and theoretical bounds**: Supposing we know the right value of $\kappa$, Definition 2 defines an upper and lower bound on the quantity of interest (e.g., the ECOU), with the additional constraint (e.g., in Theorem 2) that all SCMs satisfy Assumption $\kappa$ and are in the class $\mathcal{B}(C^2, d)$. For the practical algorithm, we use $d = 2$. In an ideal world, the computational method (i.e., the Augmented Pseudo-Invertible Decoder in Section 5) would take a given value of $\kappa$ and return the bounds as defined in Definition 2. However, somewhat understandably, returning the exact bounds is not necessarily feasible.
Instead, we are left with heuristics - we optimize an objective over a certain set of SCMs where one of the four terms relates to maximizing / minimizing the counterfactual outcome, and is controlled by the hyperparameter $\lambda_Q$. It was not clear to me how this term trades off with the other terms, e.g., the extent to which we can view the optimization problem as choosing a fixed constraint on $\kappa$ and then maximizing / minimizing the counterfactual subject to this constraint among all observationally-equivalent models. **The use of practical heuristics would be less concerning if this approach resulted in less informative bounds than those in Definition 2, but as I understand it, it may result in *more* informative bounds.** To clarify, this point is briefly discussed in Appendix E (discussion of extensions and limitations) under "Tight bounds". However, my concern is not that the bounds are insufficiently tight (e.g., too wide and could be narrowed), but that they might be *too narrow* (e.g., the theoretical bounds are actually wider than the bounds produced by the algorithm). I believe there is some evidence to suggest that these heuristics change the interpretation of the results. For instance, I would expect that as $\lambda_{\kappa}$ changes, the bound should monotonically increase/decrease. However, looking closely at Figure 7 suggests that in practice there is some crossover (see e.g., $y' = -0.5$, where the lower bound for $\lambda_{\kappa} = 1$ goes above the line for $\lambda_{\kappa} = 10$). I believe my observation below relating to Figure 13 in the appendix (where $\lambda_{\kappa} = 0$ still gives informative bounds) is also related to this point. Overall, it would be good to see a comparison, perhaps for a very simple example, between the ground-truth bounds in Definition 2 and the bounds returned by the approach. 
(3) **Clarification regarding non-informative bounds with $\kappa=\infty$**: It is stated in Corollary 4 that by setting $d = 2$ and varying $\kappa$, we can "cover the entire identifiability spectrum". I took this statement to mean that if e.g., there is no restriction on $\kappa$, then the resulting bounds should be non-informative (i.e., equal to $[l_a, u_a]$), and that for $\kappa = 0$, all values of $d$ yield the same point-identified solution. I'm deriving my understanding primarily from the explanation given in the introduction (see lines 72-73). Is my understanding correct? If my understanding is correct, then what is going on in Figure 13 in the appendix? It appears that setting $\lambda_{\kappa} = 0$ (corresponding to no constraints on $\kappa$) does not result in non-informative bounds. Is this due to the (perhaps necessary) mismatch between the computational approach and perfectly solving the optimization in Definition 2? ## Minor Points As a minor point, it may be worth clarifying where we are interested in taking a maximum/minimum (e.g., over observationally-equivalent SCMs) and where we are interested in taking an average (e.g., over the counterfactual posterior for a single SCM). E.g., when I first read the procedure for abduction on lines 341-344, I was concerned that the approach draws samples from the pre-image, as opposed to looking for $U_1, U_2$ which yield the maximum/minimum counterfactual outcomes, before I realized that this approach makes sense for a fixed SCM to compute a counterfactual expectation like ECOU / ECOT. I also found Figure 6 difficult to follow, especially the bi-directional arrows for "connections". Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How would you go about helping domain experts choose an appropriate value of $\kappa$ or $\lambda_{\kappa}$ in practice? 2. Is there any guarantee that the bounds derived from this approach will be less informative than the theoretical bounds? 
Or, should we generally expect the bounds to be more informative, since we will inevitably fail to find the SCMs that yield the true maximum / minimum for a given $\kappa$? 3. Related to the previous question, is my understanding of "cover the entire identifiability spectrum" correct in Corollary 4? If so, it may be worth revising the theoretical statement to make this clear. If my understanding is correct, then what is going on in Figure 13, i.e., why does setting $\lambda_{\kappa} = 0$ still yield informative bounds? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: In the main paper, I would have liked to see more discussion of the limitations outlined in the "weaknesses" section above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are very grateful for such a rigorous, helpful, and in-depth review. We also appreciate that the reviewer found our paper clearly written, our method novel, and our contribution well-placed in the related work. Here is our answer to the questions: - (1) We stress that counterfactual inference is a non-trivial problem in statistics and probability. Even in human cognition, counterfactuals are much more complicated than causal statements [1]. To the best of our knowledge, our paper is the first to address partial counterfactual inference for continuous outcomes. For this, we provided a measure-theoretic formulation and formulated the results of non-informative bounds for smooth functions. Our CSM is one viable solution (out of many) to build a sensitivity model. We agree that κ in our CSM may be perceived as an abstract sensitivity parameter. Yet, it can be interpreted as the *level of non-linearity between the outcome and its latent noises that interact with treatment*. To illustrate this, let us consider two following cases: - (i) When the treatment does not interact with any of the latent noises, then we can assume w.l.o.g. a BGM. There is no loss of generality, as all the other SCMs with d > 1 are equivalent to this BGM (see Ex. 6). - (ii) When the treatment interacts with some latent noise variables we cannot assume a BGM and have to use our CSM. In this case, d corresponds to the number of latent noise variables which interact with the treatment. Hence, κ bounds the level of non-linearity between the outcome and the noise variable. More formally, κ and principal curvatures can be seen as the largest coefficient of the second-order term in the Taylor expansion of the level set (see Eq. 15 in App. B). This interpretation of the κ goes along with human intuition (see [1]): when we try to imagine counterfactual outcomes, we tend to “traverse” all the possible scenarios which could lead to a certain value. 
If we allow for highly non-linear scenarios, we also allow for more extreme counterfactuals, e.g., interactions between treatment and rare genetic conditions. **Action**: We will add the interpretation of κ and the intuition behind its choice to our revised paper. To strengthen the practical value of our CSM, we provided a new case study (see **PDF**). Therein, we adopt our CSM to answer “what if” questions related to whether the lockdown was effective during the COVID-19 pandemic. - (2) We agree that, for a given value of κ, it is computationally infeasible to derive ground-truth (GT) bounds, even when the GT observation density is known. This is different from, e.g., layer 2 models like the MSM, where the GT solution could be expressed in terms of conditional quantiles. The GT bounds in Def. 2 require solving a constrained optimization task including partial derivatives. This is intractable even for distributions as simple as the normal. Thus, we resort to tractable alternatives and, hence, design our APID. The approach of turning constraints into parts of the loss is common practice in deep learning for partial identification [2, 3]. In our case, this involves adding both the curvature loss controlled by λ_κ and the query loss controlled by λ_Q. The exact relationship between κ and the corresponding λ_κ is unknown, but they are inversely related. In general, this relationship is known only for very simple models, like ridge or lasso regressions. Regarding the choice of λ_Q, we provided additional experiments in our original supplements in App. H.2. Therein, the bounds are simply moved towards/away from the BGMs-EQTDs bounds, but their relative location almost does not change. We acknowledge that our APID is a proof of concept rather than a fully-fledged method. Hence, for the multi-modal distribution in Fig.
7, APID sometimes yielded inaccurate bounds (e.g., too tight), mainly due to the computational instability (i.e., the gradual “bending” of the level sets is unstable during training, and some runs omit “bending” at all as it requires passing through the high loss value region). **Action:** We will add this discussion to our Limitations section. - (3) Thank you for this correction. Indeed, by setting κ= 0 in our CSM, we do not achieve point identification. Generally, it is not clear how to generalize the point identification of BGMs, e.g., there is no natural notion of monotonicity for functions from R^d to R. We summarized this result as a part of our *newly added theoretic result*, which we presented in Lemma 3 of the **PDF**. Therein, we introduce a so-called BGMs-EQTDs identification gap. This gap describes the closest we can get to the point identification by setting κ=0. Although our sensitivity model does not achieve full point identification for κ= 0, our CSM can still be useful for decision-making, as we show in our newly added case study (see **PDF**). Regarding the computational experiment (Fig. 13): when setting λ_κ=0 (same as κ= ∞), some runs omitted “bending” as that comes at the cost of the loss increase. **Action:** We will add the new theoretical result from Lemma 3 (BGMs-EQTDs identification gap) to our revised paper. Minor Points * We will clarify throughout our paper that we work with ECOU, which are *expectations* of the counterfactual outcomes. Then, min/max are performed on top of the expectations. * Regarding Fig. 6, we have added an expanded version in the supplements of our original paper (see App. G, Fig. 12) with unidirectional edges, which explains the training and inference of APID. References: - [1] Celar, Lenart et al. "How people reason with counterfactual and causal explanations for AI decisions in familiar and unfamiliar domains." Memory & Cognition (2023): 1-16. - [2] Kevin Xia et al. 
“Neural causal models for counterfactual identification and estimation”. In: ICLR. 2023. - [3] Kevin Xia et al. “The causal-neural connection: expressiveness, learnability, and inference”. In: Advances in NeurIPS. 2021 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed and helpful response - I'll give my quick reactions below (1), regarding the interpretation of $\kappa$ - I appreciate the intuition regarding the non-linearity between the outcome and latent noise that interacts with treatment. From a mathematical / theoretician point of view, I think there is a certain elegance to that notion. However, I don't think that it helps much with practical application - I would imagine that the type of latent factors considered here (e.g., uniform random variables) often lacks a clear real-world interpretation, making it difficult to use domain knowledge to choose bounds on non-linearity. (2), regarding the correspondence between the theoretical bounds & the bounds computed by the algorithm - I suspected as much, that there isn't a clear connection given the intractability of the original problem, and I take your response to confirm that suspicion. I do appreciate your intuition regarding what goes wrong during optimization, e.g., "some runs omit 'bending' at all as it requires passing through the high loss value region" (3), regarding the extremes of setting $\kappa$ to 0 or $\infty$: * For the unconstrained case where $\kappa = \infty$, it does feel rather unsatisfying that we still have informative bounds, but this appears to be due (based on your responses) to the imperfect optimization procedure. * For the highly-constrained case where $\kappa = 0$, am I correct to understand that the original statement was not entirely accurate? E.g., on Line 366, BGM is referred to as a special case of CSM with $\kappa = 0$, and on Line 72 in the intro it is stated that "we further show that we obtain the BGMs from [68] as a special case when setting the curvature to zero".
I appreciate the new theoretical result, but just wanted to clarify the context. --- Reply to Comment 1.1.1: Comment: Thanks for reading our response, and we are again grateful for such interest and attention to detail in our work. We would like to further clarify the abovementioned issues. - (1) Thank you for bringing up the issue of applicability. We argue that the real-life application of sensitivity models works in reverse: rather, we try out different values of the sensitivity parameter and see what value changes our decision or conclusion. Let us consider the following use case of the marginal sensitivity model (MSM) for the study of the smoking effect on lung cancer. E.g., with the MSM it was shown that the odds of getting treatment (smoking) have to be almost 5 times higher conditional on some confounder, to explain away the causal effect on lung cancer. The MSM doesn't provide any information about the interpretation or distribution of such a confounder, i.e., it is up to the domain expert to speculate about the existence of such a variable (or set of variables). In real life, the ground truth sensitivity parameter of the MSM is not known even for this smoking-lung cancer scenario. The same could be said about the application of our CSM (see newly added Real-world case study). Regarding the uniformity of latent variables: considering the limited space of the NeurIPS submission, we only provided the most basic setting of our CSM, where latent variables are assumed to be uniform and independent. We relied on this simplification for the sake of clarity and ease of the mathematical notation. The extension to arbitrary (absolutely) continuous variables, and how non-linear transformations of those affect the curvature, would be an interesting extension of our paper. - (2) Thank you for the nice summary, you understood it right.
We would like to stress again, though, that our APID is an imperfect but tractable heuristic instantiation of the CSM, which is a valuable contribution. - (3) The original statement was accurate but seemed misleading. CSM with $\kappa = 0$ indeed includes BGMs as a special case, but also the whole interval in-between. With Lemma 3, we also showed that it additionally contains EQTDs and the interval in-between. We will add the discussion above to the revised version of the paper.
Rebuttal 1: Rebuttal: We are very grateful for the positive and in-depth feedback from the reviewers. We have carefully addressed all of the questions in the individual responses below. We will incorporate all changes (labeled with **Action**) into the camera-ready version of our paper. We have additionally uploaded empirical results as a **PDF file**. Our main improvement is the following: we added a **new experiment with a real-world case study**. Therein, we adopt our curvature sensitivity model (CSM) to answer “what if” questions related to whether the lockdown was effective during the COVID-19 pandemic. We find that our CSM provides important and meaningful insights that are of practical value. We want to highlight that, to the best of our knowledge, our paper is the first to address partial counterfactual inference for continuous outcomes. In particular, we are the first to provide a measure-theoretic formulation of the partial identification task, and the first to formulate the results of non-informative bounds for smooth functions. In general, designing a sensitivity model for counterfactual inference is a non-trivial task. For example, it is not clear how to straightforwardly generalize the point identification results for this purpose (e.g., there is no conventional notion of monotonicity for functions from R^d to R). Our curvature sensitivity model (CSM) is one viable solution to build such a sensitivity model, where the sensitivity parameter $\kappa$ can be interpreted as the *level of non-linearity between the outcome and its latent noise variables that interact with treatment*. Pdf: /pdf/e05d731265d997c31494ee9287b011f2c7421000.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Model-Free Reinforcement Learning with the Decision-Estimation Coefficient
Accept (poster)
Summary: The paper studies a model-free RL algorithm using the DEC as the complexity measure. Strengths: Extending the previous model-based algorithm to a model-free algorithm is a nice contribution. Weaknesses: I believe this paper provides a nice contribution to the DEC family of papers. But I would like the authors to discuss the implications more. Why do we care about extending model-free algorithms to bilinear classes? Too many complexity measures for RL function approximation have been proposed recently, so I am concerned about how useful these theories are in practice. The algorithm for Bellman-Eluder dimension is model-free. Then may I ask why we need a new algorithm? I hope the authors could pay more attention to the computational side. So far, none of the works in the DEC line are computationally efficient, and they even lack any experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. No discussion or solution on how to implement the algorithm. 2. The analysis seems to build on two existing works: Zhang [32] and Foster et al. [13]. Not very exciting about the technique. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and useful suggestions! > I believe this paper provides a nice contribution for the DEC family of papers. But I would like the authors to discuss more on the implication side. Why we care about extending model-free algorithms to bilinear classes? There are too many complexity measures for RL function approximation that are proposed recently so I am concerned about how useful those theories are for practice. The algorithm for Bellman-Eluder dimension is model-free. Then can I ask why we need a new algorithm? The goal of this work is not to obtain a new algorithm for certain model classes, but rather to show for the first time that the DEC and estimation-to-decisions framework for proving regret upper bounds leads to meaningful guarantees in the model-free setting. Prior to our work, the DEC/estimation-to-decisions framework (Foster et al. 2021) offered the most general framework for RL with general function approximation, as well as the only complexity measure that leads to both *upper* and *lower* bounds on regret. However, the upper bounds in Foster et al. (2021) are limited to model-based settings. On the other hand, complexity measures such as bilinear classes and bellman-eluder dimension apply to model-free settings, but are less general than the DEC (and only give upper bounds, not lower bounds). Our work gives a unifying perspective, and enjoys the generality of the DEC framework, yet also gives meaningful guarantees for model-free settings. A secondary advantage of the estimation-to-decisions framework is that it yields a modular algorithm, with a conceptual separation between the estimation and decision-making components. In contrast, the GOLF algorithm (for classes with bounded Bellman-Eluder dimension) is more ad-hoc in nature, employing global optimism to select a value function and its associated greedy policy, which is optimistic and consistent with all prior observations. 
The estimation and decision-making components of GOLF are thus tightly coupled. Our work shows that these components can be decoupled while still obtaining similar guarantees, which we believe constitutes a significant conceptual contribution. Let us mention in passing that we consider bilinear classes only as a stylized example to illustrate our techniques. Bilinear classes capture many known generalizations of tabular MDPs, including linear MDPs, linear mixture MDPs, MDPs with low Bellman rank, feature selection in low-rank MDPs, and many more. Note, however, that our techniques are not limited to this setting. > I hope the authors could pay more attention to the computational side. So far, none of the works in the DEC line are computationally efficient, and they even lack any experiments. We emphasize that focusing on statistical complexity as opposed to computational efficiency is a common theme in the line of research on RL with general function approximation, and is not exclusive to the DEC (e.g., existing works on bellman rank, bilinear classes, bellman-eluder dimension, and so on are also inefficient). Understanding how to design computationally efficient algorithms for RL with general function approximation is a fascinating direction for future research, but the first step, which our paper and the papers above work towards, is to understand what is even possible statistically. Nonetheless, we hope that by placing RL theory on solid statistical foundations, our work can serve as a starting point for future work on efficient algorithms.
Summary: This paper studies a variant of the recently proposed framework of Foster et al. for sequential decision making, based on a concept called "decision estimation coefficient" (DEC). The variant proposed here is based on enhancing the estimation step in the "estimation-to-decisions" (E2D) with an optimistic bias inspired by the recent work of Zhang. The authors show that this biased estimation scheme, coupled with an appropriately adjusted decision-making rule, can satisfy very similar regret guarantees for a wide range of sequential decision problems, and can provide improvements in certain settings. In particular, the authors show that their approach can handle a large class of tractable models for reinforcement learning called "bilinear classes". Strengths: The paper is well-written and the technical content is of excellent quality. The proposed approach is justified and explained well, and its analysis is also presented in an accessible way (at least on a high level). While the regret bounds for bilinear MDP classes is not novel in the sense that there exist other algorithms that achieve the same guarantees, I appreciate the conceptual contribution of showing that the DEC framework is also capable of tackling these (relatively) challenging problems. I appreciated the very careful comparison between all the relevant DEC variants in Section 3: the authors didn't just propose a technique and proved some bounds about it, but also explored a range of other opportunities and explained the differences between them in an accessible manner. Weaknesses: On the negative side, the rates that the authors derive are not particularly great: the scaling goes from $T^{2/3}$ in the setting with the most stringent assumptions all the way to $T^{5/6}$ as more and more assumptions are dropped. While the authors discuss this limitation quite openly, I would have appreciated some more discussion as to where this relatively poor scaling comes from. 
One contributing factor is certainly the use of batched estimation steps. My understanding is that some further looseness may come from the optimistic bonuses added to the estimation procedure, which makes the total estimation error grow polynomially with the number of updates (as opposed to logarithmically, which would allow getting sqrt{T} rates after putting everything together). I wonder though if this intuition is correct, and I would appreciate it if the authors could clarify what rate they would get if they could afford to set n=1 (without paying for it). Altogether, it would have been nice if the authors had compared the various notions of estimation error with the same care that the DEC variants have received. Overall, I am leaning towards suggesting acceptance, but I would feel more strongly about my support if the authors were able to address my questions above in a satisfying way. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive comments and helpful suggestions! > While the authors discuss this limitation quite openly, I would have appreciated some more discussion as to where this relatively poor scaling comes from. One contributing factor is certainly the use of batched estimation steps. My understanding is that some further looseness may come from the optimistic bonuses added to the estimation procedure, which makes the total estimation error grow polynomially with the number of updates (as opposed to logarithmically, which would allow getting sqrt{T} rates after putting everything together). I wonder though if this intuition is correct, and I would appreciate it if the authors could clarify what rate they would get if they could afford to set n=1 (without paying for it). This is a great question. There is indeed room for further work along these lines. The reason for our suboptimal rates differs slightly in the two settings we consider: (1) Corollary 2.1, which makes no Bellman completeness assumption, and (2) Corollary B.1, which assumes Bellman completeness. We discuss both cases separately below. **Without the completeness assumption (Proposition 2.1 and Corollary 2.1):** For our results that do not make use of completeness, we use a divergence based on average bellman error. Batching is necessary for algorithms based on average bellman error (Prop 2.1 and Corollary 2.1) due to the following well-known technical issue regarding average bellman error: Because the quantity D_{bi} that we are interested in is the *square of an average* as opposed to an *average of a square*, we cannot take advantage of concentration across rounds/iterations, and multiple samples are needed to estimate this quantity well for each iteration (note that the guarantee in Proposition 2.1 is vacuous if there is n=1 sample per batch). This issue is precisely why well-known algorithms based on average bellman error such as OLIVE (Jiang et al. 
2017) and BiLinUCB (Du et al. 2021) require multiple samples per iteration, and is one of the main reasons why these algorithms are analyzed in the PAC framework instead of regret. Without bellman completeness, we obtain T^{3/4} regret in the on-policy case and T^{5/6} for the off-policy case. The reason that these rates are worse than \sqrt{T} is due to the batching issue above (as well as the additional issue of going off policy in the latter case), and is not related to the use of optimistic estimation. If one is interested in PAC guarantees instead of regret, we expect that our analysis can be adapted to provide tighter guarantees. **With the completeness assumption:** Corollary B.1 gets a rate of T^{2/3}. The algorithm uses a small batch size of n=H, which causes no degradation in rates, and is only needed to ensure that certain conditional independence assumptions required by Theorem C.1 are satisfied; batching is required in Agarwal & Zhang (2022) for the same reason. The improvement in rate that we achieve in the bellman-complete setting (compared to Corollary 2.1) is due to the fact that the batch size does not grow with T, which is facilitated by the two-timescale exponential weights method from Agarwal and Zhang (2022), which takes advantage of the bellman-completeness assumption. In fact, as stated, their result would seem to imply a bound on the estimation error in Proposition B.1 that scales as $\log|\mathcal{Q}| + \sqrt{\log|\mathcal{Q}|\cdot T}/\gamma$, which would lead to a \sqrt{T} regret bound for decision making. Unfortunately, there is a gap in their proof (reference [1] in our paper) which is due to the optimistic bonuses added to the estimation procedure. Our self-contained proof fixes this gap, but this leads to a degradation in the rate.
We suspect that there exists an estimation algorithm that can obtain the estimation bound claimed above, which would lead to \sqrt{T} regret, but this likely requires new algorithmic ideas and is out of scope for the current paper. We mention in passing that compared to other model-free algorithms based on completeness (e.g., GOLF), the reason for the technical difficulties around estimation above is that our results require *online* estimation guarantees rather than *offline* estimation guarantees. This is a more stringent requirement, but is necessary to take advantage of the DEC. > Altogether, it would have been nice if the authors had compared the various notions of estimation error with the same care that the DEC variants have received. Thank you for the useful suggestion! We will add discussion around this point in the final version. --- Rebuttal Comment 1.1: Comment: Thank you for the response! I really appreciate the clarification. Perhaps it would be useful to include (at least a version of) this text in the final version of the paper. I especially wonder about the nature of the gap in the analysis of Agarwal and Zhang --- has this issue been publicly known? If so, an explicit pointer in the present submission would be appreciated. If not, it could make sense to provide a short description of the issue somewhere in the present submission (of course only after discussing with the original authors to make sure that they are fine with it). --- Reply to Comment 1.1.1: Comment: Thank you for the suggestion! We have been in contact with the authors of Agarwal and Zhang (2022) and they are planning to update their paper with a fix under additional structural assumptions. We will be sure to include a pointer once the updated version is available.
Summary: This paper contributes to a line of research on decision-estimation coefficients (and related algorithms). An optimistic variant of the original DEC is introduced and it is shown that this variant can lead to a new related meta-algorithm with optimism. This new structure and algorithm are shown to provide non-trivial regret bounds for model-free RL for bilinear classes (a known framework for sample efficient learning). It is also shown that the original DEC is insufficient to achieve the same. Finally it is shown that posterior sampling does not solve all the problems that E2D.Opt can solve. Strengths: Overall this is a good paper that is presented clearly and makes an interesting contribution to a growing line of research. The application of E2D-style algorithms to model-free settings is important, as well as the ability to incorporate classes that were previously not handled. The discussion of where advantages exist for optimistic DEC are thorough and helpful. The additional results also shed light on the problems that can be handled by both coefficients and other algorithms. Weaknesses: The new coefficient is used to reproduce an existing result. This leads to a natural new meta-algorithm and theorem, but since they are only shown in one specialized case, it’s unclear whether it’s useful beyond this one setting as a “meta-algorithm.” While it is apparent that one could swap out different divergences and oracles, it may be described better as a single algorithm, inspired by the original E2D. The regret bound appears to have a slightly worse rate than what would be achievable with the original BiLin-UCB, which I believe has T^{2 / 3} if converted properly to regret. It’s not clear how one would implement such an algorithm in practice. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Does optimistic DEC subsume all of the original DEC results? Are there any downsides compared to the original E2D? 
What benefit does the batching offer if the regret bounds still scale as poly(K)? Why not just use one sample per batch? Prop 3.4: shows that PS does not cover all the problems that E2D.Opt can solve. How about the other way? Prop 3.4: What is the expectation over? Is this an average regret over MDPs in the class and if so, is posterior sampling misspecified? Suggestions: (14) does not seem to point to an equation in the main paper. Lines 94-95: It may be more clear to say that “there exist algorithms/oracles” that do this. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: No, but this work is theoretical. The limitations are either apparent from the assumptions or discussed adequately. A conclusion to summarize would be helpful though. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive comments and helpful suggestions! > Does optimistic DEC subsume all of the original DEC results? Are there any downsides compared to the original E2D? Yes, the optimistic DEC—when equipped with Hellinger distance—recovers all of the main guarantees based on the DEC in Foster et al. (2021). By accommodating more general divergences, we generalize these results. The only possible downside we are currently aware of concerns computation: For some classes (e.g., linear models), there are computationally efficient algorithms for non-optimistic online estimation, but it is unclear whether *optimistic* online estimation can also be performed in a computationally efficient fashion. > What benefit does the batching offer if the regret bounds still scale as poly(K)? Why not just use one sample per batch? Batching is necessary for algorithms based on average bellman error (Prop 2.1 and Corollary 2.1) due to the following well-known technical issue regarding average bellman error: Because the quantity D_{bi} that we are interested in is the *square of an average* as opposed to an *average of a square*, we cannot take advantage of concentration across rounds/iterations, and multiple samples are needed to estimate this quantity well for each iteration (note that the guarantee in Proposition 2.1 is vacuous if there is n=1 sample per batch). This issue is precisely why well-known algorithms based on average bellman error such as OLIVE (Jiang et al. 2017) and BiLinUCB (Du et al. 2021) require multiple samples per iteration, and is one of the main reasons why these algorithms are analyzed in the PAC framework instead of regret. > Prop 3.4: shows that PS does not cover all the problems that E2D.Opt can solve. How about the other way? E2D.Opt subsumes all results we are aware of based on the frequentist posterior sampling framework used in Zhang (2021) and subsequent work (e.g., Agarwal and Zhang (2022), Zhong et al. (2022)). 
In addition, it can be shown that the DEC is bounded whenever the information ratio of Russo and Van Roy, which is commonly used to analyze posterior sampling and related algorithms, is bounded; see discussion in Foster et al (2021,2022). Understanding connections between the DEC and more general posterior sampling-like algorithms is an interesting question for future research. > Prop 3.4: What is the expectation over? Is this an average regret over MDPs in the class and if so, is posterior sampling misspecified? Note that Proposition 3.4 concerns a frequentist setting, and “Posterior Sampling” in this context refers to the frequentist posterior sampling approach based on optimistic estimation given by Zhang (2022): $\mu^{t}$ is a randomized estimator (distribution over models) produced by an optimistic estimation algorithm, and the frequentist posterior sampling scheme samples a model from this distribution and plays the optimal decision for it. Since we are in a frequentist setting, the expectation only considers the randomness of the algorithm and the randomness of the samples drawn from the model itself. We mention in passing that while the purpose of this example was to compare to the frequentist posterior sampling approach of Zhang et al. (2022), it is straightforward to show that our lower bound construction also applies to classical posterior sampling in the Bayesian framework with a well-specified prior. We are happy to include this result if it is of interest to the reviewer. > The new coefficient is used to reproduce an existing result. This leads to a natural new meta-algorithm and theorem, but since they are only shown in one specialized case, it’s unclear whether it’s useful beyond this one setting as a “meta-algorithm.” While it is apparent that one could swap out different divergences and oracles, it may be described better as a single algorithm, inspired by the original E2D. 
The goal of this work is not to obtain a new algorithm for certain model classes, but rather to show for the first time that the DEC and estimation-to-decisions framework for proving regret upper bounds leads to meaningful guarantees in the model-free setting. Prior to our work, the DEC/estimation-to-decisions framework (Foster et al. 2021) offered the most general framework for RL with general function approximation, as well as the only complexity measure that leads to both upper and lower bounds on regret. However, the upper bounds in Foster et al. (2021) are limited to model-based settings. On the other hand, complexity measures such as bilinear classes and Bellman-Eluder dimension apply to model-free settings, but are less general than the DEC (and only give upper bounds, not lower bounds). Our work gives a unifying perspective, and enjoys the generality of the DEC framework, yet also gives meaningful guarantees for model-free settings. > The regret bound appears to have a slightly worse rate than what would be achievable with the original BiLin-UCB, which I believe has T^{2/3} if converted properly to regret. Please refer to our response to Reviewer kpYc for detailed discussion around this issue. Thanks again for the suggestions! --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the helpful response! > it is straightforward to show that our lower bound construction also applies to classical posterior sampling in the Bayesian framework with a well-specified prior. We are happy to include this result if it is of interest to the reviewer. It may be worth mentioning in passing, but I see now it is clear from the original proof.
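To make the "square of an average" point from the batching discussion above concrete, here is a small self-contained numerical sketch (our own illustration, not from the paper): with samples of mean 0 and variance 1, the target quantity $(\mathbb{E}[X])^2 = 0$, but squaring single samples estimates $\mathbb{E}[X^2] = 1$ instead, while squaring the mean of an $n$-sample batch shrinks the bias to $\mathrm{Var}(X)/n$ — hence the need for multiple samples per batch.

```python
# Illustrative sketch: estimating the *square of an average* (as for average
# Bellman error) needs multiple samples per batch. With X ~ N(0, 1), the
# target (E[X])^2 = 0, but a single squared sample estimates E[X^2] = 1.
import random
import statistics

random.seed(0)

def estimate_squared_mean(n_per_batch: int, n_batches: int = 2000) -> float:
    """Average of (batch mean)^2 over many batches of size n_per_batch."""
    vals = []
    for _ in range(n_batches):
        batch = [random.gauss(0.0, 1.0) for _ in range(n_per_batch)]
        vals.append(statistics.fmean(batch) ** 2)
    return statistics.fmean(vals)

one_sample = estimate_squared_mean(1)      # concentrates near E[X^2] = 1
many_samples = estimate_squared_mean(100)  # bias ~ Var(X)/n = 0.01
```

With one sample per batch the estimator is off by roughly `Var(X) = 1` regardless of how many batches are averaged, which mirrors why the guarantee discussed above is vacuous at `n = 1`.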
Summary: This paper adapts the DEC framework and E2D algorithm to model-free reinforcement learning. Specifically, it proposes the optimistic DEC and proves that it appears in the regret bound for the optimistic E2D algorithm with an optimistic estimation oracle. The example of bilinear classes is provided. The paper discusses when the optimistic E2D algorithm has benefits over E2D in its original form. Strengths: This paper is well-written and provides concrete steps toward understanding the general framework of DEC. The original DEC framework is mostly model-based decision making, whereas the new framework can handle model-free RL. Weaknesses: This paper is largely built upon [13] and [32]. The additional results over [13] and [32] are a bit incremental. While using the DEC framework to provide a regret bound for bilinear classes is nice, this does not yield a new/improved regret bound for the bilinear class. In Appendix A, the authors discussed the relationship with the constrained DEC [16], and mentioned that it would be interesting to explore whether optimistic estimation can be combined with the constrained DEC techniques. To make this paper stronger, one possible direction is to discuss the relationship with the constrained DEC in detail (either provide some results in this direction, or point out the technical difficulty of the combination). Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the general picture of the lower bound side of optimistic DEC? Am I correct that oDEC is always smaller than DEC, and DEC serves as a lower bound, so it is not meaningful to discuss oDEC as a lower bound? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments! > Am I correct that oDEC is always smaller than DEC, and DEC serves as a lower bound, so it is not meaningful to discuss oDEC as a lower bound? The relation between the oDEC and the DEC depends on the choice of divergence and the model class. As shown by Corollary 3.1, for Hellinger divergence, the two are equivalent up to constants, which means that our results are always tighter than the guarantees in Foster et al. (2021). As shown by Proposition 3.3, there is a model class for which the oDEC with bilinear divergence is bounded, but the DEC with bilinear divergence can be exponential. In general though, the tightest relationship between oDEC and DEC that we are aware of is Proposition 3.1, which shows that the oDEC with divergence D is never larger than the DEC with the “flipped” version of D. > What is the general picture of the lower bound side of optimistic DEC? The DEC defined with Hellinger divergence serves as a lower bound for expected regret. As mentioned before, when defined with respect to Hellinger divergence, the DEC and oDEC are equivalent. Hence, the oDEC with Hellinger divergence also serves as a lower bound on expected regret. Note that the regret upper bound we have obtained for bilinear classes is in terms of the oDEC with squared Bellman error, which in general is not equivalent to the (o)DEC with Hellinger divergence. Proving lower bounds based on variants of the DEC with alternative divergences D such as squared Bellman error is a fascinating question for future research, but is quite subtle, as the choice of D seems to be tied to the model class under consideration (i.e., depending on the choice of D, the oDEC may lead to tight results for some model classes but not others). 
Let us mention, however, that the goal of this work is not to obtain matching upper and lower bounds on regret in terms of a variant of the DEC, but rather to show that the estimation-to-decisions framework for proving regret upper bounds extends to the model-free setting. The question of proving lower bounds is somewhat orthogonal to our focus, though we hope that our results can inspire future work along this direction. > This paper is largely built upon [13] and [32]. The additional results over [13] and [32] are a bit incremental. While using the DEC framework to provide a regret bound for bilinear classes is nice, this does not yield a new/improved regret bound for the bilinear class. The goal of this work is not to obtain a new algorithm for certain model classes, but rather to show for the first time that the DEC and estimation-to-decisions framework for proving regret upper bounds leads to meaningful guarantees in the model-free setting. Prior to our work, the DEC/estimation-to-decisions framework (Foster et al. 2021) offered the most general framework for RL with general function approximation. However, the upper bounds in Foster et al. (2021) are limited to model-based settings. On the other hand, complexity measures such as bilinear classes and Bellman-Eluder dimension apply to model-free settings, but are less general than the DEC. Our work gives a unifying perspective, and enjoys the generality of the DEC framework, yet also gives meaningful guarantees for model-free settings. A secondary advantage of the estimation-to-decisions framework is that it yields a modular algorithm, with a conceptual separation between the estimation and decision-making components. In contrast, the GOLF algorithm (for classes with bounded Bellman-Eluder dimension) is more ad-hoc in nature, employing global optimism to select a value function and its associated greedy policy, which is optimistic and consistent with all prior observations. 
The estimation and decision-making components of GOLF are thus tightly coupled. Our work shows that these components can be decoupled while still obtaining similar guarantees, which we believe constitutes a significant conceptual contribution. We consider bilinear classes only as a stylized example to illustrate our techniques. Bilinear classes capture many known generalizations of tabular MDPs, including linear MDPs, linear mixture MDPs, MDPs with low Bellman rank, feature selection in low-rank MDPs, and many more. Note, however, that our techniques are not limited to this setting. --- Rebuttal Comment 1.1: Title: Thanks for clarification Comment: Thanks to the authors for clarification. I am wondering if the authors can also respond to the question: > In Appendix A, the authors discussed the relationship with constrained DEC [16], and mentioned that it would be interesting to explore whether optimistic estimation can be combined with the constrained DEC techniques. To make this paper stronger, one possible direction is to discuss the relationship with constrained DEC in details (either provide some results in this direction, or point out the technical difficulty to the combination). I agree that oDEC and constrained DEC are two orthogonal directions. The question is whether they can be naturally combined. If they can simply be combined, maybe these can be added to this paper. If not, some thoughtful discussions in this paper will be helpful. --- Reply to Comment 1.1.1: Comment: Thank you for the interest! Recall that Foster et al. (2023) give variants of the constrained DEC for the PAC setting and the regret setting. For the PAC setting, we are optimistic that our techniques can be combined with those of Foster et al. (2023) to derive guarantees based on an optimistic variant of the constrained DEC (however, as shown in Foster et al. (2023), the constrained and offset DEC for PAC coincide up to lower-order terms, so the value of such an extension is unclear). 
For the regret setting, the algorithms in Foster et al. (2023) are quite tailored to Hellinger distance, and make heavy use of the fact that it satisfies the triangle inequality and symmetry. Adapting their techniques to general divergences, even without the use of optimistic estimation, appears to be quite non-trivial, and is likely out of scope for this paper. Nonetheless, we are happy to include additional discussion around these issues in the final version of the paper.
NeurIPS_2023_submissions_huggingface
2023
ProteinNPT: Improving Protein Property Prediction and Design with Non-Parametric Transformers
Accept (poster)
Summary: This paper introduced ProteinNPT, which is a transformer-based semi-supervised learning model. By combining the MSA transformer and the Non-parametric transformer (NPT), it outperforms the baselines on property and mutation effect predictions. Then they tested their model on the protein design task and showed its potential to help in designing novel protein sequences with desired labels. In addition, its contributions also include the new extended benchmark dataset, ProteinGym, as part of their test set to evaluate their model. This paper leveraged large quantities of unlabelled natural sequences to pretrain the model to have a more informative embedding. Moreover, the tri-axial attention helps the model to learn relationships not only within the row, but also across the data points in a batch. All these build up together to boost the performance of the model on multiple downstream tasks. Strengths: The paper addresses an important topic which is the subject of much research. The flow of this paper is clear. It introduces the motivation first, and then its solution to the problem fits the motivation nicely. In addition, the experimental design supports their claims by showing their model outperforming the baseline models in different downstream tasks. Furthermore, they summarize their contributions and novelties clearly. The authors nicely demonstrate improvements in the quality of representation learning by comparing ProteinNPT with baseline methods on property prediction and mutation effect prediction tasks. To enhance the claim of advantages of tri-axial attention and auxiliary labels, the paper further shows that it can succeed in the protein fitness prediction problem. In this task, the authors provide some examples to illustrate their good performance, while their evaluation metrics show that their model outperforms the other models in general. 
Using ablation studies, the authors demonstrate that their embedding can help in different cross-validation schemes compared to other methods. The paper builds a well-designed experiment with ProteinNPT + Bayesian optimization methods and shows its ability to redesign proteins with desired properties in an iterative manner. Weaknesses: The paper did not illustrate the model design very clearly. The supplied figure is just too simplistic. The paper’s annotations and formulas give the audience a basic understanding of what the authors aim to do. However, it is hard to follow how the MSA transformer is introduced to the model. They should have a more explicit illustration of how it is trained (i.e., with what data) and where it is used as part of the ProteinNPT. In addition to that, the paper is vague about the training data for the MSA transformer in the baselines. It will be helpful for the audience to understand the experiment design if the paper can provide more details about how MSA sequences are generated (if used) and whether the ProteinNPT used the same MSA transformer as the baseline. The authors mention their novelties in different aspects. In fact, they dedicate a whole subsection to it, which helps detail the novelty. However, beyond the additional data and testing that come with it, most of the novelties appear to be subtle changes or directly borrowed ideas from MSA transformer or NPT. For example, the authors claim one of their novelties is a semi-supervised architecture, but it is actually mentioned in the NPT paper. In Figures 5 and 6, their model actually has similar performance to, or even loses to, the DeepSequence and MSAT models in a lot of datasets. However, there is no analysis of the reason. Minor comments: Line 152, 167, 268: There is no “Fig. 4”. 
Table 6: the second “100” should be “1000”; "MSA Transformer" is defined at lines 91, 94 but switches to "MSAT" at line 100; "labelled" line 294; "a a sets" line 94; "is that is" line 178; "faction of test" line 197; the description in lines 291-296 is unclear; the reference in line 400 does not appear correctly. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see above. Also: Line 180: The idea of using informative other labels as features makes sense. In reality they use MSAT predictions, which makes their model like an ensemble of sorts. What happens if they take these labels out? How is performance affected? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **C1. The authors mention their novelties in different aspects. In fact, they dedicate a whole subsection to it, which helps detail the novelty. However, beyond the additional data and testing that come with it, most of the novelties appear to be subtle changes or directly borrowed ideas from MSA transformer or NPT. For example, the authors claim one of their novelties is a semi-supervised architecture, but it is actually mentioned in the NPT paper.** We summarize the various architectural contributions we made in the first comment to all reviewers. The use of auxiliary labels (section 4.4 and response to your comment C4), multiple optimization with NPTs (section 5.4), conditional sampling (section 4.5 and Fig. A in rebuttal pdf), and uncertainty quantification (section 6, appendix G1, and Fig C and D in rebuttal) in NPTs are all novel and non-trivial contributions that have not been discussed in any prior work, in particular the original NPT paper. As for the very last point made by the reviewer, we would like to clarify that NPTs are actually not semi-supervised but purely supervised. A naive application of standard NPT to protein fitness prediction, for instance, would actually do very poorly, as strong performance requires high-quality embeddings typically obtained by training protein language models on substantially larger datasets. To support this claim we ran an additional experiment in which we train a standard NPT where we rely on learned embeddings as opposed to pretrained embeddings from large protein language models, and remove auxiliary labels. We observe a significant performance drop (Table 1). **C2. In Figures 5 and 6, their model actually has similar performance to, or even loses to, the DeepSequence and MSAT models in a lot of datasets. 
However, there is no analysis of the reason.** This is a very important point in practice that we will clarify further in the text: the relative ranking of fitness predictors exhibits a lot of assay-to-assay variation, as has been observed in both the zero-shot (Riesselman et al., Laine et al., Notin et al.) and supervised settings (Dallago et al., Hsu et al.). Consequently, robust conclusions about the relative benefits of various model architectures require benchmarking across a wide range of assays. As indicated by reviewer yxtV in comment C5, several papers often focus on single assays, such as GFP, with others going up to the ~40 assays from the DeepSequence benchmark (Riesselman et al.), such as (Hsu et al.), which we compare against abundantly in our work (e.g., OHE - Aug. DS). To our knowledge, no other work has analyzed the performance of fitness predictors on a set as broad as covered in this work (100 assays). **C3. Minor comments** Thank you very much for flagging these. We have now corrected all these points in the manuscript. **C4. Line 180: The idea of using informative other labels as features makes sense. In reality they use MSAT predictions, which makes their model like an ensemble of sorts. What happens if they take these labels out? How is performance affected?** We refer the reviewer to the ablation analysis reported in Table 4 in Appendix B.2, which investigates the critical role of auxiliary labels for performance. We have also included these results in Table A of the pdf. **References** - Riesselman et al. “Deep generative models of genetic variation capture the effects of mutations.” Nature Methods (2018) - Laine et al. “GEMME: A Simple and Fast Global Epistatic Model Predicting Mutational Effects.” Molecular Biology and Evolution (2019) - Notin et al. “Tranception: protein fitness prediction with autoregressive transformers and inference-time retrieval.” ICML (2022) - Dallago et al. 
“FLIP: Benchmark tasks in fitness landscape inference for proteins.” NeurIPS (2021) - Hsu et al. “Learning protein fitness models from evolutionary and assay-labeled data.” Nature Biotechnology (2022) --- Rebuttal Comment 1.1: Title: Thank the authors for the clarifications Comment: We thank the authors for the clarifications. We did not find any major flaws with the paper before and therefore retain our previous (positive) score.
Summary: The paper targets iterative protein design. It first collects 13 additional fitness metrics for the ProteinGym benchmark. Then, it proposes ProteinNPT, an MSA-Transformer-based model for fitness prediction, and achieves state-of-the-art results on ProteinGym. Finally, it applies the model to iterative protein redesign via Bayesian Optimization. Strengths: The experimental results are promising. Weaknesses: 1. Novelty: The way the Transformer is adapted to property prediction is straightforward. The novelty is limited. 2. Transferability: Additional training is required for different fitness settings. At the same time, the architecture seems difficult to generalize even to closely correlated fitness settings, since the labels are plugged into the input. The transferability is limited. 3. Real-world application: Labeled data is required for training, which would bring wet-lab costs. Additionally, the method is based on MSAs consisting of mutated sequences, which tend to be gathered at a certain point in the protein space rather than covering much of the space, so the method seems unsuitable to be applied in the discovery stage. As for the optimization stage, I doubt whether the precision of the proposed model satisfies the requirements of biology researchers. Overall, I think the space for real-world application is limited. 4. Other suggestions: Exchange "left" and "right" in the text description of Fig. 1, and declare “OHE” as One-Hot-Encoded in the main paper. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **C1. Novelty: The way adapts Transformer to property prediction is straightforward. The novelty is limited.** We would like to first clarify that we do not adapt the standard Transformer architecture to property prediction, but rather apply a non-trivial variant of axial transformer as described in Section 4. We also refer the reviewer to the response we provided to reviewer 8GnD (C2) for equations to clarify the significant differences with standard Transformer and the first comment to all reviewers regarding novelty and contributions. **C2. Transferability: Additional training is required for different fitness settings [...] difficult to even generalize to closely correlated fitness settings, for the labels are plugged into the input. The transferability is limited.** Training protein-specific models on use-case specific labels is a staple of modern protein engineering methods (see response to the next question for a more detailed perspective on this). There is no such thing as fitness in the abstract but rather fitness at a specific temperature, pH, in different cells or in this manufacturing process. Therefore, in protein engineering, there is a huge unmet need for semi-supervised models that take in different kinds of labels in the same overall architecture. This is the ultimate generalizability for design goals. To the reviewer’s point, it would be nice if there were a method and data out there that would nicely generalize to accurately predicting all of these various design objectives in a unified architecture. But the reality of the field is that no such data or approach currently exists and that all the practical ML-driven protein engineering efforts over the past decade have relied on training ML models on use-case and property-specific labels, often across iterative design cycles such as the setting we described in section 6 of our paper. 
This is particularly true for efforts that involve the simultaneous optimization of multiple properties that may be at odds with one another. For instance, the development of protein-based therapeutics is frequently faced with the difficult task of optimizing existing proteins to simultaneously have maximal binding with a specific target, minimal binding with everything else (the so-called de-immunization objective), operate at a certain pH, and maintain thermostability. Our approach only requires commodity hardware to train the corresponding models on new labels. Our experiments demonstrate that the same model architecture and training procedure (all final experiments in section 5 are conducted with the same hyperparameters throughout) is general and performs well across a very wide range of protein properties including binding, thermostability, enzymatic activity, etc. Average wet lab costs typically dwarf compute costs. Any performance lift can save huge costs in wet lab experiments, so the relatively small computational cost involved in fine-tuning models on labels as we suggest is desirable in practice. **C3. Real-world application: Labeled data is required for training, which would bring wet-lab costs.** There is more and more publicly available labeled data that can be leveraged for design; see for instance the collection of data in ProteinGym (Notin et al. 2022), FLIP, and papers (Riesselman et al. 2018; Hopf et al. 2017; Shin et al. 2021; Livesey and Marsh 2022) that have used many such datasets. To date, these kinds of datasets have been successfully used by unsupervised ML methods for validation on phenotypes as far downstream as clinical outcomes (Frazer et al. 2021); moving forward to designing for specific fitness with respect to manufacturing conditions, one can imagine the iterative design of experiments we address in this paper. As for real-world applications: this and other generative models have already shown promise for real-world applications. 
For instance, the original evolutionary coupling models trained on homologous sequences demonstrated the ability of existing data combined with generative models to predict 3D structure from sequences (Marks et al. 2012), as did AlphaFold (Jumper et al. 2021) and the 3D contact predictions in the MSA Transformer paper (Rao et al. 2021). As for existing open-source data with labels where the sequences are distant from a wild type, there are also increasing numbers of published examples, e.g. chorismate mutases (Russ et al. 2020), GFPs (Weinstein et al. 2023, Gonzalez Somermeyer et al. 2022), AAV capsids (Riley et al. 2021), plastic-eating enzymes (Lu et al. 2022), the mega-dataset on thermostability mentioned above (Tsuboyama et al. 2023), and many others. Despite amazing progress in the past couple of years in de novo design (Watson et al. 2022), zero-shot methods are - to date - unable to generate proteins with specific functions with a high success rate. So the question is not whether labels are needed -- they always are -- but rather how to best leverage them in iterative design cycles. The current approaches to do that are either lightweight supervised methods based on hand-crafted protein features or, more recently, supervised methods that use embeddings from large protein language models as input. **C4. Additionally, the method is based on MSAs consisting of mutated sequences [...] Overall, I think the space for real-world application is limited.** This is a misunderstanding: the MSAs that we use in this study are always composed of homologous natural sequences only, and never include mutated sequences. Furthermore, the MSA Transformer used to obtain sequence embeddings is trained across millions of protein families (MSAs), precisely as a means to generalize to unseen regions of sequence space. 
Lastly, the performance lift provided by supervision over zero-shot methods is substantial in practice (Fig D in pdf), so it is not clear how one could achieve superior performance while at the same time not using labels. **C5. Other suggestions** Thank you for flagging these. We fixed both in the revision. --- Rebuttal Comment 1.1: Title: Thanks for the Rebuttal Comment: I am updating the rating from 4 to 6.
Summary: This paper introduces a non-parametric transformer variant called ProteinNPT, which is utilized for protein property prediction and design tasks. The study compares ProteinNPT with re-implemented top-performing baselines on the extended ProteinGym benchmark. The objective of the paper is to address challenges faced in protein engineering, including the vast design space, sparse functional regions, and limited availability of labels. The results demonstrate that the proposed ProteinNPT outperforms all the compared methods in various protein property prediction tasks and also shows potential in the protein design task. Strengths: This paper introduces ProteinNPT, a non-parametric variant of transformers, which is employed for protein property prediction and design tasks. The authors extend the ProteinGym benchmark and re-implement several top-performing baselines for comparison with ProteinNPT. The primary focus of ProteinNPT is to tackle the challenge of limited availability of labels, and it can be effectively combined with state-of-the-art protein language models. The problems investigated in this paper hold significant importance and appeal from a biological perspective. The authors have expanded the ProteinGym benchmark and conducted a comprehensive experimental analysis, with very detailed and sufficient results provided in both the main paper and the Appendix. In conclusion, the proposed ProteinNPT offers substantial value and can contribute to the advancement of related research. Weaknesses: The paper's presentation style may pose challenges for readers in terms of comprehension. It is important to note that this comment does not reflect the quality of the writing itself, but rather the accessibility and reader-friendliness of the content. In order to fully comprehend the paper, readers are required to go through the entirety of the appendix. In summary, it is good for the paper to be as self-contained as possible. 
For instance, "the difference with axial transformer, MSA Transformer and Non-parametric transformers" would be better suited within the main body of the paper rather than relegating it to the Appendix. In general, it is crucial to explicitly state the differences between the proposed method and other comparable approaches when there is enough space left in the paper. Furthermore, it would be beneficial to include additional mathematical formulations pertaining to ProteinNPT in Section 4, rather than solely relying on the architecture diagram for explanation. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I am interested in the part on model training (Section 4.2 and Appendix B.3). It would be beneficial if the authors could provide a detailed description of the multi-task optimization process. For instance, could you explain how the annealing process is implemented for the token prediction objective? Additionally, could you elaborate on the use of a cosine schedule throughout the training process? Minor Miscellaneous Suggestions - Line 152 and 268: 'Fig 2', not 'Fig 4'? - Table-1: The full name of the abbreviation "OHE" needs to be added. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No obvious limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **C1. The difference with axial transformer [...] better suited within the main body of the paper rather than relegating it to the Appendix.** We have integrated the points made in Appendix B6 within Sec. 2 as suggested. **C2. It would be beneficial to include additional mathematical formulations [...] architecture diagram for explanation.** We thank the reviewer for the great suggestion, and are including the main mathematical equations for the ProteinNPT architecture below and in section 4 in the revised manuscript. **ProteinNPT architecture** Let $(X^{\text{full}},Y^{\text{full}})$ be the full training dataset, where $X^{\text{full}} \in \{1,\dots,20\}^{N \times L}$ are protein sequences (with $N$ the total number of labeled protein sequences and $L$ the sequence length), and $Y^{\text{full}} \in \mathbb{R}^{N \times T}$ the corresponding property labels (where $T$ is the number of distinct such labels, including auxiliary labels). During training, for each gradient step, we sample at random a mini-batch $(X,Y)$ and pass both as *input* to the ProteinNPT architecture. In accordance with our denoising modeling objective, we mask a fixed proportion of input tokens and labels (15% for both). We then embed sequences and labels separately, with a pretrained and frozen protein language model and a learned label embedding respectively, each amino acid token and label being embedded in a vector of dimension $D$. We obtained our best results (Appendix B2, Table 10) by embedding protein sequences with the MSA Transformer, which applies axial attention on a Multiple Sequence Alignment (MSA) for the corresponding family (attention across amino acid tokens and across natural sequences in the MSA). After concatenating the resulting token and label embeddings, we obtain a unique tensor $Z \in \mathbb{R}^{N \times (L+T) \times D}$ that is then fed into several ProteinNPT layers. 
Each ProteinNPT layer applies successively \emph{row-attention}, \emph{column-attention} and a feedforward layer. Each of these transforms is preceded by a LayerNorm operator $LN(.)$ and we add residual connections to the output of each step. For the multi-head row-attention sub-layer, we linearly project embeddings for each labeled sequence $n \in \left[[ 1,N\right]]$ and each attention head $i \in \left[[1,H\right]]$ via the linear projections $Wr_{i}^{Q}$, $Wr_{i}^{K}$ and $Wr_{i}^{V}$ respectively. Mathematically, we thus have: $\text{Row-Att}(Z) = Z + \text{Tied-MHSA}(LN(Z)) = Z + \text{concat}(O_1,O_2,...,O_H).W^O \in \mathbb{R}^{N.(L+T).D}$ where the concatenation is performed row-wise, $W^O$ mixes outputs from different heads, $O_i = \text{Tied-Att}(Z.Wr_{i}^{Q}, Z.Wr_{i}^{K}, Z.Wr_{i}^{V})$, and the tied row-attention is defined as in Rao et al.: $\text{Tied-Att}(Q_n,K_n,V_n) = \text{softmax}\big(\sum_{n=1}^{N} (Q_n.K_n^{T}) / \sqrt{N.D}\big).V_n$. We then apply column-attention as follows: $\text{Col-Att}(Z) = Z + \text{MHSA}(LN(Z)) = Z + \text{concat}(P_1,P_2,...,P_H).W^P \in \mathbb{R}^{N.(L+T).D}$ where the concatenation is performed column-wise; $W^P$ mixes outputs from different heads; $P_i = \text{Att}(Z.Wc_{i}^{Q}, Z.Wc_{i}^{K}, Z.Wc_{i}^{V})$; $Wc_{i}^{Q}$, $Wc_{i}^{K}$ and $Wc_{i}^{V}$ are the linear projections for the column-attention sub-layer; and $\text{Att}(Q, K, V) = \text{softmax}(Q.K^{T}/\sqrt{D}).V$ is the standard self-attention operator. Lastly, the feed-forward sub-layer applies a row-wise feed-forward network: $\text{FF}(Z) = Z + \text{rFF}(LN(Z)) \in \mathbb{R}^{N.(L+T).D}$. Finally, we make predictions for the targets of interest by feeding the corresponding target embeddings from the last layer into an L2-penalized linear projector, and obtain logits over the amino acid vocabulary at each position via a linear projection of the embeddings of each token in each sequence from the last layer. **Iterative protein redesign experiment** (Fig.
B 1 in pdf) We first select an initial labeled dataset $D_L$, drawing points at random from the set $D$ of all mutants in the corresponding DMS assay, and keep all other points as our unlabeled pool set $D_{U} = D \backslash D_{L}$. At each cycle, we first train the considered model (ProteinNPT or baselines) on $D_{L}$. We then predict the property and quantify our prediction uncertainty for all possible variants in $D_{U}$. We then sequentially acquire a batch of $B$ points by greedily optimizing the Upper Confidence Bound (UCB) acquisition function $\alpha(\boldsymbol{x} ; \lambda) = \mu(\boldsymbol{x}) + \lambda \sigma(\boldsymbol{x})$, where $\lambda$ is a hyperparameter controlling the exploration/exploitation trade-off. **C3. It would be beneficial if the authors could provide a detailed description of the multi-task optimization process.** The same architecture and training procedure used for single property prediction directly extends to the multi-task setting by adding as many target columns as there are properties to predict (lines 178-179). No other modifications are needed. Besides being very practical, this also allows us to capture correlations between targets, as we also perform self-attention between label columns. **C4. Could you explain how the annealing process is implemented for the token prediction objective?** We progressively increase the relative coefficient of the target prediction objective, and reduce that of the token denoising objective. This forces the network to first learn good representations of tokens via the reconstruction objective, and then progressively focus more and more on the main task of interest. **C5.
Could you elaborate on the use of a cosine schedule throughout the training process?** The cosine annealing scheme gradually decreases the learning rate following a cosine curve over a predefined number of epochs; it helps with convergence and has been observed to achieve strong performance in practice (see Loshchilov & Hutter, SGDR: Stochastic Gradient Descent with Warm Restarts). --- Rebuttal Comment 1.1: Comment: Thanks for the clarification, I will raise the score to 5.
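The two schedules discussed in C4 and C5 above can be sketched as follows. The exact schedule shapes (linear annealing of the objective weights) are our assumption for illustration, not taken from the paper; the cosine form follows the standard SGDR-style definition:

```python
import math

def cosine_lr(step, total_steps, lr_max, lr_min=0.0):
    # Cosine annealing: the learning rate decays from lr_max to lr_min
    # along a cosine curve over total_steps (cf. Loshchilov & Hutter, SGDR).
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * step / total_steps))

def objective_weights(step, total_steps):
    # Illustrative linear annealing: the token-denoising weight shrinks
    # while the target-prediction weight grows over training.
    frac = min(step / total_steps, 1.0)
    return 1.0 - frac, frac   # (w_token, w_target)
```

The total loss at a given step would then be `w_token * loss_token + w_target * loss_target`, with the learning rate set by `cosine_lr`.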
Summary: The authors propose to use non-parametric transformers to model the ‘fitness’ landscape and the protein primary sequences with a BERT-like model. Both tokens and continuous attributes can be masked in the modeling. Additionally, the authors introduce several novel datasets to test and benchmark their model. Finally, a directed-evolution-like experiment is conducted to evaluate the capability of the proposed model to assist design. Strengths: 1- The authors’ problem is well motivated and clearly presented. The method is sound and particularly well adapted to the tackled problem, hence necessitating the conducted investigation. 2- I found the discussion and the different splits for training the model very interesting; they provide additional insight on how the model behaves given novel mutations. 3- Adding novel datasets to the field in a standardized way is also, in my opinion, a great contribution to the bio-ML community. Nonetheless, given that there is room for additional details, I am disappointed about the lack of description of the novel datasets in the main part of the paper. 4 - The results seem rather conclusive that the method is well suited to address the learning of the joint distribution, although several more investigations could be conducted for confirmation. Weaknesses: 1 - A notable exception, in my opinion, to the clarity of the paper is Section 6, which I struggled to understand. 2 - There is, in my opinion, little discussion on how the inputs are constructed. For instance, how many proteins are included in the “alignment”? Are there any learning tradeoffs there? For example: number of doable gradient steps before overfitting versus available information in the alignment? 3 - I found it hard to guess which baseline was what in Table 1. Moreover, the results in Table 1 do not present any variance in the experiments.
4 - Since I found the presented work interesting, I was interested in what problems could be addressed by such a model and the expected limits (minimum number of datapoints, minimum number of datapoints in the alignments, gradient steps and so on)? 5 - A lot of papers in the ML for protein design literature use datasets such as GFP. Can the authors comment on their choice of not using this standard dataset? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1- How would the model behave if, at inference, we specify the desired target and mask (randomly or in any other fashion) some tokens? 2 - To follow up on the previous question, since your model learns a joint probability distribution, can you use this property for protein design? I don’t believe that this is what is done in your Section 6, but devising an experiment like this would prove the ability to generate tokens conditionally on the output value. 3 - What is the influence of the number of rows in the input sequence? 4 - Given that the current version does not exceed the 9 page limit, I strongly encourage the authors to provide either some hyper-parameterisation or some explanation on the datasets. 5 - I also found it unclear in the main paper whether the authors were responsible for the experiments that led to the new datasets. 6 - How does the model scale with the number of data points? Note that I am willing to change my evaluation if my questions are addressed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: See Weaknesses & Questions Minor typo: line 96 Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the very thoughtful comments and suggestions. We address each of your questions below, including several additional analyses which we believe significantly strengthen our submission. **C1. I am disappointed about the lack of description of the novel datasets in the main part of the paper.** Thank you for the suggestion. We provide high level information (eg, # of mutants, type of assay, taxon, MSA depth, reference sequence) for each assay in the reference file in the supplementary material (see ProteinNPT_repo/proteingym/ProteinGym2_reference_file_substitutions.csv, rows 89-101). We will also include a new table in Appendix A.1 with a description in plain English for each assay. **C2. A notable exception in my opinion to the clarity of the paper is section 6 that I struggled to understand.** We have entirely rephrased the text in this section for clarity, and have included an algorithm (see Fig. B in pdf) to further clarify the approach. A detailed description of the algorithm is provided in response to reviewer 8GnD (C2). **C3. I found it hard to guess which baseline was what in Table 1. [...] Results in Table 1 do not present any variance in the experiments.** A detailed description of baselines can be found in Appendix D1. We have added a reference to it in the caption of Table 1 to clarify, and updated Table 1 to include average standard errors across folds. **C4. Since I found the presented work interesting, I was interested in what problems could be addressed by such a model and the expected limits [...] What is the influence of the number of rows in the input sequence?** We analyze your questions from two different angles: training vs. inference time. At training time, we conducted the following new analysis. We split the 100 assays into three equal-size groups depending on the number of available labels, and report the group-level performance in Table B (see pdf).
We do not observe particular drops in performance in the “low depth” group compared with groups with more labels. We interpret this phenomenon by the fact that the token denoising objective used during training acts as a regularization mechanism that confers a lot of stability to the training dynamics across a diverse set of settings, and prevents overfitting. We will update the text to clarify. For the impact of the number of points at inference time, please refer to the ablation on this in Appendix B2, Table 6. We find that performance generally improves as we increase the number of sequences, up to a certain level (1k sequences). **C5. A lot of papers in the ML for protein design literature use datasets such as GFP. Can the authors comment on their choice of not using this standard dataset?** The GFP assay is also included in our evaluation, as it was part of the original ProteinGym benchmark. In Figures 4-6 in the Appendix, it corresponds to “GFP_AEQVI_Sarkisyan_2016”. These 3 figures allow us to appreciate the important variability in performance of different methods across assays. Focusing on one assay in particular, or even a handful of them, would significantly limit our ability to draw robust conclusions. To our knowledge, our work is the first to benchmark protein models at that scale (at least 100 assays). **C6. How would the model behave if at inference, we specify the desired target and mask [...] I don’t believe that this is what is done in your section 6, but devising an experiment like this would prove the ability to generate tokens conditionally on the output value.** Thank you for the fantastic question! We believe this is a key strength of the architecture and have run an additional analysis as suggested (Fig. A of pdf). We first identify the sequence with the highest measured property for a given assay (we focused on the GFP assay in this experiment, given your comment above).
We then form an input batch (randomly selecting other labeled points), mask a fixed number of tokens (5 in our experiment) in the fittest sequence, obtain the resulting log softmax over the masked positions with a forward pass, and sample from these to obtain new sequences. Critically, rather than selecting the masked positions at random, we sampled them from the positions with the highest average row-attention coefficient with the target (across heads) in the last layer of ProteinNPT. This helped ensure we would select mutations at the positions with the highest impact on the target of interest. We generated 1k new sequences with that process and measured the corresponding fitness with an independent zero-shot fitness predictor (ESM-1v), which we then compared with baselines in which mutations are selected at random, or selected to further minimize the fitness of the least fit protein. **C7. I also found it unclear in the main paper whether the authors were responsible for the experiments that led to the new datasets.** No, the different assays were carried out in previously published work which we have referenced in Appendix A1. Our contribution has been to survey existing literature to identify assays that are relevant from a fitness prediction standpoint. This is a non-trivial effort that involves screening for certain standards in assay quality (eg, dynamic range, correlation between replicates), identifying the experimental phenotype that is most relevant, and preprocessing and standardizing the raw data in a way that is consistent with the other assays in the benchmark. **C8. How does the model scale with the number of data points?** Since we use tied row-attention (Appendix B1), the row-attention memory footprint is simply quadratic in the sequence length but invariant to the number of rows used in each input batch. The column-attention computational complexity and memory footprint are, however, quadratic in the number of data points and linear in sequence length.
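The conditional sampling procedure described in C6 (masking positions in the fittest sequence and redrawing them from the model's per-position softmax) can be sketched as follows. The function name and array shapes are illustrative assumptions, not the authors' code:

```python
import numpy as np

def sample_variants(logits, seq, mask_positions, n_samples, seed=0):
    """Sample n_samples variants of `seq` by redrawing the masked positions
    from the model's per-position distribution over the amino-acid vocabulary.

    logits: (L, V) per-position logits; seq: (L,) integer-encoded sequence;
    mask_positions: e.g. positions with highest row-attention to the target.
    """
    rng = np.random.default_rng(seed)
    # Softmax over the vocabulary axis, computed stably.
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    variants = np.tile(np.asarray(seq), (n_samples, 1))
    for pos in mask_positions:
        variants[:, pos] = rng.choice(probs.shape[1], size=n_samples, p=probs[pos])
    return variants
```

Unmasked positions are copied from the reference sequence; only the selected positions are resampled.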
--- Rebuttal Comment 1.1: Title: Response to the authors Comment: First of all, I thank the authors for their rebuttal, which clarifies some aspects of their work. I think the generation/optimization experiment provides interesting insights into the model's behavior on the very difficult design task. The answer to C.4 is also particularly valuable for reproducibility and validating the approach. I raise my score to 5.
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely thank you for the time spent engaging with our paper and really appreciate the thoughtful comments. Based on your feedback, we have conducted additional experiments to further explore the strengths of our proposed approach, and have also clarified all points you had raised. We believe the submission is much stronger as a result. We summarize the key points of feedback and how we addressed them as follows: 1. **Novelty and contributions of this work (all reviewers)**: Based on the reviews, we first wanted to restate what we believe are the key contributions and novelty introduced by our work: - **Conceptual shift over prior supervised fitness prediction models**: methods relying on embeddings extracted from large protein language models have recently led to higher performance over prior approaches (Sec. 2). However, since the token embedding dimensions are large (eg, 1280 per token for ESM-1v), these methods typically apply a pooling operator (eg, mean pooling across the full sequence length) to prevent overfitting during training, which potentially destroys valuable information for the downstream task. In contrast, ProteinNPT does not pool embeddings but leverages self-attention to learn the dependencies between labels and the embeddings of specific residues in the sequence, thereby focusing on positions that matter for the property of interest. - **New methodological developments**: we summarize the differences between ProteinNPT and the closest model architectures in prior literature in Appendix B.6 and add further clarifications in response to reviewer 42SC (C1). We also explore, for the very first time, various aspects of non-parametric transformers such as the use of auxiliary labels (Sec. 4.4 - critical to top performance as per Table 4), the multiple property optimization setting (Sec. 5.4), conditional sampling (Sec. 4.5 and point #3 below) and uncertainty quantification (Sec. 6, Appendix G1 and point #4 below).
- **Benchmarking improvements**: we introduced 2 novel cross-validation schemes to better assess the ability of fitness predictors to generalize to mutations at unseen positions (Appendix A2). We curated 13 new assays (Appendix A1) to enrich the ProteinGym benchmark collection, allowing us to compare supervised fitness prediction models on an _unprecedented_ scale in terms of number of proteins and mutants, as well as diversity of properties considered (e.g., binding affinity, enzymatic activity, thermostability). - **Significant performance lift in all experimental settings**: our final architecture leads to a _significant_ performance increase in various experimental settings: effects of single mutants, effects of multiple mutants, simultaneous prediction of multiple properties, and iterative protein redesign -- all of which are of major importance in practical protein engineering and variant effect prediction. Our models are trained on commodity hardware (a single GPU is needed) and the same hyperparameters are used across all settings and protein types, thereby facilitating adoption by practitioners. 2. **Mathematical equations (Rev. 8GnD, af1N, 42SC)**: As a means to improve the clarity of the description of the ProteinNPT architecture, we have updated Sec. 4 to include mathematical equations that complement the main architecture diagram (Fig. 1). In particular, we hope this will clarify the _significant_ differences between our architecture and the standard “Transformer” architecture (Rev. af1N). For more details, please see the response to Rev. 8GnD (C2) where we provide the aforementioned equations. 3. **Conditional sampling for protein design (Rev. yxtV)**: We further explore the conditional sampling capabilities of ProteinNPT (Sec. 4.5) for protein engineering (Fig. A in pdf).
We conduct new analyses in which we demonstrate the ability of ProteinNPT to sample new sequences with fitness significantly higher than the fittest sequence in the labeled dataset. We describe this analysis in detail in our response to Rev. yxtV (C6). This demonstrates the versatility of ProteinNPT in supporting various protein design tasks. 4. **Uncertainty quantification (Rev. af1N, 42SC)**: Given the points raised on novelty, we wanted to solidify our analysis of uncertainty quantification in non-parametric transformers and reaffirm it as a meaningful contribution of this work. We developed and compared three uncertainty quantification schemes: - **MC dropout**: we use a fixed input batch at inference, apply MC dropout (Gal et al., Dropout as a Bayesian Approximation) and use the standard deviation of the prediction across multiple forward passes as our uncertainty metric. - **Batch resampling**: for a given point we want to quantify the uncertainty of, we perform several forward passes by completing the input batches with a different sample (with replacement) of training points for each pass. No dropout is applied and we use the standard deviation across forward passes as our uncertainty metric. - **Combined scheme**: we combine the first two schemes together (ie, turning on dropout in the batch resampling scheme). The last scheme delivers superior performance in calibration curve analyses (Fig. C), and we therefore refreshed the results of our iterative design analyses to use ProteinNPT in conjunction with this uncertainty quantification scheme, subsequently leading to even higher performance on this task (Fig. D). In addition to this overall response, we provide detailed responses to all comments raised by each reviewer. Please reach out to us if you would like us to clarify any remaining points. Pdf: /pdf/2224bb2b819d08c6cfbf09f45be8fff25f066b1b.pdf
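The greedy UCB acquisition used in the iterative redesign analyses, with $\mu(\boldsymbol{x})$ and $\sigma(\boldsymbol{x})$ estimated from stochastic forward passes (MC dropout and/or batch resampling), can be sketched minimally as follows. This is an illustration under those assumptions, not the released code:

```python
import numpy as np

def ucb_stats(preds):
    # preds: (n_passes, n_candidates) predictions from stochastic forward
    # passes (e.g. MC dropout and/or batch resampling).
    return preds.mean(axis=0), preds.std(axis=0)

def acquire_batch(preds, batch_size, lam=1.0):
    # Greedy UCB: alpha(x) = mu(x) + lam * sigma(x); pick the top batch_size
    # candidate indices, trading off exploitation (mu) and exploration (sigma).
    mu, sigma = ucb_stats(preds)
    return np.argsort(-(mu + lam * sigma))[:batch_size]
```

Acquired points would then be labeled, moved from the pool set to the labeled set, and the model retrained for the next cycle.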
NeurIPS_2023_submissions_huggingface
2023
Extraction and Recovery of Spatio-Temporal Structure in Latent Dynamics Alignment with Diffusion Models
Accept (spotlight)
Summary: The authors address the problem of aligning behavior-related neural population dynamics, either within-subject but across different experimental sessions, or between subjects. This problem exists due to inter-subject variability in terms of which neurons are recorded, or drift in the recording, and is an important problem for systems neuroscience and for the development of brain-machine interfaces. Importantly, neural activity dynamics in many regions of the brain during behavior lie on a low-dimensional manifold and therefore have a defined spatio-temporal structure. While many state-of-the-art alignment approaches do not take this structure into account and therefore do not preserve it, the authors introduce a diffusion-guided method that does. The approach first uses a diffusion model to discover the manifold on which neural activity evolves (i.e., the spatio-temporal structure), then it uses this model to guide the alignment, which is done using MLE. Strengths: Originality + quality: The authors developed a novel approach for time-series alignment that also discovers latent structure (the low-D manifold on which the neural activity evolves). Quality: The authors validate their model against a number of state-of-the-art alignment methods on both synthetic and real-world data, demonstrating the practical applicability of their method. Clarity: The authors clearly state the problem and its details, as well as how their method differs from existing ones, and its advantages. Significance: The method has significance for the alignment of time series with latent spatio-temporal structure, which is broadly relevant in systems neuroscience. Although the authors mainly focus on behaviorally relevant neural data, time series in other fields also often possess lower-dimensional latent structure, so this method could be broadly applicable.
The authors' approach could also be relevant to identifying latent structure outside of the alignment context, although I am not familiar with how it compares to existing approaches for doing so. Weaknesses: Clarity: Figure and table legends should be more clear (see Questions section). Originality: Authors should state whether existing methods for discovering latent structures using diffusion models exist, and how their method (e.g., architecture) differs from existing methods. Quality: The method is validated in data with strong, low-dimensional latent structure (monkey reaching tasks). The generalizability to other types of neural dynamics time series of varying dimensionality should be evaluated to determine limitations. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Figure 3B: Can the authors describe better what the takeaway from this panel is? In what ways does ERDiff improve on JSDM? The legend should tell the reader what the pink dots are (presumably critical points?). Figure 4: Please note in the legend or figure that the R-squared value is in %. I was confused at first as to how the R-squared could be negative before reading Table 1. Table 1: It is unclear to me what the denominator for the R-squared % calculation is. Can the authors make it more apparent in the table legend? If the time series do not lie on a well-defined low-dimensional manifold, will the method hallucinate something? How well does the method work for more unstructured data? As the dimensionality of the latent structure increases, how much does the method's performance degrade? The authors should quantify this. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors should address in more detail the kinds of latent structures/time series that would pose more of an issue for their method. Otherwise, limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and insightful questions. We provide clarifications and new results to address your questions below. > Authors should state whether existing methods for discovering latent structures using diffusion models exist. To the best of our knowledge, no existing methods use diffusion models (DM) for the discovery of neural latent structures. > The generalizability to other types of neural dynamics time series of varying dimensionality should be evaluated to determine limitations. The generalizability of each method is a vital consideration. The monkey reaching dataset has a neural signal dimension of around 180 and an average trial length of around 40. Additionally, we conducted experiments on an unconstrained rat’s hippocampus dataset [1], where the neural signal dimension is 120 and the average trial length is around 110. The corresponding results are presented in Section 1.1 of the general response. > Figure 3B: Can the authors describe better what the takeaway from this panel is? In what ways does ERDiff improve on JSDM? The legend should tell the reader what the pink dots are. We appreciate your suggestion about Fig. 3(B). In this figure, the phase portraits represent the trajectories and structures of latent dynamics. We note that, after alignment, ERDiff's phase portrait is noticeably closer to that of the source domain in comparison to JSDM. The pink dots you pointed out represent the stable fixed points within the phase portraits, which correspond to steady positions or equilibria of the dynamical system. We will provide a more detailed legend in the revised manuscript. > Table 1: It is unclear to me what the denominator for the R-squared % calculation is. Can the authors make it more apparent in the table legend? We would like to clarify that $R^2$ refers to the *coefficient of determination* that quantifies the goodness-of-fit of regression models.
Formally, it is calculated as follows: $R^2(y, f) = 1 - \sum_{i=1}^{n}(y_i - f_i)^2/\sum_{i=1}^{n}(y_i - \bar{y})^2$. The denominator is the total variance of the ground-truth values. We will incorporate the formula of the $R^2$ calculation into the revised manuscript. > **(1)** If the time-series do not belong on a well-defined low-dimensional manifold, will the method hallucinate something? **(2)** How well does the method work for more unstructured data? **(1)** We would like to emphasize that ERDiff isn't constrained to datasets containing a strong 'well-defined' low-dimensional manifold, such as the commonly depicted sphere-like or Swiss-roll-like manifolds. Instead, with the strong expressivity of DM, ERDiff can broadly fit the distribution of datasets containing temporal dynamical structures. Meanwhile, it is supported by numerous neural studies [2, 3] that data from various brain regions inherently exhibit temporal dynamical structures. This manifests the wide applicability of ERDiff. **(2)** If the spatio-temporal structures in the data are weak and the time-series resembles a random walk, the alignment task might not benefit as much from ERDiff, which is designed for more structured data. On the other hand, when we aim to apply ERDiff to datasets without a clear trial structure [4], it is feasible to truncate such continuous data into discrete but meaningful syllables. These syllables typically exhibit clear structural patterns. Then, their overall latent distribution can be learnt through a VAE for subsequent alignment. We conduct additional experiments to validate this approach on an unconstrained rat’s hippocampus dataset [1]. Please refer to Section 1.1 of the general response for result details. > As the dimensionality of the latent structure increases, how much does the method's performance degrade? The authors should quantify this. In real-world datasets, the dimensionality of the latent structure is inherently fixed.
Hence, we increase the dimensionality of the latent structure ($\mathbf{z}$) in synthetic datasets and conduct the following experiments. From the results, we observe that the aligned distribution tends to deviate further from the source-domain distribution as the dimensionality increases. Despite this trend, ERDiff consistently demonstrates superior performance compared to the best baseline method.

| Method | NLL $\downarrow$ | KLD $\downarrow$ |
| :-----------: | :------------------------: | :------------------------: |
| Best-Baseline | $3.58 (\pm 0.20)$ | $7.74 (\pm 0.43)$ |
| **Ours** | $\mathbf{3.38} (\pm 0.16)$ | $\mathbf{7.09} (\pm 0.38)$ |

*Table 1: Latent structure dimension $d_z$ = 4*.

| Method | NLL $\downarrow$ | KLD $\downarrow$ |
| :-----------: | :------------------------: | :-------------------------: |
| Best-Baseline | $6.15 (\pm 0.21)$ | $12.33 (\pm 0.43)$ |
| **Ours** | $\mathbf{5.53} (\pm 0.29)$ | $\mathbf{11.39} (\pm 0.47)$ |

*Table 2: Latent structure dimension $d_z$ = 8*.

> The authors should address in more detail the kinds of latent structures/time series that would pose more of an issue for their method.

(1) For shorter time-series that may not exhibit strong latent structures, the alignment task becomes less complex. In such cases, even simpler methods like JSDM can achieve satisfactory results. (2) If the spatio-temporal structure is weak and the time-series resembles a random walk, the alignment task might not benefit as much from ERDiff, which is designed for more structured data. Refs: [1] Recordings from hippocampal area ca1, pre, during and post novel spatial learning. (Grosmark et al. 2016) [2] Context-dependent computation by recurrent dynamics in prefrontal cortex. (Valerio et al., 2014) [3] Network dynamics underlying OFF responses in the auditory cortex. (Giulio et al., 2016) [4] Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity.
(Scott et al., 2019) --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response addressing my concerns. Given the response as well as other reviewers comments, I maintain my opinion that the manuscript is suitable for publication and will keep my score at 7. --- Reply to Comment 1.1.1: Comment: We appreciate the timely response. Thank you again for your evaluation and recognition of our work.
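The coefficient of determination discussed in the rebuttal above can be sketched with a small NumPy helper (standard definition; note it can be negative whenever the model fits worse than the mean predictor, which explains the negative values the reviewer asked about):

```python
import numpy as np

def r_squared(y, f):
    # Coefficient of determination: 1 minus the residual sum of squares
    # over the total variance of the ground-truth values.
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    ss_res = np.sum((y - f) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

A perfect fit gives 1, predicting the mean gives 0, and anything worse than the mean is negative.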
Summary: Brain-computer interfaces require recalibration to accommodate drifts in the recorded neural populations over time. While there has been some success in aligning neural recordings based on their latent dynamics, deep learning-based alignment methods have also gained popularity since they ignore some of the implicit assumptions made by latent space methods, providing additional flexibility. However, deep learning neural alignment often ignores the temporal structure of the dynamics. The proposed model overcomes these limitations by first extracting the spatio-temporal structure in the source domain via a diffusion model and then aligning the target domain to the source dynamics. The authors demonstrated the success of the method in simulated and neural data, where it outperforms alternative alignment methods. Strengths: The paper is adequately written and technically sound. The method was tested and shown to work well when applied to a simulated dataset and neural data. The method introduced here uses deep learning-based alignment while still exploiting the temporal dynamics that are critical in neural datasets. Robust alignment of neural recordings is crucial for the success of BCI applications, and in this work, they showed how this method outperforms existing alignment methods in both synthetic and neural datasets. Moreover, they also showed that the alignment can be performed not only across sessions of the same animal but also across animals. Weaknesses: While the authors show the promise of their method to align neural responses, they overlooked other methods based on the alignment of latent dynamics that have been shown to be successful for BCI applications, as mentioned in the introduction. I believe that a systematic comparison to such methods is critical to fully grasp the significance of this work. For example, CCA, multiset CCA, hyperalignment, or Procrustes-based alignment. 
In the context of latent space alignment methods, another relevant piece of literature is the method introduced in (https://proceedings.neurips.cc/paper/2021/hash/aad64398a969ec3186800d412fa7ab31-Abstract.html), which also uses neural dynamics for alignment. Deep learning-based methods allow for more expressive functions, but they often come with additional computational costs, data demands, and the need for careful hyperparameter selection. None of these limitations are addressed in the manuscript, nor is there an explicit comparison across methods (latent space vs. deep learning-based), which could help demonstrate the promise of the method for practical BCI applications. The authors minimally showed the effect of dataset size on performance, but the lowest dataset size tested still had dozens of trials, which could be unrealistic in most practical settings. Additionally, it would be important to report the training times as a function of dataset size, as long training times would render the approach useless for real-time alignment. The method defines the alignment between a single source and target dataset, but ideally, one would pool data across all sessions for BCI decoding. It would be interesting to note if the proposed method also allows for multi-session alignment. The authors showed the success of the approach in the context of a single data simulation. However, to fully assess the robustness of the method, they could further test it under different conditions, such as measurement or latent noise, dimensionality, or tasks. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It is unclear from the text how the training and testing are performed. In the sentence "During testing, we align the test neural data to the training neural data so that we can directly apply the velocity decoder," it is not clear whether the authors include test data for alignment. 
Additionally, the authors mentioned that behavioral data is used for alignment, but it is also used to evaluate the success of the approach via decoding. This raises the question of whether there is a circular evaluation of the method. References 9 and 10 cite the preprint and peer-reviewed versions of the same article. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors should include a section or provide a clear description of the limitations and assumptions of the method. They should also address the computational cost, data demands, and potential implications of the presented work.  Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive comments. We would like to make the following clarifications. Hopefully these will resolve most of your concerns, and they can be taken into account when deciding the final review score. > They overlooked other methods based on the alignment of latent dynamics that have been shown to be successful for BCI applications. E.g., CCA, multiset CCA, hyperalignment, or Procrustes, ..., another relevant piece of literature is amLDS [1]. We appreciate your suggestion for this comparison. We would like to emphasize that our proposed method, ERDiff, is an **unsupervised** neural distribution alignment method, meaning **no** supervision signals or labels related to behavior are required during the alignment phase. In contrast, the listed methods are **supervised** learning methods. The linear-algebra-based methods (CCA, multiset CCA, hyperalignment, and Procrustes-based alignment) require knowledge of supervised behavior signals for every trial in both domains in order to pair the source-target domain trials during the alignment phase, so they rely heavily on supervision. Thus, a direct comparison between ERDiff and CCA-based methods may not be a fair evaluation. Regarding amLDS, its probabilistic framework focuses on $K$ discrete stimulus conditions, making it impractical to extend smoothly to our continuous behavioral setting. Therefore, it is also not a direct counterpart to ERDiff. In practical brain-computer interface (BCI) and neural behavioral applications, unsupervised neural distribution alignment methods are highly desirable. Our proposed ERDiff has the potential to generalize to many more scenarios and fields where supervised signals are not accessible. > Deep learning-based methods allow for more expressive functions, but they often come with additional computational costs, data demands, and the need for careful hyperparameter selection.
(1) In the source-domain training phase, the computational cost of ERDiff primarily comes from the diffusion model (DM) training. We note that the DM is trained in the latent space $\mathbf{Z}$, which is significantly smaller than the raw neural signal space $\mathbf{X}$. Therefore, the computational overhead of this one-time training phase is acceptable, especially given that it can be conducted offline in real-world BCI applications. As for the alignment phase, the computational cost of ERDiff is comparable with that of the baseline methods. More detailed analyses and comparisons of computational costs are provided in Section 1.3 of our global response. (2) We emphasize that training the diffusion model (DM) used in ERDiff does not require any additional data compared to the baseline methods. The DM uses the same dataset as the baselines for source-domain learning as well as distribution alignment. (3) The hyperparameters of our model, including the number of epochs, learning rates, and other relevant parameters, are listed in Section D.1 of our submitted appendix. > Nor is there an explicit comparison across methods (latent space vs. deep learning-based), which could help demonstrate the promise of the method for practical BCI applications. We take the terms 'latent space' and 'deep learning-based' to refer to 'methods specifically designed for neural distribution alignment tasks' and 'general time-series domain adaptation methods', respectively. Besides their performance comparisons in Table 1 of the manuscript, we additionally compare their computational cost and time complexity; please refer to Section 1.3 of the global response for details. > The authors minimally showed the effect of dataset size on performance, but the lowest dataset size tested still had dozens of trials We take the term 'dataset size tested' to refer to the number of trials used for alignment in the target domain.
We would like to highlight that in Fig. 5(B) of the manuscript, we plot the performance of ERDiff and the baselines given different numbers of alignment trials. From the plot, it can also be observed that ERDiff maintains a relatively high accuracy when only ~25 trials (representing a 10% sampling density) are used during alignment. > it would be important to report the training times as a function of dataset size. Please refer to Section 1.3 of the global response for training time as a function of dataset size. > It would be interesting to note if the proposed method also allows for multi-session alignment Multi-session alignment belongs to the field of multi-domain adaptation (MDA), which is a more complex problem. However, we believe that by integrating additional MDA components [2], ERDiff can generalize to multi-session alignment. We include this direction as part of our future work, listed in Section 1.4 of the global response. > To fully assess the robustness of the method, they could further test it under different conditions, such as measurement or latent noise, dimensionality, or tasks. Please refer to Fig. 2 in the attached PDF for our robustness study investigating the impacts of latent dimensionality and Gaussian noise on ERDiff. > it is not clear whether the authors include behavioral data for alignment. This raises the question of whether there is a circular evaluation of the method. We would like to clarify that no behavioral data or velocity information from the target domain is used in ERDiff. The alignment procedure of ERDiff is performed in an unsupervised manner, showing the broad applicability of our method. > The authors should include a section or provide a clear description of the limitations and assumptions of the method Please refer to Section 1.4 of the global response for limitations and assumptions. Refs: [1] Across-animal odor decoding by probabilistic manifold alignment. (Pedro et al.
2021) [2] Multi-Source Unsupervised Domain Adaptation via Pseudo Target Domain. (Ren et al. 2022) --- Rebuttal Comment 1.1: Comment: I thank the authors for their really comprehensive response. I mostly agree with their comments and I have updated my score accordingly. I still believe that some of these comparisons, even if not tested, should be mentioned in the final version of the manuscript, which would also emphasize the relevance of this method, as they discussed here. --- Reply to Comment 1.1.1: Title: Thank you Comment: We appreciate the reviewer's kind response and constructive comments. As suggested, we will incorporate the systematic comparison across methods discussed here into the final manuscript.
Summary: * One of the key challenges in analyzing neural recordings is the scalability of models that link behavior and neural population activity across recording sessions or in inter-subject settings. * When analyzing single-trial neural population activity, past studies have pointed out that neural activity can be understood in terms of low-dimensional latent dynamics. Such low-dimensional latent dynamics are helpful when visualizing neural profiles across different task conditions and trials. * Generally, existing methods try to align latent dynamics by minimizing a metric-based difference between source and target domains. The paper proposes a method to align the source and target domains of multivariate neural data by learning/capturing latent spatiotemporal structure in the source domain with a diffusion model and applying it as a prior on learning/capturing spatiotemporal structure in the target domain. * The authors applied their model to the non-human primate motor cortex, testing both cross-day and inter-subject recordings. Strengths: * The authors motivate their approach clearly by arguing that naively aligning time series using domain adaptation is ineffective as multivariate neural data has low SNR; thus, leveraging a low-dimensional representation is a viable option. * The paper seeks to achieve a form of domain adaptation by aligning the spatiotemporal structure of the latent dynamics of the target to the source using a novel alignment method (ERDiff). * The model and derivations are presented clearly. * An exhaustive comparison is provided showing that their model outperforms standard models on both motor cortex and synthetic datasets. Weaknesses: * The authors must clarify why diffusion models are necessary. How about considerably simpler two-step approaches -- like extracting latents with GPFA (Yu et al, 2009) and aligning them with the proposed ML alignment?
* Alternatively, how about comparisons with alternative approaches of comparable complexity -- e.g. adversarial alignment with DANN (Ganin et al, 2015)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Overall: there are many approaches for extracting latent structure from time series data (GPFA, CEBRA, T-PHATE, CILDS) -- one could readily realign the latents extracted from these approaches with the second-stage alignment algorithm. * A priori, why would one expect the diffusion model to be more effective at aligning the latents than these other approaches? * Because data from non-human primates is used, please clarify whether appropriate IRB approvals were obtained. Or, if this is only a secondary analysis of existing datasets, the approvals obtained in the original studies could be mentioned. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * No significant negative societal impact envisioned -- the potential use in BCI may suggest a positive social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and insightful questions. We provide clarifications and new results that we have generated to address your questions below. Hopefully these will resolve most of your concerns, and they can be taken into account when deciding the final review score. > The authors must clarify why diffusion models are necessary. How about considerably simpler two-step approaches -- like extracting latents with GPFA and aligning them with the proposed ML alignment? This is an insightful question. We would like to emphasize that the diffusion model (DM) is an **essential** and **necessary** component in ERDiff. The traditional methods for identifying low-dimensional latents (e.g., GPFA, LFADS) are **not applicable** to the proposed maximum likelihood alignment (MLA) phase alone. This is because their models only learn a point-to-point mapping function $p(\mathbf{Z} \mid \mathbf{X}^{(s)})$ from neural spike signals to latent factors, rather than explicitly capturing the overall latent distribution $p_s(\mathbf{Z})$ in the source domain. Consequently, in the second step (alignment phase), given the MLA optimization objective: $\underset{\phi}{\operatorname{argmax}} \mathbb{E}_{\mathbf{Z} \sim q\left(\mathbf{Z} \mid \mathbf{X}^{(t)} ; \phi\right)}\left[\log p_s(\mathbf{Z})\right]$ (RHS of Eq. 7 in the manuscript), the likelihood $p_s(\mathbf{Z})$ is intractable through those methods alone. In contrast, in our proposed ERDiff, along with the learning of $p(\mathbf{Z} \mid \mathbf{X}^{(s)})$ through the VAE, we use the DM to learn the overall latent distribution $p_s(\mathbf{Z})$. Therefore, through the DM, we can reformulate the MLA objective into Eq. (10) of the manuscript, which is tractable and computationally efficient. > Alternatively, how about comparisons with alternative approaches of comparable complexity -- e.g. adversarial alignment with DANN (Ganin et al, 2015)?
In the experiment section, the methods we compared against (SASA, RDF-MMD, and DAF) are all of alignment complexity comparable to ours. For a detailed comparison of time complexity, please refer to Section 1.3 of the general response. Here we conducted further experiments (Table 1) with the Domain-Adversarial Neural Network (DANN) method, using 5 random seeds. We implemented the label predictor of DANN as the behavioral velocity predictor. The results of these experiments will be included in the revised manuscript.

| Method | Cross-Day-$R^2$ | Cross-Day-RMSE | Inter-Subject-$R^2$ | Inter-Subject-RMSE |
| :------: | :-------------------------: | :------------------------: | :-------------------------: | :------------------------: |
| DANN | $-12.57 (\pm 3.28)$ | $8.28 (\pm 0.32)$ | $-18.37 (\pm 3.24)$ | $9.29 (\pm 0.33)$ |
| **Ours** | $\mathbf{18.81} (\pm 2.24)$ | $\mathbf{7.99} (\pm 0.43)$ | $\mathbf{10.29} (\pm 2.86)$ | $\mathbf{8.34} (\pm 0.34)$ |

*Table 1: Performance Comparison with DANN*

> There are many approaches for extracting latent structure from time series data (GPFA, CEBRA, T-PHATE, CILDS) -- one could readily realign the latents extracted from these approaches with the second-stage alignment algorithm. As we've explained in the first clarification of this response, approaches such as LFADS or GPFA that you mentioned are not applicable for the second MLA stage alone. A diffusion model is necessary to learn an overall $p_s(\mathbf{Z})$ for MLA. > A priori, why would one expect the diffusion model to be more effective at aligning the latents than these other approaches? We appreciate your attention to this important aspect. We would like to clarify that the methods you listed do not serve as neural distribution alignment methods but rather latent variable models (LVM) that are designed to identify latents from raw neural signals.
Besides the learning of an overall $p_s(\mathbf{Z})$ for MLA, we note that the effectiveness of the diffusion model (DM) comes from the following two key points. (1) Precise source-domain learning: We note that in behavior-related neural applications, the trial latent dynamics are non-linear and complex. This complexity poses challenges for aligning neural distributions across sessions. Owing to the strong **expressivity** of diffusion models (DM), we first use a DM to learn the overall latent distribution $p_s(\mathbf{Z})$ in the source domain. To learn $p_s(\mathbf{Z})$ well, the DM focuses on the spatio-temporal structures of neural latent dynamics in the source domain and extracts these structures using the specially designed STBlock. Fig. 2A of the manuscript visualizes the distribution learning and spatio-temporal structure extraction process of the DM. However, previous neural alignment methods ignore these crucial spatio-temporal structures. (2) Appropriate alignment objective: During the alignment procedure, we propose to use maximum likelihood alignment (MLA) as the optimization objective. We note that this objective aligns well with the DM since the $p_s(\mathbf{Z})$ inside MLA is tractable through the DM, and the total MLA formula can be expanded into noise-residual terms that are simple and efficient to compute. Thanks to the stability and flexibility of MLA, the source-domain spatio-temporal structures can be well recovered in the target domain, as illustrated in Fig. 2B and Fig. 4 of the manuscript. > Please clarify whether appropriate IRB approvals were obtained. Or if this is only a secondary analysis of existing datasets, the approvals obtained in the original studies could be mentioned. This is an analysis of existing datasets [1], and IRB approvals were obtained in those studies. We look forward to further discussion, and are happy to answer any questions that might arise.
Refs: [1] Long-term stability of cortical population dynamics underlying consistent behavior. (Lee et al., 2020) --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed clarifications and additional experiments, and have updated my score accordingly. I have no further questions for the authors.
Summary: This paper proposes a distribution alignment method (ERDiff), which combines extraction of spatio-temporal structures in latent dynamics from the source distribution with a maximum likelihood alignment procedure on the target domain. The proposed method was evaluated on both synthetic and real data (neural recordings from non-human primate motor cortex) and outperforms other methods under both cross-day and inter-subject settings. Strengths: - The proposed method is novel and technically solid. - Performance of the method is well demonstrated on real data under inter-session/subject setups, suggesting that it could be an important tool with potential broad use in many fields, not just neuroscience. Weaknesses: - I don’t have any major concerns. Although the method has been shown to outperform some of the current techniques, the advantage of the proposed approach is not well demonstrated. I would like to see some analysis of computational cost. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Although the authors clearly mentioned the limitations of alignment methods based on pre-defined metrics, it would be nice to see how the proposed method performs compared to these. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: - Limitations are not addressed in the draft Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review and insightful questions. We provide clarifications and new results that we have generated to address your questions below. > Although the methods have been shown to outperform some of the current techniques, advantage of the proposed approach is not well demonstrated. We appreciate your attention to this important aspect. We'd like to emphasize that the advantages of our proposed ERDiff in neural distribution alignment come from the following two key points. **(1)** Precise source-domain learning: We note that in behavior-related neural applications (e.g., brain-computer interfaces), the trial latent dynamics are non-linear and complex. This complexity poses challenges when it comes to aligning neural distributions across sessions (domains). Owing to the strong **expressivity** of diffusion models (DM), we first use a DM to learn the overall latent distribution $p_s(\mathbf{Z})$ in the source domain. To learn $p_s(\mathbf{Z})$ well, the DM focuses on the spatio-temporal structures of neural latent dynamics in the source domain and extracts these structures using the specially designed STBlock. Fig. 2A of the manuscript visualizes the distribution learning and spatio-temporal structure extraction process of the DM in the source domain. However, previous neural alignment methods [1, 2] ignore these crucial spatio-temporal structures. **(2)** Appropriate alignment objective: During the alignment procedure, we propose to use maximum likelihood alignment (MLA) as the optimization objective. We note that this objective aligns well with the DM since the $p_s(\mathbf{Z})$ inside MLA is tractable through the DM, and the entire MLA formula can be expanded into noise-residual terms that are simple and efficient to compute. Thanks to the stability and flexibility of MLA, the source-domain spatio-temporal structures can be well recovered in the target domain, as illustrated in Fig. 2B and Fig. 4 of the manuscript.
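As a concrete illustration of the MLA idea discussed above, the sketch below shows the kind of one-step noise-residual quantity that a source-trained diffusion model makes tractable. This is a minimal sketch under stated assumptions, not the paper's implementation: the names `mla_noise_residual`, `denoiser`, and `alpha_bar` are hypothetical stand-ins.

```python
import numpy as np

def mla_noise_residual(z0, denoiser, alpha_bar, rng):
    """One-step Monte Carlo surrogate for -log p_s(z0) under a source-trained
    diffusion model: noise the latents to a random scale t, then score how well
    the frozen denoiser predicts that noise (small residual = high likelihood
    under the source-domain latent distribution)."""
    B = z0.shape[0]
    t = rng.integers(0, len(alpha_bar), size=B)           # one noise scale per trial, not all T steps
    a = alpha_bar[t].reshape(B, 1, 1)                     # cumulative schedule term for each sampled t
    eps = rng.standard_normal(z0.shape)                   # forward-process noise
    z_t = np.sqrt(a) * z0 + np.sqrt(1.0 - a) * eps        # noised latent trajectories
    return float(np.mean((eps - denoiser(z_t, t)) ** 2))  # residual to minimize w.r.t. the encoder
```

In the full method this residual would be minimized with respect to the target-domain encoder parameters $\phi$ (with automatic differentiation); the sketch only shows the scored quantity.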
In contrast, methods based on pre-defined metrics often make a strong assumption that the overall latent distributions follow a Gaussian, which significantly restricts the expressiveness of those methods. For methods based on adversarial training, even though their optimization objectives can be formed into the Jensen–Shannon divergence (JSD), their practical training often lacks stability [3] and falls far short of the optimization target (JSD). > I would like to see some analysis of computational cost. In the source-domain training phase, the additional computational cost of ERDiff primarily comes from the diffusion model (DM) training. We note that the DM is trained in the latent space $\mathbf{Z}$, which is significantly smaller in magnitude than the raw neural signal space $\mathbf{X}$. Therefore, the computational overhead of this one-time training phase is acceptable, especially given that it can be conducted offline in real-world BCI applications. In the alignment phase, we would like to emphasize that ERDiff maintains a computational cost comparable with the baseline methods. Please refer to Section 1.3 of the global response for a comprehensive analysis of the computational cost and time complexity. > Although the authors clearly mentioned the limitations of alignment methods based on pre-defined metrics, it would be nice to see how the proposed method performs compared to these. In the experiment section of the manuscript, among the group of methods based on pre-defined metrics, we show the results of JSDM since it achieves the highest performance. To provide a more thorough comparison, here we additionally conduct experiments with two more methods in this group: Kullback–Leibler divergence minimization (KLDM) and Wasserstein distance minimization (WDM). These two metrics form the backbone of the previous neural distribution alignment methods [4] and [5], respectively.
We can observe that ERDiff consistently demonstrates superior performance compared to these two methods. These additional comparative results will be incorporated into the revised manuscript.

| Method | Cross-Day-$R^2$ | Cross-Day-RMSE | Inter-Subject-$R^2$ | Inter-Subject-RMSE |
| :------: | :-------------------------: | :------------------------: | :-------------------------: | :------------------------: |
| KLDM | $-31.63 (\pm 1.85)$ | $10.55 (\pm 0.37)$ | $-30.79 (\pm 2.24)$ | $10.42 (\pm 0.37)$ |
| WDM | $-19.85 (\pm 2.45)$ | $8.61 (\pm 0.36)$ | $-17.74 (\pm 2.71)$ | $9.21 (\pm 0.43)$ |
| **Ours** | $\mathbf{18.81} (\pm 2.24)$ | $\mathbf{7.99} (\pm 0.43)$ | $\mathbf{10.29} (\pm 2.86)$ | $\mathbf{8.34} (\pm 0.34)$ |

*Table 1: Performance comparison with alignment methods based on pre-defined metrics*

> Limitations are not addressed in the draft We delve into the Limitations, Future Work, and Broader Impact of our work. (Please refer to Section 1.4 of the global response for details.) This comprehensive discussion will be included in the revised manuscript in a new 'Discussion' section, incorporating and replacing the existing 'Section 5: Conclusion'. We look forward to further discussion, and are happy to answer any questions that might arise. Refs: [1] Robust alignment of cross-session recordings of neural population activity. (Justin et al., 2022) [2] Stabilizing brain-computer interfaces through alignment of latent dynamics. (Brianna et al. 2022) [3] Evaluation of Mode Collapse in Generative Adversarial Networks. (Sayeri et al., 2018) [4] Stabilizing brain-computer interfaces through alignment of latent dynamics. (Brianna et al. 2022) [5] Hierarchical Optimal Transport for Multimodal Distribution Alignment. (John et al. 2019) --- Rebuttal Comment 1.1: Comment: I appreciate the authors for addressing my questions in detail. I will keep my original score at 7.
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to all the reviewers for their insightful feedback and suggestions. We appreciate the positive comments which characterized our work as having a `"clear motivation"` (2uTS), `"novel"` methodological progression (9MfT, wx95), being `"technically solid"` (wx95, xrSS), with a `"well-written and clear presentation"` (tfQ5, 2uTS), conducting an `"exhaustive comparison"` in the experiments (2uTS), and recognized our work to have `"potential broad use"` (wx95, 1oNu). Here, we would first like to provide several general clarifications to enhance the overall understanding of our work. **1.1 Generalizability of ERDiff across Datasets** The spatio-temporal structures are intrinsic to neural dynamics associated with behaviors and have been extensively studied in neuroscience [1,2]. We would like to emphasize that, with the powerful distribution learning ability of the diffusion model (DM), ERDiff has great potential to generalize across a wide range of neural behavior datasets and applications. For validation, we run additional experiments on an unconstrained rat hippocampus dataset [3] and verify the efficacy of ERDiff on it (please refer to Fig. 1 and Table 1 of the attached PDF). These results will be provided in the revised manuscript. **1.2 ERDiff is an unsupervised neural distribution alignment method** In line with the conventions of previous studies on neural distribution alignment [4, 5], we acknowledge that behavioral (velocity) signals of the source domain are present during VAE training in the manuscript. These signals do help in learning a more interpretable neural latent space. However, we emphasize that ERDiff does not incorporate any behavioral signals of the target domain during the distribution alignment phase. Hence, ERDiff is entirely an **unsupervised** neural distribution alignment (i.e., test-time adaptation) method.
We also note that the introduction of behavior signals in the source domain is an alternative choice. We conduct additional experiments to verify the efficacy of ERDiff when such behavioral signals are removed. Please refer to Table 3 of the attached PDF for detailed results. **1.3 (1) Computational Cost and (2) Time Complexity of ERDiff in the Alignment Phase** **(1)** In the alignment phase, for any given target domain, ERDiff can stably align it to the source domain with overhead comparable to the baselines. In Table 2 of the attached PDF, we conduct a comparative analysis between ERDiff and baseline methods in terms of additional parameter count, additional model size, stability, and alignment time. The demonstrated alignment time corresponds to the execution time for aligning one iteration (a batch of size 64) on a MacBook Pro (2019, equipped with an 8-core Intel Core i9 and 4 GB RAM). These analyses will be provided in the revised manuscript. **(2)** Here we conduct a time-complexity analysis with respect to the batch size $B$ for the alignment phase. ERDiff's alignment objective is composed of two main terms: the diffusion noise residual and the Sinkhorn divergence. We note that the diffusion noise-residual computation does not go through all $T$ diffusion steps. Instead, it samples a single time step (noise scale) $t$ and calculates the noise residual specific to that step. Thus, the total complexity of this part is $\mathcal{O}(K_1 B d)$, in which the coefficient $K_1$ relates to the inference complexity of the DM denoiser $\boldsymbol{\epsilon}\left(\mathbf{Z}, t \right)$, and $d$ denotes the latent dimension size. The Sinkhorn divergence requires computing the pairwise distance matrix, costing $\mathcal{O}(K_2 B^2)$, where $K_2$ is a relatively small coefficient. Summing up, the total complexity of ERDiff is $\mathcal{O}(K_1 B d + K_2 B^2)$.
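To illustrate where the quadratic term in the complexity estimate above comes from, here is a generic entropic-OT sketch: building the $(B, B)$ cost matrix is the $\mathcal{O}(B^2)$ step, while the noise-residual term is linear in $B$. The function `sinkhorn_cost`, the uniform marginals, and the fixed-point loop are a textbook Sinkhorn sketch, not ERDiff's exact implementation.

```python
import numpy as np

def sinkhorn_cost(zs, zt, reg=0.1, n_iter=100):
    """Entropy-regularized OT cost between source and target latent batches.
    Building the (B, B) pairwise cost matrix C is the quadratic-in-B term
    noted in the complexity analysis above."""
    B = zs.shape[0]
    C = ((zs[:, None, :] - zt[None, :, :]) ** 2).sum(-1)  # (B, B) squared distances, O(B^2 d)
    K = np.exp(-C / reg)                                  # Gibbs kernel
    a = np.ones(B) / B                                    # uniform marginals (assumption)
    u = np.ones(B) / B
    v = np.ones(B) / B
    for _ in range(n_iter):                               # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = a / (K.T @ u)
    P = u[:, None] * K * v[None, :]                       # approximate transport plan
    return float((P * C).sum())
```

Shifting the target batch away from the source batch increases this cost, which is the kind of signal the alignment phase would backpropagate through the target-domain encoder.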
This $\mathcal{O}(B^2)$ complexity is acceptable since the non-adversarial baseline methods we compared against (i.e., JSDM and SASA) require quadratic complexity as well. These analyses will be provided in the revised manuscript. **1.4 New 'Discussion' Section** Here we delve into the Limitations, Future Work, and Broader Impact of our work. These parts will be included in the revised manuscript in a new 'Discussion' section, incorporating and replacing the existing 'Section 5: Conclusion'. *Limitation and Future Work*: **(1)** Multi-domain adaptation. Currently, ERDiff can align well with a single source-domain latent distribution. An intriguing direction for future work would be learning a unified latent space across multiple source domains using the diffusion model; the method would then be applicable to domain generalization problems. **(2)** Generalization to alternative latent variable models (LVM). ERDiff currently identifies the latent variables of raw neural signals with a canonical version of the VAE. However, the architecture of the LVM within ERDiff is actually disentangled from the diffusion model training and the MLA procedure. Future work includes validating ERDiff with multiple implementations of the LVM (e.g., LFADS, pi-VAE). *Broader Impact:* Not confined to computational neuroscience, the cooperative training technique and the MLA in ERDiff have the potential to be applied to broader domain adaptation tasks across general time-series datasets (e.g., weather forecasting and seismology). We also expect that our method can be applied or extended to other BCI applications and the broader field of neuroscience/AI. Refs: [1] STNDT: Modeling Neural Population Activity with Spatiotemporal Transformers. (Trung et al. 2022) [2] Deep inference of latent dynamics with spatio-temporal super-resolution. (Feng et al. 2021) [3] Recordings from hippocampal area ca1, pre, during and post novel spatial learning. (Grosmark et al.
2016) [4] Robust alignment of cross-session recordings of neural population activity. (Justin et al., 2022) [5] Stabilizing brain-computer interfaces through alignment of latent dynamics. (Brianna et al. 2022) Pdf: /pdf/8b8d1eea953fcfbbd29949c40aea430ec3ce3148.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper focuses on aligning highly variable neural population activities across days and subjects to stabilize the learning process and advance applications such as the brain-computer interface. The idea is to train a VAE on one dataset that is hopefully self-consistent and then, using a VAE trained on a dataset from another day or subject, align the conditional distributions (encoders) of the latent spaces by maximizing the likelihood of the source-domain latent space under the new encoder distribution. The difficulty is the need to model the marginal source distribution of the latent space, which the work does using a diffusion model. The approach is demonstrated in comparison with alternative models on a synthetic dataset and actual neural recordings. Strengths: 1. A well-written paper (apart from the abstract). 2. An interesting application of diffusion models. 3. Potentially impactful for BCI practice; however, more work, including further evaluation, is needed here. Weaknesses: 1. A niche application and demonstration. A cellular-neuroscience-focused paper with no additional effort to demonstrate general applicability of the approach in the evaluations. 2. Poorly written abstract, especially in contrast to the rest of the paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Is the code going to be released publicly? It looks like the success of the approach depends less on the high-level probabilistic descriptions in the paper than on the details of the actual implementation. 2. Is the data going to be released publicly for reproducibility? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Potentially, the applicability of this work may be limited only to the demonstrated use in intracranial multicellular recordings, and it may not contribute to advancements in other areas of Machine Learning (ML). No demonstration was provided to counter this potential limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments. We would like to make the following clarifications. Hopefully these will resolve most of your concerns, and they can be taken into account when deciding the final review score. > A cellular neuroscience focused paper with no additional effort to demonstrate a general applicability of the approach in evaluations. Potentially, the applicability of this work may be limited only to the demonstrated use in intracranial multicellular recordings, and it may not contribute to advancements in other areas of Machine Learning (ML). We would like to emphasize that the focus of this work lies in behavioral neuroscience research and its related applications (e.g., brain-computer interfaces (BCI)), and we have selected "Neuroscience and cognitive science" as the primary area for this paper. Within the field of neuroscience, we note that the focus of this paper (i.e., neural distribution alignment) is of vital importance. Therefore, we mainly validate the effectiveness of the proposed ERDiff on various neural recording datasets. On the other hand, we fully agree that the generalizability of each method is an important consideration. Therefore, we have conducted experiments on synthetic datasets (Section 4.1 of the manuscript) in which more general time-series data are simulated. Moreover, in Section 1.1 of the general response, we also provide additional experimental results on an unconstrained rat hippocampus dataset [1]. These results demonstrate the performance enhancement brought by ERDiff in a more general context and will be provided in our revised manuscript. Here we further discuss the contribution of ERDiff to the broader Machine Learning field: **(1)** In the field of domain adaptation, we propose to use a diffusion model (DM) and STBlocks (Fig. 2(A) of the manuscript) to extract the spatio-temporal structures of the source domain distribution for the subsequent alignment purpose.
We note that the preservation of spatio-temporal structures is a key component of accurate adaptation, and such structures are ubiquitous in dynamical time-series data outside the neuroscience field [2, 3]. **(2)** We propose the maximum likelihood alignment (MLA). The robust statistical properties of MLA ensure stability during the alignment phase and offer broad applicability across a range of time-series datasets. We also note that the optimization objective of MLA fits well with the DM, since the $p_s(\mathbf{Z})$ inside MLA is tractable through the DM, and the entire MLA formula can be expanded into noise-residual terms that are simple and efficient to compute. Hence, we believe ERDiff has potential use in underlying-structure extraction and domain adaptation tasks on general time-series data (e.g., weather forecasting [2] and seismology [3]). We appreciate that Reviewers 9MfT, wx95, and 1oNu have recognized the potential generalizability of ERDiff. In the revised manuscript, we plan to incorporate the aforementioned insights into the broader impact part of ERDiff (please refer to Section 1.4 of the general response). > Poorly written abstract, especially in contrast to the rest of the paper. We thank the reviewer for this practical suggestion. The following is our rewritten abstract, which we will update in the revised version of the manuscript. "In the field of behavior-related brain computation, it is necessary to align raw neural signals across the drastic domain shifts among them. A foundational framework within neuroscience research assumes that trial-based neural activities rely on low-dimensional latent dynamics. Focusing on such latent dynamics greatly assists the alignment procedure. Despite the great progress the field has made, existing methods usually ignore the intrinsic spatio-temporal structures during alignment. Thus, these solutions yield poor dynamical structure and degraded overall performance after alignment.
To tackle this problem, we propose a method that leverages the expressivity of the diffusion model to address these issues. Specifically, the latent dynamics structures of the source domain are first extracted by the diffusion model. Then, such structures are recovered through a maximum likelihood alignment procedure in the target domain. We first demonstrate the effectiveness of our proposed method on a synthetic dataset. Then, when applied to neural recordings from the primate motor cortex, under both cross-day and inter-subject settings, our method consistently preserves the spatio-temporal structure of the latent dynamics and outperforms existing approaches in alignment quality." > Is the code going to be released publicly? Looks like the success of the approach depends less on the high level probabilistic descriptions in the paper than on the details of the actual implementation. Yes. We will release the code publicly in the supplementary material. The detailed implementations of ERDiff on all datasets are described in Sections A.1 and D.1 of the appendix. To further ensure the reproducibility of our experimental results, we will provide a more comprehensive listing of these implementation details in the appendix of our revised manuscript. > Is the data going to be released publicly for reproducibility? Yes. The dataset was collected by [4] and is available from the corresponding author upon reasonable request. We look forward to further discussion, and are happy to answer any questions that might arise. Refs: [1] Recordings from hippocampal area ca1, pre, during and post novel spatial learning. (Grosmark et al., 2016) [2] Application of Domain Adaptation Approach for Weather Data Mining. (Yang et al., 2018) [3] Seismic Facies Analysis: A Deep Domain Adaptation Approach. (Quamer et al., 2021) [4] Long-term stability of cortical population dynamics underlying consistent behavior.
(Gallego et al., 2020) --- Rebuttal Comment 1.1: Comment: Thank you for your explanations and an improved abstract. I still hold that generality can only be demonstrated in a wider set of experiments rather than hypothesized. I do value the potential uses that a method like the proposed one could have if it works outside of the demonstrated domain; however, as it stands, the evidence that it does is lacking. Nevertheless, this is a strong manuscript and an interesting approach that fits the "Neuroscience and cognitive science" section. --- Reply to Comment 1.1.1: Title: Additional Experiments on General Time-series Domain-Adaptation Datasets Comment: We thank the reviewer for the valuable response. We truly agree that evidence from experiments carries more weight in determining the generalizability of each method. Here we additionally conduct experiments on two general time-series datasets widely used in domain-adaptation papers: (1) Boiler Fault Detection Dataset [1]. The dataset contains sensor data from three distinct boilers, collected between March 24, 2014, and November 30, 2016. Each boiler in this dataset is considered a unique domain (represented as 1, 2, and 3). The objective of the learning task is to predict the faulty blowdown valve of each boiler. The results can be found in Table 1 below. (2) City Air Quality Forecast Dataset [2]. The dataset is composed of air quality, meteorological, and weather forecast data from three cities, denoted as A, B, and C. Each city is treated as a unique domain. Using both the air quality and meteorological data, our objective is to predict PM2.5 levels. The results can be found in Table 2 below. *Implementation Details:* Besides the three methods focusing on general time-series domain adaptation compared in the manuscript, we add one more fundamental baseline: LSTM_S2T. This approach trains a vanilla LSTM model using source-domain data and then directly applies it to the target domain without adaptation.
This method represents the performance lower bound. In ERDiff, we apply $4$ STBlocks in the diffusion model (DM). For a fair comparison, we set the size of the latent dimension equal to the representation space size used in other methods. The batch size is set to 128. Owing to the strong domain-learning capability of the DM and our proposed maximum likelihood alignment (MLA) in the adaptation phase, ERDiff outperforms existing methods in terms of alignment quality in most cases and achieves the highest performance on average. | Method | 1$\rightarrow$2 | 1$\rightarrow$3 | 3$\rightarrow$1 | 3$\rightarrow$2 | 2$\rightarrow$1 | 2$\rightarrow$3 | Avg | | :--------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------: | | LSTM_S2T | 67.09 | 94.54 | 93.14 | 56.09 | 84.99 | 91.31 | 81.19 | | SASA | 71.54 | 96.39 | **94.77** | 63.15 | 87.76 | 93.59 | 84.53 | | RDA-MMD | 73.95 | 96.30 | 94.14 | 65.05 | 88.11 | **94.42** | 85.34 | | DAF | 74.55 | **96.54*** | 94.58 | 65.03 | 88.85 | 94.19 | 85.59 | | **ERDiff** | **75.26*** | 96.13 | 94.14 | **66.66*** | **89.09*** | 94.02 | **86.21** | *Table 1: AUC Score ($\%$) on Boiler Fault Detection Dataset. $\star$ denotes significance p-value <0.02 compared with the best baseline.* | Method | B$\rightarrow$A | C$\rightarrow$A | A$\rightarrow$B | C$\rightarrow$B | B$\rightarrow$C | A$\rightarrow$C | Avg | | :--------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------------: | :-------: | | LSTM_S2T | 40.20 | 48.91 | 52.81 | 68.14 | 13.82 | 13.82 | 39.62 | | SASA | 34.26 | 40.91 | 48.15 | 56.80 | 13.49 | 13.46 | 34.51 | | RDA-MMD | 32.98 | 37.88 | 45.42 | 52.78 | **13.19** | 13.18 | 32.57 | | DAF | 31.75 | 36.86 | 44.24 | 52.93 | 13.22 | 13.07 | 32.02 | | **ERDiff** | **31.05*** | **35.45*** | **43.30** | **51.36*** | 13.41 | **13.03** | **31.28** | *Table 2: RMSE on Cities Air Quality Forecast Dataset.
$\star$ denotes significance p-value <0.02 compared with the best baseline.* We thank the reviewer for the kind comments. [1] used in: Time Series Domain Adaptation via Sparse Associative Structure Alignment. (Ruichu et al., 2021) [2] https://www.microsoft.com/en-us/research/project/urban-air/
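The LSTM_S2T protocol described above (train on the source domain only, then apply the model to the target domain with no adaptation, as a performance lower bound) can be sketched generically. This is a hedged illustration: the toy domains and the ridge-regression stand-in for the vanilla LSTM are assumptions, not the actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_domain(shift, n=200, d=5):
    """Toy domain: features X with a mean shift and a shared nonlinear labeling."""
    X = rng.normal(size=(n, d)) + shift
    y = np.sin(X).sum(axis=1) + 0.1 * rng.normal(size=n)
    return X, y

X_src, y_src = make_domain(shift=0.0)   # source domain
X_tgt, y_tgt = make_domain(shift=1.5)   # shifted target domain

# "S2T": fit on the source domain only (ridge regression stands in for the LSTM) ...
lam = 1e-2
w_hat = np.linalg.solve(X_src.T @ X_src + lam * np.eye(5), X_src.T @ y_src)

def rmse(X, y, w):
    return float(np.sqrt(np.mean((X @ w - y) ** 2)))

# ... then apply it directly to the target domain with no adaptation:
print("source RMSE:", rmse(X_src, y_src, w_hat))
print("target RMSE:", rmse(X_tgt, y_tgt, w_hat))
```

Under the domain shift, the unadapted source model degrades on the target, which is exactly why this baseline serves as the lower bound that adaptation methods must beat.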
Summary: Inter-individual and inter-session variability significantly complicate direct comparison of neural recordings collected over time, degrading trained behavioral models. This can be cast as a more general distribution alignment problem shared across unsupervised learning. To address neural distribution alignment, the authors introduce a novel "ERDiff" method, which co-trains a variational autoencoder and a diffusion model to extract latent spatiotemporal structure in a source dataset. To align with a desired target dataset, parameter finetuning is performed on the read-in layer of the probabilistic encoder learned during training to match the target dataset. Simulation and experimental results suggest that ERDiff captures relevant spatiotemporal structure, performing competitively against other baseline methods, including those based on overall minimizing distribution divergence as well as those based on adversarial methods. Overall, the authors argue that ERDiff is better able to capture the important spatiotemporal structure of their trialwise data; for example, from a monkey center-out reaching task in the experimental results. This focus differentiates ERDiff from many other alignment methods, which ignore unfolding dynamics in their alignments. Strengths: Incorporating spatiotemporal structure into the alignment of neural recordings is a novel approach, as these methods traditionally consider successive data points as independent samples or learn a set of low-dimensional latent dynamics which can then be aligned. These dynamics are particularly critical where the trialwise dynamics strongly influence both behavior and neural activity over time. By directly learning and aligning the latent spatiotemporal structure, ERDiff shows stable performance even at relatively low sampling density, retaining relatively high decoding performance compared to other baseline methods even with decreasing numbers of trials in the target domain.
These properties suggest that it is also likely that the general ERDiff approach may be useful in other cases where distribution shift between a source and target domain obscures but does not remove a shared latent structure. The general approach of combining variational autoencoders (VAEs) and diffusion models (DMs) has been previously introduced (e.g., Panday et al., TMLR, 2023); however, ERDiff is a significantly different formulation of the idea and represents a novel approach in leveraging the relative strengths of these methods through cooperative training. Weaknesses: While the current experiments extensively compare inter-session and inter-subject differences in real neural recordings --- in addition to the simulated experiments --- it is not clear to what extent the presented findings might generalize to data sources without such a clear trial structure. For example, in recordings collected during unconstrained exploration or sleep, low-dimensional latent factors may still drive a successful alignment. Nonetheless, it is not clear how ERDiff would best be deployed in that context. This is particularly relevant as the described real-data experiments additionally incorporated velocity information, and the performance of ERDiff without a behavioral signal during training is thus unclear. In the current paper, my additional concern with the experiments is on the relative baselines against which ERDiff is compared. It would be informative to see a direct comparison with canonical correlation analysis (CCA) approaches, which have been used to date in aligning neural datasets with a strong temporal correspondence (e.g., Gallego et al., Nat Neuro, 2020). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Would the authors be able to directly comment on the relationship between their work and other field-standard methods to identify low-dimensional, latent factors such as LFADS (Pandarinath et al., Nat Methods, 2018) ? 
In particular, the relative benefits of learning the low-dimensional latent structure as part of the alignment --- as compared to other existing methods which learn low-dimensional dynamics which can then be aligned --- is not clearly explained. This would help to better situate the work in the literature. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Although I do not see a Broader Impact section included in the current submission, I do not anticipate potential negative societal impacts given the constrained focus of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive comments. We would like to make the following clarifications. Hopefully these will resolve most of your concerns, and they can be taken into account when deciding the final review score. > **(1)** It is not clear to what extent the presented findings might generalize to data sources without such a clear trial structure. **(2)** This is particularly relevant as the described real-data experiments additionally incorporated velocity information, and the performance of ERDiff without a behavioral signal during training is thus unclear. **(1)** This is a highly valid point. We would like to emphasize that the target of performing alignment between neural distributions is to adapt the neural-behavior mapping function from the source domain to the target domain. The consistency of such mapping function within every single domain is a prerequisite. Therefore, our paper and previous works [1, 2] in neural distribution alignment mainly focus on datasets where behaviors have been confirmed to maintain a consistent trial-wise dynamical structure. On the other hand, when we aim to apply neural distribution alignment (i.e., ERDiff) to datasets without a clear trial structure [3,4], it is feasible to truncate such continuous behavioral data into discrete but meaningful segments and syllables (e.g., grooming, running). These behavioral segments typically exhibit clear structural patterns. Then, their overall latent distribution can be learnt through a VAE for subsequent alignment. We conduct additional experiments to validate this approach through an unconstrained rat’s hippocampus dataset [5]. Please refer to Section 1.1 of the general response for details. **(2)** Please refer to Section 1.2 of the global response for a detailed discussion of this point. 
> It would be informative to see a direct comparison with canonical correlation analysis (CCA) approaches We notice that CCA is a well-known method for neural distribution alignment. However, practical approaches based on CCA [6] require supervised behavior signals of the target domain to pair the source-target domain trials during the alignment phase. In contrast, our proposed ERDiff is an **unsupervised** neural distribution alignment method, having the potential to generalize to broader datasets and applications where supervised behavior signals are inaccessible. Here, for a fair comparison, we remove the behavior signals in practical CCA approach [6] and report the results in the following: | Method | Cross-Day-$R^2$ | Cross-Day-RMSE | Inter-Subject-$R^2$ | Inter-Subject-RMSE | | :------: | :-------------------------: | :------------------------: | :-------------------------: | :------------------------: | | U-CCA | $-25.56 (\pm 2.13)$ | $9.70 (\pm 0.28)$ | $-29.26 (\pm 2.11)$ | $10.43 (\pm 0.46)$ | | **Ours** | $\mathbf{18.81} (\pm 2.24)$ | $\mathbf{7.99} (\pm 0.43)$ | $\mathbf{10.29} (\pm 2.86)$ | $\mathbf{8.34} (\pm 0.34)$ | *Table 1: Performance comparison with Unsupervised-CCA* > **(1)** Would the authors be able to directly comment on the relationship between their work and other field-standard methods to identify low-dimensional, latent factors such as LFADS ? **(2)** The relative benefits of learning the low-dimensional latent structure as part of the alignment --- as compared to other existing methods which learn low-dimensional dynamics which can then be aligned --- is not clearly explained. **(1)** This is a good point. While Latent Variable Models (LVM) such as LFADS are focused on identifying low-dimensional, latent factors from raw neural signals, the primary goal of ERDiff is quite different. 
ERDiff aims to perform unsupervised neural distribution alignment (domain adaptation) given these identified latent factors in the source and target domains. On the other hand, not limited to the current canonical VAE, ERDiff has the potential to perform alignment based on latent factors identified by alternative LVMs (e.g., LFADS). We thus include this topic as part of our future work (please refer to Section 1.4 of the global response). **(2)** We apologize for the confusion. In ERDiff, we would like to clarify that the learning of the low-dimensional latents is **separate** from the alignment. In the outlined cooperative training procedure, during each iteration, the DM uses the latents inferred by the VAE as input data; meanwhile, the DM does not affect the VAE's learning. This means that the procedure of learning the low-dimensional latents is disentangled and separate from the procedures of DM learning and neural distribution alignment. We note that the cooperative training of the VAE and the diffusion model (DM) assists the DM in accurately extracting the distribution of the source-domain latents, which ultimately improves the alignment performance. This is why we include a description of the VAE training (learning the low-dimensional latents) phase in the source domain. > I do not see a Broader Impact section included in the current submission. Please refer to Section 1.4 of the global response. We look forward to further discussion, and are happy to answer any questions that might arise. Refs: [1] Robust alignment of cross-session recordings of neural population activity by behaviour via unsupervised domain adaptation. (Justin et al., 2022) [2] Stabilizing brain-computer interfaces through alignment of latent dynamics. (Brianna et al., 2022) [3] Hierarchical recurrent state space models reveal discrete and continuous dynamics of neural activity. (Scott et al., 2019) [4] The Striatum Organizes 3D Behavior via Moment-to-Moment Action Selection.
(Jeffrey et al., 2018) [5] Recordings from hippocampal area ca1, pre, during and post novel spatial learning. (Grosmark et al., 2016) [6] Long-term stability of cortical population dynamics underlying consistent behavior. (Gallego et al., 2020) --- Rebuttal Comment 1.1: Comment: I thank the authors for their comprehensive response in clarifying the contribution of this work, and I’ve updated my score correspondingly. In particular, I find the additional experiments on (1) a rat hippocampus dataset and (2) removing the behavioral signals from ERDiff particularly compelling. These broadly reinforce the authors' point that a trialwise structure—though not behavioral information—is necessary for a successful application of ERDiff. I appreciate the direct comparison with unsupervised-CCA, but I am still uncertain that this is the right baseline. Would it not be more meaningful to compare unsupervised-CCA with the ERDiff model without behavior signals of the source domain during VAE training? --- Reply to Comment 1.1.1: Title: Thank you Comment: We sincerely appreciate the reviewer's positive evaluation of our additional experiments and contribution. We apologize for the ambiguity and would like to clarify that the term 'unsupervised' in unsupervised-CCA denotes unsupervised neural distribution alignment. This refers to the exclusion of supervised behavioral labels (e.g., direction and velocity) in the **target domain** during the alignment phase. In the experiments on unsupervised-CCA (U-CCA) presented above, the behavioral signals from the **source domain** are incorporated during VAE training. Hence, we compare it with the version of ERDiff that also incorporates behavior signals of the source domain. We thank the reviewer once more for the valuable response and suggestions.
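For reference, the classical CCA machinery discussed in this exchange can be sketched with a standard QR/SVD-based implementation; this is a generic textbook formulation applied to toy paired latent matrices, not the exact pipeline of Gallego et al. or U-CCA.

```python
import numpy as np

rng = np.random.default_rng(2)

def cca_align(L_src, L_tgt):
    """Classical CCA on two paired latent matrices (n_trials x dim):
    center and whiten each side via QR, SVD the cross-covariance, and
    return the maps into a shared canonical space."""
    Ls = L_src - L_src.mean(0)
    Lt = L_tgt - L_tgt.mean(0)
    Qs, Rs = np.linalg.qr(Ls)            # QR-based whitening
    Qt, Rt = np.linalg.qr(Lt)
    U, s, Vt = np.linalg.svd(Qs.T @ Qt)
    A = np.linalg.solve(Rs, U)           # source -> canonical space
    B = np.linalg.solve(Rt, Vt.T)        # target -> canonical space
    return A, B, s                       # s holds the canonical correlations

n, d = 100, 4
L_src = rng.normal(size=(n, d))
R = np.linalg.qr(rng.normal(size=(d, d)))[0]      # hidden rotation
L_tgt = L_src @ R + 0.01 * rng.normal(size=(n, d))

A, B, corr = cca_align(L_src, L_tgt)
print(corr)   # canonical correlations; close to 1 for this near-rotation
```

Because the target here is essentially a rotated copy of the source, the canonical correlations are near one; in real cross-day recordings they quantify how much shared latent structure the pairing recovers.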
Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational and Temporal Graphs
Accept (poster)
Summary: The paper deals with classifying multi-relational graphs, and specifically the R-GNN architecture, which handles more than one type of edge by producing a different neighbor set for each of them, aggregating the messages of each neighbor set, and then merging them together in a final update layer. The first results in the paper show that these architectures are limited: as Figure 1 shows, they cannot even count the total number of neighbours of a node. To fix this, the proposal is to maintain the architecture but apply a graph transformation, some sort of graph reification. Interestingly, R-GNNs and R^2-GNNs recover the expressive power of similar architectures in the single-relation scenario when applied not to the original graphs but to the transformed graphs. The second part of the paper looks to implement these ideas specifically in temporal graphs, wherein one has a different relation for each timestamp measured in the temporal part. Since temporal graphs are multi-relational graphs, this is indeed a nice application of the framework. Results and experiments show that the proposed architecture competes with several other proposals for classification of temporal graphs. Strengths: * Sound, robust theoretical study relating multi-label GNNs with logic. The proofs seem correct and use a variety of techniques. * Claims are backed up with examples, so that it is easy to quickly grasp the ideas of the paper. * The claims in the paper are backed up with experiments: it really does seem that the power of R-GNNs (and similar architectures) benefits from applying the graph transformation. Weaknesses: * The results are tailored specifically to a single architecture (R-GNNs). One could think of several other ways to incorporate edge information, for instance GATs (Velickovic et al.
2018), but the paper does not discuss any other approach, and we don't know if the weaknesses shown in the results of Section 4 are due to the specific architecture, or if there is a bigger problem underlying multi-relational graphs that must be tackled with the transformation. * The proposed solution involves a linear transformation of graphs, but probably demands many more layers in a GNN, as the length of every path is now multiplied by two. This involves adding extra cost to the learning process, and probably some difficulties in using the node embeddings of the graph. * Synthetic experiments do not compare against other GNN architectures capable of dealing with different edges, apart from R-GNN. Real-life experiments do compare against other temporal variants, but it is difficult to see if the added power is due to the global readout, the transformation, or the mix of everything in the architecture proposed. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Please justify why a study of R-GNN is important, and whether your insights could be valuable when dealing with any other architecture incorporating edges. * I'd tend to think that maybe every multi-labelled GNN would benefit from the graph transformation approach, and likewise for the global readout. What would happen if I ran GRU-GCN or TGN over a temporal graph that has already been transformed? Or if instead of using R^2-GNN I used any other GNN with the graph transformation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: * Please discuss potential inconveniences regarding using the transformation on graphs. You probably need more layers, with the added cost on training.
Also, even if the transformation is linear, it is not free for huge graphs, so your approach is probably better tailored to datasets containing several small graphs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer pnMq for the recognition of our theoretical results in both the static and temporal settings, as well as of our proposed novel transformation. Below, we give detailed responses to each of your comments. ___ > **Q1**: Please justify why a study on R-GNN is important, and whether your insights could be valuable for other architectures such as GAT. **A1**: It is worth noting that R-GNN is not a specific model architecture; it is a framework that encompasses many different GNN architectures. In the paper, we only said it is generalized from R-GCN [1], but our real goal is to define a generalized framework as an abstraction of most Message-Passing GNNs (MPGNNs). We apologize for not pointing this out explicitly in the paper. In the definitions (lines 112 and 118), the functions can be set to any functions you like, such as matrix multiplications or QKV-attentions. Most commonly used GNNs, such as R-GCN [1] and R-GAT [2], are captured (upper-bounded) within our R-GNN framework. Many other related works, such as [3], [4], and [5], also use intrinsically the same framework as our R-GNN (see their definitions), so R-GNN is an abstraction framework for MPGNNs that has been widely adopted and studied in the community. Therefore, we think analyzing this framework leads to results common to many existing GNNs. In particular, GAT [6] is a single-relation GNN, which is not our topic, and its multi-relational variant R-GAT [2] can be incorporated into our R-GNN framework by setting the combination function as a QKV-attention module. ___ > **Q2**: I think every multi-labelled GNN would benefit from graph transformation, and likewise for the global readout. What would happen if I ran GRU-GCN or TGN over a transformed temporal graph? Or used any other GNN with the graph transformation?
**A2**: Intuitively, every (temporal) Message-Passing GNN can be augmented by graph transformation and global readout, as long as it is within our framework. We've added experiments that test the performance of the recent models GRU-GCN/TGN with graph transformation, and of R-GAT [2] with global readout and graph transformation. It can be seen that graph transformation indeed improves performance in most cases. Please check the tables below. | accuracy | $\varphi_1$ | $\varphi_2$ | $\varphi_3$ | $\varphi_4$ | | ------------------------------ | ----------- | ----------- | ----------- | ----------- | | R-GAT $\circ H$ | 100 | 61.4 | 88.6 | 82.0 | | R-GAT+readout $\circ H$ | 100 | 93.5 | 95.0 | 82.2 | | R-GAT+readout $\circ F\circ H$ | **100** | **98.2** | **100** | **95.8** | | Models | GRU-GCN | TGN | GRU-GCN$\circ F^T$ | TGN$\circ F^T$ | | -------- | ------- | ---- | ------------------ | -------------- | | Brain-10 | 91.6 | 91.2 | 95.0 | 94.2 | ___ > **Q3**: Please discuss potential inconveniences regarding using the transformation on graphs. You probably need more layers, with the added cost on training. Your approach is probably better tailored to datasets containing several small graphs. **A3**: You are right. The complexity of a model depends on its depth (layer number) and width (feature dimension). The graph transformation introduces one new unary predicate, so one needs to add one feature dimension, which makes the model slightly bigger, but acceptably so. On the other hand, if we observe the process of graph transformation as shown in Figure 2 of the paper, we find that one-hop message passing is stretched to three hops, which may require 3 times the depth to do the same task. However, on real-world datasets, the scalability cost of graph transformation is not as high as one might imagine.
We've added two real-world datasets, DGS and AM (suggested by *Reviewer Hbfa*), whose graphs are much larger, as shown in the following table. These results show our method is effective on both small and large graphs.

| Models | AIFB | MUTAG | DGS | AM |
| ------------------ | ---- | ----- | ---- | ---- |
| # of nodes | 8285 | 23644 | 333845 | 1666764 |
| # of edges | 29043 | 74227 | 916199 | 5988321 |
| R-GNN | 91.7 | 76.5 | 81.2 | 89.5 |
| R$^2$-GNN | 91.7 | 85.3 | 85.5 | 89.9 |
| R$^2$-GNN$\circ F$ | **97.2** | **88.2** | **88.0** | **91.4** |

___ >**Q4**: Synthetic experiments do not compare against other GNNs. It is difficult to see if the added power is due to the global readout, the transformation, or the mix of everything in the architecture proposed. **A4**: We've added R-GAT [2] synthetic experiments in the first table of Answer 1. In fact, as mentioned in Answer 1, R-GAT is also an architecture within R-GNN. From these experiments, we can see expressiveness improvements from graph transformation and global readout across different kinds of equivariant MPGNNs. We've also added the experiments shown in the table below to help isolate which components of our method bring the added power; they show separate improvements from the global readout and the graph transformation. As we said in the paper, the drop in the last column may be due to intrinsic drawbacks of current real-world datasets: many real-world datasets cannot be perfectly modeled by a first-order-logic classifier. This non-logical property may lead to less convincing experimental results. As [3] commented, these commonly used benchmarks are inadequate for testing advanced GNN variants.
| Models | R-TGNN | R-TGNN$\circ F^T$ | R$^2$-TGNN | R$^2$-TGNN$\circ F^T$ |
| -------- | ------ | ----------------- | ---------- | --------------------- |
| Brain-10 | 85.0 | 90.9 | 94.8 | 94.0 |

[1] Modeling relational data with graph convolutional networks, ESWC 2018
[2] Relational graph attention networks, arXiv
[3] The logical expressiveness of graph neural networks, ICLR 2020
[4] A Theory of Link Prediction via Relational Weisfeiler-Leman, arXiv
[5] Logical Expressiveness of Graph Neural Network for Knowledge Graph Reasoning, arXiv
[6] Graph Attention Networks, ICLR 2018
--- Rebuttal Comment 1.1: Comment: Thanks for the very detailed answers. I am happy about the additional experiments, which indeed show how the method can be used in real life. I also acknowledge the point about more general models, which I can now follow correctly. I'm raising my score as I think the proposal has a much more important contribution now. But I think it is important to separate the idea of the graph transformation from the global readout in the presentation; both are improvements to R-GNNs, and that the former helps is less expected (for me) than the latter. --- Reply to Comment 1.1.1: Comment: Thanks for your comment. We are happy to see your confusion resolved and your satisfaction with our new experiments. The reason we do not analyze the graph transformation separately is our main motivation: from the theorems in Section 3, we've seen that R-GNN+global readout (R$^2$-GNN) brings some improvement but is still logically weak, so the question becomes how to theoretically improve its logical expressiveness to capture $FOC_2$. Our solution is to use the graph transformation, and we derive the theory behind R$^2$-GNN+transformation in Section 4. The graph transformation is certainly orthogonal to the model choice. However, if we use the transformation alone, the theoretical logical expressiveness is less satisfactory (it cannot capture $FOC_2$).
We could indeed develop the theory behind R-GNN+transformation, but it would be just a weaker version of the current Section 4, with all the same ideas. Therefore, for brevity we directly analyze the most powerful combination, R$^2$-GNN+transformation, and reach our final goal of boosting the logical expressiveness to capture $FOC_2$. Of course, it is interesting to observe the empirical improvement brought by the graph transformation alone. The experiments in Answer 4 show that the graph transformation alone helps less empirically than the global readout. We will add related discussion to the experiments section.
Summary: This paper justifies the logical expressiveness of R^2-GNN from the perspective of "universal" graph classes, including multigraphs and infinite graphs. Specifically, R^2-GNN appears to be identical to the previous ACR-GNN in [1]. The introduction of graph classes enables theoretical statements in finer settings than the previous ones in [1], which only involve one graph. The thorough inspection of different graph classes captures the expressive power of R^2-GNN compared to logical formulas in FOC_2. Moreover, this paper adapts a commonly used graph transformation $F$ to enhance the expressiveness of R^2-GNN to be stronger than FOC_2 on universal graph classes. The theoretical results collapse to the existing ones in [1] when the graph classes contain finite graphs, which are naturally bounded. The theoretical results can also be extended to temporal graphs under static representations, which are studied in [2]. To achieve this, the authors investigate the expressiveness hierarchy between the collapse function $H$, two static representations, GNN and TGNN (also defined by the authors), and the graph transformation $F$. Empirical results partially support the expressiveness results related to $F$, $H$, and TGNN. [1] Barceló, P., Kostylev, E. V., Monet, M., Pérez, J., Reutter, J., & Silva, J. P. (2020, April). The logical expressiveness of graph neural networks. In 8th International Conference on Learning Representations (ICLR 2020). [2] Gao, J., & Ribeiro, B. (2021). On the equivalence between temporal and static graph representations for observational predictions. arXiv preprint arXiv:2103.07016. Strengths: This paper presents a substantial body of work to (1) justify the logical expressiveness of R^2-GNN compared to FOC_2 in universal, bounded, and simple graph classes; (2) develop empirical techniques to improve the expressiveness of GNNs. The theoretical results are original, though some results are still missing for a complete understanding of this problem.
The presented ones are significant and inspiring. Weaknesses: The presentation of this paper should be improved, given the dense presentation of theorems and proofs. Firstly, the content is not self-contained. The definition of the static representation of temporal graphs is not included, even in the appendix. Most importantly, the key definition used to justify logical expressiveness is missing. In Lines 130-131, the authors state that R^2-GNN is the set of all R^2-GNN-based boolean node classifiers. However, this is far from a satisfactory definition, which might be risky for accepting this paper. Please check the question part for my questions about the definition. Another ambiguous part of this paper is that the graph transformation $F$ for multi-graphs is isolated to FOC_2 classifiers. In fact, $F$ should be orthogonal to classifiers. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. What is the definition of one boolean classifier set A being a subset of another boolean classifier set B for a given graph class GC? There are two possible definitions: 1. For any a in A, there exists a b(a) in B such that a and b(a) achieve the same results on all G in GC. 2. For any G in GC and any a in A, there exists a b(G, a) in B such that a and b(G, a) achieve the same result on G. 2. $F$ enhances the power of R^2-GNN classifiers, but it never relates to R^2-GNN. Can we state that $F$ also enhances the expressiveness of FOC_2 classifiers? What will Figure 3 be if we include FOC_2 \odot F (or with a proper modification of FOC_2 with the new predicates introduced)? 3. For the temporal graph, the authors draw two lines of hierarchies of expressiveness. What are the relationships between R^2-TGNN and time-and-graph, and between R^2-GNN\odot H and time-and-graph? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer KFPi for acknowledging the novelty and theoretical significance of our findings in the multi-relational and temporal context. In particular, we are enthused by your thought-provoking comments, which motivated us to think more deeply. Below, we give detailed responses to each of your comments. ___ > **Q1**: What is the definition of one boolean classifier set A as the subset of another boolean classifier set B for a given graph class GC? There are two possible definitions. The definition for R^2-GNN classifiers is not satisfactory. **A1**: The correct definition is: *For any a in A, there exists a b(a) in B so that a and b(a) achieve the same results on all G in GC.* An R$^2$-GNN classifier is defined as any classifier that can be represented as an R$^2$-GNN model (which outputs node features) equipped with a binary classification function $CLS: \mathbb{R}^D\rightarrow \{0,1\}$, where the input of $CLS$ is a node feature output by the model, $D$ is the output feature dimension, and $CLS$ can be any function with this domain and range. We apologize for not explaining this explicitly; we will include an explicit definition in our revised version. ___ >**Q2**: $F$ enhanced the power of R$^2$-GNN classifiers but it never relates to R$^2$-GNN. Can we state that $F$ also enhances the expressiveness of $FOC_2$ classifiers? What will Figure 3 be if we include $FOC_2 \circ F$ (or with a proper modification of $FOC_2$ with the new predicates introduced)? **A2**: Your perspective is entirely accurate! We used the class $FOC_2\circ F$ in lines 230-232 as an auxiliary class. Although R$^2$-GNN can be boosted by $F$, the same is not true for $FOC_2$. A counter-intuitive fact is that $FOC_2\circ F \subsetneq FOC_2$, which means $FOC_2$ becomes strictly weaker after the transformation. Since $FOC_2\circ F$ is only an auxiliary bridge class, we did not state the above as a theorem in our paper.
We can add the detailed proof to the appendix later; below, we briefly explain it. First, following an idea similar to the proof of Theorem 9, combined with a slightly modified version of Lemma 26 (in the appendix), we can prove $FOC_2\circ F\subseteq FOC_2$. We cannot give the full proof idea here due to space constraints; please refer to our appendix, or to the detailed proof in a later version, if interested. Then, consider the logical classifier $\varphi$ that "classifies a node $v$ as true iff there are at least 5 nodes that are not neighbors of $v$". $\varphi$ can easily be represented as a $FOC_2$ formula. However, it is impossible to construct an equivalent $FOC_2\circ F$ classifier! This example gives some intuition about why the transformation actually makes $FOC_2$ weaker. Combining the above, we get $FOC_2\circ F \subsetneq FOC_2$. In fact, we have $FOC_2\circ F= FOC_2$ for any bounded graph class. Moreover, if we introduce a brand new relation type and modify the definition of the graph transformation slightly, $FOC_2\circ F=FOC_2$ would hold for arbitrary universal graph classes. However, since this modification would introduce more edges and predicates and make $F$ more costly and complex, we settle for the current weaker transformation in our paper. After all, $FOC_2\circ F$ is not important, so we have no incentive to boost it. ___ > **Q3**: For the temporal graph, the author draws two lines of hierarchies of expressiveness. What are the relationships between R$^2$-TGNN and time-and-graph, and between R$^2$-GNN$\circ H$ and time-and-graph? **A3**: First, it must be noted that the original definitions of time-and-graph and time-then-graph only consider localized message passing. Therefore, in order to compare them fairly with our framework, we have to enable them to perform global readout as R$^2$-TGNN does. This is just a slight modification of the original definition of time-and-graph.
The relationships among time-and-graph, R$^2$-TGNN and R$^2$-GNN $\circ H$ are as follows. We can add formal proofs in the appendix of a later version; here, we briefly describe some key points. 1. R$^2$-TGNN $\nsubseteq$ time-and-graph. That's because time-and-graph cannot capture a chain of information that is continuously scattered across time intervals. Specifically, $\varphi(x):=\exists y\,\Bigl(r_1^2(x,y)\wedge \bigl(\exists x\, r_1^1(y,x)\bigr)\Bigr)$ cannot be captured by time-and-graph but is in R$^2$-TGNN. 2. time-and-graph $\nsubseteq$ R$^2$-GNN$\circ H$. That's because the static GNN defined in [1] is stronger than R$^2$-GNN. However, their definition is somewhat idealized and does not exactly match most practical models (we can elaborate on why this is the case in further discussion if you are interested). That is also why we did not include a comparison between time-and-graph and R$^2$-TGNN/R$^2$-GNN$\circ H$ in this version of our paper. 3. Does time-and-graph $\bigcap$ R$^2$-GNN $\circ H\subseteq$ R$^2$-TGNN hold? Regrettably, we currently do not have a definitive answer to this question, which remains an open problem. On an intuitive level, however, we are inclined to believe in the validity of *time-and-graph $\bigcap$ R$^2$-GNN $\circ H\subseteq$ R$^2$-TGNN*. ___ [1] On the equivalence between temporal and static equivariant graph representations, ICML 2022 --- Rebuttal Comment 1.1: Comment: Thanks for your justification. Based on the new results and the improved presentation, I will increase the score to 7. --- Reply to Comment 1.1.1: Comment: Dear Reviewer KFPi: Thanks again for your constructive review and score increase. We are happy to see your confusion resolved. A kind reminder: could you please edit your original review to reflect your current evaluation? Authors
Summary: The paper introduces a new connection between the power of relational GNNs (working on multi-relational graphs) and a class of first-order logic functions. This class, named $\mathcal{FOC}_2$, is the subset of first-order logic restricted to formulas with at most two variables but extended with counting quantifiers. The theoretical analysis considers three classes of multi-relational graphs: the universal graph class, bounded-size multi-relational graphs, and simple graphs, where two nodes can have just one type of connection. Based on these classes, the paper proves that in general neither the $R^2$-GNN nor the $\mathcal{FOC}_2$ class of functions is a subset of the other, but adding a preprocessing step to the graph yields a model that is at least as powerful as the union of both classes. They also extend their results to temporal graphs, where time-series graphs can be collapsed into a static graph or treated as separate graphs. The paper also compares two different classes of models, time-and-graph versus time-then-graph, and proves the second class is strictly more powerful. Empirical results on synthetic and real-world graphs are also provided to support the theoretical results. Strengths: 1. The paper is, for the most part, very well-written. 2. The paper finds the exact class of functions that can be learned by the most common relational GNNs. 3. The theorems show the theoretical superiority of time-then-graph functions over time-and-graph models on temporal graphs. 4. The paper establishes a hierarchical classification of the power of different models on temporal graphs. 5. Supporting experiments show that the theoretical implications have real effects on real-world datasets. Weaknesses: 1. The intuition behind the functions used for the synthetic graphs has not quite been explained. A little more explanation of why these functions were chosen, and whether they are a random selection or among the simplest functions that could convey the point, would be helpful. 2.
The paper relies heavily on the definitions of the time-then-graph and time-and-graph models and refers to previous work for their definitions. These definitions should at least be briefly explained in a section of the Appendix. 3. Section 6 starts with the experiments, and it is hard at first to understand that all synthetic experiments are temporal. In particular, the superscripts on the relations were confusing at first without knowing they are temporal indices. It would have been better to clearly indicate this before starting, and also to include synthetic experiments for the general multi-relational graphs part. 4. The real-world experiment on the Brain-10 dataset is a little inconsistent with the theoretical results. Minor errors: 1. Inconsistent use of $FOC$ and $\mathcal{FOC}$ in the notations. 2. Line 110, a space missing between "vectors" and "$\mathbf{x}_v^{(i)}$". 3. Line 339, "somorphism" -> "isomorphism" Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Node-level binary classification tasks are limited in the real world. The theoretical results are insightful; however, I was wondering if these results imply anything about other types of tasks on graphs, e.g. multi-class node classification or graph-level tasks. 2. In Corollaries 4.1 and 9.1, it is interesting to see that both classes have precisely the same power. However, this power seems to arise because $\mathcal{FOC}_2$ is able to enumerate all possible combinations of relation counts. Is there any complexity measurement, e.g. the time complexity of evaluating a function on a graph, for how complex a function under $\mathcal{FOC}_2$ can be as the bound on the number of nodes in the graph or the number of possible relations in the class increases? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The paper clearly defines the class of functions analysis. There are limitations on what types of graphs and tasks theoretical results work, but these limitations are clearly indicated in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer a55p for acknowledging the merits of our paper in both theoretical and experimental aspects. In particular, we thank you for your interesting and insightful reviews, which motivated us to formally prove and add the following results (see below). We will add these results with rigorous proofs in a later version of the paper. ___ > **Q1**: Node-level binary classification tasks are limited in the real world. Theoretical results are insightful, however, I was wondering if these results can imply anything about any other type of tasks on the graphs, e.g. multi-class node classifications or graph-level tasks. **A1**: In general, for a multi-class node classification task with $n$ labels, we can reduce it to $\lceil \log{n} \rceil$ separate binary node classification tasks by predicting each binary bit of the label's index. A more natural way is to simply treat it as $n$ different binary node classifications. Either approach may lead to a generalization of this paper to the multi-class scenario. However, we don't know how to directly model a multi-class classifier as a unified logical classifier. This may be an interesting future direction. ___ > **Q2**: In Corollaries 4.1 and 9.1, it is interesting to see that both classes have precisely the same power. However, this power seems to appear because of $FOC_2$ being able to enumerate all possible combinations of relation counts. Is there any complexity measurement, e.g. time-complexity of evaluating a function on a graph, on how complex the function under $FOC_2$ can be as the bound on the number of nodes in the graph or number of possible relations in the class increases? **A2**: We can indeed get a (rather loose) bound. Suppose there are $P$ unary predicates and $R$ relation types, and we are considering a bounded graph class with no more than $N$ nodes. For any classifier $c$, suppose $c$ can be represented as an R$^2$-GNN with depth (number of layers) $L$.
Then there is a $FOC_2$ classifier $\varphi$ equivalent to $c$ such that the following bounds hold: - The quantifier depth of $\varphi$ is no more than $L$. - The size of $\varphi$ (the size of its parse tree) is no more than $2^{f(L)}$, where $f(L)=2^{2^{2Nf(L-1)}}$ and $f(0)=2^{2^{2(P+R)}}$. The key idea is the following. First, by Lemma 25 in our appendix, $c$ can be represented as a $FOC_2$ formula $\varphi$ with quantifier depth no more than $L$. Then, by Proposition 24 in our appendix (a key point of this bound; please refer to our appendix), the number of intrinsically different bounded-depth $FOC_2$ formulas is finite, so we only need an upper bound on this number. Finally, we can get the bound by iteratively using the fact that a boolean combination of a set of formulas can always be written in DNF (disjunctive normal form). The tower of powers of two comes from $L$ rounds of DNF enumeration. This bound is rather loose; we can formalize it in the appendix of a later version. ___ >**Q3**: Real-world experiment on the Brain-10 dataset is a little inconsistent with theoretical results. **A3**: We think this may be because our real-world dataset has the following two drawbacks when used as a logical expressiveness benchmark. - Its labels cannot be modeled by a first-order-logic classifier. For example, two isomorphic graphs (nodes) may have different labels in the dataset. This negative fact about real-world datasets has also been observed and illustrated in Gao and Ribeiro's work (Figure 6 of [1]). As a result, permutation-equivariant GNNs cannot produce correct answers, and the transformation increases the confusion. This means the real-world dataset may contain non-logical rules. - The intrinsic logical rules in the dataset are too complicated. In this scenario, the transformation may sometimes make these rules even more complex, because the transformation changes the predicate set and the graph pattern.
As a result, the transformed intrinsic logical rules become too complicated for our bounded-size model to capture. As we mentioned in the paper, [2] and [3] also observe this phenomenon, and they argue that these commonly used benchmarks are inadequate for testing advanced GNN variants. ___ > **Q4**: Intuition behind the functions used for synthetic graphs has not quite been explained. A little more explanation on why these functions and if they are just a random selection or one of the most simple functions that could convey a point will be helpful. The paper relies a lot on definitions of time-then-graph and time-and-graph models and refers to previous work for their definitions. These definitions are better to be briefly explained at least as a section in the Appendix. Section 6 starts with the Experiments and it is hard at first to understand that all synthetic are temporal. Particularly, superscripts on the relations were confusing at first without knowing they are temporal indices. It would have been better to clearly indicate that before starting and also include synthetic experiments for the general multi-relational graphs part. **A4**: Thanks for pointing this out! We presented an explanation in lines 308-316; due to space constraints, it may be too brief to be perfectly understood. We will revise this part in a later version, and we will also consider your other suggestions on the writing. ___ [1] On the equivalence between temporal and static equivariant graph representations, ICML 2022 [2] The logical expressiveness of graph neural networks, ICLR 2020 [3] Are powerful graph neural nets necessary? A dissection on graph classification, arXiv --- Rebuttal Comment 1.1: Comment: Thanks for your thorough responses, and I am thrilled to see that my questions have sparked new ideas to enhance the paper's theory. I am persuaded by the answers provided, and I'll maintain my current score.
Summary: The paper extends the logical characterization of GNNs for node classification by investigating the relational case, i.e. multi-relational graphs. The paper generalizes the work of Barcelo et al. [ICLR 2020], who provided the logical characterization of GNNs for node classification in terms of first-order logic with two variables and counting quantifiers (FOC2). The paper discusses the R2-GNN model, which extends R-GNN --- a GNN architecture for multi-relational graphs --- with a global readout function. The global readout adds aggregation over node features to the combination function in each layer. Some of the key results of the paper are the following: - R2-GNN is not captured in FOC2 and FOC2 is not captured by R2-GNN (for unbounded graphs) - R2-GNN is captured in FOC2 for bounded graphs (given a known upper bound on the number of nodes), but not vice versa - R2-GNN = FOC2 for simple graphs The paper then extends the expressivity of R2-GNNs to full FOC2 by introducing a transformation F that runs in linear time w.r.t. the multi-graph size (O(|V|+|E|)). Finally, the authors extend their framework to temporal graphs. This is achieved by simply adding additional predicates indexed by time. The authors then discuss the expressivity of this framework with the previously introduced transformation F applied to each time stamp, and with another transformation comprising the union of the time-stamped graphs. Post-rebuttal: I have read the authors' rebuttal, and I am more confident about my rating and will keep it, i.e. acceptance. Strengths: The paper extends the logical characterization of GNNs in an interesting direction, i.e. over relational structures. The theory presented is rather intuitive, and the authors try to convey the key proof ideas. They also exploit their theoretical investigation to expand the expressivity of GNNs over relational structures and temporal graphs.
They provide convincing experiments (although slightly redundant, see weaknesses) over synthetic and real-world data. Weaknesses: - Quite terse for non-experts - The comparison of all the aggregation functions in the tables is not much discussed in the paper and is not really important to its main message. Maybe this could be removed/shortened to give space for more proof intuitions? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: When using many predicates, it seems that one could encounter some form of curse of dimensionality --- do you think R2-GNNs suffer from such issues? Especially in the temporal/transformation case, when the number of predicates is further increased. In general, some discussion of how the complexity of learning changes with the number of predicates could be interesting. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the fact that R2-TGNN$\circ$F$^{T}$ does not improve performance over R2-TGNN on the real-world dataset, whereas on their synthetic datasets they do observe an improvement. This ambivalence between theory and practice is quite interesting, and further discussion of it --- as to what exactly is different in real-world data that makes more expressive models inferior --- could be very interesting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Yror for recognizing our original theoretical results analyzing the expressivity of graph neural networks in the multi-relational and temporal setting. Below, we give detailed responses to each of your comments. > **Q1**: When using many predicates, it seems that one could encounter some form of curse of dimensionality --- Do you think R2-GNNs suffer such issues? Specially in the temporal/transformation case, when number of predicates are further increased. In general, some discussion on how the complexity of learning changes with number of predicates could be interesting **A1**: Yes. The more predicates there are, the more dimensions are needed in the feature initialization function $I()$ defined in line 103. However, this also means the graph itself is more complex, and reasoning on it requires heavier logic formulas. In some sense, dimensionality is a necessary sacrifice when dealing with graphs under large semantic systems. In fact, one can avoid this sacrifice by changing the initialization function $I()$. As an extreme example, one can define $I(v):=\prod_{i}{p_i^{c_i}}$, where $p_i$ is the $i$-th prime and $c_i\in\{0,1\}$ indicates whether node $v$ satisfies the $i$-th predicate. This initialization function also distinguishes nodes with different properties but outputs only one-dimensional features, no matter how large the predicate set is! However, we think such settings are unnatural and inapplicable for practical GNN models, so we don't use them in our paper. That said, it is worth noting that some theoretical work, such as the proof of Theorem 5.2 of [1], does use such settings for convenience of theoretical analysis. The graph transformation only introduces $3$ new predicates, so we think this additional cost is rather small and acceptable.
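The prime-product initialization $I(v):=\prod_i p_i^{c_i}$ mentioned above can be sketched in a few lines; this is a minimal illustration (the function names are ours, not from the paper):

```python
def first_primes(n):
    """Return the first n primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

def init_feature(predicate_flags):
    """One-dimensional initialization I(v) = prod_i p_i^{c_i}, where
    c_i in {0, 1} says whether node v satisfies the i-th predicate.
    By unique factorization, distinct predicate sets map to distinct
    integers, so a single dimension suffices to separate them."""
    value = 1
    for p, c in zip(first_primes(len(predicate_flags)), predicate_flags):
        value *= p ** c
    return value

# A node satisfying predicates 0 and 2 vs. one satisfying 1 and 2:
print(init_feature([1, 0, 1]))  # 2 * 5 = 10
print(init_feature([0, 1, 1]))  # 3 * 5 = 15
```

The encoding is injective over predicate sets, which is all the argument above needs, but the values grow exponentially with the number of predicates, which is one more reason such a setting is impractical for trained models.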
In the temporal case, if one uses the collapse function H (Definition 11 in the paper) to transform a temporal graph into a static graph and then runs a static R$^2$-GNN on it, this indeed introduces $|P|\times T$ predicates and thus huge dimensionality, where $P$ is the unary predicate set and $T$ is the number of timestamps. However, R$^2$-TGNN does not introduce new predicates; it just runs different GNN model instances on different timestamps while preserving the original predicate set at each timestamp. Therefore, R$^2$-TGNN has a size proportional to the number of timestamps $T$, but its dimensionality is still $|P|$. The temporal graph transformation $F^T$ (Definition 12 in the paper) only introduces $3$ new predicates. In conclusion, R$^2$-TGNN$\circ F^T$ introduces $3$ new predicates in total, which is acceptable. In Theorem 14 of our paper we have proven its superior expressiveness, so there is no need to use the costly $H$ transformation on temporal graphs. > **Q2**: The authors discuss the fact that R$^2$-TGNN$\circ F^{T}$ do not improve performance over R$^2$-TGNN on real-world dataset, whereas on their synthetic dataset they do observe improvement. This ambivalence between theory and practice is quite interesting and further discussion of this--- as to what exactly is different in real-world data that makes more expressive models inferior --- could be very interesting **A2**: We think this may be because our real-world dataset has the following two drawbacks when used as a logical expressiveness benchmark. 1. Its labels cannot be modeled by a first-order-logic classifier. For example, two isomorphic graphs (nodes) may have different labels in the dataset. This negative fact about real-world datasets has also been observed and illustrated in Gao and Ribeiro's work (Figure 6 of [2]). As a result, permutation-equivariant GNNs cannot produce correct answers, and the transformation increases the confusion. This means the real-world dataset may contain non-logical rules. 2.
The intrinsic logical rules in the datasets are too complicated. In this scenario, the transformation may sometimes make these rules even more complex, because it changes the predicate set and graph pattern. As a result, the transformed intrinsic logical rules become too complicated to be captured by our model with bounded size. As we mentioned in the paper, [1] and [3] also observe this phenomenon. They likewise remark that these commonly used benchmarks may be inadequate for testing advanced GNN variants. > **Q3**: Comparison among different aggregation functions should be removed/shortened to give space for more proof intuitions. **A3**: Thanks for your suggestion. We will take it into account and add more necessary proof intuition in our revised version. [1] The logical expressiveness of graph neural networks, ICLR 2020. [2] On the equivalence between temporal and static equivariant graph representations, ICML 2022. [3] Are powerful graph neural nets necessary? A dissection on graph classification, arXiv.
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper the authors provide a theoretical analysis of graph neural network (GNN) expressivity on multi-relational and temporal graphs. Specifically, they extend the multi-relational graph convolution network R-GCN to a class of architectures they call R-GNN, which allows for arbitrary aggregation (pooling neighbors with respect to a particular relation) and combination functions (pooling these aggregations with the node representation itself). They further extend this to a class of architectures they call R²-GNN, which adds a global "readout" function that aggregates the feature vectors of all nodes in the graph. They analyze the capability of R²-GNN to classify nodes and compare it to that of $\mathcal{FOC}_2$, a restricted subset of first-order logic which only allows formulas with at most 2 variables but also allows counting quantifiers. They find that neither is a subset of the other; in particular, R²-GNN is not able to distinguish whether two neighbors of a given node with different relations are, in fact, the same node. However, by first augmenting the graph with a particular transformation $F$ they are able to overcome this restriction, and ultimately show that $R^2\text{-GNNs} \circ F = \mathcal{FOC}_2$ on any graph class with a bounded number of nodes. All of this is then extended to temporal graphs, which can also be converted to a multi-relational graph (with a unique set of relations per time-step). A hierarchy of expressivity with respect to current temporal graph frameworks is presented. Finally, experimental results validating the theoretical expressivity on synthetic and real-world datasets are presented. Strengths: The paper is, to the best of my knowledge, original, and theoretical results analyzing the expressivity of graph neural networks in the multi-relational setting are certainly of significant interest to the graph neural network community. The work is of a high quality, and presented clearly.
Despite the highly technical nature of the results, the authors do a good job in motivating their work and providing intuition for the proofs. Weaknesses: In general, a point of caution with any strongly theoretical work is whether it applies in practice. In this setting the largest concern comes from the graph transformations. It is not unusual for a graph transformation to extend capabilities in the way the authors have done. A classic example is to extend a method which works on undirected graphs to the setting of directed graphs by creating a new graph with twice as many nodes, one representing "head" and "tail" for each node; however, this can make some relationships (e.g., transitivity) very difficult to observe. In the authors' setting, while the graph augmentation may afford the ability to prove theoretical expressivity results, one might be concerned that the resulting graph is augmented to the point that a GNN architecture may require greater depth to perform the same sort of tasks it did previously. Of course the authors have provided empirical evidence that this augmentation is not problematic in at least three graphs, which is quite reasonable to support their approach; however, it is something to bear in mind. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What is the initialization used for the nodes in $V' \setminus V$? As best I can tell, they do not have any unary predicates, so they would seem to be initialized with a vector of all zeros. 2. Not a question as much as a comment - the R²-GNN architecture is equivalent to the R-GNN architecture if we first augment the graph to have a new binary relation which forms a complete graph on all nodes. Given that the work already includes a graph augmentation step, an alternative presentation could potentially avoid the R²-GNN architecture altogether, and simply express the entire thing as R-GNN on an augmented graph.
I'm not sure it is clearer to take this approach, but it might be worth noting. Most of the paper was very well written, but I did find the following typos (mostly in the intro): L37: authors -> the authors L45: authors -> the authors L46: in -> in the L47: edges in graphs -> edge in the graph L48: of -> (delete) L48: graph needs -> graphs need L51: as -> as a L57: be -> can be L60: to -> to the L60: Besides -> Moreover L61: extensively -> extensively in L69: in -> in the L70: power -> the power L71: such -> such a L72: mutli -> multi L91: to -> to denote L96: node Boolean -> Boolean node L111: vectorsx -> vectors x L338: do somorphism -> perform an isomorphism A minor suggestion: move the sentence in lines 170-171 to around line 85, to motivate the definition of bounded and simple graph classes. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I did not see much discussion of the limitations of the proposed model. Some of the concerns outlined in the weaknesses section above are good candidates for this. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer p5D8 for the recognition of our original and theoretical results analyzing the expressivity of graph neural networks in the multi-relational setting as well as their potential contribution to the graph neural network community. Below, we would like to give detailed responses to each of your comments. ___ > **Q1**: What is the initialization used for the nodes in $V' \backslash V$. To the best I can tell, they do not have any unary predicates, so they would seem to be initialized with a vector of all zeros. **A1**: Yes, your understanding is right. These new nodes are just constructed as auxiliary "bridges", so they don't own any substantial properties themselves. Their node features become meaningful only after messages from primal nodes pass to them. ___ > **Q2**: In general, a point of caution with any strongly theoretical work is whether it applies in practice. In this setting, the largest concern comes from the graph transformations. It is not unusual for a graph transformation to extend capabilities in the way the authors have done. A classic example is to extend a method that works on undirected graphs to the setting of directed graphs by creating a new graph with twice as many nodes, one representing "head" and "tail" for each node, however this can make some relationships (eg. transitivity) very difficult to observe. In the author's setting, while the graph augmentation may afford the ability to prove theoretical expressivity results, one might be concerned that the resulting graph is augmented to the point that a GNN architecture may require greater depth to perform the same sort of tasks it did previously. Of course the authors have provided empirical evidence that this augmentation is not problematic in at least three graphs, which is quite reasonable to support their approach, however it is something to bear in mind. **A2**: Indeed, your observation is on point. 
In essence, a model's complexity is contingent upon its depth (number of layers) and width (feature dimensions). With the incorporation of graph transformation, a single new unary predicate is introduced, thereby necessitating an additional feature dimension. This augmentation contributes to a minor increase in the model's size, which remains quite acceptable. However, a more intriguing aspect surfaces when we delve into the process of graph transformation, depicted in Figure 2 of the paper. Here, the progression from one-hop to three-hop message passing elongates the process, potentially demanding a tripling of the model's depth to achieve similar task outcomes. Furthermore, the graph transformation introduces two novel relation types, although this aspect is generally less consequential in terms of complexity. ___ > **Q3**: Typos and the suggested adjustment of the paper structure. **A3**: We greatly appreciate your feedback regarding the identified typos and your valuable suggestion for enhancing the readability of our paper. We will address these aspects diligently in our revised version. --- Rebuttal Comment 1.1: Title: Thank you and brief follow-up Comment: Thanks for your clarifications. Could you comment on my earlier remark about the equivalent interpretation of R²-GNN as R-GNN with an augmentation? I have copied it below: > Not a question as much as a comment - the R²-GNN architecture is equivalent to the R-GNN architecture if we first augment the graph to have a new binary relation which forms a complete graph on all nodes. Given that the work already includes a graph augmentation step, an alternative presentation could potentially avoid the R²-GNN architecture altogether, and simply express the entire thing as R-GNN on an augmented graph. I'm not sure it is clearer to take this approach, but it might be worth noting. Is my interpretation here correct?
If so, would it have been perhaps simpler to present the entire thing as simply R-GNN with a graph augmentation? As some feedback on the comments from (at this time) the remaining reviewer providing a negative score: Reviewer auoR has some concerns with the term "multi-relational" graph, and proposes to use "real world graph" or just "graph". The term "multi-relational" is quite standard in the literature, and I see no particular issue with the authors' definition of it. Using "real world graph" would be highly ambiguous, and using just "graph" would be incorrect, as a graph is a tuple $G = (V, E)$ where $E \subseteq V \times V$ but a multi-relational graph is a triple $G' = (V, E, R)$ where $E \subseteq V \times R \times V$. The authors here have chosen to split out binary and unary predicates, which is fairly standard and can be shown to be isomorphic to the multi-relational structure presented here. --- Reply to Comment 1.1.1: Comment: Thanks for your reply. Sorry that we missed one of your issues, since we previously misunderstood it. Your alternative augmentation is correct, but not efficient. Suppose there are $n$ nodes; adding a new complete graph would require $n^2$ edges, which is too costly in many cases. Additionally, it may incur more cost when combined with graph transformation. A more efficient way is the following: we first add a brand new node called 'Agg' (abbreviation of aggregation). Then we only need to connect all original nodes to 'Agg' with a special new binary relation. This method only introduces $n$ new edges, which is linear. The reason why it works is that you can first integrate messages from all original nodes to 'Agg' by this new relation. Then in the next round (layer) you distribute the integrated messages from 'Agg' to all original nodes. This two-round message passing is equivalent to a one-round global readout.
In fact, it is almost the same as the standard technique used in the definition of the WL-test (please refer to page 31 of [1] if you are interested). However, you may notice that this alternative method may require more depth or feature dimensions to do the same task. In this work our theory doesn't focus on depth and feature dimension, so it's fine, but the empirical performances of this method and the current method may differ due to depth/dimension requirements. As for brevity of presentation, there are two possible uses of 'the new relation augmentation' (abbreviated as augmentation in the following). 1. If we combine the augmentation and graph transformation in the presentation, our original motivation may be hidden: R$^2$-GNN has been studied on single-relation graphs in [2], where it was proven to be very powerful in the single-relation scenario, capturing $FOC_2$. However, we find this inclusion relationship fails in the multi-relation scenario. That's why we want to calibrate and boost its expressiveness in this new scenario. Therefore, as we want to illustrate our original motivation, we choose to define R$^2$-GNN separately and show its failure on multi-relational graphs, which can't be combined with the next step. Another reason is that we think the calibration of the expressiveness of R$^2$-GNN on multi-relational graphs (Section 3) counts as one contribution. We won't have a chance to show this if we combine the augmentation (an alternative of R$^2$-GNN) and graph transformation. 2. In our opinion, if we separate the augmentation from graph transformation and only use it as an alternative of R$^2$-GNN, the paper length won't change much: we simply write a short paragraph to describe the transition from R-GNN to R$^2$-GNN. If we used the augmentation separately, it would be like replacing all of lines 116-123 with the definition of the augmentation. Other parts would remain almost the same.
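To make the edge-count comparison in the reply above concrete, here is a minimal sketch of the two augmentations as plain edge lists. The relation names `global`, `to_agg`, and `from_agg` are our own placeholders, not identifiers from the paper:

```python
def complete_graph_augment(nodes):
    """Naive alternative: connect every ordered pair of original nodes
    with a new 'global' relation -- O(n^2) new edges."""
    return [(u, "global", v) for u in nodes for v in nodes if u != v]

def agg_node_augment(nodes):
    """Super-node alternative sketched in the reply: add one fresh 'Agg'
    node and link each original node to it with a special relation --
    only O(n) new edges. Two message-passing rounds through 'Agg'
    (gather, then broadcast) simulate one global readout."""
    edges = []
    for v in nodes:
        edges.append((v, "to_agg", "Agg"))    # round 1: gather at Agg
        edges.append(("Agg", "from_agg", v))  # round 2: broadcast back
    return edges

nodes = [f"v{i}" for i in range(100)]
print(len(complete_graph_augment(nodes)))  # 9900 edges (quadratic)
print(len(agg_node_augment(nodes)))        # 200 edges (linear)
```

The trade-off noted in the reply shows up directly: the linear construction halves nothing for free, since each global readout now costs two message-passing layers instead of one.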
Still, we are truly thankful for your interest in our work, as well as your careful reading and thoughtful review. [1] Sandra Kiefer. Power and limits of the Weisfeiler-Leman algorithm. PhD thesis, RWTH Aachen University, 2020. [2] The logical expressiveness of graph neural networks, ICLR 2020.
Summary: This work calibrates the logical expressiveness of R2-GNNs as node classifiers on multi-relational graphs. Motivated by some negative results, the authors boost R2-GNNs with a graph transformation, which enables R2-GNN to capture FOC2 formulas. They further extend the expressiveness results and graph transformation to temporal settings and derive an expressiveness hierarchy of temporal GNNs. Strengths: 1. Clear theoretical results on static and temporal settings. 2. The proposed transformation is straightforward and scalable. Weaknesses: 1. Nearly no related work. There is no related work section. Only the closest related work is mentioned in the introduction, and some related works are missing. For example, besides [1], some works also analyze GNN expressivity as a Boolean classifier. The Weisfeiler-Leman test has been connected to Boolean graph classifiers for a long time [2]. [3] also analyzes Boolean link classifiers. 2. The experiments are only conducted on small datasets. You can use the larger DGS and AM from the RGCN paper [4]. [1] Pablo Barceló, Egor V. Kostylev, Mikael Monet, Jorge Pérez, Juan Reutter, and Juan Pablo Silva. The logical expressiveness of graph neural networks. ICLR, 2020. [2] Jin-yi Cai, Martin Fürer, Neil Immerman. An Optimal Lower Bound on the Number of Variables for Graph Identification. FOCS 1989: 612-617. [3] Xingyue Huang, Miguel A. Romero Orth, Ismail Ilkan Ceylan, Pablo Barceló. A Theory of Link Prediction via Relational Weisfeiler-Leman. CoRR abs/2302.02209 (2023). [4] Michael Sejr Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, Max Welling. Modeling Relational Data with Graph Convolutional Networks. ESWC 2018. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. There is no scalability comparison between the proposed method and baselines. Please add it. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitation is not addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer Hbfa for the recognition of our theoretical results on both static and temporal settings as well as our proposed novel transformation. Below, we would like to give detailed responses to each of your comments. ___ > **Q1**: There is no scalability comparison between the proposed method and baselines. Please add it. The experiments are only conducted on small datasets. You can use the larger DGS and AM from the RGCN paper. **A1**: We have followed your advice and tested our method and baselines on the larger datasets DGS and AM. The results on the two bigger datasets are as follows. From the results, we can see that R$^2$-GNN$\circ F$ outperforms the other baselines, which confirms the scalability of our method.

|                    | AIFB | MUTAG | DGS  | AM   |
| ------------------ | ---- | ----- | ---- | ---- |
| # of nodes         | 8285 | 23644 | 333845 | 1666764 |
| # of edges         | 29043 | 74227 | 916199 | 5988321 |
| R-GNN              | 91.7 | 76.5  | 81.2 | 89.5 |
| R$^2$-GNN          | 91.7 | 85.3  | 85.5 | 89.9 |
| R$^2$-GNN$\circ F$ | **97.2** | **88.2** | **88.0** | **91.4** |
| R-GCN              | 95.8 | 73.2  | 83.1 | 89.3 |
| R-GAT              | 96.9 | 74.4  | 86.9 | 90.0 |

> **Q2**: Nearly no related work. There is no related work section. Only the closest related work is mentioned in the introduction and some related works are missing. For example, besides [1], some works also analyze GNN expressivity as a Boolean classifier. The Weisfeiler-Leman test has been connected to Boolean graph classifiers for a long time [2]. [3] also analyzes Boolean link classifiers. **A2**: We didn't have a separate section for related work in this preliminary version due to space constraints. It will be added in a later version. Thanks for your proposal. It might be worth noting that [2] focuses more on distinguishing ability, which means whether a logic fragment can distinguish two graphs. It is slightly different from classification ability.
Specifically, consider that we have a graph (node) property $A$ and a set of logical classifiers $B$ (such as $B=FOC_2$). Distinguishing ability of $B$ over $A$ just guarantees that for each graph (node) $G$ with $A$ and each graph (node) $H$ without $A$, there exists a formula $\varphi_{G,H}\in B$ such that $G$ and $H$ get different labels on $\varphi_{G,H}$. However, classification ability requires stronger expressiveness, in the sense that classification ability of $B$ over $A$ means there exists a **fixed** formula $\varphi\in B$ such that for every graph (node) $G$ with $A$ and every graph (node) $H$ without $A$, it satisfies $G\models \varphi, H\models \neg \varphi$. This difference appears often in related theoretical analysis, and it has been implicitly and briefly mentioned in the introduction of this preliminary version. [1] The logical expressiveness of graph neural networks, ICLR 2020. [2] An Optimal Lower Bound on the Number of Variables for Graph Identification, FOCS 1989. [3] A Theory of Link Prediction via Relational Weisfeiler-Leman, arXiv. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. It solves my concerns. I am willing to raise my score to 6.
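The distinguishing-vs-classification contrast in A2 above comes down to quantifier order: "for every pair, some formula separates them" versus "some fixed formula separates all pairs". A deliberately simple toy analogy (integers in place of graphs, equality tests in place of $FOC_2$ formulas; all names here are ours, purely illustrative) makes the gap visible:

```python
def distinguishes(phi, g, h):
    """phi separates g and h if it assigns them different labels."""
    return phi(g) != phi(h)

# Toy universe: integers 0..7; target property A = "is even".
evens, odds = [0, 2, 4, 6], [1, 3, 5, 7]

# Classifier family B: one equality test per integer.
B = [lambda x, k=k: x == k for k in range(8)]

# Distinguishing ability: for EVERY positive/negative pair (g, h),
# SOME member of B separates them (e.g. phi = "x == g").
has_distinguishing = all(
    any(distinguishes(phi, g, h) for phi in B) for g in evens for h in odds
)

# Classification ability: a single FIXED member of B would have to label
# all evens positively and all odds negatively -- no equality test does.
has_classification = any(
    all(phi(g) for g in evens) and not any(phi(h) for h in odds) for phi in B
)

print(has_distinguishing, has_classification)  # True False
```

So the family distinguishes every mixed pair yet contains no single classifier for the property, mirroring why distinguishing ability is the strictly weaker notion.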
Summary: This paper proposes the R^2-GNN that captures the logic features in the graph. The proposed methodology largely resembles the previous work of ACR-GNN, where a GNN variant is proposed to capture logic features for graphs with no relation types. The proposed R^2-GNN extends the previous method by considering graphs that involve a set of unary and binary predicates and potentially temporal information. To do so, the authors apply the same readout function to the base R-GNN model and show that it can capture the full logic classifiers with an additional graph transformation F. A temporal variant R^2-TGNN is also proposed to handle temporal graphs. Strengths: The paper is overall easy to read; however, I didn't check all the proofs in detail. Weaknesses: ## Novelty As mentioned in the summary, the authors follow the same method as ACR-GNN in modifying R-GNN into R^2-GNN to enable the model to capture the FOC_2 logic family. It seems to me the main methodological differences are the additional graph transformation F and the discussion on temporal graphs. - The former helps the model to distinguish nodes with different relations; while it seems to work empirically, it is more or less an incremental modification to the framework. - For temporal graphs, the authors propose to "temporalize" the graph by mapping the graph snapshots into one single static graph and process it with a slightly modified model, namely R^2-TGNN. That said, this paper is not very novel methodology-wise. ## Quality The definition of a multi-relational graph is more or less redundant and confusing: this is effectively the graph with both unary and binary predicates, and many existing KGs already contain facts of both these types. One can refer to it as a real-world graph or just an ordinary graph instead of as "multi-relational".
Some model and design choices lack justification: - It is unclear why the authors pick R-GNN as the base model, given that: (1) most existing GNNs can handle graphs with different relations and (2) the proposed modifications such as the readout function and transformations F and H are orthogonal to the choice of the model. The authors may want to consider evaluating their method on different base models empirically to better demonstrate the difference. - It is also unclear why the task is limited to node classification only: this leads to a narrow choice of benchmarks in the experiments (L329-L330) and leaves some popular datasets untested such as GDELT and ICEWS. I'm also concerned about the real-world dataset experiments. The authors show that R^2-(T)GNN achieves better performance than other GNN variants. While empirically this does have some merits, it does not provide any insights into the proposed model. Since the model focuses on learning latent FOC_2 classifiers, one should first inspect the datasets and see what learnable FOC_2 rules are in there. And if the model performs better, the authors should take the opportunity to analyze whether this indeed resulted from the extra capabilities introduced to the model. ## Clarity The paper is overall easy to read; however, I didn't check all the proofs in detail. ## Significance Apart from the lack of novelty, this paper also seems to lack significance. The FOC_2 logic is a small subset of first-order logic. Being able to capture these classifiers leads to minor benefits to the GNN literature: it does not provide sufficient explanation of the GNN model, nor does it tackle the difficult logical reasoning problems on graphs such as multi-hop reasoning, induction, and so on. In the experiments, the model is tested only on synthetic datasets and three real-world datasets, with no comparison to SOTA models, so the empirical significance is also unclear.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer auoR for the positive feedback and insightful comments. Below, we would like to give detailed responses to each of your comments. > **Q1**: Why do the authors pick R-GNN as the base model? The authors should evaluate their method on different base models. **A1**: It is worth noting that R-GNN is not a specific model architecture; it is a framework that contains a range of different GNN architectures. In the paper, we just said it's generalized from R-GCN [1], but our real goal is to define a generalized framework as an abstraction of most message-passing GNNs (MPGNNs). We apologize for not pointing this out explicitly in the paper. In the definitions (lines 112 and 118), the functions can be set as any functions you like, such as matrix multiplications or QKV-attentions. Most commonly used GNNs such as R-GCN [1] and R-GAT [2] are captured (upper-bounded) within our R-GNN framework. Many other related works, such as [3], [4], and [5], also use intrinsically the same framework as our R-GNN if you read the definitions, so R-GNN is an abstraction framework for MPGNNs that has been widely adopted and studied in the community. Therefore, we think analyzing these frameworks leads to common results for many existing GNNs. We've added a series of experiments that use R-GAT [2] as the base model. The following results show the generality of our results on different base models within the framework. (Here, we'd like to **re-emphasize** that R-GAT is actually another specific architecture under R-GNN.)
| $FOC_2$ classifier | $\varphi_1$ | $\varphi_2$ | $\varphi_3$ | $\varphi_4$ |
| ------------------------------ | ----------- | ----------- | ----------- | ----------- |
| R-GAT $\circ H$ | 100 | 61.4 | 88.6 | 82.0 |
| R-GAT+readout $\circ H$ | 100 | 93.5 | 95.0 | 82.2 |
| R-GAT+readout $\circ F\circ H$ | **100** | **98.2** | **100** | **95.8** |

> **Q2**: It is also unclear why the task is limited to node classification only. **A2**: We consider it as future work because we are not sure whether these results can be generalized to other tasks such as link prediction. Take link prediction as an example. If one wishes to define logics for link prediction, the considered logical formulas have to contain two free variables rather than the one in node classification. It may influence some of our theoretical results, but perhaps we can use graph transformation to reduce the two-variable case to some one-variable case. We need further research to check whether this is true, and whether our results can be generalized. > **Q3**: The model focuses on learning the latent $FOC_2$ classifiers. One should first inspect the datasets and see what learnable $FOC_2$ rules are in there. **A3**: Many real-world datasets may not be perfectly modeled as first-order-logic classifiers. Sometimes two isomorphic graphs (nodes) have different labels in the dataset. This negative fact about real-world datasets has also been observed and illustrated in [6] (Figure 6). Even if these datasets can be logically modeled, the challenge lies in extracting intricate latent logical rules from real-world datasets. This very challenge underscores the necessity of employing both synthetic and real-world datasets in our experimentation, as elaborated in this paper. Synthetic datasets serve to showcase enhanced logical expressiveness, whereas real-world datasets demonstrate the tangible performance enhancements stemming from this heightened expressiveness.
Considering the intricacy of the datasets, we hold reservations about the efficacy of existing logical extraction algorithms, such as those outlined in [7], for our specific objectives. Given the datasets' complexity, a pragmatic approach seems more viable. We envisage a dataset that not only originates from real-world contexts but also employs explicit logical classifiers. Regrettably, our search for such a benchmark dataset remains unfruitful. > **Q4**: Being able to capture $FOC_2$ leads to minor benefits to the GNN literature. The model is tested only on synthetic and three real-world datasets, with no comparison to SOTA models. **A4**: While $FOC_2$ is a weak fragment within the realm of first-order logic, our **impossibility result** as exemplified in Proposition 2 has shown that R$^2$-GNN, a rather powerful framework, remains **inadequate** in capturing the entirety of $FOC_2$. Consequently, the framework also falters in apprehending more intricate logical architectures such as multi-hop and induction, which inherently belong to the broader scope of first-order logic. This realization prompts us to emphasize the imperative of enhancing the logical expressiveness inherent to GNNs. Our goal is to empower them to encompass even the seemingly modest scope of $FOC_2$. A closer examination of the capabilities of MPGNNs reveals that $FOC_2$ is indeed more formidable than initially perceived. In terms of comparison with SOTA, we have added a comparison with R-GAT as above. We have to point out that we didn't compare with a few SOTA models, such as [8], because they are not permutation-equivariant and therefore inconsistent for classification tasks. These non-equivariant GNNs won't be equivalent to any logical classifier, and talking about their logical expressiveness is meaningless. (See [6] for more detail.)
[1] Modeling relational data with graph convolutional networks, ESWC 2018. [2] Relational graph attention networks, arXiv. [3] The logical expressiveness of graph neural networks, ICLR 2020. [4] A Theory of Link Prediction via Relational Weisfeiler-Leman, arXiv. [5] Logical Expressiveness of Graph Neural Network for Knowledge Graph Reasoning, arXiv. [6] On the equivalence between temporal and static equivariant graph representations, ICML 2022. [7] Explainable GNN-based models over knowledge graphs, ICLR 2021. [8] The surprising power of graph neural networks with random node initialization, IJCAI 2021. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: After reading the response and other reviews, I decided to keep my score and I'm still inclined to reject, as it does not address my concerns about the novelty and the significance of this work. **Technical novelty**. I'm a bit frustrated that the authors did not attempt to defend the technical novelty issues I raised in the initial comment, and my concerns that (1) *"the proposed method largely resembles the previous work of ACR-GNN"* and (2) *"the additional graph transformation F is more or less an incremental modification"* remain unaddressed. Still, I'm happy to be proven wrong in this regard. **R-GNN**. I appreciate the clarification, and I strongly recommend the authors revise the draft accordingly to make clear that this is a family of GNNs instead of a particular example. **Limited significance**. Apart from the novelty issue, my biggest concern is the limited significance of this work, and the authors' response seems to acknowledge it rather than rebut it: - **The authors confirmed that R^2-GNN is limited to the node classification task only**. This significantly limits the scope of the work, as other tasks such as link prediction are critical to most graph-based systems and applications such as graph reasoning, entity resolution, QA, and recommendation.
- **The authors also acknowledge that real-world graph datasets are difficult to model with FOL classifiers**. While I fully understand the difficulty here, this limitation inevitably undermines the empirical evaluation and consequently the claims that are supposed to be supported. As far as I'm concerned, the aim of this work is to enable GNNs to learn FOC2 classifiers rather than pursue SOTA performance; if real-world datasets are inherently bad for showcasing this, why would one want to include them in the first place? Showing results on real-world datasets that are not SOTA distracts the audience and does not add to the main claim of the work. - **Finally, the authors acknowledge that even though FOC2 is already a fragment of FOL, R^2-GNN still cannot fully capture it**. I appreciate the authors' motivation and understand this is a challenging task, but the fact is that such a framework is indeed far from being practically useful and has yet to advance the explainability of GNN by a significant degree. --- Reply to Comment 1.1.1: Comment: We truly appreciate the time and effort you've invested in reviewing our paper. Briefly, you have concerns about the technical novelty and the significance. With the utmost respect, we hold a contrary viewpoint on these matters. In the ensuing discussion, we will provide detailed elucidations for each of these aspects. > Resemblance of our proposed method and the previous work of ACR-GNN We want to re-emphasize that these two frameworks work in two totally different scenarios: ACR-GNN focuses only on the simple **single-relational** scenario, while our proposed method targets the more complex **multi-relation** scenario. Besides, as you can see in Section 3 and Section 4, their formulation and theoretical analysis are totally different, too. > Graph transformation is only an insignificant increment.
The logical expressiveness of GNNs over multi-relational graphs is **unexplored**, so after extensive theoretical analysis in Section 3, we found that a direct extension of ACR-GNN **fails** to capture $FOC_2$ in multi-relational scenarios. Hence, in Section 4, we innovatively proposed the graph transformation strategy, which is the key to surpassing $FOC_2$ and breaking the expressiveness barrier in multi-relational graphs. In particular, we have provided a detailed theoretical analysis in Section 4 (together with the proofs in the Appendix). Experimental results also empirically confirm its superiority on both synthetic and real-world datasets. > The authors confirmed that R^2-GNN is limited to the node classification task only, which limits its significance. Node classification (as well as graph classification) is itself a quintessential task with numerous real-world applications. In line with prior research that also focuses only on node classification, such as [1], we embrace this task in the context of our paper. Tasks such as link prediction, as elaborated upon in our response (Answer 2), may also be analyzed and resolved using our technique, but they demand rather different formulations and proofs for conducting theoretical analyses. After all, other tasks are not the focus of this work. > The authors also acknowledge that real-world graph datasets are difficult to model with FOL classifiers. Therefore, real-world dataset experiments are unnecessary. We respectfully hold a differing perspective on this assertion. Our stance is rooted in the recognition that real-world datasets inherently embody noise and complexity, often lacking explicit logical rules. Consequently, real-world datasets are difficult to model with FOL classifiers. We first raised this point in the rebuttal to show that your original proposal of logically analyzing real-world datasets is unrealistic.
Yet, we think experiments on real-world datasets are still important. We want to reiterate the fundamental importance of incorporating both synthetic and real-world datasets in our experimental framework: synthetic datasets, characterized by well-defined logical rules, serve to show the augmented logical expressiveness facilitated by our approach; the inclusion of real-world datasets, on the other hand, demonstrates the practical performance enhancements attributed to this heightened expressiveness. Besides, including both synthetic and real-world datasets for similar reasons is common practice in many previous related works, exemplified by [1], [2], and [3]. > Finally, the authors acknowledge that even though $FOC_2$ is a fragment of FOL, R^2-GNN still cannot fully capture it. I think this work is not practical, and does not advance the explainability of GNNs by a significant degree. We wish to underscore that while R^2-GNN, in its original form, fails to capture $FOC_2$ within the multi-relational context, our contribution lies in proving that the integration of graph transformation empowers R^2-GNN to overcome this limitation, marking a pivotal advancement in GNN explainability. Besides, to showcase the practicality of our proposed method, we conduct experiments on real-world datasets, which directly **refutes** your aforementioned second limitation that ‘real-world datasets don’t add to the main claim of your work’. In fact, demonstrating the practicality of a methodology certainly holds substantial importance and counts as a main claim. As suggested by other reviewers, we have extended our experiments to encompass larger-scale real-world datasets. These additional results consistently corroborate the applicability, practicality, and real-world relevance of our work. Thanks again for your time in writing your comments; regarding **R-GNN**, we will take your suggestion and update the description in our revised version.
[1] The logical expressiveness of graph neural networks ICLR2020 [2] Rethinking the Expressive Power of GNNs via Graph Biconnectivity ICLR2023 [3] On the equivalence between temporal and static equivariant graph representations ICML2022
null
null
FeCAM: Exploiting the Heterogeneity of Class Distributions in Exemplar-Free Continual Learning
Accept (poster)
Summary: This paper addresses the problem of continual learning, which holds great importance in the machine learning field. The authors argue that using the Mahalanobis distance is more optimal than the Euclidean metric for handling new classes. The proposed method is evaluated on various continual learning settings and compared against state-of-the-art approaches. Strengths: 1. Continual learning is of great importance in the machine learning field, and the availability of code is appreciated. 2. The proposed method is evaluated on multiple continual learning settings and compared against state-of-the-art approaches. 3. The figures effectively illustrate the distribution of old and new classes, making the proposed approach reasonable. Weaknesses: 1. The largest concern about this paper is the assumption of a pre-trained model (or, say, reliance on a large number of classes in the first task). There are two typical settings of class-incremental learning: training from scratch and training from half. The former assigns classes equally to each incremental task, while the latter places half of the total classes in the first task. According to a recent work [1], these different settings concentrate on different aspects of continual learning algorithms. In this paper, the experiments (as well as the empirical observations) are conducted in the training-from-half setting, making the paper less convincing for the other setting. Would the proposed method work in the other setting? Would the observations about covariance be the same in the former setting? More empirical evaluations and experimental comparisons are essential. 2. In Figure 1, why does the training process lead to a Gaussian-like distribution for the seen classes and a non-Gaussian-like distribution for unseen classes? Perhaps some discussion of the theoretical insights behind such phenomena should be included. 3.
The current proposed method is a combination of several mature tricks, e.g., Mahalanobis distance, covariance shrinkage, and Tukey’s transformation. I understand that most of the contribution lies in the motivation in the preliminary part, but the combination still means the novelty/contribution is shared with former works. [1] Online Hyperparameter Optimization for Class-Incremental Learning. AAAI 2023 Technical Quality: 3 good Clarity: 3 good Questions for Authors: Remarks 1. In Figure 5, clarification is needed regarding how the authors managed to sample 20,000 instances per class for CIFAR100, as the maximum number of instances per class is 500 for CIFAR. 2. What is “classifier incremental learning” in Line 137? 3. It would be beneficial to provide an implementation guideline with pseudo code. In summary, this paper addresses an interesting phenomenon in the field of continual learning and proposes a simple baseline method to address it. The proposed approach is evaluated on various protocols, including many-shot CIL, few-shot CIL, and pre-trained CIL, demonstrating superior performance compared to state-of-the-art methods. Although there are some evident drawbacks, the novel findings presented in this paper contribute to the continual learning field. Therefore, my initial rating is a borderline accept. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback and suggestions. __W1:__ We agree that usually one method sticks to one setting. Exemplar-free methods use 50% of the data in the first task, as equal splitting is a much more challenging setting which is usually tackled by storing exemplars or by expanding the network in new tasks. When half of the total classes are present in the first task, the feature extractor is stronger. When we start with fewer classes (20 classes in the first step) and add 20 new classes at every task, we can observe the same behavior in the table below. FeCAM still works and outperforms other methods. However, the average incremental accuracy is not very high in this challenging setting because the representation learned in the first task is not as good as in the big-first-task setting.

| Method | CIFAR100 | ImageNet-Subset |
| :----: | :----: | :----: |
| | Avg inc. acc (Final acc) | Avg inc. acc (Final acc) |
| NCM (euclidean) | 50.0 (30.6) | 54.5 (35.0) |
| FeTrIL | 61.3 (46.2) | 63.1 (48.4) |
| FeCAM (ours) | __62.3 (48.1)__ | __66.4 (52.3)__ |

__W2:__ As previously stated by Guerriero et al. [A], the highly non-linear nature of deep neural networks leads to isotropic, spherical representations of the learned classes, and hence using the euclidean distance is effective for NCM classification. However, in the class-incremental learning setting with a fixed feature extractor, the unseen or new classes are not explicitly learned using the non-linear deep networks, and hence the representations of these classes are anisotropic and not spherical, as used to be the case before the emergence of deep neural networks [B]. We also analyze this phenomenon in Fig. 3 (b,c) of the paper. __Q1:__ We have now clarified in the paper that we sample features for the linear classifier from the gaussian distributions of the old classes (using the prototypes and the covariance matrices of the old classes). We also updated Fig. 5 to clarify this.
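The feature-sampling step described in the Q1 answer above (drawing pseudo-features for old classes from their stored Gaussians) might look roughly like the following sketch; the dimensions, variable names, and helper function are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def sample_pseudo_features(prototypes, covariances, n_per_class, seed=0):
    """Draw pseudo-features for each old class from a Gaussian
    parameterized by its stored prototype (mean) and covariance matrix.

    prototypes:  dict {class_id: (d,) mean vector}
    covariances: dict {class_id: (d, d) covariance matrix}
    Returns a feature matrix X and label vector y for classifier training.
    """
    rng = np.random.default_rng(seed)
    feats, labels = [], []
    for cls, mu in prototypes.items():
        samples = rng.multivariate_normal(mu, covariances[cls], size=n_per_class)
        feats.append(samples)
        labels.append(np.full(n_per_class, cls))
    return np.concatenate(feats), np.concatenate(labels)

# toy usage: two old classes in a 4-dimensional feature space
protos = {0: np.zeros(4), 1: np.ones(4)}
covs = {0: np.eye(4), 1: 0.5 * np.eye(4)}
X, y = sample_pseudo_features(protos, covs, n_per_class=100)
print(X.shape, y.shape)   # (200, 4) (200,)
```

The sampled features can then be mixed with real features of current-task classes when fitting a linear classifier.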
__Q2:__ We defined the classifier incremental learning in Lines 30-35 of introduction. It refers to the settings where the feature extractor is not updated after the first task and only the classifier is learned in new tasks. __Q3:__ We now added the pseudo code in the supplementary materials. Additionally, we will release the source code of our method in GitHub for reproduction. [A] Samantha Guerriero, et al. Deepncm: Deep nearest class mean classifiers. International Conference on Learning Representations Workshop (ICLR-W), 2018. [B] Thomas Mensink, et al. Distance-based image classification: Generalizing to new classes at near-zero cost. IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2013. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: I thank the authors for the detailed response, which addresses most of my concerns. Although I find the updated figures unavailable, the response contains most of the results. I update my rating to weak accept. BTW, it is still very important to report the full results given various settings (e.g., training from half and scratch), and I expect the authors to report more results on the second setting in the final version. It at least answers the capability and incapability of the proposed method in various CIL settings.
Summary: The authors claim that the feature space of a trained network is more scattered for newly learned classes than for old classes, which prevents the Euclidean distance from representing the distribution properly. Therefore, they adopt the Mahalanobis distance instead to handle the heterogeneity of the incoming class distributions. To boost performance, covariance normalization, covariance shrinkage, and covariance matrix approximation are adopted. Strengths: - The performance presented in this paper is state of the art. Even though a Bayesian classifier is usually only valid for classification problems, it is still impressive to see the advance in performance. - The settings and amount of experiments in this paper are rich enough to support the hypothesis claimed in this paper. - In particular, the thorough ablation over the components of the method is very interesting. Weaknesses: - The many existing techniques comprising the proposed method make this paper somewhat short on novelty, but I still think the overall idea of this paper is reasonable. - In Eq. (6), I don't understand why the norm of Y is used throughout the equation. Y should be the target of all samples, and its norm does not seem to be important. I think more explanation of how this equation is derived is required here. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - I agree that a Bayesian classifier is well suited to incremental learning. However, when it comes to applying this incremental learning procedure to other kinds of tasks, such as segmentation or generative models, will it still be effective? This is not a critical comment; I just want to know the opinion of the authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: No comments required here. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments and suggestions. __W2:__ This is not a norm. We have now clarified this in the paper: we use |A| to indicate the cardinality of a set A, and here we used | | to signify the number of classes seen up to that task. __Q1:__ Yes. Whenever classification can benefit from the richer representation given by the covariance matrix, FeCAM can help obtain better classifier accuracy. However, the covariance matrices may need to be adapted differently for different tasks. We think that FeCAM can be extended to other tasks in incremental settings which apply prototype-based classification (based on Euclidean distance). Current incremental semantic segmentation methods are mostly not prototype-based, but could be adapted to this setting. In principle, we believe that FeCAM can then be used to obtain performance gains. We think it is harder to extend the theory to incremental generative models. These models typically do not use any explicit Euclidean distance (which could be replaced by the proposed metric). In the case of pseudo-replay [A], where a generative model is used for image replay of old classes, the resulting feature distribution can form realistic anisotropic distributions. However, it should be noted that these methods require an additional generative model and are typically reported to struggle when applied to complex image datasets (like ImageNet). [A] Shin, Hanul, et al. "Continual learning with deep generative replay." Advances in neural information processing systems 30 (2017). --- Rebuttal Comment 1.1: Comment: My minor concern has been resolved by the authors and I maintain my decision as 'Accept'.
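As context for the distance discussed in this review thread, a minimal sketch of Mahalanobis-based nearest-class-mean classification is given below. This is a generic illustration, not the paper's exact FeCAM formulation; in particular, the `shrink * I` term is a crude stand-in for the paper's covariance shrinkage, and all names are assumptions:

```python
import numpy as np

def mahalanobis_ncm(x, prototypes, covariances, shrink=1.0):
    """Assign x to the class whose prototype is closest in
    (squared) Mahalanobis distance. `shrink * I` is added to each
    covariance for numerical invertibility."""
    best_cls, best_dist = None, np.inf
    for cls, mu in prototypes.items():
        cov = covariances[cls] + shrink * np.eye(len(mu))
        diff = x - mu
        dist = diff @ np.linalg.inv(cov) @ diff   # squared Mahalanobis distance
        if dist < best_dist:
            best_cls, best_dist = cls, dist
    return best_cls

# anisotropic toy example where Euclidean NCM would pick the wrong class:
protos = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 0.0])}
covs = {0: np.diag([9.0, 0.1]),    # class 0 is very elongated along x
        1: np.diag([0.1, 0.1])}
x = np.array([2.5, 0.0])           # Euclidean-closer to class 1's prototype
print(mahalanobis_ncm(x, protos, covs, shrink=0.0))   # prints 0
```

The toy example illustrates the point made in the paper's motivation: with anisotropic class distributions, accounting for the covariance can flip the decision relative to the Euclidean metric.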
Summary: This manuscript introduces a Bayes classifier method, FeCAM, to address the feature distribution shift within the realm of Class-Incremental Learning (CIL). This method uses a Mahalanobis distance formulation and additionally uses techniques like correlation normalization, covariance shrinkage, and Tukey’s transformation to estimate better covariance matrices for continual classifier learning, thereby enhancing the efficacy of the model in addressing CIL tasks. Strengths: The manuscript presents clear and logical theoretical reasoning, making it easily understandable for readers. Moreover, the method is firmly grounded in a well-defined theoretical framework. Weaknesses: 1. There exist some mistakes; for example, on line 224, it states “20 initial classes and 10 IL steps of 5 classes”. It should be corrected to “20 initial classes and 10 IL steps of 8 classes”. 2. From Table 1, it can be observed that the results of FeTrIL are recomputed, whereas the other results are reproduced using the original configurations of the methods. This deviation contradicts the manuscript's description. In addition, please explain how the experimental setup in this manuscript differs from that of FeTrIL. 3. On Page 7, the datasets “Split-ImageNet-R”, “Split-CIFAR100”, “CoRe50” and the “domain-incremental learning” setting need to be further explained. In particular, it is important to explain the differences between Split-CIFAR100 and the previously used CIFAR100 dataset. 4. From the ablation studies, it can be observed that the experimental results in row 7 of Table 4 outperform those in row 6. Therefore, when the diagonal matrix is combined with "Tukey", "Shrinkage" and "Norm", would the performance be better? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1.
Page 1, ‘Recent approaches to incrementally learning the classifier by freezing the feature extractor after the first task have gained much attention.’ I think it is necessary to explain the weakness of this kind of method after this sentence. 2. It can be observed that the proposed method FeCAM outperforms the existing state-of-the-art (SOTA) results on all datasets under the few-shot settings, but there should be further analysis explaining the advantage of the proposed FeCAM. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The proposed method relies on a pretrained network to learn good representations, because it does not learn new features but reuses the ones learned in the first task. Therefore, when training from scratch, starting with small tasks, this method may not work well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the inputs and the suggestions. __W1:__ We corrected that in the main text to “50 initial classes and 10 IL steps of 5 classes”. __W2:__ We train FeTrIL with more epochs, which results in improved accuracy in Table 1 compared to the original paper. We recompute FeTrIL as it is the state-of-the-art method with code available. The results of the other methods are taken from the FeTrIL paper, where they were reproduced from their original papers. We have now corrected the text in the paper and clarified this in more detail. __W3:__ We added the following details of these datasets and splits in the supplementary materials. We use the widely-used continual learning benchmark Split-CIFAR-100, which splits the original CIFAR-100 [A] into 10 tasks with 10 classes in each task, unlike the other settings in Table 1, which have different task splits. Based on ImageNet-R [B], Split-ImageNet-R was recently proposed by [C] for continual learning; it contains 200 classes randomly divided into 10 tasks of 20 classes each. It contains data with different styles, like cartoon, graffiti and origami, as well as hard examples from ImageNet with high intra-class diversity, making it more challenging for CIL experiments. We use CoRe50 [D] for the domain-incremental settings, where the domain of the same class of objects changes in new tasks. It consists of 50 different types of objects from 11 domains. The first 8 domains are used for learning and the other 3 domains are used for testing. Since it has a single test task, we report the test accuracy after learning on all 8 domains. __W4:__ As mentioned in lines 307 to 309, the diagonal matrix is normalized (differently from Eq. 7) by dividing by the norm of the diagonal. Eq. 7 does not make sense for the diagonal matrix, since it does not have the covariance values, and making the diagonal values one would lose all the information. So, we perform normalization of only the variance values.
The results in row 7 of Table 4 use ‘Tukey’, ‘Covariance Shrinkage’ and ‘Normalization’, but not the normalization from Eq. 7. We have now clarified this in Table 4. __Q1:__ We add the following statement in line 33: One of the drawbacks is the inability to learn new representations with a frozen feature extractor. __Q2:__ We added more analysis to explain the advantage of FeCAM, particularly in the few-shot settings: FeCAM can easily be adapted to available few-shot methods in CIL, since most methods obtain class prototypes from the few-shot data of new classes and then use the euclidean distance for classification. We show in our paper that starting from the base-task model from ALICE [E] and simply using the FeCAM metric for classification significantly improves the performance across all tasks for the standard few-shot CIL benchmarks. For further analysis to demonstrate the applicability of FeCAM, we take the base-task model from FACT [F] and use FeCAM in the incremental tasks on the CUB200 dataset. FeCAM improves the performance on all tasks when applied to FACT, as shown in the table below.

| Method | Task 0 | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 | Task 6 | Task 7 | Task 8 | Task 9 | Task 10 | Avg |
| :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: | :-----: |
| FACT | __77.9__ | 74.9 | 71.6 | 66.3 | 65.9 | 62.5 | 61.2 | 59.8 | 57.9 | 57.6 | 56.4 | 64.7 |
| FACT+FeCAM | __77.9__ | __75.3__ | __72.2__ | __67.6__ | __67.0__ | __63.5__| __62.4__ | __61.3__ | __59.8__ | __59.1__ | __57.9__ | __65.8__ |

One of the main drawbacks of the many-shot continual learning methods is overfitting on few-shot data from new classes, and hence these methods are not suited for few-shot settings. FeCAM is a single solution for both many-shot and few-shot settings and thus can be applied in both types of continual learning settings. [A] Alex Krizhevsky, et al. Learning multiple layers of features from tiny images.
2009. [B] Dan Hendrycks, et al. The many faces of robustness: A critical analysis of out-of-distribution generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021. [C] Wang Z, et al. Dualprompt: Complementary prompting for rehearsal-free continual learning. In European Conference on Computer Vision, 2022. [D] Vincenzo Lomonaco and Davide Maltoni. Core50: a new dataset and benchmark for continuous object recognition. In Conference on Robot Learning, pages 17–26. PMLR, 2017. [E] Can Peng, et al. Few-shot class-incremental learning from an open-set perspective. In European Conference on Computer Vision (ECCV), 2022. [F] Da-Wei Zhou, et al. Forward compatible few-shot class-incremental learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
Summary: The authors study classifiers in the incremental learning scenario with a fixed, strong pre-trained feature extractor. They point out the limitations of the Euclidean distance-based nearest class mean classifier in continual learning and demonstrate the benefits of a Mahalanobis distance-based classifier. Several methods (shrinkage, Tukey’s transformation, covariance normalization) are proposed to estimate the covariance matrices, and their effectiveness is examined through ablation studies. The authors evaluate their method in various settings, including many-shot CIL, few-shot CIL, and domain incremental learning. Although the proposed method does not require saving previous samples, it outperforms the existing CL methods, including those based on replay, in terms of performance. Strengths: 1. The proposed method is simple, yet outperforms the existing methods. 2. The proposed method does not require saving previous samples. 3. The paper is well-written and easy to follow. Weaknesses: 1. My main concern regarding this method is that it relies on a fixed feature extractor after the initial training, with only the classifier being adapted. As [1] shows, fixed models underperform when the pre-training data and the continual learning data are dissimilar, because they cannot acquire new knowledge. It would make the paper stronger if the authors showed how the model performs with a feature extractor pre-trained on classes dissimilar from those in CL. For instance, [2] uses a model pre-trained on ImageNet after removing the classes similar to CIFAR and Tiny-ImageNet. 2. Saving the covariance matrix for each class at the feature level seems to be expensive in memory, especially when there are a large number of classes. [1] Ostapenko et al., Continual learning with foundation models: An empirical study of latent replay. CoLLAs, 2022 [2] Kim et al. A multi-head model for continual learning via out-of-distribution replay.
CoLLAs, 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How does the performance change when using models pre-trained using classes dissimilar to the classes used in CL? 2. Can the authors provide any suggestions on how to apply their method for a trainable feature extractor? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Refer to Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the inputs and suggestions. __W1, Q1:__ Similar to [2], we perform experiments using the DeiT-S/16 vision transformer pretrained on the ImageNet data with different pre-training data splits, and then evaluate the performance of NCM (with euclidean distance) and the proposed FeCAM method on Split-CIFAR100 (10 tasks with 10 classes in each task). In order to make sure that the pretrained classes are not similar to the classes of CIFAR100, [2] manually removed 389 classes from the 1000 classes in ImageNet. We take the publicly available DeiT-S/16 weights pre-trained on the remaining 611 classes of ImageNet by [2] and evaluate NCM and FeCAM. As expected, the performance of both methods drops a bit when the pre-training is not done on similar classes. Still, FeCAM outperforms NCM by about 10% on the final accuracy. Thus, this experiment further validates the effectiveness of modeling the covariance relations using our FeCAM method in settings where images from the initial task are dissimilar to new-task images.

| Split-CIFAR100 | DeiT-S/16 pre-trained on 1k classes | DeiT-S/16 pre-trained on 611 classes [2] |
| :-------: | :-------: | :-----: |
| | Avg Inc Acc (Final Acc) | Avg Inc Acc (Final Acc) |
| NCM (euclidean) | 71.4 (60.5) | 69.2 (58.5) |
| FeCAM (ours) | __78.5 (70.2)__ | __76.9 (68.6)__ |

__W2:__ We want to point out that we describe this in the supplementary materials, where we compare to exemplar-based methods and show that FeCAM requires much less storage space. Furthermore, due to the symmetric nature of covariance matrices, we can store half (the lower or upper triangular part) of each covariance matrix and reduce the storage by half.
The analysis of storage requirements after every task for FeCAM and the exemplar-based methods (storing 2000 exemplars) for the ImageNet-Subset is as follows:

| Method | Task 0 | Task 1 | Task 2 | Task 3 | Task 4 | Task 5 |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Exemplar-based | 312 MB | 312 MB | 312 MB | 312 MB | 312 MB | 312 MB |
| FeCAM (ours) | 53 MB | 63 MB | 74 MB | 84 MB | 95 MB | 105 MB |

As future work, using covariance matrix factorization to reduce the storage requirements can be explored: SVD(covariance matrix) = U\*S\*V. Here, the singular values in S are in descending order, so one can consider keeping only the first k singular values from S and similarly truncating the U and V matrices (by taking the first k columns from U and the first k rows from V). This direction can be explored since it reduces the storage from 512\*512 values to (512\*k + k + k\*512), where k should be less than 256 to give any saving. __Q2__: As a consequence of backbone feature drift, the distributions of previously seen classes change and would need to be adapted. Semantic Drift Compensation (SDC) [A] proposes a method to estimate the drift of a single point in the latent space, based on the observed drift of the current data. In [A] this method is used to update the mean of the distribution (it is then combined with the Euclidean distance). We would also need to update the covariance matrix of the distribution. A potential approach could sample a set of points from the previous distribution, apply SDC to each of them, and then compute the covariance matrix on these points. We can then use FeCAM with the updated covariance matrix and the updated prototypes. [A] Lu Yu, et al. Semantic drift compensation for class-incremental learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020. --- Rebuttal Comment 1.1: Title: Response to the author comments Comment: Thank you for the detailed responses. My concerns are addressed.
I raised my score from 5 to 6. Please reflect the discussions and experiments in the revision.
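The truncated-SVD covariance compression suggested as future work in the storage discussion above (keeping the first k singular values of each class covariance matrix) can be sketched as follows; the 512-dimensional feature size matches the rebuttal's example, while the variable names, k value, and toy data are illustrative assumptions:

```python
import numpy as np

def compress_covariance(cov, k):
    """Truncated SVD of a (symmetric) covariance matrix.

    Keeps the top-k singular values, reducing storage from d*d
    to d*k + k + k*d numbers, as suggested in the rebuttal.
    """
    U, S, Vt = np.linalg.svd(cov)         # singular values come out in descending order
    return U[:, :k], S[:k], Vt[:k, :]     # truncated factors

def decompress_covariance(Uk, Sk, Vtk):
    """Reconstruct the (approximate) covariance matrix."""
    return Uk @ np.diag(Sk) @ Vtk

# toy example with d = 512 feature dimensions (illustrative)
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 512))
cov = np.cov(X, rowvar=False)

Uk, Sk, Vtk = compress_covariance(cov, k=64)
approx = decompress_covariance(Uk, Sk, Vtk)
stored = Uk.size + Sk.size + Vtk.size     # 512*64 + 64 + 64*512 numbers
print(stored, cov.size)
```

As the rebuttal notes, the saving only materializes when k is small enough (below 256 for d = 512); the approximation error depends on how quickly the singular values decay.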
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and sincerely appreciate their efforts in providing valuable feedback. The reviewers agree that the paper is very well-written and easy to follow (MMgd, nDen), has clear and logical theoretical reasoning (YVv6) along with good motivation via plots and figures (MMgd, QHUR), and they also appreciated the availability of code (QHUR). The reviewers appreciated the multiple experimental settings (9pER, QHUR, MMgd) and the thorough ablation experiments (9pER). We address all the weaknesses and questions raised by the reviewers in the respective rebuttals. We believe that most of these inputs are valid and greatly improve our paper. We summarize the major changes we made to the paper in the rebuttal process: 1. In order to make sure that the pretrained classes are not similar to the incremental classes, we performed experiments with the base model pre-trained on only the dissimilar classes and observed that FeCAM still performs well and improves significantly over euclidean-NCM. 2. We clarified the experimental settings for many-shot methods in more detail. We also added the details of the Split-CIFAR100, Split-ImageNet-R and CoRe50 datasets. 3. We performed more experiments to show that FeCAM can be easily adapted to multiple few-shot continual learning methods with improved performance. 4. We clarified the notation in Equation 6. 5. We performed experiments in settings starting with only 20 classes in the first step and show that FeCAM still outperforms the most competitive exemplar-free method FeTrIL as well as NCM with euclidean distance. 6. We updated Figure 5 for more clarity. We also added pseudo code for FeCAM. We hope that our responses to the reviewers' questions and the additional experiments will help in the review process. Regards, Authors
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper presents a method for class-incremental learning when a strong base classifier is available and classes are added to the classifier incrementally. The base classifier is kept frozen and another classifier is used on top of the features from this classifier. Instead of using deep networks, the authors propose using a Bayes classifier and modeling the distance to classes using the Mahalanobis distance instead of the popular Euclidean distance. The authors argue and show that the features of incoming new classes are heterogeneous and are better suited to being measured with the Mahalanobis distance. The authors also propose some solutions to the problems with the covariance matrix in the Mahalanobis distance calculation and to improve the performance further. The method is evaluated in many- and few-shot settings on many standard datasets, where it seems to beat many existing methods. Strengths: 1. The paper is very well-written and easy to follow; the supplementary material also provides good background and completes the picture. 2. The authors do a good job of motivating and validating their motivation and methodology with plots, figures and quantitative numbers. The use of the Mahalanobis distance and the corresponding proposals for the covariance matrix seem like simple but powerful changes. 3. The method works without additional training of classifiers, so it would be quicker to incorporate new classes in an incremental setting. 4. It does not use significant additional storage space, and quantitatively the method does well compared to existing methods. Weaknesses: 1. There are no error bars for the results, which would provide more confidence in them. It is a convention in incremental learning research to report only a single run per method without any error bars, earlier because it used to be computationally expensive, but there is no reason to keep that practice now. 2.
The limitations already address this, but follow-up work with changing features would exacerbate the drift problem, and I would love to see the theory handle that case as well; the authors already list it as future work, so no action is needed for this submission. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How is the order of the classes which are added decided? 2. Which classes are selected as part of MiniImagenet and ImagenetSubset? The classes and few-shot images should also be provided for full reproducibility. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have sufficiently addressed the limitations of the method. The limitations make the method only applicable in certain cases, but the authors acknowledge this, and the performance of the method seems pretty good in that setting. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
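The covariance-based classification the review describes can be illustrated with a minimal sketch, assuming per-class Gaussian feature statistics and a simple shrinkage estimator. The names `fit_class_stats` and `predict` are hypothetical, and this is a generic Mahalanobis-distance NCM sketch, not the authors' FeCAM implementation:

```python
import numpy as np

def fit_class_stats(features, labels, shrink=0.5):
    # Per-class mean and shrunk covariance for a Mahalanobis-based
    # nearest-class-mean classifier (generic sketch, not FeCAM itself).
    stats = {}
    d = features.shape[1]
    for c in np.unique(labels):
        f = features[labels == c]
        mu = f.mean(axis=0)
        cov = np.cov(f, rowvar=False)
        # Shrinkage toward a scaled identity keeps the covariance
        # invertible even when a class has very few samples.
        cov = (1 - shrink) * cov + shrink * np.eye(d) * np.trace(cov) / d
        stats[c] = (mu, np.linalg.inv(cov))
    return stats

def predict(stats, x):
    # Assign the class whose Mahalanobis distance to x is smallest.
    def dist(c):
        mu, inv_cov = stats[c]
        diff = x - mu
        return diff @ inv_cov @ diff
    return min(stats, key=dist)
```

Shrinkage is one common way to handle the singular-covariance problem in the few-shot regime, which is the kind of covariance-matrix issue the review alludes to; the Euclidean-distance NCM baseline corresponds to replacing `inv_cov` with the identity.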
Rebuttal 1: Rebuttal: Thanks for all the feedback and suggestions. __W1:__ FeCAM is independent of random initialization. Since we do not train any deep neural network after the first step, the method is deterministic and thus we do not see any variation in the results over multiple runs. __W2:__ As a consequence of changing features and the backbone feature drift, the distribution of previously seen classes changes and would need to be adapted. Semantic Drift Compensation (SDC) [A] proposes a method to estimate the drift of a single point in the latent space, based on the observed drift of the current data. In [A] this method is used to update the mean of the distribution (it is then combined with the Euclidean distance). We would also need to update the covariance matrix of the distribution. A potential approach could sample a set of points from the previous distribution, apply SDC to each of them, and then compute the covariance matrix on these points. We can then use FeCAM with the updated covariance matrix and the updated prototypes. __Q1:__ Following iCaRL [B], PyCIL [C] and several other CIL papers, we follow the same order of classes for fair comparison (iCaRL uses seed 1993). We included the sequence of class indices in the supplementary material. __Q2:__ For ImageNet-Subset in the many-shot settings, we use the same classes as in PyCIL [C], which is openly available on the Kaggle platform under the name: `arjunashok33/imagenet-subset-for-inc-learn`. For miniImageNet in the few-shot setting, we use the same set of images in every task as done previously in [D, E, F]. We will release the source code of our method on GitHub for reproduction (code for one setting is already provided in the supplementary materials). In the source code, the dataset details are clearer and can be easily used for reproducing our results. [A] Lu Yu, et al. Semantic drift compensation for class-incremental learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[B] Sylvestre-Alvise Rebuffi, et al. iCaRL: Incremental classifier and representation learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2017. [C] Da-Wei Zhou, et al. PyCIL: a Python toolbox for class-incremental learning. SCIENCE CHINA Information Sciences, 2023. [D] Chi Zhang, et al. Few-shot incremental learning with continually evolved classifiers. In Conference on Computer Vision and Pattern Recognition (CVPR), 2021. [E] Can Peng, et al. Few-shot class-incremental learning from an open-set perspective. In European Conference on Computer Vision (ECCV), 2022. [F] Da-Wei Zhou, et al. Forward compatible few-shot class-incremental learning. In Conference on Computer Vision and Pattern Recognition (CVPR), 2022.
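The potential covariance-update approach sketched in W2 above (sample from the old class distribution, drift each sample, re-estimate the statistics) could look roughly like the following. Here `drift_fn` is a purely illustrative stand-in for an SDC-style drift estimate, and `update_distribution` is a hypothetical name, not code from [A] or from FeCAM:

```python
import numpy as np

def update_distribution(mu_old, cov_old, drift_fn, n=2000, rng=None):
    # Sketch of the potential approach from W2: sample points from the
    # previously stored class distribution, move each by an estimated
    # feature drift, then recompute the mean and covariance.
    rng = rng or np.random.default_rng(0)
    pts = rng.multivariate_normal(mu_old, cov_old, size=n)
    moved = np.array([p + drift_fn(p) for p in pts])
    return moved.mean(axis=0), np.cov(moved, rowvar=False)
```

With a constant drift, this simply translates the stored prototype while leaving the covariance essentially unchanged; a state-dependent `drift_fn` would also reshape the covariance, which is the point of re-estimating it from drifted samples.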
Neural Lyapunov Control for Discrete-Time Systems
Accept (poster)
Summary: The paper proposes a Lyapunov control method for discrete-time systems, in contrast to previous approaches targeted at continuous-time systems. The paper proposes a mixed-integer linear programming approach for verifying stability conditions in discrete-time systems, a technique for computing sub-level sets which define the region of attraction, and a heuristic gradient-based approach for finding counterexamples that can accelerate the learning of Lyapunov functions. Strengths: I appreciate the counterexample generation method proposed in this work. It leverages the key ideas of adversarial training in the computer vision domain, using gradients to accelerate the discovery of counterexamples. Weaknesses: - The Heuristic Counterexample Generation cannot guarantee all counterexamples are found. It relies on sampling in the state space, but sampling, as a method, cannot exhaust the state space. Therefore, the verification is more like an empirical evaluation. I am not saying being “empirical” is not good, but it simply does not match the verification objective, which should be very strict. - The authors did not explain whether it is truly more challenging to train in discrete-time systems compared to training in continuous-time systems. In fact, continuous-time systems and discrete-time systems can share the same training framework. See https://arxiv.org/pdf/2101.05436.pdf for how they computed $\dot{h}$ in the paragraph below (7). - I would like to discuss with the authors whether verification is truly necessary in Learning for Control. As we know, real-world dynamics are much more complex than the dynamics in the neural Lyapunov studies. The verification methods presented in these papers are very difficult to apply to the real world and make real impact.
Often, a method that pursues excellent empirical performance instead of 100% rigorous verification will be closer to real-world scenarios in control (other domains such as planning or chip design might be different). Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and engaging comments. Below we provide our responses. 1. ***Comment:*** *The Heuristic Counterexample Generation cannot guarantee all counterexamples are found. It relies on sampling in the state space, but sampling, the method itself, cannot exhaust the state space. Therefore, the verification is more like an empirical evaluation.* ***Response:*** This appears to be a misunderstanding. In fact, our approach uses both heuristic and sound counterexample generation (using MILP). It is this combination that enables both improved efficacy and soundness of the overall approach, in that in the end the Lyapunov conditions are provably satisfied. In particular, the controller that our algorithm returns is always verified to be stable using the sound MILP verifier that we describe, and this is the stopping criterion of our algorithm. Consequently, as we assert in Theorem 4.1, our algorithm always returns provably stable control (in the sense of satisfying Lyapunov stability conditions) by construction. $\ $ 2. ***Comment:*** *The authors did not explain whether it is truly more challenging to train in discrete-time systems compared to training in continuous-time systems.* ***Response:*** As we emphasize in our responses to Reviewer qMzc above, we can exploit structure in discrete-time settings that cannot be so easily exploited in continuous-time domains. As a result, we actually attain an order-of-magnitude improvement compared to prior art, which is mostly in continuous time. Since neural network controllers take non-negligible computation to output control, systems using them are effectively discrete time, and the structure we exploit is therefore highly relevant. $\ $ 3. ***Comment:*** *In fact, continuous-time systems and discrete-time systems can share the same framework in training.
See https://arxiv.org/pdf/2101.05436.pdf for how they computed $\dot{h}$ in the paragraph below (7).* ***Response:*** Note that in the referenced paper the authors are not doing strict verification, and their evaluation is entirely empirical. Thus, Qin et al. (the referenced paper) approximate the continuous-time derivative $\dot{h}$ numerically and still achieve good empirical results. In contrast, we achieve fully verified stability (see our response to Comment 1 above). Since the definitions of the continuous- and discrete-time Lyapunov stability conditions are different, we cannot use the numerical method for derivatives from that paper, as doing so would not allow the controller-Lyapunov function pair to pass our MILP verifier. More precisely, in continuous-time settings, the key condition is $\forall x \in D; \nabla_{f_u} V(x) \leq 0$, where $\nabla_{f_u} V(x)$ is the Lie derivative of the function $V$ w.r.t. the vector field $f_u$ at point $x$. In discrete-time settings, the condition is: $\forall x \in D; V(f(x)) - V(x) \leq 0$. Satisfying the former condition does not, in general, satisfy the latter (because discretized continuous-time systems imply that control is effectively piecewise constant, rather than continuously adapting to state feedback). $\ $ In addition, the nature of our training framework is itself quite different from Qin et al. (and more akin to the NLC framework) in that we leverage a form of the counterexample-guided abstraction and refinement (CEGAR) paradigm, iteratively generating counterexamples and adding them to the training data. Our key advances within this framework involve the use of gradient-based heuristic counterexample generation and MILP-based verification, which in combination achieve significant improvements over prior art. $\ $ A final distinction is that Qin et al. are primarily concerned with safety (avoiding unsafe sets), while we focus on Lyapunov stability (ensuring that the system converges to the origin). $\ $ 4.
***Comment:*** *I would like to discuss with the authors if verification is truly necessary in Learning for Control. As we know, real-world dynamics are much more complex than the dynamics in the neural Lyapunov studies. The verification methods presented in these papers are very difficult to be applied to the real world and make real impacts. Often, a method that pursues excellent empirical performance instead of 100\% rigorous verification will be closer to real-world scenarios in control (other domains such as planning or chip design might be different).* ***Response:*** This is a wonderful discussion to have! Indeed, the current state of the art of approaches that yield fully verified properties such as stability cannot be applied to the full complexity of real control problems. Nevertheless, just because we cannot do this now does not mean that we will never have this ability. We believe that our work serves as an important stepping stone in a line of research that continues to improve scalability. It can often take several years of concerted effort to achieve a high-impact goal, and we feel that scalable synthesis of fully verified control in real systems is a worthy goal to aspire to. In the meantime, we must indeed engage in a combination of empirical evaluation and testing with verification of smaller subcomponents, but the goal is to be able to verify larger and larger system components over time, until we can verify the full system. --- Rebuttal Comment 1.1: Title: Thanks for the Response Comment: I appreciate the authors' response to my questions. Since the first weakness is a misunderstanding, I don't have further questions on the technical validity of the proposed approach. I will change to accept. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's thoughtful consideration of our paper and responses, and are glad that we were able to clarify the misunderstanding. Please let us know if you have any further questions.
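The discrete-time Lyapunov decrease condition $V(f(x)) - V(x) \leq 0$ discussed in this thread can be checked empirically on sampled states. The following is a hedged sketch on a toy contractive system of my own choosing, not the paper's benchmarks or its verifier:

```python
import numpy as np

def f(x, gain=0.9):
    # Toy stable discrete-time dynamics x_{t+1} = f(x_t); a stand-in
    # for a closed-loop system, not one of the paper's benchmarks.
    return gain * np.tanh(x)

def lyapunov(x):
    # Candidate V(x) = ||x||^2: positive definite with V(0) = 0.
    return np.sum(x * x, axis=-1)

def decrease_violations(samples):
    # Count sampled states violating V(f(x)) - V(x) <= 0.
    return np.sum(lyapunov(f(samples)) - lyapunov(samples) > 0)

rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(1000, 2))
print(decrease_violations(samples))  # 0 for this contractive toy system
```

Sampling like this is exactly the "empirical evaluation" the reviewer worries about: zero violations on samples proves nothing about the whole domain. The rebuttal's point is that the sound MILP verifier, not sampling, is what certifies the condition uniformly over the state space.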
Summary: The stability of nonlinear systems has long been a challenge in the field of control systems, with current methodologies employing Lyapunov stability theory to derive control policies. However, finding suitable Lyapunov functions for these systems is notably complex. To address this, researchers have started using neural networks to approximate Lyapunov functions, though primarily for continuous-time systems. The paper introduces a first-of-its-kind approach for learning neural Lyapunov control in discrete-time systems. This approach has three critical components: (1) a unique mixed-integer linear programming method to verify stability conditions; (2) a new method for computing sub-level sets to identify the region of attraction; and (3) a heuristic gradient-based method to rapidly find counterexamples, which improves the learning process of Lyapunov functions. Experimental results show a significant improvement over current techniques, outperforming neural Lyapunov control baselines by an order of magnitude in both running time and the size of the region of attraction. For the 'cartpole' and 'PVTOL' benchmarks, this is the first automated method to yield a provably stable controller. Strengths: This method outperforms recent neural Lyapunov control baselines by an order of magnitude in both running time and the size of the region of attraction. Weaknesses: The loss function is a weighted sum of several terms, which is hard to balance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How does your method compare in terms of advantages to similar works like "Reinforcement Learning Control of Constrained Dynamic Systems with Uniformly Ultimate Boundedness Stability Guarantee" by Han M, Tian Y, Zhang L, et al. (Automatica, 2021, 129: 109689)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: This method leverages a mixed-integer linear programming approach, which is known to be NP-hard. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the comments. Our detailed responses are below. 1. ***Comment:*** *The loss function is then a weighted sum of several terms, which is hard to balance.* ***Response:*** It is common for a loss function to be a weighted sum of several terms, as those weights are hyperparameters that we need to calibrate for learning. For example, in both of our baselines, NLC and UNL, the loss functions are weighted sums of several terms. Similarly, standard RL approaches such as PPO also involve many hyperparameters. $\ $ 2. ***Comment:*** *How does your method compare in terms of advantages to similar works like "Reinforcement Learning Control of Constrained Dynamic Systems with Uniformly Ultimate Boundedness Stability Guarantee" by Han M, Tian Y, Zhang L, et al. (Automatica, 2021, 129: 109689)?* ***Response:*** We would like to clarify that the referenced paper addresses a fundamentally different problem from ours. Although they also refer to a notion of stability, it is quite distinct, and is more useful for providing safety than stability guarantees. For example, in our paper we want the cartpole to converge to and remain at the origin, while in their experiment they want the cartpole position to remain within a target safe interval ($x<4$). $\ $ Due to this formal distinction, the goal of Han et al. is to achieve what is commonly referred to as safety, rather than stability in the Lyapunov sense. We would also like to point out that Han et al. do not provide any strict verification guarantees like ours (for example, their Theorem 1 only guarantees safety approximately), which makes their evaluation essentially empirical, while our approach returns a controller that is provably stable. Moreover, while Han et al. also utilize Lyapunov functions, the associated conditions are in expectation, rather than uniform over all states, as required in the notion of stability that we use. $\ $ To be more precise, in Han et al.
the key concept is uniform ultimate boundedness (UUB), which is defined as follows (Defn. 1 on page 3; also see Thowsen, 1983): a system is said to be uniformly ultimately bounded (UUB) with ultimate bound $\eta$ if there exist positive constants $b$, $\eta$, where $\forall \epsilon < b$ there exists $T(\epsilon, \eta)$ such that $\|x_{0}\| < \epsilon$ $\Rightarrow$ $\|x_t\| < \eta$, $\forall t > T(\epsilon, \eta)$. At a high level, this means that trajectories that start inside the safe set (defined by $\|x_{0}\| < \epsilon$) will remain inside a subset of the safe set (defined by $\|x_t\| < \eta$), for suitably chosen $b$ and $\eta$. $\ $ In contrast, the stability notion we are concerned with is defined as follows: a system is stable within a region of attraction $\mathcal{R}$ (which includes the origin) if $\forall x_0 \in \mathcal{R}$, $\lim_{t\rightarrow\infty} x_t = 0$, where $x_{t+1} = f(x_t)$ (we omit the control input here to simplify discussion). At a high level, we require convergence to the origin from any starting point in the region of attraction, which we also aim to make as large as possible. $\ $ 3. ***Comment:*** *This method leverages a mixed-integer linear programming approach, which is known to be NP-hard.* ***Response:*** Tight verification is typically NP-hard, so the main avenues for advances in scalability are either in the direction of relaxation (which loses tightness) or taking advantage of problem structure. MILP is an approach that takes advantage of any linearity structure in the problem, and MILP solvers can indeed scale well, although they, too, have their limits. While clearly much research remains in further scaling the learning of provably stable control, we note that our approach is an order-of-magnitude advance in efficacy/scalability over prior art for the more challenging problems. --- Rebuttal Comment 1.1: Comment: Dear Reviewer ruNW. Thank you again for your review.
As the author-reviewer discussion phase is closing relatively soon, we hope you will be able to take a look at our detailed responses, which address all three of your questions and concerns. In particular, we would be very happy to field any additional questions that you may have.
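The distinction the response draws between UUB (trajectories stay within a bound) and Lyapunov stability (trajectories converge to the origin) can be seen on two toy 1-D systems. This is an illustrative sketch of my own, not taken from either paper:

```python
def step_uub(x):
    # Toy 1-D system that is ultimately bounded but NOT stable at the
    # origin: trajectories settle near the fixed point 0.6, not 0.
    return 0.5 * x + 0.3

def step_stable(x):
    # Toy 1-D system whose trajectories converge to the origin.
    return 0.5 * x

def simulate(step, x0, t=100):
    # Roll the discrete-time dynamics forward t steps.
    for _ in range(t):
        x0 = step(x0)
    return x0

print(round(simulate(step_uub, 2.0), 3))    # ~0.6: bounded, but not at 0
print(round(simulate(step_stable, 2.0), 3)) # ~0.0: converges to the origin
```

Both trajectories end up inside a small bounded set, so both systems could satisfy a UUB-style guarantee; only the second satisfies the convergence-to-origin property the paper verifies.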
Summary: This paper extends the neural Lyapunov control method to discrete-time dynamical systems and proposes two tricks to quickly find counterexamples to accelerate the training process. The proposed method is tested on several standard tasks in comparison to SOTA methods. Strengths: [1] The proposed mixed-integer LP and PGD methods can efficiently find counterexamples during the training process. [2] The proposed method can find the largest ROA compared to existing methods. Weaknesses: [1] The extension of neural Lyapunov control from continuous-time dynamical systems to discrete-time dynamical systems contributes little to the existing neural control community because the Lyapunov theories for continuous/discrete systems are essentially the same. [2] The current method has a huge computational cost when high-dimensional tasks are taken into account. [3] Lemma 3.1 is NOT mathematically correct. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: [1] Does the proposed method scale to high-dimensional systems such as controlling molecular dynamics? [2] The neural Lyapunov method for unknown systems (UNL) focuses on realistic model-free settings, while the proposed method relies on fully known dynamics; is the comparison in terms of size of ROA fair? [3] The problematic Lemma 3.1 cannot guarantee the soundness of the proposed method. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: The authors only test the proposed method on toy models; they do not show that the current method is applicable in real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments. We respond to these below. 1. ***Comment:*** *The extension of neural Lyapunov control from continuous-time dynamical systems to discrete-time dynamical systems has little contribution to the existing neural control community because the Lyapunov theories for continuous/discrete system are essentially the same.* ***Response:*** As we highlight in our response to Reviewer qMzc (in particular, our response to Comment 1), there is a very consequential difference in the Lyapunov conditions between continuous-time and discrete-time systems. It is this difference that enables us to exploit the structure of the condition in discrete-time systems to achieve the substantial increase in efficacy that we exhibit in the experiments. We also wish to emphasize that stability for continuous-time systems does not imply stability for discrete-time systems, nor is the converse true. For example, in our inverted pendulum experiment, the continuous-time controller and Lyapunov function generated by the NLC method fail the discrete-time Lyapunov condition. Moreover, as neural network control takes non-negligible computational time in a real robotic system, the controller is effectively discrete time, and stability guarantees for continuous-time systems would in general be unsound in such settings. $\ $ 2. ***Comment:*** *The current method has huge computational cost for the case where the high-dimensional tasks are taken into account. Does the proposed method scale to the high-dimensional systems such as controlling the molecular dynamics?* ***Response:*** Today, no method exists that can work on systems of such scale with general nonlinear dynamics and provable stability guarantees. As our experiments demonstrate, our approach is an order-of-magnitude improvement over the state of the art in higher-dimensional settings, but clearly there is much research that remains to further improve scalability.
The key bottleneck in this setting remains sound verification of stability. $\ $ 3. ***Comment:*** *The neural Lyapunov method for unknown system (UNL) focuses on the realistic model-free settings, while the proposed method relies on the fully known dynamics, is the comparison in terms of size of ROA fair?* ***Response:*** We appreciate this point. We wish to clarify that the comparison is not intended to make claims about efficacy of UNL, which is indeed targeted at a far more challenging problem; UNL simply serves as another state of the art baseline. Not including it would have exposed us to a criticism that UNL is a more recent approach and could be a more competitive baseline than others (which, as we show, is not the case). $\ $ 4. ***Comment:*** *The authors only test the proposed method in toy models; however, they do not show the current method can be applicable in the real-world scenarios.* ***Response:*** We observe that both of our baseline papers, UNL and NLC, utilize such control dynamics. Indeed, the benchmarks we use are common within the control community. Our aim is for our work to serve as a stepping stone towards practical application in real-world scenarios. --- Rebuttal Comment 1.1: Comment: Thank you very much for the response. I will leave the scores as is. --- Reply to Comment 1.1.1: Comment: Thank you for reading our response. Would you be able to clarify why you did not find our responses convincing? We had addressed all of the weaknesses, questions, and limitations that the reviewer had raised. Perhaps the most compelling evidence that our paper significantly advances the state of the art comes from our experimental evidence, in comparison with prior approaches. We explain, both in the paper, and in responses to other reviewers, what technical advances enabled the considerable improvements we observe in the experiments. 
In particular, there may be a misunderstanding here: our key observation is that Lyapunov conditions for discrete-time systems possess important structure that we take advantage of.
Summary: This paper studies learning stabilizing controllers with neural Lyapunov functions in discrete-time nonlinear systems. Previous works on neural Lyapunov control focused on continuous-time systems, and this is the first work that studies discrete-time systems. The outline of the proposed approach is similar to that of prior works on continuous-time systems, while the authors propose several key novel techniques for each step: they use only ReLU activations in the neural Lyapunov functions so that the constraints can be verified by mixed-integer programming, and they also propose a heuristic way to generate counterexamples efficiently via projected gradient descent. The simulation results show that the proposed method can find larger ROAs and be implemented efficiently in several examples. Strengths: Learning neural Lyapunov control in discrete-time systems is important for many applications. This work proposes many novel techniques to handle the difficulties that have been found in prior works. The simulation results are promising. Weaknesses: I have some minor concerns about the presentation. After reading this paper, it is not clear to me which challenges are specific to discrete-time systems. In the first part, on verifying Lyapunov constraints, the authors take their motivation from SMT solvers. However, this is also a challenge for neural Lyapunov control in continuous-time systems. I hope the authors can add a discussion about what efforts have been made to improve efficiency in continuous-time systems and why we cannot directly apply them here. And for the proposed ReLU network plus mixed-integer programming, does this method also apply to continuous-time systems? Or is it limited to discrete-time systems because of some special properties? There is a similar question for the proposed heuristic for generating counterexamples. Is this heuristic limited to discrete-time systems?
If not, can we replace the counterexample generation step in existing algorithms for continuous-time systems and see if the performance improves? Another minor concern I have with the PGD heuristic is that it might fail to find a counterexample because gradient-based search can get stuck in local minima. Thus, in the final step when the algorithm claims success, shall we use a less efficient way to verify that the Lyapunov conditions are actually satisfied? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see my comments in the previous section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I didn't see a discussion of limitations or future directions in my read. But I don't think there is any potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful and detailed comments and questions, to which we respond in detail below. 1. ***Comment:*** *It is not clear to me which challenges are specific to the discrete-time system.* ***Response:*** There is a consequential distinction in the definition of Lyapunov functions between continuous- and discrete-time settings. In continuous-time settings, the key condition is $\forall x \in D; \nabla_{f_u} V(x) \leq 0$, where $\nabla_{f_u} V(x)$ is the Lie derivative of the function $V$ w.r.t. the vector field $f_u$ at point $x$. In discrete-time settings, it is: $\forall x \in D; V(f(x)) - V(x) \leq 0$. If $V$ is a neural network with ReLU activations, it is not continuously differentiable, which makes it non-obvious how we can construct an MILP to verify the condition on the Lie derivative uniformly over the state space, as needed in continuous-time systems. However, this issue does not come up for the Lyapunov condition in discrete-time systems. An alternative could be to replace ReLU units with continuously differentiable neurons, but these must then be approximated in an MILP formulation using piecewise linear functions, which is a major challenge to scalability and efficacy, as it introduces an approximation gap that makes the Lyapunov condition even more difficult to achieve as we get closer to the origin. Another alternative is to use dReal for verification, but as we demonstrate above (see the data in our response to Reviewer NZiv, Comment 2), MILP is in fact a critical ingredient in enabling the high efficacy of the proposed approach. $\ $ 2. ***Comment:*** *In the first part of verifying Lyapunov constraints, the authors take the motivation from SMT solvers. However, this is also a challenge for neural Lyapunov control in continuous-time systems.
I hope the authors can add a discussion about what efforts have been made to improve efficiencies in the continuous-time systems and why we cannot directly apply them here.* ***Response:*** Thank you for this suggestion! We note that our baselines already include state of the art approaches for continuous-time neural Lyapunov control; these still make use of dReal, and no one has proposed effective heuristic techniques for improving verification during training for continuous-time neural Lyapunov control. Indeed, the conceptual advances we make, such as heuristic verification techniques during training, could in principle be considered also in continuous-time control settings. However, it does not seem evident how to develop MILP-based approaches in that setting (see our response above), and dReal remains a significant bottleneck. $\ $ 3. ***Comment:*** And for the proposed ReLU network plus mixed-integer programming, does this method also apply to continuous-time systems? Or is it limited to discrete-time systems because of some special properties? ***Response:*** As we discuss in our response to Comment 1 above, it is not clear how to adopt our MILP approach to verify Lyapunov conditions in continuous-time systems. $\ $ 4. ***Comment:*** There is a similar question for proposed heuristic for generating counterexamples. Is this heuristic limited to discrete-time system? If not, can we replace the counterexample generation step in existing algorithms for continuous-time system and see if the performance improves? ***Response:*** We believe that the proposed heuristic counterexample generation approach would indeed extend to continuous-time systems. However, we wish to emphasize that the key bottleneck for continuous-time systems remains the scalability of dReal. 
For instance, even in verifying LQR solutions (which are easy to compute), dReal was unable to complete verification within the specified time frame for both the cartpole (2 hours) and PVTOL (24 hours) control systems. This implies that even if we were to apply the suggested heuristic to continuous-time systems, dReal would still encounter challenges in cases like cartpole and PVTOL (our final step for stopping the algorithm is to pass the verification). It is ultimately the combination of the use of heuristic counterexample generation with MILP-based verification that enables us to make significant progress. $\ $ 5. ***Comment:*** *Another minor concern I have with the PGD heuristic is that it might fail to find a counterexample because gradient-based search can get stuck in local minimums. Thus, in the final step when the algorithm claims success, shall we use a less efficient way to verify the Lyapunov conditions are actually satisfied?* ***Response:*** We would like to clarify that we achieve provable stability precisely by resorting to full MILP-based verification whenever the PGD heuristic can no longer identify counterexamples. Moreover, the MILP-based verifier can generate a counterexample, which then generates additional training iterations (e.g., often PGD now again generates counterexamples once this one is added to the training dataset and can be used to bootstrap the heuristic). Consequently, there are typically several times during training that we have to call the MILP-based verifier. We only claim success when training returns with MILP proving that stability conditions hold. The soundness of our approach is formally asserted in Theorem 4.1, and holds by construction. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I don't have other questions.
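As a toy illustration of the discrete-time decrease condition and the PGD-style counterexample search discussed in this exchange: the linear dynamics, the fixed quadratic $V$, and all function names below are hypothetical stand-ins, not the paper's implementation (which learns ReLU networks and verifies with MILP).

```python
import numpy as np

# Hypothetical stable linear dynamics x_{t+1} = A x_t (spectral radius < 1).
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])

def f(x):
    return A @ x

def V(x):
    # Candidate Lyapunov function; a fixed quadratic here for illustration,
    # whereas the paper learns a ReLU network.
    P = np.diag([2.0, 1.0])
    return float(x @ P @ x)

def violation(x):
    # Discrete-time Lyapunov decrease condition: V(f(x)) - V(x) <= 0.
    return V(f(x)) - V(x)

def pgd_counterexample(steps=200, lr=0.1, radius=1.0, seed=0):
    """Gradient-ascent (PGD-style) search for a state violating the decrease
    condition, projected onto the box [-radius, radius]^2; numeric gradients
    are used here for brevity."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-radius, radius, size=2)
    eps = 1e-5
    for _ in range(steps):
        g = np.array([(violation(x + eps * e) - violation(x - eps * e)) / (2 * eps)
                      for e in np.eye(2)])
        x = np.clip(x + lr * g, -radius, radius)
        if violation(x) > 0:
            return x  # counterexample found; add it to the training set
    return None  # heuristic found nothing; fall back to exact verification

# For this (A, V) pair the condition holds everywhere, so the search fails:
assert pgd_counterexample() is None
```

When the heuristic returns `None`, the pipeline described in the rebuttal would invoke the sound (MILP-based) verifier rather than declare success.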
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents an algorithmic framework for learning neural Lyapunov control with a stabilizing control policy for discrete-time dynamic systems. It proposes to verify the stability condition with mixed-integer linear programming and speed up Lyapunov function learning with gradient-based approximation. On four benchmarks, it outperforms SOTA baselines in terms of running time, size of regions of attraction and success rate. Strengths: - The paper is well structured and nicely presented - The approach has extended the learning of Lyapunov functions to discrete-time dynamic systems, and has been shown effective on 4 benchmarks with a significant improvement. Weaknesses: See questions and limitations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The paper provides 2 options for setting the bias term in the training loss function. However, it is not crystal clear the impact of this term over the learning. An ablation study over the bias setting could help. - To better motivate the use of mixed-integer linear programming for stability verification in discrete-time systems, it would be good to show a case by comparing DITL-MILP with DITL-dReal. - Does the run time include the initialization time (for example by LQR)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - The method is built upon the assumption that both the verification function and control policy can be simply represented with MLPs with ReLU activations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful comments and suggestions. Our detailed responses are below. 1. ***Comment:*** *The paper provides 2 options for setting the bias term in the training loss function. However, it is not crystal clear the impact of this term over the learning. An ablation study over the bias setting could help.* ***Response:*** We appreciate this suggestion. In our method the bias term is treated as a hyperparameter for learning, thus the one presented in the main paper is the optimal one. Below is the ablation result for using the alternative method (i.e., without bias if the paper uses bias, or with bias when the results in the paper do not): | Environment | Runtime |ROA |Max ROA |Success Rate | | -------- | -------- | -------- | -------- |-------- | | Inverted Pendulum (without bias) | 11.1 $\pm$ 0.2 (s) |45 $\pm$ 17 |72 |100\% | | Path Tracking (RL, without bias) | 17.8 $\pm$ 0.1 (s) | 8 $\pm$ 0 |8 |100\% | | Path Tracking (LQR, without bias) | 12.1 $\pm$ 2.9 (s) |9 $\pm$ 2 |11 |100\% | | Cartpole (with bias) | >2 (hours) | N/A |N/A | 0\% | | PVTOL (with bias) | 17 $\pm$ 10 (hours) | $\sim$ 0 $\pm$ 0 |0.0001 |20\% | $\ $ 2. ***Comment***: *To better motivate the use of mixed-integer linear programming for stability verification in discrete-time systems, it would be good to show a case by comparing DITL-MILP with DITL-dReal.* ***Response:*** Thank you for the suggestion! Below are the results for DITL-dReal, in which we use our framework, but replace all the MILP components with dReal. 
As we can see, using MILP is an essential ingredient in the success of the proposed approach: | Environment | Runtime |ROA|Max ROA |Success Rate | | -------- | -------- | -------- | -------- | -------- | | Inverted Pendulum (DITL-dReal) | **6.0 $\pm$ 1.7** (s) | 57 $\pm$ 24 |75|**100\%** | | **Inverted Pendulum (DITL, main paper)** | 8.1 $\pm$ 4.7(s) | **61 $\pm$ 31** |**123**| **100\%** | | Path Tracking (RL, DITL-dReal) | 600 $\pm$ 0 (s) | 0 $\pm$ 0 |0 | 0\% | | **Path Tracking (RL, DITL, main paper)** | **14 $\pm$ 11** (s) | **9 ± 3.5** |**16**| **100\%** | | Path Tracking (LQR, DITL-dReal) | 420.9 $\pm$ 182 (s) | 4 $\pm$ 4 |11 |60\% | | **Path Tracking (LQR, DITL, main paper)** | **9.8 $\pm$ 4** (s) | **8 ± 3** |**12.5**| **100\%** | | Cartpole (DITL-dReal) | >2 (hours) | N/A |N/A | 0\% | | **Cartpole (DITL, main paper)** | **0.9 $\pm$ 0.3** (hours) | **0.021 $\pm$ 0.012** | **0.045** | **100\%** | | PVTOL (DITL-dReal) | >24 (hours) | N/A |N/A|0\% | | **PVTOL (DITL, main paper)** | **13 $\pm$ 6** (hours) | **0.011 $\pm$ 0.008** | **0.028** | **100\%** | $\ $ 3. ***Comment:*** *Does the run time include the initialization time (for example by LQR)?* ***Response:*** Run time does not include initialization time as it is negligible in most cases. The run time for LQR for all instances is less than 0.1s. One exception is the run time for RL (path tracking), which is 154.96 seconds as documented in the results section (Line 330). $\ $ 4. ***Comment:*** *The method is built upon the assumption that both the verification function and control policy can be simply represented with MLPs with ReLU activations.* ***Response:*** This is a good point. Nevertheless, we do not view this assumption as very limiting for our purposes, since NN with ReLU activation layers can approximate arbitrary continuous functions. Furthermore, policies with such structure are common in deep reinforcement learning. In any case, we will comment on this in the revision. 
--- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. The rebuttal has clearly clarified my questions and I would like to keep my current scores!
Resetting the Optimizer in Deep RL: An Empirical Study
Accept (poster)
Summary: The authors argue that Adam's internal parameters should be reset with each iteration. The authors demonstrate the effectiveness of this approach in the Atari domain. Strengths: - Results are convincing for Rainbow. - Novelty is very low, but potential impact is high: if the result generalizes, there is little reason not to use this method in every DQN-style RL algorithm. Weaknesses: - As mentioned by the authors, novelty is low compared to Bengio et al. - Results are only shown on Rainbow and do not appear to work for SAC (with reason) -- but this does raise the question of whether the method is effective for other RL methods. - There is limited insight. Can the authors show that initializing Adam's parameters with 0 is better than using the parameters from the previous iteration in a more concrete way? Such as examining the behavior of the actual values. The fact that not resetting is seemingly better at low values of $K$ suggests that not resetting can provide a reasonable initialization for the parameters of Adam. Minor - The y-axis is unlabelled in several figures. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As mentioned in weaknesses: - Does this result generalize to other methods besides Rainbow? Such as DQN or more modern deep RL methods. - Can the authors show that initializing Adam's parameters with 0 is better than using the parameters from the previous iteration in a more concrete way? Such as examining the behavior of the actual values. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's reading of our paper as well as the overall quite positive assessment. Thanks for pointing out that the paper's potential impact is high. - As mentioned by the authors, novelty is low compared to Bengio et al. Note that Bengio et al. proposed an approach to account for the staleness of the moment estimates in RL. Their approach is one that requires computing the Hessian, and so it is not the most practical approach. Also, they did not conduct any experiment on the Atari domain with various algorithms and optimizers. We did not say that our novelty is low compared to them, merely that they also identified the same issue with using momentum-based optimizers in deep RL. - Results are only shown on Rainbow and do not appear to work for SAC (with reason) -- but does raise the question if the method is effective for other RL methods. As we have shown in the original experiments, resetting is useful for 1) Rainbow with Adam, 2) Rainbow with RMSProp, 3) Rainbow Pro with Adam, and 4) Rainbow with Rectified Adam. Additionally, we added new results for 5) DQN with Adam and 6) IQN with Adam. Overall, the resetting approach has been useful in all variants of DQN-like algorithms we tested. As for approaches such as SAC, notice that in the continuous control tasks the value of $K$ used in algorithms such as SAC is pretty small (often as small as $1$). Therefore, resetting the optimizer after each outer iteration corresponds to not giving any time to the optimizer to compute the moment estimates. We think that the positive impact of resetting in all DQN-like algorithms is strong enough to warrant publication. - There is limited insight. Can the authors show that initializing Adam's parameters with 0 is better than using the parameters from the previous iteration in a more concrete way? We examined the behavior of gradient estimates and their cosine similarity relative to Adam's first moment estimates.
There is no meaningful correlation between the two, suggesting that not resetting corresponds with arbitrarily initializing Adam estimates. Please see the attached pdf for the new results. - The y-axis is unlabelled in several figures. Thanks for catching this issue. We have labeled the axis. - Does this result generalize to other methods besides Rainbow? Such as DQN or more modern deep RL methods. Yes, they indeed do, based on our old and new experiments where we add results for DQN (and IQN) per your suggestion. Please see the pdf, and we hope the reviewer takes into account the added result in their final evaluation. --- Rebuttal Comment 1.1: Comment: Thanks for the response and the additional results. At this time I don’t intend on increasing my score but continue to favor acceptance of the paper.
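The resetting strategy under discussion can be sketched in a few lines. This is a toy, self-contained Adam implementation with default hyperparameters; the class, the quadratic loss, and the loop structure are illustrative assumptions, not the authors' code.

```python
import numpy as np

class Adam:
    """Minimal bias-corrected Adam (as in Kingma & Ba), for illustration."""
    def __init__(self, dim, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.dim = dim
        self.reset()

    def reset(self):
        # The proposal under review: wipe the internal state (m, v, t)
        # at the start of every new iteration (target-network update).
        self.m = np.zeros(self.dim)
        self.v = np.zeros(self.dim)
        self.t = 0

    def step(self, theta, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        m_hat = self.m / (1 - self.b1**self.t)   # bias-corrected moments
        v_hat = self.v / (1 - self.b2**self.t)
        return theta - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Sketch of the outer loop: K inner gradient steps per target update,
# resetting the optimizer whenever the target (i.e., the objective) changes.
K = 4
opt = Adam(dim=2)
theta = np.ones(2)
for outer in range(3):
    opt.reset()                # <-- the proposed modification
    for k in range(K):
        grad = 2 * theta       # hypothetical loss ||theta||^2 for this sketch
        theta = opt.step(theta, grad)
```

This also makes the SAC discussion concrete: with $K=1$, `reset()` fires before every single step, so the moment estimates never accumulate at all.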
Summary: The paper addresses the issue of using modern optimizers, such as Adam, which maintain internal parameters that are updated over time, potentially contaminating the optimization process. To mitigate this effect, the paper proposes a simple strategy of resetting the internal parameters of the optimizer at the start of each iteration. Empirical investigations using different optimizers and the Rainbow algorithm show that this modification enhances the performance of deep reinforcement learning on the Atari benchmark. Strengths: ### Writing The authors effectively communicate their ideas and concepts, ensuring clarity and coherence throughout the paper. The logical structure and well-reasoned arguments contribute to the overall quality of the paper. The article excels in providing the reader with a clear understanding of the problem's context and significance. By effectively conveying the goals and challenges of the study, the authors enhance the reader's comprehension of the subsequent experiments. Overall, the writing is of high quality, facilitating a smooth and engaging reading experience. ### Method The paper presents an easy-to-use approach by introducing a method that is not only easy to implement, but also easy to apply, which enhances the potential adoption and practicality of the proposed approach. This user-friendly feature makes the method highly accessible and beneficial to researchers and practitioners in various fields. The used code bases and hyperparameters are provided, allowing the results to be reproduced. Weaknesses: While I appreciate the proposed method's ease of use, I believe that the authors could have conducted a more comprehensive and statistically rigorous analysis of their approach, considering its simplicity. One notable limitation of the paper is the absence of confidence interval plots and statistical analysis, which could have been derived from [1], to enhance the clarity and precision of the findings.
Incorporating these elements would have allowed readers to better understand the level of uncertainty associated with the reported results, thus bolstering the overall robustness of the study. Furthermore, the authors only rely on a single seed for the initial analysis, without providing a compelling rationale for this choice. Although using a single seed can streamline the experimental process, it diminishes the validity of the findings by disregarding potential result variations arising from multiple seeds. A more thorough explanation or a comparison of outcomes based on different seeds would have added value to the introduction, ensuring a more comprehensive analysis. I appreciate that the authors included continuous control tasks in their study; however, these tasks are not thoroughly explored. While the authors provide hypotheses to explain the unexpected results, a deeper analysis would have been expected. In line 293, the authors reference a follow-up paper on resetting approaches but fail to cite the original work [2], which states in the section "What and how to reset" that resetting the optimizer has almost no significant impact due to quick updates of the moments. This contradicts the findings of this work. This brings me to the conclusion that the paper shows promise and the authors have taken a positive direction. However, in its current form, the paper falls short of being acceptable. It is essential to include comparisons to other baselines, such as [2], to provide a more thorough understanding of the opportunities and limitations, and to gain a clearer understanding of the internal effects in order to explain the aforementioned points. ### Minor - The protocol for the random resets is not easy to understand and should be specified more clearly [1] Agarwal, Rishabh, et al. "Deep reinforcement learning at the edge of the statistical precipice." Advances in neural information processing systems 34 (2021): 29304-29320.
[2] Nikishin, Evgenii, et al. "The primacy bias in deep reinforcement learning." International Conference on Machine Learning. PMLR, 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How does this method compare to other resetting approaches in terms of effectiveness? - What is the level of statistical significance observed in the results? - Why is resetting not effective for continuous control tasks? - Are there any experiments demonstrating the impact of contamination on the tasks discussed in this paper? - What are the consequences of reducing the frequency of optimizer resets beyond K=8000? - How can an optimal value for $K$ be determined? - Which ADAM/optimizer parameters are relevant when performing resets, i.e. have an effect when reset? - Are the observed effects still present when modifying the ADAM/optimizer hyperparameters? - Do different loss functions used in various DQN versions (e.g., MSE, Huber, Quantile) exhibit similar behaviors? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors have made some effort to address the limitations; however, it is crucial for them to conduct a more comprehensive investigation into these limitations, as mentioned in the "Weakness" section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We first would like to thank the reviewer for mentioning that our paper is high quality, facilitating a smooth and engaging reading experience. We also appreciate that the reviewer provided a detailed review. - While I appreciate the proposed method's ease of use, I believe that the authors could have conducted a more comprehensive and statistically rigorous analysis of their approach, considering its simplicity We agree that we should have ensured we get statistically significant results not just for the experiment with all 55 Atari games, but also with the ablation studies. Please see the attached pdf where we have added 9 new seeds to all ablation studies. - In line 293, the authors reference a follow-up paper on resetting approaches but fail to cite the original work [2], which states in the section "What and how to reset" that resetting the optimizer has almost no significant impact due to quick updates of the moments. This contradicts the findings of this work. We actually reference the paper [2] in our paper. It is our reference [47]. We are not sure why the reviewer is assessing our work negatively given that we actually cite this paper. We are happy to add more discussion about situating our work relative to this innovative paper. To summarize, notice that the motivation of their work and the techniques used for resetting are quite different. Nikishin et al. study the primacy bias phenomenon whereby the RL agent usually overfits to the training examples it encounters early during training, and thus can lose its plasticity. This is quite orthogonal to our motivation where we argue that an RL agent is facing a sequence of optimization problems and therefore we should reset the optimizer at the beginning of each new problem. Nikishin et al., reset parts of the agent's weights to maintain plasticity, whereas we discuss resetting the optimizer to ensure we do not contaminate the gradient steps taken by the optimizer. 
Finally, Nikishin et al. conduct their experiments in the simpler continuous control setting with policy-gradient approaches where Polyak-based updates are often used for the target network. This is another significant deviation from our setting where we focus on the DQN style of algorithms where hard target-network updates are often preferred to the Polyak update. Overall, we have different motivations, techniques, and settings relative to Nikishin et al. We are happy to further clarify this in the text. - Why is resetting not effective for continuous control tasks? In continuous control tasks, the value of $K$ used in algorithms such as SAC and TD3 is pretty small (often as small as $K=1$). Therefore, resetting the optimizer after each outer iteration corresponds to not giving any time to the optimizer to compute the moment estimates. Clearly suppressing the optimizer in terms of not allowing it to compute moments is not a good idea. - Are there any experiments demonstrating the impact of contamination on the tasks discussed in this paper? Please see the attached pdf where we verified the contamination effect empirically. - What are the consequences of reducing the frequency of optimizer resets beyond $K=8000$? We are not sure what the reviewer is asking when they say ``reducing beyond $8000$''. If the question is about increasing this value beyond $8000$, in our experience, spending too much time solving each iteration is not a good idea since after each step of optimization, by standard convention, we also take a step in the environment. Therefore, it is usually better to crudely solve each iteration (smaller $K$) and move to the next one. Resetting allows us to move to the next iteration faster, which is why, with resetting, a value smaller than $K=8000$ is usually better. - How can an optimal value for $K$ be determined? We would like to remind that we did not introduce this parameter.
Therefore, we do not think it is incumbent upon us to answer this question in our paper as this is simply beyond the scope of our work. Also, not knowing the optimal $K$ is not at all reflective of a weakness of the resetting strategy, because we also need to tune $K$ when not resetting the optimizer. That said, this is a good empirical question, one that we would be super excited to focus on after publishing this paper. - Which ADAM/optimizer parameters are relevant when performing resets, i.e., have an effect when reset? We have noticed that we need to reset both Adam's first and second moment estimates to achieve the best performance. Only resetting the first or the second moment is not effective. We are happy to add this experiment to the paper. - Are the observed effects still present when modifying the ADAM/optimizer hyperparameters? The reviewer can find additional experiments in the original submission where we changed the Adam optimizer to RMSProp and Rectified Adam, and we observed that resetting is useful in those contexts too. Moreover, for the rebuttal we also added experiments on DQN and IQN, where again we showed that resetting is helpful. See the attached pdf for the new results. - Do different loss functions used in various DQN versions (e.g., MSE, Huber, Quantile) exhibit similar behaviors? Again, see our answer above. Rainbow, Rainbow Pro, IQN, and DQN are all using different loss functions. Nonetheless, resetting is helpful in all cases above. --- Rebuttal Comment 1.1: Title: Thank you for your answer Comment: For some reason I misread the citation [47] and thought you were referring to this follow-up paper: D'Oro, P., Schwarzer, M., Nikishin, E., Bacon, P. L., Bellemare, M. G., & Courville, A. (2023). Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier. In The Eleventh International Conference on Learning Representations. Please accept my apologies and disregard this comment. I also appreciate the clarification of why Nikishin et al.
come to a different conclusion about the optimizer reset, which at the same time gives some intuition about the results for the continuous control setting. > What are the consequences of reducing the frequency of optimizer resets beyond $K=8000$? This question was probably badly worded. But as guessed correctly, I meant an increase of $K$, which implies a decrease in the frequency of optimizer resets. > We would like to remind that we did not introduce this parameter. Therefore, we do not think it is incumbent upon us to answer this question in our paper as this is simply beyond the scope of our work. Also, not knowing the optimal $K$ is not at all reflective of a weakness of the resetting strategy, because we also need to tune $K$ when not resetting the optimizer. That said, this is a good empirical question, one that we would be super excited to focus on after publishing this paper. Thank you for the answer. With this question, I did not mean to imply that choosing an appropriate $K$ is a weakness of the proposed method, since, as you mentioned, it has to be done regardless. I was just interested in how the added purpose of $K$ for the optimizer resets affects the hyperparameter selection of $K$. > Are the observed effects still present when modifying the ADAM/optimizer hyperparameters? This question was more related to changing the hyperparameters of the respective optimizers than to the optimizer itself, e.g. the betas for ADAM. Granted, the default values of 0.9, 0.999 in the case of ADAM are usually the best settings. I would just be curious if you have tried other values besides the default and possibly seen a difference. Other than that, I appreciate the effort that went into the rebuttal answers as well as the additional results and seeds. They definitely further support the proposed approach. I will take this into account when discussing the final recommendation with the other reviewers and AC.
--- Reply to Comment 1.1.1: Title: Changing Adam's Hyperparameters Comment: In terms of the Adam optimizer, we used the default values of Adam because deviating from the default values could have been interpreted as our trying too hard to show the effectiveness of resetting. In particular, other than $\beta_1$ and $\beta_2$, we also did not change the default step-size value of Adam used in baseline RL algorithms to keep the comparison as grounded as possible relative to the literature. That said, we totally agree that it would be interesting to look at the effects of these hyper-parameters on the validity of our conclusions. For example, the moment estimates may become less or more stale depending on the choices of $\beta_1$ and $\beta_2$, and this may ultimately affect the gap between the resetting and non-resetting agents. However, in light of our lack of ability to edit the rebuttal pdf, it seems impossible for us to carry out this experiment, so unfortunately we can only promise to add this experiment in our final paper. We are delighted to see the reviewer's engagement with our rebuttal, as well as the reviewer stating that they will take into account the added seeds, the contamination-measurement experiment, and the IQN/DQN results in their final recommendation. However, we also worry that our average score has remained low, and that this can negatively affect our chances in the discussion. While we would appreciate it if the new considerations could also be reflected in the final score, we nevertheless really respect the reviewer's engagement with our rebuttal.
Summary: This paper questions the standard use of Adam-type optimizers in deep RL. The paper argues that solution methods in deep RL are best thought of as solving a sequence of optimization problems, and that the standard use of optimizers leads to "contamination" of the optimizer's internal parameters. The paper then proposes to reset the optimizer's internal parameters to fix this "contamination." Finally, the experiments on Atari show that resetting the optimizer's internal parameters leads to significant performance improvement. Strengths: The main strength of the paper is that the key idea of the paper, resetting optimizer parameters at the beginning of each iteration, is simple and effective. I liked the general theme of the paper, i.e., we need to understand better the tools we borrow from other fields. The paper is well-written and easy to understand, making it accessible to a wide audience. The experiments in section 4.3 are performed on 55 Atari games with ten seeds each, which might mean the results are statistically significant. Resetting seems to be beneficial for multiple optimizers like RMSprop, Adam and Rectified Adam. Weaknesses: The paper has two major weaknesses: 1. The paper claims at many points that a "contamination" effect plagues RL (for example, lines 107-109) and that many updates are wasted to unlearn the effects of the previous iteration. However, the paper does not describe what exactly this "contamination" means, and neither does it show the presence of any "contamination." All the paper shows is that there is a performance boost when we reset the optimizer's internal parameters. This performance boost is not direct evidence of contamination from iteration to iteration. However, this weakness can be easily overcome. The paper first needs to contain a definition of "contamination," maybe the authors mean that the internal parameters ($m$ and $v$) are too far away from their true values at the beginning of each iteration.
One way to measure this difference could be to measure the cosine similarity between the current value of $m$ and the true value of $m$. The true value can be measured by taking the gradient of all the samples in the buffer and taking steps using that gradient. A large difference between the true and current value of $m$ would mean contamination. The paper also suggests that this contamination is particularly large at the beginning of each iteration compared to a random time in the learning process. Again, this can be demonstrated by showing that the difference in true $m$ and current $m$ is larger at the beginning of the iteration compared to any random time in the learning process. 2. None of the results in the paper except the ones in section 4.3 are statistically significant. The paper only shows results for a single random seed. I should note that the authors are aware of this weakness (line 138). The best way to look at the current experiments in Sections 4.1 and 4.2 is that they are used to tune hyper-parameters for the experiments in Section 4.3. The authors might be limited in their computational resources, but in that case, it is better to present statistically significant results in smaller environments like MinAtar [1] than unreplicable results in a big environment. Other than these two main problems, there are a few other minor issues in the paper. 1. The update equations for Adam in Section 3 are wrong. Instead of using $m$ and $v$ for the final update, new variables $\hat{m}$ and $\hat{v}$ are used. See the original Adam paper for the correct equations. 2. Line 176 says that $K=1$ corresponds to vanilla gradient descent. But, that is not true. For $K=1$, the update is similar to the Rprop optimizer, not SGD. For $K=1$, the update only takes into account the sign of the partial derivative but not its magnitude. 3. The value of $K$ is not properly tuned. In Figure 4, the difference between $K=1000$ and $K=500$ is insignificant.
So, the optimal value of $K$ could be smaller. I suggest the authors also try smaller values of $K$, like 250, 125, etc. I like the ideas presented in the paper. However, I cannot recommend accepting the paper in its current form in light of these weaknesses. [1] Young, K., & Tian, T. (2019). Minatar: An atari-inspired testbed for thorough and reproducible reinforcement learning experiments. arXiv preprint arXiv:1903.03176. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: What would be a good definition of "contamination"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 3 good Contribution: 3 good Limitations: No confidence intervals are reported for any experiment in the paper. I recommend the authors report the 95% bootstrapped confidence intervals for their results. [2] and [3] provide good guidelines for properly reporting experimental results in deep RL. [2] Agarwal, R., Schwarzer, M., Castro, P. S., Courville, A. C., & Bellemare, M. (2021). Deep reinforcement learning at the edge of the statistical precipice. Advances in neural information processing systems, 34, 29304-29320. [3] Patterson, A., Neumann, S., White, M., & White, A. (2023). Empirical Design in Reinforcement Learning. arXiv preprint arXiv:2304.01315. EDIT: I have updated my score based on the new results provided by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our idea. In what follows, we address the particular weaknesses and questions raised. - The paper does not describe what exactly this contamination means, and neither does it show the presence of any contamination. Please take a look at the new cosine similarity experiment in the attached pdf. We see very little cosine similarity between the gradient and Adam's moment estimate immediately after moving into a new iteration. If the moment estimate were not stale, you would expect it to form a low angle with the gradient, and therefore give us high cosine similarity, but this is demonstrably not the case for all games we tested. Thanks for suggesting the cosine similarity experiment; note, however, that computing the true value of $m$ as you suggested is extremely expensive, so, inspired by your comment, we conducted a similar experiment where we measured the cosine similarity between Adam's first moment estimates and the stochastic gradient computed immediately after moving to the next iteration. - None of the results in the paper except the ones in Section 4.3 are statistically significant. That is a fair criticism, and in light of your concern, we updated all of our results to include 10 random seeds. We hope that the reviewer considers increasing their score since they stated using 1 random seed as one of the main weaknesses of our work. To be fully transparent, we were limited by compute at the time of submission and made the strategic decision to use most compute for the experiment with all 55 Atari games. - The update equations for Adam in Section 3 are wrong. You are correct, but we also want to reassure the reviewer that we know the Adam update. We simply omitted $\hat m$ and $\hat v$. This was merely a typo, which we have addressed in the main text. Thanks for your comment nonetheless. - Line 176 says that $K=1$ corresponds to vanilla gradient descent. 
You are correct, and we will fix that. - No confidence intervals are reported for any experiment in the paper. Please see the attached pdf for the updated results. We now report confidence intervals, so we hope the reviewer reconsiders their evaluation. - I suggest the authors also try smaller values of K, like 250, 125, etc. This is a good suggestion, and we are happy to add this in the final paper if accepted. For the rebuttal, we prioritized making the existing experiments statistically significant. We really hope the reviewer considers increasing their score and helps us get the paper to the finish line; we will then be happy to use our time to try values of $K$ between 500 and 1. - What would be a good definition of ``contamination''? A definition we have adopted is small cosine similarity between the first moment estimate and the gradient vector immediately after updating the target network. If the two vectors are very rarely aligned, the moment estimate is in fact just misleading the agent, and so we call this a contamination effect. --- Rebuttal Comment 1.1: Comment: I thank the authors for adding more seeds and experiments to measure contamination. The results with more seeds are largely consistent with the results in the original manuscript. The new results alleviate most of my concerns about the statistical significance of the results in the paper. The new contamination results are striking. The cosine similarity effectively reduces to zero after about 20M frames. These results seem to support the claim that there is contamination. However, they are not fully satisfactory. The authors approximate the true gradient from the first mini-batch in the next iteration. But I don't know if the first sample gradient is a good approximation of the true gradient. The true value is the average over a sample of all the data in the buffer (1M?), but the mini-batch is only 128(?). 
I realize that computing the gradient over the full buffer will be computationally expensive. Still, we can reach a middle ground by measuring the average gradient for 10,000 or 50,000 transitions in the buffer. A few questions to the authors: - Why does the cosine similarity decrease over time? And why was it exactly 1 in the beginning? If there is contamination from iteration to iteration, shouldn't the cosine similarity always be zero? Why do you think that is the case? - What do the error bars report? Is it the 95% bootstrapped confidence interval or IQM or something else? - Did you get to do more runs for Rainbow without resetting (Figure 4) for K=0? I'm very curious to see if Rainbow with K=0 and K=8000 perform the same. If that is the case, it would mean that we can effectively remove the target network from Rainbow. As a side note, the authors do not describe the hyper-parameters used for all the algorithms in the paper. The authors should add a section in the appendix describing all the hyper-parameters for all the algorithms used in the paper. In light of the authors adding more runs to make their results statistically significant and adding a measure for contamination, I have changed my score to accept the paper. --- Reply to Comment 1.1.1: Title: Answering Additional Questions Comment: We thank the reviewer for their continued engagement with our rebuttal, and for their neat suggestion about the contamination experiment, which truly strengthened our paper. - The true value is the average over a sample of all the data in the buffer (1M?), but the mini-batch is only 128(?). You are correct in saying that the gradient is noisy because it is only estimated over a minibatch of size 64; however, we think the low cosine similarity is not due to our use of stochastic gradients but rather to the fact that the optimization landscape actually changes from one iteration to the next. 
That said, to further address the reviewer's concern, we just started another experiment where we change the batch size from 64 to 2048, and will add this to the paper. However, please keep in mind that even if a much larger batch size can increase cosine similarity, in practice we do not want to use a larger batch size anyway due to 1) computational considerations and 2) the fact that in deep learning very large batch sizes actually perform poorly because they converge to sharp minimizers with a poor generalization gap (Keskar et al.). - Why does the cosine similarity decrease over time? And why was it exactly 1 in the beginning? If there is contamination from iteration to iteration, shouldn't the cosine similarity always be zero? Why do you think that is the case? This is a very good question. We think the early cosine similarity we observe is due to the fact that the network is initialized completely randomly, and so the optimizer roughly moves the weights in the same direction regardless. However, this is a very quick and transient phase, and the cosine similarity quickly vanishes to zero. - What do the error bars report? Correct, they are the 95\% bootstrapped confidence intervals.
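The contamination diagnostic debated in this thread, the cosine similarity between Adam's first moment estimate and a fresh gradient computed just after a target update, reduces to the following computation. The vectors below are illustrative placeholders, not quantities measured in the paper:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two gradient-like vectors (lists of floats)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0

# Contamination check from the discussion: compare Adam's running first moment
# (accumulated during the previous iteration) against the first minibatch
# gradient of the new iteration. Values near zero indicate a stale estimate.
m_prev = [0.5, -0.2, 0.1]   # illustrative moment estimate from the old iteration
g_new = [-0.1, 0.4, 0.05]   # illustrative first gradient after the target update
score = cosine_similarity(m_prev, g_new)
```

Replacing `g_new` with an average gradient over 10,000 or 50,000 buffer transitions, as the reviewer suggests, changes only how the second vector is obtained, not this computation.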
Summary: The paper studies optimization in value-based deep reinforcement learning. The key insight is that when using target networks for action-value function training, changes in the target parameters yield a change in the optimization problem the online parameters are solving. Because of that, the authors argue that preserving the adaptive optimizer statistics (e.g. of Adam) might or might not be desirable. The paper then studies the effect of resetting the optimizer state after (hard) target updates, mostly using the Rainbow algorithm on Atari games as a testbed, yielding a slight positive aggregate improvement. Strengths: The main strength of the paper is the simplicity of the contribution; the paper is well-written and easy to follow, and the method is motivated and described well. The experimental protocol is solid: it uses the full set of 55 Atari games and a standard Rainbow implementation. Weaknesses: The main weakness of the paper is the mixed empirical results. Granted, the median human-normalized performance improves from ~1.75 to ~2.25; however, per-game effects from resetting the optimizer are highly heterogeneous, yielding performance deterioration in ~14 environments. The soundness of the paper could have been higher if, at least, an explanation (supported by evidence) for the negative effects had been given. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Target network parameter updates indeed change the loss landscape that the online parameters are navigating. In addition to that, updating the replay buffer changes the distribution of inputs and hence the optimization problem for online parameters. Do you have ideas on how an optimizer could be changed to adapt to the input shifts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: “We hypothesize that this can contaminate the internal parameters of the employed optimizer in situations where the optimization landscape of the previous iterations is quite different from the current iteration.” (L9) The reviewer didn’t find an empirical verification of this assumption. Many deep RL algorithms use moving average target updates after each step instead of periodic hard updates. The authors demonstrate preliminary evidence that in soft actor-critic that uses such a practice, the optimizer resets do not improve the performance. Having said that, the reviewer appreciates the transparency about the negative results. Again, one of the limitations is that in some environments resetting the optimizer yields negative results. It implies that a better alternative could be triggering the optimizer reset using a criterion (e.g. based on a measure of the loss landscape change / by performing a lookahead and assessing whether the reset was helpful) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time spent carefully reviewing the paper and for appreciating our work. Please find below some clarification regarding your questions. - "Granted, the median human-normalized performance improves from 1.75 to 2.25, however, per-game effects from resetting the optimizer are highly heterogeneous, yielding performance deterioration in 14 environments." We ask the reviewer to consider the fact that resetting the optimizer is an extremely simple modification of the original algorithm in that it adds no new hyper-parameters or computation to the baselines. We believe that due to the simplicity of resetting and the fact that it is a very natural thing to do, the bar should not be set so high that we expect dramatic improvement on all games. In other words, the simplicity of the approach is in our view a strength and not a bug, and so the improvements should be celebrated more given this simplicity. Also, by looking at the original Rainbow paper, we observe that the positive impact due to each individual component of Rainbow is not that high, and so in light of that, we are actually positively surprised that the simple resetting idea can be so effective. We did not explore adaptive versions of resetting that could have been even more dominant relative to not resetting, but we think the fact that the simple resetting is so effective can instigate future work that can come up with even more performant resetting strategies. - In addition to that, updating the replay buffer changes the distribution of inputs and hence the optimization problem for online parameters. Do you have ideas on how an optimizer could be changed to adapt to the input shifts? This is a very good point, and we indeed agree that the online RL problem (in contrast to offline RL) is non-stationary even from one time-step to the next. 
We did not explore approaches that account for shifts in inputs as they were beyond the scope of this work. That said, the mere fact that the reviewer is, akin to us, interested in this question is a testament to the fact that our current results are insightful, thought-provoking, and would instigate further research in this direction. - ``We hypothesize that this can contaminate the internal parameters of the employed optimizer in situations where the optimization landscape of the previous iterations is quite different from the current iteration.'' (L9) The reviewer didn’t find an empirical verification of this assumption. Please take a look at the new cosine similarity experiment in the attached pdf. We see very little cosine similarity between the gradient and Adam's first moment estimate immediately after moving into a new iteration, which empirically verifies the contamination effect. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I concur that it is encouraging to see aggregate improvements from such a simple, hyper-parameter-free technique. What I was mostly pointing at was a lack of explanation for why some of the games experience worse performance from applying optimizer resets. The cosine similarity experiment is indeed interesting. However, it seems like it is unable to explain the effects of resetting the optimizer. For instance, based on Figure 8, DemonAttack is an example of a game with positive effects from opt-reset and Breakout is an example with neutral effects. For both of these games the cosine similarity between the Adam 1st moment and the gradient after the target update behaves almost identically, suggesting that low cosine similarity isn't explaining the opt-reset effects. Additionally, for a complete picture, we need a similar plot _with cosine similarity right before the target update_. Thanks for also validating opt-reset on DQN and IQN — was the aggregate score calculated on all 55 games? 
--- Reply to Comment 1.1.1: Title: Further Clarifications Comment: Thanks for carefully reading the paper and the rebuttal. The reviewer makes a good observation. We acknowledge that measuring the cosine similarity alone cannot fully predict the utility of resetting. Note that in doing this experiment, our intention was merely to show that a contamination effect does exist, but even with different amounts of contamination it is still possible for the non-resetting agent to occasionally outperform the resetting agent due to the presence of other confounding factors in deep RL. To take a step back and focus on the bigger picture, in this work we presented the view that mainstream RL algorithms are better thought of as solving a sequence of optimization problems. This stands in contrast to the view that treats RL algorithms as solving a single optimization problem. Fully adopting this view means that the most natural thing to do is to reset the RL optimizer in situations where the optimizer is moment-based. This surprisingly simple technique was somehow absent in the literature, and while we do not have all the answers about when it works best, we observe that over many settings of RL algorithms and optimizers it is quite beneficial to reset. We believe that a paper that takes a first step to present this optimization view and to propose the resetting strategy can serve the RL literature well. - Thanks for also validating opt-reset on DQN and IQN — was the aggregate score calculated on all 55 games? To answer your question, we did the new 1) DQN with Adam and 2) IQN with Adam experiments only on 12 games. These are the same 12 games from Section 4.1 used in the original submission to show the benefits of resetting on 3) Rainbow with Adam, 4) Rainbow Pro with Adam, 5) Rainbow with Rectified Adam, and 6) Rainbow with RMSProp. So taken together, we have 6 examples of resetting's improvement on the same set of 12 games. 
We chose these 12 games completely randomly when we began the project. Our experiments on the full 55 games are only on 1) Rainbow with Adam and 2) Rainbow with Rectified Adam.
Rebuttal 1: Rebuttal: We thank our reviewers for their thoughtful feedback. Despite their constructive criticism, we believe that all reviewers see positive aspects in our work. In particular, reviewer Ks5C agrees that our approach is motivated and well-described and that we ran our main experiments on all 55 Atari games with 10 seeds. Reviewer 5d1P agrees with the main message we wished to convey in this paper, namely that we need to better understand the tools we borrow from other fields, in this case deep-learning optimization. Reviewer 59Jp states that the paper presents an easy-to-use approach by introducing a method that is not only easy to implement, but also easy to apply, which enhances the potential adoption and practicality of the proposed approach, and finally Reviewer aUSe mentions that the impact of our work could be potentially high given the wide-spread use of modern optimizers in deep RL. That said, Reviewers 5d1P and 59Jp raised the issue pertaining to using only one seed in some of our experiments. We first reiterate that for our main experiment on all 55 Atari games, we used 10 random seeds, which matches or exceeds what mainstream deep RL papers use. Because of the volume of our ablation studies (already more than 1000 experiments), we were not able to use more than 1 seed in our ablations when submitting the paper. That said, we understand your concern, and agree that for the ablation studies, too, we needed to make sure that our results are statistically significant, akin to what we did with the full experiment on all 55 Atari games. To this end, we added 9 more seeds to the single seed we originally used for the ablation experiments, so in total we currently have 10 seeds for our ablation and main experiments. We updated Figures 1, 2, and 3 in the paper and we added error bars as requested by two of the reviewers. Please see the attached pdf for updated results. 
We are happy to add more seeds to all experiments in the final camera-ready version of the paper. We are under the impression that reviewers 5d1P and 59Jp overall liked the ideas presented in this work, but gave us a low score in large part due to the seed issue, so we really hope that they reconsider their score in light of now using 10 seeds in all experiments. Moreover, reviewers 5d1P and 59Jp asked us to elaborate further on the contamination effect that we claim to have plagued modern optimizers in deep RL. To recap, we showed that TD/Rainbow/DQN/etc could be thought of as RL algorithms that solve a sequence of optimization problems: $\theta^{t+1}\leftarrow \arg\min_{w} H(\theta^t,w)\ ,$ where we use Adam (without resetting) to approximately solve all iterations. Here, by contamination we mean that the Adam moment estimates computed in the previous iterations $(t-1, t-2, ...)$ correlate weakly (if at all) with the gradient of the objective function at iteration $t$. To support our claim, we conducted a new experiment where we measured the cosine similarity between Adam's first moment and the gradient we compute immediately after we update the target network. If there is no contamination effect then surely the cosine similarity (which is between -1 and +1) should usually be positive. However, as shown in the attached pdf, we see that while it is true that very early in training the cosine similarity is positive (meaning that moment estimates are not stale), the cosine similarity quickly converges to around zero, indicating that there is no meaningful similarity between Adam's moment estimate and the gradient. This observation manifests itself in all games we tested. This is what we refer to as the contamination effect: without resetting, at each iteration we are in effect poorly initializing Adam, and so the optimizer needs to waste some gradient steps just to unlearn this poor initialization. 
Resetting is a simple and effective strategy to hedge against this contamination. Finally, Reviewers aUSe and 59Jp asked us to look at the effects of resetting on other standard deep RL algorithms, with DQN as their primary suggestion. In light of their feedback, we ran DQN and IQN with and without resetting and repeated the experiment presented in Figure 4 of the paper. In this case, we benchmarked DQN and IQN for different values of $K$ with and without resetting Adam. We see that a similar result manifests itself, with our resetting algorithms performing better than the standard non-resetting algorithms. With these new results added, overall we have shown that resetting enhances the performance of 1) Rainbow with Adam, 2) IQN with Adam, 3) DQN with Adam, 4) Rainbow Pro with Adam, 5) Rainbow with RMSProp, and 6) Rainbow with Rectified Adam. We really believe that this is a comprehensive set of results, and that it is reasonable to expect that this phenomenon would generalize across other successors of DQN and Rainbow. As mentioned by reviewer aUSe, this strengthens the significance of our work, so we hope Reviewers aUSe and 59Jp consider these additional experiments in their final evaluation of our paper. Please take a look at the attached pdf for 1) experiments with added seeds, 2) experiments measuring the contamination effect, and 3) experiments with DQN and IQN. Pdf: /pdf/eb212c34dca983541d4f86d7ac91a4b4729d61be.pdf
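Schematically, the resetting strategy defended throughout this rebuttal amounts to re-initializing the optimizer's moment buffers whenever the target network is hard-updated. The sketch below uses scalar buffers and a stubbed gradient; the parameter name `K` (the target-update period) and the loop structure are illustrative, not the authors' code:

```python
def train_loop(num_steps, K, reset_optimizer=True):
    """Schematic sketch of optimizer resetting at hard target updates.
    The network and gradient details are stubbed out; only the reset
    logic mirrors the technique discussed in the rebuttal."""
    m, v = 0.0, 0.0                  # Adam moment buffers (scalars for brevity)
    resets = 0
    for step in range(1, num_steps + 1):
        grad = 1.0                   # placeholder for a minibatch TD gradient
        m = 0.9 * m + 0.1 * grad     # accumulate Adam's first moment
        v = 0.999 * v + 0.001 * grad ** 2
        if step % K == 0:            # hard target-network update every K steps;
            # the loss landscape changes here (target sync itself omitted)
            if reset_optimizer:
                m, v = 0.0, 0.0      # discard the now-stale moment estimates
                resets += 1
    return m, v, resets
```

The whole intervention is the two-line reset at the target update; it adds no new hyper-parameters, which is the simplicity argument made to reviewer Ks5C above.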
NeurIPS_2023_submissions_huggingface
2023
Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness
Accept (poster)
Summary: This work aims to provide guidance for contextually-informed algorithmic fairness criteria selection. The work motivates counterfactual fairness from its legal and ethical foundations. The primary contribution of the work is to detail a set of cases (data generating processes and selection mechanisms) where counterfactually fair predictors are accuracy-optimal in the (unselected) target distribution and consistent with specific correlational fairness criteria in the observed data. Strengths: * The work is generally well-written and clear. * The arguments and key results of this work provide a new perspective on cases where counterfactual fairness is consistent with predicting outcomes under particular selection mechanisms and data generating processes. While the results regarding the mapping between counterfactual fairness and observational fairness criteria largely follow from prior work on counterfactual invariance, this work does expand upon the prior work meaningfully. Weaknesses: Major: * There are no experiments with either simulated or real-world data. At a minimum, it would be beneficial to include simulated experiments that verify the basic claims regarding the optimality of the counterfactually-fair predictors and the relationships between counterfactual fairness and the observational fairness criteria under different data generating processes. * The restriction of the scope of the work to the purely spurious setting (line 201) limits the generalizability of the claims and this is largely not acknowledged. The work makes the claim that the purely spurious setting where association between group attributes and outcomes are due to selection is reasonable to cover the space of problems where fairness is of interest. I would argue that this is a serious limitation of the work because of the aim to provide guidance that broadly applies to the space of settings where it is of interest to evaluate fairness and bias considerations. 
* For an example where the spurious setting may not hold, consider that in medical applications, structural racism and social determinants of health contribute to differences in observed health status across racial groups in addition to the differences in access to care that may be considered as selection effects. This can induce a difference in the conditional Y | X across subgroups (effectively an association between A and Y not mediated by X) that arises because race serves as a proxy for exposure to structural racism and social determinants. To the best of my understanding, this setting does not map cleanly onto the type of selection graphs considered in this work. Minor: * There is inconsistency in which variable is used for the ground truth outcome versus the noisy outcome with measurement error. In lines 112-113, Y-tilde is considered the noisy label and Y the ground truth. Elsewhere in the paper, Y-tilde is considered the ground truth. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * To what extent should the statements about the observable implications of counterfactual fairness be considered if and only if statements versus unidirectional statements? For instance, under Corollary 2.1.1 does demographic parity imply counterfactual fairness under the causal measurement error graph? * Could you please add additional detail to prove the result regarding predictive parity? I generally follow the claim that binary calibration for binary classifiers is analogous to score calibration for risk scores, but it is not clear how conditioning on Y=1 yields predictive parity (which is confusingly referred to as D=1 in the supplement). It is also unclear whether the argument assumes that score calibration implies predictive parity. In general, this is not true (see Chouldechova 2017 Arxiv version, discussion immediately following definition 2.2 [1]). 1. Chouldechova, Alexandra. 
"Fair prediction with disparate impact: A study of bias in recidivism prediction instruments." arXiv preprint arXiv:1703.00056 (2017). Minor suggestions: * Introduce the notion that a counterfactually fair predictor depends only on X_{A}^{\perp} in the main text rather than in the supplementary material * Replace “correlational” with “observational” * Replace “persecuted” with “prosecuted” Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: It would be beneficial for this work to expand upon the potential limits of the generalizability of its claims, especially due to the assumption of the purely spurious setting. Relatedly, it is not clear how well the claims are applicable outside of the set of graphs and selection mechanisms considered in this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the close read. We appreciate your time, and we believe that we have addressed your two major concerns by adding an experimental evaluation and clarifying the limitations (and purpose) of working in the purely spurious setting. > There are no experiments with either simulated or real-world data. At a minimum, it would be beneficial to include simulated experiments that verify the basic claims regarding the optimality of the counterfactually-fair predictors and the relationships between counterfactual fairness and the observational fairness criteria under different data generating processes. As suggested, we have added an experiment to verify the main claims of the paper. See the global response for details. We find that on a (semi-synthetic modification of) the Adult income dataset: 1. The counterfactually fair predictor trained on biased domain data has good performance when deployed on the unbiased target distribution (Theorem 1). 2. The counterfactually fair predictor satisfies the observational fairness criteria predicted by the underlying causal structure (Corollary 2.1). > The restriction of the scope of the work to the purely spurious setting (line 201) limits the generalizability of the claims and this is largely not acknowledged. The work makes the claim that the purely spurious setting where association between group attributes and outcomes are due to selection is reasonable to cover the space of problems where fairness is of interest. I would argue that this is a serious limitation of the work because of the aim to provide guidance that broadly applies to the space of settings where it is of interest to evaluate fairness and bias considerations. > For an example where the spurious setting may not hold ... this setting does not map cleanly on to the type of selection graphs considered in this work. 
> It would be beneficial for this work to expand upon the potential limits of the generalizability of its claims, especially due to the assumption of the purely spurious setting. Relatedly, it is not clear how well the claims are applicable outside of the set of graphs and selection mechanisms considered in this work. We see that this is an important limitation and appreciate the reviewer’s critique. We discuss this in the global response, but to make the purely spurious assumption more prominent in the paper, we have added it to the Introduction and to the Discussion (the final section). We have also elaborated on it on Page 6 to more fairly and thoroughly present the pros and cons when we formally introduce it. As discussed in the global response, assuming pure spuriousness substantially simplifies the results (and thus also the presentation), allowing us to thoroughly develop the connection from counterfactual fairness to out-of-domain robustness and observational fairness metrics. However, the purely spurious assumption is likely not necessary to make such a connection in general. We note that extending the results to more complicated causal scenarios and fairness definitions is an important direction for future work. (However, we think that the extra level of technical complexity would severely obfuscate the core point in the present paper). > There is inconsistency in which variable is used to the ground truth outcome versus the noisy outcome with measurement error. In lines 112-113, Y-tilde is considered the noisy label and Y the ground truth. Elsewhere in the paper, Y-tilde is considered the ground truth. We have made $\tilde{Y}$ the ground truth throughout. > To what extent should the statements about the observable implications of counterfactual fairness be considered if and only if statements versus unidirectional statements? 
For instance, under Corollary 2.1.1 does demographic parity imply counterfactual fairness under the causal measurement error graph? Observational data can strongly constrain causal structure but usually cannot uniquely identify it. Accordingly, we do not expect the results to be bidirectional in general. However, with the purely spurious assumption and faithfulness, the relationships seem bidirectional. > Could you please add additional detail to prove the result regarding predictive parity? ... It is also unclear whether the argument assumes that score calibration implies predictive parity… This is an error. Statement 2.2.6 should say “with positive outcome label (i.e., $D=1$)” instead of “with positive outcome (i.e., $Y=1$)”. We sincerely apologize for the mistake and the confusion. Please let us know if you would still like additional detail, but hopefully that clarifies the statement and proof. Regarding score calibration, if a score-calibrated predictor were converted into a binary predictor by using a score threshold, it would not necessarily imply predictive parity—as you mention is explained in Chouldechova (2017). The corrected version of Statement 2.2.6 does not mention score calibration, nor does its proof (lines 601-602), so we currently don’t believe the argument assumes this. Note that the proof only refers to binary calibration, which does not have the issue that Chouldechova raises in which $S | R = r$ (i.e., $S | A = a$) potentially differs across groups in ways that result in PPV imbalance. > Minor suggestions: > * Introduce the notion that a counterfactually fair predictor depends only on X_{A}^{\perp} in the main text rather than in the > supplementary material > * Replace “correlational” with “observational” > * Replace “persecuted” with “prosecuted” Done. --- Rebuttal 2: Title: Follow-up with Reviewer c2Rt Comment: Dear Reviewer c2Rt, Thank you again for the thorough review. 
We believe we have addressed each of your concerns, particularly by adding an experiment to validate the results and more thoroughly and prominently documenting the pros and cons of the purely spurious assumption. Could you please check our response and let us know if you have further questions? All the best, Authors --- Rebuttal Comment 2.1: Comment: Thank you for the thorough updates to the paper. I appreciate the effort that went into the additional experimental results and the response. My concerns have been addressed. I have updated my score from a 3 (reject) to a 6 (weak accept).
Summary: The authors show that, under various causal graphs involving protected attributes, features, and labels, a counterfactually fair classifier achieves a specific association-based fairness metric. From this, they suggest a pipeline to detect counterfactual fairness. Strengths: The authors do a really good job making the point that counterfactual fairness, which is what is desired in a lot of theory, can be evaluated using more standard associational fairness metrics when assuming (or discovering) particular causal graphs. The math seems to be correct. The findings are useful to push causal fairness further into practice. Weaknesses: The organization of the paper can be improved. For example, a lot of the beginning of Section 4 would make more sense in the introduction. Figure 2 and its associated introductory text would make more sense in Section 5. I don't think the impossibility theorems need to be emphasized so much. They're cute, but don't really give us anything practical, which this paper tries to do. Figure 2: please put the corresponding correlational fairness metrics in the main figure rather than only mentioning them in the caption. Line 48: I don't think Ref [3] does data augmentation. However, this reference does: https://doi.org/10.1145/3375627.3375865. Figure 1 mentions data generation and regularization, but they don't come up in the text. I think bringing up examples of each would be useful to the reader, e.g. https://proceedings.mlr.press/v161/ahuja21a.html and https://arxiv.org/abs/2002.10774. It would be useful to discuss other theoretical work on the fairness-accuracy tradeoff, especially papers that relate this discussion to the presence or absence of different kinds of bias, e.g. https://proceedings.mlr.press/v119/dutta20a.html. I understand the difficulty of empirical validation when dealing with counterfactuals, but the paper would be a lot stronger if the pipeline of Figure 1 were carried out to show the steps tangibly. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you try to relate the evaporation of the tradeoff that you show to the argument made by Dutta et al. (reference given in the Weaknesses section of the review) and point out similarities and differences. It would also be good to relate the decomposition of X on line 268 to Dutta's other work https://doi.org/10.1609/aaai.v34i04.5794. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It would be good if the authors could be more explicit on how they obtain/discover the causal graph because it is quite important. That limitation hasn't been discussed enough. As part of that discussion, they may wish to mention causal fairness methods that do not require the entire causal graph a priori and are able to get pieces of it through a group-testing based approach which require a sublinear number of conditional independence tests. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the support and feedback on framing and content, which we believe have improved the paper. > The organization of the paper can be improved. For example, a lot of the beginning of Section 4 would make more sense in the introduction. Figure 2 and its associated introductory text would make more sense in Section 5. Per your suggestion, we have moved parts of Section 4 into the introduction. For the second move, we have moved discussion of the three scenarios (5.1, 5.2, 5.3) to the introduction in response to Reviewer U9Dm’s suggestion to “Help your reader by making these connections right away,” which we think addresses your concern. We would also prefer, if possible, to keep Figure 2 early in the paper because it is so illustrative. We have also done some other reorganization as a result of other comments. > I don't think the impossibility theorems need to be emphasized so much. They're cute, but don't really give us anything practical, which this paper tries to do. The impossibility theorems motivate the need to make trade-offs between fairness metrics, but we probably did overemphasize them, so we have now reduced the emphasis. We have also added a qualification here with reference to a 2023 FAccT paper that critiques their practical relevance [1] alongside the already-cited work that suggests a method to improve fairness across the different metrics “with minimal model performance reduction” [2]. We think this improves the motivation and contextualization of our results. [1] Bell, Bynum, Drushchak, Zakharchenko, Rosenblatt, and Stoyanovich. 2023. “The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice.” FAccT. doi:10.1145/3593013.3594007. [2] Hsu, Mazumder, Nandy, and Basu. 2022. “Pushing the Limits of Fairness Impossibility: Who’s the Fairest of Them All?” NeurIPS. doi:10.48550/arXiv.2208.12606. 
> Figure 2: please put the corresponding correlational fairness metrics in the main figure rather than only mentioning them in the caption. Done. > Line 48: I don't think Ref [3] does data augmentation. However, this reference does: [link]. Thank you for noticing this mistake. We have replaced it with your citation, Sharma et al. (2020). > Figure 1 mentions data generation and regularization, but they don't come up in the text. I think bringing up examples of each would be useful to the reader, e.g. [link] and [link]. We have added brief mentions and citations to both Ahuja et al. (2021) and Di Stefano et al. (2020). > It would be useful to discuss other theoretical work on the fairness-accuracy tradeoff, especially papers that relate this discussion to the presence or absence of different kinds of bias, e.g. [link]. /// Can you try to relate the evaporation of the tradeoff that you show to the argument made by Dutta et al. (reference given in the Weaknesses section of the review) and point out similarities and differences. Thank you for the pointer; we’ve added a reference in the paper. The analysis of mismatched distributions by Dutta and colleagues provides an interesting complement to our own critique of the purported fairness-accuracy tradeoff. They ground theirs in the difference in Chernoff information between groups, and we base ours on the performance of a counterfactually fair predictor in an unbiased target distribution. The two views are complementary in that one is grounded in information theory and the other in causal structure. We think that the results do not have a crisp mathematical connection in general. > I understand the difficulty of empirical validation when dealing with counterfactuals, but the paper would be a lot stronger if the pipeline of Figure 1 were carried out to show the steps tangibly. We have added an experiment with the Adult dataset illustrating the main claims of the paper. 
This stops short of illustrating all the steps (there’s limited space), but we think it helps clarify the empirical implications. > It would also be good to relate the decomposition of X on line 268 to Dutta's other work [link]. Dutta and colleagues provide an information-theoretic decomposition that usefully separates discrimination into critical features that one wants to preserve and non-critical features that one is okay with being removed. Our decomposition is instead grounded in a causal framework, which separates the training data $X$ into the component not causally affected by $A$ and the component not causally affected by $Y$. We have added a brief discussion and citation to Dutta’s approach, which could be explored in future work. > It would be good if the authors could be more explicit on how they obtain/discover the causal graph … a group-testing based approach which require a sublinear number of conditional independence tests. Our technical results apply regardless of how the causal graph is obtained/discovered. Generally, we are envisioning that the analyst will use their understanding of the real-world situation under consideration to inform the causal graph they assume. We agree the need to (partially) articulate the causal graph is a limitation. It is a good point that the causal graph itself can be (partially) learned from data, and we have added a note that this is an interesting direction for future work, as well as a citation to Galhotra et al. (2022) [1], which develops a group-testing approach with sublinear conditional independence tests as you suggest. [1] Galhotra, Shanmugam, Sattigeri, and Varshney. 2022. “Causal Feature Selection for Algorithmic Fairness.” SIGMOD/PODS. doi:10.1145/3514221.3517909. --- Rebuttal 2: Title: Follow-up with Reviewer buKj Comment: Dear Reviewer buKj, We very much appreciate your constructive review and support. 
We believe that we have implemented each of your suggestions, which have in particular improved the contextualization of our results in the literature and, with the addition of experimental data, have provided some illustration of how our contribution can be applied. Could you please check our response and let us know if you have further questions? All the best, Authors --- Rebuttal Comment 2.1: Comment: The authors have done a ton of work, and have been very conscientious in responding to all the reviewers. I am satisfied that they have taken my suggestions to heart and worked on getting to a much better paper. I have raised my score.
Summary: This work investigated two important counterfactual fairness problems: (1) When will the counterfactually fair predictor also be the risk minimizer on the target distribution? (2) How do the classic statistical fairness metrics correspond to counterfactual fairness? In response to these questions, this work contextualized the problem in three causal DAGs where selection bias may exist. To address the first question, Theorem 1 shows under certain conditions that the counterfactually fair predictor is in fact accuracy-optimal. To address the second question, Theorem 2 and Corollary 2.2 characterized counterfactual fairness using the knowledge of the underlying causal context. Answering these two questions motivates a pipeline to achieve counterfactual fairness without performance degradation. Strengths: This work bridges the gap between counterfactual fairness and statistical fairness metrics in the presence of selection biases. The studied problem is novel, and the theoretical results are of high quality. The motivation and presentation of the whole paper are clear and can provide high-level insights for fairness researchers and practitioners. Weaknesses: I did not fully check the proof details but only read the sketch. It looks like the statements heavily depend on the three causal DAGs in Figure 2. For the theoretical results, the assumptions for each statement could be made clearer. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. In Figure 2(b), $X^\perp_A$ is the effect of outcome $Y$. What is the implication of such causal structure in the real world? 2. In Figure 2(c), the selection variable $S$ is the causal effect of protected class $A$ and observed outcome $Y$. What if the selection $S$ is modeled as the cause of the outcome $Y$? Do the statements still hold? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors did not address the limitations of how the counterfactual fairness can be evaluated in the real world. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the support, corrections, and provocative questions. > I did not fully check the proof details but only read the sketch. It looks like the statements heavily depend on the three causal DAGs in Figure 2. For the theoretical results, the assumptions for each statement can be made more clearly. Corollary 2.1 is exclusively about those three causal DAGs, but Theorem 1 and Theorem 2 neither reference nor depend on them. You make a good point that the scope of Corollary 2.1 is not very clear in the statement itself because Figure 2 is only mentioned on Line 245 in the preceding paragraph, so we have added a reference to Figure 2 in the statements to clarify that these statements refer to Figure 2a, Figure 2b, and Figure 2c, respectively. > Q1. In Figure 2(b), X \perp A is the effect of outcome Y. What is the implication of such causal structure in the real world? This causal structure is referred to as “anti-causal” and is the predominant setting in many machine learning applications. For example, in health care data, we’re often interested in assessing whether or not someone has a disease based on measurements of effects of that disease—e.g., $X$ may be a chest X-ray, and $Y$ some lung disease. Here, $Y$ causes $X$. (Ultimately, the true labels used for the training data may be produced by more expensive follow-up measurements or waiting for the disease to progress.) > Q2. In Figure 2(c), the selection variable S is the causal effect of protected class A and observed outcome Y. What if the selection S is modeled as the cause of the outcome Y? Do the statements still hold? We assume the reviewer means “and X” rather than “and observed outcome Y” because Figure 2(c) shows $S$ being affected by $X$, not $Y$. If so, note that $S$ is selection into the dataset. In our example, it is determined by the loan repayment predictors $X$ and by whether the person repaid ($Y$). 
Because the dataset contains historical data, it doesn’t seem to make logical sense in the real world to say that selection causes loan repayment. However, ignoring the real-world meaning of an effect of $S$ on $Y$, the mathematical answer is that if $S$ caused $Y$ in Figure 2(c), then counterfactual fairness would no longer imply calibration. This is because the new unblocked path between $A$ and $Y$ that does not go through $X_A^\perp$ makes the graph (i.e., the context) fail the test for calibration in Theorem 2. Additionally, in this case, $A$ would have a true causal influence on $Y$ (i.e., not purely spurious), and therefore it would be unclear whether a counterfactual notion of fairness is appropriate. > The authors did not address the limitations of how the counterfactual fairness can be evaluated in the real world. We describe a number of limitations of how counterfactual fairness can be evaluated in the real world, as detailed in Lines 150–182, beginning, “Counterfactual fairness is widely compelling, but there are still limitations.” We focus on two broad categories of limitation: the ambiguity of counterfactuals and the identification of causal structure. To make this more prominent, we have edited the discussion section to bring up these limitations. We would also note that some other revisions have also foregrounded the limitations of generalizability and application (e.g., the “purely spurious” assumption), and we have added an experiment with the Adult income dataset, in which we simulated a new protected class and induced the three causal contexts in Corollary 2.1 and showed that a counterfactually fair predictor achieves the corresponding metric and achieves the best performance in the unbiased target distribution (Theorem 1). --- Rebuttal 2: Title: Follow-up with Reviewer 7eza Comment: Dear Reviewer 7eza, We greatly appreciate your thoughtful review and support of the paper. 
We believe we have clarified the scope of when these statements hold and covered more real-world implications (including the addition of the Adult income experiment with a novel counterfactually fair predictor) and limitations that will help readers make use of our technical results. Could you please check our response and let us know if you have further questions? All the best, Authors --- Rebuttal Comment 2.1: Title: My questions are addressed Comment: I thank the authors for their insightful response. At this moment I do not have other questions. However, I would like to hold my score and further see other reviewers' discussions. --- Reply to Comment 2.1.1: Title: Thank you Comment: Thank you for the kind words. As you said you wanted to wait for other reviewers, we just wanted to mention that there are only 48 hours left in the discussion period, and three others have now also responded to our rebuttals, so please let us know if any other questions have come up, and we will try to address them in time.
Summary: The paper explores a connection between counterfactual fairness and correlation-based fairness metrics, exhibiting causal graph settings in which counterfactual fairness implies specific correlational metrics are satisfied. The paper further connects this mapping to the accuracy-fairness tradeoff. Strengths: I think the mapping between causal/counterfactual notions of fairness and metric-based fairness is useful. The paper does a good job of integrating several literatures into a coherent frame. The specific causal scenarios that relate counterfactual fairness and correlational fairness are useful, although I'm not well equipped enough in the structural causal modeling literature to understand the significance of this as a contribution. Weaknesses: I think the paper itself foresees my largest concern: it's not clear what settings these results apply in. While I'm not suggesting, as the conclusion tries to defend against, a full empirical validation of the theory, it would be nice if there was at least some attempt at mapping the causal graphs in Figure 2 to specific scenarios where the relationship being offered is relevant and an explanation of the relevance in that scenario. This exists, but way down in Section 5. Help your reader by making these connections right away. The literature on fairness (the human value) as operationalized by fairness (statistical metrics) strongly suggests that the problem with choosing metrics is less a fairness/accuracy tradeoff or impossibility of satisfying multiple metrics, but the need to contextualize the model well in the application, understanding the cost and meaning of things like errors and the value of their avoidance. It's not obvious that connections to counterfactual fairness help with this problem, and if they don't, that's a limitation worth articulating. The paper often cites into the literature in a way that only thinly and tenuously represents the large issues at play in the topic of the paper. 
I'd like to see more alignment between supporting sources and the content of the paper. For example, the citation of Jacobs & Wallach is offered as a problem with the difficulty of measurement error, but the paper is mostly about how to establish validity of models through explicit measurement modeling, a thing not considered by the methods in this paper at all. Nit: the paper consistently uses the word "persecuted" instead of "prosecuted" (e.g., at 53, 264, and 266, and maybe elsewhere?) Nit: the symbol l in Theorem 1 should probably be $\ell$. Nit: the discussion of gender at 164 is highly oversimplified and is probably actually a description of biological sex. Later text clarifies this, but the claims are a little vague as currently stated. Nit: "obtain" at 178 is probably the wrong word? Technical Quality: 3 good Clarity: 3 good Questions for Authors: How can the connections offered in this paper be operationalized for real-world cases? Many applications are mentioned, but the value of the connections is only loosely coupled to these. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is some defensive posturing in the conclusion about the lack of real-world connection, which is probably a limitation to address by improving the connections rather than by disclaiming the possibility of doing this (since making those connections is the ostensible value of the paper). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your support and your useful comments. > I think the paper itself foresees my largest concern: it's not clear what settings these results apply in. … This exists, but way down in Section 5. Help your reader by making these connections right away. We agree that the mapping of causal graphs in Figure 2 to specific scenarios is important, and we have moved the example scenario descriptions earlier in the paper. We have also added an experimental demonstration illustrating the results (see the global response), which we think further clarifies the practical scenarios corresponding to the causal graphs. > The literature on fairness (the human value) as operationalized by fairness (statistical metrics) strongly suggests that the problem with choosing metrics is less a fairness/accuracy tradeoff or impossibility of satisfying multiple metrics, but the need to contextualize the model well in the application, understanding the cost and meaning of things like errors and the value of their avoidance. It's not obvious that connections to counterfactual fairness help with this problem, and if they don't, that's a limitation worth articulating. Thank you for pointing out the importance of addressing the cost and meaning of errors that vary across contexts (e.g., false negatives versus false positives), and we have added that explicitly in the motivation. We believe notions of counterfactuals can potentially help with this problem. For example, the severity of harm from an error can be in part estimated by comparing it to the harm or benefit one would receive in the counterfactual scenario. And if one believes counterfactually unfair practices (i.e., those prohibited by the “but-for” test in U.S. law) lead to more harmful errors, then one might be able to use the correspondence we draw to better achieve that. 
For example, the types of errors made by a predictor that only achieves demographic parity will tend to be different from those of a predictor that only achieves equalized odds. We would push back a little on the claim that the core problem is neither the fairness/accuracy trade-off nor the mutual impossibility of metrics. In our experience, the literature still views both of these as important challenges [e.g., 1 on the trade-off and 3 on impossibility, just as examples], and we see our work as contributing to the toolkit of methods to address them [e.g., 2 on improving fairness on multiple metrics with “minimal model performance reduction”]. We think that contextual factors, such as the causal graphs that are the focus of this work and other approaches to counterfactual fairness, are relevant for addressing all three of the issues: the fairness/accuracy trade-off, the impossibility theorems, and the cost of errors and benefit of correct prediction/classification. We appreciate the reviewer helping us clarify this point, and we have added a brief discussion of this to the paper. [1] Ge, Zhao, Yu, Paul, Hu, Hsieh, and Zhang. 2022. “Toward Pareto Efficient Fairness-Utility Trade-off in Recommendation through Reinforcement Learning.” WSDM. doi:10.1145/3488560.3498487. [2] Hsu, Mazumder, Nandy, and Basu. 2022. “Pushing the Limits of Fairness Impossibility: Who’s the Fairest of Them All?” NeurIPS. doi:10.48550/arXiv.2208.12606. [3] Bell, Bynum, Drushchak, Zakharchenko, Rosenblatt, and Stoyanovich. 2023. “The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice.” FAccT. doi:10.1145/3593013.3594007. > The paper often cites into the literature in a way that only thinly and tenuously represents the large issues at play … explicit measurement modeling, a thing not considered by the methods in this paper at all. We see that J&W are focused on a particular approach that we do not apply or respond to. 
As far as we know, they have the most rigorous development of “measurement error models” in the fairness context, which is why we cited them. We have edited line 262 to specify that we are referring to their development of such models, and we have also provided a more general citation for measurement error [1]. We also reviewed the other citations in the paper to ensure that they are aligned with our text. [1] Bland and Altman. 1996. “Statistics Notes: Measurement Error.” BMJ. doi:10.1136/bmj.312.7047.1654. > Nit: the paper consistently uses the word "persecuted" instead of "prosecuted" (e.g., at 53, 264, and 266, and maybe elsewhere?) Nit: the symbol l in Theorem 1 should probably be $\ell$. Fixed. > Nit: the discussion of gender at 164 is highly oversimplified and is probably actually a description of biological sex. Later text clarifies this, but the claims are a little vague as currently stated. We could move the later text to the beginning of the discussion and reword for clarity, but with the existing discussion of race, we have just cut the gamete randomization example to make room for suggested additions. > How can the connections offered in this paper be operationalized for real-world cases? Many applications are mentioned, but the value of the connections is only loosely coupled to these… /// There is some defensive posturing in the conclusion about the lack of real-world connection… We have provided a partial connection to real-world cases through: 1. clarification of how the connections help one to measure (or enforce) counterfactual fairness using observational data, 2. elaboration of the three example problems, and 3. an experimental demonstration of the main results (including a simple method for learning a counterfactually fair predictor in the purely spurious setting; see the global response). 
We have also expanded the discussion of what “purely spurious” means in practice and clarified that this assumption is a limitation of the present paper (though not a fundamental one—we think it’s a great direction for future work to extend the ideas here to circumvent this!). We also discuss this in more detail in the global response. --- Rebuttal 2: Title: Follow-up with Reviewer U9Dm Comment: Dear Reviewer U9Dm, Thank you for the support and clarifying comments. We believe we have addressed each of your concerns, including making the real-world connections more prominent and adding some empirical validation. Could you please check our response and let us know if you have further questions? All the best, Authors --- Rebuttal Comment 2.1: Title: 48 hours left in discussion period Comment: Dear Reviewer U9Dm, There are only 48 hours left in the discussion period. We greatly appreciate your review and are keen to know if the improvements we made in response have fully addressed your concerns. Thanks again. All the best, Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their time and thorough commentary. We are glad to hear the core technical results were seen as valuable contributions (“interesting way to connect…”, “technically sound”, “useful”, “novel”, “the theoretical results are of high quality”, “expand upon the prior work meaningfully”) and the general presentation was clear (“integrating several literatures into a coherent frame”, “clear”, “really good job making the point”, “generally well-written and clear”). There were two main reviewer concerns: experimental evaluation of the theoretical results and real-world applicability. # Experiments As several reviewers suggested, we conducted an experimental test of the main theoretical predictions of the paper. See the attached PDF for the table of results. Per NeurIPS policy, we have sent a code link to the AC privately. As predicted, we find: 1. A counterfactually fair predictor satisfies the observational fairness metric corresponding to the underlying causal graph (Corollary 2.1). 2. Counterfactually fair predictors trained on biased data (where the protected class is associated with the outcome) have strong predictive performance on the unbiased target domain where no association exists (Theorem 1). To conduct this experiment, 1. We produce a novel counterfactually fair (CF) predictor for the purely spurious setting, $f_{CF}(X) = P(Y=1|X, A=1)P(A=1) + P(Y=1|X, A=0)P(A=0)$ (i.e., a weighted average of naive predictors across each level of the protected class). This follows because $P(Y=1| X, A) = P(Y=1|X_A^\perp,A)$ in the spurious setting. Then $f_{CF}$ only depends on $X_A^\perp$, so it is in fact CF. $f_{CF}$ can be readily estimated in practice. Namely, fit a model $f_{naive}(X,A)$ that predicts $Y$ from both $X$ and $A$ (thus estimating $P(Y=1 | X, A)$) and then let $f_{CF}(X) = f_{naive}(X, 1) P(A=1) + f_{naive}(X, 0) P(A=0)$. 2. 
We use the Adult income dataset to construct semi-synthetic data matching each of the causal graphs. We first construct a synthetic binary protected class $A$, sampled at random. We then have $A$ causally affect $X$ by fixing the value of several attributes with a fixed probability when $A=1$ (if $A=0$, no change is made). We then create three biased datasets corresponding to the three causal graphs shown in Figure 2 by (a) noising $Y$ according to $A$, (b) selecting data points based on $A$ and $Y$, and (c) selecting on $A$ and predictors of $Y$. 3. We train CF predictors on each biased dataset then check the performance of the CF predictor on the base data where $A$ and $Y$ are independent (Theorem 1) and check observational fairness metrics (Corollary 2.1). # Real-world connections and the ‘purely spurious’ assumption The other concern that reviewers raised is about real-world applicability. In particular, there is a concern that the ‘purely spurious’ requirement is too strong in practice and is not sufficiently acknowledged as a key limitation. We basically agree with this point! We view the contribution of the paper as novel technical results that show counterfactual fairness is both measurable and desirable in some natural situations. The significance of this, which the reviewers appreciated, is: 1. Counterfactual fairness is an important notion socially (e.g., it maps to a legal notion of discrimination). 2. It’s not obvious a priori that counterfactual fairness can ever be assessed or connected to other observational fairness metrics. We show that considering causal structure can allow this. 3. Prominent existing work [1] suggests that counterfactual fairness results in an unacceptable penalty to performance. Our results on out-of-domain performance directly rebut this by showing counterfactual fairness can be desirable even if one is concerned only with performance. 
Focusing on the purely spurious setting lets us make these points clearly, avoiding obfuscation from extra technical complexity. We follow the recent literature in using this assumption [e.g., 2,3,4] to make meaningful progress in this challenging area. We also think it should be possible for future work to move beyond the purely spurious case. There is a literature on causal fairness beyond counterfactual fairness that offers guidance on how to define fairness notions in more complex scenarios [e.g., 5, 6]. Given an assumed causal structure and a desired causal fairness notion respecting this structure, one should be able to extend these results to articulate the kind of domain shifts where a fair predictor is expected to be robust and to derive observable signatures of the fairness notion. We do not believe we have room in a single paper to thoroughly develop the core technical results and also tackle these complexities, but this is an exciting direction for future work. As suggested by the reviewers, we have also changed the text to foreground the purely spurious assumption as a limitation. In response to comments that the paper would be stronger with real-world application (e.g., “if the pipeline of Figure 1 were carried out to show the steps tangibly”), we have conducted the experimental test but also foregrounded the role that we see these results playing in that broader pipeline. [1] Nilforoshan, Gaebler, Shroff, and Goel. 2022. “Causal Conceptions of Fairness and Their Consequences.” PMLR. doi:10.48550/arXiv.2207.05302. [2] Makar and D’Amour. 2023. “Fairness and Robustness in Anti-Causal Prediction.” TMLR. doi:10.48550/arXiv.2209.09423. [3] Makar, Packer, Moldovan, Blalock, Halpern, and D’Amour. 2022. “Causally Motivated Shortcut Removal Using Auxiliary Labels.” AISTATS. doi:10.48550/arXiv.2105.06422. [4] Veitch, D’Amour, Yadlowsky, and Eisenstein. 2021. “Counterfactual Invariance to Spurious Correlations in Text Classification.” NeurIPS. doi:10.48550/arXiv.2106.00545. 
[5] Plecko and Bareinboim. 2022. “Causal Fairness Analysis.” doi:10.48550/arXiv.2207.11385. [6] Nabi and Shpitser. 2018. “Fair Inference on Outcomes.” AAAI. doi:10.1609/aaai.v32i1.11553. Pdf: /pdf/6a4d406e758e96a0ef79e9c4a3c6f617c1306451.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper provides a new way to analyze the fairness-accuracy tradeoff in many fairness definitions through the lens of counterfactual fairness. The authors prove that, under certain conditions, the counterfactually fair predictor is optimal when the target is OOD generalization. The theoretical analyses build the connection between counterfactual fairness and 10 well-known correlational fairness metrics. Strengths: S1. Interesting way to connect counterfactual fairness, out-of-distribution generalization, and other fairness definitions. S2. Technically sound proofs of the theorems and corollaries. S3. Writing quality is good. Weaknesses: W1. Some statements/assumptions need more comprehensive illustration. W2. Interesting theoretical analyses on counterfactual fairness. W3. The authors use the impossibility theorem as one motivation, but more discussion relating the technical analyses to the impossibility theorem would better position this paper. W4. The title is a bit misleading to me. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. I agree that the assumption about the purely spurious relationship between $Y$ and $A$ helps with Theorem 1. But for the race and zip code example in lines 203 -- 205, can we simply say the bias is due to a selection effect? It is obvious that populations from different demographic groups are distributed unevenly in a community, and the selection effect may further amplify the bias. Can we simply characterize it as due to a selection effect only? Q2. Is it possible to elaborate more on the assumption about decomposing $X$ into $X_{\perp A}$ and $X_{\perp Y}$? Is it a linear decomposition of $X$? It is a pretty strong assumption to decompose $X$ and I am not sure whether it is reasonable or not. Q3. What is the relationship between the proposed counterfactual fairness-based analysis and the impossibility theorem? 
The authors use the impossibility theorem as one motivation for studying counterfactual fairness, but no discussion about how these analyses are related to the impossibility theorem is included. Q4. Where is the evaluation on simulated data? The authors mention it as a contribution but I could not find it. If I missed it, it would be good to provide pointers. Q5. How do the proposed analyses help **select** the algorithmic fairness metrics? Some discussion (or maybe a case study) about how to use the proposed analyses to select fairness metrics would help. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: L1. Many theoretical analyses in the paper assume the decomposition of $X$ into two parts. But it is unclear whether this assumption is reasonable. L2. It might be good to draw figures for some concepts for better clarity (e.g., illustrative figures for unblocked paths and the correspondence between counterfactual fairness and correlational fairness). And it would be good to recall Figure 2 when discussing related technical details (e.g., Theorem 2). L3. The authors mention that the theorems hold on simulated data, but no evaluation is found. L4. It would be great to better connect the content with the title `select algorithmic fairness metrics'. L5. Please carefully check typos in the paper, e.g., `if they had if they were classified negatively' $\rightarrow$ `if they were classified negatively'. ================== Post-rebuttal ================== The authors have addressed my concerns, so I am raising my score. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and useful suggestions, which have significantly improved the manuscript. > W1. Some statements/assumptions need more comprehensive illustration. We have added more comprehensive illustrations in response to your comments below. > W3/Q3. The authors use impossibility theorem as one motivation, but more discussion about the technical analyses to impossibility theorem would better position this paper… We only intended the impossibility theorems to illustrate the need to make context-specific fairness decisions in practice because the metrics cannot all be satisfied at once, except in the unrealistic cases of perfect prediction, random prediction, or equal features across groups [1,2]. The technical counterfactual fairness analysis relates to this impossibility by providing a tool to select between the observational metrics based on the situation’s causal context (i.e., the data-generating process). We state the criteria that the causal context has to meet in Theorem 2 and Corollary 2.2. In response to the reviewer’s helpful suggestion, we have added a brief explanation of this relationship to the Discussion. We have also reduced the emphasis on impossibility theorems as a motivation in the Abstract and Introduction because there are many other reasons why we need to select between fairness metrics, and the impossibility results may not be as relevant in practice as they may seem [e.g., 3]. [1] Chouldechova. 2017. “Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments.” Big Data. doi:10.1089/big.2016.0047. [2] Kleinberg, Mullainathan, and Raghavan. 2016. “Inherent Trade-Offs in the Fair Determination of Risk Scores.” ITCS. doi:10.48550/arXiv.1609.05807. [3] Andrew, Bynum, Drushchak, Zakharchenko, Rosenblatt, and Stoyanovich. 2023. “The Possibility of Fairness: Revisiting the Impossibility Theorem in Practice.” FAccT. doi:10.1145/3593013.3594007. > W4/L4. 
Title is a bit misleading to me … better connect the content with the title `select algorithmic fairness metrics'. We plan to retitle the paper “Causal Context Connects Counterfactual Fairness to Group Fairness and Robust Prediction.” > Q1. I agree that assumptions about the purely spurious relationship between Y and A helps with theorem 1. But for the race and zip code example in lines 203 -- 205, can we simply say the bias is due to selection effect? It is obvious that population from different demographic groups are distributed unevenly in a community, and the selection effect may further amplify the bias. Can we simply characterize it as due to selection effect only? We appreciate you pointing out that correlations between race and recidivism might be due to other factors in addition to the selection effect. We have softened the language here and clarified our statement that “other factors” can have effects because we agree that, in the real world, it is probably not only due to a selection effect. > Q2/L1. Is it possible to elaborate more on the assumption about decomposing X into X \perp A and X \perp Y? Is it a linear decomposition of X?... We have added more detail when we first mention the decomposition. We do not mean a necessarily linear decomposition. The decomposition is quite general, and does not specify a particular functional form. E.g., $X^\perp_A$ may be some arbitrarily complicated nonlinear function $g(X)$, so long as $g(X(A=0)) = g(X(A=1))$ a.s. A key advantage of the view in this paper is that we do not need to know what this function is explicitly. The decomposition into two (abstract) parts is the ‘purely spurious’ assumption—see the global response for a discussion. We have added text to clarify this in the paper. > Q4/L3. Where is the evaluation on simulated data? The authors mention it as a contribution but I could not find it… We apologize for the erroneous statement left over from a draft that had simulations. 
We have in fact now added semi-synthetic experiments based on the Adult income dataset, detailed in the global response. > Q5. How does the proposed analyses help **select** the algorithmic fairness metrics? Some discussions (or maybe a case study) about how to use the proposed analyses to select fairness metrics would help. In short: the analyst uses their understanding of the real-world situation to posit a causal graph, then selects the fairness metric implied by that graph. We give examples in the paper of measurement error in arrest data (e.g., COMPAS), selection on outcome with people of younger age or privileged backgrounds being more likely to be included in the dataset, and selection on predictors with loan repayment data also not being a balanced mix of all protected classes. We discuss real-world application more in the global response. > L2. It might be good to draw figures for some concepts … recall Figure 2 when discussing related technical details (e.g., theorem 2). We have added a brief illustration of step-by-step reasoning to connect counterfactual fairness to observational fairness (e.g., an example of an unblocked path). We have also added an explicit mention of Figure 2 to the statement of Theorem 2. > L5. Please carefully check typos in the paper, e.g., `if they had if they were classified negatively' $\rightarrow$ `if they were classified negatively'. We have fixed this and gone through the paper to check for other typos. --- Rebuttal 2: Title: Follow-up with Reviewer WTYn Comment: Dear Reviewer WTYn, We are grateful for your suggestions and comments. We believe we have addressed each of your concerns and made each of the improvements you suggested, such as adding an experiment on the Adult income dataset and clarifying the decomposition. Could you please check our response and let us know if you have further questions? 
All the best, Authors --- Rebuttal Comment 2.1: Title: Response to author rebuttal Comment: Dear authors, I appreciate your efforts in addressing my concerns. I don't have further concerns and will raise my score. Best, Reviewer WTYn
You Only Condense Once: Two Rules for Pruning Condensed Datasets
Accept (poster)
Summary: This paper proposes a combination method for dataset pruning and dataset condensation. Both methods could reduce the size of training datasets. Experiments show that the proposed method achieves state-of-the-art performance on several datasets across networks. The proposed rules are simple and effective, and might be commonly used in future works. Strengths: - This paper studies an interesting yet rarely studied problem, which is important for on-device scenarios. In addition, the authors have comprehensively analyzed the challenges of this problem and the drawbacks of existing methods. - This paper is well-written and easy to follow. I generally enjoyed reading the story, motivation, method, and experiments of this paper. - Two novel, simple, and effective rules/metrics are introduced to select informative and balanced samples. Importantly, with these two rules, one can accommodate various size requirements with a single process. - Extensive experiments, including results and visualization, demonstrate the effectiveness and efficiency of the proposed method, and show its superiority over the state-of-the-art methods. Weaknesses: Generally, I think the paper is strong. However, I have the following questions. - In Table 1, the ImageNet-10 result for the DREAM method is not listed. - Some proofs, such as the Proof of Theorem 1, can be shortened. - The core idea is first to condense the dataset before pruning the dataset. How about pruning the dataset first and then condensing it? Which way is better and why? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - From Figure 1, previous methods require 200K*(N-1) epochs for extra condensation. Is there any way to reduce the extra condensation process without using dataset pruning? Dataset pruning is just like a selection method. - Will the code be open-sourced? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your comments. We will answer the questions below. > 1. In Table 1, the ImageNet-10 on the DREAM method is not listed. DREAM uses Tiny-ImageNet (resolution: 64x64) instead of a subset of ImageNet (resolution: 224 x 224). This is because DREAM performs clustering during condensation, and the increase in the image resolution incurs large memory costs. > 2. Some proofs, such as Proof of Theorem 1, can be shortened. Thanks for the suggestion. We will modify this part. > 3. The core idea is first to condense the dataset before pruning the dataset. How about pruning the dataset first and then condensing it? Which way is better and why? Thanks for the insightful question. We believe conducting dataset condensation before dataset pruning is better. Dataset pruning has roughly two times compression, and dataset condensation has 100 times compression. If we do dataset pruning before dataset condensation, the final compression ratio is still 100 times. If we do dataset condensation before dataset pruning, the final compression ratio can reach 200 times. > 4. From Figure 1, previous methods require 200K\*(N-1) epochs for extra condensation. Is there any way to reduce the extra condensation process without using dataset pruning? Dataset Pruning is just like a selection method. Yes, it is possible. Thanks for your suggestion. > 5. Will the code be open-sourced? Yes. The code will be released after acceptance. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for your effort in the rebuttal. My concerns have been well addressed in the response.
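The compression arithmetic in point 3 of the rebuttal above can be sanity-checked with a quick back-of-the-envelope sketch. The dataset size, the 2x pruning factor, and the IPC values below are illustrative assumptions, not numbers from the paper:

```python
n_full = 50000              # e.g., CIFAR-10 training set size (illustrative)
n_classes, ipc = 10, 50     # condensation output size is fixed by IPC

condensed = n_classes * ipc                 # 500 images, regardless of input size

# Prune first, then condense: the IPC target still fixes the final size.
prune_then_condense = condensed             # 500 images
# Condense first, then prune the condensed set by 2x.
condense_then_prune = condensed // 2        # 250 images

print(n_full // prune_then_condense)        # compression ratio: 100x
print(n_full // condense_then_prune)        # compression ratio: 200x
```

The key observation is that condensation produces a fixed-size output, so any pruning done beforehand is absorbed, while pruning afterwards compounds the ratio.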
Summary: This paper targets dataset condensation for on-device scenarios. It proposes two novel rules to prune the condensed dataset. The first rule is to rank the condensed images. The rule holds that easy examples are better for pruning the condensed dataset. The second rule is to choose balanced samples for different classes. The overall writing is clear. The experiments are sufficient. Strengths: 1. This is the first work to consider pruning the condensed dataset. The novelty is good. 2. The illustration and writing are clear. The motivation is clearly explained in the figures. 3. The paper provides solid proof for the LBPE metric for ranking the condensed images. 4. The balanced construction is reasonable, especially when the dataset size is small. 5. The experiments look good. On CIFAR-10, CIFAR-100, and ImageNet, the proposed YOCO method surpasses other methods by a large margin. For example, on CIFAR-10, the improvements are from 6.57% to 24.33%. On CIFAR-100, the improvements are from 4.77% to 22.13%. 6. The analysis is interesting. It's interesting to see that the rules are different between the condensed dataset and the full dataset. Weaknesses: 1. The LBPE score and balanced construction. Which rule is more important? 2. If we view the condensed dataset as a full dataset, will the rules change? Is there any threshold for when the rules will change? 3. The LBPE score is from the Top-K training epochs with the highest accuracy. Why not use the loss to extract the LBPE score? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The limitation part is covered in section 5. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your high score, and we'd like to answer your questions below. > 1. The LBPE score and balanced construction. Which rule is more important? Thanks for the question. Table 2 in the paper provides an ablation study on the importance of the LBPE score and Balanced Construction. These results indicate a higher importance of the LBPE score over balanced construction. > 2. If we view the condensed dataset as a full dataset, will the rules change? Is there any threshold for when the rules will change? Thanks for the question. In our setting, we indeed view the "condensed dataset" IPCF as the "full dataset". As reported in lines 261-262, the threshold is 24%, 38%, and 72% of total data pruned for condensed CIFAR10 IPCF=10, IPCF=50, and the full CIFAR10 dataset, respectively. The threshold tends to occur at a large pruning ratio if the dataset is large. > 3. The LBPE Score is from the Top-K Training Epochs with the Highest Accuracy. Why not use the loss to extract the LBPE score? Thanks for the question. We agree that the loss could be used to extract the LBPE score. In the following two tables, we compare the performance of using "accuracy" and "loss" as metrics.

Table 1. CIFAR10 IPCF=10

| (IPC10) | ipc1 | ipc2 | ipc5 | average |
| -------- | ----- | ----- | ----- | ------- |
| accuracy | 42.79 | 46.28 | 55.29 | 48.12 |
| loss | 41.42 | 47.12 | 55.96 | 48.16 |

Table 2. CIFAR10 IPCF=50

| (IPC50) | ipc1 | ipc2 | ipc5 | ipc10 | average |
| -------- | ----- | ----- | ----- | ----- | ------- |
| accuracy | 39.88 | 45.68 | 53.28 | 60.31 | 49.79 |
| loss | 37.01 | 45.47 | 53.86 | 60.96 | 49.33 |

The results show that these two metrics achieve similar performance on small IPCF, but the "accuracy" metric is better on large IPCF. --- Rebuttal Comment 1.1: Title: ------------------Post rebuttal----------- Comment: Thanks for addressing my concerns well. I maintain my score.
Summary: This paper employs a dataset pruning approach on a condensed dataset. Specifically, the intent is to evaluate and rank the data within this condensed dataset based on their respective significance. This enables users to selectively choose condensed datasets of any size, offering both efficacy and flexibility in training the model. Strengths: This paper claims that it's the first work to conduct dataset pruning on condensed datasets to fit varying computational constraints. The LBPE score is a reasonable metric to measure the importance of data in the condensed dataset. Weaknesses: 1. The main weakness is the evaluation part. This paper compares its method, (1) dataset condensation + dataset pruning, with the previous method, (2) dataset condensation + random selection (line 202), which is not fair. For example, it would be better to show the effectiveness of the proposed method by comparing (1) with IPCF=10 and IPCT=1, 2, or 5 against (3): directly generating the condensed dataset with IPC=1, 2, or 5. If it can get similar performance, this paper can claim its advantage in training time. 2. The whole pipeline is unclear, both in Fig. 1 and in the rest of the paper. I could only understand the paper after reading the Appendix. The algorithm is critical enough that it should be in the main body rather than the Appendix. 3. The proof of Theorem 1 is not solid. 4. The imbalanced part is not thoroughly analyzed. The equal number of each class is a standard setting in the condensed dataset. This paper does not provide new insight in Section 3.3. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments regarding the evaluation section of our paper. We will address your questions point by point. > 1. The main weakness is the evaluation part. This paper compares their method: (1). dataset condensation + dataset pruning with the previous method (2). dataset condensation + random selection (line 202), which is not fair. For example, it's better to show the effectiveness of the proposed method by comparing (1) with IPCF = 10 to IPCT=1 or 2 or 5 with (3): directly generating the dataset condensation with the IPC=1 or 2 or 5. If it can get similar performance, this paper can claim its advantages of training time. Thanks for your question. We will explain why our comparison is fair in two parts. ### Comparison with dataset pruning methods We would like to clarify that this paper, as the title suggests, utilizes **dataset pruning** techniques to fit condensed datasets to various computational budgets. So we compare the results with **dataset pruning** methods, including SSP [1], Entropy [2], AUM [3], Forg. [4], and EL2N [5] in **Table 1** and **Table 5**. It looks like the reviewer overlooked these comparisons. We want to emphasize that these comparisons are fair. ### Comparison with dataset condensation methods Regarding the comparison with dataset condensation methods, we believe it is also fair. In the table below, we list the required time for the condensation process on a 3090 GPU. We have these observations: A. Condensation on IPC-10 requires 12 hrs 36 mins. B. Our method requires 12 hrs 37 mins. It consists of the 12 hrs 36 mins required by process A, and **only 1 minute required by our pruning method**. Following the suggestion from the reviewer that we should not use random selection as a baseline, we set the accuracies of the other IPCs to zero. We also include the average accuracy over IPCs from 1 to 10 as the metric. 
It is clear that our method improves the average accuracy by **+50.03%** (from 6.75% to 56.78%) with almost **the same training time** required compared to "condense ipc10 only".

| | ipc1 | ipc2 | ipc3 | ipc4 | ipc5 | ipc6 | ipc7 | ipc8 | ipc9 | ipc10 | average | time |
|------------------------|-------|-------|------|-------|-------|-------|------|-------|-------|-------|---------|-----------------|
| condense ipc1 only | 50.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5.08 | 11 hrs 46 mins |
| condense ipc2 only | 0 | 54.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5.48 | 11 hrs 48 mins |
| condense ipc3 only | 0 | 0 | 59.8 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5.98 | 11 hrs 50 mins |
| condense ipc4 only | 0 | 0 | 0 | 61.8 | 0 | 0 | 0 | 0 | 0 | 0 | 6.18 | 11 hrs 51 mins |
| condense ipc5 only | 0 | 0 | 0 | 0 | 62.5 | 0 | 0 | 0 | 0 | 0 | 6.25 | 11 hrs 53 mins |
| condense ipc6 only | 0 | 0 | 0 | 0 | 0 | 64.6 | 0 | 0 | 0 | 0 | 6.46 | 12 hrs 02 mins |
| condense ipc7 only | 0 | 0 | 0 | 0 | 0 | 0 | 65.5 | 0 | 0 | 0 | 6.55 | 12 hrs 19 mins |
| condense ipc8 only | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 66.3 | 0 | 0 | 6.63 | 12 hrs 23 mins |
| condense ipc9 only | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 66.8 | 0 | 6.68 | 12 hrs 33 mins |
| condense ipc10 only | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 67.5 | 6.75 | 12 hrs 36 mins |
| **ours** | 42.79 | 46.28 | 51.8 | 54.27 | 55.29 | 59.61 | 61.1 | 63.68 | 65.44 | 67.5 | **56.78** | 12 hrs 37 mins |

### Reference [1] Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS 2022 [2] Selection via Proxy: Efficient Data Selection for Deep Learning, ICLR 2020 [3] Identifying Mislabeled Data using the Area Under the Margin Ranking, NeurIPS 2020 [4] An Empirical Study of Example Forgetting during Deep Neural Network Learning, ICLR 2019 [5] Deep Learning on a Data Diet: Finding Important Examples Early in Training, NeurIPS 2021 > 2. It's unclear—the whole pipeline in Fig 1 or the rest of the paper. 
I can only understand the paper after reading the Appendix. It's more critical than putting the algorithm part in the main body. Thanks for your question, but we are sorry that we don't quite understand it. **Concerning Figure 1**: While we recognize that the reviewer found this figure unclear, we are unsure which part is unclear. As commented by Reviewer hLEg, "The motivation is clearly explained in the figures." > 3. Proof of Theorem 1 is not solid. Thanks for your question, but it's difficult for us to provide a response based on the general judgment of "not solid". Could you please provide more information? > 4. The imbalanced part is not thoroughly analyzed. The equal number of each class is a standard setting in the condensed dataset. This paper does not provide new insight from Section 3.3 Thanks for the clear question. Regarding the "imbalanced part", please take a look at **Section 4.3.2**, **Table 3**, and **Figure 5**. We use these to analyze the imbalanced part. Could the reviewer please provide more information on which aspect of the analysis is missing for the imbalanced part? Regarding your comment "The equal number of each class is a standard setting in the condensed dataset", we agree with this. However, the equal number of each class is **NOT** a standard setting in **dataset pruning** methods. We find that the equal number of each class is quite useful for **pruning** the condensed dataset. This is our new insight, and we sincerely hope the reviewer will recognize this. --- Rebuttal Comment 1.1: Title: Thank you for the clarification Comment: I appreciate the effort put forth in this paper and recognize the novelty in exploring dataset pruning, a direction that I believe remains relatively under-researched. I do have several points of contention and suggestions that I would like to highlight: Q1. The main concern still exists. 
Referring to Table 1, the paper demonstrates that using IDC directly to produce IPC=10 results in an accuracy of 67.5%. However, when the proposed dataset pruning is employed on a condensed dataset with IPC=10 from IPC=50, the accuracy drops to 60.31%. Such a significant decrease in accuracy challenges the practicality of this pruning method. The primary objective of pruning is to derive a compact model (dataset in this paper) with only a slight compromise on accuracy when compared to a pre-trained model. And this pruned model should outperform training from scratch. While I acknowledge that pruning can speed up the process when leveraging a pre-trained dataset, this significant drop in accuracy remains concerning. My central point here is, I'd prefer comparisons between the Pruned Condensed Dataset and directly obtaining a Small Condensed Dataset with the same IPC. If the latter yields better results without requiring the intermediate step of dataset pruning, then what merits does the pruning method offer? Q2. I do not have any major concerns here. This isn't a critical issue but more of a suggestion for improvement. Q3. The proof provided for Theorem 1 doesn't strike me as a formal proof; it seems more like an elucidation. I'd recommend refining this section to meet the standards of a rigorous proof. Q4. I acknowledge and commend the analysis of the imbalanced part. My primary issue lies in "The equal number of each class is a standard setting in the condensed dataset": this is not a new insight. In my understanding, the primary aim of dataset pruning is to extract a succinct yet informative condensed dataset. So the goal is to compare with condensed-dataset methods. Previous methods for condensed datasets seem to consistently adopt the Balanced construction. For this point, it is not a technical concern but only a novelty concern. In summary, I do acknowledge and commend the contributions of 1, 2, and part of 3 in the paper. 
However, because this paper positions itself as the first work using dataset pruning on the condensed dataset, the authors should emphasize the effectiveness of dataset pruning in this field, such as little accuracy drop from the pre-trained condensed dataset, or performing better than a condensed dataset generated directly from scratch (IDC, DREAM), instead of focusing on comparing with other dataset pruning methods that were not proposed for condensed datasets. Regarding the experimental results, it would be more compelling to demonstrate superior performance when using the Pruned Large Pre-trained Condensed Dataset compared to directly generating the Small Condensed Dataset. --- Reply to Comment 1.1.1: Title: Response to Q1 (the main concern) Comment: > Regarding the experimental results, it would be more compelling to demonstrate superior performance when using the Pruned Large Pre-trained Condensed Dataset compared to directly generating the Small Condensed Dataset. > Referring to Table 1, the paper demonstrates that using IDC directly to produce IPC=10 results in an accuracy of 67.5%. However, when the proposed dataset pruning is employed on a condensed dataset with IPC=10 from IPC=50, the accuracy drops to 60.31%. Such a significant decrease in accuracy challenges the practicality of this pruning method. Thank you very much for your reply. We totally agree that the mentioned **superior performance** is a very good direction to explore. However, what we want to present in this paper is **increasing flexibility** for already condensed datasets. Let us look into the example the reviewer gave. We add setting C for comparison, and analyze point by point. **A**. IDC directly produces IPC=10 results, and the accuracy is **67.5%**. **B**. The proposed dataset pruning method selects IPC=10 from IPC=50, and the accuracy is **60.31%**. **C**. Based on IPC-10, our pruning method improves the accuracy of IPC 1, 2, 3, ..., 9. At the same time, IPC-10 still holds the accuracy of **67.5%**. 
The extra time required is just **one minute**. The analysis of settings A, B, and C is listed below: 1. We admit that on a **single** IPC-10, setting A is much better than setting B. But please consider the scenario where we need **multiple** IPCs for on-device applications. Therefore, please compare setting A with setting C: our method holds the 67.5% accuracy of IPC-10, and improves the accuracy of all of IPC 1, 2, ..., 9. All the improvements are achieved with **one extra minute**. 2. We hope you don't mind us saying this, but the setting B you brought up is a little unfair. This is because we can select IPC-10 from **any larger IPC-X**, including IPC-20, IPC-50, IPC-100, IPC-200, IPC-500, and so on. We find that if IPC-X **increases**, the performance of the selected IPC-10 will **decrease**. This pattern can be explained by the increasing pruning ratio. For example, selecting IPC-10 from IPC-20 leads to a pruning ratio of 50%, and selecting IPC-10 from IPC-50 leads to a pruning ratio of 80%. A pruning ratio of 80% means that **80% of the information from the original dataset is abandoned**. It is very difficult to achieve comparable performance to a “directly condensed IPC-10”, which has the full dataset information. 3. Following point 2, the pruning ratio in setting B is actually **80%** (IPC-10 from IPC-50). We want to share that when pruning **80%** of images from the **full** CIFAR-10 dataset, the state-of-the-art pruning method [1] still has an accuracy drop of **4.30%** (from 95.23% to 90.93%). These numbers are shown in Table 2 and Figure 5 (a) in [1]. Please note this 4.30% accuracy drop is for the **full dataset**, and pruning a **condensed dataset** is much more difficult. Therefore, we believe our **7.19%** accuracy drop (from 67.5% to 60.31%) is a very good result. 4. The table below shows the accuracy of selecting IPC-10 from IPC-50. 
We totally understand that the reviewer wants to compare **column to column**, i.e., 67.5% with 60.31%. This comparison is actually **unfair** since the pruning ratios are different (0% vs. 80%). Our achievement is obtained by comparing **row to row**, i.e., 60.31% with 54.72%. We believe it is **fair** because these two numbers (60.31% and 54.72%) are obtained under the same pruning ratio (80%). Our pruning method beats random selection by a large margin. | | IPC10 (baseline) | Select IPC10 from IPC50 | | ----------------------- | ---------------- | ----------------------- | | Pruning ratio (\%) | 0 | 80 | | Random selection (\%) | 67.50 | 54.72 | | Our pruning method (\%) | 67.50 | **60.31** | In conclusion, for any given IPC-X, using our pruning method will produce better accuracy for IPC-1, IPC-2, IPC-3, ..., IPC-(X-1). At the same time, we maintain the accuracy of IPC-X. We sincerely hope the reviewer can recognize our contribution regarding **increasing flexibility**. We thank the reviewer again for the valuable discussion. We will incorporate these discussions to revise our submission and make it clearer. Thank you! ## Reference [1] Zheng, Haizhong, et al. "Coverage-centric Coreset Selection for High Pruning Rates." In ICLR, 2023. --- Reply to Comment 1.1.2: Title: Response to Q2, Q3, and Q4 Comment: > Q2. I do not have any major concerns here. This isn't a critical issue but more of a suggestion for improvement. > Q3. The proof provided for Theorem 1 doesn't strike me as a formal proof; it reads more like an elucidation. I'd recommend refining this section to meet the standards of a rigorous proof. Thank you for your insightful suggestions. We will follow these suggestions to improve our paper. > Q4. I acknowledge and commend the analysis of the imbalanced part. My primary issue lies with the claim that "the equal number of each class is a standard setting in the condensed dataset"; this is not a new insight. 
In my understanding, the primary aim of dataset pruning is to extract a succinct yet informative condensed dataset, so the goal is to compare with condensed-dataset methods. Previous condensed-dataset methods seem to consistently adopt the balanced construction. On this point, there is not a technical concern but only a novelty concern. Thank you for the comments. First, please see the table below, which summarizes the previous literature. Dataset condensation methods use **balanced** construction, while dataset pruning methods [1,2,3,4,5] use **imbalanced** construction. | Previous Literature | Balanced or Imbalanced | | ----------------------- | ----- | | Dataset Condensation | Balanced | | Dataset Pruning | Imbalanced | Second, please note that our target is to do dataset **condensation** first, and dataset **pruning** afterward. The table below lists two possible ways to achieve our target. | | Direct Combination | Our Contribution | | -------------------- | ------------------ | ----------------- | | Dataset Condensation | Balanced | Balanced | | Dataset Pruning | Imbalanced | **Balanced** | - "Direct Combination" is a direct combination of two previous methods to achieve our target. This combination provides no novel contribution. - "Our Contribution" shows we are the **first** to use **balanced** construction for pruning a condensed dataset. In conclusion, our novelty is that we are the **first** to discover that the pruning rules are **different** for the **full** dataset and the **condensed** dataset. We believe our discovery will inspire the community to further explore the different rules of full and condensed datasets. We hope this response addresses your concerns properly. Thank you! 
## Reference [1] Selection via Proxy: Efficient Data Selection for Deep Learning, ICLR 2020. [2] Identifying Mislabeled Data using the Area Under the Margin Ranking, NeurIPS 2020. [3] An Empirical Study of Example Forgetting during Deep Neural Network Learning, ICLR 2019. [4] Deep Learning on a Data Diet: Finding Important Examples Early in Training, NeurIPS 2021. [5] Coverage-centric Coreset Selection for High Pruning Rates, ICLR 2023.
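The balanced construction discussed in this thread (keep exactly the same number of samples per class after pruning) can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: `scores` stands in for a per-sample usefulness score (such as the paper's LBPE), and keeping the lowest-score samples first is one of the two selection directions described for small target datasets.

```python
# Hedged sketch of score-based pruning with balanced class construction.
# `scores` and the keep-lowest-first direction are illustrative assumptions.
from collections import defaultdict

def balanced_prune(labels, scores, ipc):
    """Keep exactly `ipc` samples per class, ranked by ascending score,
    so the pruned set stays class-balanced by construction."""
    per_class = defaultdict(list)
    for idx, (y, s) in enumerate(zip(labels, scores)):
        per_class[y].append((s, idx))
    kept = []
    for y in sorted(per_class):
        ranked = sorted(per_class[y])            # ascending score
        kept.extend(idx for _, idx in ranked[:ipc])
    return sorted(kept)

labels = [0, 0, 0, 1, 1, 1]
scores = [0.9, 0.1, 0.5, 0.3, 0.8, 0.2]
print(balanced_prune(labels, scores, ipc=2))     # [1, 2, 3, 5]
```

Because every class contributes exactly `ipc` samples, the result remains balanced no matter how the scores are distributed across classes, unlike score-only (imbalanced) pruning.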
Summary: --- Post-Rebuttal Edit --- After author rebuttals, I have updated my rating from 5 to 6 and confidence from 2 to 3. --- End Post-Rebuttal Edit --- This paper considers a practical problem in dataset condensation/distillation, where edge devices have varying constraints that need to be flexibly met by a data distillation method. The authors propose two rules to prune the condensed datasets in order to save computational costs. The first rule uses logit-based prediction error (LBPE) to identify useful examples, where low LBPE samples are useful for smaller datasets and high LBPE samples are useful for larger datasets. The second rule focuses on the construction of balanced classes by ensuring an equal number of samples in each class via Rademacher complexity and generalization error. Strengths: - The writing is generally easy to follow and the experiments are thorough. - The proposed approaches are logical and appear useful. The first rule based on LBPE scores allows for flexible treatment based on dataset size and other characteristics; the second rule incorporates Rademacher complexity in a novel way in order to set up a theoretical foundation for the significance of balanced class construction. Weaknesses: - Perhaps this is because I am unfamiliar with data distillation literature, but the motivation and setting are somewhat unclear to me. There is not a single citation or brief explanation of what is meant by “on-device scenarios.” Even if this is obvious to well-informed readers, I believe that at minimum a citation or definition is needed. - In what scenario would one need to store a training set on a device? Typically when deploying a machine learning model to a hardware-constrained device, the only thing being deployed is the model weights. - Related to the above point, I don’t understand the motivation for the experimental setting, where we are condensing an already-condensed dataset. 
If one of the central motivations mentioned in the Abstract is to “[eliminate] the need for extra condensation processes,” then I don’t see how this method achieves said goal – we are still ultimately performing two condensation steps! - The mixture of theoretical- and prose-style writing is awkward at times. For example, Theorem 1 on line 122 would be much more appropriately written as simple explanatory text. I don’t understand in what sense this is a theorem or how lines 123-124 constitute a “proof” for this claim. There is no need to present content in a mathematical style when it is not necessary. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is an “on-device scenario?” What is the motivation for storing a training set (rather than, say, the trained model weights) on said device? - What is the motivation for condensing an already-condensed dataset? - How does this method avoid extra condensation steps if all experiments are performed after an initial condensation step with another method? - Why not perform a single condensation step with the proposed method (from full -> desired size)? How does the method perform in this setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - See Questions above - There is no clear definition of what is meant by “small” and “large” datasets. A definition or rough empirical guideline would be useful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments and for raising concerns about the clarity of the motivation and the terminology used in our paper. > 1. What is an “on-device scenario?” We acknowledge that the term "on-device scenarios" may not be clear to all readers. What we mean by this phrase is the application of machine learning in environments where both model training and inference occur on a local device, such as a smartphone or embedded system, without relying on cloud-based resources. We added a few citations [1-7] here and will make sure to include them in the revised manuscript. The key advantages of on-device learning can be explained as follows: 1. **Continual Adaptation to New Data**: By leveraging on-device learning, edge devices can continually adapt the AI models to new data. This provides a dynamic and flexible system that can evolve with changing data patterns and user behaviors. 2. **Privacy Protection**: Since the training and adaptation of the model occur directly on the device, there is no need to transfer data to the cloud, thereby reducing the risk of unauthorized access or breaches. > 2. What is the motivation for storing a training set (rather than, say, the trained model weights) on said device? The scenario you pointed out, where only the model weights are deployed to a hardware-constrained device, is indeed a common one. However, consider on-device scenarios such as: 1. Incremental learning: adapting the model to new data as it arrives. 2. Federated learning: decentralized training. In such cases, having a distilled or compressed version of the training set on the device can be valuable. Dataset condensation allows us to retain essential information while minimizing storage requirements. We will include a more detailed explanation of the specific scenarios where on-device storage of a training set might be needed. > 3. What is the motivation for condensing an already-condensed dataset? 1. 
**Limitations in Computational Power**: Existing methods may only reduce datasets to specific IPCs, and further condensation is needed for devices with lower computational abilities. 2. **Avoiding Extra Processes and Storage**: Traditional further condensation can cause performance losses or require additional complex processes and storage. To overcome these challenges, dataset pruning is used to find a representative subset without changing image pixels or needing extra storage, effectively eliminating extra condensation steps. > 4. How does this method avoid extra condensation steps if all experiments are performed after an initial condensation step with another method? Why not perform a single condensation step with the proposed method (from full -> desired size)? How does the method perform in this setting? The reason is that a single condensation step (from full -> desired size) is not **flexible**. If we condense the dataset to IPC5, we would not have a dataset of IPC10 if more computing resources become available. With our YOCO method, both IPC5 and IPC10 can easily be found as subsets of IPC50. > 5. There is no clear definition of what is meant by “small” and “large” datasets. A definition or rough empirical guideline would be useful. Please take a look at Fig. 4 and Section 4.4. ### Reference: [1] Cai, Han, et al. "TinyTL: Reduce memory, not parameters for efficient on-device learning." _Advances in Neural Information Processing Systems_ 33 (2020): 11285-11297. [2] Lin, Ji, et al. "On-device training under 256kb memory." _Advances in Neural Information Processing Systems_ 35 (2022): 22941-22954. [3] Yang, Li, Adnan Siraj Rakin, and Deliang Fan. "Rep-net: Efficient on-device learning via feature reprogramming." _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2022. [4] Yang, Yuedong, Guihong Li, and Radu Marculescu. "Efficient On-device Training via Gradient Filtering." 
_Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 2023. [5] Qiu, Xinchi, et al. "ZeroFL: Efficient on-device training for federated learning with local sparsity." _International Conference on Learning Representations_ (2022). [6] Lee, Jinsu, and Hoi-Jun Yoo. "An overview of energy-efficient hardware accelerators for on-device deep-neural-network training." _IEEE Open Journal of the Solid-State Circuits Society_ 1 (2021): 115-128. [7] Dhar, Sauptik, et al. "A survey of on-device machine learning: An algorithms and learning theory perspective." _ACM Transactions on Internet of Things_ 2.3 (2021): 1-49. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications regarding the motivation and problem setting. I believe that including such explanations and references in the revised Introduction will enable readers who are not experts in this specific sub-field (like myself) to better access and understand the contributions of the paper. I will be updating my rating from a 5 to a 6.
NeurIPS_2023_submissions_huggingface
2023
Phase diagram of early training dynamics in deep neural networks: effect of the learning rate, depth, and width
Accept (poster)
Summary: The paper studies the effect of depth, width, and learning rate on the early training dynamics of DNNs. The authors first describe four phases overall, and then focus on the first two phases. In the first phase, they identify three ranges of learning rates, according to how they affect the evolution of the training loss and the sharpness of the loss in the first few training steps: - For small learning rates, both loss and sharpness decrease monotonically. Surprisingly, this regime extends to learning rates that are larger than the traditional threshold of $2 / \lambda_{max}$, which guarantees a close match of GD to GF. - For larger learning rates, the loss is non-monotone but the sharpness still decreases. - For yet larger learning rates, both the loss and sharpness are non-monotone. Any larger learning rates lead to divergence. In general, larger depths $d$ and smaller widths $w$ shift these regimes to larger values of the learning rate. In the second phase, the sharpness settles to some value. For small learning rates $\eta$, this value is roughly constant w.r.t. $\eta$ (though it depends on the width), but for large $\eta$ the sharpness decreases and settles to a value which seems to be independent of the width. The simple model of a shallow linear network with one datapoint and one-dimensional inputs and outputs is studied, and the qualitative features described above are recovered. Strengths: Understanding the different regimes of training as a function of width, depth, and learning rate is an important problem, and any work that studies these questions either mathematically or empirically is welcome. The analysis and discussions are clear and insightful. A range of different architectures is tested. The regimes proposed and studied are to my knowledge new. 
Weaknesses: The authors define a large number of regimes according to some rather subtle changes in behavior sometimes, and it is not always clear whether these regimes are also correlated with other quantities that we actually care about (e.g. generalization/feature learning). The training phase of neural networks features many strange/non-monotonic behaviors, and it is not clear that all of them have to be identified, since they might have a rather small impact. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: It is briefly mentioned that setting the outputs of the network at initialization to zero changes the ranges of learning rates corresponding to the regimes of phase 1. In the appendix, it appears that the transitions between ranges become almost independent of the width in this case, and the smallest range (where both loss and sharpness decrease) stops much closer to the expected value of $2 / \lambda_{max}$. This suggests that the dependence of these values on the width $w$, which is quite thoroughly studied in the main paper, might only be related to the size of the output variance at initialization, and not some deeper property of the network. Ideally, I think that the plots of the appendix describing this phenomenon should be put in the main text, as they greatly impacted my personal interpretation of the results of the paper. Did the authors also check the behavior of the output function norm in the first few timesteps? I was wondering whether the non-monotonic behavior at the beginning of training might be related to the function norm having a similar non-monotonic behavior (scaling up then down, possibly to some value close to zero), since the function norm has an obvious impact on both the loss and sharpness of the loss. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations of the results presented are discussed well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
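The $2/\lambda_{max}$ threshold this review refers to is the classical gradient-descent stability condition, easiest to see on a quadratic. A minimal illustration (ours, not from the paper) on $L(x) = \lambda x^2 / 2$, whose sharpness is exactly $\lambda$:

```python
# Illustrative sketch of the classical 2/λ stability threshold for GD
# on a 1-D quadratic L(x) = λ x² / 2 (sharpness is exactly λ).
def gd_final_loss(lam, lr, x0=1.0, steps=50):
    x = x0
    for _ in range(steps):
        x -= lr * lam * x          # gradient step: L'(x) = λ x
    return 0.5 * lam * x * x

lam = 4.0                          # sharpness; threshold is 2/λ = 0.5
print(gd_final_loss(lam, 0.4) < 1e-6)   # below threshold: converges → True
print(gd_final_loss(lam, 0.6) > 1e6)    # above threshold: diverges → True
```

Below $\eta = 2/\lambda$ the iterates contract by $|1 - \eta\lambda| < 1$ per step; above it they grow geometrically. The surprise documented in this paper is that, at finite width and depth, networks can train stably beyond this threshold because the sharpness itself drops in the first few steps.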
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging comments. Below are our responses to the comments: > The authors define a large number of regimes according to some rather subtle changes in behavior sometimes, and it is not always clear whether these regimes are also correlated with other quantities that we actually care about (e.g. generalization/feature learning). The training phase of neural networks features many strange/non-monotonic behaviors, and it is not clear that all of them have to be identified, since they might have a rather small impact. We believe that analyzing the generalization and feature learning properties through the lens of our phase diagram could lead to a more comprehensive understanding. Various studies, including [46, 66] and a recent paper (https://arxiv.org/abs/2306.04815), claim that large learning rates lead to better performance. In particular, [46] conjectures that optimal performance in wide networks is obtained for learning rates between $c_{loss} = 2$ and $c_{max}$. We believe that our finer characterization of the phase diagram, particularly at finite depth and width, could help in narrowing down the range of learning rates with optimal performance. From a broader perspective, we believe that any systematic behavior in optimization dynamics that occurs repeatedly across architectures and datasets is worth documenting and trying to understand. While we sympathize with the referee's comment that it is unclear what will be useful for quantities we care about, we do think that systematic studies that shed light on the nature of the loss landscape and the behavior of nonconvex optimization are worthwhile, as it is difficult to predict which discoveries will eventually be practically useful. > It is briefly mentioned that setting the outputs of the network at initialization to zero changes the ranges of learning rates corresponding to the regimes of phase 1. 
In the appendix, it appears that the transitions between ranges become almost independent of the width in this case, and the smallest range (where both loss and sharpness decrease) stops much closer to the expected value of $2 / \lambda_{max}$. This suggests that the dependence of these values on the width $w$, which is quite thoroughly studied in the main paper, might only be related to the size of the output variance at initialization, and not some deeper property of the network. Ideally, I think that the plots of the appendix describing this phenomenon should be put in the main text, as they greatly impacted my personal interpretation of the results of the paper. We understand that this aspect may deserve more attention in the main text. Accordingly, we can move the relevant phase diagrams (for example, Figure 30) from the appendix to the main part of the paper to highlight this observation. > Did the authors also check the behavior of the output function norm in the first few timesteps? I was wondering whether the non-monotonic behavior at the beginning of training might be related to the function norm having a similar non-monotonic behavior (scaling up then down, possibly to some value close to zero), since the function norm has an obvious impact on both the loss and sharpness of the loss. Based on the reviewer's feedback, we performed a preliminary analysis of the output function norm during early training in FCNs. For models trained with MSE loss, we found that the critical learning rate constant $c_{output}$ at which the output norm increases from initialization aligns with $c_{loss}$ (see table below). This is expected because the output norm is closely related to the loss. However, for cross-entropy loss, we found that the critical constant for the output norm is smaller than $c_{loss}$, as shown in the table below; specifically, $c_{output} < c_{loss}$. 
A thorough study would be needed to fully understand the relationship between the output function norm and cross-entropy loss. **Table: Average values of different critical constants of 4 layer FCNs trained with MSE loss using SGD with batch size $512$. Each value is an average over 10 distinct initializations.** | $d$ | $w$ | $c_{loss}$ | $c_{sharp}$ | $c_{output}$ | $c_{max}$ | |-----|------|------------|-------------|--------------|-----------| | 4 | 256 | 3.05 | 3.85 | 3.04 | 7.69 | | 4 | 512 | 2.52 | 3.34 | 2.55 | 8.41 | | 4 | 1024 | 2.30 | 3.35 | 2.30 | 9.86 | | 4 | 2048 | 2.24 | 3.39 | 2.24 | 10.22 | --- **Table: Average values of different critical constants of 4 layer FCNs trained with cross-entropy loss using SGD with batch size $512$. Each value is an average over 10 distinct initializations.** | $d$ | $w$ | $c_{loss}$ | $c_{sharp}$ | $c_{output}$ | $c_{max}$ | |-----|------|------------|-------------|--------------|-----------| | 4 | 256 | 7.70 | 10.94 | 4.79 | 21.68 | | 4 | 512 | 7.03 | 13.23 | 4.59 | 37.52 | | 4 | 1024 | 6.66 | 22.32 | 4.14 | 69.42 | | 4 | 2048 | 5.51 | 5.83 | 3.64 | 101.80 | --- Rebuttal Comment 1.1: Comment: Thank you for your answers and additional numerical experiments.
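For reference, the sharpness values behind critical constants like $c_{sharp}$ are top Hessian eigenvalues, which in practice are commonly estimated by power iteration using only Hessian-vector products rather than the full Hessian. The paper does not specify its estimator, so the sketch below is a generic illustration; the explicit symmetric matrix simply stands in for a Hessian-vector product oracle.

```python
# Generic power-iteration sketch for estimating λ_max from matrix-vector
# products only (the same interface a Hessian-vector product provides).
def power_iteration(matvec, dim, iters=200):
    v = [1.0 / dim ** 0.5] * dim               # unit-norm start vector
    lam = 0.0
    for _ in range(iters):
        w = matvec(v)
        lam = sum(wi * vi for wi, vi in zip(w, v))   # Rayleigh quotient
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return lam

# Toy symmetric "Hessian" with eigenvalues 1 and 3.
H = [[2.0, 1.0], [1.0, 2.0]]
mv = lambda v: [sum(H[i][j] * v[j] for j in range(2)) for i in range(2)]
print(power_iteration(mv, 2))   # ≈ 3.0
```

For a neural network one would replace `mv` with a Hessian-vector product (e.g., via automatic differentiation), which avoids ever materializing the Hessian.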
Summary: The paper identifies new phenomena involving the dynamics of the sharpness (largest eigenvalue of the Hessian) in the early stages of neural net training, for ReLU nets of various architectures that are initialized using He initialization. Previously, it was known that for very wide networks, there are two 'phases' in hyperparameter space: on the one hand, if the step size exceeds 2/(initial sharpness), then a "catapult" occurs (https://arxiv.org/abs/2003.02218) and the sharpness drops; on the other hand, if the step size is below this threshold, then the sharpness remains at its initial value. This paper fills in the picture for _narrow_ nets. For these nets, the paper shows that the sharpness decreases at the beginning of training even when the step size is less than 2/(initial sharpness). As a consequence, the train loss does not spike even when the step size is bigger than 2/(initial sharpness), since the sharpness drops first. Only for larger learning rates does the train loss initially spike. Then, for still larger learning rates, the paper shows the _sharpness_ also spikes (which the paper calls a "sharpness catapult"), which has not been observed previously. For several architectures, the paper draws phase diagrams depicting which learning rates cause which behavior. The paper then investigates connections to the loss connectivity literature, finding that only very large learning rates result in a barrier in the loss landscape (and that not every catapult results in a barrier). The paper then investigates the role of initial network output, finding that initial sharpness reduction goes away if the network is made to output zero at initialization. The paper finally reproduces all of these phenomena analytically in a simplified model (the u-v model from the catapult paper). Strengths: - The paper identifies and carefully studies a novel phenomenon involving the dynamics of the sharpness at the early stage of training. 
- The experiments are very thorough (though limited to ReLU nets initialized with He initialization) - There is an analytical explanation in a simplified model - The findings shed some light on mode connectivity Weaknesses: - The paper doesn't explain the early-stage sharpness reduction for general neural nets (i.e. beyond the simple u-v model) Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - Have you looked at loss connectivity between the final parameters (at the end of training) and the initialization? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: discussed above Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging comments. Here is our response to the question on loss connectivity: > Have you looked at loss connectivity between the final parameters (at the end of training) and the initialization? As the main focus of this work is the early training dynamics, we did not analyze the loss connectivity between the initial and final parameters. However, we refer to a couple of studies that linearly interpolate the loss between the initial and final parameters. Goodfellow et al. 2014 (https://arxiv.org/abs/1412.6544) linearly interpolate the loss between the initial and final points of training and show that training does not encounter any barriers. However, they do not analyze the effect of large learning rates. A more recent study by Lucas et al. 2021 (https://arxiv.org/abs/2104.11044) analyzed the effect of large learning rates in Section 4.1 of their paper. They report that training traverses a barrier at large learning rates, which is in agreement with the naive intuition of a barrier between the initial and final points of the loss catapult. However, they do not distinguish their result as a function of different learning rate phases. We believe that analyzing such results through the lens of our phase diagram could lead to a more comprehensive understanding of these phenomena. --- Rebuttal Comment 1.1: Title: Thanks for authors' rebuttal! Comment: Reviewer BcD3, did the authors address your concerns on early-stage sharpness reduction for general neural nets? Thanks. --- Reply to Comment 1.1.1: Title: Further clarifications on early sharpness reduction and loss catapult Comment: Even though this question was directed at Reviewer BcD3, we would like to take the opportunity to clarify this point. We have observed two separate phenomena in general neural networks with standard parameterizations: (i) the initial reduction in sharpness (i.e. 
the "sharpness reduction phase") and (ii) the systematic increase of $c_{loss}$ (marking the onset of the loss catapult) with $1/w$, leading to the opening up of the sharpness reduction phase with $1/w$. To further understand these phenomena, we performed systematic experiments as discussed in Section 4 and Appendices G and H. These results indicate that the initial sharpness reduction depends on the function output scale at initialization (see Figures 29 and 31). This provides some explanation, albeit incomplete, for the origin of (i). However, the reasons behind the scaling of $c_{loss}$ with $1/w$ remain unclear. This scaling only ceases when the function output is set to zero at initialization (Appendix G, Figure 30) and persists even when the network output is made small by scaling it by a constant (Appendix H, Figure 32). We made some progress in developing a theoretical understanding of these phenomena in a toy model (the uv model), but developing a complete theoretical understanding for general neural nets is beyond the scope of this work. Given the richness and universality of these findings across architectures and datasets, we believe they deserve the attention of the wider community to arrive at a complete understanding.
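The uv model mentioned above is small enough to simulate exactly. The sketch below is our illustrative reconstruction (with input $x = 1$ and target $y = 1$, not the authors' code): it runs GD on $L(u, v) = (uv - y)^2/2$ at a learning rate well above $2/\lambda_0$ and tracks the exact sharpness of the $2 \times 2$ Hessian.

```python
# Illustrative GD simulation of the uv model, L(u, v) = (uv − y)² / 2.
import math

def sharpness(u, v, y):
    """Exact λ_max of the 2×2 Hessian [[v², 2uv−y], [2uv−y, u²]]."""
    a = (u * u + v * v) / 2
    b = math.sqrt(((u * u - v * v) / 2) ** 2 + (2 * u * v - y) ** 2)
    return a + b

def train(u, v, y, lr, steps):
    traj = []
    for _ in range(steps):
        r = u * v - y                                  # residual
        u, v = u - lr * r * v, v - lr * r * u          # simultaneous GD step
        traj.append((0.5 * (u * v - y) ** 2, sharpness(u, v, y)))
    return traj

u0 = v0 = 2.0
y = 1.0
lr = 0.3                       # well above 2 / sharpness(2, 2, 1) = 2/11
traj = train(u0, v0, y, lr, 300)
print(sharpness(u0, v0, y))    # initial sharpness: 11.0
print(traj[-1])                # small final loss, final sharpness ≈ 2
```

Here the initial sharpness is $11$, so $\eta = 0.3$ far exceeds $2/\lambda_0 = 2/11$; training nevertheless settles at a flat minimum with sharpness $2$, mirroring the sharpness reduction at large learning rates discussed in the rebuttal.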
Summary: In this paper, the authors studied the effects of the learning rate, width, and depth of neural networks on early training dynamics. Specifically, the authors focused on the value of the loss and the maximum eigenvalue of the Hessian of the loss, and found that there are 4 different possible training dynamics early in training. These different regimes are determined by the value of the learning rate and are related to the depth and width. Experimental results are provided to support the claim. A simple example with the same phenomena is also given and analyzed. Strengths: 1. Understanding the effect of large learning rates on the nonlinear training dynamics is an important problem. 2. The paper presents many experimental results to support their findings. 3. The finding that there exist different regimes depending on the learning rate ($c_{loss}$, $c_{sharp}$, $c_{max}$) in early training seems to be interesting and new. Weaknesses: 1. The current paper seems to have lots of results and experiments in the main text. As a reader, it is not very easy for me to get the main conclusion for each section/set of experiments. It might be good to highlight the conclusions so that readers can understand the points more easily. 2. The current paper tries to study the effect of depth and width on those $c$ values ($c_{loss}$, $c_{sharp}$, $c_{max}$) related to the stepsize. However, it seems to me that only a few choices of width $w$ are tried in the experiments (Figure 3), and no results show the effect of depth $d$ in the main text. It might be good to include more results on different widths and depths so that the fitted curves could use more than only 3 or 4 data points and the support of the conclusion could be stronger. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I was wondering, for these $c$ values ($c_{loss}$, $c_{sharp}$, $c_{max}$), how many times were the experiments repeated? 2. 
For the definition of these $c$ values, I was wondering how the time $T_1$ is chosen. 3. In Definition 4, should $\chi_\tau$ be $\chi_\tau^\prime$? 4. In Figure 4, I was wondering whether the networks have the same depth and width, or whether each point represents a network that may have a different depth and width. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations are discussed in the paper. This is a theoretical work and therefore does not seem to have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and careful reading. Below are our responses to the questions and comments: > The current paper seems to have lots of results and experiments in the main text. As a reader, it is not very easy for me to get the main conclusion for each section/set of experiments. It might be good to highlight the conclusions so that the readers can understand the point easier. We can highlight/summarize the conclusion of each experiment in each section if it helps readers to grasp the main points easily. This requires minor edits, and we can implement them in the updated version of the manuscript. > The current paper tries to study the effect of depth and width on those values ($c_{loss}, c_{sharp}, c_{max}$) related with stepsize. However, it seems to me that only a few choices of width $w$ are tried in the experiments (Figure 3) and no results showing the effect of depth $d$ in the main text. It might be good to include more results on different width and depth so that the fitted curves could use more than only 3 or 4 data points and the support of the conclusion could be stronger. For each architecture, we explored various values of depth and width, considering $10$ distinct initializations for each. Specifically, we tested $4$ values of widths and $3$ values of depths for FCNs and $3$ values of widths for CNNs. We concur that adding more values could strengthen our conclusions. Figure 1 of the PDF attached to the global response shows the phase diagrams of three neural networks with ~10 width values: (a) 8-layer FCNs trained on MNIST with MSE, (b) 7-layer CNNs trained on Fashion-MNIST using MSE, and (c) 16-layer FCNs trained on CIFAR-10 using MSE. We will make these adjustments in the revised version. We also agree that including the effect of depth in the main text would be beneficial and will make these enhancements in the updated version of the manuscript.
> I was wondering for these $c$ values ($c_{loss}, c_{sharp}, c_{max}$), how many times do the experiments repeat? As mentioned in line 4 of the Figure 3 caption, each experiment is repeated for $10$ distinct initializations for each depth and width to obtain the $c$ values. > For the definition of these $c$ values, I was wondering how time $T_1$ is chosen. We choose $T_1$ to be the smallest value of step $t$ that contains the entire duration of the catapult effect. For the widths and depths considered, the catapult typically lasts at most $\sim 10$ steps. Thus, for computational efficiency, we've set $T_1 = 10$. > In Definition 4, should $\chi_\tau$ be $\chi_\tau'$? We thank the reviewer for pointing this out. We have fixed this in the updated version of the manuscript. > In Figure 4, I was wondering if the networks have the same depth and width, or different point represents network that may have different depth and width. In Figure 4, each data point can correspond to a run with varying depth, width, and initialization. With $n_w$ values of width, $n_d$ values of depth, and $10$ initializations, there are a total of $n_w \times n_d \times 10$ data points in this Figure, which illustrates that the inequality $c_{loss} \leq c_{sharp} \leq c_{max}$ holds regardless of depth, width, and initialization. We thank the reviewer for pointing this out. We will update the Figure caption to include this information in the revised version of the manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the response addressing my questions. It makes things clear and helps me have a better understanding of the paper. I will increase my score.
Summary: This work mainly investigates the learning dynamics depending on the learning rate and categorizes its qualitative behaviors into phases. In particular, the authors identify new phases: sharpness reduction and loss-sharpness catapult. They investigate the dependence of the critical learning rates (mainly $c_{loss}, c_{sharp}, c_{max}$ and $c_{barrier}$) on the width, depth & loss type, and try to identify a universal relationship between them. Their empirical observations in some deep neural networks are also confirmed in a simple linear network, and the obtained phase diagram is expected to hold in various situations. Strengths: This work attacks the challenging problem of finding universal behavior of early learning dynamics in finite-size networks (I guess the terminology of "phase diagram" comes from physics or chemistry, but such a dynamic, finite-size diagram seems not to be known). Despite this difficulty, the authors verified new phases (characterized by $c_{loss},c_{sharp},c_{max}$) in various settings, and these new phases appear original. Their careful and elaborate experiments are persuasive and enhance the quality of the paper. Weaknesses: (a) The whole findings are empirical. Even in a simple model of $uvx$, the authors did not solve the dynamics in an analytical form. It is quite hard to obtain any intuition on why the three phases characterized by $c_{loss}<c_{sharp}<c_{max}$ universally appear. (b) There are several unclear points that the authors should mention more clearly. See below. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: **Connection to the infinite width limit** While the current work focuses on finite-sized networks, some previous work investigated the infinite-width limit [64]. In particular, [64] claims that in the infinite-width limit, the learning dynamics *must be* in the kernel regime for SP initialization (otherwise, the learning dynamics either does not progress or explodes).
How is this infinite-width study related to your results? Since the critical learning rate of the kernel regime is given by $2/\lambda_{0}^{NTK}$, does this suggest that all of $c_{loss,sharp,max}$ converge to 2 in the infinite width limit? **Phase diagram in previous work** Related to [64], there are several studies on the phase diagram of convergence in the infinite width case (e.g. https://arxiv.org/abs/2012.15110 and https://arxiv.org/abs/2007.07497). It would be better to mention them to give a richer overview of the phase-diagram studies in the literature. More related to the current work, the phase diagram for width vs. learning rate is also investigated in the literature (https://arxiv.org/abs/1806.01316). This work implies that $2/\lambda_{0}^{NTK}$ works as a critical learning rate for convergence in the infinite-width limit. **The effect of early learning dynamics on eventual minima** I understand that the main focus of the current work is on the early training regime, but it is quite curious how the sharpness reduction, loss catapult, and loss-sharpness catapult phases determine the eventual generalization performance and sharpness of minima in the trained models. If this point were clarified, it would increase the significance of this paper much more. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: **No normalization layers** The architectures in this work have no batch/layer normalization layers. Since batch norm is known to strongly affect the sharpness (e.g. https://arxiv.org/abs/1901.10159), it is curious how normalization layers would potentially change the results.
**Difference between MSE and cross-entropy cases** It seems that the cross-entropy loss shows a different behavior of the critical learning rates. In particular, in most cases, $c_{loss}$ and $c_{sharp}$ decrease close to 2 as $1/w$ decreases in the MSE case, while they remain far from 2 in the cross-entropy case. In the revised version, I believe it would be better to give a more elaborate explanation of the difference in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to read our manuscript and for providing detailed comments. Below are our responses to the comments provided: > Connection to the infinite width limit: In our study, we are looking at the behavior of the optimization dynamics at training timescales $t_\star$ that grow with width $w$. Specifically, the end of the early time transient period occurs at $t_\star \sim \log(w)$. However, in the usual infinite width studies in SP initialization [64], the training dynamics is restricted to $O(1)$ times in the limit of infinite width; this leads to the kernel regime, in which training is lazy for learning rates below $2/\lambda_0^{NTK}$ and divergent for larger learning rates. Our results for critical constants in the limit of large width, therefore, do not converge to $2$ as would be expected from the analysis of [64] (except for $c_{loss}$, which converges to $2$ for MSE loss). We will provide a more detailed explanation in the updated version of the paper to clarify this distinction. > Phase diagram in previous work: We will incorporate the above suggestion by including a paragraph discussing the phase diagrams from previous studies in the updated version of our manuscript. > The effect of early learning dynamics on eventual minima: We agree that analyzing generalization properties through the lens of the phase diagram could lead to a more comprehensive understanding and enhance the importance of the study. However, this is a non-trivial task due to the extensive hyperparameter search space (including depth, width, learning rates, initializations, and batch size) and the significantly large training time required for each workload (model and training task). Moreover, defining the success measure, such as finding the best generalization error for a fixed compute budget versus the shortest training time for a given target error, adds another layer of complexity.
A comprehensive analysis would result from an accumulation of studies, with our study contributing to this effort. > Difference between MSE and cross-entropy cases: The disparity in the limiting value of $c_{loss}$ at large width is mostly due to the non-constant Hessian of the cross-entropy loss, as discussed in [62]. Previous work, such as [47], analyzed the catapult dynamics for the $uv$ model with logistic loss and demonstrated that the loss catapult occurs above $\eta_{loss} = 4 / \lambda_0^{NTK}$. Their argument is as follows. Consider the $uv$ model trained on a binary classification task using the logistic loss on two training examples $(x_1, y_1) = (1, 1)$ and $(x_2, y_2) = (1, -1)$. Then, the total loss is $\mathcal{L}(f) = \frac{1}{2} \log (2 + 2 \cosh(f))$. Hence, the loss grows monotonically as the output function $|f|$ increases. The update equation of the function is given by: \begin{align} f_{t+1} = f_t \left( 1 - \frac{\eta \lambda^{NTK}_t \mathcal{L}'(f_t)}{f_t} + \frac{\eta^2 \mathcal{L}'(f_t)^2}{w} \right), \end{align} where $\eta$ is the learning rate, $w$ is the width, and $\mathcal{L}'(.)$ is the derivative of the loss. At large width $w$, if the condition $| 1 - \eta \lambda^{NTK} \mathcal{L}'(f_t) / f_t| < 1$ holds, then the output function continues to decrease. Given that $\mathcal{L}'(f) / f \leq 1/2$ in the above case, this decrease persists for $\eta \lambda^{NTK} < 4$. This result provides some intuition behind the discrepancy. We can describe the above analysis in more detail in the updated version of the manuscript. However, a complete understanding of the catapult phenomenon in the context of cross-entropy loss requires a more detailed examination. --- Rebuttal Comment 1.1: Comment: Thank you for your kind reply. I am looking forward to seeing the accepted version, which will briefly include the above answers.
>a complete understanding of the catapult phenomenon in the context of cross-entropy loss requires a more detailed examination. I agree. I hope that this point will be clarified in your next work or other subsequent works.
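The contraction argument in the rebuttal above can be checked numerically. The sketch below is our own hedged illustration (parameter values are assumptions, not the authors' code): it iterates the quoted update rule with $\mathcal{L}(f) = \frac{1}{2}\log(2 + 2\cosh f)$, whose derivative is $\mathcal{L}'(f) = \frac{1}{2}\tanh(f/2)$, at large width where the $\eta^2/w$ correction is negligible and $\lambda^{NTK}$ is held constant.

```python
import math

def dloss(f):
    # L(f) = 0.5 * log(2 + 2*cosh(f))  =>  L'(f) = 0.5 * tanh(f / 2)
    return 0.5 * math.tanh(f / 2.0)

def step(f, eta, lam, w):
    # f_{t+1} = f_t * (1 - eta * lam * L'(f_t) / f_t + eta^2 * L'(f_t)^2 / w)
    return f * (1.0 - eta * lam * dloss(f) / f + (eta ** 2) * dloss(f) ** 2 / w)

def trajectory(f0, eta, lam, w=1e8, steps=50):
    # Illustrative large-width run with lambda^NTK held fixed.
    fs = [f0]
    for _ in range(steps):
        fs.append(step(fs[-1], eta, lam, w))
    return fs

# With eta * lambda = 2 < 4, the output |f_t| contracts monotonically toward 0,
# consistent with the bound L'(f)/f <= 1/2 invoked in the rebuttal.
fs = trajectory(f0=1.0, eta=2.0, lam=1.0)
print(fs[0], fs[-1])
```

Since $\mathcal{L}'(f)/f \in (0, 1/4]$ for $f \neq 0$, the multiplicative factor stays in $(0, 1)$ for $\eta\lambda^{NTK} < 4$, so the iterate shrinks at every step; capturing the actual catapult above the threshold would additionally require the finite-width $\eta^2/w$ term and the evolution of $\lambda^{NTK}_t$.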
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for taking the time to review our paper and providing detailed comments. Attached below is the PDF containing the figures required for some responses. Pdf: /pdf/6454cba2e59bb7937f38dde0a452c80e73c4e977.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Towards Anytime Classification in Early-Exit Architectures by Enforcing Conditional Monotonicity
Accept (poster)
Summary: This paper studies an interesting problem of early-exit classification networks when applied to anytime prediction tasks. Specifically, the authors empirically demonstrate that in an early-exit network, the prediction confidence scores of its different exits for the same sample do not monotonically increase. Then the authors propose a method to recalculate the prediction confidence of different exits. This method is shown to be effective in improving the conditional monotonicity of different exits' predictions. Strengths: 1. The studied problem is interesting and important; 2. The proposed method is theoretically guaranteed; 3. The experiments are comprehensive, covering different architectures and datasets. Weaknesses: I have some concerns as follows: 1. The results on CIFAR datasets (Fig. 2 left and middle) are confusing: the IMTA baselines achieved lower or similar accuracy compared to the MSDNet baseline in Fig. 2. However, IMTA is apparently a stronger method than the original baseline. This makes the results much less convincing to me. Moreover, I did not find the ImageNet results using the proposed PA for MSDNet and IMTA (blue and green dashed lines) in Fig 3 (right). Only DVT results (orange dashed line) are presented. 2. The experimental results in Fig. 4 indeed show that the ground-truth probability trajectories monotonically increase when using the proposed PA. However, **how such monotonicity improves the anytime prediction task** is not discussed. Is there any metric that can measure the anytime performance of a model? For now, the authors only show us that the overall accuracy can be preserved (or improved a little). 3. It is recommended to conduct experiments on more recent multi-exit models, such as L2W-MSDNet [1]. 4. I'm curious whether better monotonicity is beneficial in the budgeted dynamic inference setting (the smooth accuracy-FLOPs curves reported in the MSDNet and IMTA papers). [1] Han, Yizeng, et al.
"Learning to Weight Samples for Dynamic Early-Exiting Networks." ECCV 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As I stated in the weakness part, the benefits of the obtained better monotonicity are not discussed in depth. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your time and efforts in reviewing our work. > the authors empirically demonstrate that in an early-exit network, prediction confidence scores [...] for the same sample are not monotonically increased. We would point out that our findings indicate the ground-truth class probability, denoted as $p_m(y^* | \boldsymbol{x})$, exhibits non-monotonic behavior. This means that there are instances where the ground-truth class is more probable in earlier exits than in subsequent ones. We believe that this is even more concerning than non-monotonicity observed in prediction confidence, represented as $\max_y p_m(y | \boldsymbol{x})$. Nevertheless, we concur that any form of non-monotonicity is troubling. For example, in Appendix B.7 we show that MSDNet is non-monotone also w.r.t. its estimates of uncertainty. Specifically, there are points where the model is highly confident in the initial exits and then becomes less confident later on. As a measure of uncertainty, we have employed entropy and conformal set sizes, but we expect the results to also hold for prediction confidence. Encouragingly, our PA helps with monotonicity not only when considering performance quality (e.g., ground-truth probability), but also for monotonicity in the uncertainty (e.g., conformal set size). > IMTA is apparently a stronger method than the original baseline. This makes the results much less convincing to me. > It is recommended to conduct experiments [using] L2W-MSDNet We share your surprise that IMTA does not clearly outperform MSDNet in our experiments, which seems to contradict the claims in their respective papers. However, it's important to emphasize that we used the code and training scripts exactly as provided by the authors of IMTA (note that the pre-trained models were not published for this model): https://github.com/kalviny/IMTA. 
Our own attempts to improve the performance of the baseline IMTA model (to match the performance reported in their paper) were unsuccessful. We followed your advice and also ran the experiments using the L2W model. As shown in Figure R.4 (see attached PDF), our findings remain consistent — **the PA method maintains marginal accuracy while substantially improving conditional monotonicity for L2W**. We will add this baseline to Figures 2 & 3 in the camera-ready version. We believe the inclusion of L2W will make our empirical findings stronger, so we would like to thank you for this suggestion. For now we have results only for the ImageNet model because, regrettably, the authors of L2W released neither training code nor pre-trained models for CIFAR-10/100: https://github.com/LeapLabTHU/L2W-DEN. We will reach out to them regarding this so that we can, hopefully, include L2W results for all considered datasets in our camera-ready. > I did not find the ImageNet [PA results] for MSDNet and IMTA [...] in Fig 3 (right). The monotonicity curves of the PA method for the MSDNet and IMTA models on ImageNet overlap with the one for the DViT model at y=0. We recognize that the current presentation might lead to confusion, and we appreciate you highlighting this issue. We plan to add a statement to the caption of Figure 3 to clarify this. > how can such monotonicity improve the anytime prediction task, is not discussed. > the benefits of [...] better monotonicity are not discussed in depth. The primary argument of our paper is that with our PA method, we can maintain the marginal accuracy while simultaneously ensuring conditional monotonicity. This is crucial in a truly anytime setting (where the stopping time is random and determined by the environment) as it ensures that the performance will not deteriorate with more computational resources.
As a concrete example of how monotonicity can help with anytime predictions, we can consider again the scenario of Android phones introduced in Section 1 (cf. lines 21-24 and 29-32). **Deploying the original MSDNet might result in certain data points receiving inferior predictions on a higher-spec device compared to a device with limited computational capabilities**. However, if we apply our PA to an MSDNet, such inconsistencies become far less likely due to the model's monotonic behavior. > Is there any metric that can measure the anytime performance of a model? Estimating the performance of an anytime algorithm in a real-world setting is tricky, as there are many potentially important facets of "performance" to consider. We believe that the combination of marginal accuracy (Figure 2) and conditional monotonicity (Figure 3) that we focus on in our paper represents some of these facets. Unfortunately, to the best of our knowledge, previous work has almost exclusively focused on marginal accuracy, so this is an underdeveloped area. Nonetheless, we have identified two metrics proposed in previous work that could potentially be useful for studying anytime models. The first is the "Overthinking" metric introduced in [1], which looks at the difference in performance between an oracle model that exits at the first exit with the correct answer and the performance of the full model (i.e., final exit). The second, termed "Hindsight Improvability" in [2], gauges the efficiency of the early-exit network in leveraging previous predictions. We show that our PA leads to improvements over the baseline MSDNet in terms of both of those metrics; see Appendix B.5 for more details. > I'm curious whether better monotonicity is beneficial in the budgeted dynamic inference setting Please refer to the *Anytime Prediction vs. Input-Dependent/Efficient Inference setting* in the global rebuttal.
If you believe we have adequately addressed your concerns, we would be grateful if you would consider raising your score. [1] Kaya, Y., et al., 2019, May. Shallow-deep networks: Understanding and mitigating network overthinking. *ICML* [2] Wołczyk, M., et al., 2021. Zero time waste: Recycling predictions in early exit neural networks. *NeurIPS* --- Rebuttal Comment 1.1: Comment: 1. It's strange that IMTA performs worse than MSDNet, because I have run the experiments on CIFAR-100. Only the finetuning stage can increase the accuracy. This downgrades my confidence in your results. 2. Most of my concerns are addressed. About the application on L2W, I think the ImageNet results are sufficient. 3. I'm still curious about the benefits of better monotonicity in the budgeted dynamic inference setting. Testing without finetuning or re-training is easy, taking very little time. I'd be glad to see this result. --- Reply to Comment 1.1.1: Comment: We thank you for your response. > 3. I'm still curious about the benefits of better monotonicity in the budgeted dynamic inference setting. Testing without finetuning or re-training is easy, taking very little time. I'd be glad to see this result. We have performed the budgeted batch classification experiment from the MSDNet paper. Since we cannot attach plots to our response here, we summarize the results below in a table format (we report test accuracy in % at various levels of computational budget measured in FLOPs). We will include a full plot in the camera-ready version.
**CIFAR-10**: | FLOPs[1e7] | MSDNet | PA | | --- | --- | --- | | 1.7 | **91.39** | 90.71 | | 2.0 | **92.26** | 91.58 | | 3.2 | **92.66** | 92.49 | | 4.5 | 92.59 | **92.62** | | 5.1 | 92.59 | **92.63** | **CIFAR-100**: | FLOPs[1e7] | MSDNet | PA | | --- | --- | --- | | 1.7 | **64.87** | 64.15 | | 2.0 | **68.26** | 66.4 | | 3.3 | **71.96** | 69.91 | | 4.9 | 72.20 | **72.44** | | 6.6 | 71.68 | **73.95** | **ImageNet**: | FLOPs[1e8] | MSDNet | PA | | --- | --- | --- | | 3.7 | **58.87** | 58.49 | | 5.1 | **64.44** | 62.82 | | 8.7 | **71.68** | 69.48 | | 10.5 | **72.46** | 71.23 | | 11.6 | **72.51** | 71.94 | Since MSDNet outperforms our PA at most computational budgets, we conclude that monotonicity is less beneficial in budgeted batch classification. However, as is evident on the CIFAR datasets, the non-monotonicity of MSDNet is worrisome also in this setting - for larger values of FLOPs, MSDNet performance starts to drop, while PA keeps monotonically increasing and surpasses MSDNet. Moreover, note that in our paper we suggest “turning on” our PA only for anytime prediction - we make no claims regarding the use of our model in budgeted batch classification. The results here suggest that the user is better off sticking to the original model in budgeted batch classification, though this certainly warrants further investigation (as evidenced by the performance benefits of our PA on CIFAR for larger FLOPs). We will add a new section in the Appendix of our camera-ready version describing this experiment. We believe that it will be a nice addition and should further help with illustrating the difference between the two settings (anytime prediction vs budgeted batch classification). We thank you for this suggestion. > 1. It's strange that IMTA performs worse than MSDNet, because I have run the experiments on CIFAR-100. Only the finetuning stage can increase the accuracy. This downgrades my confidence in your results.
We sincerely regret the diminished confidence you have in our experimental results due to the reproducibility challenges surrounding IMTA. We are committed to addressing this by engaging with the authors of the IMTA model for clearer guidance on achieving the accuracy numbers they reported in their work. We'd like to emphasize that even if we were to replicate the IMTA results precisely (meaning that marginal accuracy would be between 0.5-2% higher compared to what we managed to get using that model), it would not have any significant impact on the overall message of our paper. Our core finding is that early-exit models are non-monotone and hence not directly applicable in anytime prediction setting. This is further supported by the fact that models outperforming IMTA in terms of marginal accuracy, such as DViT and L2W, are still highly non-monotone.
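To make the contrast between the two settings discussed in this thread concrete, here is a minimal sketch (our own illustration; the names and numbers are assumptions, not the papers' code) of confidence-thresholded exiting as used in budgeted batch classification: each sample leaves at the first exit whose top class probability clears a threshold, and that threshold is what gets tuned to meet a FLOPs budget. In anytime prediction, by contrast, the exit index is imposed externally, which is why conditional monotonicity matters there.

```python
import numpy as np

def budgeted_exit(per_exit_probs, threshold):
    """Return (exit_index, predicted_class): stop at the first exit whose top
    class probability clears the threshold; the last exit always answers."""
    last = len(per_exit_probs) - 1
    for m, p in enumerate(per_exit_probs):
        if p.max() >= threshold or m == last:
            return m, int(p.argmax())

# Hypothetical per-exit softmax outputs for one sample, 3 classes.
probs = [np.array([0.4, 0.35, 0.25]),   # exit 0: unsure
         np.array([0.8, 0.15, 0.05]),   # exit 1: confident
         np.array([0.9, 0.07, 0.03])]   # exit 2: most compute
print(budgeted_exit(probs, threshold=0.75))  # stops at exit 1
```

Lowering the threshold makes samples exit earlier on average (cheaper, less accurate); raising it spends more FLOPs per sample, which is how the accuracy-FLOPs curves in the tables above are traced out.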
Summary: This submission focuses on multi-exit early networks, from the perspective of anytime inference that can accommodate varying computational budgets. Such models produce a progressive refinement of the final prediction, offering the opportunity to "exit early" while providing a meaningful prediction. The main motivation of this work is to enforce conditional monotonicity on the predictions of successive early exits, such that the output quality consistently improves as the inference process progresses. The proposed solution adopts the Product-of-Experts formulation and can be applied post-training, to achieve monotonicity in prediction quality as well as prediction uncertainty. Strengths: -The submission identifies and studies an interesting challenge with early-exit models, in the context of anytime inference, which is often overlooked by the relevant literature. -The paper's analysis is wide and insightful, providing numerous empirical results that can motivate further research on the topic. -The proposed methodology is clearly explained, and experiments are conducted on a range of traditional benchmarks, consistent with the practice of the relevant literature. Weaknesses: -Certain design choices are not adequately justified, and further comparisons/ablations should be included in the main paper. More specifically, it would be beneficial to examine: i) how the proposed methodology would behave in frozen-backbone early-exit models and ii) how it compares to a simple ensemble of early-exit models, which comprises an intuitive baseline. -Additionally, it is claimed that the proposed approach can also be applied at training time, but this scenario is not adequately evaluated. Would the proposed methodology incur additional error or under-confidence on the shallower exits (instead of improving the deeper ones) in a quest to achieve the desired monotonicity? And if so, how can this be mitigated?
-Figure 2-right may suggest a scalability issue for the proposed approach, as the baseline dominates the depth-accuracy trade-off in contrast to other datasets. This limitation should be further investigated. -The motivation of progressively increasing predictive confidence may contradict the general benefits of uncertainty estimation in early-exit models (e.g. for input-dependent inference). One could argue that uncertainty should ideally be calibrated with the probability of the ground-truth class, rather than penalised on early classifiers even if the provided prediction is correct. -Finally, the discussion of the results (e.g. on Fig. 2) could benefit from providing some quantitative conclusions in the text (e.g. average/median improvement etc.). Technical Quality: 3 good Clarity: 3 good Questions for Authors: In accordance with the discussion in "Weaknesses", please clarify: 1. How does the proposed methodology compare to vanilla early-exit ensembles in terms of both accuracy and uncertainty monotonicity (should be added to the Fig. 2 analysis)? 2. Is the proposed methodology still relevant if applied to early-exit models trained via the frozen-backbone methodology (i.e. is the behaviour of Fig. 1-middle still evident in this case, and resolved by the proposed approach)? 3. Is there a scalability issue when the proposed method is applied on ImageNet? What aspect (sample size, number of classes, etc.) seems to mostly cause this effect? 4. Are the predictive accuracy and uncertainty of early classifiers reduced when applying the proposed methodology at training time? 5. Is it indeed beneficial to decrease prediction confidence on shallow classifiers, even when their predictions are correct (e.g. on easier samples)? 6. In the case of the IMTA early-exit model, where knowledge distillation is applied, it is expected that the prediction uncertainty of the resulting model will be affected.
How does this affect the analysis of Sec. 6.2; do the results remain consistent for IMTA models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Some limitations are clearly discussed in a dedicated subsection of the paper. Following the above discussion, it may be beneficial to broaden this discussion accordingly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your valuable review and many interesting questions. > how the proposed methodology would behave in frozen backbone early-exit models > Is the proposed methodology still relevant if applied on early-exit models trained via the frozen-backbone methodology **Yes, our proposed PA is still relevant when applied to EENNs with a frozen backbone**. For a concrete example, see Appendix B.5, specifically Figure 16, where we present the results using ZTW [1]. ZTW begins with a pretrained backbone, e.g., ResNet, and adds early-exit heads on top of it. During training of the exit heads, the backbone is frozen (see Algorithm 1 in their paper). Figure 16 shows that ZTW is highly non-monotone, corroborating our findings in Section 3. Furthermore, applying our PA to ZTW substantially outperforms the original ZTW in terms of Hindsight Improvability, a metric proposed by the ZTW authors [1] to capture how effectively a model reuses past predictions. If by "frozen backbone early-exit models" you're referring to something else, please clarify, and we'll discuss further. If there's a specific EENN you're interested in, let us know and we'll do our best to conduct the relevant experiments during the discussion period. > in comparison to simple ensemble between early-exit models that comprises a intuitive baseline. > How does the proposed methodology compare to vanilla early-exit ensembles We did explore this baseline, though we didn't label it as “early-exit ensembles”. We'll clarify this in the camera-ready. The results are in Appendix B.2; see the MoE-exponential (corresponding to an ensemble of all softmax predictives up-to-and-including the current exit) in Figure 7. **In short, vanilla early-exit ensembles are not sufficient to achieve monotonicity**. For now, the results are only available for CIFAR-100 with MSDNet.
If you think it would be beneficial for us to extend these experiments to other datasets and models, we would be happy to do so and include the results in Appendix B.2. We did not include the early-exit ensembles baseline in Figures 2 & 3, since these plots are already somewhat busy. If you disagree with this choice, we are willing to change it. > certain design choices are not adequately justified and further comparisons/ablations should be included Please let us know if there are any specific additional ablations we should perform. As discussed with reviewer G2Sm, we’ll include a more comprehensive ablation on the activation function as well as on the ensemble weighting scheme in the camera-ready. > Is the predictive accuracy and uncertainty of early-classifier reduced when applying the proposed method at training time? Due to space constraints, this didn't make it into the main text. However, we intend to incorporate it using the extra page allocated for the camera-ready. Please see Appendix B.3.2, where we elaborate in detail on the application of our PA model during training/finetuning. As shown in Figure 12, fine-tuning with PA does slightly compromise both accuracy and monotonicity compared to the post-hoc PA. Nonetheless, it still markedly surpasses the baseline model in terms of monotonicity and largely closes the calibration gap of the post-hoc PA. We perceive this as a promising avenue for future exploration. > Is there a scalability issue when […] applied on ImageNet? We are not entirely sure what you mean by scalability issue here; let us know if we’ve misinterpreted. If by scalability issue you mean that our method leads to a decrease in marginal accuracy for DViT and IMTA models on ImageNet, we would point out that the performance drop is rather marginal (amounting to less than 1%). 
Considering that a lack of monotonicity can be detrimental in the anytime setting, we deem such a decline in marginal performance an acceptable trade-off for a significantly more monotone model. > Is it indeed beneficial to decrease prediction confidence on shallow classifiers, even when their predictions are correct (e.g. on easier samples) ? > progressively increasing predictive confidence, may contradict the general benefits of uncertainty estimation in EENNs (e.g. for input dependent inference). We agree that decreasing confidence also on easier samples could be problematic in input-dependent inference. However, this application is out of the scope of our current paper. Please refer to *Anytime Prediction vs. Input-Dependent/Efficient Inference setting* in the global rebuttal for further discussion. We also encourage you to have a look at Appendix B.3.1, where we introduce PA with adaptive thresholding. As shown in Figure 11, this adaptation does not reduce confidence for all data points early on, but rather for the more challenging ones. However, monotonicity is compromised when compared to vanilla PA; see Figure 10. Moreover, it isn’t post-hoc due to the fitting of a thresholding model. For these reasons, the main text prioritizes the original PA. Nonetheless, we believe that adaptive thresholding presents a promising avenue for future work. > […] uncertainty should ideally be calibrated with the probability of the ground-truth class; rather than penalised […] even if [...] prediction is correct. We agree with this concern and have discussed it in Section 6.3 of the paper. Please also refer to *Calibration Gap* in the global rebuttal. > How does this affect the analysis of Sec.6.2; […] consistent for IMTA models ? We followed your suggestion and have replicated the experiment from Section 6.2 using the IMTA model. As depicted in Figure R.3 (see attached pdf), the uncertainty results are consistent for this baseline model. 
> the discussion […] could benefit from some quantitative conclusions [...] Will do! If you think we have sufficiently answered your questions, we would appreciate it if you would consider increasing your score. [1] Wołczyk, M., et al., 2021. Zero time waste: Recycling predictions in early exit neural networks. *NeurIPS* --- Rebuttal Comment 1.1: Comment: Thank you for providing thorough and constructive clarifications to all raised comments. I acknowledge I have read them, along with the comments of the other reviewers. I believe it would be beneficial to incorporate some corresponding changes to the manuscript, to ensure that the above assumptions, limitations and insights are clearly stated. Additionally, including a more comprehensive comparison with MoE on ImageNet (both in terms of accuracy and monotonicity) would significantly increase the conclusiveness of this comparison. --- Reply to Comment 1.1.1: Comment: Thank you for your response and help in improving our work. We will, of course, incorporate all the valuable feedback from the reviews into our camera-ready version, as we firmly believe this will further strengthen our manuscript. We will add a comparison with the MoE baseline for all the models and datasets to Appendix B.2. 
Since we unfortunately cannot attach plots to our response here, we summarise the results for the MSDNet model on ImageNet in table form below.

**Test accuracy [%] per early exit**

| | 1 | 2 | 3 | 4 | 5 |
| --- | --- | --- | --- | --- | --- |
| MSDNet | 58.0 | 65.2 | 69.3 | 71.3 | 72.1 |
| MSDNet-PA | 58.0 | 65.6 | 69.3 | 71.7 | 72.7 |
| MSDNet-MoE | 58.0 | 64.7 | 68.7 | 70.9 | 72.3 |

**Conditional Monotonicity [%] per decrease threshold** (the lower the better)

| | 0.05 | 0.1 | 0.2 | 0.33 | 0.5 |
| --- | --- | --- | --- | --- | --- |
| MSDNet | 46.8 | 34.0 | 18.5 | 8.1 | 2.3 |
| MSDNet-PA | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 |
| MSDNet-MoE | 17.1 | 8.6 | 2.5 | 1.1 | 0.1 |

As seen here, these results confirm our findings from Figure 7 that vanilla early-exit ensembles are not sufficient to make the model fully monotone (contrary to our PA).
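For readers wondering how the conditional-monotonicity percentages in the table above could be computed, here is a small sketch (the function name is ours; it assumes the per-exit ground-truth probabilities $p_m(y^* | \boldsymbol{x})$ are available for every test sample):

```python
import numpy as np

def monotonicity_violations(gt_probs, thresholds):
    """Percentage of test samples whose ground-truth class probability
    drops by more than each threshold at some later exit.

    gt_probs: (N, M) array, one row per test sample, one column per exit.
    """
    running_max = np.maximum.accumulate(gt_probs, axis=1)  # best prob seen so far
    max_drop = (running_max - gt_probs).max(axis=1)        # worst drop per sample
    return {t: 100.0 * np.mean(max_drop > t) for t in thresholds}

# toy example: 3 samples, 4 exits
p = np.array([[0.2, 0.4, 0.25, 0.5],   # drops by 0.15 at exit 3
              [0.1, 0.2, 0.3, 0.4],    # perfectly monotone
              [0.5, 0.47, 0.6, 0.7]])  # small 0.03 drop
print(monotonicity_violations(p, [0.05, 0.1, 0.2]))
```

A fully monotone model (like PA in the table) yields 0.0 for every threshold, since no sample's ground-truth probability ever drops below its running maximum.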
Summary: The authors propose a new approach, called Product Anytime (PA), for monotonic confidence estimation in early-exit neural networks. Inspired by the Product-of-Experts (PoE) approach, PA takes the normalized confidence with respect to the product of ReLU-thresholded prediction logits. The authors show through experiments that PA achieves similar accuracy curves as the original softmax logits, but improves the monotonicity of early-exited prediction confidences. Strengths: 1. The authors tackle a practical and important problem that should be solved in early-exit neural networks. 2. The intuition of monotonicity from the Product-of-Experts (PoE) makes sense to me. 3. The provided evaluations are comprehensive and support the claims made in the paper. Weaknesses: 1. Although I agree with the authors that monotonicity is an important property in early-exit NNs, I am not completely convinced whether it is meaningful to design a specialized algorithm to achieve it. My point is, can we use the confidence/uncertainty calibration algorithms [1, 2] in the literature, that seem to produce relatively accurate confidence estimation, to replace the proposed algorithm? On one hand, the uncertainty calibration algorithm better reflects the accuracy of the model prediction. At the same time, both approaches (uncertainty estimation and the proposed algorithm) cannot provide a theoretical guarantee as PoE does. I think uncertainty quantification should provide better monotonicity than softmax confidence. On the other hand, uncertainty calibration gives better intuition on which sample should be early exited, since it does not have the "calibration gap" issue observed in the proposed PA method. 2. I am unsure how the proposed algorithm should be used in the practical runtime deployment scenario. 3. 
The proposed approach seems to have multiple obvious limitations, such as the zero distribution due to non-overlapping labels among stages, the poor confidence at initial stages, and suboptimal performance for applications with the limited number of classes. [1] Gal, Yarin, and Zoubin Ghahramani. "Dropout as a Bayesian approximation: Representing model uncertainty in deep learning." international conference on machine learning. PMLR, 2016. [2] Lakshminarayanan, Balaji, Alexander Pritzel, and Charles Blundell. "Simple and scalable predictive uncertainty estimation using deep ensembles." Advances in neural information processing systems 30 (2017). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: What is the unique advantage of the proposed approach compared to conventional uncertainty estimation algorithms applied in early-exit neural networks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors have a separate section to discuss the limitations of the proposed technical solution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time in reviewing our work. We address your concerns below. > The authors propose [PA] for monotonic confidence estimation We would just point out that our model primarily targets monotonicity in the ground-truth class probability, denoted as $p_m(y^* | \boldsymbol{x})$, and not in prediction confidence, represented as $\max_y p_m(y | \boldsymbol{x})$. Nevertheless, we concur that any form of non-monotonicity is troubling. Encouragingly, our PA helps with monotonicity not only when considering performance quality (e.g., ground-truth probability), but also for monotonicity in the uncertainty (e.g., conformal set size; see also Appendix B.7 for a more detailed analysis on this). > Although I agree with the authors that monotonicity is an important property in early-exit NNs, I am not completely convinced whether it is meaningful to design a specialized algorithm to achieve it. > I am unsure how the proposed algorithm should be used in the practical runtime deployment scenario. Respectfully, we would disagree that our solution is particularly specialized. The method we propose is entirely post-hoc, meaning it can be readily applied to any pre-trained early-exit neural network. The implementation consists of four lines of Python code:

```python
import torch

# logits: torch tensor with shape (t, C), where t is the index of the current exit and C the number of classes
# weights: PoE weights (torch tensor with shape (t,))
probs = torch.clamp(logits, min=0)   # ReLU activation
probs = probs.pow(weights[:, None])  # apply PoE weights
probs = torch.prod(probs, dim=0)     # product ensemble
probs /= probs.sum()                 # normalize
```

**Thus, any model originally designed for an efficient inference scenario (e.g., MSDNet) can be repurposed for anytime prediction by simply "turning on" our post-hoc solution.** We hope that with our answer here, we have also addressed your question on the “practical runtime deployment scenario”. 
If not, please let us know, and we will be happy to provide additional details and clarifications. > On the other hand, uncertainty calibration gives better intuition on which sample should be early exited We would point out that in the anytime setting the “intuition on which sample should be early exited” is of lesser importance, since the environment dictates the exit point, not the user. Please refer to the *Anytime Prediction vs. Input-Dependent/Efficient Inference setting* section in the global rebuttal for more. > My point is, can we use the confidence/uncertainty calibration algorithms [1, 2] in the literature, that seem to produce relatively accurate confidence estimation to replace the proposed algorithm? > What is the unique advantage of the proposed approach compared to conventional uncertainty estimation algorithms applied in early-exit neural networks? Please refer to the *Monotonicity via Calibration* section in the global rebuttal above, where we have performed a new experiment based on your questions. If you feel that our answers there inadequately address your concerns, please let us know, and we will be happy to elaborate further. > The proposed approach seems to have multiple obvious limitations, such as the zero distribution due to non-overlapping labels among stages, the poor confidence at initial stages, and suboptimal performance for applications with the limited number of classes. Please refer to the *Calibration Gap* section in the global rebuttal for a longer discussion on “the poor confidence at initial stages”. Regarding the issue with “zero distribution due to non-overlapping labels among stages” - while we agree that this could be a concern, we have not observed many issues with this in our experiments. **The collapse to a zero-distribution was found to occur extremely rarely on the considered datasets**. Specifically, when using MSDNet, we observe this scenario for 3 (out of 10000) test cases for CIFAR-10, and never for CIFAR-100 and ImageNet. 
Also, in such cases, we provide a concrete suggestion of falling back to softmax probabilities, which we found to work well in practice. To counter the issue of suboptimal performance when dealing with a small number of classes (i.e., < 5), we recommend using the Caching Anytime (CA) approach instead of the PA method. Although CA was originally proposed in Zilberstein’s work [1], it has often been overlooked in contemporary anytime literature, where the standard practice has been to return the most recent prediction. We hope our work will reignite interest in the CA approach within the anytime literature. Moreover, poor performance for a small number of classes is common among approaches relying on a Product-of-Experts ensemble. In [2], the authors report the same issue; see their Appendix C.4. If you think we have addressed your concerns sufficiently, we would appreciate you considering raising your score. [1] Zilberstein, S., 1996. Using anytime algorithms in intelligent systems. *AI magazine* [2] Wołczyk, M., et al., 2021. Zero time waste: Recycling predictions in early exit neural networks. *NeurIPS* --- Rebuttal Comment 1.1: Title: Thanks for the response. Comment: Some of my concerns have been addressed. However, I cannot agree with the authors that in an anytime setting, the exit point should be solely decided by the environment. Instead, the "input dependent" setting is more reasonable, where the exit point is decided by both the resource constraint/environmental factor, and also the data sample itself, such that the coordination can happen between samples. Specifically, samples that achieve high ground-truth class probability can be early exited to save time for low ground-truth class samples, while their overall throughput or average inference speed still satisfies the resource constraint. This is a more intelligent strategy to improve the general prediction quality than "letting the environment dictate the exit point". 
--- Reply to Comment 1.1.1: Comment: We thank you for your reply. In our work, we assume that the model sees one data point at a time when deployed (in line with previous literature on anytime algorithms, see [1] and [2]). We acknowledge that we have not made this explicit enough in the current version of the manuscript. We will add some clarifying sentences to our manuscript and would like to thank you for pointing this out. In addition, we would highlight that in the seminal MSDNet paper [1], the authors have distinguished two potential uses of early-exit neural networks (EENNs). - The first, termed the “anytime prediction,” is when the environment dictates when computation should be stopped. This is equivalent to saying that the computational budget is unknown beforehand. Moreover, here the model sees a single datapoint at a time. The goal is to build a model that can use additional compute to increase the quality of the prediction, i.e., it should start with a crude initial prediction and then refine it given more resources/time. - The second, termed “budgeted batch classification” (but often also “dynamic/efficient inference” or “input dependent inference”) is when the computational budget is known beforehand. The user can hence allocate a budget for performing computation on a batch of examples, and the goal is to distribute the computational budget across the examples to maximize the overall accuracy. If our reading of your response is correct, you are referring to this setting. While we agree with the authors of MSDNet on the two proposed use cases of EENNs, we believe that important differences between the two settings, in terms of their desiderata and potential applications, have been overlooked. To address this, our work focuses on the first setting. We demonstrate that an early-exit neural network like MSDNet falls short in fulfilling a fundamental prerequisite of anytime prediction: monotonicity. 
Furthermore, we put forth a post-hoc solution to make EENNs more suitable in the anytime regime. Of course, *hybrid schemes* in which a batch of inputs is processed with an environment-decided overall budget are also possible. However, this is outside of the scope of our paper. Even if you are particularly interested in such a hybrid scheme, we hope you can appreciate the need for better distinction between the two settings (anytime prediction vs. budgeted batch classification) and see the value in a well-focused paper that solves one problem well. [1] Huang, G., et al., 2017. Multi-scale dense networks for resource efficient image classification. *ICLR* [2] Zilberstein, S., 1996. Using anytime algorithms in intelligent systems. *AI magazine*
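Tying together the four-line snippet from our earlier reply and the zero-distribution fallback discussed above, a minimal numpy sketch of PA for a single sample (variable names are ours; the $b_i$ offsets of Eq. 4 are omitted for brevity):

```python
import numpy as np

def pa_probs(logits, weights):
    """PA for one sample: logits (t, C) for the first t exits, weights (t,).
    Falls back to the last exit's softmax if the ReLU product collapses
    to the all-zero distribution (the rare case discussed above)."""
    scores = np.clip(logits, 0, None) ** weights[:, None]  # ReLU + PoE weights
    scores = scores.prod(axis=0)                           # product ensemble
    if scores.sum() == 0:                                  # zero-distribution collapse
        z = logits[-1] - logits[-1].max()
        scores = np.exp(z)                                 # softmax fallback
    return scores / scores.sum()

# two exits, three classes, with weights w_i = i / M
logits = np.array([[1.0, 2.0, -0.5],
                   [0.5, 3.0, -1.0]])
probs = pa_probs(logits, np.array([1 / 2, 2 / 2]))
assert np.isclose(probs.sum(), 1.0) and probs[2] == 0.0  # ReLU zeroes class 3
```

Note how a class whose logit goes negative at any exit receives zero mass at all later exits, which is what shrinks the support over time.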
Summary: The paper first reviews the definition & key properties of anytime models (Zilberstein, 1996) and argues to focus on equipping the EENNs with conditional monotonicity (in addition to their built-in interruptibility, Sec 2). The paper then verifies that existing EENNs lack this critical property (Fig 1), proposes the PoE-based post-hoc modification with ReLU relaxation to encourage conditional monotonicity (Sec 4.2), and validates its accuracy (Fig 2), monotonicity (Fig 3) and uncertainty (Fig 4) on CIFAR-10/100 and ImageNet with MSDNet, IMTA and DViT. The paper also presents results on ablations (Sec B.2), calibration gap (adaptive thresholding & fine-tuning, Sec B.3), NLP (Sec B.4), ensembling, regression (Sec B.6) and different degrees of ReLU relaxation (Sec B.8). Strengths: + [Originality] The paper novelly modifies the PoE formulation to achieve conditional monotonicity, making it sufficiently and advantageously different from existing works e.g. (Wolczyk et al., 2021). + [Clarity] The paper is well written and easy to follow. All figures are well made and the appendix is very informative. + [Quality] The paper is of good quality, with the proposed method relatively well motivated and designed, and the empirical evaluation (including results in the appendix) relatively comprehensive, although further improvements can be made (see Weaknesses). + [Significance] Given the importance of the topic, even though the results are not necessarily perfect (but decent enough), I think this work is reasonably promising and could aid/inspire future research. Weaknesses: - [Evaluation] 1) Given the criticality of the relaxation/approximation of the Heaviside function, it’s highly desirable to see more thorough analyses of and comparisons between different approximations, e.g. sigmoid (seemingly much more logical than ReLU), clipped ReLU, learnable Padé approximation [1], etc., in addition to Sec B.8. 
2) The exponent $i/M$ of the proposed PA method (Eq 4) is clearly a powerful inductive bias ensuring the network starts with low confidence, which should deserve more analyses such as: (a) How do the baselines perform when equipped with this simple feature or corresponding adjustments on the softmax temperature? (b) How does $i/M$ compare to other progressions or constant exponents? (c) Does $i/M$ adversely limit the confidence on simpler test samples such that e.g. early exits become ineffective? [1] Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks, ICLR, 2020. Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: - Are $w_i$ and $b_i$ in Eq 4 learnable during fine-tuning? Why or why not? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have reasonably addressed the paper’s limitations in Sec 6.3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your feedback and help in improving our work. > Given the criticality of the relaxation/approximation of the Heaviside function, it’s highly desirable to see more thorough analyses of and comparisons between different approximations, e.g. sigmoid … We fully agree with your observation on the importance of the activation function. We would politely draw your attention to Appendix B.2 where we compare our choice of ReLU against Heaviside and exponential (which corresponds to softmax), as well as to Appendix B.8 where we report the results for clipped ReLU (even though we do not call it such in the current version, we will update this in the next version of our manuscript). Your suggestion regarding the sigmoid function is appreciated. Initially, we also considered it a viable option; however, upon further examination, we found it to have two drawbacks that make it less applicable in our context. Firstly, the sigmoid function lacks the "nullifying" effect of ReLU. Consequently, the support of classes diminishes more slowly, and we do not observe the same anytime uncertainty behaviour as we do with ReLU (c.f. Section 6.2). Secondly, logits with larger values all map to the same value, approximately 1, under the sigmoid function. This can negatively impact accuracy, as it does not preserve the ranking. Figure R.2 (see attached PDF) illustrates the results using the sigmoid activation function for the CIFAR-100 dataset with MSDNet as a backbone: **our choice of ReLU clearly outperforms sigmoid activation**. We will prepare an Appendix section in the camera-ready version and include these results there. Thank you for your suggestion. We want to underscore at this point that our use of activation functions deviates from their traditional usage within the deep learning literature. 
In the context of our work, when we refer to activation functions, we are discussing the function that maps logits to probabilities in the final layer of the network. Conventionally, however, an activation function refers to the non-linearity that dictates which neurons are propagated between layers. This distinction may not have been sufficiently clear in the current version of our manuscript. We appreciate your observation and will correct this in the camera-ready version. Regarding PAU activation [1]: thanks for pointing out this work; we were unaware of it before. However, based on our understanding, this activation has trainable parameters that are usually fitted with the rest of the NNs’ parameters. As such, it is not entirely compatible with our post-hoc approach. Related to the previous paragraph, it also seems PAU is used more as a non-linearity between layers rather than a function that maps logits to probabilities in the final layer. That being said, it's plausible that PAU might be a suitable choice for the activation function in our fine-tuning approach (see Appendix B.3.2). We plan to investigate this possibility in our future work. > 2a) How do the baselines perform when equipped with this simple feature or corresponding adjustments on the softmax temperature? We tried finding the optimal temperature for the baseline model (MSDNet), see *Monotonicity via Calibration* section in the global rebuttal above. In short, temperature scaling helps with the calibration of the baseline model. However, it still significantly underperforms our PA when it comes to achieving monotonicity. > 2b) How does $i/M$  compare to other progressions or constant exponents? > Are $w_i$ and $b_i$ in Eq 4 learnable during fine-tuning? Why or why not? 
We experimented with various coefficient schemes, such as:
- $w_i = 1$
- $w_i =$ np.linspace(1, L/2)[i]
- learning $w_i$ during fine-tuning, similar to the ZTW paper [2]

**However, we found that none of these alternative weights consistently outperformed our choice of $w_i=i/M$** across different datasets and backbone models, both in terms of marginal accuracy and conditional monotonicity. In the camera-ready version of our paper, we will include a separate section in the Appendix detailing this ablation study on the selection of $w_i$. > 2c) Does $i/M$ adversely limit the confidence on simpler test samples such that e.g. early exits become ineffective? In the anytime setting upon which we're focusing, our method (utilizing $i/M$ coefficients) maintains the marginal accuracy (see Figure 2). As such, we conclude that our method does not render the initial exits ineffective. If your comment is directed towards the efficient/input-dependent inference setting, kindly refer to the *Anytime Prediction vs. Input-dependent/Efficient Inference Setting* section in the global rebuttal above. [1] Molina, A., et al., 2020. Padé Activation Units: End-to-end Learning of Flexible Activation Functions in Deep Networks. *ICLR* [2] Wołczyk, M., et al., 2021. Zero time waste: Recycling predictions in early exit neural networks. *NeurIPS* --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Thank you for the detailed response and additional results! In addition to the Android example in the paper, I think another possibly more attractive scenario would be multitasking in a (medical, industrial, etc.) system with strict real-time requirements, where the anytime model and other processes (possibly with higher priorities) compete for and get varying amounts of resources (e.g. CPU cycles) per time slot (e.g. one video frame), so the anytime model must work with the environment to deliver as-good-as-possible results with the given (varying) budget. 
--- Reply to Comment 1.1.1: Comment: Thank you for your response! Your suggestion is much appreciated; we will incorporate it into our introduction, to further motivate the need for anytime models.
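As a coda to the activation-function discussion in this thread: the monotonicity intuition behind the PoE construction is easiest to see with the hard Heaviside variant compared in Appendix B.2, where pruned classes never regain mass and the survivors are renormalized upward. A toy sketch (our construction, not code from the paper):

```python
import numpy as np

def heaviside_pa(exit_logits):
    """PoE with hard Heaviside experts: each exit keeps only the classes with
    a positive logit, and mass is renormalized over the survivors.
    (Assumes at least one class survives every exit; otherwise the softmax
    fallback discussed in the other threads would apply.)"""
    alive = np.ones(exit_logits.shape[1])
    trajectory = []
    for z in exit_logits:
        alive = alive * (z > 0)                # pruned classes never come back
        trajectory.append(alive / alive.sum())  # uniform over survivors
    return np.array(trajectory)

# 3 exits, 4 classes; class 0 (the ground truth) survives every exit
Z = np.array([[1.0,  2.0,  0.5, -1.0],
              [0.5, -1.0,  1.0, -2.0],
              [2.0, -0.5, -1.0,  1.0]])
H = heaviside_pa(Z)
# the ground-truth probability never decreases: 1/3 -> 1/2 -> 1
assert np.all(np.diff(H[:, 0]) >= 0)
```

As long as the ground-truth class survives, its probability can only grow as other classes are pruned; ReLU relaxes the hard 0/1 vote, trading this guarantee for better accuracy.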
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable time and feedback; it’s much appreciated. We are encouraged that the reviewers found “the paper well written and easy to follow” (G2Sm), thought that “the studied problem is interesting and important” (NWJy), “often overlooked by the relevant literature” (ne1V), and that we “tackle a practical and important problem that should be solved” (kJKM). Moreover, we are pleased to hear that our proposed method is “well motivated and designed” (G2Sm) and that “its effectiveness is also supported by theoretical results” (6rYn). Importantly, reviewers found the experiments “comprehensive” (NWJy & G2Sm & kJKM), “supporting the claims made in the paper” (kJKM), and “wide and insightful” (ne1V). Finally, we are excited that the reviewers recognized that our work “can motivate further research in the topic” (ne1v) and “could aid/inspire future research” (G2Sm). We next address some of the main concerns raised in the reviews. **Anytime Prediction vs. Input-Dependent/Efficient Inference setting** (kJKM, ne1V, NWJy) We make no claims in our paper regarding the use of our model in an input-dependent/efficient inference setting (a.k.a. budgeted batch classification [1]). **Our focus is solely on the anytime scenario**, where the computational budget is unknown or dynamic, and exiting is determined by the environment (cf. “Setting of Interest”, lines 72-81). Since our solution is very lightweight (4 lines of Python code) and can be applied post-hoc, it is easy to “turn on and off”. Hence, the same early-exit model, e.g., MSDNet, can easily be used for either scenario as required. One of our main goals is to highlight the distinction between the two settings and their different requirements. We believe that this is missing in the current literature where the desiderata of anytime setting have been largely overlooked (as evidenced by the highly non-monotone behavior of SOTA ‘anytime’ models, cf. 
Figure 1). Reviewer NWJy has suggested that it would be interesting to study the effect of monotonicity in the budgeted batch classification scenario too. This is a great idea, but it is out of the scope of the current paper. We plan to investigate it in future work. **Monotonicity via Calibration** (6rYN, kJKM, G2Sm, ne1V) Some reviewers questioned whether monotonicity could be achieved by calibrating the underlying early-exit NN, rather than employing our proposed PA. To answer this, we conducted a new experiment where we calibrated MSDNet and analyzed its effect on monotonicity. For the calibration, we utilized three standard techniques: temperature scaling [2], deep ensembles [3], and last-layer Laplace [4]. In Figure R.1 (see attached pdf) we present results for MSDNet on CIFAR-100. While all three approaches lead to better ECE (right plot), our PA significantly outperforms them in conditional monotonicity (middle plot). Moreover, all three baselines are (arguably) more complex than our PA: temperature scaling requires validation data, deep ensembles have M-times more parameters (M = ensemble size), and Laplace is significantly slower than PA since we need test-time sampling to estimate the predictive posterior at each exit. However, simply improving calibration might not be expected to improve monotonicity. Thus, in Figure R.1, we also provide results for combining the above uncertainty calibration techniques with our CA baseline, as suggested by reviewer 6rYn. We see that this combination does indeed provide further improvements w.r.t. monotonicity – the improved calibration allows for better caching of the correct prediction. **However, the monotonicity of these caching-and-calibrated methods still underperforms compared to PA, despite being more complicated**. We plan to include these results in our paper's camera-ready version, as they neatly demonstrate that monotonicity cannot be achieved via calibration alone. 
This insight further underlines the value of our proposed approach. We thank the reviewers for this suggestion. **Calibration Gap** (6rYN, kJKM, ne1V) Some reviewers raised concerns that our post-hoc method can hurt ECE in earlier exits (cf. Figure 5). While we share this concern, we would like to emphasize that this potential drawback must be considered alongside the notable advantages of our method. These include the introduction of conditional monotonicity and enhanced uncertainty estimates (cf. Figure 4, right). Additionally, note that poor ECE early on stems from under-confidence rather than over-confidence. We would argue such an inductive bias is appropriate in EENNs, especially in safety-critical scenarios. **Moreover, in addition to pointing out this limitation ourselves, we propose two concrete ways to improve the calibration in earlier exits in the paper**, one based on training/finetuning with the Product Anytime (PA) objective and one based on using ReLU with adaptive thresholding. We describe both in detail and present their impact on monotonicity/calibration in Appendix B.3. Both yield very promising results when it comes to closing the calibration gap, albeit at some cost to monotonicity/accuracy. We will also include a description of both approaches in the main text using the additional page in the camera-ready. Lastly, we believe that our paper is stronger due to our honest discussion of both the pros and cons of the method, which we hope will facilitate future work and easy adoption for practitioners. We further address all reviewers' remaining concerns and questions in individual responses. [1] Huang, G., et al., 2017. Multi-scale dense networks for resource efficient image classification. *ICLR* [2] Guo, C., et al., 2017. On calibration of modern neural networks. *ICML* [3] Lakshminarayanan, B., et al., 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. *NeurIPS* [4] Daxberger, E., et al., 2021. 
Laplace redux - effortless Bayesian deep learning. *NeurIPS* Pdf: /pdf/4f5f2ca7778f3e62dc6e95fe8bfe3316c16e0736.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a post-hoc modification of early-exit neural networks. Specifically, the authors focus on the conditional monotonicity out of four properties presented in a prior work. A training-based method is also proposed to improve the model confidence. Strengths: The proposed method based on product-of-experts improves conditional monotonicity and its effectiveness is also supported by theoretical results. Weaknesses: - Some writing is somewhat verbose/redundant. For example, while properties of anytime models take a considerable amount of space, they are not that meaningful at last. - Plots on "% of Test Samples vs. Max y* Prob. Decrease" should be described in a better way, in that what it means and why do we care about it. - As addressed in Fig. 5, the proposed method results in hurting model confidence. It is good to see that fine-tuning with the proposed learning objective can improve this, but it makes the point of this work distracted; recall that this paper means to propose a post-hoc method. - How PA Finetune is done is not explained enough. - I am not sure if we really need to care about monotonicity. Enforcing to have monotonicity might be harmful, as shown in some curves in Fig. 2. Instead, assuming that the model is calibrated well, we could rely on the confidence of intermediate result and exit early regardless of the time budget for the best performance. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please address concerns in weaknesses. --- post rebuttal I appreciate the authors' active responses during the author-reviewer discussion period. Most of my concerns have been solved, however, the main concern on the necessity of (conditional) monotonicity for EENNs is still not yet solved. Hence, I increase my rating, but not by much. In my opinion, when proposing a new metric, the metric itself should either 1) be meaningful by itself or 2) have a strong positive correlation with the main goal. 
The monotonicity is not meaningful by itself (a model producing a constant output has perfect monotonicity, which is not useful in practice) and the correlation with the overall performance (accuracy) is not so significant. To provide successful cases as examples, ECE in confidence calibration is crucial when considering the reliability of model outputs, and forgetting in continual learning has a positive correlation with the average accuracy. Regarding the scalability, the monotonicity measure seems scalable, but ECE is not for the proposed method. To me, confidence calibration is one of the most important topics in deep learning for successful deployment to real-world applications, so non-scalable ECE is my major concern. In short, increasing the monotonicity leads to slightly improved overall accuracy, while significantly deteriorating confidence calibration in the large-scale experiment on ImageNet. However, given that conditional monotonicity is important (and given that ECE is somewhat problematic), I agree that the post-hoc method proposed by the authors is simple yet effective in terms of the proposed monotonicity metric. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Nothing special. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
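Since ECE is central to the review's post-rebuttal discussion, here is a minimal sketch of the standard binned ECE computation it refers to (an illustrative implementation with made-up inputs, not the paper's evaluation code):

```python
import numpy as np

def ece(confidences, correct, n_bins=10):
    """Binned Expected Calibration Error: the weighted average gap between
    mean confidence and empirical accuracy within each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = len(confidences)
    err = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # right-closed bins
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        err += (in_bin.sum() / total) * gap
    return err

# A model whose confidence matches its accuracy has ECE 0; a model that is
# 95% confident but always wrong has ECE 0.95.
assert abs(ece([0.8] * 10, [1] * 8 + [0] * 2)) < 1e-9
assert abs(ece([0.95, 0.95], [0, 0]) - 0.95) < 1e-9
```

A reliability diagram visualizes the same per-bin accuracy-vs-confidence gaps that this single number averages.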
Rebuttal 1: Rebuttal: Thank you for your feedback and help in improving our paper. > Some writing is somewhat verbose/redundant. For example, while properties of anytime models take a considerable amount of space, they are not that meaningful at last. While we focus exclusively on the monotonicity property, we have included a discussion on other relevant anytime desiderata (i.e., consistency and diminishing returns) to outline promising directions for future research. One of our goals in this work was to highlight that the field has largely neglected these desiderata. If you disagree with this inclusion, we are prepared to relocate the information to the Appendix. Please let us know if you identified any other instances of redundant writing. We would be happy to address these issues. > Plots on "% of Test Samples vs. Max y* Prob. Decrease" should be described in a better way, in that what it means and why do we care about it. Concerning the question, "why do we care about it," this plot is designed to capture the percentage of test datapoints that display non-monotone trajectories in performance quality. A non-monotone performance trajectory signifies that the model's performance deteriorates with more evaluated exits, a behavior that stands in contradiction to the monotonicity desideratum outlined in Section 2 (see Properties of Anytime Models). Thus, this plot enables us to compare different models with respect to their monotonicity. Specifically, models with lower monotonicity curves (i.e., closer to y=0) are more effective at utilizing additional computational budget to refine their predictions. Also, we would politely point out two paragraphs on lines 150-166 and the caption of Figure 1, where we tried to explain our conditional monotonicity plots in detail. Should you have concrete suggestions on enhancing the description of this plot further, we will gladly integrate those into the camera-ready version. > As addressed in Fig. 
5, the proposed method results in hurting model confidence. It is good to see that fine-tuning with the proposed learning objective can improve this, but it makes the point of this work distracted; recall that this paper means to propose a post-hoc method. Please refer to the *Calibration Gap* section in the global rebuttal above. We must also respectfully disagree with the statement that the fine-tuning "distracts from the point of our work." We view the post-hoc nature of PA as a highly desirable property, allowing our method to be easily compatible with most existing early-exit NNs. Thus, we have emphasized the post-hoc version of PA in the main paper. However, recognizing that preserving calibration can be important in some applications, we have also proposed a fine-tuning version that helps close the post-hoc approach's calibration gap. > How PA Finetune is done is not explained enough. Due to space constraints, this didn't make it into the main text. However, we intend to incorporate it using the extra page allocated for the camera-ready. Moreover, we would like to draw your attention to Appendix B.3.2, where the PA finetuning is explained in detail. If you have specific suggestions for enhancing our description of the fine-tuning approach, we would be more than happy to incorporate them. > I am not sure if we really need to care about monotonicity We must respectfully disagree on this point. Monotonicity, as a key property of anytime models, was introduced by Zilberstein in 1996 [1]. This seminal work, which has garnered over 1000 citations since its publication, has motivated most of the modern anytime models [2, 3]. **All other reviewers recognized the value of monotonicity.** > Enforcing to have monotonicity might be harmful, as shown in some curves in Fig. 
2 While our PA method may slightly decrease marginal accuracy in some instances (CIFAR-10 DViT, ImageNet DViT, ImageNet IMTA), we emphasize that the performance drop is rather marginal (amounting to less than 1%), and that this is the exception, not the rule. Considering that a lack of monotonicity can be detrimental in the anytime setting (since more computation is not guaranteed to improve performance), such minor decline in marginal performance (i.e., < 1%) is in our opinion an acceptable trade-off for a significantly more monotone model. > Instead, assuming that the model is calibrated well, we could rely on the confidence of intermediate result and exit early regardless of the time budget for the best performance. Please refer to the *Monotonicity via Calibration* section in the global rebuttal above. Moreover, your comment regarding the strategy of relying "on the confidence of intermediate result and exiting early regardless of the time budget for optimal performance" may be related to our Caching Anytime (CA) baseline (c.f. Section 4.1). In CA, the most confident prediction up to that point is cached and returned when prompted by the environment (instead of the latest prediction). While this approach is indeed a viable alternative to our PA method for enhancing monotonicity, our experimental results show that it falls short of PA in terms of both marginal accuracy (Figure 2) and conditional monotonicity (Figure 3) in most cases. > **Rating:** 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility Based on your score of 3, it appears that you perceive a technical flaw, weak evaluation, or inadequate reproducibility in our work. **We would sincerely appreciate further elaboration on these concerns, as we are confident that any such concerns can be addressed in this rebuttal period**. [1] Zilberstein, S., 1996. Using anytime algorithms in intelligent systems. *AI magazine* [2] Grubb, A., et al., 2012.
Speedboost: Anytime prediction with uniform near-optimality. *AISTATS* [3] Huang, G., et al., 2017. Multi-scale dense networks for resource efficient image classification. *ICLR* --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thank you for your response. Below I leave my comments/questions after reading the rebuttal. > while properties of anytime models take a considerable amount of space, they are not that meaningful at last. I don't have a strong opinion on this. If the authors and other reviewers agree that these desiderata other than monotonicity are worth discussing, then the current state might be okay. > Plots on "% of Test Samples vs. Max y* Prob. Decrease" Providing an example scenario with specified numbers would be helpful to understand what it means (e.g., an EENN test case with fluctuating or monotonic y* over layers). It would also be nice if there is a metric that quantifies conditional monotonicity by a single number, similar to ECE for confidence calibration. > the proposed method results in hurting model confidence. (fine-tuning) makes the point of this work distracted. Regarding the authors' answer "the post-hoc nature of PA as a highly desirable property," could you clarify if you want to improve the model confidence or performance over layers? To my understanding, conditional monotonicity is about the robustness of the model prediction (or confidence calibration) over layers (whether the model exhibits better confidence as it goes deeper) rather than improving the performance over layers (whether the model exhibits better performance as it goes deeper). However, the experimental results show that the proposed method hurts the calibration of the model confidence, i.e., it increases ECE. By the way, plots on "% of Test Samples vs. Max y* Prob. Decrease" seem to be somewhat problematic, because a model producing fixed outputs regardless of the layer index exhibits perfect conditional monotonicity.
By looking at the experimental results in Figure 12, "PA finetune" overall decreases the test accuracy, so I wonder if PA is a well-defined learning objective that results in better optimization. In the worst case, it would work as a random learning objective that interferes with learning the main task, such that it makes the model just less confident and less accurate. Then, the improved conditional monotonicity or ECE might come from the random noise on the learning objective. To address this, the authors may want to check the gradient and optimal state of the PA objective, and try training a model from scratch rather than fine-tuning a pre-trained one. > Enforcing to have monotonicity might be harmful. Authors responded that "Considering that a lack of monotonicity can be detrimental in the anytime setting (since more computation is not guaranteed to improve performance), such minor decline in marginal performance (i.e., < 1%) is in our opinion an acceptable trade-off for a significantly more monotone model." The proposed method not only decreases the performance (which might be okay if the gap is not significant), but also harms confidence calibration, as measured by ECE. It seems the authors wanted to claim that the proposed method exhibits better uncertainty by the conformal set size comparison. In my opinion, the authors should have focused more on discussing the discrepancy between ECE and the conformal set size comparison. At this point, ECE is the only reliable metric to me. > (additional question 1) According to Figure 12, PA finetune makes the ECE strictly worse on ImageNet, i.e., the observation in Figure 5 is not scalable. I wonder how the authors think about the scalability of the proposed method, especially considering the result in Figure 12. I think the early-exit scenario is meaningful mostly when we face a large-scale problem, but it seems the observation in this work is not well-scaled to large-scale settings.
> (additional question 2) In Figure R.1, why does temperature scaling affect the test accuracy? As I understand, it scales all logits by the same factor, so the final results should not be changed. --- Reply to Comment 1.1.1: Comment: Thank you for your response and further engagement. We are encouraged to note that most of your concerns from the initial review appear to have been addressed. > Providing an example scenario with specified numbers would be helpful to understand what it means […] We tried to do so in lines 153-156. If you feel this is not sufficient, or that it would be more interpretable to present ground-truth probability trajectories in a table format, let us know and we will incorporate this. > It would also be nice if there is a metric that quantifies conditional monotonicity by a single number […] We appreciate this suggestion. One suitable metric here would be the area under the conditional monotonicity curve (AUC), where a lower value denotes a better, i.e., more monotone, model. Below we report this metric for the CIFAR-100 dataset:

| | MSDNet | IMTA | DViT |
| --- | --- | --- | --- |
| baseline | 19.45 | 12.37 | 5.53 |
| PA | 0.26 | 0.42 | 0.01 |

We plan to incorporate AUC values in the camera-ready. However, we'll present these alongside our existing plots. We feel that the visual representation of conditional monotonicity provided by these plots offers a more intuitive understanding than a standalone metric (similar to how a reliability diagram helps in understanding ECE). > plots […] seem to be somewhat problematic, because a model producing fixed outputs regardless of the layer index exhibits perfect conditional monotonicity. We fully agree with your observation here, i.e., a model that has ground-truth probability equal to 0 at each exit achieves perfect conditional monotonicity. That is precisely the reason why we never study conditional monotonicity in isolation, but always in combination with marginal accuracy.
This is analogous to ECE - a random MNIST classifier that predicts each class with probability 0.1 will achieve perfect ECE (i.e., 0). Hence, one should always study ECE in combination with accuracy. > could you clarify if you want to improve the model confidence or performance over layers? […] In our response here we assume that by model confidence you are referring to prediction confidence, i.e., $\max_y p_m(y | \boldsymbol{x})$. Let us know if we have misunderstood. Our primary aim is to improve performance quality over exits and not the model confidence (c.f. lines 118-132). What could potentially be a source of confusion here is the measure with which we estimate the performance quality, i.e., the probability of the ground-truth class $p_m(y^*|\boldsymbol{x})$. While this quantity is related to model (or prediction) confidence, the two are not equivalent. Here we would also like to draw your attention to Appendix B.1, where we show that our PA solution leads to more monotone behaviour even when other measures of performance quality are used, e.g., correctness of prediction $[\arg\max_y p_m(y|\boldsymbol{x}) = y^*]$. However, it is true that in Section 6.2 and Appendix B.7 we additionally argue for monotonicity in the uncertainty estimates in anytime models. Since model/prediction confidence is one possible way to estimate the uncertainty, we recognise that this could be another potential source of confusion. We will make the distinction between performance quality and confidence more explicit in the camera-ready. > […] "PA finetune" overall decreases the test accuracy, so I wonder if PA is a well-defined learning objective […] random learning objective [...] improved conditional monotonicity or ECE might come from the random noise on the learning objective. […] We would respectfully push back here. PA-finetune achieves accuracy on par with MSDNet on CIFAR datasets and 2-3% worse accuracy on ImageNet.
While we agree that these results could be better, we find it hard to believe that this could be an indication of “random learning objective”. PA-finetune is well motivated - the calibration gap highlighted in our main paper primarily stems from the difference between the training and testing objectives. Specifically, MSDNet is trained using the softmax parametrization of the categorical likelihood, while our PA employs the ReLU version. PA-finetune directly addresses this mismatch by exposing the model to ReLU likelihood already during training. Therefore, we disagree with the notion that the enhanced ECE results from "random noise." Additionally, we provide theoretical motivation for the improved conditional monotonicity associated with our PA approaches, as detailed in Appendix A. Given this, we maintain that it's highly improbable for the improvements in monotonicity to arise merely from "random noise”. Furthermore, we attempted to train models from scratch using the PA-finetune objective. Although such models do converge—serving as additional empirical evidence against the notion of a "random learning objective"—we discovered that training from scratch doesn't outperform the finetuning method. Consequently, our focus in Appendix B.3.2 is on the finetuning approach. Title: Response Part 1
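On the temperature-scaling point raised in additional question 2 above: dividing all logits by a constant T > 0 before the softmax changes the confidence values but never the argmax, so predicted labels (and hence test accuracy) cannot change. A minimal check of this fact (illustrative code, not the experiment behind Figure R.1):

```python
import numpy as np

def temperature_scale(logits, T):
    """Softmax over logits divided by temperature T > 0."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

logits = np.array([[2.0, 1.0, 0.1],
                   [0.3, 0.2, 2.5]])
for T in (0.5, 1.0, 5.0):
    # the predicted class per row is invariant to T
    assert (temperature_scale(logits, T).argmax(axis=1)
            == logits.argmax(axis=1)).all()
```

This supports the reviewer's premise that temperature scaling alone cannot change accuracy, although it does change calibration-related quantities such as ECE.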
null
null
null
null
null
null
Strategic Data Sharing between Competitors
Accept (poster)
Summary: This paper uses algorithmic game theory to study potential data sharing between competing firms that train machine learning models with similar goals. The authors propose a new framework that can be used to analyze trade-offs faced by data-using agents who make decisions about whether to collaborate (via data sharing) to improve their own ML performance, while potentially losing out on profits because they also improved their competitors' ML performance. The authors draw on conventional market models from academic economics to make predictions about how different structural factors might impact collaboration decisions. This analysis has implications for the development of regulations and norms pertaining to data sharing. Strengths: This paper has a number of strengths. The motivation, combination of theoretical arguments with simulation evidence, and reasonable use of assumptions all stood out. First, the paper’s motivation is quite strong. The authors argue that the incentives underlying data sharing between firms operating machine learning models are understudied, and that using the lens of theoretical modeling can highlight how market conditions might impact sharing behaviors. The work combines theory (drawing on published economics literature) and a simulation experiment. I thought this combination was convincing: the experimental component is likely to help readers understand the implications of the theoretical framework. There are a lot of assumptions at play, but they seem fairly plausible on the whole. Thinking in terms of whether this theory-focused paper could guide practical decision-making (by firms) or policy making (by regulators), the current draft devotes enough attention to arguing for plausibility. I do think there’s room to see more discussion of where these assumptions are more or less plausible (see below), though this may be out of scope for a theory paper that’s trying to propose a new framework and ask for some re-thinking.
Weaknesses: In my view, the main threat to the validity and impact of the work is the reliance on assumptions from conventional economics. The authors are very upfront about signposting which prior works are most foundational (e.g. the 1979 work defining the representative consumer with quasi-linear quadratic utility, and classical models of competition that focus on quantities vs. prices) but empirical validation could help this line of work have more impact in the long term. Put another way, the current draft does a great job of pointing out relevant literature that it builds off, but could do more to explicitly state why certain assumptions fit a specific empirical context. Of course, the authors may not wish to zoom in on a specific context, which is a fair choice for scoping the paper. The paper also discusses data as a “key asset” (e.g. in the Introduction) in a very broad sense. None of the claims are situated relative to specific use cases of data and ML. Examples like ad tech and a “production process” are gestured at, and of course the running example of taxi driver scheduling is helpful. On the whole, however, I think it may be hard for readers to assess how contextually dependent some of the claims and results are. Overall, the impact of the work could be improved by clarifying the extent to which these analyses are or are not contextually dependent (even in terms of specific numerical characterizations of data scaling behavior / “data impact model”). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Early in the Introduction, free-riding and non-collaboration concerns are mentioned. The authors may want to engage briefly with sociological work on collective action (though perhaps this is better suited for future work and out of scope for this paper).
I think a big open question for future work along these lines is which types of data are best handled with an “economic-leaning” model vs a “sociological-leaning” model that includes non-monetary incentives facing individual data generating agents. In general, the framework also seems to lack any notion of the possibility of commons or public goods. I could imagine adding an additional stage (or several) to the game to account for this. Fully engaging with this topic is almost certainly out of scope given the current space constraints, but perhaps worth a brief mention. While I expect the core audience for this kind of paper (i.e. readers familiar with some of the references already or interested in data collection games) will follow most sections, there’s potential to strengthen the broader impact of the paper just by adding a bit more high-level summarization of each section. Section 3.1 and the end of Section 5 do a nice job of this kind of discussion, but there are opportunities to emphasize this kind of recap in other sections. As a minor comment, it may help the paper to discuss whether data-dependent products lend themselves to Bertrand vs. Cournot competition. It seems Cournot may be preferred, but I didn’t quite understand why. This relates to my main high-level comment about the work, which is that I think readers will want to know how different assumptions here map to different data-dependent technology contexts. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The work is reasonable in discussing its own limitations. In terms of societal implications, I think the discussion in the paper is commensurate to potential concerns.
Overall, the paper is a bit light on discussion of how data sharing would work in practice (not touching on topics like public goods, anti-trust concerns, consumer welfare, etc.) but I think this is reasonable given space constraints and paper scope. As noted above, I think some engagement with literature on collective action, commons, and public goods could be insightful, or even a brief mention of why the authors think the collective action / commons perspective on data-dependent technologies is not relevant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
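For readers unfamiliar with the competition models raised in the review, here is a minimal two-firm Cournot sketch with linear inverse demand p = a - b(q1 + q2) and constant marginal costs (a textbook parametrization with made-up numbers, not the market model used in the paper):

```python
def cournot_equilibrium(a, b, c1, c2):
    """Closed-form Nash equilibrium quantities for two-firm Cournot
    competition with inverse demand p = a - b*(q1 + q2) and costs c1, c2."""
    q1 = (a - 2.0 * c1 + c2) / (3.0 * b)
    q2 = (a - 2.0 * c2 + c1) / (3.0 * b)
    return q1, q2

def best_response(a, b, c_i, q_j):
    """Firm i's profit-maximizing quantity given the rival's output q_j."""
    return max(0.0, (a - c_i - b * q_j) / (2.0 * b))

# Iterated best responses converge to the closed-form equilibrium.
a, b, c1, c2 = 10.0, 1.0, 1.0, 2.0
q1 = q2 = 0.0
for _ in range(60):
    q1 = best_response(a, b, c1, q2)
    q2 = best_response(a, b, c2, q1)
e1, e2 = cournot_equilibrium(a, b, c1, c2)
assert abs(q1 - e1) < 1e-9 and abs(q2 - e2) < 1e-9
```

In Bertrand competition, firms set prices rather than quantities; with homogeneous goods this drives price toward marginal cost, which is one reason the two models can yield quite different predictions about sharing incentives.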
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. In the following, we address the weaknesses and questions raised in your review. **Weaknesses** **In my view, the main threat to the validity and impact of the work is the reliance on assumptions from conventional economics … The current draft … could do more to explicitly state why certain assumptions fit a specific empirical context. Of course, the authors may not wish to zoom in on a specific context, which is a fair choice for scoping the paper.** Thank you for your comments. We certainly agree that designing and validating market and data impact models for specific applications is important for the framework's applicability, and we deem this direction interesting for future work. The current paper indeed stays away from focusing on a specific real-world application: please refer to our general response for a thorough justification on this matter. **The paper also discusses data as a "key asset" (e.g. in the Introduction) in a very broad sense. None of the claims are situated relative to specific use cases of data and ML … Overall, the impact of the work could be improved by clarifying the extent to which these analyses are or are not contextually dependent** Thank you for your valuable suggestion. Indeed, parts of the framework may vary depending on the context, owing to the generality of the framework. We plan to provide further explanation of the framework's adaptation by giving more specific examples of relevant industries (also suggested by Reviewer 4Akt) and discussing example numerical characterizations of our models (also suggested by Reviewer Jw59). **Questions** **The authors may want to engage briefly with sociological work on collective action (though perhaps this is better suited for future work and out of scope for this paper).
I think a big open question for future work along these lines is which types of data are best handled with an "economic-leaning" model vs a "sociological-leaning" model that includes non-monetary incentives facing individual data generating agents.** Thank you for your valuable suggestion. We certainly agree that analyzing the differences between economic and sociological modeling of data sharing is an exciting direction for future work. We believe that our framework can capture other (e.g., sociological) data-sharing approaches by adapting the collaboration scheme. We see this as orthogonal to the focus of the current work, which studies non-cooperative games since they align with the classic economic modeling of firms as rational, profit-driven, and self-interested. However, we will be happy to add a discussion on possible sociological extensions as interesting future work. In fact, Appendix E provides some evidence about the relevance of collective action theory toward finding widely beneficial schemes for data sharing. This section analyzes global welfare in the context of the data-sharing problem. The result suggests that the welfare is maximized when data sharing occurs (in the case of full data sharing between two firms). **In general, the framework also seems to lack any notion of the possibility of commons or public goods. I could imagine adding an additional stage (or several) to the game to account for this. Fully engaging with this topic is almost certainly out of scope given the current space constraints, but perhaps worth a brief mention.** We certainly agree that the public goods perspective might be relevant here since data is non-rival. However, data does not seem to be the commons because it belongs to either companies or individuals in most legal systems. At the same time, machine learning models and data do not seem to be public goods since they are excludable. 
Therefore, we are unsure how to include the discussion of public goods in our framework. That said, one aspect that may be possible to model is the creation of public datasets (e.g., ImageNet). **There's potential to strengthen the broader impact of the paper just by adding a bit more high-level summarization of each section.** Thank you for the valuable suggestion. We will seek to provide further summaries and intuition for the next version of the paper. **It may help the paper to discuss whether data-dependent products lend themselves to Bertrand vs. Cournot competition. It seems Cournot may be preferred, but I didn't quite understand why. I think readers will want to know how different assumptions here map to different data-dependent technology contexts.** We focus on Bertrand and Cournot competition because these models are the most popular in empirical research and are a basis for all other competition models. In some cases, we only provided the analysis for the Cournot model since the derivations were less tedious with the parametrization of the market model adopted in the paper. However, we expect these results to also transfer to the Bertrand model. Regarding the mapping to different contexts, completely answering this question would require more than speculation on our side: it calls for a separate economic study, since we are not aware of any current economic work that empirically studies the question of data sharing for machine learning (see also our main response). We hope to discuss how one could adapt our framework to real-world contexts in the next version of our paper, for instance by covering examples of relevant industries and numerical characterizations of our models, as mentioned above. --- Rebuttal Comment 1.1: Title: Rebuttal helps address areas for improvement Comment: Thanks for this rebuttal, authors!
Overall, I think the points raised here are fair (e.g., challenges with incorporating public goods directly into this current work, the discussion of using a global welfare lens). I thought the general argument for avoiding a hyper-specific data use-case here is fair as well. Overall, I appreciate this additional information from the authors, and it sounds like it will be possible to build on some of the areas for improvement in the camera-ready. I think this paper will be a valuable contribution to the conference. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for your timely response! We appreciate your constructive and positive feedback, which we will incorporate in the next version of the paper. In particular, we will discuss the non-rivalry of data and the obstacles to collective action in collaborative learning, as well as the positioning of this work as a general framework that opens the door for application-specific models.
Summary: The authors present a tractable model for optimising the benefits of data sharing, based on the valuation of data and the benefits of the shared datasets for the inference model, within a conventional market model. The authors point out that the model can be used to find an optimal data-sharing strategy. The model is nicely developed and mathematically sound, presented clearly in an understandable manner. It states the assumptions used in the development of the model and provides a good summary of the consequences, the effect of the simplicity of the learning task, and the similarity of the products built using the shared datasets. The mean coalition size is shown as a function of these parameters. Strengths: The paper is well written and goes through the relevant steps in the creation of the model. It provides excellent argumentation and conclusions from the assumptions made. Weaknesses: The biggest weakness I see in the paper is its assumption that the collaborative partners share the same data distribution, with the data modelled as i.i.d. samples of a common distribution. This is generally the situation in which there is less sense in sharing data, as each party has the capability, in time, to get a representative dataset that suffices for a highly accurate inferred model. The real problem, however, arises when the data-sharing partners work with differently distributed data, where the parties do not have the capability to reproduce the data that the other parties have. This is for example the case where medical X-rays are taken with different X-ray machines in a diverse set of hospitals, and the aim is to build an inference model that works generally across a variety of machine types. Another example, again in the medical domain, is where the ethnic origin of the subjects has to be taken into account. Then one cannot really build a good model that is ethnically non-discriminating without schemes for sharing data.
Another aspect that has not been taken into account is the effect of legislation, for example the EU Data Governance Act, which mandates data sharing for data recorded from what the act calls connected devices. Also relevant are the effects of data sharing based on the EU GDPR rights of subjects, not companies, to share their personal data with third parties. These require more complicated data-strategy considerations that should be discussed to make the paper valuable for real evaluation of companies' data-sharing strategies. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I would like the authors to address the points discussed in the weaknesses part of the review, especially redoing their data-value analysis for cases where not including the data from others would lead to discriminative AI models that are generally not acceptable. Also, even the current analysis should contain an estimate of how long it would take to build one's own model (and an estimate of the cost of being late to the market with AI features) compared to the complications of sharing the data. The equivalent extra time for attaining a similar product alone could already be estimated with the current model in the manuscript. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The limitations are discussed in the weaknesses part of the review; these are the major weakness of an otherwise very good paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. In the following, we address the weaknesses and questions raised in your review. **Weaknesses** **The assumption that the collaborative partners share the same data distribution and the data is modeled as i.i.d. samples of a common distribution.** We certainly agree that heterogeneity can be an important aspect in some collaborative learning settings. Please refer to our shared response for a discussion of the numerous challenges and orthogonal incentives arising when considering a heterogeneous setting. In particular, we would like to highlight that since directly incorporating data from other distributions may actually damage the model performance (due to the additional data being too different), heterogeneity yields additional strategic data-sharing considerations (Donahue & Kleinberg, 2021), which are orthogonal to the trade-off considered in this work between improving your model versus risking increased competition. Additionally, we note that we consider a simple model of heterogeneity in Appendix C.4. We believe that our framework allows for considering even more intricate models of heterogeneity, provided that an appropriate data impact model is formulated. We certainly agree with the reviewer that this is an interesting direction for future work. Donahue, K. and Kleinberg, J. Model-sharing games: Analyzing federated learning under voluntary participation. In: *AAAI Conference on Artificial Intelligence*, 2021. **One cannot really build a good model that is ethnically non-discriminating without schemes for sharing data (in the heterogeneous case).** We certainly agree that fairness and non-discrimination are important concerns in collaborative learning. In fact, one can explicitly analyze fairness concerns within our framework.
To do so, one could consider the impact of fairness constraints on learning outcomes by changing the data impact model and adding diversity constraints on the action spaces of the firms within their collaboration scheme. Additionally, we note that already in the homogeneous case, our framework provides a possibility to improve these measures in practice. In particular, even for homogeneous data, the performance of ML models on rare population groups can be significantly improved by having access to more data from the same distribution (and therefore observing more of the rare group samples). **Another aspect that has not been taken into account is the effect of legislation, for example the EU Data Governance Act, which mandates data sharing for data recorded from what the act calls connected devices.** Thank you for the valuable suggestion. One can explicitly incorporate and study the effect of legislation using our framework by constraining the action spaces of the firms in the collaboration scheme (for example, by forcing them to share some parts of their data regardless of other data-sharing actions). We see this as an exciting direction for future work, though orthogonal to the current focus of the paper, which analyzes the data-sharing trade-off from the perspective of market incentives only. Additionally, we believe that our work is aligned with the values and objectives of the EU Data Governance Act, since it shows that data sharing can be beneficial even between competing market participants and without the presence of legal requirements to share data. **The effects of data sharing based on EU GDPR rights of subjects, not companies, to share their personal data with third parties … should be discussed to make the paper valuable for real evaluation of companies' data-sharing strategies.** Thank you for the valuable suggestion. We will discuss data ownership aspects in the next version of the manuscript.
We expect that this requirement can also be incorporated into our framework. For example, in Section 6, one can implement sharing-consent constraints by constraining the firms' choices of $\lambda$. While, previously, the firms were able to share all data, $\lambda \in [0, 1]$, now they can share only the data of people who agree to data sharing, $\lambda \in [0, \lambda_\text{consent}]$. **Questions** **Even the current analysis should contain an estimate of how long it would take to build one's own model (and an estimate of the cost of being late to the market with AI features) compared to the complications of sharing the data. The equivalent extra time for attaining a similar product alone could already be estimated with the current model in the manuscript.** Thank you for the valuable suggestion. While training and local data collection costs are certainly interesting to consider, we see these aspects as orthogonal to the focus of our work, which is on the trade-off between receiving access to competitors' data and risking increased market competition. While training can be a bottleneck in some cases (for example, in the context of foundation models), we target the fairly common situation where the lack of sufficient data (and the expensiveness of its collection) is of much bigger concern. Additionally, we are unsure how the reviewer suggests estimating the training costs using our current framework. In order to estimate the effect of being late with the ML model, one needs to model temporal characteristics of the market, as well as costs for local data generation. Neither of these aspects is currently modeled. We will be happy to expand on our answer if the reviewer's concern was related to different effects or if some other misunderstanding occurred. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. The consent coefficient is a good addition.
It also covers the case of copyrighted data, as there is a push (despite the text and data mining exception) to require licences for the training use of copyrighted works, and companies are starting to voluntarily accept this. On the training-cost use case, I really meant foundation models, where it really matters. For training these, gaining as large and diverse a dataset as possible is crucial. I think the considerations of this paper are of greatest value in this context, as most of the advances in AI are currently emerging from transformer-like technologies, be it protein folding, automatic coding, image generation, etc. The need for large models will overshadow others, not only in use, but also in the costs of data (and of human-labelling-based grounding), the costs of training, and the costs of running. There is an entanglement of these in deployment that one cannot easily factorise into independent components (like a stand-alone data strategy) without interfacing with the other factors. In this context, there is also other "data" that can be shared and is valuable, such as encodings of texts (and images) and the weights of pre-trained neural networks, which can reduce training costs; these can also be reduced by giving out data and asking the recipient to train the model and return the trained model. This way of paying training costs with data is actually quite common. Training costs can be estimated by quoting AWS, Azure, Google Cloud, etc. GPU time pricing. --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for your timely and detailed response! We completely agree with the reviewer that foundation models are increasingly prevalent, and their training costs are not negligible. We will be happy to provide a discussion in our manuscript regarding training costs as a possible consideration that may additionally come into play in the context of foundation models.
While training costs are certainly interesting to model, we note that they give rise to many incentives orthogonal to the trade-off studied in this work. In particular, sharing computation brings up the aspect of fair client and server compensation for training costs, which is a different line of work in the FL incentives literature (Tu et al., 2022). Additionally, we see several obstacles to directly modelling training costs in the context of our problem. Specifically, this will likely require application-specific (and potentially proprietary) information on how these costs are actually incurred. First, it is unclear how the companies would negotiate the splitting of the training costs. For example, they might split the costs equally among all coalition members, they might split them proportionally to the sizes of their datasets, or, as the reviewer suggested, the central server might bear all the costs but receive data as compensation. Similarly, there are multiple options for how the firms would do inference. They may receive a copy of the final model and use it locally, or they may leave the model at the central server only. In the latter case, it is unclear how they would pay for inference. For instance, they might pay for each query, they might get a quota proportional to their data contribution, or they might auction the server's inference time. We will be happy to elaborate on these considerations in the next version of the manuscript. Tu, X., Zhu, K., Luong, N. C., Niyato, D., Zhang, Y., and Li, J. Incentive mechanisms for federated learning: From economic and game theoretic perspective. In: *IEEE Transactions on Cognitive Communications and Networking*, 2022.
Summary: This paper explores the dilemma faced by firms when considering strategic data sharing with their competitors. It introduces a framework to investigate the incentives for data sharing and examines its impact on collaboration and profitability. The author discusses the barriers to data sharing, such as privacy concerns, and proposes a market model, data impact model, and collaboration scheme as components of the framework. The findings suggest that reduced competition and harder learning tasks foster collaboration. An illustrative example of a taxi market is provided to demonstrate the concepts discussed. Overall, this study aims to understand how market competition affects collaborative learning incentives and provides insights into the data-sharing trade-off. Strengths: - The study introduces a novel and comprehensive framework to analyze data sharing between competitors, considering factors such as the machine learning model quality's impact on production cost. - The study investigates the incentives for data sharing and examines the impact of market conditions, product similarities, the complexity of the learning task, and firm size on collaboration incentives. Albeit simply based on a conventional market model and a natural model of data impact grounded in learning theory, the findings are inspiring and the exposition of the results is clear and easy to follow. Weaknesses: - Since this research is primarily about economic modeling and analysis, it would be great to see a discussion of various real-world examples in context rather than just one taxi-market example and an oil-market example in the appendix. - This study uses simulation to examine collaboration incentives among multiple firms, but it would be valuable to discuss the limitations of the simulations and potential biases inherent in such studies.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The study mentions the use of a data impact model grounded in learning theory, but it doesn't provide detailed information about the model's validity or a discussion of alternative formulations. Are there any other forms of the data impact model that could also be considered? - I can see there are plenty of extensions provided in the supplementary materials; is there a way to get comparable results for an analog of the problem that considers a sequential game (such as Stackelberg) rather than a simultaneous game? This setup may happen when some mega tech companies get the drop on other small firms, and most often the big companies take the advantage. - Big companies tend to have more bargaining power than small firms, and the cost functions are often asymmetric. Does Theorem 6.1 still hold when the bigger company has a different cost function, as discussed in C.3? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors didn't discuss the limitations of their theoretical results. This study is mainly theoretical and primarily focused on introducing a novel framework for the strategic data-sharing dilemma. There is still plenty of room for in-depth investigations into data-sharing studies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. In the following, we address the weaknesses and questions raised in your review. **Weaknesses** **It would be great to see a discussion of the various real-world examples in the context** Thank you for your valuable suggestion. We will add more examples of industries where such problems arise in the next version of the manuscript. We expect that such problems are particularly relevant for industries such as agriculture, finance, and healthcare (Durrant et al., 2022; FedAI; Rieke et al., 2020), as well as other industries interested in cross-silo FL (Kairouz et al., 2021). Durrant, A., Markovic, M., Matthews, D., May, D., Enright, J., and Leontidis, G. The role of cross-silo federated learning in facilitating data sharing in the agri-food sector. In: *Computers and Electronics in Agriculture*, 2022. FedAI. WeBank and Swiss Re signed Cooperation MoU. https://www.fedai.org/news/webank-and-swiss-re-signed-cooperation-mou Rieke, N., Hancox, J., Li, W., Milletari, F., Roth, H. R., Albarqouni, S., Bakas, S., Galtier, M. N., Landman, B. A., and Maier-Hein, K. The future of digital health with federated learning. In: *NPJ Digital Medicine*, 2020. Kairouz, P., et al. Advances and open problems in federated learning. In: *Foundations and Trends in Machine Learning*, 2021. **It would be valuable to discuss the limitations of the simulations and potential biases inherent in such studies.** Thank you for pointing this out. We provide a brief discussion here and will update the manuscript's next version accordingly. First, our simulation solves each instance of the data-sharing game exactly, so we do not anticipate any biases arising here. Moreover, given that we average over many independent runs (10000), we expect our results to be very close to the true expectation.
Second, we analyze this game to test whether the findings from Sections 5 and 6 apply to more complex scenarios involving multiple companies. Therefore, we see Section 7 as further validation of the theoretical findings in the paper rather than a complete characterization of the data-sharing trade-off between multiple firms. Investigating other negotiation schemes between multiple companies will certainly be an interesting direction for future work. **Questions** **No detailed information about the data impact model's validity and discussion about alternative formulations. Can other forms of data impact models be considered?** We refer to the shared response for a discussion of the practical evaluation of the data impact (and market) model. Additionally, please see our response to Reviewer Jw59 for how we expect our results to transfer in the case of the other data scaling laws suggested by Viering et al. (2022). We also note that Appendices C.3 and C.4 briefly discuss different data impact models. We will make sure to better highlight these aspects in future versions of the manuscript. Overall, we consider our parametric family to be quite general, at least in the i.i.d. setting, since it covers multiple model-quality convergence rates grounded in standard statistics and ML theory frameworks. In practical applications, we expect the best parametric form of the data impact model to be sensitive to the real-world context. **Is there a way to get comparable results for an analog of the problem that considers a sequential game (such as Stackelberg) rather than a simultaneous game?** Thank you for the valuable suggestion. There are several ways to include sequential decision-making in our framework. One can assume sequential competition (for example, according to the Stackelberg model) during the competition phase. Coalition formation could also be made sequential (as in Section 7).
If one wants to cover the entry and exit of firms, one could also iterate our data-sharing game multiple times (similar to super-games in the industrial organization literature). We would be happy to discuss other possibilities if the reviewer provides more details about their setup of interest. **Does Theorem 6.1 still hold when the bigger company has a different cost function, as discussed in C.3?** Qualitatively, the solution to the bargaining problem should not change, but the numerical constants will be different. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have a follow-up comment to Q2 and Q3: - The setup I'm interested in is when some mega tech companies have a different form of costs (maybe lower in $\beta_i$, since they have more resources) than small tech companies and often move first in the market (a Stackelberg game). This may be an overly detailed setup, but it's more common in real life. And it's intuitively a potential special case for scenarios where companies find it hard to collaborate. It may provide more insight to the whole context if it turns out to be true. --- Reply to Comment 1.1.1: Title: Analysis of a Stackelberg setup Comment: Thank you for your constructive feedback and for the clarification! Following your suggestion, we investigated a setup where the competition phase corresponds to a Stackelberg game between two companies. As you suggested, the big company $F_1$ has a better cost function ($\beta_1 > \beta_2$) and more data ($n_1 > n_2$). We consider the full data-sharing negotiation scheme from Section 5.
Repeating our two-firm analysis (see Appendices A.2, A.3, and A.4) for the Cournot-based Stackelberg game (e.g., Boyer & Moreaux, 1987), where the first firm decides on quantities before the second one, we get the following collaboration criteria: $$\varPi_{1, \text{ind}}^e \le \varPi_{1, \text{share}}^e \iff \gamma (n_2^{-\beta_2} - (n_1 + n_2)^{-\beta_2}) \le 2 (n_1^{-\beta_1} - (n_1 + n_2)^{-\beta_1}),$$ $$\varPi_{2, \text{ind}}^e \le \varPi_{2, \text{share}}^e \iff \gamma (n_1^{-\beta_1} - (n_1 + n_2)^{-\beta_1}) \le \Bigl(2 - \frac{\gamma^2}{2} \Bigr) (n_2^{-\beta_2} - (n_1 + n_2)^{-\beta_2}).$$ Here, the first company's incentives to collaborate do not change compared to the Cournot case, while the second company's incentives decrease $\Bigl(2 - \frac{\gamma^2}{2} - \gamma \le 2 - \gamma \Bigr)$. Despite the reduced incentives for the second firm, since $\beta_1 > \beta_2$ and $n_1 > n_2$, the second condition always holds, both in the Stackelberg setup presented here and in the context of Theorem C.1 in Appendix C.3. Therefore, the smaller company will always want to collaborate. Since the incentives of the first firm are unchanged in both cases, there is no change in the likelihood of collaboration compared to the Cournot case. Interestingly, the first firm will have larger profits in the Stackelberg setup, compared to the setup in Appendix C.3, since it could always choose Cournot equilibrium quantities at the first stage of the competition and get the Cournot equilibrium (Anderson & Engers, 1992). We hope the reviewer finds this analysis interesting and relevant to their proposed setup. Please let us know if you have any further questions; we will be happy to address them. Boyer, M., & Moreaux, M. On Stackelberg equilibria with differentiated products: The critical role of the strategy space. The Journal of Industrial Economics, 217-230, 1987. Anderson, S. P., & Engers, M. Stackelberg versus Cournot oligopoly equilibrium. 
International Journal of Industrial Organization, 10(1), 127-135, 1992.
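The two collaboration criteria stated in the reply above can be probed numerically. Below is a minimal sketch (not from the paper: the function and all parameter values are hypothetical), writing $d_i = n_i^{-\beta_i} - (n_1 + n_2)^{-\beta_i}$ for firm $i$'s cost reduction from pooling the data:

```python
def wants_to_share(n1, n2, beta1, beta2, gamma):
    """Evaluate the two Stackelberg collaboration criteria stated above.

    Under the power-law data impact model, d_i is the drop in firm i's
    unit-cost term when the two datasets are pooled. Returns a pair of
    booleans: (firm 1 wants to share, firm 2 wants to share).
    """
    N = n1 + n2
    d1 = n1 ** (-beta1) - N ** (-beta1)  # firm 1's gain from pooled data
    d2 = n2 ** (-beta2) - N ** (-beta2)  # firm 2's gain from pooled data
    firm1 = gamma * d2 <= 2 * d1
    firm2 = gamma * d1 <= (2 - gamma ** 2 / 2) * d2
    return firm1, firm2

# Hypothetical regime from the discussion: the leader F1 has more data and
# a faster cost decay (n1 > n2, beta1 > beta2).
print(wants_to_share(200, 100, 0.6, 0.5, 0.3))  # -> (True, True)
print(wants_to_share(200, 100, 0.6, 0.5, 0.5))  # -> (False, True)
```

With these illustrative numbers, raising the substitutability $\gamma$ flips the larger firm's decision while the smaller firm still benefits, in line with the trade-off discussed in the thread.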
Summary: * This paper analyzes the economic consequences of data sharing between competitors. * Interaction is modeled as a market: * The market has $m$ firms ($F_1,\dots,F_m$), where firm $i$ produces $q_i$ units of good $G_i$ at price $p_i\ge 0$ and quality $v_i$. * In addition, there are “outside goods” $\{1,\dots,k\}$ offered at fixed prices $\tilde{p}_l$. * Each consumer $j$ optimizes their utility $u^j$ by deciding on consumption of firm goods $q^j\in\mathbb{R}^m_+$ and outside goods $g^j\in\mathbb{R}_+^k$ under budget constraint $B^j$, leading firms to maximize their expected utility $\mathbb{E}_v[p_i q_i - C_i(q_i,v_i)]$. * Different solution concepts are considered: firms act either by deciding on $q_i$ (Cournot competition), prices $p_i$ (Bertrand competition), or by strategically considering the response of their competitors (Nash equilibrium). * In Section 4, a concrete market model is instantiated: consumers make decisions according to a quasi-quadratic utility model characterized by a substitutability parameter $\gamma$, and the cost associated with each firm is $C_i(q_i,v_i)=c_i q_i$, where $c_i$ depends on the quality of the machine learning model. Machine learning quality is assumed to affect production costs only. * For the analysis, data is assumed to be homogeneous (data of all firms is sampled independently from the same distribution), and the coefficient $c_i$ is assumed to take a concrete power-law parametric form ($c_i = a+b/n^\beta$). Attention is restricted to competition between two firms. * Theorem 5.1 characterizes the equilibria as a function of the $\gamma$ parameter and the ratio between the amounts of data collected by the two firms. Theorem 6.1 characterizes the equilibrium in the case of partial data sharing. Finally, a numerical simulation is conducted on a competition setting with more than two firms. Strengths: * Topic is well motivated.
* Clean presentation, connects contemporary topics in machine learning to classic economic notions in a creative and interesting way. Weaknesses: * Parametric assumptions for the main theorems are not validated against real-world data. It is not clear which parameter regimes are likely in practice. * Data homogeneity assumption may be too conservative - homogeneity means that datasets collected by all firms are assumed to be sampled independently from the same distribution. This is unlikely to be the case - for instance, in the taxi running example, different operators are likely to observe different data distributions, e.g. because they operate in different parts of town. * Limitations of the method are not thoroughly discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * In the proposed model, do the firms have the ability to invest part of their budget in independent data collection? (e.g. by conducting surveys to improve training-set quality, or buying data from an external provider which is not a direct competitor). If the market model does not include this possibility, what would be the consequences of considering it? * What are typical values of the constants $a$, $b$, $\gamma$ in real-world systems, and why? * L224: “We assume that $b/(1-a)$ is small enough...” - Which values of $b/(1-a)$ are small enough for the lemmas to hold? Are they realistic in real-world systems? * How would the results be different if the cost $c_i$ had a different parametric form? For example, using similar arguments to the ones presented in the paper, one could consider costs that relate to the parametric forms described in Viering et al. 2022 (Table 1). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not thoroughly discussed. In particular, I feel it would be helpful to provide an indication of where we expect the main assumptions in this paper to hold, and discuss the consequences of making wrong assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
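To make the cost parametrization in the review's summary concrete, here is a small sketch of a differentiated Cournot duopoly with the power-law cost model $c_i = a + b\,n_i^{-\beta}$. It assumes the textbook linear inverse demand $p_i = 1 - q_i - \gamma q_j$ and the standard equilibrium formulas for that demand system; the paper's exact parametrization may differ, and all numerical values are illustrative only:

```python
def cournot_profits(c1, c2, gamma):
    """Equilibrium profits in a differentiated Cournot duopoly with inverse
    demand p_i = 1 - q_i - gamma * q_j and constant unit costs c1, c2."""
    q1 = (2 * (1 - c1) - gamma * (1 - c2)) / (4 - gamma ** 2)
    q2 = (2 * (1 - c2) - gamma * (1 - c1)) / (4 - gamma ** 2)
    return q1 ** 2, q2 ** 2  # at equilibrium, each firm's margin equals its quantity

def cost(n, a=0.01, b=0.002, beta=0.5):
    """Power-law data impact model: unit cost falls with dataset size n."""
    return a + b * n ** (-beta)

n1, n2, gamma = 400, 100, 0.5
solo = cournot_profits(cost(n1), cost(n2), gamma)              # no data sharing
pooled = cournot_profits(cost(n1 + n2), cost(n1 + n2), gamma)  # full data sharing
# With these numbers, the smaller firm gains from pooling while the larger
# firm loses slightly: exactly the trade-off the paper studies.
print(solo, pooled)
```

The design point is that pooling equalizes costs, so the data-rich firm trades a cost advantage for a (small) absolute cost reduction, which can leave it worse off when products are close substitutes.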
Rebuttal 1: Rebuttal: Thank you for your valuable and constructive feedback. In the following, we address the weaknesses and questions raised in your review. **Weaknesses** **Parametric assumptions for main theorems are not validated against real-world data. Not clear which parameter regimes are likely in practice.** Thank you for your feedback. We show how one can reason about parameter values in our response in the "Questions" section below. As we note in the general response, a systematic analysis and validation of a real-world market model and the most exact data impact model are probably outside our submission's scope. Please refer to the main response for our motivation. **Data homogeneity assumption may be too conservative** We certainly agree that heterogeneity may be an important aspect in some settings. Please refer to our shared response for a discussion of the numerous challenges and orthogonal incentives arising when considering a heterogeneous setting. Additionally, we note that a simple model of heterogeneity is considered in Appendix C.4. Our framework allows for even more intricate models of heterogeneity, provided that an appropriate data impact model is formulated. **Questions** **In the proposed model, do the firms have the ability to invest part of their budget in independent data collection? … If the market model does not include this possibility, what would be the consequences of considering it?** Thank you for the valuable suggestion. One can incorporate this possibility within our framework by adding a data-collection stage of the game before the data-sharing stage. Intuitively, it should incentivize firms with small amounts of data to gather more of it in order to be accepted into desirable coalitions. We expect the results of Theorem 5.1 to stay qualitatively the same, but this mechanism may lower the threshold for collaboration.
**What are typical values of the constants $a, b$, $\gamma$ in real-world systems, and why?** While our market model is widely used in economic theory, one typically uses domain-specific models to describe real-world systems. As we pointed out in the general response, we hope that our three-component decomposition of the data-sharing problem will help practitioners to leverage their situation-specific knowledge of their market and ML models to formulate an appropriate application-specific model. That said, the parameters in our manuscript do have natural interpretations that can inform relevant magnitudes based on economic intuition. In particular, $\gamma$ should be positive (since we consider competing firms) but not very close to $1$ (since firms usually do not produce identical products). The parameter $a$ should correspond to the costs of the firms with a perfect ML model, and $a + b$ should correspond to the costs of the firms with a very bad model. Since $1$ should correspond to the largest possible price for a product (when there is only an infinitesimally small amount of this good on the market), the ratio $\frac{1}{a}$ should roughly correspond to the ratio of the highest possible price of the product (like during supply shortage) to the lowest possible price. For example, this multiplier for the natural gas in the EU during 2020–2022 was around 50 (FRED), implying that $a$ was around $\frac{1}{100}$. As for the ratio of $\frac{b}{a+b}$, we could use consulting firms' data about the impact of machine learning on costs. In Espel et al. (2020), this ratio was around $\frac{1}{5}$, yielding values of $b$ around $\frac{1}{500}$. FRED. Global price of Natural gas, EU. https://fred.stlouisfed.org/series/PNGASEUUSDM Espel, P., Herbener, M., Rupprecht, F., Schröpfer, C., and Venus, A. How industrial companies can cut their indirect costs—fast. 
https://www.mckinsey.com/industries/automotive-and-assembly/our-insights/how-industrial-companies-can-cut-their-indirect-costs-fast, McKinsey, 2020. **Which values of $b / (1 - a)$ are small enough for the lemmas to hold? Are they realistic in real-world systems?** We provided these restrictions in the proofs of our lemmas (Appendices A.3, L490 and A.4, L504). For example, in the Cournot case, the restriction would follow from the following property: $1 - a > m b$. Given our discussion of magnitudes in the previous paragraph, we expect this property to hold. Intuitively, in our market model, the ratio being small is equivalent to the assumption that firms do not exit the market during the competition stage. While modeling firms' entry into and exit from the market is certainly interesting, we see this mechanism as complementary to our focus on the trade-off described in this work between improving your model versus risking increased competition. **How would the results be different if the cost had a different parametric form? For example, costs that relate to the parametric forms described in Viering et al. 2022 (Table 1).** We do not expect the results to change qualitatively. However, instead of a threshold for the ratio between the number of data points, we will have another disparity function between them (for example, difference), and the notion of task simplicity would be different. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough and helpful response! In particular, I appreciate the order-of-magnitude estimation of model parameters, and I believe that such grounding significantly strengthens the presented results. Given the suggestions you made for improvement, I view the paper as a step in the right direction, and I believe it can facilitate fruitful discussions within the community. I'm increasing my rating to 7 (Accept). --- Reply to Comment 1.1.1: Title: Thank you for your response! Comment: Thank you for your timely response! 
We appreciate your constructive and positive feedback, which we will incorporate in the next version of the manuscript. In particular, we will include the discussion on the model parameters $a, b, \gamma$.
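As a supplement to the parameter discussion above, the order-of-magnitude reasoning can be written as a quick back-of-the-envelope check. This is a minimal sketch: the numeric values are the illustrative magnitudes quoted in the rebuttal (not fitted data), and the range of `m` (number of firms) is assumed for illustration.

```python
# Back-of-the-envelope check of the market-model parameter magnitudes discussed
# above. All numbers are illustrative values from the rebuttal, not fitted data;
# the range of m (number of firms) is assumed for illustration.

a = 1 / 100   # from the ~50x highest-to-lowest price ratio (EU natural gas, FRED)
b = 1 / 500   # from the ~1/5 ML cost-impact ratio (Espel et al., 2020)

# Cournot-case restriction from Appendix A.3: 1 - a > m * b.
for m in (2, 10, 100):
    assert 1 - a > m * b, f"restriction fails for m={m}"

print(1 - a > 100 * b)  # True: the restriction holds even with 100 firms
```

With these magnitudes, b / (1 - a) ≈ 0.002, consistent with the "small ratio" regime that the lemmas require.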
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable and constructive feedback. Below we provide a general response to several recurring comments. We also offer individual answers for all remaining questions. **Real-world evaluation of the market and data impact models.** Multiple reviewers brought up the importance of real-world evaluation of the market and data impact models studied in Sections 4-7. While we recognize the significance of this concern, we see the core contributions of our paper as a general framework for the data-sharing trade-off and general qualitative insights into this problem. Our approach is more theoretical in nature and does not aim to provide a fully realistic model that can directly inform practitioners. We will make sure to further clarify the scope of our work in the next version of the manuscript. We opted for a theoretical approach for the following reasons. First, we do not anticipate a one-size-fits-all model to cover all data-sharing scenarios, considering the diversity of relevant industries (e.g., online platforms, finance, agriculture, healthcare) and machine learning tasks (e.g., mean estimation, regression, deep learning). In practice, designing the most appropriate models will likely be an application-dependent problem, and we hope that our three-component decomposition of the data-sharing problem will help practitioners to leverage their situation-specific knowledge of their market and ML models. Second, both economics and machine learning lack a definitive theory to predict parametric forms of market and data impact models for particular situations. In economics, market modeling is more often art than science, and parametric models (e.g., Berry et al., 1995; Nevo, 2001) require the scope of an entire economics article to explain and justify (since they require us to model counterfactuals). In machine learning, the generalization of deep models, especially for non-i.i.d. data, remains poorly understood. 
Third, already with the basic market and data impact models, a complete analytical solution to the problem in Sections 6 and 7 remains elusive (although we provided various qualitative insights about it). With more nuanced market and data impact models, we expect to get only numerical results without rigorous theoretical characterizations. At the same time, we used a market model widely adopted in the theoretical literature (Choné & Linnemer, 2019) and a data impact model motivated by established theoretical frameworks in machine learning (Tsybakov, 2004). Given the wide adoption of these parametric families, we hope that our results will be useful and qualitatively valid in real-world data-sharing settings. **The assumption of homogeneous data** We certainly agree that heterogeneity is an important concern in collaborative learning. In the case of significant heterogeneity, our framework will only require appropriate changes to the data impact model. We expect that designing such a model is a significant challenge orthogonal to the focus of our work (see, for example, Gulrajani & Lopez-Paz, 2020). Moreover, in some cases, additional data from a different distribution may damage model performance, yielding additional data-sharing considerations (Donahue & Kleinberg, 2021). Given these concerns, we decided to focus on the homogeneous case and also discuss a simple model of heterogeneous learning in Appendix C.4. We certainly agree that the analysis of heterogeneity is an exciting direction for future work. Berry, S., Levinsohn, J., and Pakes, A. Automobile Prices in Market Equilibrium. In: *Econometrica: Journal of the Econometric Society*, 1995. Choné, P. and Linnemer, L. The quasilinear quadratic utility model: An overview. *CESifo Working Paper*, 2019. Donahue, K. and Kleinberg, J. Model-sharing games: Analyzing federated learning under voluntary participation. In: *AAAI Conference on Artificial Intelligence*, 2021. Gulrajani, I. and Lopez-Paz, D. 
In Search of Lost Domain Generalization. In: *International Conference on Learning Representations (ICLR)*, 2020. Nevo, A. Measuring market power in the ready‐to‐eat cereal industry. In: *Econometrica*, 2001. Tsybakov, A. B. Optimal aggregation of classifiers in statistical learning. In: *The Annals of Statistics*, 2004.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
A Deep Instance Generative Framework for MILP Solvers Under Limited Data Availability
Accept (spotlight)
Summary: The paper discusses the recent surge in using machine learning techniques for solving combinatorial optimization problems, particularly mixed-integer linear programs (MILPs). However, the limited availability of real-world instances often hinders optimal decision-making and unbiased solver assessments. To address this issue, the proposed solution is G2MILP, a deep generative framework for MILP instances. G2MILP represents MILP instances as bipartite graphs and employs a masked variational autoencoder to corrupt and replace parts of the graphs iteratively, generating new instances. This approach learns to generate realistic MILP instances without relying on expert-designed formulations, while preserving the structures and computational complexity of real-world datasets. The generated instances can assist in improving MILP solvers when data availability is limited. A set of benchmarks is designed to evaluate the quality of the generated MILP instances, and experimental results demonstrate their resemblance to real-world datasets in terms of both structure and computational difficulty. Strengths: 1. The paper takes a traditionally hard problem and proposes a novel solution to it. 2. Experimental results seem very compelling; however, the lack of a good baseline makes it hard to judge how good the method actually is. 3. Presentation is good and clear. Weaknesses: A minor issue might be the lack of a proper baseline; it's quite hard for readers and reviewers to assess how good this method actually is. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is it possible to provide some "fair" comparisons on downstream proxy tasks with other popular methods? I must confess that I am not very familiar with the specific subdomain. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
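As background for the bipartite encoding mentioned in the summary, the following minimal Python sketch builds a variable-constraint bipartite graph from a MILP of the form min cᵀx s.t. Ax ≤ b. The node/edge features here are simplified assumptions for illustration; G2MILP's actual feature set, as specified in the paper, is richer.

```python
import numpy as np

# Minimal sketch of the variable-constraint bipartite encoding of a MILP
#   min c^T x  s.t.  A x <= b.
# Constraint nodes carry the right-hand side b_i, variable nodes carry the
# objective coefficient c_j, and an edge (i, j) with weight A[i, j] exists
# iff A[i, j] != 0. (This only illustrates the topology, not G2MILP's features.)

def milp_to_bipartite(A, b, c):
    cons_nodes = [{"rhs": float(bi)} for bi in b]
    var_nodes = [{"obj": float(cj)} for cj in c]
    edges = [(i, j, float(A[i, j]))
             for i in range(A.shape[0])
             for j in range(A.shape[1])
             if A[i, j] != 0]
    return cons_nodes, var_nodes, edges

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
cons, var, edges = milp_to_bipartite(A, b=[4.0, 5.0], c=[1.0, 1.0, 1.0])
print(len(edges))  # 4 nonzero coefficients -> 4 edges
```

The sparsity of A is preserved exactly, which is why graph statistics of these bipartite graphs (degrees, density, etc.) are natural similarity measures between instance sets.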
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work. **Weakness.** > some minor issue might be lack of proper baseline, it's quite hard for the readers and reviewers to assess how good this method actually is. - Thanks for your comment! **We want to emphasize that G2MILP, to the best of our knowledge, is the first learning-based method to generate realistic MILP instances**. We open up a new research direction, whereas previous methods cannot deal with such a task. Therefore, we can hardly find strong existing baselines. - **Nonetheless, we have carefully designed two baselines, i.e., Bowly and Random, for a fair comparison.** - **Bowly** is a traditional rule-based MILP instance generation technique. It generates instances from scratch by randomly sampling the coefficients. It was not designed to learn from datasets to generate realistic instances, and we can only tune simple statistics such as problem sizes and coefficient means. For a fair comparison, we keep all controllable parameters the same as the training sets. Unsurprisingly, it does not work well on the new task. - We design the **Random** baseline as a comparison to demonstrate the advantage of G2MILP, especially the deep learning modules. The comparison with this baseline demonstrates that modifying MILPs while preserving both graph semantic structure and difficulty is nontrivial, and our proposed deep learning framework is able to achieve that. 
- **For a better comparison, we add an additional experiment to compare G2MILP with G2SAT.** Specifically, we adapt G2SAT to a special MILP dataset, MIS, in which all coefficients are 1.0 and thus the instances can be modeled as homogeneous bipartite graphs. Notice that G2SAT can only be applied on homogeneous bipartite graphs, and so it only works on such special MILP data. We apply G2SAT to learn to generate new graphs and convert them to MILPs. Results are in **Table 9** in the newly submitted PDF. The results show that G2MILP significantly outperforms G2SAT on the special cases. For your convenience, we quote Table 9 here. **Table 9: Results of G2SAT on MIS. In the table, ''sim'' denotes similarity score (higher is better), ''time'' denotes solving time, and ''\#branch'' denotes the number of branching nodes, respectively. Numbers in brackets denote relative errors (lower is better).** | | sim | time (s) | #branch | | -------------------- | ----- | ------------- | ------------ | | Training Set | 0.998 | 0.349 | 16.09 | | G2SAT | 0.572 | 0.014 (96.0%) | 2.11 (86.9%) | | G2MILP ($\eta=0.01$) | 0.997 | 0.354 (1.5%) | 15.03 (6.6%) | | G2MILP ($\eta=0.1$) | 0.895 | 0.214 (38.7%) | 4.61 (71.3%) | **Question.** > Is it possible to provide some "fair" comparisons on downstream proxy tasks with other popular methods? - Thanks for your suggestion! Previous popular methods include **instance generation methods** and **MILP solving methods**. For the former, though they are not designed for the learning-to-generate task, we have designed the baselines and conducted comprehensive comparisons. For the latter, notice that G2MILP is orthogonal to them and can be used to enhance them. We also consider the downstream task (optimal value prediction) to demonstrate the effectiveness of G2MILP. 
- **For further demonstration, we conduct an additional downstream task, i.e., the predict-and-search framework proposed in [1].** This paper proposes a framework that first predicts a solution and then uses solvers to search for the optimal solutions in a trust region. We consider using generated instances to enhance the predictive model. We first train the predictive model on 100 MIS instances, and then use the generative models to generate 100 new instances to augment the dataset. The results are in **Table 14**. The results show that G2MILP can effectively enhance previous popular methods. For your convenience, we quote Table 14 here. **Table 14: Results on the predict-and-search framework on MIS. The training set contains 100 instances, and we generate 100 new instances. For Random and G2MILP, masking ratio is 0.01. Time means the time for Gurobi to find the optimal solution with augmentation data generated by different models. Bowly leads to the framework failing to find optimal solutions in the trust region.** | Method | Training Set | Bowly | Random | G2MILP | | :----: | :-----------------: | :---------: | :-----------------: | :-----------------: | | Time | 0.041 ($\pm$ 0.006) | 17/100 fail | 0.037 ($\pm$ 0.003) | 0.032 ($\pm$ 0.004) | [1] Han Q, Yang L, Chen Q, et al. A GNN-Guided Predict-and-Search Framework for Mixed-Integer Linear Programming. ICLR, 2023. --- Rebuttal Comment 1.1: Title: Increasing score to accept Comment: Thank you for your effort in meticulously addressing other reviewers' comments and mine. After reading all the discussions above, I believe this work is significant and should be presented to the community. Hence raising the score to accept. --- Reply to Comment 1.1.1: Title: Thank you for your kind support. Comment: Dear Reviewer sm9N, Thanks for your kind support and for helping us improve the paper! We appreciate your valuable suggestions. Best, Authors
Summary: The authors propose a deep generative framework for MILP (mixed-integer program) instances. The work has applications in enhancing MILP solvers under limited data availability. MILP instances are represented as weighted bipartite graphs. The authors propose a masked variational autoencoder (VAE) method for graph generation. The problem is relevant and interesting. The experiments show that the proposed approach obtains significantly better results than existing methods. Strengths: 1. Important problem to tackle. The work has applications. 2. Paper is easy to understand. 3. The authors motivate different components of the encoder-decoder architecture nicely in the methodology section. 4. The experimental section consists of structural similarity metrics, hardness comparison of generated MILP instances, and application on a downstream task. 5. Comparison on a variety of datasets (MIS, Set Cover, MIK (knapsack)), including standard deviation and multiple runs. 6. Visualization of generated instances (through t-SNE plots). Weaknesses: 1. Downstream task (learning to solve MIP, e.g., learning to branch): Recently many neural MIP solvers have come up (e.g., Gasse et al. [A]). The authors have also cited them. Is it possible for the authors to show whether they can improve the performance of such solvers by data augmentation (on any dataset) under low data availability? I agree the authors have shown improvement on the optimal value prediction task. However, the above-mentioned task is also important. It can make the paper stronger. 2. Presentation: Table 8 and Fig. 2 are not clear. Request the authors to add standard deviation in Table 8 in the Appendix (especially for Random). Also, I see "Random" in Table 8 and not in Fig. 2 (downstream task). Also, is there any difference between Table 8 and Fig. 2? 3. Code is not provided by the authors. [A] Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. 
Advances in Neural Information Processing Systems, 32, 2019. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Request the authors to answer the questions in the weaknesses section. 1. Downstream task: augmenting neural MIP solvers with generated data (especially under low data availability). 2. Presentation clarification (Fig. 2 and Table 8). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed it nicely. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work. **Weakness 1 & Question 1.** > Recently many neural MIP solvers have come up( Eg: -Gasse et al.[A]). The authors have also cited them. Is it possible for the authors to show whether they can improve the performance of such solvers by data augmentation( on any dataset) under low data availability. > > I agree the authors have shown improvement on predicting optimal value task. However, the above mentioned task is also important. It can make the paper stronger. - Thanks for your suggestion! **We conduct an additional downstream task for a neural solver, i.e., the predict-and-search framework proposed in [B].** This paper proposes a framework that first predicts a solution and then uses solvers to search for the optimal solutions in a trust region. We consider using generated instances to enhance the predictive model. We first train the predictive model on 100 MIS instances, and then use the generative models to generate 100 new instances to augment the dataset. The results are in **Table 14**. For your convenience, we quote Table 14 here. **Table 14: Results on the predict-and-search framework on MIS. The training set contains 100 instances, and we generate 100 new instances. For Random and G2MILP, masking ratio is 0.01. Time means the time for Gurobi to find the optimal solution with augmentation data generated by different models. 
Bowly leads to the framework failing to find optimal solutions in the trust region.** | Method | Training Set | Bowly | Random | G2MILP | | :----: | :-----------------: | :---------: | :-----------------: | :-----------------: | | Time | 0.041 ($\pm$ 0.006) | 17/100 fail | 0.037 ($\pm$ 0.003) | 0.032 ($\pm$ 0.004) | - The results show that Bowly generates low-quality data that disturbs the model training, so that there is no optimal solution in the trust region around the predicted solution. Though both Random and G2MILP can enhance the solving framework to reduce solving time, we can see G2MILP significantly outperforms Random, demonstrating its effectiveness. - We consider the predict-and-search task because it is a typical predictive task that requires large numbers of instances for training. Gasse et al. [A], however, leverages each branching decision, instead of each instance, as one data sample, so it may not be a typical application scenario of instance generative techniques. [A] Maxime Gasse, Didier Chételat, Nicola Ferroni, Laurent Charlin, and Andrea Lodi. Exact combinatorial optimization with graph convolutional neural networks. Advances in Neural Information Processing Systems, 32, 2019. [B] Qingyu Han, Linxin Yang, Qian Chen, Xiang Zhou, Dong Zhang, Akang Wang, Ruoyu Sun, and Xiaodong Luo. A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR, 2023. **Weakness 2 & Question 2.** > Presentation clarification( Fig 2 and Table 8). > > Presentation : Table 8 and Fig 2 not clear. > > Request the authors to add Standard deviation in Table 8 Appendix ( especially for Random). Also I see "Random" in Table 8 and not in Fig.2 ( Downstream task).. > > Also is there any difference between Table 8 and Fig 2? - Thanks for your suggestion! The results with std in Table 8 are in **Table 12** in the newly submitted PDF. For your convenience, we quote Table 12 here (better viewed in the submitted PDF). 
**Table 12: Results on the optimal value prediction task (mean±std). On each dataset and for each method, we sample 5 different sets of 20 instances for augmentation.** | | MIK | MIK | Nurse Scheduling | Nurse Scheduling | | :-----: | :-------------------: | :--------------------: | :------------------: | :-----------------: | | | MSE | Improvement | MSE | Improvement | | Dataset | 0.0236 | 0.0% | 679.75 | 0.0% | | Bowly | - | - | 663.52 ($\pm$ 95.33) | 2.3% ($\pm$ 14.0%) | | Random | 0.0104 ($\pm$ 0.0023) | 55.9% ($\pm$ 9.7%) | - | - | | G2MILP | 0.0073 ($\pm$ 0.0014) | 69.1% ($\pm$ 5.9%) | 548.70 ($\pm$ 44.68) | 19.3% ($\pm$ 6.6%) | - **Table 8 and Figure 2 show the same results.** We are sorry for the typo. The "Bowly" in the left figure in Figure 2 should be "Random". As we state in Section 4.2 III., on MIK, instances generated by Bowly introduce numerical issues so that Ecole and SCIP fail to read them. We will correct the typo in a revised version. **Weakness 3.** > Code is not provided by authors. We will release our code once the paper is accepted to be published. We expect more studies on the topic of MILP instance generation, and our released code will be an important resource. --- Rebuttal Comment 1.1: Title: Thank you. Increase score to Accept Comment: I thank the authors for conducting additional experiments in a short duration of time (especially the downstream task experiment, which shows the capability of the model). I am satisfied with the response to the question. I am increasing my score to accept. I hope the authors will release their code as promised (if the paper is accepted). --- Reply to Comment 1.1.1: Title: Thank you for your kind support. Comment: Dear Reviewer Vxmz, Thanks for your kind support and for helping us improve the paper! We appreciate your valuable suggestions. We will release our code once the paper is accepted. Best, Authors
Summary: The paper generates MIP instances with the help of VAEs. Experimental results show that the generated samples retain the characteristics of the training dataset in several aspects (e.g., graph statistics, solving time of the generated samples, and number of branching nodes of the generated samples). The evaluation shows that the generation helps diversity and preserves difficulty. The samples may further help NNs in downstream learning for MIP training: the authors introduce a task that predicts the optimal objective value, and models trained with G2MILP-generated data show clear improvement on the testing dataset. Strengths: The task is highly valuable. The idea of generating instances via deep neural networks is novel and sound. The whole framework provides an approach that generates new instances without the need to start from scratch. Experimental results prove the effectiveness of the proposed method across various datasets (MIS, MIK, set covering and a nurse scheduling problem from MIPLIB2017). Weaknesses: N/A Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the experiment results, the ratio $\eta$ is controlled as $0.01, 0.05$ and $0.1$. At most only 10% of the nodes in the original V-C bipartite graph are changed. 1. Why doesn’t the generator set a larger $\eta$ for a complete comparison? (As the other baselines may generate the instances from scratch.) From the table, the effectiveness of the method quickly drops as $\eta$ increases. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As far as I could understand, the generator could only output instances with the same problem scale. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score and your confidence. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. **Questions** > In the experiment results, the ratio $\eta$ is controlled as 0.01,0.05 and 0.1. At most only 10% of the nodes in the original V-C bipartite graph are changed. > > 1. Why doesn’t the generator set larger $\eta$ for a complete comparison? (As the other baselines may generate the instances from scratch.) From the table, the effectiveness of the method quickly drops as $\eta$ increases. - Thanks for your suggestion! **We have conducted ablation studies on the effect of $\eta$ on MIK.** The results are in **Figure 4** in the newly submitted PDF. The experimental settings are the same as those in Table 2 and Figure 2. From the figure we have the following conclusions. - Though empirically a smaller $\eta$ leads to a relatively better performance, G2MILP maintains a high similarity performance even when $\eta$ is large. - The downstream task performance does not drop significantly. This makes sense because smaller $\eta$ leads to more similar instances, while larger $\eta$ leads to more diverse (but still similar) instances, both of which can benefit downstream tasks. - G2MILP always outperforms Random, which demonstrates that the learning paradigm helps maintain the performance. - Bowly fails on this dataset because its generated instances lead to numerical issues and cannot be read by Gurobi or SCIP. - Though Bowly generates instances from scratch, our results show that instances generated by Bowly are specious and unlikely to be realistic. 
This is because previous methods, including Bowly, are designed for research instead of developing solvers in real-world applications. Therefore, we can hardly use those methods in real-world scenarios. G2MILP, to the best of our knowledge, is the first method that can learn to generate realistic instances. - **Because our aim is data augmentation, it is unnecessary to generate instances from scratch.** As an analogy, G2SAT generates SAT instances by splitting existing SAT graphs into templates and then learning to merge them to form new formulas. This technique has since become a standard approach in the SAT generation area. We believe modifying existing instances will become an important line of work in developing better MILP generators. **Limitations** > The generator could only output the instances with the same problem scale. - Good point! **It is possible for G2MILP to output instances with slightly different scales.** For example, we can add a virtual constraint, mask it and generate a new one to obtain an instance with one more constraint. Similarly, we can delete two constraints and generate a new one to obtain an instance with one fewer constraint. However, this method might seem trivial and unnecessary, because the current implementation of G2MILP already brings good performance, and this method can hardly bring additional improvements. - **Generating MILP instances with different scales but similar semantic information and similar difficulty is a hard task.** Even in the community of SAT generation it has not been well solved. We recognize it as a valuable research direction and plan to study it in our future work.
Summary: The paper addresses the challenge of limited availability of real-world mixed-integer linear program (MILP) instances for machine learning techniques employed in combinatorial optimization problems. The scarcity of these instances often leads to sub-optimal decisions and biased solver assessments. Existing solutions either depend heavily on expert-designed formulations or fail to capture the intricate features of real-world instances. The authors propose G2MILP, the first deep generative framework for creating MILP instances. G2MILP treats MILP instances as bipartite graphs and uses a masked variational autoencoder to iteratively corrupt and replace parts of the original graphs, generating new ones. This approach allows G2MILP to learn to generate novel and realistic MILP instances without the need for expert-designed formulations, while simultaneously preserving the structures and computational hardness of real-world datasets. The generated instances can help improve MILP solvers when data availability is limited. The authors also present a set of benchmarks to evaluate the quality of the generated MILP instances. The experimental results show that G2MILP is capable of producing instances that closely mimic real-world datasets in terms of both structures and computational hardness. Strengths: 1. This is the first deep generative model for MILP instance generation based on the well-known variable-constraint bipartite representation. 2. My favorite part of this work is the masking-based pre-training scheme, reminding me of the masked language modeling objective in large language models. My feeling is that this direction could probably lead to large optimization model pre-training in the L2O field and this could be a very good start. 3. The empirical performance is good. Weaknesses: 1. I have the feeling that the problem similarity metric is too privileged for the proposed method, as it essentially modifies existing problem instances. 2. 
The effectiveness seems to drop quickly with a higher ratio of variable/constraint alterations. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N/A Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our work. **Weakness 1.** > I have the feeling that the problem similarity metric is too privileged for the proposed method, as it essentially modifies existing problem instances. - **The similarity benchmark we use is based on many previous works.** The considered statistics and features are commonly used in SAT generation [1] and MILP studies [2]. Measuring similarity by computing distributional divergences is a common practice in benchmarking graph generation models [3]. - **Beyond these similarity metrics, we also assess the downstream task performance.** The results demonstrate that instances generated by G2MILP are not only realistic, but can also benefit real-world applications. - **Measuring MILP instance quality is a hard task. Currently no sufficiently strong metrics have been developed, and we may open up the direction.** We believe that with the development of MILP generation techniques, studies on benchmarking MILP instances will follow up, and exploring the MILP space will become possible. This is in line with how generative model research has typically developed. - **Modifying existing problem instances is not a trivial task, and G2MILP works well.** See the comparison with the Random baseline. Random also modifies existing instances, but its generated instances are specious and unlikely to be realistic. - **Because our aim is data augmentation, it is unnecessary to generate instances from scratch.** As an analogy, G2SAT generates SAT instances by splitting existing SAT graphs into templates and then learning to merge them to form new formulas. 
This technique has since become standard in the SAT generation area. We believe modifying existing instances will become an important line of work in developing better MILP generators. [1] Jiaxuan You, Haoze Wu, Clark Barrett, Raghuram Ramanujan, and Jure Leskovec. G2SAT: learning to generate SAT formulas. Advances in Neural Information Processing Systems, 32, 2019. [2] Frank Hutter, Lin Xu, Holger H Hoos, and Kevin Leyton-Brown. Algorithm runtime prediction: Methods & evaluation. Artificial Intelligence, 206:79–111, 2014. [3] Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. GuacaMol: benchmarking models for de novo molecular design. Journal of Chemical Information and Modeling, 59(3):1096–1108, 2019. **Weakness 2.** > The effectiveness seems to drop quickly with a higher ratio of variable/constraint alterations. **We have conducted ablation studies on the effect of the masking ratio $\eta$ on MIK.** The results are in **Figure 4** in the newly submitted PDF. The experimental settings are the same as those in Table 2 and Figure 2. From the figure we draw the following conclusions. - Though empirically a smaller $\eta$ leads to relatively better performance, G2MILP maintains high similarity scores even when $\eta$ is large. - The downstream task performance does not drop significantly. This makes sense because a smaller $\eta$ leads to more similar instances, while a larger $\eta$ leads to more diverse (but still similar) instances, both of which can benefit downstream tasks. - G2MILP always outperforms Random, which demonstrates that the learning paradigm helps maintain the performance. - Bowly fails on this dataset because its generated instances lead to numerical issues and cannot be read by Gurobi or SCIP. Moreover, in real applications, it is reasonable and flexible to adjust the hyperparameter $\eta$ to achieve good performance in different scenarios.
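The rebuttal above describes masking a uniformly chosen constraint vertex per iteration, with the masking ratio $\eta$ controlling how much of the instance is altered. A minimal sketch of that sampling step follows; whether the paper samples with or without replacement across iterations is our assumption, and the function name is hypothetical.

```python
import math
import random

def sample_masked_constraints(num_constraints: int, eta: float, rng: random.Random) -> set:
    """Uniformly sample ceil(eta * m) constraint vertices to mask, without
    replacement (an assumption; one vertex is masked per iteration)."""
    k = max(1, math.ceil(eta * num_constraints))
    return set(rng.sample(range(num_constraints), k))

rng = random.Random(0)
masked = sample_masked_constraints(num_constraints=100, eta=0.05, rng=rng)
print(len(masked))  # 5 constraints masked
```

With $\eta = 0.05$ and $m = 100$ constraints, five constraint vertices are selected for masking, matching the smaller ratios used in the reported ablations (0.01 to 1.0).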
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely thank all reviewers for their insightful and constructive comments, which helped to significantly improve our work. We respond to the comments given by each reviewer in detail, and in this global response, we answer several key issues raised by multiple reviewers. **1. Contribution & Novelty** - **Task setting.** **MILP generation is much harder than SAT generation because it involves not only topological structure prediction, but also precise numerical prediction.** Specifically, SAT instances can be modeled as homogeneous bipartite graphs (i.e., graphs without edge or node features), so current SAT generation techniques merge graph templates together to form new ones. These methods can hardly adapt to MILP generation tasks because they cannot determine the coefficients. Nonetheless, we conduct **an additional experiment** that transfers G2SAT to a special MILP dataset, MIS, in which all coefficients are 1.0 and thus the instances can be modeled as homogeneous bipartite graphs. We apply G2SAT to learn to generate new graphs and convert them to MILPs. Results are in **Table 9** in the newly submitted PDF. The results show that G2MILP significantly outperforms G2SAT on these special cases. - **Technique.** **Our proposed masked VAE is novel and different from the previous masked auto-encoder (MAE) in computer vision.** - **Motivation:** MAE is an **auto-regressive** method that learns to reconstruct the masked images for **representation learning**. Masked VAE, in contrast, is a **VAE**-based **generative model** that aims to generate new diverse instances---instead of reconstructing the original instances---from a masked one. - **Model Structure:** Masked VAE has a resample layer (Eq. 5) as well as a prior loss (Eq. 4) when training, and works in a decoder-only manner at inference. For decoder-only generation, masked VAE takes masked graphs as inputs to the decoder instead of the encoder, which is also different from MAE. 
- **Theory:** Moreover, we theoretically derive the evidence lower bound (ELBO) to incorporate the mask process (Eq. 14), which is an extension of VAE theory. **2. Evaluation Metrics.** - **The used similarity benchmark is based on many previous works.** The considered statistics and features are commonly used in SAT generation [1] and MILP studies [2]. Measuring similarity by computing the distributional divergences is a common practice in benchmarking graph generation models [3]. - **Beyond these similarity metrics, we also assess the downstream task performance.** The results demonstrate that instances generated by G2MILP are not only realistic, but can also benefit real-world applications. Novelty and diversity are also metrics that measure generative model performance. However, these two metrics are not easy to define for MILP instances. The downstream task performance improvements imply that instances generated by G2MILP have sufficient novelty and diversity for real applications. - **Measuring MILP instance quality is a hard task. Currently no strong enough metrics have been developed, and we may open up the direction.** We believe that with the development of MILP generation techniques, studies on benchmarking MILP instances will follow up, and exploring the MILP space will become possible. This mirrors how generative model research has typically developed. **3. We conduct an additional downstream task for a neural solver, i.e., the predict-and-search framework proposed in [4].** Specifically, [4] proposes a framework that first predicts a solution and then uses solvers to search for the optimal solutions in a trust region. We consider using generated instances to enhance the predictive model. We first train the predictive model on 100 MIS instances, and then use the generative models to generate 100 new instances to augment the dataset. The results are in **Table 14**. 
Bowly generates low-quality data that disturbs the model training, so that there is no optimal solution in the trust region around the predicted solution. Though both Random and G2MILP can enhance the solving framework to reduce solving time, G2MILP significantly outperforms Random. **4. We implement a new version of G2MILP that supports modifying variables.** Specifically, in each iteration, we randomly mask a variable and use new modules to generate the objective coefficient and the variable type. We fine-tune the previous model trained on MIK to obtain a new model with a variable generation module, and test its performance on MIK. We conduct experiments with different implementations, and the results are in **Table 10** in the newly submitted PDF. In future work, we plan to implement more kinds of modifying operators to generate more diverse MILP instances. **5. We conduct extensive experimental analysis.** - Different orders of masking constraints. Results are in **Table 11**. - Performance improvements on different sizes of datasets and ratios of generated instances. Results are in **Table 13**. - Impact of masking ratio $\eta$ on similarity scores and downstream task performance improvements. Results are in **Figure 4**. - t-SNE visualization for baselines. See **Figure 5**. [1] Jiaxuan You, Haoze Wu, Clark Barrett, Raghuram Ramanujan, and Jure Leskovec. G2SAT: learning to generate SAT formulas. Advances in Neural Information Processing Systems, 32, 2019. [2] Frank Hutter, Lin Xu, Holger H Hoos, and Kevin Leyton-Brown. Algorithm runtime prediction: Methods & evaluation. Artificial Intelligence, 206:79–111, 2014. [3] Nathan Brown, Marco Fiscato, Marwin HS Segler, and Alain C Vaucher. GuacaMol: benchmarking models for de novo molecular design. Journal of Chemical Information and Modeling, 59(3):1096–1108, 2019. [4] Qingyu Han, Linxin Yang, Qian Chen, Xiang Zhou, Dong Zhang, Akang Wang, Ruoyu Sun, and Xiaodong Luo. 
A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR, 2023.
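The masked-VAE training objective sketched in the global response above (a prior loss, Eq. 4; a resample layer, Eq. 5; and a decoder conditioned on the masked graph) is consistent with a standard conditional-VAE lower bound. The form below is our reconstruction under those assumptions; the paper's Eq. 14, which incorporates the mask process, may differ in its exact details.

```latex
\log p_\theta(G \mid \tilde{G})
  \;\ge\;
  \mathbb{E}_{z \sim q_\phi(z \mid G)}\!\left[ \log p_\theta(G \mid \tilde{G}, z) \right]
  \;-\;
  \mathrm{KL}\!\left( q_\phi(z \mid G) \,\|\, p(z) \right),
\qquad
\tilde{G} \sim \tilde{p}(\tilde{G} \mid G).
```

Here the KL term corresponds to the prior loss, the expectation to the reconstruction loss, and at inference the encoder $q_\phi$ is dropped and $z$ is sampled from the prior $p(z)$, matching the decoder-only generation described above.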
Dataset source: NeurIPS_2023_submissions_huggingface (2023)
Summary: This paper is focused on using machine learning techniques to address combinatorial optimization problems, particularly mixed-integer linear programs (MILPs). Specifically, this paper proposes G2MILP, which is a generative model for generating realistic MILP instances. Without prior expert knowledge, G2MILP can generate expressive instances preserving the computational hardness of real-world datasets. To achieve this, G2MILP adopts a masked VAE approach to iteratively mask original graphs to generate new ones. Extensive evaluations are made to demonstrate the effectiveness of the proposed approach. This paper shows how to adopt well-established concepts (e.g., masked VAE) to a new data type (i.e., MILP instances). Strengths: The main strength of this paper is its originality in my eyes. It introduces a generative framework for generating realistic MILP instances. Though different generative models have been established for various types of data, little attention is paid to using generative models to generate MILP instances for improving the solvers. To better develop training-based MILP solvers, adequate training data is needed, and the proposed model can help. Weaknesses: 1. The contribution of the paper is unclear. Though the paper introduces a new research direction of generating MILP instances, its technical contributions are unclear. Representing MILP as a bipartite graph and capturing features using a GNN have been proposed by Gasse et al. The major technical contribution of this work is using a masked VAE for expressive and realistic MILP instance generation, which I think is heuristic and lacks novelty. 2. The methodology is not easy to follow. Adopting the concept of masked VAE to generate expressive MILP instances seems a cute idea. But the masking process is confusing to me. 2.1. How do you generate the masked graph G_tilde? What is the distribution of p_tilde(G_tilde|G)? 
From lines 165-168, the masking is performed by randomly selecting a constraint vertex. Does this mean p_tilde(G_tilde|G) is just a uniform distribution or something like that? 2.2. What’s the intuition of making the masked graph a conditional variable for the decoder? In masked VAE, the masked images are the input of the encoder. More explanations about the difference between the original masked VAE and the proposed pipeline for MILP instance generation are expected. 3. The evaluations are insufficient to fully validate the proposed framework. 3.1. There is a lack of ablation study to show that the masking scheme helps improve complex graph generation as claimed in lines 58 – 60. Though there are evaluations considering different masking ratios (0.01, 0.05, 0.1), they are insufficient to help understand the proposed masking scheme. Does the order of masking constraint vertices matter in the generation process? 3.2. What’s the reconstruction for each graph component (i.e., bias, degree, etc.)? How do the graph features affect the new instance generation (e.g., the density of the graph)? I guess the new instance generation will be harder if the original graph is dense and complex? 3.3. The downstream task evaluation is important to demonstrate the benefits of the proposed method. However, this evaluation is inadequate. What is the mask ratio for the downstream task evaluation (Figure 2)? I guess the performance will decrease if the mask ratio is high? Can you report the std for performance improvement given different sets of 20 generated instances? What about the performance on larger datasets? To what extent (e.g., the scarcity of the original dataset) do the generated instances help improve the performance? 4. The significance of the proposed method is unclear. Since the proposed method only masks constraint vertices, I think it is more like a graph generation pipeline. I am not fully convinced that we need to make this much effort to generate new graphs. 
Furthermore, the benefits of new instances are not well demonstrated (as I question above). 5. Presentation can be improved. The position of Table 2 needs to be re-arranged. Notation-wise, I think it is a bit confusing to use the same G for both the input and the output of the model. Typo: there are two ‘and’s in line 67. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I would like to adjust my rating if the authors can help clarify my confusions listed in the weakness section. Besides, I have other questions listed below: 1. Generating instances is not a new topic and G2SAT has explored this. Why is it non-trivial to adapt G2SAT to MILP instance generation? How does the consideration of high-precision numerical prediction affect the design of the generative framework? 2. The generation of a new MILP instance is an iterative process. What is the computational cost for generating one MILP instance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and constructive comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving our submission. **Weakness 1.** Due to space limitation, we state the contribution of our work in respect of both task setting and technique in **Global Response 1**. **Weakness 2.** > 2.1. generate the masked graph - Yes, $\tilde{p}(\tilde{G}|G)$ is like a uniform distribution. Specifically, we generate $\tilde{G}$ by sampling a constraint vertex $\tilde{v}$ from all constraint vertices $\mathcal{V}$ following a uniform distribution over these vertices, i.e., $\tilde{v}\sim U(\mathcal{V})$. Then we mask $\tilde{v}$ to generate $\tilde{G}$. > 2.2. difference with MAE. - We assume that you refer to the masked auto-encoder (MAE) in computer vision. If not, please kindly provide the literature citation. Due to space limitation, the differences are in **Global Response 1**. **Weakness 3.** > 3.1. Masking scheme. - We conduct ablation studies on a wider range of masking ratios, from 0.01 to 1.0, which may help understand the masking scheme. The results are in **Figure 4** in the newly submitted PDF. - We test different orders of masking constraint vertices, including uniform sampling and sampling according to the vertex indices. Results are in **Table 11**. We find that **uniform sampling achieves the best performance**. Sampling according to indices leads to a performance decrease, likely because constraints that are indexed closely are related and lead to error accumulation. We think these results are interesting, and will study them in future work. > 3.2. Generation - We are not sure whether we understand the question correctly. 
The generation process is determined by (i) the masked graph and (ii) the latent variables. During training, the latent variables encode the information of the original graphs (graph features), so the decoder learns to reconstruct the original graph from the masked graph according to the latent variables. During inference, the model is decoder-only and the latent variables are randomly sampled. Different latent variables lead to different generated samples, so the decoder can randomly generate new diverse instances. - Dense and complex original graphs may lead to harder generation because graph representation learning is more challenging. > 3.3. Downstream task evaluation. - For the results in Figure 2, we use a mixture of $10$ instances from $\eta=0.01$ and $10$ instances from $\eta=0.05$ together (see Line 572 in Appendix B.4), because we find the mixture brings a slight performance improvement. - We conduct ablation studies on the masking ratio. Results are in **Figure 4(b)**. According to the results, though empirically a small masking ratio brings better performance, the performance does not decrease quickly with an increasing masking ratio. - We report the standard error for performance improvement in **Table 12**. - We conduct experiments on different sizes of the original datasets, as well as the ratio of generated instances to original ones, on MIS. The results are in **Table 13**. The results show that G2MILP can bring performance improvements on varying sizes of datasets. **Weakness 4.** - In real applications, low data availability is often a key bottleneck for solver development. Generating realistic instances is orthogonal to previous methods that directly focus on solver development, and can be applied in many different real-world scenarios. It does not take much effort but can bring additional performance improvements. 
- G2SAT has attracted much attention since its publication, which implies that the graph generation pipeline for the instance generation task is a proper topic for the research community. - To further demonstrate the benefits of generated instances, we conduct an additional downstream task with a predict-and-search framework. Results are in **Table 14** and details are in **Global Response 3**. **Weakness 5.** - Thanks for your suggestions! We will re-arrange Table 2 and fix typos in a revised version. Notation-wise, we will use $\hat{G}$ to denote the outputs for clarity. **Question 1.** - **G2SAT cannot determine the numerical values of the coefficients in MILPs.** Specifically, SAT instances can be modeled as homogeneous bipartite graphs (i.e., graphs without edge weights or node features), and G2SAT splits the graphs into fragments and then merges the fragments together to form new ones. This method can generate new topological structures of bipartite graphs, but when merging the fragments together, it is nontrivial to determine the edge weights, i.e., the coefficient values. - **Even without numerical coefficients, G2SAT can hardly preserve the reality of instances.** To see this, we conduct **an additional experiment** that transfers G2SAT to a special MILP dataset, MIS, in which all coefficients are 1.0 and thus the instances can be modeled as homogeneous bipartite graphs. We apply G2SAT to learn to generate new graphs and convert them to MILPs. Results are in **Table 9** in the newly submitted PDF. The results show that G2MILP significantly outperforms G2SAT on these special cases. **Question 2.** - We generate $1000$ instances using G2MILP with $100$ iterations for each instance. **The average time costs for generating one instance for MIS, SetCover, and MIK are 0.45s, 0.97s and 0.85s respectively.** The time cost is linearly related to the number of iterations. Though being an iterative process, G2MILP is fast because each iteration is very fast. 
As a comparison, our experiments on MIS (mentioned in response to **Question 1**) show that G2SAT takes about 47s to generate one instance. --- Rebuttal Comment 1.1: Title: Thank you. Increasing my score. Comment: I appreciate the authors' time and efforts in carefully responding to each of my questions. The significance of the proposed method is clarified. Lots of additional evaluations are conducted and they are helpful in further demonstrating the effectiveness of the proposed approach. Overall, I like the originality of this work as I stated in the Strengths section. Since my concerns and questions are all well addressed, I would love to increase my score. --- Reply to Comment 1.1.1: Title: Thank you for your kind support. Comment: Dear Reviewer sAxU, Thanks for your kind support and for helping us improve the paper! We appreciate your valuable suggestions. Best, Authors
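The decoder-only inference procedure described in the rebuttal above (in each of the ~100 iterations, mask one uniformly chosen constraint, sample a latent from the prior, and let the decoder regenerate the masked constraint) can be sketched as follows. The `decoder` callable is a hypothetical stand-in for G2MILP's four predictive modules, not the paper's actual interface.

```python
import random

def generate_instance(constraints, decoder, latent_dim, num_iters, rng):
    """Decoder-only generation sketch: each iteration masks one uniformly
    chosen constraint and regenerates it from a latent z ~ N(0, I).
    `decoder` is a hypothetical stand-in for the trained predictive modules."""
    constraints = list(constraints)
    for _ in range(num_iters):
        i = rng.randrange(len(constraints))                    # uniform choice of the masked vertex
        z = [rng.gauss(0.0, 1.0) for _ in range(latent_dim)]   # latent sampled from the prior
        constraints[i] = decoder(constraints, i, z)            # regenerate the masked constraint
    return constraints

# Toy usage: a dummy decoder that just tags the regenerated slot.
toy_decoder = lambda cons, i, z: ("regenerated", i)
new = generate_instance(["c0", "c1", "c2"], toy_decoder,
                        latent_dim=4, num_iters=5, rng=random.Random(0))
print(len(new))  # 3: the instance size is unchanged; masked constraints were replaced
```

Because each iteration only touches one constraint, the per-instance cost grows linearly with the number of iterations, consistent with the timing figures reported in the rebuttal.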
Summary: The paper proposed a deep generative framework G2MILP based on a masked variational autoencoder (VAE) for mixed-integer linear programs (MILP) instances. The proposed framework G2MILP corrupts and replaces constraints to generate new MILP instances while keeping the variables unchanged. Experiments show that G2MILP can produce instances that closely resemble real-world datasets in terms of both structures and computational hardness. Meanwhile, the generated instances from G2MILP can be used to augment the small training dataset to improve the prediction performance. Strengths: 1. The paper is well motivated by the limited data issue of MILP in the real world and the shortage of a flexible generative framework that can generate high-quality MILP instances. 2. The proposed decoder consisting of four predictive modules seems quite tailor-designed for the problem. Weaknesses: 1. The motivation of this work is similar to those Graph generation works and G2SAT, though targeting a different specific application domain. The main technique of masked VAE has already been well studied in the literature. So, this work is not that novel and interesting. 2. The proposed G2MILP can only modify the constraints of MILP beyond the training samples, which limits the contribution of this work. Although the authors claimed that their framework can be extended to change the variables of MILP, it is expected such flexibility can be achieved in this work. 3. Eq. (9) and Eq. (11) are inconsistent with Fig. 1. The logits/weights predictor shown in Fig. 1 is conditioned on $d_\tilde{v}$ but Eq. (9) and (11) are not. Moreover, from my understanding, the logits/weights prediction of $\delta_{\tilde{v}, u}$/$e_{u, \tilde{v}}$ should also be conditioned on $h_\tilde{v}$ and $z_{\tilde{v}}$. 4. The writing needs to be further improved. See the questions below. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How does G2MILP deal with different n and m? 
Different instances would have different n and m. 2. For the proposed normalization, would it be problematic when there is an outlier in the dataset? 3. In Section 4.1, about structural distributional similarity, statistics that remain constant are excluded. Does it mean the similarity is only calculated in terms of the constraints? 4. For the Bowly baseline, can it change variables? If it can, the similarity comparison may be unfair since the Bowly baseline introduces more diversity to the generated instances. 5. Good generation should introduce novelty compared to the training set, implying that the generated samples should differ from the training samples. So, it is tricky to use the structural distribution similarity to denote a good generation. I expect a discussion on this. t-SNE visualization for the baselines should be included. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed the limitations of their work, which however I think should be a part of this work (point 2 in the weakness). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
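The structural distributional similarity questioned in the review above is, per the rebuttals, computed from distributional divergences of instance statistics (coefficient density, degrees, coefficient means, etc.). A minimal sketch of one such comparison follows; the choice of Jensen–Shannon divergence, the binning, and the aggregation into a score are our assumptions, not the paper's exact protocol.

```python
import math

def js_divergence(p, q):
    """Jensen-Shannon divergence between two discrete distributions (bounded by log 2)."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    def kl(a, b):
        return sum(ai * math.log(ai / bi) for ai, bi in zip(a, b) if ai > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def histogram(values, bins, lo, hi):
    """Normalized histogram of one statistic (e.g., constraint degrees)."""
    counts = [0] * bins
    for v in values:
        idx = min(bins - 1, int((v - lo) / (hi - lo) * bins))
        counts[idx] += 1
    return [c / len(values) for c in counts]

# Toy usage: compare one statistic between real and generated instances.
real_deg = histogram([2, 3, 3, 4, 5], bins=4, lo=2, hi=6)
gen_deg = histogram([2, 3, 4, 4, 5], bins=4, lo=2, hi=6)
score = 1.0 - js_divergence(real_deg, gen_deg) / math.log(2)  # in [0, 1]; higher is more similar
```

A full benchmark would repeat this per statistic and aggregate (e.g., average) the per-statistic scores.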
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and valuable comments. We respond to your comments as follows and sincerely hope that our rebuttal could properly address your concerns. If so, we would deeply appreciate it if you could raise your score. If not, please let us know your further concerns, and we will continue actively responding to your comments and improving the work. **Weakness 1.** - We state the novelty of our work in respect of both task setting and technique in **Global Response 1**. Due to space limitation, we summarize the statement here. - Generating MILPs is much harder than generating SATs because it involves precise numerical prediction. G2SAT can hardly transfer to MILP generation. - We assume that you refer to the masked auto-encoder (MAE) developed in computer vision. If not, please kindly provide the literature citation. However, they are different because MAE is an auto-regressive method which aims to reconstruct given samples, while our masked VAE framework aims to generate new instances from masked ones. **Weakness 2.** - Thanks for your suggestion! **We have implemented a new version of G2MILP that supports modifying variables.** Results are in **Table 10** in the newly submitted PDF. Due to space limitation, you can refer to **Global Response 4** for more details. **Weakness 3.** > Eq. (9) and Eq. (11) are inconsistent with Fig. 1. - Thanks for your comment! **Fig. 1 shows the inference process while Eq. (9) and Eq. (11) detail the network implementation.** As shown in Fig. 1, the logits and weights predictions are conditioned on the degree. As for the detailed implementation, we use an NN to predict $\hat{d}_{\tilde{v}}$. We use another NN to predict ${\hat{\delta}} _{\tilde{v},u}$ for each variable vertex $u$, and this NN is not conditioned on $\hat{d} _{\tilde{v}}$ . 
Then the connected vertices are the $\hat{d} _{\tilde{v}}$ variable vertices with top $\hat{\delta} _{\tilde{v},u}$, and so the connected vertices are conditioned on both $\hat{d} _{\tilde{v}}$ and $\hat{\delta} _{\tilde{v},u}$ . We will improve the presentation in a revised version to avoid inconsistency. > The logits/weights prediction should be conditioned on $h_{\tilde{v}}$ and $z_{\tilde{v}} $. - Thanks for the insightful question! As we add virtual edges between $\tilde{v}$ and all variable nodes (Line 167), there is message passing between $\tilde{v}$ and the variable nodes, and thus the information of $\tilde{v}$ has been encoded into the other node representations. Therefore, we do not explicitly use $h_{\tilde{v}}$ and $z_{\tilde{v}}$ as inputs for logits/weights prediction. Actually, we have tried such an implementation before, but found that it introduced additional parameters while the impact on performance was limited. **Weakness 4.** - We respond to your questions below, and we will polish our writing in a revised version to make these points clear in the paper. **Question 1.** - For the **inputs**, G2MILP allows any $n$ and $m$ because the model does not depend on instance size. For the **outputs**, the current implementation of G2MILP does not change $n$ and $m$ of input instances. - It is possible for G2MILP to output instances with slightly different $n$ and $m$. For example, we can add a virtual constraint, mask it, and generate a new one to obtain an instance with one more constraint. Generating MILP instances with different scales but similar semantic information and similar difficulty is an interesting topic, and we plan to study it in our future work. **Question 2.** > For the proposed normalization, would it be problematic when there is an outlier in the dataset? - In real applications, instances in the training set are from real-world scenarios. Thus covering all such cases is meaningful. - Every model might be impacted by outlier data. 
However, the normalization makes the model more robust to outlier cases, because it maps all values to $[0,1]$, which makes it easier for the model to learn. - Empirically, we do not find obvious outliers in the datasets we consider. **Question 3.** - **We consider comprehensive metrics including coefficient density, constraint and variable degrees, coefficient means, etc.** We excluded statistics such as problem sizes, numbers of integer variables, etc. For descriptions of the considered statistics, readers can refer to Table 7 in Appendix B.3. **Question 4.** - **Bowly generates instances from scratch by randomly sampling the coefficients.** It was not designed for learning from datasets to generate realistic instances, and we can only control simple statistics like problem sizes and coefficient means. For a fair comparison, we set all controllable parameters the same as in the training sets. - **We do not aim to show that G2MILP is better than Bowly on every metric, because the two methods are for different tasks.** Our results show that instances generated by Bowly are unlikely to be realistic. In other words, it cannot learn to generate data that is i.i.d. with existing data. Therefore, we can hardly use it in low-resource MILP solver development, which is also verified by the downstream task experiments. **Question 5.** > Good generation should introduce novelty compared to the training set. > > It is tricky to use the structural distribution similarity to denote a good generation. - We state the rationality of our evaluation metrics in **Global Response 2**. Due to space limitation, we summarize the response as follows. - The used similarity benchmark is based on many previous works. - Beyond these similarity metrics, we also assess the downstream task performance. - Currently no strong enough metrics have been developed. > t-SNE visualization for the baselines should be included. - The t-SNE visualization for the baselines is in **Figure 5** in the newly submitted PDF. 
G2MILP generates diverse instances around the training set, while instances generated by Random deviate more from the realistic ones. --- Rebuttal Comment 1.1: Title: Concerns have not been fully resolved. Comment: I appreciate the authors’ efforts in addressing my concerns. However, they have not been fully resolved: 1. Beyond the MAE developed in computer vision, I would like to emphasize that the masked auto-encoder has already been explored widely in graphs, e.g., [1, 2, 3]. What I want to stress is that the mask idea for reconstruction/generation has been well explored in the community. In addition, there is also a masked VAE for graphs [4]. While I acknowledge that there is some novelty in this work, a tailor-designed masked VAE for a new specific application, I don't think the idea of the paper is that interesting or that new given the literature. 2. For weakness 3, I am confused by what you explained. $p_\theta(\delta_{\hat{v}, u}|G, Z, d_\hat{v})$ in Fig. 1 describes the condition on $d_\hat{v}$, but Eq. (9) does not. 3. For question 1, how does G2MILP deal with different $n$ and $m$ of the inputs? For the output, from my understanding, G2MILP would need to deal with different $n$ and $m$ for different instances. If I am wrong, please correct me. 4. My question 3 indeed challenges the rationality of the current structural similarity. Excluding the statistics of variables is unreasonable, as these statistics form a crucial part of MILP instances. This exclusion also leads to the inability to evaluate the “v” models mentioned in Table 10. I think it is more reasonable to consider statistics of variables when calculating the structural similarity. References: [1] Hou, Z., Liu, X., Cen, Y., Dong, Y., Yang, H., Wang, C., & Tang, J. (2022, August). GraphMAE: Self-supervised masked graph autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 594-604). [2] Tan, Q., Liu, N., Huang, X., Choi, S. H., Li, L., Chen, R., & Hu, X. 
(2023, February). S2GAE: Self-Supervised Graph Autoencoders are Generalizable Learners with Graph Masking. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (pp. 787-795). [3] Tan, Q., Liu, N., Huang, X., Chen, R., Choi, S. H., & Hu, X. (2022). MGAE: Masked autoencoders for self-supervised learning on graphs. arXiv preprint arXiv:2201.02534. [4] Li, X., Ye, T., Shan, C., Li, D., & Gao, M. (2023, April). SeeGera: Self-supervised Semi-implicit Graph Variational Auto-encoders with Masking. In Proceedings of the ACM Web Conference 2023 (pp. 143-153). --- Reply to Comment 1.1.1: Title: Response to the concerns (1/2) Comment: Dear Reviewer K4a3, Thanks for your reply and for your valuable comments! We are glad to respond to your concerns as follows. > 1. Beyond the MAE developed in computer vision, I would like to emphasize that the masked auto-encoder has already been explored widely in graphs, e.g., [1, 2, 3]. What I want to stress is that the mask idea for reconstruction/generation has been well explored in the community. In addition, there is also a masked VAE for graphs [4]. While I acknowledge that there is some novelty in this work, a tailor-designed masked VAE for a new specific application, I don't think the idea of the paper is that interesting or that new given the literature. Thanks for kindly providing the valuable references. We appreciate your feedback, and we would like to further emphasize our technical contribution in light of the existing literature. We will incorporate citations of the relevant papers and discuss them in the revised version. - **Our main contribution lies in identifying MILP generation as a valuable task and proposing the first framework as a feasible solution.** MILP plays a crucial role in combinatorial optimization research and finds wide application in various industrial optimization scenarios, while low data availability is a key bottleneck in developing powerful MILP solvers. 
**Therefore, we believe that our work is aligned with current trends and will be significant in promoting the advancement of the MILP solver area.** While our work builds upon existing techniques, we have carefully designed our method specifically for this new task, demonstrating its effectiveness. - **Though the ideas of MAE and VAE have been extensively explored in the literature, developing new variants for novel applications remains valuable.** MAE and VAE serve as foundational models that have inspired numerous studies across diverse domains. Designing tailored model structures to suit specific scenarios is challenging and has motivated significant research efforts. In the context of MILP generation, we carefully design the mask-and-generate mechanism to preserve realism, and we design the decoder consisting of four predictive modules specifically for the bipartite graph representation of MILPs. - **Differences from [1, 2, 3].** These three papers explore the use of MAE for self-supervised learning on graphs and are relevant to our work. Our method is significantly different from them, and the reasons are similar to the differences we have highlighted regarding MAE in **Global Response 1**. - **Differences from [4].** The paper [4] presents a graph VAE with masking for graph self-supervised learning. Our method is different from [4] in several aspects. - **Motivation:** While [4] focuses on generative graph self-supervised learning with VAE, our focus is on generating new graphs. - **Use of masking:** [4] employs masking in a VAE model only for data augmentation. In contrast, our aim is to generate new graphs from masked input graphs. - **Model Structure:** In [4], the masked graphs are inputs to the encoder for data augmentation. In contrast, in our work, the masked graphs are inputs to the decoder because we aim to generate new graphs from masked ones. 
- **Theory:** The aforementioned differences result in distinct derivations of the evidence lower bound (ELBO) in our work compared to [4]. > 2. For weakness 3, I am confused by your explanation. $p_\theta(\delta_{\tilde{v},u}|\tilde{G},Z,d_{\tilde{v}})$ in Fig. 1 describes the condition on $d_{\tilde{v}}$, but Eq. (9) does not. - In Fig. 1, the probabilities $p_\theta(\delta_{\tilde{v},u}|\tilde{G},Z,d_{\tilde{v}})$ correspond to the **true logits** $\delta_{\tilde{v},u}$ (taking values in $\{0,1\}$), which indicate whether two vertices $\tilde{v}$ and $u$ are connected. These probabilities are conditioned on the degrees $d_{\tilde{v}}$ in our probability modeling. - On the other hand, in Eq. (9), we describe the output of the neural network MLP$_\theta^{logits}$, which provides the **predicted logit values** $\hat{\delta} _{\tilde{v},u}$ (taking values in the range $(0,1)$) that indicate the likelihood of the connection between the two vertices. The network does not take the predicted degrees as inputs. - As we state in Line 202, we connect $\hat{d} _{\tilde{v}}$ variable vertices with top logits. To be precise, the selected vertices to connect are those $u\in argTopK(\{\hat{\delta} _{\tilde{v},u}|u\}, \hat{d} _{\tilde{v}})$ (also see Algorithm 2 in Appendix A.3.3). Therefore, even though $\hat{d} _{\tilde{v}}$ is not an input to the neural network in Eq. (9), the decision of whether a vertex $u$ is linked to $\tilde{v}$ is conditioned on both $\hat{\delta} _{\tilde{v},u}$ and $\hat{d} _{\tilde{v}}$. - If the reviewer still finds the notation confusing, we will refine the notation to make it clearer and unambiguous in the revised version. We appreciate the reviewer's input and will address this concern accordingly. --- Reply to Comment 1.1.2: Title: It is our pleasure to discuss with you. Comment: Dear Reviewer K4a3, We deeply appreciate your time and insightful comments. We sincerely hope that our response properly addresses your concerns.
If you have any further concerns, we are more than happy to discuss with you and keep improving this work. Best, Authors
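The argTopK edge-selection rule described in the response above can be sketched in a few lines. The function name and NumPy-based formulation below are illustrative assumptions; the actual procedure is Algorithm 2 in Appendix A.3.3 of the paper:

```python
import numpy as np

def connect_top_logits(edge_logits, pred_degree):
    """Decide which constraint vertices to connect to a re-generated
    variable vertex: take the pred_degree candidates with the highest
    predicted logits (the argTopK rule described in the response).

    edge_logits : 1-D array of predicted logit values in (0, 1),
                  one entry per candidate vertex u.
    pred_degree : predicted degree d-hat of the new vertex.
    Returns the set of indices of the selected vertices.
    """
    k = min(pred_degree, len(edge_logits))
    if k <= 0:
        return set()
    # argpartition finds the top-k indices without a full sort
    top_k = np.argpartition(-edge_logits, k - 1)[:k]
    return set(top_k.tolist())
```

This makes explicit why the degree prediction conditions the final edge decisions even though it is not an input to the logit network: it determines how many of the top-scoring candidates are kept.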
Rethinking SO(3)-equivariance with Bilinear Tensor Networks
Reject
Summary: The paper proposes a permutation- and SO(3)-equivariant network for b-tagging in high energy physics. Permutation equivariance is required since the input data is a _set_ of track particle features, while SO(3) equivariance is required since these features are geometrically scalars or vectors whose rotational transformation law should be respected. The basic architecture is a particle flow network (PFN), which is based on deep sets. It consists of 1) a first SO(3)-equivariant subnetwork, applied to each track particle individually, 2) permutation invariant pooling via summation, 3) a second SO(3)-equivariant subnetwork, and 4) a final layer which extracts scalars. Internally, the network operates on scalars, vectors, and Cartesian 2-tensors. The first two are irreps of order $0$ and $1$, respectively, the third one is reducible (it would in principle decompose according to $1\otimes 1 \cong 0\oplus 1\oplus 2$). The equivariant subnetworks employ a range of different SO(3)-equivariant mappings: - Affine layers, i.e. linear maps followed by bias summation. The linear maps are made equivariant by broadcasting weights over the full representation dimension. Biases are only summed to scalars and 2-tensors since equivariant bias summation is impossible for vectors. - Linear layers that are applied in local frames that are aligned along each individual jet's momentum axis $j$. This alignment is only specified up to rotations around the momentum axis, which is addressed by making the layers $\mathrm{SO}(2)\_j$ _gauge_-equivariant (the subscript labels the axis along which the subgroup is taken). Technically, the network seems to operate on restricted representations $\mathrm{Res}^{\mathrm{SO}(3)}_{\mathrm{SO}(2)} \rho$. Since the momentum axes $j$ move along with SO(3) rotations, the operation as a whole is still SO(3)-equivariant. Two explicit constructions of such SO(2) equivariant maps for vectors and 2-tensors are proposed.
Due to the restricted equivariance requirement, these maps are less constrained than fully SO(3)-equivariant layers. - Nine different _bilinear_ maps which map pairs of features of different types again to scalars, vectors and 2-tensors. This step (in contrast to the others) allows transitions between different representations. - SO(3)-equivariant nonlinearities. For scalars, the authors use conventional ReLUs, while the nonlinearities for vectors and 2-tensors act on their norm (a common approach). There is a single experiment, in which the models are trained as binary classifiers for b-tagging. The full equivariant model improves significantly upon a non-equivariant baseline and a version which ablates 2-tensor features. Strengths: The paper is well written and generally easy to follow. While I am not familiar with the b-tagging task and competing approaches, the empirical improvements over the baseline model seem very significant. Another strength is the use of bilinear mappings - most equivariant networks utilize only linear maps. I really liked the idea of using locally $SO(2)\_j$ gauge equivariant operations besides fully SO(3) equivariant layers. The authors identified the additional geometric structure given by the momentum axes $j$ and addressed it appropriately. Weaknesses: My main concern with the approach is that the linear layers are not shown to be _complete_: in principle one could derive a basis of the most general equivariant linear maps (intertwiners) between the given representations. The authors show only the sufficiency of their layers regarding equivariance, but not the necessity. Indeed, I believe that quite a few of the maps are in fact not the most general ones (more details listed below). To address this issue it is most convenient to work in the basis of irreducible representations, which the authors consciously avoid.
For the following comments, note that scalars and vectors are irreps of order $0$ and $1$, while Cartesian 2-tensors are an irrep tensor product which decomposes according to $1\otimes 1 \cong 0\oplus 1\oplus 2$. Furthermore, intertwiners exist by Schur's lemma only between irreps of the same order, and are for SO(3) scaled identity matrices $\lambda\mathbb{I}$. For all non-trivial SO(2) irreps over $\mathbb{R}$ the spaces of irrep-endomorphisms are 2-dimensional and are spanned by ((1,0),(0,1)) and ((0,-1),(1,0)). - The intertwiners for scalars and vectors (presented as "broadcasting") are complete since these are irreps. However, the broadcasting for $1\otimes 1$ tensors is overly restrictive, and there are actually three parameters, one for each irrep in the 2-tensor. The claim that the intertwiner space for 2-tensors is one-dimensional is repeated in line 172. - The solutions for biases are indeed complete: biases can only be summed to trivial irreps, which exist with multiplicity 1 in scalars and 2-tensors, but not in vectors. - It is furthermore possible to have intertwiners between scalars or vectors and 2-tensors, since the latter contain irrep orders 0 and 1 as subspaces (trace and antisymmetric part). These solutions are not used. - The SO(2)-intertwiners between SO(3)-vectors in 3.1.1 seem complete, however, this is not proven but just claimed ("The set of all SO(2)-equivariant maps is _exactly_ ... [proposed parametrization]"). One can easily prove the completeness by observing that order 1 SO(3)-irreps decompose under restriction into the direct sum of an order zero and order 1 SO(2)-irrep, whose intertwiner spaces are 1 and 2-dimensional, respectively. The proposed parametrization has the same dimensionality. - The SO(2)-intertwiners between Cartesian SO(3) 2-tensors are again not proven to be complete ("[equivariance] is satisfied when ... [proposed parametrization]" just claims sufficiency).
Going to the irrep basis shows again that there are more possible solutions. - In the case of bilinear maps, there are again some missing operations, e.g. combinations of two 2-tensors that result in a vector. All possible solutions follow directly from Clebsch-Gordan decompositions, which are well known for SO(3). - An alternative approach to TReLU would be to apply three independent nonlinearities to the irreps contained in the 2-tensor. The equivariance of TReLU is not explicitly shown (this might be quite trivial to show). As mentioned above, addressing these concerns would be easiest by switching to the irrep basis. As this would require a major revision, I am not sure whether this is the right way forward for this submission, or should rather be addressed in follow-up work. An alternative way to address these concerns would be to explicitly discuss completeness of intertwiner bases in general and prove it for each operation in which it holds. The equivariance of operations like TReLU or Eq. (4) should also be proven. Another issue is that it is not well explained how the overall network remains SO(3)-equivariant despite intermediate operations only being SO(2)-equivariant. This is one of the most interesting aspects of the paper and deserves more attention. It should also be discussed how this relates to other _gauge equivariant_ / _coordinate independent_ networks. The alignment of frames along the $j$-axis with remaining SO(2) ambiguity seems very similar to the SO(2)-structure (SO(2)-bundle of frames) considered by (Weiler et al. 2021), specifically their figure 53 (right). The transition from SO(3) to SO(2) features is currently not sufficiently explained. I believe that the authors assume the restriction function $\mathrm{Res}^{\mathrm{SO}(3)}\_{\mathrm{SO}(2)}$ - please clarify this! The group actions and the domains and codomains of A and L in sections 3.1.1 and 3.1.2 are nowhere defined. 
One can read them off between the lines, but they should be stated more clearly. I am somewhat worried about the extent of the experiments. It would be nice to have a more thorough empirical analysis, for instance giving more ablations, showing training curves, investigating whether the equivariance is exact or due to numerical errors only approximate. Not all ablations discussed in the text of section 5.2 are shown in the table. Finally, I wondered about the input features of the baseline, are they the same as for vector and tensor PFN? There should really be two baselines, with either set of features, to clarify that the improvement is really coming from the architectural changes instead of the input alone. How exactly are the different models made comparable? Do they have the same number of parameters or the same computational cost to ensure a fair comparison? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Most of my questions and suggestions are given in the "weaknesses" section. In addition I wondered about the following points: - How can the SO(2)-equivariant layers be applied to the summed features, i.e. in the subnetwork $F$? As I understand, each jet comes with its own momentum axis $j$, but which axis is used after the summation? - Why are the 2F features in the bilinear layers split in two groups? Other models consider all possible pairings of features (which scales quadratically instead of linearly in F). - Is there a specific reasoning behind the norm-capping of VReLU and TReLU? Prior work like (Worrall et al., 2017) or (Weiler&Cesa, 2019) used ReLU-based "norm-nonlinearities" which do not saturate - this seems more intuitive and closer to the usual behavior of ReLUs to me. - In how far is the saturation of VReLU and TReLU required to stabilize the training? Could you analyze or substantiate this claim with experiments? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The main limitation is in my opinion that the solutions are incomplete - which should be addressed more clearly. Other limitations, like the limitations of the bilinear ops due to not using Clebsch-Gordan decompositions or the lack of investigating the space of tensor nonlinearities are adequately addressed. Negative societal impacts are not to be expected. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
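As an aside to the review's three-parameter claim for 2-tensor intertwiners, the counting can be written out explicitly. The following sketch (standard representation theory, not the paper's notation) splits a Cartesian 2-tensor $T$ into its trace, antisymmetric, and symmetric-traceless parts, which carry the order 0, 1, 2 irreps; Schur's lemma then permits an independent scale on each part:

```latex
T \;=\; \underbrace{\tfrac{1}{3}(\operatorname{tr}T)\,\mathbb{I}}_{\text{order }0}
\;+\; \underbrace{\tfrac{1}{2}\left(T - T^{\top}\right)}_{\text{order }1}
\;+\; \underbrace{\tfrac{1}{2}\left(T + T^{\top}\right) - \tfrac{1}{3}(\operatorname{tr}T)\,\mathbb{I}}_{\text{order }2},
\qquad
\Phi(T) \;=\; \lambda_{0}\,\tfrac{1}{3}(\operatorname{tr}T)\,\mathbb{I}
\;+\; \lambda_{1}\,\tfrac{1}{2}\left(T - T^{\top}\right)
\;+\; \lambda_{2}\left[\tfrac{1}{2}\left(T + T^{\top}\right) - \tfrac{1}{3}(\operatorname{tr}T)\,\mathbb{I}\right].
```

Broadcasting a single weight over the whole 2-tensor corresponds to $\lambda_0=\lambda_1=\lambda_2$, which is why the one-parameter parametrization criticized in the review is overly restrictive.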
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful response to our paper. We address the reviewer's specific questions below: * "How can the SO(2)-equivariant layers be applied..." They can be used in exactly the same way. Because each parallel result of the subnetwork $\Phi$ is a proper vector quantity, so is their sum. The resulting sum can be passed into further $SO(2)_j$ layers, where $j$ could either be the original jet axis, which is common to all of the particles in the sum (as is the case in our implementation), or one could parameterize the layer with any other vector-valued $j$. * "Why are the 2F features..." This is an arbitrary choice made purely for simplicity and to enable faster-running experiments. Still, with this simplified approach we find an impressive improvement over the baseline, which highlights the main point of the paper: demonstrating the benefit of the equivariant architecture. It would of course be possible (and presumably more expressive) to include all pairwise combinations, or to even use an attention mechanism to construct pairs. We hint at this possibility in the conclusion, and leave for future work the integration of the modular, equivariant elements presented here into more sophisticated models such as transformers and GNNs. * "Is there a specific reasoning behind norm-capping..." * "In how far is the saturation of VReLU and TReLU..." These questions are related: indeed, the reasoning for saturating activations was to prevent exploding magnitudes. We do not have particular experiments to this effect, _per se_; generally we were just completely unable to train reasonably deep networks without the saturating activation (we experienced exploding gradients). There may be other solutions to this problem, but our activation functions seem to work well empirically. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying some of the questions.
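For readers following this exchange, the two ingredients under discussion can be sketched numerically. The parametrization and function names below are illustrative assumptions (the paper's actual Eq. (4) and VReLU are not reproduced here); the generic $SO(2)_j$-equivariant linear map on 3-vectors is spanned by $jj^{\top}$, $(\mathbb{I}-jj^{\top})$, and the cross-product matrix $[j]_\times$, matching the 1+2 intertwiner dimensions noted in the review:

```python
import numpy as np

def so2j_linear(x, j, a, b, c):
    """Sketch of the generic 3-parameter SO(2)_j-equivariant linear map on a
    3-vector x, for a unit axis j: a scaling of the component along j, a
    scaling of the transverse component, and a rotation-generator term."""
    along = np.dot(j, x) * j
    return a * along + b * (x - along) + c * np.cross(j, x)

def saturating_vector_relu(x, cap=1.0):
    """Norm-based activation that rescales a vector by a saturating function
    of its norm (the rebuttal's motivation: prevent exploding magnitudes).
    The exact form of the paper's VReLU is not given here; this is an
    assumed variant that caps the norm at `cap`."""
    n = np.linalg.norm(x)
    return x * (min(n, cap) / n) if n > 0 else x
```

Both maps commute with rotations about $j$ (and the norm-based activation with all rotations), which can be checked numerically by applying a rotation matrix before and after the map.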
As there is no reaction regarding my concerns and suggestions in the weakness section I am sticking with my original rating.
Summary: The paper presents a method to handle complex geometric structured data, specifically SO(3) representations, through SO(3)-equivariance and judicious symmetry breaking. The technique improves computational efficiency and enhances the performance of a network operating on these representations. When applied to b-tagging, a High Energy Physics classification problem, it yielded a 2.7x improvement in rejection score over conventional methods. Strengths: 1. It makes sense in machine learning to explore more efficient representations for data with symmetric structures. This is particularly important in many scientific fields such as HEP and materials science. 2. The proposed method shows a significant performance improvement compared with the baseline method. Weaknesses: 1. Some explanations in the paper are difficult for ML researchers to follow. This work deeply involves the task of B-jet identification at LHC experiments, but I’m not sure if these are sufficiently interesting for the ML and AI community. 2. In the related works, the authors mentioned that there had been numerous prior works on SO(3) equivariant models, but in the experiments only a PFN on simulated datasets is implemented for comparison. The numerical evidence in this paper does not look sufficient. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some comments: 1. It would greatly enhance the clarity of the paper if the authors provided an initial explanation of the concept of SO(3) symmetry. While the authors may be well-versed in this notion, it is essential to remember that many readers within the machine learning community may not be familiar with it. By providing a concise and accessible explanation, the authors can bridge the knowledge gap and ensure that readers grasp the significance of SO(3) symmetry in the context of their work. 2. In the introduction section, the authors make a claim that the proposed method is simpler compared to existing approaches.
To strengthen this claim, it would be beneficial for the authors to support it with numerical or theoretical evidence. By presenting concrete results or theoretical analyses, the authors can substantiate their assertion and enhance the persuasiveness of their argument. This evidence will enable readers, including myself, to be convinced of the method's simplicity and its potential advantages over existing techniques. 3. The design of the new layers appears to be somewhat ad-hoc. While the paper demonstrates the equivariance of these layers, it would be valuable to explore whether alternative designs can achieve the same goal and what differentiates them. By discussing potential alternative designs or comparing the proposed layers with existing approaches, the authors can provide a more comprehensive understanding of the design choices made. This analysis will strengthen the paper's contribution by highlighting the unique aspects of the proposed layers and providing insights into their advantages over other potential designs. 4. If the main contribution of the paper lies in the introduction of the new layers, it would be beneficial to provide additional numerical evidence using different datasets. This would further validate the effectiveness of the proposed layers across a range of scenarios and reinforce their potential applicability beyond specific domains. Alternatively, if the focus of the paper is primarily on the performance improvement in the context of the LHC experiment, it would be essential to demonstrate the significance and relevance of this task. By providing additional context, motivation, and potentially exploring the broader implications of the performance improvement, the paper can better engage readers and establish the attractiveness of the addressed problem. 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Title: late rebuttal Comment: Apologies for the late response. We thank the reviewer for a thoughtful response to our paper. We address points below: * Some explanation in the paper... The b-tagging problem is an example of a class of symmetry-related problems, which are of interest to a much wider audience. We expect our novel methodology for dealing with restricted equivariance could be adapted to other fields with similar symmetry considerations. Our method is also versatile in the sense that it can be extended to other architectures such as GNNs. To facilitate the physics-specific description of this work, we have also attached a supplementary figure with an illustration of the b-jet coordinate system. * In the related works... Indeed, while we acknowledge prior work on SO(3)-equivariant models, our method is developed specifically to handle cases where the global SO(3) symmetry is expected to be broken by the underlying physical/generative data process. We are not aware of existing work that handles this case. Our experiments use high-fidelity physics simulators that are standard in the field of HEP. This is the only way to train and evaluate ML models, as it is technically impossible to produce labelled real data. The baseline PFN (a.k.a. Deep Sets) is chosen as it is a standard benchmark model on these types of tasks in the physics/ML literature. But more importantly, our equivariant models are elaborations of the basic PFN architecture, so using a "plain" PFN model as the baseline is a natural choice to demonstrate the gain in performance directly attributable to our proposed methods, rather than architectural differences in the models. In other words, the baseline can be seen as a most extreme "ablation" where all of the improvements we propose are omitted. * It would greatly enhance the clarity ... explanation of...
SO(3) While we would love to make the paper as self-contained as possible, this is a difficult balance to strike given the page limit. We do feel that in the ML literature especially, there is a large and well-established body of work on equivariance stretching back several years with highly sophisticated foundations in group theory. We therefore feel that for the intended audience, the concept of the SO(3) group should be fairly well known; however, we will try to include a brief overview of the SO(3) group in the introduction if possible. * In the introduction ... Thank you for pointing out the possible ambiguity in this statement. Our intention here is to convey that the technical complexity of actually describing/understanding/implementing the method is low. For example, we feel most would agree that working with vector dot products, cross products, etc., is much more intuitive and straightforward compared to, say, Clebsch-Gordan theory. Therefore we don't expect to have numerical evidence for this claim! However, we will certainly think of a better way to phrase this argument. * The design ... This is an interesting comment, and upon re-reading, it seems a reasonable impression. We will try to address it here in some detail, but will also work on the text to make the following arguments more clear. Firstly, the SO(2) layers are not ad-hoc in that we specifically analyzed criteria for equivariance (i.e., it must commute with $SO(2)_j$ rotations) and then came up with the general case in Eq. 4. The bilinear operations are perhaps ad-hoc, in that they are specialized and reduced cases of Clebsch-Gordan products of SO(3)-irreps. However, our physical intuition (i.e. inductive bias) suggests that nearly anything interesting one might want to compute on the given data (comprised of vectors and scalars), should be expressible in terms of the bilinear operations chosen.
As for VReLU and TReLU: for VReLU we state a theorem that there's only one general form; however, the actual form of the scalar function $f(|\vec{x}|)$ we used, i.e. a saturating ReLU, is indeed ad hoc. We describe one of a few that we tried, which was determined to work well empirically. The TReLU similarly is totally ad hoc, and we admit that directly in the paper (while also pointing out that a more general version would be a function of the three principal invariants of the tensor). Our primary motivation and novel contribution in this paper is in dealing with the reduced/broken symmetry in an effective way, and we feel that the overall method is demonstrated soundly, even if some of the detailed choices are somewhat arbitrary. * If the main contribution ... We agree that it would be more interesting to perform experiments on additional datasets. The dataset chosen of course is the one which directly motivated the architecture in the first place, due to the specialized symmetry of the underlying generative process. We are open to suggestions of additional point-cloud datasets that have a similar symmetry feature, if the reviewer is aware of any, and would be happy to conduct additional experiments provided they can be done in reasonable time. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed response. I'm not an expert on this topic, so I cannot evaluate how much novel technical contribution is given in this work. Nevertheless, I like this paper because it is quite well-written (the background part has been promised to be improved). So I tend to give an acceptance score.
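The kind of "physically intuitive" bilinear operations the rebuttal refers to (dot products, cross products, and their relatives) can be illustrated with a short sketch. These are generic examples of equivariant bilinear maps between scalar, vector, and 2-tensor features, not necessarily the paper's exact set of nine:

```python
import numpy as np

# Representative equivariant bilinear operations between feature types.
def dot(v, w):
    """vector x vector -> scalar (invariant under rotations)."""
    return np.dot(v, w)

def cross(v, w):
    """vector x vector -> vector (equivariant for proper rotations)."""
    return np.cross(v, w)

def outer(v, w):
    """vector x vector -> 2-tensor, transforming as T -> R T R^T."""
    return np.outer(v, w)

def contract(T, v):
    """2-tensor x vector -> vector via contraction."""
    return T @ v
```

Each map's equivariance can be verified numerically: for a proper rotation $R$, `dot` is invariant, `cross` and `contract` commute with $R$, and `outer` conjugates by $R$.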
Summary: The authors propose an SO(3)-equivariant network operating on scalars, vectors and 2-tensors. They identify the corresponding equivariant linear layers and come up with a strategy to mix different representations. Furthermore, SO(2)-equivariant linear layers are proposed to handle the scenario in which the SO(3) symmetry breaks into a subgroup SO(2) axial symmetry along a specific axis $\hat{j}$. Finally, they demonstrate how it is applied to b-tagging in High Energy Physics (HEP), where the data is rotationally symmetric w.r.t. the jet axis $\hat{j}$. Strengths: - The authors provide a good and simple implementation of SO(3) equivariant networks on scalars, vectors and 2-tensors. The weight matrix is carefully designed to preserve the symmetry. The way to mix different representations is also intuitive and easy to understand. - The discussion on axial symmetry is well motivated and easy to follow, and the analysis is solid and sound. Weaknesses: - [Novelty] The idea of tensor-product-based representations is not new [1]. The main method (except the SO(2) part) looks like a simple variation and the technique involved is pretty standard. Adding discussion and comparison with existing tensor-product-representation-based methods could make the work more solid. - [Evaluation] To show the effectiveness of mixing and SO(2) linear layers, I think it is better to put more intermediate results (e.g., w/ and w/o SO(2) linear layers) in the main table. - [Minor issues]: Eq. (1, 2, 3) use Einstein summation without declaration, which may cause confusion to readers without a physics background. Line 168 should be "isotropic linear neuron of Eq. (2)" instead of "Eq. (1)". [1] Finkelshtein, Ben, et al. "A simple and universal rotation equivariant point-cloud network." _Topological, Algebraic and Geometric Learning Workshops 2022_. PMLR, 2022.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - How to understand Line 54 "We show that this kind of equivariant neuron is generally only possible with the introduction of order-2 tensor representations"? For me it seems everything should work well even if we only use scalars and vectors (like VectorPFN). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are not included in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful response to our paper. We address points below: * "The idea of tensor-product-based neurons is not new..." We thank the reviewer for bringing this work (and others) to our attention, and will add a brief discussion comparing and contrasting our method. The primary difference that we emphasize is the partial symmetry breaking aspect of our network's architecture. * "To show the effectiveness of mixing..." Indeed, we had ablation studies to this effect in an earlier preprint, but they were removed late in the editorial process. We have attached a PDF with a supplemental results table showing various ablations for the vector network. Unfortunately due to time constraints, only one ablation is available for the tensor network, although we would be happy to provide complete results within a few days if the reviewer is interested in viewing them during the discussion period. * "Einstein summation..." Thank you, we have clarified the use of the Einstein summation convention, and fixed the mislabelled equation. We have also added a figure illustrating the physical geometry of the b-jet events. * "How to understand Line 54..." Thank you for pointing this out; upon rereading we agree it is not very clear. This is a reference to the fact that the $SO(2)_j$-equivariant neurons introduced in Eq. (4) intrinsically require the construction of an order-2 tensor, namely $jj^T$. Briefly, the argument starts from a vector-valued neuron of the form $\vec{y} = A \vec{x}$. However, following our discussion regarding novelty above, and a similar discussion with reviewer NLGv, we prefer to remove this statement from the introduction and instead emphasize and clarify the order-2 tensor properties of the $SO(2)_j$ neuron.
Summary: This work presents a lightweight architecture based on scalars, vectors, and tensors for learning in 3D, subject to $SO(2)$-equivariance in a known direction that varies by sample. They are motivated by the jet-tagging problem in high energy physics, in which a given batch is symmetric under rotations about a certain axis (which may vary between batches), and test their framework on this problem. Strengths: Restricting to cross products and matrix products is new relative to previous work, which tends to focus more on the expressivity via irreducible representations of higher orders. The jet HEP dataset is also not commonly used in similar papers, and could provide a useful dataset for future papers. The proposed operations do indeed seem to be equivariant, and they are written out explicitly. Weaknesses: 1. The proposed architecture is a simple restriction of many other existing architectures. The “tensor bilinear layer” is a special case of the more general CG product that is now standard practice in other architectures, and it is not clear what benefits it has over a more general CG architecture. The nonlinearity of scaling by the vector or tensor norm is also not new: it is subsumed by the nonlinearity of applying an arbitrary function to the norm, and then scaling the vector or tensor by this value, which similarly was widespread in foundational works such as Tensor Field Networks (Thomas et al 2018) and subsequent works. 2. It does not seem like this architecture is particularly expressive, due to the use of low order features and the simple nonlinearity. (Note that prior work on the universality of point cloud architectures, and equivariant architectures more generally, usually requires making statements about polynomial approximation, where higher order tensor products are required to approximate higher degree polynomials — see e.g. Lorentz nets, Bogatskiy et al 2020, or Dym et al 2020 on the universality of point cloud architectures.
Therefore, it seems to me that these simpler layers will likely have worse approximation properties.) 3. Based on the paper’s description (but the authors can correct if this is not the case), the motivating problem is really SO(2)-equivariant about a known but sample-dependent axis. Therefore, the discussion of the SO(3) CG product and other SO(3) architectures is somewhat misleading/confusing. It is also not explained clearly how using this paper’s SO(2)-equivariant architecture compares to the standard approaches in HEP based on the “transverse and longitudinal projections”. 4. The baselines and experiments are not sufficiently developed. For example, the only baseline is a DeepSet-style permutation invariant architecture, presumably on the raw 3D coordinates. However, the authors should compare to approaches using the apparently more standard “transverse and longitudinal projections”, as well as to SO(2)- or SO(3)-equivariant baselines on the raw coordinates. The paper claims that using a physically intuitive restriction of the space of possible operations is beneficial, but does not demonstrate this with an ablation study or by using e.g. a full CG product. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What exactly are the “transverse and longitudinal” projections that other methods employ on this data, and why do other methods use them? Is returning to the full 3D coordinates with SO(2)-equivariance likely to be more efficient than reducing to these coordinates in the first place, from a sample complexity perspective? 2. Line 113 says “these methods all essentially rely on reducing the problem to vectors in a 2-dimensional plane”. Can the authors elaborate on what this means? If these methods use the jet direction to canonicalize the coordinate system, for example, they could easily use frame averaging (Puny et al 2021) to obtain equivariant outputs — but I am not sure if this is what they are doing or not. 3. 
What’s the difference between this approach and using a standard method for SO(2)-equivariance, such as circular convolutions about the direction j? 4. Line 48 claims that the tensor bilinear layer consists of “commonly known and physically intuitive operations”. Can the authors elaborate on the problem-specific physical intuitions that come with these operations? Is there some intuitive benefit to using physically intuitive operations as opposed to a more general framework relying on the full CG product? 5. Does the HEP problem enjoy translational symmetry as well? Suggestions: 1. A diagram of the experimental set-up and the individual particle directions would make the paper and its motivation much more clear. 2. Order 2 tensors can be defined somewhere early in the paper. 3. The abstract refers to this method as “judicious symmetry breaking”. I think this is a misleading phrase, when really it is just that the symmetry group is a subgroup of SO(3) (rather than all of SO(3)). In other literature, symmetry-breaking refers to situations more complex than simply having (consistently) a smaller group than SO(3); for example, addressing the ambiguity between 6s and 9s in the MNIST digit dataset under rotation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: There is no potential negative societal impact. The paper does not address a potential lack of expressivity of the architecture. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful response to our paper. We address specific points below: * The baselines and experiments are not sufficiently developed... This is a misunderstanding; the baseline model _does_ receive as input the standard projections z_0 and d_0, cf. L234, in addition to everything else the other models get. The equivariant models _do not_ receive z_0/d_0, as they do not belong to a proper tensor representation. Regarding ablations, we agree that it's better to show the results of these studies, and have attached an expanded table of results for a series of ablations for the vector network. Unfortunately, due to time constraints, only one ablation is available for the tensor network, although we would be happy to provide complete results within a few days if the reviewer is interested in viewing them during the discussion period. * "What exactly are the transverse and longitudinal projections..." Thank you for pointing out that we did not include a formal definition of d0 and z0; we will add these. Briefly, it is sufficient to know that they are defined in terms of somewhat-complicated expressions involving cross products, dot products, and scalar products of momentum and position vectors; this was the original ansatz to design a network that is equipped with exactly these products as "primitive" operations. As for why d0 and z0 are preferred, it seems to be historical. Physicists had been identifying b-jets for decades before tenable multi-variate methods were available. Projecting 3D information into a 1D quantity makes things easier in a "cut-based" event selection. One of our motivations is to demonstrate to practitioners that full 3D information can be much more powerful. * Line 113... This is a subtle claim and it is challenging to balance the level of detail. 
Briefly, the "2D method" is to consider polar coordinates $(\Delta \theta, \Delta \phi)$ of each particle's momentum direction relative to the jet axis, which is considered locally flat enough to associate particles with points on a 2D plane. For methods that consider only the "flow" of particles through the detector, it's sufficient to attach features such as charge, energy, etc. to these points and treat it as a 2D point cloud. In the b-tagging problem, the challenge in treating it as a purely SO(2) problem on a 2D coordinate system (e.g. to do circular convolution) is that the most useful observable, the impact parameter, is a fundamentally 3-dimensional quantity, and has no faithful representation in 2D. I.e., the manner in which the impact vector transforms under SO(2) rotation depends on its 3D orientation w.r.t. the jet axis. This is precisely the issue that our paper seeks to address: we "embed" an SO(2)-equivariant network within an SO(3)-equivariant network to increase expressivity by exploiting the SO(2) restriction as much as possible. We also note that these methods are not mutually exclusive! E.g., circular convolution could be effected by sampling from $SO(2)_j$ and applying it to vector/tensor-valued representations at any point in the network to yield a convolutional signal. * "Line 48..." As mentioned earlier, the original ansatz is due to the explicit formulation of the d0 and z0 observables, in terms of scalar, dot, and cross products of vectors. The extension to order-2 tensors was an afterthought, upon considering that certain terms in the vector model (specifically, the dyadic product $jj^T$) are themselves rank-2 tensors. While these operations and associated representations can be thought of as a special case of CG products and SO(3) irreps, geometric tensors (including scalars and vectors) have their own straightforward algebra that is well known and comparatively easy to work with. 
From a (classical) physics perspective, there are almost never circumstances where representations other than scalars, vectors, and order-2 tensors are called for. Therefore, our assumption (i.e. inductive bias) is that anything reasonably interesting that the network would like to compute should be possible with these products and representations. CG theory of course famously arises in quantum mechanics; however, one can see this necessity as arising from non-commutative operators, indistinguishable particles, and other circumstances in the theoretical treatment that physicists know as "First Quantization". In brief, compare the procedure for adding angular momenta in the classical case (a simple vector sum) versus the quantum case (a CG direct sum of tensor products); why use the latter when the former is expected to be sufficient? * "Does the HEP problem enjoy translational symmetry?" A good question, but it does not! We are measuring momentum eigenstates which emanate from the collision point to a far away surface (the detector). Observations cannot meaningfully be translated in either momentum or position space. The appearance of a position vector in this problem at all is largely the reason that extra care has to be taken in dealing with 3D rotations (as discussed above). We can include a brief discussion of this point, as another reviewer asked the same question. * "A diagram of the experimental setup..." Good suggestion; we've included a diagram in the attached supplemental PDF, and propose to add it as a new figure in the next revision. * "The abstract refers to ... symmetry breaking..." We feel our notion of symmetry breaking is consistent with common usage in physics; however, we would be interested in discussing and considering more carefully how it could be (mis)construed in the ML literature. From our perspective, the SO(3) symmetry is formally broken by the jet axis which fixes a ``special'' direction in space. 
That is, for a fixed $j$, we add terms to Eq. 4 which do not commute with arbitrary rotations, but which DO commute with arbitrary rotations about $j$, which for any given jet is expected to be a nearly exact symmetry of the underlying (physical) generative process. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: First, I unfortunately cannot find the supplemental PDF that the authors said they uploaded; to be fair (in case it was a mistake or I am missing it), I will write this response assuming that the PDF contains exactly what the authors claimed. I thank the authors for their detailed responses to all reviewers, which clarified some points that were originally unclear to me. In fact, I would strongly recommend that the equivariance proof provided in the rebuttal to Reviewer NLGv be included in the main paper, rather than the appendix (as suggested by the authors); I think this is quite important for the reader to understand. Overall, the use of SO(2)-equivariance in an SO(3)-equivariant architecture has some novelty. However, my concerns regarding the expressiveness of the architecture and comparison to appropriate baselines remain. The authors claim that “anything reasonably interesting that the network would like to compute should be possible with these products and representations”, but this should ideally be supported by experiments. For these reasons, as well as what I found to be an unclear presentation, I still do not think this paper is ready for publication. However, I will at least upgrade my score to a “weak reject,” and encourage the authors to address the reviewers’ comments in the next revision and resubmit.
NeurIPS_2023_submissions_huggingface
2023
Summary: Before starting, I must mention that I am not a physicist, and therefore, I have focused on the machine learning aspects of the paper. ### Summary This paper proposes the use of SO(3) equivariant neural networks for B-tagging. The method proposed builds upon Deep Sets and provides a quite constrained formulation of SO(3) equivariance. The paper further indicates that breaking SO(3) equivariance locally can be used to get better representations for the task at hand, while maintaining global equivariance to that group. Strengths: The paper is presented very clearly, is well-structured, and is in general easy to follow and understand. Weaknesses: ### Concerns * My main concern is that, to the best of my knowledge, it is not possible to obtain a certain equivariance without all networks being equivariant to that group. In particular, I do not understand how global SO(3) equivariance can be locally broken into SO(2)_j equivariance while preserving global SO(3) equivariance. To the best of my understanding, as shown in roughly all the papers on group equivariance, in order to have a network be equivariant to a certain transformation, all layers need to respect that equivariance. I think that clarifying how this is possible is *crucial* for the paper. It is important to note that the paper simply states this and does not provide proofs or analyses regarding this statement. * I am very concerned with regard to the expressivity of the proposed algorithm. For instance, it has been shown in several works that equivariance can be obtained in expressive ways that do not have such hard restrictions as having biases be equal to zero –note that several similar very constraining restrictions are also defined for each of the mappings in the networks–, e.g., E(n)-equivariant Steerable CNNs, among many others. From what I understand, this work builds mostly upon the Deep Sets literature. 
However, this is by far not the most general way to obtain equivariance to a certain symmetry. I believe that the authors should at least state this clearly in the paper. * The paper performs multiple ablations that are not found in the submission other than by conclusive statements. For instance, in line 322, the authors state: “Finally, we note that neither family of models performs even as well as the baseline, when no bilinear operations are allowed”. I believe that clearly showing the results of these ablations will strengthen the contributions of the paper. In general, such conclusive statements on their own are often vague and not very informative. * In the final part of the conclusion it is stated that “it should be also possible to use these models for creating equivariant Graph and attention based networks”. Given that these families of networks –as stated earlier– are much more general than Deep Sets for equivariant formulations, I believe that this statement is not really easy to accomplish in practice. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: ### Comments * Line 34 - 35. What about translations? ### References This paper misses several references in several parts of the paper. For instance: * The second paragraph should include citations. * Line 52. There are several works that perform equivariance relaxations. * This paper is very similar to recent papers on Clifford Algebras, e.g., Clifford Group Equivariant Neural Networks. A discussion wrt these methods should be included. * This paper also seems very similar to Vector Neurons. A discussion wrt such methods should be included. * I have seen several nonlinearity formulations that look very much like the VReLU / TReLU, e.g., in Tensor field networks, Clifford Group Equivariant Neural Networks, etc. Please cite and discuss how the proposed nonlinearities are better. ### Introducing concepts This paper does not introduce several things properly. 
For instance: - Equation 4 falls completely out of the blue. To me, it is not clear how the authors have reasoned to arrive at this. - It is never explained what \alpha and \beta are in Figure 1. - Why is x in |x| not bold? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: See previous responses. ### Conclusion In conclusion, I believe that the paper needs some work before I am able to support acceptance. I believe that there are several factors that need to be clarified, e.g., how to get SO(3) equivariance with only SO(2)_j layers, for this to be a strong submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for a thoughtful response to our paper. We address specific points below: * "My main concern is that..." We agree that this point is of central importance to the entire work. The short answer is that the network is, in fact, globally equivariant w.r.t. SO(3). The reason for this is that the $SO(2)_j$-equivariant layers are themselves parameterized by the directional vector $j$, which transforms as $j \rightarrow R j$ under the coordinate rotation $R^{-1}$ in SO(3), so that the _layer itself_ can be thought of as transforming under SO(3) in such a way as to ``cancel out'' the local symmetry breaking. In particular, it can be shown that the neural weights $A$ in equation (4) transform formally as an order-2 tensor: $A \rightarrow R A R^T$. Therefore, the vector layer activation $y = A x$ satisfies SO(3) equivariance by transforming as a vector: $Ax \rightarrow R A R^T R x = R A x = R y$. The case for the tensor layer is analogous. The sense in which SO(3) is broken is that for _fixed_ $j$, the $SO(2)_j$ layer proposed here includes additional terms that would otherwise break $SO(3)$ equivariance. Since the underlying physical symmetry in the data being considered is formally $SO(2)_j$, the vector $j$ is considered fixed. In other words, while the network possesses a global symmetry, the data need not. In our example, any individual datum breaks SO(3) down to $SO(2)_j$ by specifying a particular direction. We show that we can take advantage of this by adding more expressive terms to the network that would not otherwise be allowed. We thank the reviewer for underscoring the subtlety of this argument, and would be happy to include some of the clarifying discussion in a revised draft. Due to page limits, we would also propose to include the detailed equivariance proof outlined here as an appendix. * "I am very concerned with regard to..." 
We certainly do not argue that the method proposed is a path to a ``most general'' equivariant model. For instance, we are aware of work by Weiler, Jenner, et al. that establish group-convolutional methods as general linear equivariant maps under many groups and conditions. The method proposed here is of course not convolutional at all; while G-convolutions may be formally general for a given form of equivariance, many useful architectures are not purely convolutional, even when equivariance is at play, for example, patch-based CNNs. Instead, our intent is to design generalized analogs to perceptron/dense/fully-connected neurons which respect the particular symmetry of interest. The use of a Deep Sets architecture is for simplicity, given the point-cloud nature of the dataset, but is ultimately irrelevant to concerns of equivariance. As stated in the conclusion, we anticipate that these neural layers can be used as elements of other, more sophisticated equivariant models. For example, the vector and tensor layers proposed here could be used for projection operations in attention-based models, or to build edge convolution networks as in DGCNN, etc. * "The paper performs multiple ablations..." We thank the reviewer for pointing out this inconsistency; this remark was from an earlier preprint version, and was meant to be removed. Due to limited time to run ablations (particularly for the tensor model, which is slower to train), we decided to omit them late in the editorial process. However, as it may be of interest, we are attaching a supplemental figure with an extended table of results, showing the detailed ablation studies for the vector PFN model. Unfortunately, only one ablation study is available for the tensor model (and with reduced statistics), again due to time constraints. We agree that this extended table would be of general interest to readers, and propose to include it in the next revision of the paper. 
Moreover, we should be able to produce analogous ablation studies for the tensor models within a few days, if the reviewer would like to see them during the discussion period. * "In the final part of the conclusion..." We certainly agree that these are not _as_ easy as the Deep Sets, hence our decision to relegate this application to future work. * "What about translations?" In this paragraph we are speaking generally and will rephrase. However, in our application, translation is not a relevant symmetry. The reason is that we are measuring momentum eigenstates which emanate from a fixed collision point to a far away surface (the detector). Experimental observations cannot meaningfully be translated in either momentum or position space. * References We agree with the reviewer on many points, and will fill out additional references as suggested. Regarding clifford algebras, we are unfamiliar with the work but presume you are referring to "Clifford Group Equivariant Networks" arxiv:2305.11141, which appears to have been submitted to arXiv one day _after_ we submitted the present manuscript for your consideration :) Regarding activation functions, we agree that additional references are warranted. However, we are in particular not aware of a saturating ReLU unit to deal with exploding magnitudes. If the reviewer has a particular reference to give, we would be grateful. * Equation 4... This is a purely geometrical result: the dyadic product $jj^T$ is an operator that, when applied to a vector, essentially projects the component parallel to $j$, and $I-jj^T$ therefore projects the perpendicular component. I.e., $jj^T v$ can be rewritten as $(j \cdot v) j$. We can include this brief comment in the text when Eq 4 is introduced, unless the reviewer feels an appendix is merited. * alpha and beta in figure 1 This is an oversight; the caption should explain that $\alpha$ and $\beta$ are feature indices. 
* conclusion Again we agree that this point is of central interest to the overall work. If you found that our discussion above adequately clarifies the issue, we would be happy to include it in a revision to the main body of text. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Dear authors, Thank you very much for your response. **We thank the reviewer for underscoring the subtlety of this argument, and would be happy to include some of the clarifying discussion in a revised draft. Due to page limits, we would also propose to include a detailed equivariance proof outlined here as an appendix.** +-> Unfortunately, I still cannot see how this can be achieved without breaking the global SO(3) equivariance. Could you please post a proof here so that I can engage in discussions about this with the other reviewers / ACs during the discussion period? **The method proposed here is of course not convolutional at all; while G-convolutions may be formally general for a given form of equivariance, many useful architectures are not purely convolutional, even when equivariance is at play, for example, patch-based CNNs.** +-> I understand. I think that making this clear in the paper would help readability. In the same vein, tempering statements about the generality of the method could better define the scope of the paper. **We agree that this extended table would be of general interest to readers, and propose to include it in the next revision of the paper. Moreover, we should be able to produce analogous ablation studies for the tensor models within a few days, if the reviewer would like to see them during the discussion period.** +-> It is perhaps too late to add them during the rebuttal period. However, I believe that these results would be very interesting for the general audience. I encourage the authors to include these results even if it is for the camera-ready version of the paper. 
**Regarding clifford algebras, we are unfamiliar with the work but presume you are referring to "Clifford Group Equivariant Networks" arxiv:2305.11141...** +-> I see. Please ignore this comment :) **We can include this brief comment in the text when Eq 4 is introduced, unless the reviewer feels an appendix is merited.** +-> A brief comment would be sufficient. Thank you! ### Summary Altogether I am happy with the authors' response. However, I don't feel comfortable supporting acceptance before I understand how exactly it is possible to break equivariance locally but not globally. I encourage the authors to provide a detailed proof in this regard. Best regards, Reviewer NLGv --- Reply to Comment 1.1.1: Title: equivariance proof Comment: Dear reviewer, unfortunately I am not able to directly attach a document; however, since you have asked specifically for this material I have prepared a draft appendix with a more detailed proof of the global equivariance property, including a hopefully helpful, albeit unpolished, sketch to illustrate the local rotations. I have posted a picture of the (anonymized) draft here, and will contact the AC to provide a proper PDF if possible. https://imgur.com/a/r3DULDD Moreover, we had tried to attach a (partial) table of ablations at the time of the original rebuttal; however, due to some technical issues it was not included in the original replies. We have likewise attached the original supplementary material as a "picture" at the following URL: https://imgur.com/a/E97aILS Thank you!
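The global-equivariance argument at the center of this thread (the weights built from the jet axis $j$ transform as $A \to R A R^T$ when $j \to Rj$, so the activation transforms as $Ax \to R\,Ax$) can be spot-checked numerically. Below is a minimal sketch, assuming a layer of the hypothesized projector form $y = (\alpha\, jj^T + \beta\,(I - jj^T))\,x$; this is our reading of the projector terms discussed in the thread, not necessarily the full Eq. (4):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(j, x, alpha=1.3, beta=-0.7):
    # A(j) = alpha * j j^T + beta * (I - j j^T): weights built from the unit
    # jet axis j, so rotating j rotates the weights as A -> R A R^T.
    P = np.outer(j, j)                       # projector onto j (j is unit norm)
    A = alpha * P + beta * (np.eye(3) - P)
    return A @ x

def random_rotation():
    # QR of a Gaussian matrix yields a random orthogonal matrix; force det = +1.
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1
    return Q

j = rng.normal(size=3); j /= np.linalg.norm(j)
x = rng.normal(size=3)
R = random_rotation()

# Global SO(3) equivariance: rotating ALL inputs (including j) rotates the output.
assert np.allclose(layer(R @ j, R @ x), R @ layer(j, x))

# For FIXED j, the layer still commutes with rotations about j, i.e. SO(2)_j.
theta = 0.9
K = np.array([[0, -j[2], j[1]], [j[2], 0, -j[0]], [-j[1], j[0], 0]])  # [j]_x cross-product matrix
Rj = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)    # Rodrigues' formula
assert np.allclose(layer(j, Rj @ x), Rj @ layer(j, x))
print("equivariance checks passed")
```

The first assertion exercises the global SO(3) property (rotating $j$ along with $x$); the second shows that for fixed $j$ only rotations about $j$ commute with the layer, which is exactly the "locally broken, globally preserved" behavior debated above.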
Optimal Excess Risk Bounds for Empirical Risk Minimization on $p$-Norm Linear Regression
Accept (poster)
Summary: The paper studies the excess risk of empirical risk minimization on $p$-norm linear regression. The asymptotic bound on the excess risk of ERM for $p \in (1,\infty)$ is known from prior work, and more recent work has given high-probability excess risk bounds for $p=2$ that match that asymptotic rate. This paper extends the result to $p \in (1,\infty) \setminus \{2\}$. It splits the results into three parts: It first provides bounds for the realizable case for $p \in (1,\infty)$. Then, a bound for the non-realizable case with $p \in (2,\infty)$, where it requires mild assumptions on the moments that are natural extensions of the assumptions needed for the $p=2$ prior work’s result. For the non-realizable case with $p \in (1,2)$, the paper requires some additional assumptions: a slightly stronger version of non-realizability, assuming that the inner product of the covariates $x$ with the optimal minimizer $w^*$ is almost surely not equal to the label, and the existence of the negative moment of order $2(2-p)$ of $y-\langle w^*,x \rangle$. The proof technique draws inspiration from Oliveira 2016 and Lecue and Mendelson 2016b, which by itself is not sufficient. However, when combined with a recent result of Adil et al. 2022, the approach becomes valid by allowing the use of the second-order Taylor expansion, despite the fact that it is not exact. For the case of $p \in (1,2)$, a different approach is employed, utilizing an approximation for the $p$-th power function from Bubeck et al. 2018, along with the analysis of Adil et al. 2019a. Strengths: The paper provides the first high-probability excess risk bounds of ERM for $p$-norm regression for $p \in (1,\infty)\setminus \{2\}$ that are consistent with the existing asymptotic bounds. This is a considerable result. The paper does a very good job in describing the problem’s motivation, the problem’s background and the new results. 
Weaknesses: As the authors have already described, the case $p \in (1,2)$ requires extra assumptions whose necessity is unclear, leaving the goal of the paper for that regime only partially resolved. Although the paper writing is good in presenting the problem and the results, the proof strategy of the main results is not really discussed until later sections of the paper. As a minor comment on writing, it would be better to provide a definition of $\rho$ (or a reference to its definition) inside the theorem statements that use it. Similar comment for $\sigma_p^2$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What do the three regimes for $\rho$ represent in Theorem 3? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation and constructive feedback. We will make sure to add a definition of $\rho$ and $\sigma_p^2$ inside the statements of the theorems to make them more readable. --- ***”What do the three regimes represent in Theorem 3?”*** They are an artifact of the particular probability bound we obtain in Theorem 3. Specifically, the three cases are needed to properly capture the dominating term in the sample complexity. This is due to the relative growth of the binomial coefficients compared to the exponential with base $\rho$, which varies as $\rho$ varies from 0 to 1. --- We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
Summary: This paper studies the performance of empirical risk minimization (ERM) for $\ell_p$-norm linear regression, in both the realizable and non-realizable (say, noisy measurement) settings. While the $p=2$ case has been studied extensively in the literature, not as much is known for the general $p \in (1,2) \cup (2,\infty)$ cases. For the realizable case, the author(s) improve over prior analyses and give a $O(d)$ sample complexity bound for finding the regression coefficients with high probability. For the non-realizable case, under moment assumptions, this paper shows high probability excess risk bounds. Note: this is an emergency review, and so I did not check the math carefully. Strengths: The paper is well-written and clear. The author(s) motivate the $\ell_p$ linear regression problem well, and this paper gives the first non-asymptotic excess risk bounds for ERM in this setting. The proof ideas are also explained pretty clearly. Weaknesses: My main concern/confusion is the message of the paper. The author(s) seem to sell the paper as having a high probability (excess) risk bound, yet the bound has a $1/\delta$ dependence on the failure probability $\delta$ (from using Markov, and might actually be tight) instead of logarithmic or root-logarithmic (sub-Gaussian) performance. ERM is very susceptible to heavy-tailed noise, even in the simple special case of mean estimation, so I'm a bit confused why we're studying the performance of ERM here, instead of designing better estimators that work under such mild moment assumptions. From my readthrough, it also appears that the paper is really bounding some expectation and using Markov to get the high probability bound. My question then, is, why is the paper phrased as a high probability result instead of an expectation/constant probability result, which I can imagine ERM to actually be decent at? 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Beyond the motivation question I have above: - The results are being compared to the asymptotic guarantees of ERM, but is there any evidence that ERM's asymptotic performance is statistically optimal (even when compared with other estimators)? In the special case of mean estimation, for example, we can actually write down non-asymptotic lower bounds on the estimation error in the Gaussian case, which match the central limit theorem. - Can the author(s) further elaborate on the extra moment assumption $\mathbb{E}[|Y - \langle w_p^*,X\rangle|^{2(p-2)} X_j^4] < \infty$ in Theorem 4? Line 168 states that it's a "natural extension", but really doesn't give an interpretation of this assumption, in particular of the interaction between the response and the covariate coordinates. Another misc comment: The beginning of Section 2.2 makes it sound like Theorem 1 is an old result, but after the theorem, the author(s) state that there's novelty at least in the proof presentation. Probably worth writing that more explicitly before the theorem statement. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Please find our detailed response below. --- ***”My main concern/confusion is the message of the paper.”*** The goal and achievement of the paper is the study of the performance of ERM non-asymptotically and under weak moment assumptions for the $p$-norm regression problem. We motivate the ERM setting in the next paragraph. ***”I'm a bit confused why we're studying the performance of ERM here, instead of designing better estimators that work under such mild moment assumptions.”*** We list a subset of the reasons why one would like to study ERM before trying to develop more sophisticated estimators with the optimal subgaussian performance under heavy tails. - Firstly, ERM is the most widely used method in practice, not the least because of its computational tractability, and so it is interesting to understand its performance and its failure modes. - Secondly, we decided to present our results under the weakest moment assumptions we could work with, which is why we incur the suboptimal $1/\delta$ dependence on the confidence parameter. Note however that under stronger assumptions, e.g. subgaussianity of $X$ and $Y$, one may easily show root-logarithmic dependence on $\delta$ using our proof techniques: it is enough to exploit the subgaussianity of $X$ and $Y$ to prove tighter concentration in Lemma 2. In such cases, which might occur in practical problems, ERM is likely to be competitive with the yet to be discovered, more sophisticated, "subgaussian" estimators. - Thirdly, just like the proof techniques used for the study of ERM for the case p=2 were very useful later in proving guarantees for more sophisticated subgaussian estimators, we believe that our proof techniques will serve as the basic machinery to study the performance of yet to be proposed estimators with subgaussian performance. In this sense, we view our work as a stepping stone towards the end goal the reviewer has outlined. 
--- ***”From my readthrough, it also appears that the paper is really bounding some expectation and using Markov to get the high probability bound. My question then, is, why is the paper phrased as a high probability result instead of an expectation/constant probability result, which I can imagine ERM to actually be decent at?”*** In short, we do not have a bound on the expectation of the excess risk, and deriving such a bound requires non-standard assumptions. In more detail, we do not bound the expectation of the excess risk and then use Markov's inequality, but rather combine two high-probability bounds: one of them indeed comes from a bound on an expectation (Lemma 2); the other, however, is a uniform probability estimate (Proposition 1), and is in fact weaker than a bound on the expectation. Roughly speaking, this is because a bound on the expectation requires controlling the lower tail of the smallest eigenvalue of the empirical covariance matrix at all levels, whereas Proposition 1 provides a non-trivial bound only at a fixed level. Getting control over all levels requires non-standard assumptions (in particular, a quantitative version of the small-ball condition); please see (Mourtada, 2022) for such an approach and a thorough discussion of this issue. --- ***”The results are being compared to the asymptotic guarantees of ERM, but is there any evidence that ERM's asymptotic performance is statistically optimal (even when compared with other estimators)?”*** Our goal and emphasis is to study the performance of ERM, so it is natural for us to compare our non-asymptotic guarantees with the asymptotic ones. The optimality of our bounds that we claim is in the sense that the leading term of our non-asymptotic bounds matches the asymptotically exact expression for the excess risk of ERM given by equation (1).
Concerning the statistical optimality of ERM, if the reviewer is referring to minimax optimality over a given class of distributions, please see (Mourtada, 2022) for results for the case $p = 2$, where it is proven that, under mild regularity conditions, ERM is minimax optimal for well-specified models, and is asymptotically minimax optimal for misspecified ones. While we agree that it would be nice to extend such results to the cases $p \neq 2$, this is outside the scope of our paper, and is an interesting future work. --- ***”Can the author(s) further elaborate on the extra moment assumption [...]?”*** It is an extension in the sense that when $p = 2$, this assumption reduces to the fourth moment condition of Theorem 1. We are afraid there is no straightforward "intuitive" interpretation of this assumption in terms of the interaction between the response and covariates. Nevertheless we qualified the assumption as natural in that it arises directly from the proof and in a completely analogous way to the proof of the case $p=2$. --- We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. I appreciated in particular the clarification on why the result is phrased as a $1/\delta$-dependence result as opposed to an expectation result. I have follow-up questions and comments: 1. Could the authors point me to how the study of ERM in $\ell_2$ regression led to sub-Gaussian estimators in that setting? 2. Asymptotic vs non-asymptotic guarantees: my question is actually, is the asymptotic rate of ERM the minimax rate, even if it is the minimax rate attained by another estimator? Ignoring the $\delta$ dependence, are there lower bounds on this minimax rate (for general $p$)? 3. It seems very unsatisfactory that there is an uninterpretable assumption in a main result. 
Can the author(s) say anything more beyond "this is how we get the proof to go through"? --- Reply to Comment 1.1.1: Comment: We thank the reviewer for following up on our rebuttal. Please find our response to the additional questions below. --- ***"1- Could the authors point me to how the study of ERM in $\ell_2$ regression led to sub-Gaussian estimators in that setting?"*** Our claim was that the proof techniques used to study the performance of ERM in the case $p=2$ were useful later in proving the subgaussian performance of newly proposed estimators. Indeed, the decomposition of the excess $\ell_{2}$ loss in terms of quadratic and multiplier processes (see, e.g., equation (2.1) in Lecue and Mendelson, 2016) was first proposed in the context of the study of ERM (Lecue and Mendelson, 2016 and references therein), and was reused, among others, in the analysis of the first subgaussian estimator proposed by (Lugosi and Mendelson, 2016) (see also Lugosi and Mendelson, 2019) and subsequently for the subgaussian estimator proposed by (Lecué and Lerasle, 2020). Furthermore, this decomposition, coupled with the realization that the quadratic process is lower bounded with high probability even in the presence of heavy tails (Koltchinskii and Mendelson, 2015; Oliveira, 2016), helped isolate the weak concentration of the multiplier process in ERM (equivalent of Lemma 2 in our paper) as the reason for its suboptimality. Newly proposed algorithms focused on better estimating this component of the excess loss, thereby achieving the desired subgaussian performance. In summary, the study of the performance of ERM contributed to the development of subgaussian estimators in multiple ways. - It motivated their development since ERM was found to suffer a suboptimal dependence on the confidence parameter $\delta$ in the case of heavy-tails (see, e.g., the discussion in Sec 1.1. in Lugosi and Mendelson, 2016). 
- It helped isolate the reason for this suboptimality (the weak concentration of the multiplier process discussed above). - It provided the basic technical machinery to prove the subgaussian performance of newly proposed estimators (among others, the excess loss decomposition, and the control of the quadratic process from below under heavy tails). --- ***"2- Asymptotic vs non-asymptotic guarantees: my question is actually, is the asymptotic rate of ERM the minimax rate, even if it is the minimax rate attained by another estimator? Ignoring the $\delta$ dependence, are there lower bounds on this minimax rate (for general $p$)?"*** To the best of our knowledge, the minimax rate for $\ell_{p}$ norm regression is unknown for $p \neq 2$, and we are not aware of any lower bounds on it. We are therefore not in a position to say whether the asymptotic rate of ERM is minimax or not for general $p$. As discussed above, the case $p=2$ was only very recently settled by Mourtada (2022), and to the best of our knowledge, it remains an interesting open problem for other values of $p \in (1, \infty)$. --- **References** Guillaume Lecué and Shahar Mendelson. "Performance of empirical risk minimization in linear aggregation." Bernoulli 22.3 (2016): 1520-1534. Gábor Lugosi and Shahar Mendelson. "Risk minimization by median-of-means tournaments." Journal of the European Mathematical Society 22.3 (2019): 925-965. Gábor Lugosi and Shahar Mendelson. "Mean estimation and regression under heavy-tailed distributions: A survey." Foundations of Computational Mathematics 19.5 (2019): 1145-1190. Guillaume Lecué and Matthieu Lerasle. "Robust machine learning by median-of-means: theory and practice." The Annals of Statistics 48.2 (2020): 906-931. Vladimir Koltchinskii and Shahar Mendelson. "Bounding the smallest singular value of a random matrix without concentration." International Mathematics Research Notices 2015.23 (2015): 12991-13008. Roberto Imbuzeiro Oliveira.
"The lower tail of random quadratic forms with applications to ordinary least squares." Probability Theory and Related Fields 166 (2016): 1175-1194. Jaouad Mourtada. "Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices." The Annals of Statistics 50.4 (2022): 2157-2178.
Summary: This paper studies empirical risk minimization for linear regression with the p-norm loss function, where p ∈ (1,∞). Main contributions: - The authors provide a high probability excess risk bound on the empirical risk minimizer for p ∈ (1,∞), where for p ∈ [2,∞) only weak moment assumptions are needed, and for p ∈ (1, 2) there are assumptions that guarantee the existence of the Hessian at the minimizer of the risk. - The authors strengthen a previous bound of perfect recovery in the realizable case. Strengths: Some strengths are: - This paper is the first work providing high probability excess risk bounds for empirical risk minimization under the p-norm loss function for any p ∈ (1,∞). - The authors strengthen a previous bound of perfect recovery in the realizable case. Weaknesses: Some weaknesses of the paper: - The proof part is poorly written. For example, the statements of Corollary 1 and Proposition 1 don't make sense (the inf of the left-hand side is less than the right-hand side?), even though for Proposition 1 the authors essentially just cite an existing theorem from a previous paper (Theorem 3.1, Oliveira [2016]). - The argument for the importance of this work is lacking. The authors discuss the benefits of using the p-norm loss with p != 2, but an explanation of why moving from asymptotic to finite-sample bounds is important for p != 2 is missing. Typos: - At line 232, the second line of the equation is missing the index i on X and Y. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: NA Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. We kindly ask the reviewer to re-evaluate the paper in light of our response below. --- ***”The proof part is poorly written.”*** We thank the reviewer for pointing us to the typos in Corollary 1 and Proposition 1; we fixed these in the supplementary version, but we understand that this must have been a source of confusion when reading the original submission. To be more explicit, the required changes are as follows. - For Proposition 1, it suffices to move the term $\|v\|_{L^{2}}^{2}$ from the RHS of the statement to the denominator of the LHS. - Similarly, for Corollary 1, the term $\|w - w_{2}^{*}\|^2_{H_{2}}$ on the RHS should be moved to the denominator of the LHS. The proofs of both statements remain unchanged. We invite the reviewer to consult the supplementary version for a correct and clearer version of both statements. Please note that we have attempted to formulate our proofs in an accessible and readable way, and have dedicated a lot of effort to motivating all the steps in the proofs, emphasizing where our proofs differ from previous ones, and where our insights were needed. If the reviewer has other specific comments about the proofs, we would be happy to address them. ***”There is a lack of the argument of the importance of this work. The authors discuss the benefits of using p-norm as the loss with p != 2, but why moving from asymptotic bound to finite sample bound is important for p != 2 is missing.”*** We clarify this with the following sentence in the revised version of our paper. *Among other reasons, non-asymptotic bounds are useful in characterizing **when** the asymptotic regime kicks in, and under what assumptions it can be attained for moderately large $n$; see, e.g., (Ostrovskii and Bach, 2021) for more on this argument.* --- We would be happy to clarify any concerns or answer any questions that may come up during the discussion period.
--- Rebuttal 2: Comment: I thank the authors for the response. I'm willing to increase the score to 5.
Summary: The paper considers linear regression with inputs in dimension d and with n data points. It considers the Lp loss |.|^p with a focus on p different from the classical value 2. The paper provides non-asymptotic excess risk bounds for this Lp loss, the excess being between the optimal linear combination w^* and the one obtained by minimizing the empirical risk. The bounds are of different natures in the two cases where the problem is realizable, meaning that the output is exactly a linear function of the input, and where the problem is not realizable. Strengths: The paper is well written. The problem is theoretically interesting. The theoretical results are well-connected to the existing literature, and their novelty compared to this literature is well motivated. The proofs appear to be innovative and to combine arguments from different branches of the literature. Weaknesses: The content is technical and so the paper is not extremely easy to follow. An option could be to take advantage of the appendix (with no space limit) to add more slow-paced pedagogical content. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I do not have specific questions. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I do not see potential negative societal impact. The limitations are discussed adequately Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation. We will be happy to answer any questions or comments that may arise during the discussion period.
Dataset source: NeurIPS 2023 submissions (Hugging Face). Conference year: 2023.
Summary: The paper studies excess risk bounds for empirical risk minimization for linear regression under the $\ell_p$-norm. It extends existing work for $p=2$ to general $p\in (1,\infty)$. Sample complexity bounds are given for recovering the true parameter in the realizable case. Additive-error excess risk bounds are given (under mild assumptions on $O(p)^{th}$ moments) in the non-realizable case. I generally like the paper, but some issues need to be discussed/resolved, based on which my rating could move either way. Strengths: * natural extension to $p$-norm regression * non-asymptotic bounds take similar forms as the asymptotic results in the p=2 case (up to dependencies on constants and $p$) * rigorous theory * good high-level discussion of the difficulties of generalizing previous proofs to general $p$ and how they are handled * good overview of existing work Weaknesses: * "Optimality", claimed already in the title, is not supported by any (existing or new) lower bounds * some assumptions seem to 'eat' terms that depend on the dimension d. There is some, but only very limited, discussion on this * no result for the second most important case $p=1$ * in a few places, very 'mathematical' statements appear without an explanation of the semantics or intuition behind them * bounds depend on Hessian forms of $\ell_p$ loss functions, which fit _perfectly well_ to the quadratic function in the $p=2$ case but do not capture the loss as closely for other $p$, even though the bounds look similar * the discussion (third paragraph, ll 23-32) motivating the $\ell_p$ loss regression problem could be improved and seems vague wrt the properties of the model. I suggest rewriting this, stating that it is the standard linear model with $p$-generalized Gaussian error distribution that has certain statistical properties parameterized by $p$. See https://arxiv.org/pdf/2203.13568.pdf and references therein. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Optimality is a strong claim.
In what sense can your bounds be considered _optimal_? I see no lower bounds at all!? 2. Theorem 3 seems (at first glance) even below known lower bounds. For instance, if the generating distribution realizes the standard basis vectors with equal probability and the target is not contained in any subspace spanned by a proper subset, it is known that $\Omega(d \log d)$ samples are necessary to recover the exact target (this follows simply from the coupon collector's theorem). So, my question is: what is the role and the dependence of the $\rho$ parameter on the dimension? 3. Again Theorem 3: the three cases seem a bit arbitrary; why is it necessary to distinguish between them? Is it only to make the upper bounds work in any case, or are there stronger indications of actually separated complexities in the three cases? 4. Why are there no bounds for $p=1$? Would there be a major complication (in your analysis or in general) in tackling this case? 5. What is a "negative moment"? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The very important case $p=1$ is not handled or discussed at all beyond the intro/motivation Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation and detailed feedback. We address the reviewer’s concerns below. --- ***”Optimality is a strong claim. In what sense can your bounds be considered optimal?”*** Our bounds are optimal in the sense that the leading term in them matches the asymptotically **exact** expression of the excess risk of ERM given by equation (1) as $n \to \infty$, as we described after the statements of Theorems 4 and 5 (lines 164-165 and 181-182). In other words, **any** bound on the excess risk that holds for large enough $n$ must dominate the first term in the RHS of equation (1); our bounds are optimal in that they exactly recover this term (up to constants that depend only on $p$). --- ***”What is the role and the dependence of the parameter $\rho$ on the dimension?”*** The role of the parameter $\rho$ is to quantify the degree of "degeneracy'' of the distribution of the covariates (see, e.g., Mourtada 2022), i.e., how well can the support of the distribution of the covariates be approximated by some fixed hyperplane. In general, $\rho$ does not depend directly on the dimension. Indeed, on the one hand, and when the distribution of the covariates $X$ is absolutely continuous with respect to Lebesgue measure, $\rho = 0$, no matter how large the dimension is. On the other hand, $\rho$ can be arbitrarily close to 1, even in very low dimensional settings. Indeed, let $\varepsilon \in (0, 1)$, and consider one dimensional $X$ satisfying $X = 0$ with probability $1 - \varepsilon$, and $X = 1$ with probability $\varepsilon$, then $\rho = 1 - \varepsilon$. Finally, let us mention that this parameter and closely related ones have been used many times in previous work (Rudelson et al., 2015; Lecue and Mendelson, 2017; Mourtada 2022), and that from a technical perspective, as we have discussed in the paper (lines 129-139), it is a natural parameter to consider. 
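The one-dimensional example above can be checked numerically: on the event that all $n$ draws of $X$ equal $0$, the empirical covariance is singular and the target is unrecoverable, and this event has probability exactly $(1-\varepsilon)^n$, matching $\rho = 1 - \varepsilon$. A quick Monte Carlo sketch (our own illustrative code, not from the paper):

```python
import random

def degenerate_fraction(eps, n, trials=4000, seed=0):
    """Fraction of trials in which every one of n draws of
    X (= 1 w.p. eps, 0 w.p. 1 - eps) is zero, i.e. the empirical
    covariance is degenerate; should be close to (1 - eps)**n."""
    rng = random.Random(seed)
    bad = sum(
        all(rng.random() >= eps for _ in range(n))
        for _ in range(trials)
    )
    return bad / trials
```

Even in dimension one, taking $\varepsilon \to 0$ makes this failure probability, and hence the sample complexity, arbitrarily bad, independently of $d$.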
--- ***”Why is it necessary to distinguish between the three cases in Theorem 3? Is it only to make the upper bounds work in any case or are there stronger indications for actually separated complexities in the three cases?”*** It is only to make the upper bound on the failure probability work. --- ***”Some assumptions seem to "eat" terms that depend on the dimension d. There is some, but only very limited discussion on this.”*** If the reviewer is referring to the constants arising in our bounds, please note that we discussed their dimension dependence right after the statements of Theorems 4 and 5 (lines 167-171 and 196-197), and how that affects the optimality of our bounds. As for the constants $\sigma^2_{p}$, they are powers of norm equivalence constants; one can show that they can be arbitrarily bad, even in a one dimensional setting, using an example similar to the one we used above to demonstrate the case $\rho=1-\varepsilon$. On the other hand, and to the best of our knowledge, no dimension dependent lower bound on them is known in the literature. As for Theorem 3, the above discussion on $\rho$ shows that it does not depend on the dimension directly. We will add a discussion on both $\rho$ and $\sigma^2_{p}$ in the final version of the paper, but please note that our use of these constants is in alignment with previous work, and an inspection of our proofs would show that we did not purposefully hide dimension dependence in these constants, but that rather they naturally arise from the problem. --- ***”Why are there no bounds on p=1, would there be a major complication in tackling this case?”*** Yes, the case p=1 is technically more challenging, and we suspect that one needs new proof techniques to cover this case (as well as the case $p=\infty$). Our proof technique breaks since Lemma 4 does not hold for the case $p=1$. 
This is in part due to the vanishing of the second derivative of the absolute value function (where it exists), which prevents us from extracting a quadratic lower bound on the excess empirical risk. Please note that even the asymptotic behavior of the excess risk of ERM is quite difficult to obtain in the case $p=1$, and can widely vary depending on the degree of "non-realizability" of the problem; we refer the reviewer to (Knight, 1998) for a detailed account. --- ***”Bounds depend on Hessian forms of $\ell_{p}$ loss functions, which fit perfectly well to the quadratic function in the case $p=2$, but do not capture the loss as closely for other $p$, even though the bounds look similar.”*** While the $\ell_{p}$ losses are indeed not quadratic for $p \neq 2$, in a small enough neighborhood of the optimum, the quadratic approximation of the risk is very accurate. Indeed, this is the basis of the asymptotic expansion in equation (1). As we have argued above, our bounds are optimal in that they asymptotically match the exact excess risk of ERM as $n \to \infty$, so that one cannot hope to obtain better bounds for large enough $n$. We agree however that for small $n$, better approximations of the $\ell_{p}$ loss could potentially lead to better bounds. --- ***”Discussion motivating the $\ell_{p}$ loss regression problem could be improved and seems vague wrt the properties of the model.”*** We will include the reviewer’s suggestion in the final version of our paper as an additional motivation for $\ell_{p}$ norm regression. Please note however that the main perspective we take in this paper is learning theoretic, and our only concern is prediction: we are not postulating a statistical model, we make no assumptions on the distributional form of X and Y, and we are not motivating ERM as a maximum likelihood procedure. --- ***”What is a ‘negative moment’?”*** For a random variable $X$, we call $E[X^{-k}]$ its $k$-th negative moment. (with the convention $1/0 = \infty$). 
--- We would be happy to clarify any concerns or answer any questions that may come up during the discussion period. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response, which mostly clarified my questions. However, Question no. 2 is still open (the two examples of distributions where $\rho$ does not depend on the dimension do not resolve my concern). I assume $\delta$ is a constant, so it is out of the following discussion. Let me try to elaborate on what I think is wrong: - You claim "$O(d)$ samples are enough to exactly recover the target." in the abstract. To show this, you need to prove that for *any* distribution on (X,Y), O(d) samples suffice. - In my Question no. 2, I mentioned that *there exists* a distribution that requires $\Omega(d\log d)$ samples to solve the task. This already contradicts the claim in the abstract (which should be modified). - Assuming Theorem 3 is correct for *any* distribution (which I believe), the only way to make it compatible with the lower bound is when the remaining terms depending on $\rho$ are at least $\Omega(\log(d))$ - This implies that *there exists* a distribution where $\rho$ depends on the dimension (which I think needs to be discussed briefly in the paper). --- Reply to Comment 1.1.1: Comment: We thank the reviewer for following up with additional questions. Please find our response below. --- ***"Assuming Theorem 3 is correct for any distribution (which I believe), the only way to make it compatible with the lower bound is when the remaining terms depending on $\rho$ are at least $\Omega(\log{(d)})$. This implies that there exists a distribution where $\rho$ depends on the dimension (which I think needs to be discussed briefly in the paper)."*** We agree with the reviewer. To be very precise, there exists a **sequence** of distributions, indexed by their dimension, for which $\rho$ grows with the dimension. We will briefly discuss this in the paper. 
However, please note that the spirit of our result is for a fixed but unknown distribution, and as our two examples above show, in such a general setting, $\rho$ does not depend on the dimension, which is why we did not mention this in our original submission. --- ***"You claim "$O(d)$ samples are enough to exactly recover the target." in the abstract. To show this, you need to prove that for any distribution on $(X,Y)$, $O(d)$ samples suffice. In my Question no. 2, I mentioned that there exists a distribution that requires $\Omega(d \log{d})$ samples to solve the task. This already contradicts the claim in the abstract (which should be modified)."*** We agree with the reviewer that our statement needs to be modified. We used the $O$ notation to hide distribution-specific constants, but we did not specify this in the abstract, and as it stands, the current claim is imprecise. We will add the following sentence in the abstract to fix this: *"...,$O(d)$ samples are enough to exactly recover the target with high probability, where the $O$ notation hides dependence on a distribution-specific constant as well as on the confidence level."* This revised statement does not contradict the reviewer's example, since the $\log{d}$ factor in their specific example should be recoverable from the distribution-dependent constant $\rho$, which we are hiding in the $O$ notation. Please note that we initially tried to include the dependence on $\rho$ in the $O$ notation in the abstract. Unfortunately, this requires introducing the constant $\rho$, but more importantly, it requires listing the three complexity cases of Theorem 3 to properly capture the dependence on $\rho$, which is quite cumbersome to do in the abstract. We settled on the current version, but we agree that we needed to clarify what we were hiding with the $O$ notation. We welcome any additional suggestions the reviewer might have for phrasing this statement precisely without encumbering the abstract.
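The $\Omega(d \log d)$ lower bound invoked by the reviewer is the coupon collector's theorem: when $X$ is uniform on the $d$ standard basis vectors, every coordinate must be observed at least once before exact recovery is possible, and the expected number of draws for that is $d \cdot H_d = \Theta(d \log d)$. A quick simulation (illustrative code of ours, not from the paper or review):

```python
import random

def draws_to_see_all(d, rng):
    """Uniform draws from the d standard basis vectors until each
    coordinate direction has appeared at least once."""
    seen, draws = set(), 0
    while len(seen) < d:
        seen.add(rng.randrange(d))
        draws += 1
    return draws

def mean_draws(d, trials=3000, seed=0):
    rng = random.Random(seed)
    return sum(draws_to_see_all(d, rng) for _ in range(trials)) / trials

harmonic_20 = sum(1 / k for k in range(1, 21))
# coupon collector: E[draws] = d * H_d, i.e. Theta(d log d), here for d = 20
```

As the reply above notes, this extra $\log d$ is absorbed by the distribution-dependent constant $\rho$ in the paper's $O(d)$ statement.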
Multi-task Representation Learning for Pure Exploration in Bilinear Bandits
Accept (poster)
Summary: This paper studies multi-task representation learning for the problem of pure exploration in bilinear bandits. The authors design an algorithm, GOBLIN, that uses an experimental design approach to optimize sample allocations for learning the global representation, as well as to minimize the number of samples needed to identify the optimal pair of arms in individual tasks. Their results corroborate that by learning the shared representation across tasks, they achieve significantly improved sample complexity compared to the traditional approach of solving tasks independently. Strengths: 1. This paper is well-written. The considered problem is interesting and well-motivated. 2. The theoretical analysis looks sound. Weaknesses: 1. What is the main technical novelty of this work, beyond the existing SVD, G-optimal design, and phased elimination methods in the bilinear/linear bandit literature? 2. Could you elaborate more on the differences in formulation, algorithms, and results between this work and [Du et al. 2023]? Can the considered multi-task representation learning problem for bilinear bandits somehow be transformed into the multi-task representation learning problem for linear bandits in [Du et al. 2023]? Could you explain more about the claim "if one runs DouExpDes [Du et al. 2023] then its sample complexity will scale as $\tilde{O}(M k_1 k_2 / \Delta^2)$" in Discussion 2? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. Below, we clarify the key novelty in our approach and compare our work with Du et al. (2023). *Weakness* 1) Even though we use existing ideas from SVD, G-optimal design, and phased elimination in the linear bandit setting, it is highly non-trivial to extend that analysis to the multi-task bilinear bandit setting, where the sample complexity must scale with the intrinsic dimensions $k_1$ and $k_2$ instead of $d_1$ and $d_2$. We list these technical novelties below: - We use E-optimal design in an initial exploration phase to learn two feature extractors $\mathbf{B}\_1$ and $\mathbf{B}\_2$ jointly across all $M$ tasks. This requires a new estimator (see eq (8)) and concentration techniques not studied in Jun et al. (2019), Lu et al. (2021), Kang et al. (2022), or Du et al. (2023). The sample complexity of this step scales as $\frac{\sqrt{d_1 d_2 r}}{S_r}$, where $S_r$ is the spectral bound of $\mathbf{\Theta}_\star$ and scales as $S_r = \Theta(1/\sqrt{r})$. - We then estimate the individual task-specific parameter matrices $\mathbf{S}_{m,*}\in\mathbb{R}^{k_1 \times k_2}$ using a new estimator in eq (9). In particular, we use a regularized minimization problem with a nuclear norm penalty to construct the estimator. Note that Du et al. (2023) does not involve this type of estimator, as it only needs a good estimate of the vector $w_m$, which is obtained by a simple OLS estimation. Our sample complexity for this step scales as $\frac{M \sqrt{k_1 k_2 r}}{S_r}$. - Then we rotate the arm sets (similarly to Jun et al. (2019)) to reduce the $M$ bilinear bandits to $(k_1 + k_2)r$-dimensional linear bandits. We then use G-optimal design to learn the optimal arm for these $(k_1 + k_2)r$-dimensional linear bandits such that the sample complexity scales as $\frac{M\left(k_1+k_2\right) r}{\Delta^2}$. We are inspired by the proofs of G-optimal design from Fiez et al. (2019) to derive a guarantee for this step.
Finally, note that this step is also absent from [Du et al. 2023](). So the final sample complexity scales as $O\left(\frac{M\left(k_1+k_2\right) r}{\Delta^2}\right)$ as $S_r = \Theta(1/\sqrt{r})$. - All of these steps have to be done carefully, and it is highly non-trivial to combine them so that the desirable sample complexity scaling appears. - We mention these technical novelties/approaches of combining these methods in the proof overview of Theorem 1 in lines 192-225 and of Theorem 2 in lines 300-327. Please reach out to us if you have any specific questions about our novelty. 2) Thanks for asking this clarification question. [Du et al. 2023]() critically considers the linear bandit setting, which is simpler than the bilinear bandit setting. Crucially, there is a single feature extractor in the linear bandit setting of [Du et al. 2023](), whereas we have two feature extractors. [Du et al. 2023]() does not need to rotate the arms and learn in the rotated arm space as we do (and explain) in sections 2.2 and 3.3. Learning in the rotated arm space leads to a sample complexity of $M(k_1 + k_2)r/\Delta^2$ instead of $M(k_1k_2)/\Delta^2$ (as in [Du et al. 2023]()). Hence, we make the statement that DouExpDes [Du et al. 2023]() has a sample complexity of $\tilde{O}\left(M k_1 k_2 / \Delta^2\right)$ in Discussion 2. We will add the clarification in the camera-ready version. - KS Jun, R Willett, S Wright, R Nowak, Bilinear bandits with low-rank structure, ICML 2019 - T Fiez, L Jain, K G. 
Jamieson, L Ratliff, Sequential Experimental Design for Transductive Linear Bandits, NeurIPS 2019 - Y Du, L Huang, W Sun, Multi-task Representation Learning for Pure Exploration in Linear Bandits, arXiv preprint arXiv:2302.04441, 2023 - Y Lu, A Meisami, A Tewari, Low-rank generalized linear bandit problems, International Conference on Artificial Intelligence and Statistics, 2021 - Y Kang, CJ Hsieh, TCM Lee, Efficient frameworks for generalized low-rank matrix bandit problems, NeurIPS, 2022 We hope we have answered your questions and clarified your concerns sufficiently for you to consider raising your final score. Please reach out to us during the discussion period if you have further questions. --- Rebuttal Comment 1.1: Title: I raised my score from 4 to 6 Comment: Thank the authors for their response. My concerns were addressed, and I raised my score from 4 to 6.
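To make the rotation step described in this rebuttal concrete, here is a toy NumPy sketch (all dimensions and the rank below are invented; this is not the paper's estimator): after rotating both arm spaces by the singular subspaces of the rank-$r$ parameter matrix $\mathbf{\Theta}$, the bilinear reward depends only on the first $r$ rotated coordinates, which is what allows the reduction to a low-dimensional linear bandit.

```python
import numpy as np

rng = np.random.default_rng(0)
d1, d2, r = 6, 5, 2  # hypothetical ambient dimensions and rank

# Low-rank hidden parameter Theta = B1 @ S @ B2.T (rank r)
B1 = np.linalg.qr(rng.normal(size=(d1, r)))[0]
B2 = np.linalg.qr(rng.normal(size=(d2, r)))[0]
S = rng.normal(size=(r, r))
Theta = B1 @ S @ B2.T

# Rotate both arm spaces by the singular subspaces of (an estimate of) Theta
U, sv, Vt = np.linalg.svd(Theta)
x, z = rng.normal(size=d1), rng.normal(size=d2)
x_rot, z_rot = U.T @ x, Vt @ z

# The bilinear reward collapses onto the first r rotated coordinates,
# so the problem behaves like a low-dimensional linear bandit
reward_full = x @ Theta @ z
reward_low = x_rot[:r] @ np.diag(sv[:r]) @ z_rot[:r]
assert np.isclose(reward_full, reward_low)
```

With an estimated $\hat{\Theta}$ the tail coordinates are small rather than exactly zero, which is why the papers cited above control the remaining error terms explicitly.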
Summary: The authors study the problem of pure exploration in bilinear bandits, where the reward is a bilinear function of the feature vectors of two types of arms. They consider both single-task and multi-task settings, where the latter involves learning a shared low-dimensional representation across multiple tasks. They propose a phase-based algorithm called GOBLIN, which uses optimal experimental design techniques to estimate the hidden parameters and identify the optimal pair of arms for each task. They provide sample complexity analysis for both settings and show that their algorithm achieves significant improvement over existing methods. They also conduct experiments to demonstrate the empirical performance of their algorithm. Strengths: 1) Originality: The paper presents a novel approach to the problem, combining existing ideas in a creative way and formulating the problem in a new and insightful manner. 2) Quality: The methodology used in the paper is rigorous and well-executed, with clear and thorough explanations of the techniques employed. 3) Clarity: The writing is clear and concise, making it easy to follow the arguments and understand the main contributions of the paper. 4) Significance: The results presented in the paper have important implications for the field, removing limitations from prior results and opening up new avenues for research. Weaknesses: 1) The paper does not provide a comprehensive literature review of the existing work on the topic, and does not compare or contrast its approach with the previous methods or results. (Related Work) The paper only cites a few papers on bilinear bandits, linear bandits, and representation learning, but does not discuss how they are related to or different from its own approach. It also does not mention any existing work on pure exploration in bilinear bandits or multi-task representation learning in online settings. 
For example some missing papers are: [1-2] [1] Soare, Marta, Ouais Alsharif, Alessandro Lazaric, and Joelle Pineau. "Multi-task linear bandits." In NIPS2014 workshop on transfer and multi-task learning: theory meets practice. 2014. [2] Deshmukh, Aniket Anand, Urun Dogan, and Clay Scott. "Multi-task learning for contextual bandits." Advances in neural information processing systems 30 (2017). 2) The paper does not explain the rationale or motivation behind the choice of the data set, the experimental design, the evaluation metrics, or the baselines. (Methodology and Experiments) The paper only briefly describes the data set and the experimental setup, but does not justify why they are suitable or relevant for the problem. It also does not explain how it chooses the evaluation metrics or the baselines, or how they reflect the performance of its algorithm. 3) The paper does not report the details of the data preprocessing, feature engineering, model architecture, hyperparameter tuning, or implementation. (Methodology and Experiments) The paper does not provide any information on how it preprocesses the data, extracts features, designs the model architecture, tunes the hyperparameters, or implements its algorithm. It also does not provide any pseudocode or algorithmic description of its method. 4) "We assume that the matrix Θ∗ has a low rank r which is known to the learner" This seems like a big assumption too. Authors should include this in their list of assumptions. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1) What is the motivation and application of pure exploration in bilinear bandits? Authors gave motivation for bilinear bandits but it was not clear why pure exploration. 2) What are the limitations of the proposed algorithms and analysis? 3) How does GOBLIN handle the cases when the rank r of Θ∗ is unknown or when the spectral bound Sr of Θ∗ is unknown or inaccurate? 
4) How does GOBLIN handle the cases when the feature extractors B1 and B2 are not shared across tasks or when they are noisy or incomplete? 5) How does GOBLIN handle the cases when the tasks have different reward distributions or different reward gaps? 6) How does GOBLIN handle the cases when the arm sets X and Z are infinite or continuous or dynamic? 7) How does GOBLIN handle the cases when the rewards are not sub-Gaussian or when there are outliers or adversarial noise? 8) What are the computational and memory complexities of GOBLIN and how can they be improved or optimized? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the soundness and contribution of our work. We answer the questions raised below. *Weakness* 1) Thanks for pointing out these papers. We will discuss the papers pointed out in the revision. Here we provide a brief comparison of our work with [1][2]. [1] studies regret minimization for the multi-task linear bandit, whereas we study pure exploration for the multi-task bilinear bandit, which requires a different algorithmic design and analysis. Similarly, [2] studies regret minimization for the multi-task contextual bandit, which is a different setup than the pure exploration studied in our work. We remark that there is only one work on bilinear bandit pure exploration [Geovani et al. (2021)](), which focuses on graph structure for a single-task setting. We will add more detailed discussion and comparison to the revision. 2) We are not sure how some of these comments are related to our experimental setting. We use a standard single and multi-task bilinear bandit experimental setting similar to [Jun et al., 2019](); [Lu et al. 2021](), [Kang et al., 2022](). However, note that these papers only study the single-task regret minimization setup, whereas we study the multi-task pure exploration setting. Our metric of performance is sample complexity against arms or tasks (similar to the setting of [Fiez et al. (2019)]() and [Du et al. 2023]()). 3) We are not sure how this comment is related to our paper. We use a standard bilinear bandit setting similar to [Jun et al., 2019](); [Lu et al. 2021](), [Kang et al., 2022](), [Du et al. 2023](). It will be very helpful if the reviewer reaches out to us during the discussion phase to point out any specific questions on the experimental setup. 4) We will make it clear that rank $r$ knowledge is required and will state this as a separate assumption. Similar assumptions have been commonly considered in prior works like [Jun et al., 2019](); [Lu et al. 2021](), [Kang et al., 2022](). 
In practice, one can leverage domain knowledge or cross-validation to choose the rank. *Questions* 1) In many real-world applications where obtaining a sample is expensive and time-consuming, e.g., clinical trials (Zhao et al., 2009; Zhang et al., 2012), it is often desirable to identify the optimal option using as few samples as possible, i.e., we face the pure exploration scenario rather than regret minimization. This is a well-studied setting in linear bandits but has not been studied in the bilinear setting. To avoid the time-consuming process of conducting clinical trials for individual tasks and collecting samples, we utilize the shared representation and decrease the number of required samples. We mention the reason for pure exploration in bilinear bandits in the multi-task setting in lines 37-48. 2) In the current analysis, we explicitly rely on the linear structure. It will be very interesting to extend this line of research to generalized linear bandits. Also, note that currently there is no lower bound for multi-task pure exploration in the bilinear bandit setting without assumptions on the underlying structure. It will be interesting for future works to give lower bounds for this setting. 3) Thanks for pointing out this new direction. The current analysis requires knowledge of the rank and $S_r$, similar to the works of [Jun et al., 2019](); [Lu et al., 2021](); [Kang et al., 2022](). We leave this line of research for future work. 4) The key idea of a multi-task setting is that the feature extractors are shared across tasks so that there is some gain in accelerating learning and reducing the sample complexity (see [Yang et al., 2020](), [Yang et al., 2022](); [Du et al., 2023]()). If tasks do not share the common feature extractors $B_1$ and $B_2$, then the problem is reduced to $M$ independent subproblems, with a sample complexity scaling as $O\left(M(d_1d_2 r)/\Delta^2\right)$. 
5) Currently we assume each task has a different distribution (but all are parametric sub-Gaussian distributions) with a different reward gap. The sample complexity scales with the minimum gap across all tasks. This is an interesting direction for future work. 6) For continuous action spaces, there are currently no existing works in bilinear bandits. It is not clear how to extend our current analysis to continuous action spaces. 7) It is interesting future work to extend our analysis to non-parametric or even adversarial noise. We have taken the first step towards understanding pure exploration for the bilinear bandit setting in the case of parametric sub-Gaussian reward distributions. 8) The computational and memory complexity of GOBLIN is the same as that of [Jun et al., 2019](); [Lu et al. 2021](), [Kang et al., 2022]() and scales as $\tilde{O}(d T)$. Thank you for taking the time to read our response. We hope we have answered your questions and clarified your concerns sufficiently for you to consider raising your final score. Please reach out to us during the discussion period if you have further questions.
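Since several answers in this thread lean on G-optimal design, here is a minimal sketch of the classical Frank-Wolfe iteration for computing such a design on a finite arm set (the arm set, dimensions, and iteration count below are invented; this is not the paper's implementation). By the Kiefer-Wolfowitz theorem, the optimal value of the max leverage score equals the dimension $d$.

```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 20, 4  # hypothetical arm count and dimension
X = rng.normal(size=(K, d))  # arm feature vectors

# Frank-Wolfe iterations for the G-optimal design:
# minimize max_x x^T A(lam)^{-1} x, where A(lam) = sum_k lam_k x_k x_k^T
lam = np.full(K, 1.0 / K)
for _ in range(2000):
    A_inv = np.linalg.inv(X.T @ (lam[:, None] * X))
    g = np.einsum("kd,de,ke->k", X, A_inv, X)  # leverage scores
    k = int(np.argmax(g))
    step = (g[k] / d - 1.0) / (g[k] - 1.0)  # standard FW step size
    lam = (1.0 - step) * lam
    lam[k] += step

g_max = np.max(np.einsum("kd,de,ke->k", X, np.linalg.inv(X.T @ (lam[:, None] * X)), X))
# Kiefer-Wolfowitz: the optimal value equals d; FW gets close
assert g_max < d * 1.05
```

Sampling arms in proportion to the resulting weights `lam` is what gives the uniform confidence width driving the elimination-style guarantees cited above.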
Summary: This paper studies pure exploration in bilinear bandits with multi-task representation learning, i.e., different tasks share a common low-dimensional linear representation. As an intermediate step, the authors propose the first single-task bilinear bandit pure exploration algorithm, based on G-optimal design. Each phase of the proposed algorithm first estimates the unknown parameter $\hat{\Theta}$ (stage 1), and then uses it to rotate the active arm set and convert the problem to a $(d_1 + d_2)r$-dimensional linear bandit problem, on which G-optimal design is applied (stage 2). When extending to the multi-task setting, stage 1 is modified to first estimate the feature extractors that are common to all tasks, and then estimate the hidden parameter unique to each task. Then G-optimal design is applied as before. Strengths: The problem setting studied in this paper, i.e., different tasks share common feature extractors while each has a unique hidden parameter, is natural and well-motivated. To the best of my knowledge, this paper proposes the first solution to pure exploration of bilinear bandits for both single-task and multi-task settings. Weaknesses: The authors mention improvement over RAGE (Fiez et al., 2019), but there lacks a more rigorous discussion about the lower bound of the sample complexity in either the single-task or multi-task setting. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What is the lower bound for sample complexity in (single-task) bilinear bandit pure exploration? Does Theorem 1 already attain optimality in terms of $(d_1 + d_2)r, \Delta, S_r$? 2. In the multi-task setting, as the estimation of the task-specific parameter $\hat{S}$ depends on the estimation of the common feature extractor, how is this reflected in the theoretical guarantee on $||\hat{S}-S||^2_F$? Intuitively, a larger number of tasks means more samples for the estimation of feature extractors, which further helps the estimation of $\hat{S}$. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the soundness and contribution of our work. We answer the questions raised below. *Weakness*: 1) Thank you for raising this point. There is no lower bound for the single-task setting for pure exploration in bilinear bandits with unknown underlying structures. In [Rizk et al. (2021)](), they conjecture that under a certain graph structure for pure exploration, the lower bound of [Soare et al. (2014)]() or [Fiez et al. (2019)]() can be reached. But currently, without any additional assumption (like an underlying graph structure), there is no lower bound. We leave this line of research for future work and will discuss this point in the camera-ready version. - G Rizk, A Thomas, I Colin, R Laraki, Y Chevaleyre, Best arm identification in graphical bilinear bandits, International Conference on Machine Learning, 2021 - M Soare, A Lazaric, R Munos, Best-arm identification in linear bandits, Advances in Neural Information Processing Systems 27 (NIPS 2014) - T Fiez, L Jain, K G. Jamieson, L Ratliff, Sequential Experimental Design for Transductive Linear Bandits, Advances in Neural Information Processing Systems 32 (NeurIPS 2019) 2) That's a great observation, and thanks for reading the proof in detail. Note that the $M$ tasks have different unknown task-specific parameters $\mathbf{S}\_{m, \star}$. The estimation of each $\mathbf{S}\_{m, \star}$ requires using samples from the corresponding task $m$, thus resulting in a linear scaling with the number of tasks $M$ for the total sample complexity. However, this scaling is $O\left(\frac{M(k_1+k_2)r\log(\delta^{-1})}{\Delta^2}\right)$, which depends on $k_1$, $k_2$, and the rank $r$ instead of $d_1 d_2$, and scales linearly with $M$. This is where the dominating term of our sample complexity arises in Theorem 2. 
Also note that the estimation of $\mathbf{B}_1$ and $\mathbf{B}_2$ cannot be further improved, as we used the standard Davis-Kahan theorem for bounding the estimation error, similar to [Yang et al 2021](), [Yang et al 2022](), [Du et al. 2023](). Again, note that these works address linear bandit settings, whereas we address the bilinear bandit setting. - Jiaqi Yang, Wei Hu, Jason D Lee, and Simon S Du. Impact of representation learning in linear bandits. In International Conference on Learning Representations, 2021. - J Yang, Q Lei, JD Lee, SS Du, Nearly minimax algorithms for linear bandits with shared representation, arXiv preprint arXiv:2203.15664, 2022 - Y Du, L Huang, W Sun, Multi-task Representation Learning for Pure Exploration in Linear Bandits, arXiv preprint arXiv:2302.04441, 2023 Please reach out to us during the discussion period if you have further questions, and we hope that you will consider raising your score given these clarifications.
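As a toy illustration of the Davis-Kahan argument referenced above, the sketch below recovers the top-$r$ left singular subspace of a noisy low-rank matrix and checks that the sin-theta subspace error shrinks with the noise level (the dimensions, spectrum, and noise levels are made up, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
d1, d2, r = 30, 30, 3  # hypothetical dimensions and rank

# Ground truth: the top-r left singular subspace of Theta plays the role of B1
U0 = np.linalg.qr(rng.normal(size=(d1, r)))[0]
V0 = np.linalg.qr(rng.normal(size=(d2, r)))[0]
Theta = U0 @ np.diag([3.0, 2.5, 2.0]) @ V0.T

errs = []
for noise in (0.01, 0.001):
    U_hat = np.linalg.svd(Theta + noise * rng.normal(size=(d1, d2)))[0][:, :r]
    # sin-theta distance between the estimated and true subspaces
    errs.append(np.linalg.norm((np.eye(d1) - U_hat @ U_hat.T) @ U0, 2))

# Davis-Kahan: subspace error ~ (noise operator norm) / (spectral gap)
assert errs[1] < errs[0] < 0.5
```

The qualitative point is the one made in the rebuttal: the subspace estimation error is governed by the noise-to-gap ratio, so the Davis-Kahan bound is essentially tight for this step.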
Rebuttal 1: Rebuttal: References: - KS Jun, R Willett, S Wright, R Nowak, Bilinear bandits with low-rank structure, ICML 2019 - T Fiez, L Jain, K G. Jamieson, L Ratliff, Sequential Experimental Design for Transductive Linear Bandits, NeurIPS 2019 - Y Du, L Huang, W Sun, Multi-task Representation Learning for Pure Exploration in Linear Bandits, arXiv preprint arXiv:2302.04441, 2023 - Y Lu, A Meisami, A Tewari, Low-rank generalized linear bandit problems, International Conference on Artificial Intelligence and Statistics, 2021 - Y Kang, CJ Hsieh, TCM Lee, Efficient frameworks for generalized low-rank matrix bandit problems, NeurIPS, 2022 - G Rizk, A Thomas, I Colin, R Laraki, Y Chevaleyre, Best arm identification in graphical bilinear bandits, International Conference on Machine Learning, 2021 - M Soare, A Lazaric, R Munos, Best-arm identification in linear bandits, Advances in Neural Information Processing Systems 27 (NIPS 2014) - Jiaqi Yang, Wei Hu, Jason D Lee, and Simon S Du. Impact of representation learning in linear bandits. In International Conference on Learning Representations, 2021. - J Yang, Q Lei, JD Lee, SS Du, Nearly minimax algorithms for linear bandits with shared representation, arXiv preprint arXiv:2203.15664, 2022
NeurIPS_2023_submissions_huggingface
2023
To Repeat or Not To Repeat: Insights from Scaling LLM under Token-Crisis
Accept (poster)
Summary: This paper conducts an empirical study on the scaling of transformer models under limited training data, which they call a *token crisis*. They show that under a token crisis, training T5 models for multiple epochs results in the degradation of pre-training and downstream task performance. They also show that the dataset quality is unimportant for this multi-epoch degradation. They reveal that using dropout can alleviate the multi-epoch degradation. They also observe that the behavior of Mixture-of-Expert (MoE) models can be used to predict the training behavior of dense models, and they use this observation to search for the best dropout rate for the dense model using MoE models. Strengths: 1. This paper is well-written, and the empirical takeaways are clear and useful 2. The *token crisis* is important and is expected to become more severe. It is good that someone studies this problem. Weaknesses: 1. This paper only uses a single task as the downstream task, so it is unclear if the result will hold for other datasets. 2. This paper only studies encoder-decoder models like T5, so it is unclear if the results will hold for other models. Specifically, decoder-only models (like GPT3, Chinchilla, and LLaMA) are more widely used and are the predominant architecture of current LLMs. Still, I think this paper has merit since it focuses on Enc-Dec models, and such a study is still valuable. 3. Some conclusions are too assertive and are not fully supported by the experiment results. - Section 3.1 Insight (3) attributes the performance difference between the two models to dataset size. However, the batch size is also different. I wonder if the key difference is the dataset size or the batch size. - Section 3.3: UL2 degrades more. There should be some statistical significance comparison or variance for the downstream performance based on multiple runs shown in Table 3. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Q1. 
Line 249: balance between regularization and model performance. Doesn't regularization mean less overfitting, which leads to better model performance? What does the balance mean here? Should it be the balance between regularization and training efficiency? - Q2. Does Chinchilla scaling law really hold for Enc-Dec models like T5? Chinchilla law is based on Decoder only models. More extensive experiments (more than the experiments shown in Insight (1)) are needed to verify such a claim. - Q3. The sentence on Line 111 (`When a larger model outperforms a smaller model, it indicates that the smaller model has received sufficient tokens`) is unclear to me. I don't see why the prior situation indicates the latter conclusion. - Q4. Line 153: Why is C4 with cleaning still extremely low-quality? Presentation === Line 244: "We" should not be capitalized. Other suggestion === **The following comment does not influence my rating of the paper, and I know that the authors are not required to compare their work with the following concurrent work**. I am just listing it since I believe that when the paper under review is published, readers will be wondering about the following questions. Another very recent work also studies token crisis, "Scaling Data-Constrained Language Models [1]". It will be beneficial to discuss the similarity and differences between that paper and the current paper under review. For example, why does the Chinchilla law need to be adjusted in [1] while the current paper claims that the Chinchilla law still holds? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are well-addressed. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: This paper only uses a single task as the downstream task, so it is unclear if the result will hold for other datasets. Thank you for your insightful suggestion. We conducted more downstream evaluations on the BoolQ and RTE datasets, which are two widely used datasets in the SuperGLUE benchmark. As shown in the rebuttal PDF, we observe a similar trend in both SuperGLUE and SQuAD results. > Q2: This paper only studies encoder-decoder models like T5, so it is unclear if the results will hold for other models. Still, I think this paper has merit since it focuses on Enc-Dec models, and such a study is still valuable. Thank you so much for your positive feedback. Please see the response to SEVe Q1 and Appendix A.1. > Q3: Section 3.1 Insight (3) attributes the performance difference between the two models to dataset size. However, the batch size is also different. I wonder if the key difference is the dataset size or the batch size. This is a great question! We conduct another set of experiments in the rebuttal PDF by fixing the batch size and letting the model go through a 4 times larger dataset. We can see that the model trained with a fixed batch size shows a similar trend to the model trained with a 4 times larger batch size. > Q4: Section 3.3: UL2 degrades more. There should be some statistical significance comparison or variance for the downstream performance based on multiple runs shown in Table 3. We added the standard deviation in the PDF submission. > Q5: Line 249: balance between regularization and model performance. What does the balance mean here? Dropout serves as a regularization technique to reduce overfitting in LLM training to improve model performance. However, it can slow down early learning. The "balance" refers to optimizing both training efficiency and model performance through the whole training process. 
To achieve this balance, we experimented with applying dropout mainly in later epochs to prevent overfitting, while avoiding its use in the initial training stages. We've clarified this point in the latest version of our paper and will update it accordingly. > Q6: Does Chinchilla scaling law really hold for Enc-Dec models like T5? Chinchilla law is based on Decoder only models. More extensive experiments (more than the experiments shown in Insight (1)) are needed to verify such a claim. First, we argue again that decoder-only models are not that different from Enc-Dec models. In addition, we check the Chinchilla scaling law in our paper because we want to see whether the encoder-decoder architecture is also similarly data-hungry. Through our experiments depicted in Figure 2, we indeed observed that larger models outperform smaller ones given a fixed computation budget, with the requirement for an increased dataset size. This observation reinforces the necessity of investigating multi-epoch training. To ensure clarity and precision in our assertion, we have revised our claim as follows: "Encoder-Decoder models trained on the C4 dataset exhibit comparable data-hungry behavior as described in the Chinchilla scaling law." This will be reflected in our forthcoming version of the paper. > Q7: The sentence on Line 111 (When a larger model outperforms a smaller model, it indicates that the smaller model has received sufficient tokens) is unclear to me. I don't see why the prior situation indicates the latter conclusion. A larger model is more data-hungry, which means a larger model is superior only after learning from enough tokens. 
Assume we get a larger computation budget and infinite training tokens; the question here is whether we should train a larger model or just train a smaller model for longer. Since the smaller model is cheaper during inference, we only want a larger model when the larger model is better than the smaller one given the same training cost. Since the smaller model has relatively limited capacity to consume more data, it will be outperformed once we have enough resources to train the larger model for long enough. That is the case in which the smaller model has received enough data and we should train a larger model instead. > Q8: Line 153: Why is C4 with cleaning still extremely low-quality? Sorry for the confusing statements. As we discussed in Appendix C, the terms "high" and "low" quality are relative, and our classification of C4 as low-quality is made in comparison to Wikipedia. To prevent any misunderstanding, we will incorporate a footnote in our upcoming version. > Q9: Discuss the concurrent token crisis paper, "Scaling Data-Constrained Language Models [1]". Thank you so much for pointing this out! We are very happy to see this concurrent work focusing on the token crisis and glad to discuss the differences. While both our paper and the referenced work address token-crisis concerns, they diverge in focus. The other work primarily delves into the specific scaling law associated with repeated data usage, identifying the optimal repetition times based on model and dataset sizes. In contrast, our primary objective is to scrutinize the multi-epoch scaling behavior. Notably, we explore influential factors like UL2 and MoE techniques, which exhibit heightened data dependency, and we discover the surprising effectiveness of dropout in mitigating multi-epoch degradation. These papers offer orthogonal insights into the token crisis problem. 
The other work contends that the Chinchilla scaling law necessitates adjustment due to the use of repeated data, a distinct context from the original Chinchilla paper. In contrast, our experimentation with T5 does not involve data reuse, maintaining a setting more akin to the Chinchilla study. --- Rebuttal Comment 1.1: Title: Re: Rebuttal Comment: Thank you for your detailed responses. I carefully read the responses, and most of my questions are answered. I appreciated the supplementary experiments, and I encourage the authors to add them to the final version should this paper be accepted. Still, I do not understand the response to question 7. Reading Section 2 Insight (1) again makes me unsure of what this paper is doing. Here, the paper tries to verify if the Chinchilla scaling law applies to T5 when training on C4. However, the experiment here compares T5 of different sizes (6 different models) when training with different numbers of tokens (and I am not sure how the optimal number of tokens is calculated) and training at various computation budgets (according to Line 110). How can we fit the Chinchilla scaling law using the above experiment? In the Chinchilla paper, they either fix the model size and vary the number of training tokens or fix the computation budget (FLOPs) and vary the model size. Last, they use all those models to fit the Chinchilla law. However, it seems to me that all those factors vary in this paper, so it is unclear to me how the scaling law is verified here. Can the authors elaborate more on how they conduct the experiments, how they calculate the optimal number of tokens in Figure 2, and how this supports the claim that "Chinchilla law holds on T5"? --- Reply to Comment 1.1.1: Title: Re: Re: Rebuttal Comment: We really appreciate your careful review! And sorry for the confusing statements. We set out to check that "Encoder-Decoder models trained on the C4 dataset exhibit comparable data-hungry behavior as described in the Chinchilla scaling law." 
The Chinchilla paper fit a specific scaling law, but what we want to do is check that a similar trend exists in our setting, to make sure the following experiments are meaningful. Therefore, we used a simplified (cheaper) way: since T5 models are trained with an inverse square root LR schedule instead of the cosine LR schedule used in Chinchilla, we do not need to rerun the same model size many times with different "max_cosine_schedule_steps" settings to obtain checkpoints with varied costs and a fixed model size. Therefore, within a single training run, we can use different checkpoints during training as models with a fixed model size and varied training tokens. For instance, a T5-Base trained for 5K steps is exactly the checkpoint at 5K steps of a T5-Base trained for 10K steps. And similarly, if we consider the case of "fix the computation budget (FLOPs) and vary the model size", we can use a smaller model's checkpoint trained with more steps and a larger model's checkpoint trained with fewer steps. These two checkpoints consumed the same FLOPs but have different model sizes. We will add this explanation to our submission. We hope this explanation resolves your concern. Thank you so much for your careful review and comments again. Your comments improved this draft a lot!
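The checkpoint-reuse argument above can be made concrete with the usual $6ND$ rule of thumb for training FLOPs (the parameter counts and batch size below are rough, hypothetical figures for illustration, not numbers from the paper):

```python
def train_flops(n_params, n_tokens):
    """Standard 6*N*D rule of thumb for training compute (FLOPs)."""
    return 6.0 * n_params * n_tokens

# Hypothetical numbers for illustration only
tokens_per_step = 65536          # batch size measured in tokens
base_n, large_n = 220e6, 770e6   # rough T5-Base / T5-Large parameter counts

# One training run per model size already yields iso-FLOP comparison points:
# under the inverse-square-root schedule, every intermediate checkpoint is
# itself a valid shorter training run.
budget = train_flops(large_n, 10_000 * tokens_per_step)
base_steps = budget / train_flops(base_n, tokens_per_step)
print(f"T5-Large @ 10k steps costs the same FLOPs as T5-Base @ {base_steps:.0f} steps")
```

So comparing the Large checkpoint at 10K steps against the Base checkpoint at roughly 35K steps is an iso-compute comparison, which is the kind of point used to check the "larger models win at large budgets" trend.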
Summary: The paper presents an empirical study on the effect of training for multiple epochs in the data-limited regime. The authors show that the Chinchilla scaling law holds for T5-style models and that repeated tokens result in degraded accuracy. They also study some of the factors contributing to this degradation. Towards the end, they show that MoEs can be used as a proxy to tune the hyperparameters of larger models. Strengths: - a detailed empirical study - the results are carefully studied Weaknesses: - Some of the conclusions are trivial. For example, larger models are more susceptible to overfitting. - MoE is presented as a way to tune hyperparameters of larger models, but it is not clear to what extent. For example, should MoE always be iso-parameter with the base dense model? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Have the authors considered the effect of the learning rate on overfitting? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: Some of the conclusions are trivial. For example, larger models are more susceptible to overfitting. Thanks for the suggestion. We argue that, although some of our conclusions are intuitively trivial, these conclusions play an important role in studying the token-crisis problem. For instance, to ensure that our investigation on relatively fewer tokens and moderate-size models transfers to training larger models with more data for fewer epochs, we first had to verify the insight that larger models are more susceptible to overfitting. Therefore, these conclusions not only provide insights to readers but also pave the way to more in-depth and insightful conclusions. > Q2: MoE is presented as a way to tune hyperparameters of larger models but it is not clear to what extent. For example, should MoE always be iso-parameter with the base dense model? We cannot say MoE is always iso-parameter with the base dense model before studying all hyper-parameters, and, as we know, it is prohibitively expensive and almost impossible to ablate all hyper-parameters even for a moderate-size model in LLM training. However, in this paper, we verified that an iso-parameter MoE matches the more computation-heavy dense model at three different scales (Base, Large, XL) and under six different dropout ratios (0, 0.1, 0.2, 0.3, 0.4, 0.5). This indicates a strong trend towards iso-parameter behavior with MoE, suggesting MoE is a promising approach for efficient hyperparameter tuning. > Q3: Have the authors considered the effect of the learning rate on overfitting? Thank you for your insightful query. We fully agree that, as a general principle, training with an excessively small learning rate over an extended duration has the potential to exacerbate the overfitting issue. Therefore, we delved into the impact of the learning rate on overfitting through an ablation study involving both smaller and larger learning rates.
Our investigation revealed that while the learning rate does influence overall model performance, it only exerts a limited effect on altering the overfitting trend in our experiments.
Summary: The authors propose a concept called token-crisis, meaning that the growth rate of available high-quality text data is much slower than the growth rate of data required by LLMs. This paper is the first empirical study of repeating pre-training data for the token-crisis problem. Some major findings are: larger models are more prone to overfitting, which affects downstream tasks; using dropout is an effective way to alleviate the multi-epoch degradation, and setting the dropout rate to 0.2 or 0.3 yields the optimal performance. Strengths: 1. The experiments are very comprehensive, and most of the conclusions are well-supported. 2. Many findings in the paper provide valuable insights for training better open-source LLMs in academia with limited resources. 3. The finding that dropout mitigates the token-crisis seems pretty useful. Weaknesses: 1. Why are there no experiments testing a smaller number of data repetitions? The smallest number of repetitions in the paper is 2^8, which is very large. 2. The paper does not seem to answer the question in the title: to repeat or not to repeat. If we have limited data, should we repeat or not in pre-training? There seems to be no experiment showing how many repetitions are optimal with the same set of data. 3. The dataset-quality-does-not-matter-much conclusion seems a bit under-supported. If the Wiki data is of higher quality than the C4 data, why does the model pre-trained on Wiki without repetition not outperform the C4 pre-trained model on downstream tasks? Considering there is a recent paper [1] that claims that truly high-quality data helps a lot in code generation, I'm a bit unsure about this conclusion. 4. Does model architecture matter? All experiments use T5, an encoder-decoder model. Do all conclusions hold when using a decoder-only model? [1] Gunasekar, S., Zhang, Y., Aneja, J., Mendes, C.C.T., Del Giorno, A., Gopi, S., Javaheripi, M., Kauffmann, P., de Rosa, G., Saarikivi, O. and Salim, A., 2023.
Textbooks Are All You Need. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. See Weaknesses. 2. Since most of the experiments are done on the C4 dataset, which is known to be low quality, I'm wondering if the conclusions would still hold when using high-quality data. I think there is currently a trend to utilize higher-quality data (whether generated from GPT-4 or hand-crafted) to fine-tune or pre-train smaller LLMs, which seems to obtain very good results. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation section is pretty honest and actually echoes some of my concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: Why are there no experiments testing a smaller number of data repetitions? The smallest number of repetitions in the paper is 2^8, which is very large. Thank you for your question. In our plots, we actually do have experiments testing a small number of data repetitions. For instance, in Figure 3, we trained Base, Large, and XL models with a larger dataset (four times larger) and a reduced number of epochs (e.g., 2^6). Furthermore, in several of our figures (e.g., Fig 4, 5, 6, 7), the plotted validation performance spans the entire training process, encompassing datapoints from the initial stages, which involve only a few epochs of data. > Q2: The paper does not seem to answer the question in the title: to repeat or not to repeat. If we have limited data, should we repeat or not in pre-training? There seems to be no experiment showing how many repetitions are optimal with the same set of data. Thanks for the great question again! We actually reached our conclusion through this investigation but did not clearly highlight it in the conclusion section. To address this, we revised the conclusion section of our paper to provide a more explicit summary of our findings. Specifically, if we do not apply any tricks, then according to our experiments in Figures 4, 5, 6, and 7, it is okay to repeat for a few epochs. A concurrent work [6] also derived a specific scaling law for training with repeated data. In this paper, we think studying the factors of scaling LLMs under the token-crisis is also highly important because we find the scaling law (or the optimal number of repetitions) is very sensitive to many factors like dataset size, model size, training objective, and regularization. Based on our findings, in Figure 9, we can see that simply setting dropout to 0.1 can greatly change the multi-epoch degradation, so the scaling law would also be very different after adding dropout.
Therefore, if we add an appropriate dropout for multiple-epoch LLM training, it is okay to repeat. [6] Muennighoff, Niklas, et al. "Scaling Data-Constrained Language Models." arXiv preprint arXiv:2305.16264 (2023). > Q3: The dataset-quality-does-not-matter-much conclusion seems a bit under-supported. If the Wiki data is of higher quality than the C4 data, why does the model pre-trained on Wiki without repetition not outperform the C4 pre-trained model on downstream tasks? Considering there is a recent paper ("Textbooks Are All You Need") that claims that truly high-quality data helps a lot in code generation, I'm a bit unsure about this conclusion. This is a great question. We also discussed the dataset quality assumption in Appendix C. Since the Wikipedia dataset is actually smaller than C4, even if we use the full Wikipedia dataset, the model is trained for more than one epoch over 500K steps. C4, however, is large enough that there is no repetition at all. Therefore, it is reasonable that Wikipedia is slightly worse than C4 when training with enough data. For the "Textbooks Are All You Need" paper, the data quality is extremely high and also very similar to downstream tasks like HumanEval (see Section 6 of the Textbooks paper). So using such high-quality instruction-following data, without web-scale pre-training data, to achieve good performance on some benchmarks is possible. However, in this paper, we are discussing the quality of web-scale pre-training data. Note that high quality here is relative: compared with C4, Wikipedia is good, but if we compare Wikipedia with Phi-1 or Vicuna's ShareGPT instruction-following data, Wikipedia is relatively less ideal. > Q4: Does model architecture matter? All experiments use T5, an encoder-decoder model. Do all conclusions hold when using a decoder-only model? Please see the response to Reviewer SEVe Q1 and our Appendix A.1.
> Q5: Since most of the experiments are done on the C4 dataset, which is known to be low quality, I'm wondering if the conclusions would still hold when using high-quality data. I think there is currently a trend to utilize higher-quality data (whether generated from GPT-4 or hand-crafted) to fine-tune or pre-train smaller LLMs, which seems to obtain very good results. We argue that, firstly, C4's quality is relatively low compared with Wikipedia, but that doesn't mean C4 is a terrible dataset. Note that LLaMA also used C4 as part of its pretraining data. As we mentioned above, the Textbooks data works on HumanEval because its data is similar to the HumanEval benchmark to some extent. If we want a model that is good at everything, it is still better to pretrain the model on a web-scale dataset. Instruction finetuning is out of the scope of this paper. Due to the popularity of ChatGPT, users are writing instructions every day. We can even mine instructions from daily conversations or movie transcripts and then generate a huge number of instructions. So the number of instructions is growing fast and has not approached its upper bound. In other words, it is too early to discuss the token-crisis of instruction-following data. We think it is better to wait and see what happens with instruction-following data, and whether we will really run out of instructions, before studying an "instruction-crisis". --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I feel overall positive about the paper. I raised my score to 6. Below are some additional comments: 1. Table 2 seems to suggest that C4 and Wikipedia have the same number of tokens (2^27), which is not consistent with the authors' rebuttal, which suggests that the two datasets are not of the same size. If that's the case, why not use a subset of C4 that has the same size as the Wikipedia data to perform the data quality experiment?
It doesn't seem possible to reach any conclusion on data quality when the two datasets differ in size. 2. About the model architecture comment, I didn't mean that encoder-decoder models are not as good as decoder-only models. I'm just curious whether the model architecture plays a role in the multi-epoch training degradation. The authors could include some discussion on this in the paper. --- Reply to Comment 1.1.1: Title: Re: Official Comment by Reviewer kBHt Comment: We appreciate your reconsideration of the evaluation score! Sorry for the confusing statements in Table 2. Since Wikipedia has fewer tokens in total, even if we use the full dataset, the number of epochs is larger than 1 from the start, which means the dataset-size gap between full-dataset training and subset training is smaller. For instance, we are actually comparing: C4 ($2^{35}$ tokens $\times$ $2^{0}$ epochs) vs C4 ($2^{27}$ tokens $\times$ $2^{8}$ epochs); Wiki ($2^{34}$ tokens $\times$ $2^{1}$ epochs) vs Wiki ($2^{27}$ tokens $\times$ $2^{8}$ epochs). The noteworthy observation from Table 2 is that despite Wikipedia having a smaller dataset-size gap, it experiences a comparatively larger performance degradation. This insight strengthens our argument: rather than rendering our conclusion incorrect, this observation further reinforces its validity. Sorry again for the confusing statements in the paper. We will add this explanation and the discussion about encoder-decoder vs decoder-only to our paper. And thank you again for the great suggestions! Your suggestions significantly improved this draft.
Summary: This paper delves into the token-crisis issue in language models, a situation characterized by performance decline when the same pre-training data is used across multiple epochs. The authors scrutinize several methods of training language models with recurring tokens, encompassing regularization strategies and the use of mixture-of-experts for refined hyper-parameter tuning. Furthermore, they look into the possibilities of creating supplementary data utilizing existing language models and formulating more data-conservative model structures. The paper provides a comprehensive analysis of the token-crisis issue and its roots, as well as the efficacy of various strategies aimed at alleviating this problem. Strengths: This paper has several strengths across different dimensions: - The paper focuses on an important problem in language modeling, the token-crisis problem, which has not been extensively studied before and has significant potential. The authors provide a thorough investigation of the problem and explore various approaches to mitigating it, including regularization techniques and mixture-of-experts for efficient hyper-parameter tuning. - The presented insights have the potential to improve the performance and efficiency of language model learning. The paper's contributions, including a thorough investigation of the token-crisis problem and its causes, as well as the effectiveness of various approaches to mitigating this issue, are significant for the field of natural language processing. - The paper is well-written and well-organized, with clear explanations of the problem and the proposed solutions. The authors provide a detailed analysis of the factors contributing to multi-epoch degradation and the effectiveness of various approaches to mitigating this issue. The experiments are well-designed and the results are presented clearly and comprehensively.
Weaknesses: The first weakness of this work is that all the experiments were conducted on T5-style masked language modeling, in contrast to the recently surging GPT-style (i.e., causal) language modeling, and it is unclear whether all the insights/findings are applicable or transferable to CLMs. Meanwhile, the experiments were conducted on models with up to 3B parameters and did not explore the performance of the proposed approaches on larger models, such as GPT-3 or its equivalents. This limits the generalizability of the results to larger-scale models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Refer to the weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: All the experiments were conducted on T5-style masked language modeling, in contrast to the recently surging GPT-style (i.e., causal) language modeling, and it is unclear whether all the insights/findings are applicable or transferable to CLMs. Thank you for your insights. We totally agree that the training objective is highly important in the token-crisis problem. That is the reason why we investigate the UL2 [2] training objective (which is used in PaLM-2 [3]) in Section 3.3. The UL2 training objective not only includes T5-style masked language modeling but also covers causal language modeling and a more challenging masked token prediction (longer masked spans or a higher mask ratio), which is similar to the fill-in-the-middle objective [4], an advanced training objective proposed by OpenAI and used in open-sourced models like StarCoder [5]. [2] Tay, Yi, et al. "Ul2: Unifying language learning paradigms." The Eleventh International Conference on Learning Representations. 2022. [3] Anil, Rohan, et al. "Palm 2 technical report." arXiv preprint arXiv:2305.10403 (2023). [4] Bavarian, Mohammad, et al. "Efficient training of language models to fill in the middle." arXiv preprint arXiv:2207.14255 (2022). [5] Li, Raymond, et al. "StarCoder: may the source be with you!." arXiv preprint arXiv:2305.06161 (2023). > Q2: The experiments were conducted on models with up to 3B parameters and did not explore the performance of the proposed approaches on larger models, such as GPT-3 or its equivalents. Thank you for your valuable suggestion. We fully agree that extending our experiments to models like GPT-3 or its counterparts would have potential benefits. However, as mentioned in our response to Reviewer SEVe's Question 1, the primary challenge lies in resource availability. Training a 175B model on a substantial dataset for multiple epochs requires significant computational resources, which only a handful of institutions possess.
It's worth emphasizing that our interest extends to conducting ablation studies that delve into the various components of multi-epoch training. These studies would necessitate even more computational resources than training LLaMA-2 70B from scratch if we considered larger models. In the current paper, even focusing solely on Figure 10 comes with a considerable cost, approximately 47K USD for Google Cloud TPU usage. This cost is approximately seven times higher than training BERT-Large from scratch. To contribute to our research community, we have to scale down to align with a reasonable budget. At the same time, to make our conclusions as sound as possible, as depicted in Figure 3 and Figure 4, we conducted experiments to verify that larger models are usually more data-hungry and easier to overfit within fewer epochs.
Rebuttal 1: Rebuttal: Dear reviewers and AC: We thank all the reviewers for their feedback. Two main concerns and concise arguments are summarized below. The full version can be found in the corresponding responses to different reviewers. * Use a Decoder-only Model instead of an Encoder-Decoder Model. => This project started before the popularity of ChatGPT. Due to the wide range of experiments, we did not finish this work until 2 months ago. As a result, we opted for the well-established T5 framework, which was widely studied at the time. More importantly, we argue that the decoder-only architecture is not that different from the encoder-decoder architecture. As we discussed in Appendix A.1, there are two main distinctions to consider: (1) The encoder-decoder architecture has approximately twice the number of "trainable parameters" compared to the decoder-only model. (2) The encoder-decoder requires clearly separated input-output pairs to apply bidirectional attention only to the input tokens. To verify the viewpoint above, we compare encoder-decoder, decoder-only, and MoE-based decoder-only models in Figure 1 of our rebuttal pdf. We can see the encoder-decoder is clearly better than the decoder-only model, but an MoE-based decoder-only model with trainable parameters comparable to the encoder-decoder performs almost the same as the encoder-decoder model. Therefore, as suggested by the UL2 [2] paper, the different behaviors of encoder-decoder and decoder-only models stem more from the training objective than from the model architecture. That is the reason why we explore the UL2 training objective in our Section 3.3. * Larger Model and More Training Data. => As we stated in Appendix A.3, we fully agree that a super-scale investigation would be helpful, but, since we want to investigate a number of different factors resulting in the token-crisis and multi-epoch degradation, we have to implement and run a range of ablation experiments. Using a very large model and dataset is prohibitively expensive for our setting.
To verify that larger models are usually more data-hungry and easier to overfit within fewer epochs, in Figure 3 and Figure 4, we investigate the scaling-up behaviors in multi-epoch training. This supports the claim that our insights based on smaller models with more epochs also apply to larger models with fewer epochs. Best, Authors Pdf: /pdf/4154a6368b159bd63ba9333c0e71ef5011755f26.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: Data is one of the most important factors for training a well-performing Large Language Model (LLM). There are not enough studies that delve deeply into the effect of data. This paper addresses this key problem and analyzes the effect from important aspects, including the effect on pre-training, the effect on downstream tasks, and so on. Strengths: 1. Data is very important for LLMs. This work analyzes the impact of data from various aspects. 2. The experiments are rich and detailed. 3. The paper is well-written except for some typos, for example a missing '.' in Line 74. Weaknesses: 1. The backbone of this paper is T5 1.1, and we know that GPT is one of the most important generative models. Hence, it is necessary to add GPT as a backbone. 2. Data is the key factor for LLMs. The data volume used in the experiments could seriously affect the conclusions. In this work, the maximum amount of data is 2^35, about 34B tokens. This is so little data for training an LLM that the experimental results may not be reliable enough. The figures of training loss over training tokens from LLaMA 1 and LLaMA 2 show that 300~400B tokens are needed to display a stable trend (at least 200B). In other words, 2^27 (0.13B) tokens repeated many times are different from 1T tokens repeated the same number of times. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Please update the experiment results with more data, for example, 400B. 2. If possible, show the performance of GPT-3. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see the weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1: Use GPT instead of T5 as the backbone. Thank you so much for your valuable insights. As we discussed in Appendix A.1, the differences between encoder-decoder and decoder-only models might not be as big as our community thought. There are two main distinctions to consider: (1) The encoder-decoder architecture actually has approximately twice the number of "trainable parameters" compared to the decoder-only model. This is one important reason why the encoder-decoder is sometimes better. (2) The encoder-decoder setup requires clearly separated input-output pairs to work effectively, enabling bidirectional attention only on the input tokens. Interestingly, this mechanism is also employed in decoder-only prefix-LMs. An example of such a model is UPaLM [1]. They fine-tuned PaLM (a model trained with the causal LM objective) to create UPaLM with just a few training steps. This suggests that the difference between prefix-LMs and causal LMs is relatively minor. Moreover, since both the prefix-LM and the encoder-decoder use input-only bidirectional attention, we can view the prefix-LM as an encoder-decoder model that shares the trainable weights across the encoder and decoder. To show that encoder-decoder and decoder-only models are not that different from the model architecture perspective, we conduct another set of experiments in Figure 1 of our rebuttal pdf. We can see the encoder-decoder is clearly better than the vanilla decoder-only model due to more parameters. If we use an MoE-based decoder-only model with trainable parameters comparable to the encoder-decoder model, the gap can be closed almost perfectly. In addition, to be honest, this project was initiated before ChatGPT gained widespread popularity. As evident in our paper, we explored various factors related to the "token-crisis" phenomenon, which necessitated a wide array of implementation and training efforts. As a result, we opted for the well-established T5 framework, which was widely studied at the time.
More importantly, once again, we believe the T5 architecture is quite similar to the widely used decoder-only design. > Q2: If possible, please update the experiment results with more data, for example, 400B. Thank you so much for your suggestion. We totally agree that super-scale experiments like training a 175B model on 400B tokens for multiple epochs would make our conclusions more solid. This was also mentioned in our Appendix A.3. While we acknowledge the potential benefits of super-scale experiments, resource limitations (especially during the rebuttal phase) prevented us from pursuing this scale. If we obtain enough computational resources, we will run experiments at this level. However, considering the resources we have so far, to make our conclusions as sound as possible, as depicted in Figure 3 and Figure 4, we conducted experiments to verify that larger models are usually more data-hungry and easier to overfit within fewer epochs. This supports the claim that our insights based on smaller models with more epochs also apply to larger models with fewer epochs. [1] Tay, Yi, et al. "Transcending scaling laws with 0.1% extra compute." arXiv preprint arXiv:2210.11399 (2022). --- Rebuttal Comment 1.1: Comment: I have read the rebuttal; thanks to the authors for their rebuttal. My score is unchanged, as most of my concerns remain: 1. Both encoder-decoder and decoder-only structures are needed. 2. From the loss figures of LLaMA 2 (and LLaMA 1), we can clearly see that 400B (at least 250B) tokens are needed to show the tendency. --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you so much for reading our rebuttal. **1. Encoder-Decoder vs. Decoder-Only Comparison:** As shown in Figure 1 of the rebuttal PDF, we have shown that the encoder-decoder is not that different from the decoder-only model. The real difference is the training objective, as stated in the UL2 paper above, and we took a closer look at the training objective in our paper. **2.
Scale of Experiments:** While we concur on the potential advantages of conducting larger-scale experiments, we respectfully maintain our perspective that a 400B scale is not an absolute requirement for acceptance of a transformer scaling paper. - **Learning Schedule Dependency:** The training loss trend depends greatly on the learning schedule. For instance, LLaMA 1 used a 4M batch size for 250K steps with a cosine LR schedule, but our model used a 64K batch size for 500K steps with an inverse square root LR schedule. The large batch size and cosine learning schedule of LLaMA 1 mean the model does not reach a small learning rate until over 100B tokens, so the loss stabilizes later. Our model, in contrast, reaches a small learning rate with far fewer tokens, so that, as shown in Figure 5 of this paper, models with enough tokens achieve a stable training loss much faster. - **Addressing Cost and Accessibility:** Second, training a billion-parameter model at such a scale is indeed prohibitively expensive for most institutes. There is no doubt we should scale the experiments to a moderate size. We believe that the ablation studies in this paper, with a 3B model on over 30B tokens, are not cheap for most institutes and definitely not trivial (a single run is around 6 times more expensive than training BERT-Large from scratch, and we actually ran these experiments many times in our ablation study). Few institutes are as rich as the big techs. We sincerely hope our community can be inclusive, inviting broader participation and a wider range of insights. We believe this would not only accelerate the pace of transformer scaling research but also nurture a diverse pool of scaling researchers worldwide.
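The schedule dependency described above can be made concrete with a small sketch contrasting the two learning-rate schedules mentioned. This is illustrative only: the warmup length, peak LR, and total steps are hypothetical, not the actual values used by LLaMA or this paper.

```python
import math

PEAK_LR = 1e-2        # hypothetical peak learning rate
WARMUP = 10_000       # hypothetical warmup steps
TOTAL_STEPS = 500_000 # hypothetical total training steps

def inverse_sqrt_lr(step: int) -> float:
    """T5-style schedule: constant during warmup, then ~ 1/sqrt(step) decay."""
    return PEAK_LR / math.sqrt(max(step, WARMUP) / WARMUP)

def cosine_lr(step: int) -> float:
    """Chinchilla/LLaMA-style schedule: cosine decay over the full run."""
    return 0.5 * PEAK_LR * (1 + math.cos(math.pi * min(step, TOTAL_STEPS) / TOTAL_STEPS))

# Early in training (20% of the run), the inverse-sqrt LR is already far
# below its peak, while the cosine LR is still close to PEAK_LR. This is
# why the inverse-sqrt run can show a stable loss with far fewer tokens.
step = 100_000
print(f"inverse-sqrt LR at 100K steps: {inverse_sqrt_lr(step):.2e}")
print(f"cosine LR at 100K steps:       {cosine_lr(step):.2e}")
```

With these numbers, the inverse-sqrt LR has already decayed to roughly a third of the peak at 20% of training, whereas the cosine LR is still above 90% of the peak, matching the argument that the cosine run reaches small learning rates (and stable loss) much later.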
null
null
null
null
null
null
No-Regret Online Prediction with Strategic Experts
Accept (poster)
Summary: This submission studies the problem of online decision-making using predictions given by experts with incentives to behave strategically. In particular, at each time-step, K experts hold a belief about a binary outcome. Each expert reports a prediction to a learner, who picks a set of m experts. The learner then suffers a loss which is a function of both the binary outcome and the chosen experts' true beliefs about the outcome. The goal of the learner is to achieve no-regret with respect to the best-in-hindsight choice of m experts while maintaining incentive compatibility, i.e. reporting their true belief as their prediction is a weakly-dominant strategy for each expert at every time-step. The authors study two settings: one in which the learner has a modular utility function and one in which their utility function is submodular. An existing algorithm (WSU, [6]) for the 1-expert version of this problem (i.e. the version of the problem where the learner selects one expert at each time-step) is capable of getting sublinear regret in both settings, albeit at the cost of exponential runtime. For modular utility functions, the authors show how to use a variant of the well-known Follow the Perturbed Leader (FTPL) algorithm under a sufficient condition for the perturbation distribution to guarantee both no-regret and approximate incentive compatibility. For submodular utility functions, the authors use an "online distorted greedy algorithm" to obtain no-regret while maintaining incentive compatibility constraints exactly. At a high level, the online distorted greedy algorithm runs m algorithms for the strategic 1-expert problem concurrently. At each time-step, each of the m sub-algorithms selects an expert, and the learner uses these m experts as their selection. Based on the loss the learner receives, they set the loss of each sub-algorithm in a particular way. The authors use the WSU algorithm of [6] as their sub-algorithm.
Along the way, they derive an adaptive regret bound for the WSU algorithm, which may be of independent interest. Finally, the authors empirically evaluate their two algorithms on a dataset from a FiveThirtyEight forecasting competition to predict the outcomes of games in the 2022-2023 NFL football season. They find that both algorithms obtain similar performance in this setting. Strengths: While others have studied the (non-strategic) m-expert problem, as well as the strategic 1-expert problem, the authors are the first to study the strategic m-expert problem. This setting is well-motivated by applications such as forecasting competitions. While the algorithms presented by the authors are not particularly novel, their application to the strategic m-expert problem requires non-trivial theoretical analysis, as well as some new ideas, such as (1) the sufficient condition for the perturbation distribution to guarantee approximate incentive compatibility in the modular setting, and (2) the adaptive regret bound for the WSU algorithm used as a subroutine for the submodular setting. While I would normally consider the experimental results on real-world data as a strength, their impact is limited due to the lack of relevant comparisons (see Weaknesses for more details). Finally, the writing is clear, which makes the authors' contributions easy to understand. Weaknesses: The lack of any sort of lower bound is a weakness, especially since (as the authors point out) the loss function they consider is exp-concave, and thus the regret rates of Hedge in the non-strategic version of the problem where one expert is selected scale only logarithmically in the number of experts (and independently of the time horizon). While it may indeed be the case that going from constant-in-T to T^{1/2} regret rates is the price to pay for incentive compatibility in the m-expert problem, this submission presents no evidence either for or against this claim.
It would be nice to provide some background on the current state of the (non-strategic) m-expert problem in the related work. Finally, the experimental results would be strengthened considerably if (1) the theoretical rates and (2) the regret of the naive application of the WSU algorithm were also plotted. Currently, it is not easy to see how well the algorithms are actually doing without the relevant baselines. Additionally, it would be interesting to see how closely the empirical performance of the algorithms matches their corresponding theoretical bounds. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: What are the runtimes of both algorithms? What regret rates are obtained for the non-strategic m-experts problem with squared loss? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their constructive feedback and comments. We hope our responses below may help and perhaps convince the reviewer to raise their score. We are happy to answer any further questions during the author-reviewer discussion period. $\bullet$ **Q:** The lack of any sort of lower bound is a weakness, especially since (as the authors point out) the loss function they consider is exp-concave, and thus the regret rates of Hedge in the non-strategic version of the problem where one expert is selected scale only logarithmically in the number of experts (and independently of the time horizon). While it may indeed be the case that going from constant-in-$T$ to $T^{1/2}$ regret rates is the price to pay for incentive compatibility in the $m$-expert problem, this submission presents no evidence either for or against this claim. **A:** While there are lower bounds for the $m$-experts problem without the incentive compatibility property (and without using the fact that losses are quadratic) (see the response to your next question), neither the prior works nor our work provides lower bounds under the incentive compatibility assumption (and using the fact that losses are quadratic). That being said, we thought a lot about this and we conjecture that incurring an $\mathcal{O}(\sqrt{T})$ regret is inevitable in the strategic setting. In particular, approximately incentive-compatible algorithms typically incur an $\mathcal{O}(\sqrt{T})$ term in their regret bound only due to not being fully incentive-compatible. Therefore, even if the algorithm has better regret bounds in the non-strategic setting (i.e., the equivalent of Theorem 4 for FTPL) (e.g., $\mathcal{O}(\ln K)$ regret bound of Hedge as discussed in lines 361-363 of the paper), the overall regret would be $\mathcal{O}(\sqrt{T})$. 
We also came up with multiple fully incentive-compatible algorithms, however, it seems that obtaining regret bounds better than $\mathcal{O}(\sqrt{T})$ is at odds with the algorithm being fully incentive-compatible. $\bullet$ **Q:** It would be nice to provide some background on the current state of the (non-strategic) $m$-expert problem in the related work. **A:** For the non-strategic setting with modular utilities, [10] proposed the Component Hedge (CH) algorithm and obtained a regret bound of $\sqrt{2m\ell^*\ln(\frac{K}{m})}+m\ln(\frac{K}{m})$ where $\ell^*$ is the cumulative loss of the best-chosen set in hindsight. They also gave a matching lower bound for this problem. Applying the same analysis as in Theorem 7 to the setting of the naive approach with ${K \choose m}$ meta-experts, we can show that the regret bound of WSU matches the aforementioned lower bound. [11] studied the FTPL algorithm with the Gaussian noise distribution and provided an $\mathcal{O}(m\sqrt{T\ln(\frac{K}{m})})$ regret bound for this setting. For the non-strategic setting with submodular utility functions, [13] proposed the online distorted greedy algorithm (with the dual averaging algorithm as the algorithm $\mathcal{A}_i$ for $i=1,\ldots,m$) whose regret bound is $\mathcal{O}(\sqrt{mT\ln(\frac{K}{m})})$. More recently, [12] studied the $m$-experts problem under various choices of the utility function (sum-reward, max-reward, pairwise-reward and monotone reward). In particular, for the setting with modular utilities (sum-reward), they proposed an algorithm that matches the optimal regret bound of the CH algorithm while being computationally more efficient. We will make sure to add this discussion to the "Related work" section (Section 1.1) in the final version of the paper. $\bullet$ **Q:** The experimental results would be strengthened considerably if (1) the theoretical rates and (2) the regret of the naive application of the WSU algorithm were also plotted.
Currently, it is not easy to see how well the algorithms are actually doing without the relevant baselines. Additionally, it would be interesting to see how closely the empirical performance of the algorithms matches their corresponding theoretical bounds. **A:** We tried to implement the naive application of the WSU algorithm in our experiments. However, the number of meta-experts for our two experiments was ${20 \choose 5}=15504$ and ${100 \choose 5}=75287520$, and the experiments did not finish (we waited for a few hours). In the "global" response to all the reviewers, we have included two new plots in which we have compared the running average of the regret of our proposed algorithms with that of the FiveThirtyEight aggregated predictions as the baseline. Please see the "global" response for more details. $\bullet$ **Q:** What are the runtimes of both algorithms? **A:** Both algorithms had similar runtimes for our experiment, around 3-4 minutes each. $\bullet$ **Q:** What regret rates are obtained for the non-strategic $m$-experts problem with squared loss? **A:** While there exist prior works on the non-strategic $m$-experts problem (discussed above), none of these papers take the structure of the loss function (quadratic loss in our setting) into account. In order to obtain the $\mathcal{O}(\ln K)$ regret bound for the $1$-expert problem (with squared loss) using the Hedge algorithm, the algorithm makes a single prediction $\sum_{i=1}^K \pi_{i,t}p_{i,t}$ at round $t\in[T]$ and its loss is $(\sum_{i=1}^K \pi_{i,t}p_{i,t}-r_t)^2$. In other words, choosing an expert $i\in[K]$ with probability $\pi_{i,t}$ at round $t$ is not good enough to obtain the improved $\mathcal{O}(\ln K)$ regret bound. Moving on to the $m$-experts problem, the main challenge for obtaining regret bounds better than $\mathcal{O}(\sqrt{T})$ is to decide how to aggregate the $K$ predictions as $m$ scalar values. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed reply.
Please allow me to clarify my question about the runtime. In particular, I am asking about the theoretical runtime in Big O notation, as opposed to wall clock time. --- Reply to Comment 1.1.1: Title: Running Time of the Proposed Algorithms Comment: Thanks for the clarification. At each round, the FTPL algorithm simply requires picking the $m$ experts (among the $K$ available experts) with the smallest noisy losses. We implemented this using a binary heap whose running time is $\mathcal{O}(K+m\ln K)$ per round. The online distorted greedy algorithm uses $m$ instances of the WSU algorithm for the $1$-expert problem and simply outputs the union of the experts chosen by these algorithms. At round $t\in[T]$, the WSU algorithm takes $\mathcal{O}(K)$ to compute the probabilities $\pi_{i,t}$ and pick one expert according to these probabilities. Therefore, the running time of the online distorted greedy algorithm is $\mathcal{O}(mK)$ per round. We are happy to answer any further questions.
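For concreteness, the per-round FTPL selection described in the reply above (pick the $m$ experts with the smallest perturbed cumulative losses, via a size-$m$ heap) can be sketched as follows. The function names and noise scale here are hypothetical, and the inverse-CDF sampler is just one standard way to draw zero-mean Laplace noise, chosen because the paper's Condition 1 covers the Laplace distribution:

```python
import heapq
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a zero-mean Laplace(scale) variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def ftpl_select(cum_losses, m, scale=1.0, rng=None):
    """Pick the m experts with the smallest perturbed cumulative losses.

    heapq.nsmallest keeps a heap of size m, matching the
    O(K + m log K)-per-round cost mentioned in the reply.
    """
    rng = rng or random.Random()
    noisy = [(loss + laplace_noise(scale, rng), i)
             for i, loss in enumerate(cum_losses)]
    return [i for _, i in heapq.nsmallest(m, noisy)]

chosen = ftpl_select([0.2, 0.9, 0.1, 0.5, 0.7], m=2, rng=random.Random(0))
```

With a vanishing noise scale the selection degenerates to the $m$ smallest true losses, which is a convenient sanity check on the implementation.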
Summary: This paper generalizes the problem of online binary prediction with expert advice, where at each round the learner can pick m>1 experts and the overall utility is a modular or submodular utility of the chosen experts. The experts are strategic and wish to be selected by the algorithm as often as possible, hence they may misreport their beliefs about the events. They design algorithms that are incentive-compatible and achieve sublinear regret in hindsight. Previous work has studied this problem for m=1. In the case of m>1, previous work has focused on designing no-regret algorithms and did not take incentive-compatibility issues into consideration. Their algorithm builds on a prior work that studies the FTPL algorithm for the m-expert problem with modular utility functions, and derives conditions for the perturbation function to guarantee approximate incentive compatibility. In particular, they show that while FTPL with Gaussian perturbations is not incentive-compatible, the Laplace or hyperbolic noise distribution guarantees approximate incentive-compatibility. Moreover, they propose an algorithm that builds upon online monotone submodular maximization subject to matroid constraints, which takes m incentive-compatible algorithms for the standard experts problem (i.e. m=1) and outputs their combined prediction. Compared to the first algorithm, this one achieves exact incentive-compatibility, however, at the price of an extra \sqrt{m} term in the regret bound. Strengths: The paper is written nicely and is easy to follow. The problem seems to be well-motivated. I did not go through the proofs. The second algorithm is more straightforward, however, I think it is nice that they included it and compared it with their first algorithm. Weaknesses: It might be helpful to add a discussion on the regret lower bound for this problem.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Minor comments: The citations need to be fixed, they do not show the authors' names; need to use \citet. Line 268, can you expand on why this holds? Question: What is the lower bound for the regret of this problem? Can you give a lower bound that is stronger than the case for m=1? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their constructive feedback and comments. We hope our responses below may help and perhaps convince the reviewer to raise their score. We are happy to answer any further questions during the author-reviewer discussion period. Also, we have provided additional plots that better demonstrate the performance of our proposed algorithms. Please see the "global" response to all the reviewers for more details. $\bullet$ **Q:** It might be helpful to add a discussion on the regret lower bound for this problem. What is the lower bound for the regret of this problem? Can you give a lower bound that is stronger than the case for m=1? **A:** For the non-strategic setting with modular utilities (and without using the fact that losses are quadratic), [10] proposed the Component Hedge (CH) algorithm and obtained a regret bound of $\sqrt{2m\ell^*\ln(\frac{K}{m})}+m\ln(\frac{K}{m})$ where $\ell^*$ is the cumulative loss of the best-chosen set in hindsight. They also gave a matching lower bound for this problem. Applying the same analysis as in Theorem 7 to the setting of the naive approach with $K \choose m$ meta-experts, we can show that the regret bound of WSU matches the aforementioned lower bound. For the non-strategic setting with submodular utility functions (and without using the fact that losses are quadratic), [13] proposed the online distorted greedy algorithm (with dual averaging algorithm as the algorithm $\mathcal{A}_i$ for $i=1,\ldots,m$) whose $(1-\frac{c}{e})$-regret bound is $\mathcal{O}(\sqrt{mT\ln (\frac{K}{m})})$ and this is the best bound for submodular utilities. While there are lower bounds for the $m$-experts problem without the incentive compatibility property (and without using the fact that losses are quadratic) as discussed above, neither the prior works ([6,7]) nor our work provides lower bounds under the incentive compatibility assumption (and using the fact that losses are quadratic). 
That being said, we thought a lot about this and we conjecture that incurring an $\mathcal{O}(\sqrt{T})$ regret is inevitable in the strategic setting (and considering the fact that losses are quadratic). In particular, approximately incentive-compatible algorithms typically incur an $\mathcal{O}(\sqrt{T})$ term in their regret bound only due to not being fully incentive-compatible. Therefore, even if the algorithm has better regret bounds in the non-strategic setting (i.e., the equivalent of Theorem 4 for FTPL) (e.g., $\mathcal{O}(\ln K)$ regret bound of Hedge as discussed in lines 361-363 of the paper), the overall regret would be $\mathcal{O}(\sqrt{T})$. We also came up with multiple fully incentive-compatible algorithms, however, it seems that obtaining regret bounds better than $\mathcal{O}(\sqrt{T})$ is at odds with the algorithm being fully incentive-compatible. $\bullet$ **Q:** The citations need to be fixed, they do not show the authors' name, need to use \citet. **A:** We will make sure to fix this in the final version of the paper. $\bullet$ **Q:** Line 268, can you expand on why this holds? **A:** We can write: \begin{equation*} \mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(b_{i,t},r_t)-\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(b_{j,t},r_t)]=\mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(p_{i,t},r_t)-\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(p_{j,t},r_t)]+\mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(b_{i,t},r_t)-\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(p_{i,t},r_t)]+\mathbb{E}[\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(p_{j,t},r_t)-\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(b_{j,t},r_t)]. \end{equation*} Let $S_1=\text{arg}\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(b_{j,t},r_t)$ and $S_2=\text{arg}\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(p_{j,t},r_t)$.
We have: \begin{equation*} \mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(b_{i,t},r_t)-\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(b_{j,t},r_t)]\leq\mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(p_{i,t},r_t)-\min_{S:|S|=m}\frac{1}{m}\sum_{t=1}^T\sum_{j\in S}\ell(p_{j,t},r_t)]+\mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(b_{i,t},r_t)-\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\ell(p_{i,t},r_t)]+\mathbb{E}[\frac{1}{m}\sum_{t=1}^T\sum_{j\in S_1}\ell(b_{j,t},r_t)-\frac{1}{m}\sum_{t=1}^T\sum_{j\in S_1}\ell(p_{j,t},r_t)]\leq \mathcal{O}(\sqrt{BT\ln (\frac{K}{m})})+\frac{1}{m}\sum_{t=1}^T\sum_{i\in S_t}\frac{4B}{\eta-2B}+\frac{1}{m}\sum_{t=1}^T\sum_{j\in S_1}\frac{4B}{\eta-2B}=\mathcal{O}(\sqrt{BT\ln (\frac{K}{m})})+\frac{8BT}{\eta-2B}, \end{equation*} where the last inequality follows from line 265 and line 267 of the paper. --- Rebuttal Comment 1.1: Title: Responding to rebuttal Comment: Thank you for your detailed response. I understand that addressing the lower bound is not a necessity for this paper and it can be studied in future work. It might be helpful to add your conjecture for the lower bound and raise it as an open question. --- Reply to Comment 1.1.1: Comment: Thanks for taking the time to read our response, we will make sure to use the additional content page in the final version of the paper to add the discussion about lower bounds.
Summary: - The paper studies the problem of online prediction with expert advice. In their setting, a learner at each round selects $m$ experts, and the loss is determined by a submodular or modular function of the beliefs reported by the selected experts. Each expert, being strategic, may intentionally misrepresent their beliefs to maximize their chances of being selected in the next round. The goal of the learner is to design no-regret algorithms that incentivize experts to report their beliefs truthfully (also called IC). - To tackle this problem, the authors first present an inefficient reduction to the case when $m=1$. Subsequently, they propose two efficient algorithms tailored to the specific types of loss functions considered. - In the case of the modular loss function, the authors investigate Follow-The-Perturbed-Leader (FTPL) and propose a general condition for the perturbation distribution that ensures approximate IC. - For the submodular loss function, they propose a simpler algorithm named the online distorted greedy algorithm. This algorithm not only guarantees exact IC but also attains the optimal approximation ratio. Strengths: - The paper studies an important and generic problem of online prediction with strategic experts that may find its applications in many scenarios. - The proposed algorithms in this paper are not only easy to implement but also enjoy good theoretical guarantees of being (approximate) IC and no regret. Weaknesses: 1. Several claims may have some issues, see e.g. Questions 3, 5, 6. 2. The presentation could be improved. For instance, Section 1.2 titled "Contributions" seems to contain some elements of background and motivations which might be more appropriately placed in the introduction of the paper. Moreover, it would be beneficial to include more discussions following important definitions (like Def 1) and theorems to provide more intuitions. 3.
In the experimental section, the authors plot the regret curve that is not normalized. This makes it hard to interpret, as unnormalized regret curves don't explicitly demonstrate the rate at which the algorithms learn. Additionally, to reflect the difference between reported and true beliefs, a uniformly random value within the range guaranteed by Theorem 3 is added, but this theorem doesn't ensure a uniform distribution for this difference. Finally, incorporating a comparison of the proposed algorithms with the aggregated predictions of FiveThirtyEight could potentially enrich the analysis. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Regarding Def 1: - Is this a new concept being proposed, or has it previously been studied in previous works? - The experts are long-standing in this setting, but the focus of the paper is on myopic IC. Although the authors mentioned that the analysis of FTPL could be extended to the more general setting of maximizing a conic combination of probabilities of being chosen at all subsequent rounds, is it also true for the online distorted greedy algorithm? What's the main modification/challenge of extending to non-myopic experts? - Would achieving IC become impossible if the definition is for all $r_t$, rather than the expectation over $\text{Bern}(b_{i,t})$? 2. In the "forecasting competitions" example described in Section 3.1, it seems that the loss function is not captured by the modular function studied in this paper, given it is normalized by $|S_t|$ rather than $m$. 3. About Thm 1, could you clarify why the reduction preserves IC? In particular: - How is the belief of each meta-expert defined in this context? Is the new loss function $\ell_S$ still proper? How to define IC when the belief of each expert is multi-dimensional? - Even though WSU is IC for the $m=1$ problem, it only guarantees every single expert reports truthfully when fixing others' reports.
In particular, it doesn't rule out the possibility that a group of experts form a coalition and misreport together to increase the sum of their probabilities of being selected, which is exactly what would happen when a single expert misreports in the $m$-expert problem -- it results in the simultaneous misreporting of $\binom{K}{m-1}$ meta-experts. 4. Thm 3 guarantees approximate IC in the sense of $|p^\star_{i,t}-b_{i,t}|$ being small. However, a more common notion of approximate IC is through the difference of utility, which, applied to this setting, is the difference in $\pi_{i,t+1}$ when $p^\star_{i,t}$ and $b_{i,t}$ are reported respectively. Could the authors include a discussion about this alternative notion? 5. Combining Thm 3 and Thm 5, it seems that as $B$ approaches 0, the FTPL algorithm becomes more IC and also yields a smaller regret. However, this contradicts the intuition that if the learner's decisions are entirely noise-driven (i.e., $B=0$), the regret should be high. Could the authors comment on this? 6. On lines 297-298, you mention that the $(1-\frac{c_f}{e})$ approximation factor is optimal. Is this claim supported by a lower bound? If so, could you point to the relevant references? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their constructive feedback and comments. We hope our responses below may help and perhaps convince the reviewer to raise their score. We are happy to answer any further questions during the author-reviewer discussion period. Also, given the size limit for the rebuttal, we have included the responses to the rest of your questions at the end of the "global" response to all the reviewers. $\bullet$ **Q:** The presentation could be improved. For instance, Section 1.2 titled "Contributions" seems to contain some elements of background and motivations which might be more appropriately placed in the introduction of the paper. Moreover, it would be beneficial to include more discussions following important definitions (like Def 1) and theorems to provide more intuitions. **A:** Thanks for your suggestions. We will make sure to use the additional page in the final version of the paper and provide intuitions for the definition of incentive compatibility (Definition 1) and add more discussions following our theorems. Also, we will revise the "Contributions" section (Section 1.2) and move the motivations to the "Introduction" section (Section 1) and the literature review to the "Related work" section (Section 1.1). $\bullet$ **Q:** In the experimental section, the authors plot the regret curve that is not normalized. This makes it hard to interpret, as unnormalized regret curves don't explicitly demonstrate the rate at which the algorithms learn. Additionally, to reflect the difference between reported and true beliefs, a uniformly random value within the range guaranteed by Theorem 3 is added, but this theorem doesn't ensure a uniform distribution for this difference. Finally, incorporating a comparison of the proposed algorithms with the aggregated predictions of FiveThirtyEight could potentially enrich the analysis.
**A:** In the plots attached to the "global" response to all reviewers, we have plotted the running average of regret $\frac{1}{t}\mathbb{E} \big[\max_{S\subseteq [K]:|S|=m}\sum_{\tau=1}^t f_{\tau}(S)-\sum_{\tau=1}^t f_{\tau}(S_{\tau})\big]$ for $1\leq t\leq T$. In other words, we have normalized the regret by the horizon length (please let us know if we misunderstood your point about normalized regret curves and you had a different type of normalization in mind). Also, we have included the plot for the running average regret of the aggregated prediction of FiveThirtyEight to highlight the superior performance of our proposed algorithms. Please see the "global" response to all the reviewers for more details. As we have shown in the proof of Theorem 3, at round $t\in [T]$, expert $i\in [K]$ needs access to the reports of other experts $p_{j,t}$ for $j\neq i$ to be able to compute the optimal reported belief $p_{i,t}^*$. Given that this is generally not possible, we have added a uniformly random value in the range derived in Theorem 3 to model the difference between the reported and true beliefs. Note that this uniformly random value adversely affects the performance of the FTPL algorithm because the algorithm receives the noisy reported beliefs, however, its performance is evaluated based on the true beliefs. $\bullet$ **Q:** Regarding Def 1: a) Is this a new concept being proposed, or has it previously been studied in previous works? b) The experts are long-standing in this setting, but the focus of the paper is on myopic IC. Although the authors mentioned that the analysis of FTPL could be extended to the more general setting of maximizing a conic combination of probabilities of being chosen at all subsequent rounds, is it also true for the online distorted greedy algorithm? What's the main modification/challenge of extending to non-myopic experts?
c) Would achieving IC become impossible if the definition is for all $r_t$, rather than the expectation over $\text{Bern}(b_{i,t})$? **A:** Definition 1 was first introduced by [6]. Given that the WSU algorithm of [6] is only incentive-compatible in the myopic setting, we focused on this definition of incentive compatibility to be able to use the WSU algorithm as a sub-routine in the online distorted greedy algorithm. However, as we mentioned in the paper, it is easy to show that FTPL is also approximately incentive compatible with respect to the non-myopic incentive structure (we just need to repeat the same analysis as the one in the proof of Theorem 3 multiple times for the probability of being chosen in each of the subsequent rounds). If we use FTPL as the algorithm $\mathcal{A}_i$ for $i=1,\ldots,m$ in the online distorted greedy algorithm, the online distorted greedy algorithm would be approximately incentive-compatible in the non-myopic incentive structure as well. We are not sure if we understand your last question (part c), it would be great if you could clarify it a bit further. $\bullet$ **Q:** In the "forecasting competitions" example described in Section 3.1, it seems that the loss function is not captured by the modular function studied in this paper, given it is normalized by $|S_t|$ rather than $m$. **A:** We defined the modular utility function as $f_t(S_t)=\frac{1}{m}\sum_{i\in S_t}(1-\ell_{i,t})=\frac{|S_t|}{m}-\frac{1}{m}\sum_{i\in S_t}\ell_{i,t}$ to make sure that the utility function is monotone and modular so that the online distorted greedy algorithm is applicable. If the cardinality of set $S_t$ is equal to $m$ (which is true for $S_t$ output by our proposed algorithms), the utility $f_t(S_t)$ is simply one minus the average loss of chosen experts. This is the utility function considered for the "forecasting competitions" example in Section 3.1. --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response. 
Given that most of my concerns have been addressed, I'm happy to raise my score from 5 to 6. --- Reply to Comment 1.1.1: Comment: Thanks for taking the time to go over our response, we are glad that we were able to address most of your concerns.
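As a small sanity check on the utility discussed in the rebuttal thread above, the following sketch (with hypothetical loss values) verifies that $f_t(S)=\frac{|S|}{m}-\frac{1}{m}\sum_{i\in S}\ell_{i,t}$ is monotone and modular: the marginal gain of adding expert $i$ is $(1-\ell_{i,t})/m$, which is nonnegative and independent of the current set.

```python
def utility(S, losses, m):
    # f_t(S) = |S|/m - (1/m) * sum of chosen experts' losses,
    # with losses in [0, 1] so every marginal gain is nonnegative.
    return len(S) / m - sum(losses[i] for i in S) / m

losses = [0.3, 0.8, 0.1, 0.6]  # hypothetical per-expert losses at round t
m = 2

# Marginal gain of expert 3 is the same for any base set S (modularity)...
gain_empty = utility({3}, losses, m) - utility(set(), losses, m)
gain_big = utility({0, 1, 3}, losses, m) - utility({0, 1}, losses, m)
# ...and equals (1 - loss_3)/m, which is >= 0 (monotonicity).
```

When $|S_t|=m$, as for the sets output by the proposed algorithms, $f_t(S_t)$ reduces to one minus the average loss of the chosen experts, matching the "forecasting competitions" example.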
Summary: The paper broadly focuses on the design and analysis of no-regret and incentive-compatible algorithms for the m-experts problem (pick a subset $S_t \subseteq [K]$ of $m$ experts in each round $t$ to obtain a utility value $f_t(S_t)$ which is a function of the losses $\ell_{i,t} \in [0,1]$ for all $i \in S_t$), with specific utility functions (on the space of m-sized subsets of experts) in each round which are either modular or sub-modular. * They try to improve on the inefficient baseline of using the no-regret, incentive-compatible WSU algorithm (Freeman et al. 2020) on the set of $\binom{K}{m}$ meta-experts corresponding to sets of $m$ experts. * They analyze the FTPL algorithm in the modular utility function case where the perturbation distribution is zero-mean symmetric from an exponential family. They propose a sufficient condition for this distribution (related to bounded hazard rate) which implies that FTPL would be no-regret and approximately incentive-compatible. In particular, they show that this condition is satisfied for the Laplace distribution and the symmetric hyperbolic distribution etc. * They propose and analyze an "online distorted greedy algorithm" (based on earlier work by (Harvey et al. 2020) on online submodular maximization) for the submodular utility function case. They show that the algorithm has sublinear $\alpha$-regret bounds and is $\alpha$-approximately-incentive compatible, where $\alpha$ may be $< 1$ for general submodular functions (depending on the "curvature") while $\alpha = 1$ in the modular case. This of course implies that the algorithm has no-regret in the usual sense and is fully incentive compatible in the modular case, but at the expense of more computation and a slightly worse regret bound compared to FTPL. Strengths: * The online $m$-experts problem is well-motivated in the introduction, as is the importance of incentive compatibility for the problem.
* The algorithms proposed (FTPL and online distorted greedy) are natural for the problem and build nicely on previous work. All the appropriate details are given, along with the code and experiments. * The proposed sufficiency condition for the perturbation distribution of FTPL is fairly general, and may be of independent interest in the study of incentive compatibility. * Sufficient proof-sketches are given for quite a few of the results in the main paper itself, and more detail is provided in the supplementary material. Weaknesses: * The FTPL analysis applies to a single modular utility function defined in the paper (Line 136). Of course, this utility function is fairly natural for the problem. * The necessity of "Condition 1" for getting approximate-incentive-compatibility with FTPL is not explored. * The experimental evaluation provided is fairly limited (of course, the thrust of the work is theoretical). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: First, we would like to thank the reviewer for their constructive feedback and comments. We hope our responses below may help and perhaps convince the reviewer to raise their score. We are happy to answer any further questions during the author-reviewer discussion period. $\bullet$ **Q:** The FTPL analysis applies to a single modular utility function defined in the paper (Line 136). Of course, this utility function is fairly natural for the problem. **A:** While the FTPL algorithm is only applicable for the setting with modular utility functions, we can use FTPL as the algorithm $\mathcal{A}_i$ for $i=1,\ldots,m$ in the online distorted greedy algorithm, and obtain results for the submodular utility setting as well. In this case, the online distorted greedy algorithm would be only approximately incentive-compatible. $\bullet$ **Q:** The necessity of "Condition 1" for getting approximate-incentive-compatibility with FTPL is not explored. **A:** We believe that Condition 1 is not necessary to achieve approximate incentive compatibility. For example, it is well-known that the FTPL algorithm with Gumbel noise distribution is equivalent to the Hedge algorithm. [7] showed that the Hedge algorithm is approximately incentive-compatible for the $1$-expert problem. However, as we have mentioned in lines 248-249 of the paper, Condition 1 does not hold for the Gumbel distribution. $\bullet$ **Q:** The experimental evaluation provided is fairly limited (of course, the thrust of the work is theoretical). **A:** We have provided additional plots that better demonstrate the performance of our proposed algorithms. Please see the "global" response to all the reviewers for more details. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the clarifications in this and in the global rebuttal. I do believe incorporating some of those points into the final version of the paper will certainly make it better. 
I will keep the score for now but will take the clarifications in consideration if further discussion with the AC becomes necessary. --- Reply to Comment 1.1.1: Comment: Thanks for taking the time to read our response, we will make sure to use the additional content page in the final version of the paper and incorporate the points you mentioned in your review (particularly the necessity of Condition 1).
Rebuttal 1: Rebuttal: In the attached file, we have included two additional figures plotting the running average of regret $\frac{1}{t}\mathbb{E} \big[\max_{S\subseteq [K]:|S|=m}\sum_{\tau=1}^t f_{\tau}(S)-\sum_{\tau=1}^t f_{\tau}(S_{\tau})\big]$ for $1\leq t\leq T$ for our proposed algorithms and also the FiveThirtyEight aggregated predictions in the $K=20$ and $K=100$ settings. Note that while our proposed algorithms choose $m$ predictions at each round $t\in[T]$, the FiveThirtyEight aggregated prediction $\bar{p}_t\in [0,1]$ is a single scalar value and we measure its loss as $(\bar{p}_t-r_t)^2$. As can be seen in the plots, while the regret of all three algorithms converges to zero as $t$ gets larger, both our proposed algorithms have superior performance compared to that of the FiveThirtyEight prediction. It is also noteworthy that the overall regret of both our algorithms is far better than the corresponding theoretical bounds proved in the paper. Given the large gap between the theoretical bounds and the regret curves, we decided not to include them. # Rest of Response to Reviewer xXJd: $\bullet$ **Q:** About Thm 1, could you clarify why the reduction preserves IC? **A:** Note that for the naive approach discussed in Section 3.2, we still define incentive compatibility with respect to individual experts (instead of the $K \choose m$ meta-experts). To be precise, we define $\pi_{i,t}=\sum_{S:|S|=m,i\in S}\pi_{S,t}$. For both choices of the loss function ($\ell_S=\frac{1}{m}\sum_{j\in S}\ell_{j,t}$ and $\ell_S=\prod_{j\in S}\ell_{j,t}$ where $i\in S$), the loss of the set is linear in $\ell_{i,t}$, and given the fact that the loss function is proper, we can show that in such cases, the WSU algorithm applied to $K \choose m$ meta-experts is incentive-compatible. 
For example, for the setting with modular utility function ($\ell_S=\frac{1}{m}\sum_{j\in S}\ell_{j,t}$), we can show the following: \begin{equation*} \pi_{i,t+1}=\sum_{S:|S|=m,i\in S}\pi_{S,t+1}=\pi_{i,t}(1-\frac{\eta}{m} L_{i,t})-\frac{\eta}{m} \sum_{s\neq i}(\sum_{S:|S|=m,\{i,s\}\subseteq S}\pi_{S,t})\ell_{s,t}. \end{equation*} Given that $\pi_{i,t+1}$ is linear in $L_{i,t}$, $L_{i,t}=\ell_{i,t}-\sum_{j=1}^K \pi_{j,t}\ell_{j,t}$ is linear in $\ell_{i,t}$, and the loss function is proper, we can conclude that incentive compatibility holds in this setting. Thanks for your question, we will make sure to clarify this point in the final version of the paper. $\bullet$ **Q:** Thm 3 guarantees approximate IC in the sense of $|p_{i,t}^*-b_{i,t}|$ being small. However, a more common notion of approximate IC is through the difference of utility, which, applied to this setting, is the difference in $\pi_{i,t+1}$ when $p_{i,t}^*$ and $b_{i,t}$ are reported respectively. Could the authors include a discussion about this alternative notion? **A:** As we have shown in the proof of Theorem 3, the expected utility of expert $i$ (according to her belief $b_{i,t}$) at round $t$ (i.e., her probability of being chosen at round $t+1$) is $(1-b_{i,t})F_0(-p_{i,t}^2)+b_{i,t}F_1(-(1-p_{i,t})^2)$, where $Y_0=\eta \gamma_{i,t} + L - X_0^{(t)}$, $Y_1=\eta \gamma_{i,t} + L - X_1^{(t)}$, $f_{0}(Y_0)\propto \text{exp}\Big(-\nu\big(\frac{Y_0-(L-X_0^{(t)})}{\eta}\big)\Big)$, and $f_{1}(Y_1)\propto \text{exp}\Big(-\nu\big(\frac{Y_1-(L-X_1^{(t)})}{\eta}\big)\Big)$. Also, $F_0$ and $F_1$ denote the cdfs corresponding to $f_0$ and $f_1$, respectively. Given that $F_0$ and $F_1$ are Lipschitz (because $f_0$ and $f_1$ are bounded), the difference in $\pi_{i,t+1}$ when $p_{i,t}^*$ and $b_{i,t}$ are reported can be bounded by a constant times $|p_{i,t}^*-b_{i,t}|$ and therefore, this alternative notion of incentive-compatibility is satisfied as well. 
$\bullet$ **Q:** Combining Thm 3 and Thm 5, it seems that as $B$ approaches 0, the FTPL algorithm becomes more IC and also yields a smaller regret. However, this contradicts the intuition that if the learner's decisions are entirely noise-driven (i.e., $B=0$), the regret should be high. Could the authors comment on this? **A:** In this paper, we focused on zero-mean symmetric noise distributions from the exponential family, i.e., $f(\gamma)\propto \text{exp}(-\nu(\gamma))$. Given that $|\nu'(z)|\leq B$, if $B=0$, $\nu(\cdot)$ needs to be constant which corresponds to the uniform distribution (note that in this case, the decisions are not entirely noise-driven). However, uniform distribution does not belong to the exponential family. Therefore, $B>0$. As we have mentioned in the proof of Theorem 3, we have: \begin{equation*} p_{i,t}^*=\frac{b_{i,t}}{b_{i,t}+(1-b_{i,t})A}, \end{equation*} where $A=\text{exp}\big(\nu(\frac{-(1-p_{i,t})^2-(L-X_1^{(t)})}{\eta})-\nu(\frac{-p_{i,t}^2-(L-X_0^{(t)})}{\eta})\big)$. Ideally, we want $A$ to be as close to 1 as possible because if $A=1$, $p_{i,t}^*=b_{i,t}$ and the algorithm would be incentive-compatible. In order to ensure approximate incentive compatibility, the pdf of the noise distribution $f$ needs to be such that $\frac{f(z)}{f(z+1)}$ does not grow to infinity for very large $z$. One way to enforce this condition is via a Lipschitzness assumption on $\ln f$. Condition 1 implies that $\ln f$ is $B$-Lipschitz. That is why smaller values of $B$ lead to better approximate incentive compatibility which in turn results in smaller regret bounds (given that the term $4TC$ appears in the regret bound where $C$ is the bound on the approximate incentive compatibility derived in Theorem 3). $\bullet$ **Q:** On lines 297-298, you mention that the $(1-\frac{c_f}{e})$ approximation factor is optimal. Is this claim supported by a lower bound? If so, could you point to the relevant references? 
**A:** You can find the reference below; we will make sure to include it in lines 297-298 of the paper: Sviridenko, M., Vondrák, J., \& Ward, J. (2017). Optimal approximation for submodular and supermodular optimization with bounded curvature. Mathematics of Operations Research, 42(4), 1197-1218.
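As an editorial aside on the Condition 1 discussion above: the following is a minimal numeric sketch (ours, not from the paper) of why a bound $|\nu'(z)|\leq B$ keeps the ratio $A$ controlled. It makes $\ln f$ $B$-Lipschitz, so the density ratio at unit separation never exceeds $e^{B}$, whereas a Gumbel-style potential has an unbounded log-density slope on one tail.

```python
import math

B = 1.0                      # Condition 1 bound: |nu'(z)| <= B
nu = lambda z: B * abs(z)    # Laplace-style potential, f(z) ∝ exp(-nu(z))

def ratio_A(z0, z1):
    """Density ratio f(z0)/f(z1) = exp(nu(z1) - nu(z0))."""
    return math.exp(nu(z1) - nu(z0))

# With |nu'| <= B, ln f is B-Lipschitz, so the ratio at unit
# separation never exceeds e^B, no matter how large z is:
for z in (0.0, 5.0, 50.0, 500.0):
    assert ratio_A(z, z + 1.0) <= math.exp(B) + 1e-12

# A Gumbel-style potential nu(z) = exp(-z) + z violates Condition 1:
# its log-density slope is unbounded on the left tail, so the
# log-ratio at unit separation grows without bound there.
nu_gumbel = lambda z: math.exp(-z) + z
log_ratio = nu_gumbel(-11.0) - nu_gumbel(-10.0)   # far larger than B*1
assert log_ratio > 1000.0
```

This mirrors the rebuttal's point that the Gumbel distribution (and hence Hedge-equivalent FTPL) fails Condition 1 even though it may still be approximately incentive-compatible.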
NeurIPS_2023_submissions_huggingface
2023
A Unified Solution for Privacy and Communication Efficiency in Vertical Federated Learning
Accept (poster)
Summary: This research proposes a novel cascaded hybrid optimization approach named VFL-CZOFO for VFL that combines zeroth-order (ZO) gradient and first-order (FO) gradient optimization techniques. The critical output layer of the clients uses the ZO gradient, while other parts utilize the FO gradient. This approach enhances convergence while maintaining privacy protection. Experimental results demonstrate that VFL-CZOFO achieves utility similar to that of the Gaussian Mechanism in DP privacy preservation while significantly reducing communication costs compared to state-of-the-art communication-efficient VFL frameworks. Strengths: - A novel idea and good motivation. Based on existing research about FOO and ZOO frameworks in VFL, this research focuses on privacy in transmitted parameters and efficiency on other layers to balance the load. - Complete analysis and proofs. Compared to the previous works in ZOO-VFL, such as [40], this research supplies a clear proof of DP privacy. - Moreover, the convergence proofs are also sufficient. Weaknesses: - Lack of scalability. In VFL, the dimension of the dataset often increases with the number of clients, which means more sampling is required to guarantee convergence. Thus, the current experiments with two clients are insufficient in this aspect. An evaluation of the correlation between the number of clients and the required number of samples should be added. - The potential risk of offline clients. The identical random sequence demands that the server and client maintain high synchronization. If a client drops for a while and reconnects with high delay, renegotiating the random seed may be better than using the original random ID because the completeness and timeliness of the information in an offline client are usually uncertain for the server. - Some minor mistakes. In the explanation of Figure 4 in Section 6.4, the symbol q is miswritten as p. Inconsistent naming in the paper: 
‘VAFL-ZOFO’ in Figure 5 of Appendix E.4 but ‘VAFL-CZOFO’ throughout the paper. Table 1 of Appendix A.2 shows that the formula for the output embeddings without delay should be Φ(w^t) instead of the w^t with the wide tilde. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why not add VAFL+DP to Figure 3 in Section 6.3? This work also incorporates privacy and communication efficiency. Though the paper compares training accuracy in 6.2, this comparison needs to be clearer and more intuitive. 2. Is there any basis or analysis for the application of the chain rule to a cascaded hybrid gradient? [40] indicates that the chain rule is inapplicable between two ZOE gradients due to extra variance. However, this research directly uses it between FO and ZO gradients. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: As mentioned in Section 7, the extra computational costs for the server do limit the scalability and utility of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Weakness 1 (Experiment with more clients needed): We conduct experiments with 4 and 8 clients to further assess the performance of our framework on a larger scale. The results are presented in Figures 1&2 and Tables 1&2 in the **attached PDF (in the topmost rebuttal box)**. It is essential to note that Vertical Federated Learning (VFL) typically involves a **small number of participants** (e.g., [5, 6, 10, 15, 27, 40] all used fewer than 5 clients). Unlike Horizontal Federated Learning (HFL), which involves millions of smartphones, tablets, and similar devices as clients, VFL primarily engages big companies and institutions. The added experiments will be included in the appendix of the manuscript. >Weakness 2 (Synchronization of the random seed): The synchronized random sequence does not need to be generated repeatedly (in a streaming fashion) during the entire training process. Note that in Eq. 2, the random sequence $u_{m, i}^j $ **has no superscript for the iteration time $T$**. Therefore, the participants only need to negotiate $u_{m, i}^j $ once, rather than repeatedly negotiating it during the entire training. Here we also provide one lazy method to generate the synchronized random sequence without communication, which is directly using the sample ID as the random seed to generate the random sequence. In our work, we assume that generating the same random sequence with the same seed is feasible and not very expensive. However, further exploration of the engineering aspects related to generating random sequences is **beyond the scope** of this paper. >Weakness 3 (Minor issues): Thank you very much for pointing them out! The typo $p$ in Section 6.4 has been corrected to $q$. Figures 5 and 6 in Appendix E are also typos; we corrected them to “CZOFO”. We corrected the symbol for $\Phi(w^t)$ in the notation table in Appendix A, along with some layout problems in that table. 
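The "lazy" seed-sharing method mentioned in the response to Weakness 2 can be sketched as follows (our illustration; the function name and shapes are hypothetical): each party derives the perturbation directions $u_{m,i}^j$ deterministically from the sample ID, so the sequence never needs to be communicated.

```python
import numpy as np

def perturbations(sample_id: int, q: int, dim: int) -> np.ndarray:
    """Derive q perturbation directions u^1..u^q deterministically
    from the sample ID, so any party holding the ID reproduces them
    without extra communication (illustrative helper)."""
    rng = np.random.default_rng(seed=sample_id)
    u = rng.standard_normal((q, dim))
    return u / np.linalg.norm(u, axis=1, keepdims=True)  # unit directions

# Server and client independently regenerate identical sequences:
u_server = perturbations(sample_id=42, q=5, dim=8)
u_client = perturbations(sample_id=42, q=5, dim=8)
assert np.array_equal(u_server, u_client)
```

This matches the rebuttal's point that the sequence is negotiated (or derived) once, not streamed during training.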
>Question 1 (Add VAFL+DP in Figure 3 of the manuscript): We attached the figure where VAFL+DP is added in **Figure 6 of the PDF attached**. Yes, our primary objective of section 6.2 is **comparing training efficiency and communication cost**, while carefully excluding the influence of other factors. Moreover, it is worth noting that VAFL has better convergence performance compared to "VAFL+DP," as shown in the attached figure, where the latter incurs significantly higher communication costs. Therefore, we compared the VAFL baseline with the optimal convergence rate in that section. Besides, another fundamental concern is that the algorithm “VAFL[6]+DP[1]“ presented in our paper is **not a published baseline**; rather, it is a customized algorithm designed by us that offers equivalent protection to our proposed framework. Therefore, we did not treat it as a baseline for section 6.2 when we presented our work. >Question 2 (Basis of Chain ZOE. The discussion of the chain rule in [40]): To the best of our knowledge, we are the first to propose the chain of different gradient estimators, explore its advantages in VFL, and provide a comprehensive convergence analysis. In [40], the author means that the **multiplication of two ZOE** introduces extra variance. Specifically, separately estimating the two terms in $\frac{\partial f}{\partial h} \cdot \frac{\partial h}{\partial w}$ with two ZOE will lead to extra variance. Therefore, they directly estimate $ \frac{\partial f}{\partial w}$ with ZOE to avoid this extra variance. In our framework, we also avoid the multiplication of two ZOE. We solely estimate $\frac{\partial f}{\partial h} $ with ZOE and calculate $\frac{\partial h}{\partial w}$ with FO gradient. As a result, no extra variance is introduced in our framework. >Limitation (Extra computational cost on the server): We acknowledge this limitation. However, we have several justifications that this limitation is acceptable in practice. 
First, the number of clients involved in VFL is typically **quite small**, therefore the increase in the number of clients is controllable. Second, it is worth noting that **the bottleneck of VFL is communication cost** rather than computation cost [5, 27]. Local computations consume significantly less time compared to communication with other participants. Third, the server as the initiator and beneficiary **typically possesses more computational resources**, therefore using the higher computation cost of the server to trade off communication cost/privacy is favorable in the strategy of building VFL. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. Since the paper focuses on VFL, you should clarify how data samples are partitioned for different clients. In the paper, there is only one statement, "The datasets were vertically partitioned among all participants in our experiments. Each client held a portion of the features of each sample." Please give a more detailed explanation about how an image sample is partitioned. --- Reply to Comment 1.1.1: Title: Detailed Explanation of the Data Partitioning Comment: Thank you sincerely for your response and valuable suggestions. We will incorporate additional details regarding the dataset partitioning to enhance the manuscript. Regarding the MNIST experiment, we flattened the image and then equally distributed the dimensions among each client. Specifically, in the context of the two-client experiment detailed in Section 6, the first client received the **upper half** of each image, while the second client was allocated the **lower half**. For the experiment involving four clients in the attached PDF, the first client received the **uppermost 1/4** of each image; the second client obtained the segment spanning from the **upper 1/4 to 1/2**; the third client from the **lower 1/2 to 3/4**; finally, the fourth client was assigned the **bottommost 1/4**. 
A similar split was implemented for the experiment involving eight clients. (This dataset has also been employed in other VFL research [6, 27, 40], where the features are equally distributed among clients.) Regarding the CIFAR10 Experiment in Appendix E.3, we split the image by the last dimension. Therefore, the first client was assigned the **left half**, while the second client received the **right half** of each image. (This dataset has also been employed in other VFL research [5, 6, 10], where the features are equally distributed among clients.) Regarding the GiveMeSomeCredit experiment in Appendix E.4, each sample comprises 10 distinct features. The first client was assigned **the first 5 features** for each sample, while the second client received **the remaining 5 features**. (This dataset has also been employed in other VFL research [13, 40], where the features are equally distributed among clients.) Regarding the a9a dataset experiment in Appendix E.5, each sample comprises 1 label and 123 features. The first client was assigned **the first 62 features** of each sample, while the second client received the **remaining 61 features**. (This dataset has also been employed in other VFL research, [40] and Zhang et al. 2022) --- ### Extra Comments on Data Partitioning in VFL: It is worth noting that certain VFL studies with **specific objectives** may have employed a different data partitioning method, e.g., distributing the features via random selection when studying Graph Neural Networks (Ni et al., 2021), distributing the features unevenly when studying feature unbalanced (Zhang et al. 2022), or using multimodality datasets when studying heterogeneity among parties (Castiglia et al. 2022). ### References *Castiglia, Timothy, Shiqiang Wang, and Stacy Patterson. "Flexible vertical federated learning with heterogeneous parties." arXiv preprint arXiv:2208.12672 (2022).* *Ni, Xiang, et al. "A vertical federated learning framework for graph convolutional network." 
arXiv preprint arXiv:2106.11593 (2021).* *Zhang, Jie, et al. "Adaptive vertical federated learning on unbalanced features." IEEE Transactions on Parallel and Distributed Systems 33.12 (2022): 4006-4018.*
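The equal feature splits described above can be sketched as follows (our illustration; the helper name is ours). Note that `np.array_split` reproduces both the even MNIST split and the 62/61 a9a division mentioned in the reply.

```python
import numpy as np

def vertical_partition(features: np.ndarray, num_clients: int):
    """Split the feature dimension of each sample (near-)evenly
    across clients, as in the setups described above."""
    return np.array_split(features, num_clients, axis=1)

# A batch of 4 flattened 28x28 images split among 4 clients:
X = np.arange(4 * 784).reshape(4, 784)
parts = vertical_partition(X, 4)
assert [p.shape[1] for p in parts] == [196, 196, 196, 196]
assert np.array_equal(parts[0], X[:, :196])  # client 1: uppermost 1/4

# The a9a split (123 features, 2 clients) yields the 62/61 division:
a9a = np.zeros((1, 123))
assert [p.shape[1] for p in vertical_partition(a9a, 2)] == [62, 61]
```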
Summary: The paper proposes a privacy- and communication-efficient vertical FL training framework utilizing zeroth-order optimization (ZOO). Since the convergence of ZOO-based VFL is significantly slower than standard gradient-based VFL, the paper proposes to implement ZOO only on the cut layer. Strengths: The paper provides a strong motivation and a realistic setup of VFL. It also provides a detailed comparison with the SOTA VFL frameworks in Section 2, including convergence rates. The method of applying different optimizations to different layers is very interesting. Weaknesses: - The method proposes to use a compressor for communication since the ZOO will generate extra communication cost, but does not analyse which compression method is best for the approach. - It seems from Fig 2 that the method gives lower performance compared to Gaussian. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Why choose the uniform scale compressor in particular for the experiment? Have you compared it with other compressors? - The paper proposes an asynchronous VFL, but how does the asynchrony work? To finish the forward prop, the model will need all the intermediate outputs from all clients. I'm surprised to see that the synchronous version (Syn-VFL-ZO) performs much worse than the asynchronous version (the paper's version). Is there any reason why? Also, the synchronous version doesn't seem to use the Avg-RandGradEst. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Only 2 clients for experiments. Especially in this case, ZOO can potentially lower the performance when the number of clients increases. - Only works on image datasets, where each client holds half of the image. 
This particular experimental setup is not realistic in real-life scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >W1&Q1 (The choice of compressor): Yes, applying different compressors may lead to different communication costs. We will add an experiment on the Top-K compressor in the appendix to enrich the discussion. However, comparing the performance of different compressors is **not the focus** of our paper. The goal for this part is to show that compression, as a common practice in distributed learning, is compatible with our framework. Therefore, we theoretically proved the convergence of our method with **any compression method**, and only used a basic compressor for demonstration in the experiment. >W2 (Convergence of Figure 2): When we did the experiment, we only applied the basic zeroth-order estimator (ZOE) because our primary objective was to demonstrate comparable performance with VAFL+DP while significantly reducing the communication cost. The basic ZOE (Eq. 2) has a large forward bias because the perturbation is only on the function's forward side. Applying a slightly “advanced” centralized version of ZOE, we can reach higher test accuracy and better convergence. More specifically, we convert Eq.2 to its centralized version which reduces the bias by sampling from both sides of the function: $ \frac{\phi(d_{h_m}) }{2 q \mu_m } \sum_{j=1}^{q} [{f_i(h_{m, i} + \mu_m u_{m, i}^j ) - f_i ({h_{m, i}}- \mu_m u_{m, i}^j } )] u_{m, i}^j$ With this centralized ZOE, we achieve a smoother convergence, similar privacy budget ($\epsilon=95$), and better test accuracy (96.32%) than the corresponding VAFL+DP of (95.94%). We attached the result in **Figure 4 in the attached PDF (in the topmost rebuttal box)**. >Q2 (How does asynchronous work? Why is Syn-VFL worse than Asyn-VFL in Figure 3? Why ZOO-VFL does not use Avg-RandGradEst?): Regarding “how does asynchronous work”: Yes, the server requires embeddings from all clients. Therefore, to support asynchronous updates, the server maintains a table of embeddings from all clients. 
During each round, only the embedding of the activated client undergoes an update. This specific detail has been explained in lines #238-#240 of the manuscript. Furthermore, a comprehensive explanation of Asynchronous VFL can be found in Appendix D.1 of the manuscript. Regarding “Syn worse than Asyn”: the primary reason is that the fundamental difference between Asyn-VFL and Syn-VFL causes some ambiguity in measuring the convergence. To address this ambiguity, we will add the definition of the x-axis in the manuscript. We explain this ambiguity here: In some other works, the "communication round (**CR**)" may be used to measure the convergence of Asyn-VFL and Syn-VFL (**Figure 5 in the attached PDF** shows "Syn outperforms Asyn" with the x-axis changed to CR). In one CR, Asyn-VFL updates only **one client** and the server, while Syn-VFL updates **all clients** and the server. This difference in the definition of CR is the reason behind the observed superiority of "Syn" over "Asyn" in this particular case. It is worth noting that, in each communication round, the communication cost for Syn-VFL is greater than that of Asyn-VFL. In our study, we extend the concept of an **“epoch”** to Asyn-VFL, to provide an intuitive understanding comparable to traditional ML. For Asyn-VFL, the number of CRs in one “epoch” precisely matches the number of CRs required for all clients to traverse the dataset in an ideal no-delay case. As per this definition, the server in Asyn-VFL undergoes more updates than in the corresponding Syn-VFL within one "epoch", which is why "Syn" appears worse than "Asyn". Note that the communication cost for one “epoch” is the same for Asyn-VFL and the corresponding Syn-VFL, which brings some convenience in the discussion of communication cost. Regarding "why ZOO-VFL does not use Avg-RandGradEst": The reason is that applying Avg-RandGradEst in ZOO-VFL would **significantly increase its forward communication cost**. 
To implement Avg-RandGradEst, these frameworks would need to generate $q$ perturbations on its local model and send all of these embeddings to the server, i. e. forwarding $q$ different $h(x+\mu u)$ with different $u$. And this large cost is inevitable. Besides, a more basic consideration is that the original work of ZOO-VFL [40] did not apply Avg-RandGradEst, and we followed the implementation of this baseline. Our method effectively circumvents the expense associated with multiple sampling because the perturbation is on the layer of clients’ output, i.e. $h(x)+\mu u$, with $q$ different $u$. This random sequence of $u$ can be shared via sharing the random seed, and only need to be generated once. Therefore, in each iteration, our framework only needs to send $h(x)$ instead of $ h(x+\mu u)$, thereby avoiding the communication cost incurred by multiple sampling of Avg-RandGradEst. >L1 (Experiment on more clients needed): We conducted experiments with 4 and 8 clients to further assess the performance of our framework on a larger scale. The results are presented in the Figure 1&2, Table 1&2 in the **PDF**. It is essential to note that Vertical Federated Learning (VFL) typically involves a **small number of participants** (e.g. [5, 6, 10, 15, 27, 40] all used less than 5 clients). Unlike Horizontal Federated Learning (HFL), which involves millions of smartphones, tablets, and similar devices as clients, VFL primarily engages big companies and institutions. The added experiments will be included in the appendix of the manuscript. >L2 (Need experiment in real-life scenarios): We not only conducted experiments on the mainstream CV dataset (MNIST and CIFAR10) but also on the **real-world dataset** for VFL in the Appendix of the manuscript. Specifically, the experiments conducted on GiveMeSomeCredit and Adult(a9a) datasets can be found in Appendix E.4 and E.5. 
The results from these experiments demonstrate that we achieve comparable test accuracy to other baseline approaches, while significantly reducing communication costs. --- Rebuttal Comment 1.1: Comment: I extend my gratitude to the authors for their insightful rebuttal and for providing the accompanying new results. The explanation is clear. --- Reply to Comment 1.1.1: Title: Thank you and further results on different compressors. Comment: Thank you sincerely for your response and valuable suggestions. We also conducted further experiments incorporating other compressors as you suggested in W1. The table below shows the ablation study of applying Top-K and Random-K (Stich et al. 2018, Shi et al. 2019) on the backward message of VFL-CZOFO. This experiment will be added in the Appendix of the manuscript. In terms of convergence rate, Top-K (K=10) shows similar convergence to Uniform (8-bit), while Random-K (K=10) converges notably slower. However, the sparsification technique has advantages in enhancing test accuracy, possibly due to preventing overfitting. | | Compressor on Backward Message | Test Accuracy | Backward Cost (95%) | Backward Cost (total) | |-----------|--------------------------------|------------------|---------------------|-----------------------| | VFL-CZOFO | None | 95.30 $\pm$ 0.25 | 19 MB | 75 MB | | | Uniform (8bit) | 94.58 $\pm$ 0.21 | 5 MB | 19 MB | | | Top-K (K=10) | 96.35 $\pm$ 0.24 | 5 MB | 15 MB | | | Random K (K=10) | 95.56 $\pm$ 0.23 | 8 MB | 15 MB | ### Reference *Stich, Sebastian U., Jean-Baptiste Cordonnier, and Martin Jaggi. "Sparsified SGD with memory." Advances in Neural Information Processing Systems 31 (2018).* *Shi, Shaohuai, et al. "Understanding top-k sparsification in distributed deep learning." arXiv preprint arXiv:1911.08772 (2019).*
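The Top-K and Random-K sparsifiers compared in the table above can be sketched as follows (our illustration of the standard operators from Stich et al. 2018 and Shi et al. 2019; function names are ours):

```python
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def random_k(v: np.ndarray, k: int, rng=None) -> np.ndarray:
    """Keep k uniformly random entries of v, zero the rest."""
    rng = rng if rng is not None else np.random.default_rng(0)
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx]
    return out

g = np.array([0.1, -3.0, 0.05, 2.0, -0.2, 0.5])
c = top_k(g, 2)
assert np.count_nonzero(c) == 2
assert set(np.flatnonzero(c).tolist()) == {1, 3}  # the two largest magnitudes
```

Only the surviving values and their indices need to be transmitted, which is the source of the backward-cost reduction in the table.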
Summary: The paper aims to solve two critical problems of Vertical Federated Learning (VFL): the convergence rate of ZOO-based VFL and the privacy guarantee of ZOO-based VFL. This study provides a simple solution of using different optimizations, i.e., first-order optimization (FOO) and zeroth-order optimization (ZOO), in the VFL framework, and theoretically explains the privacy guarantee with differential privacy. Experiments are conducted with regard to differential privacy, training efficiency, and communication cost, which show a significant improvement in communication cost compared with SOTA and baselines. Strengths: This study discusses the training efficiency and privacy of applying ZOO to VFL, which is a significant problem in this area. The idea of cascading different optimizations is novel and interesting. The motivation of balancing the advantages of FOO and ZOO is sound. I appreciate that it proposes a simple solution that is easy to understand and will inspire more follow-up studies in this area. The solution is validated from different dimensions and thus I think the effectiveness of the solution is reliable. The paper also makes theoretical contributions: Theorem 4.1 explains the privacy of ZOO, and Theorem 5.2 guarantees training efficiency. The experiments are solid from my perspective, comprehensively covering the essential aspects of the algorithm, including the privacy budget of DP, the learning curve, and the communication cost at each stage. The method has high performance with a substantial improvement in communication efficiency, giving this work great potential to become a new SOTA baseline. Weaknesses: According to Table 1, solely applying ZOO on the client layer only reduces the backward message size from $d_h B$ to $q$ compared with FOO-based VFL. However, the forward communication size was not improved. 
I agree with the improvement of your work in improving the convergence rate; however, a large reduction in the total communication cost could be due to the compression. Could you elaborate on the respective contributions of compression and ZOO to reducing the total communication cost? As you mention in #261-#263, the fundamental difference between ZOO and the Gaussian Mechanism is that your privacy budgets ($\epsilon, \delta$) are implicitly controlled by the parameters of ZOO. Will this limit the application of your solution when a certain privacy budget is required? Technical Quality: 3 good Clarity: 3 good Questions for Authors: It is less clear to me whether the reduction of the total communication cost comes from "applying ZOO" or "applying compression". The current ablation study does not show the ratio between the communication-cost reductions from "applying ZOO" and from "applying compression". How does your approach solve the problem if a certain privacy budget is required, since the privacy budget of your framework is not directly controlled by the magnitude of the noise? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the limitation was discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Weakness 1 & Question 1 (The contribution of ZOO and compression in reducing communication cost): Yes, both methods contribute to enhancing communication efficiency. However, typically ZOO makes a more significant contribution to reducing the communication cost. Take the experiment presented in Table 2 as an example: the application of ZOO results in a reduction of backward message cost **from 3073MB to 75MB**, achieving a total reduction of **2998MB**. Subsequently, by using compression, the forward message size diminishes from 3073MB to 769MB, and backward costs additionally reduce from 75MB to 19MB, leading to a total reduction of **2360MB**. For this experiment, the ratio of the absolute contributions of ZOO and compression is around **1.27:1**. >Weakness 2 & Question 2 (How to achieve a certain privacy budget): Yes, that is the fundamental difference between our method and the Gaussian Mechanism. If a certain privacy budget ($\epsilon, \delta$) is required, we need to run the parameter tuning process for the ZOO to achieve that privacy budget. For example, if the privacy budget is too large, we need to reduce the sampling time $q$ to make the gradient estimation less accurate, or reduce the number of iterations $T$ to make the attacker accumulate less information (early stopping). We acknowledge that this tuning process is less convenient than the Gaussian Mechanism, where the corresponding magnitude of the Gaussian noise can be directly calculated. However, with our scheme, we achieve the DP guarantee by reducing the amount of communicated information, while the Gaussian Mechanism achieves the DP guarantee by adding noise. >Other notes We also included **additional experiments in the attached PDF** (located in the topmost rebuttal section), containing experiments involving 4 and 8 clients. --- Rebuttal Comment 1.1: Comment: Thank you very much for your reply and the additional experiment. 
Apart from those experiments, I am also interested in the privacy-utility trade-off of CZOFO in your discussion with other reviewers. Could you demonstrate results for a set of different privacy budgets and their corresponding test accuracies? This would be a good experiment to demonstrate the privacy-utility trade-off of your method. --- Reply to Comment 1.1.1: Title: Thank you and further experiment result Comment: Thank you sincerely for your response and valuable suggestions. Following your suggestion, we conducted further experiments on the privacy-utility trade-off across a comprehensive range of privacy budgets. Through this experiment, we demonstrate our framework's versatility in achieving varying trade-offs between privacy budget and test accuracy. The corresponding results are presented in the table provided below. This result will be included in the appendix of the manuscript. It is worth noting that the last column demonstrates that VFL-CZOFO can achieve test accuracy comparable to VAFL, given the corresponding privacy-utility trade-off. **Table 1: Privacy Budget and Corresponding Test Accuracy**

| $\bar{\epsilon}=$ | 12 | 20 | 35 | 95 | >>100 |
|-------------------|------------------|------------------|------------------|------------------|------------------|
| VAFL+DP | 72.34 $\pm$ 0.59 | 84.17 $\pm$ 2.83 | 93.18 $\pm$ 0.52 | 95.94 $\pm$ 0.29 | 97.36 $\pm$ 0.14 |
| VFL-CZOFO | 75.92 $\pm$ 3.51 | 85.86 $\pm$ 2.78 | 93.34 $\pm$ 0.15 | 96.32 $\pm$ 0.22 | 97.35 $\pm$ 0.05 |

--- The following section provides the experimental details: First, to ensure consistency with the experiments in the manuscript, we did not use any compressor on VFL-CZOFO. Besides, we further employed the centralized zeroth-order estimator to enhance the convergence stability, i.e., converting the right-hand side of Eq.2 into $ \frac{\phi(d_{h_m}) }{2 q \mu_m } \sum_{j=1}^{q} [{f_i(h_{m, i} + \mu_m u_{m, i}^j ) - f_i ({h_{m, i}}- \mu_m u_{m, i}^j } )] u_{m, i}^j $. 
The $\bar{\delta}$ is set to 0.01 for all trials. To achieve different accumulated privacy budgets $\bar{\epsilon}$, we reduce the sampling time $q$ of Avg-RandGradEst and the number of iterations $T$ of VFL-CZOFO, as mentioned in our response to Weakness 2 & Question 2.
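The centralized (two-point) estimator quoted above can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' code: it assumes the directions $u$ are sampled uniformly from the unit sphere (so that $\phi(d)=d$), and the quadratic loss `f` is only a stand-in for the model's actual loss:

```python
import numpy as np

def central_zoo_grad(f, h, mu=1e-3, q=5000, rng=None):
    """Two-point random gradient estimate of f at h:
    (d / (2*q*mu)) * sum_j [f(h + mu*u_j) - f(h - mu*u_j)] * u_j,
    with directions u_j uniform on the unit sphere (so phi(d) = d)."""
    rng = np.random.default_rng() if rng is None else rng
    d = h.size
    g = np.zeros_like(h)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)            # project to the unit sphere
        g += (f(h + mu * u) - f(h - mu * u)) * u
    return d / (2 * q * mu) * g

# Sanity check on f(x) = x.x, whose true gradient is 2x.
f = lambda x: float(x @ x)
h = np.array([1.0, -2.0, 0.5])
g_hat = central_zoo_grad(f, h, rng=np.random.default_rng(0))
print(np.allclose(g_hat, 2 * h, atol=0.4))
```

For a quadratic the two-point difference is exact in $\mu$, so all remaining error comes from the Monte Carlo average over the $q$ sampled directions; this is the estimation inaccuracy that, per the rebuttal, shrinks as $q$ grows.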
Summary: This paper presents a pioneering Zero-Order Optimization (ZOO)-based VFL algorithm that effectively ensures privacy preservation while significantly enhancing communication efficiency. Regarding privacy, the paper demonstrates theoretically that ZOO can inherently offer $(\epsilon,\delta)$-differential privacy, providing a strong foundation for understanding the privacy preservation achieved by ZOO. Concerning communication efficiency, the method ingeniously combines first-order and zero-order gradient optimization, resulting in remarkable improvements in training and communication efficiency. Moreover, the paper rigorously proves the convergence of the proposed algorithm. Extensive experiments are conducted, further affirming the superiority of the method. Strengths: 1. The paper presents a novel VFL method that applies different optimization methods to different layers of the global model in each iteration, significantly improving the convergence rate of ZOO-based VFL while preserving privacy. 2. The theoretical proof that ZOO can inherently offer $(\epsilon,\delta)$-differential privacy provides a strong foundation for understanding the privacy preservation achieved by ZOO in VFL. 3. This paper conducts extensive experiments, offering concrete evidence of its effectiveness. Weaknesses: 1. The experiments are currently conducted with only two clients, but further experiments with varying numbers of clients are necessary. 2. The article divides experiments on different datasets into multiple chapters for presentation, which hinders a comprehensive display of the model. 3. The computation cost of the server is extremely high, and it escalates with an increase in the number of clients due to the average random gradient estimation for each client. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Does the number of clients significantly affect the computational efficiency? 2. 
In Appendix E.4 and E.5, Figure 5 and Figure 6 present VFL-ZOFO, which is different from VFL-CZOFO. Do they have any differences? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1. The computation cost of the server is extremely high. 2. This method is not suitable for many client situations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Weakness 1 & Question 1 & Limitation 2 (Experiment on more clients needed): We conduct experiments with 4 and 8 clients to further assess the performance of our framework on a larger scale. The results are presented in Figures 1&2 and Tables 1&2 in the **attached PDF (in the topmost rebuttal box)**. It is essential to note that Vertical Federated Learning (VFL) typically involves a **small number of participants** (e.g., [5, 6, 10, 15, 27, 40] all used fewer than 5 clients). Unlike Horizontal Federated Learning (HFL), which involves millions of smartphones, tablets, and similar devices as clients, VFL primarily engages big companies and institutions. The added experiments will be included in the appendix of the manuscript. >Weakness 2 (Presentation of the experiment): We aimed to conduct a comprehensive evaluation of our algorithm, and putting the experiments for all datasets together would have been the ideal choice. However, because of space limitations, we had to put the experiments on the other datasets in the appendix. We consider each dataset as a distinct scenario of applying VFL; therefore, we separated them into different chapters for clarity. >Weakness 3 & Limitation 1 (Computational cost of the server): We acknowledge this limitation, and we also include an experiment on the computational cost of the server with more clients in **Table 4 of the PDF attached** (the experiment setting is the same as Appendix E.2, but with a varying number of clients). However, we have several justifications for why this limitation is acceptable in practice. First, the number of clients involved in VFL is typically **quite small**; therefore, the increase in the number of clients is controllable. Second, it is worth noting that **the bottleneck of VFL is communication cost** rather than computation cost [5, 27]. Local computations consume significantly less time compared to communication with other participants. 
Third, the server, as the initiator and beneficiary, **typically possesses more computational resources**; therefore, trading the server's higher computation cost for lower communication cost and better privacy is favorable when building VFL. >Q2 (Figures 5&6 in Appendix E): That is a typo in Figures 5 and 6 in Appendix E; both should be “CZOFO”. Thank you very much for pointing that out! --- Rebuttal Comment 1.1: Comment: Thanks for your response. I'd like to keep the initial rating. --- Reply to Comment 1.1.1: Title: Thank you. Comment: Thank you sincerely for committing your time to evaluating the manuscript and for the valuable suggestions that help us improve its quality.
Rebuttal 1: Rebuttal: **We would like to thank the reviewers for dedicating their time and effort to assess the manuscript.** **Attached is the PDF for the figures and tables.** Pdf: /pdf/478498f259b2a4437fce9c9e7d80e0426fe01cf1.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces a hybrid Federated Learning (FL) framework, named VFL-CZOFO, which aims to provide intrinsic privacy protection while also significantly improving the convergence rate when compared to existing ZOO-based frameworks. Strengths: 1. Faster Convergence: The paper demonstrates that VFL-CZOFO achieves faster convergence compared to other ZOO-based frameworks. 2. Theoretical Solidity: The paper is theoretically sound, with a proof of convergence, and guarantees $(\epsilon, \delta)$-DP (differential privacy). Weaknesses: The major concern lies in the experimental performance of VFL-CZOFO. 1. The experiments only involve 2 clients, which are considered too small for a conclusive evaluation. The proposed algorithm's performance in a large-scale federated learning system remains unclear. 2. The usage of only 2 datasets and lack of clarity regarding heterogeneity raise concerns about the model's generalizability and applicability in diverse scenarios. 3. The choice of epsilon (around 90) for privacy evaluation seems excessively high, raising doubts about the actual privacy protection offered. Additionally, the training doesn’t seem to converge well at epoch 50 in Fig. 2, potentially affecting the total privacy budget. 4. Table 3 shows that the proposed method sacrifices accuracy compared to the FOO-based method while improving communication cost. This trade-off is questionable, as accuracy is generally more critical in most cases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: VFL-CZOFO is designed for better privacy protection while reducing communication costs. In the case where VFL-CZOFO guarantees a significantly smaller privacy budget than VAFL+Gaussian, what will the accuracy be? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper introduces a framework that provides intrinsic privacy protection while also improving communication costs. However, there are weaknesses and questions listed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >Weakness 1 (Experiment on more clients needed): We conduct experiments with 4 and 8 clients to further assess the performance of our framework on a larger scale. The results are presented in Figures 1&2 and Tables 1&2 in the **attached PDF (in the topmost rebuttal box)**. It is essential to note that Vertical Federated Learning (VFL) typically involves a **small number of participants** (e.g., [5, 6, 10, 15, 27, 40] all used fewer than 5 clients). Unlike Horizontal Federated Learning (HFL), which involves millions of smartphones, tablets, and similar devices as clients, VFL primarily engages big companies and institutions. The added experiments will be included in the appendix of the manuscript. >Weakness 2 (The model’s generalizability in diverse scenarios): We not only conducted experiments on the mainstream CV datasets (MNIST and CIFAR10) but also on **real-world datasets** for VFL. Specifically, the experiments conducted on the GiveMeSomeCredit and Adult (a9a) datasets can be found in Appendix E.4 and E.5 of the manuscript. The results from these experiments demonstrate that we achieve comparable test accuracy to other baseline approaches, while significantly reducing communication costs. Besides, it is worth noting that there is no data distribution heterogeneity in VFL because, in VFL, each client shares different features of the **same sample**. Other types of heterogeneity in VFL, such as system heterogeneity and feature imbalance, are beyond the scope of our paper; including them would obscure the clarity of our theoretical and experimental results. >Weakness 3 (Privacy budget seems high. Deviation from convergence at epoch 50 in Figure 2.): The $\epsilon$ in that figure is the accumulated privacy budget for the entire training, demonstrating the guarantee in the **worst case** in VFL. 
The worst case means that the attacker can acquire all the messages from the server by colluding with all clients during the entire training procedure of 50 epochs. Besides, the sampling time of 100 is very large in this experiment, and its marginal gain for convergence is small. Therefore, the privacy budget demonstrated here is larger than what would occur in practice. To acquire a smaller $\epsilon$, we can use fewer iterations to reduce the accumulated information, or use smaller sampling times to estimate the gradient less accurately. Regarding the “epoch 50 in Fig. 2”: the basic zeroth-order estimator (ZOE) in Eq.2 has a large forward bias. This is possibly the cause of the unstable convergence around epoch 50 in Figure 2. Applying a slightly “advanced” centralized version of ZOE, we can get a more stable convergence and higher test accuracy, with a trade-off of extra computational cost on the server. More specifically, we convert Eq.2 to its centralized version: $ \frac{\phi(d_{h_m}) }{2 q \mu_m } \sum_{j=1}^{q} [{f_i(h_{m, i} + \mu_m u_{m, i}^j ) - f_i ({h_{m, i}}- \mu_m u_{m, i}^j } )] u_{m, i}^j$ We add an experiment that combines the above techniques in **Figure 3 of the PDF attached**. Applying early stopping, smaller sampling times, and centralized ZOE, we demonstrate a smaller accumulated privacy budget of $\epsilon=32$, without significantly influencing the convergence. The privacy budget can be further reduced with more techniques. However, it is worth noting that there is a privacy-utility trade-off in DP: a small privacy budget will cause low test accuracy [31, 35]. For example, in [35], at $\epsilon = 100$, they achieve around 90% test accuracy, while at $\epsilon=50$ the accuracy drops to around 68%. >Weakness 4 (Sacrifice in test accuracy): Our framework also has the capability to achieve the same test accuracy as the corresponding VAFL baseline (not sacrificing the test accuracy), applying a corresponding privacy-utility trade-off. 
Note that the first three baselines sacrifice privacy and get higher test accuracy (“test-accuracy-focused trade-off”), while the last three baselines strike a balance between privacy and utility (“balanced trade-off”). In Table 3, we only demonstrated the “balanced trade-off” for our method. However, our framework can also demonstrate the “test-accuracy-focused trade-off”, achieving test accuracy comparable to the corresponding asynchronous VFL baseline (VAFL [6]). The cost is a larger privacy budget and increased communication and computation costs. The experiment result is attached in **Table 3 of the PDF attached**. Besides, regarding the “balanced trade-off”, the basic zeroth-order estimator (ZOE) we used has a large forward bias. To correct this, we apply a slightly “advanced” centralized version of ZOE so that we can reach higher test accuracy and better convergence. When we did the experiment in the manuscript, we only applied the basic ZOE because our primary goal was reducing the communication cost; therefore, we demonstrated a comparable test accuracy with the target baseline in Table 3. With this centralized ZOE, we can achieve a smoother convergence, a similar privacy budget ($\epsilon=95$), and better test accuracy (96.32%) than the corresponding VAFL+DP (95.94%). (Experiment result is shown in **Table 3 of the PDF attached**.) >Question (What would be the accuracy if VFL-CZOFO has a smaller privacy budget than VAFL+DP): The privacy-utility trade-off exists within all DP algorithms. If we achieve a smaller privacy budget than VAFL+Gaussian, it means that we apply a trade-off that places a greater emphasis on privacy, leading to lower utility and lower test accuracy. However, the advantage of our framework is that we provide protection comparable to the DP mechanism while reducing the communication cost simultaneously. 
--- Rebuttal Comment 1.1: Title: Thank you, and we provide further experiments on Weaknesses 3&4 Comment: Thank you sincerely for committing your time and effort to evaluating the manuscript. We would be happy to offer additional clarification if needed. We conducted further experiments on the privacy-utility trade-off across a comprehensive range of privacy budgets. Through this experiment, we demonstrate our framework's versatility in achieving varying trade-offs between privacy budget and test accuracy. The corresponding results are presented in the table provided below. This result will be included in the appendix of the manuscript. It is worth noting that the last column demonstrates that VFL-CZOFO can achieve test accuracy comparable to VAFL, given the corresponding privacy-utility trade-off. **Table 1: Privacy Budget and Corresponding Test Accuracy**

| $\bar{\epsilon}=$ | 12 | 20 | 35 | 95 | >>100 |
|-------------------|------------------|------------------|------------------|------------------|------------------|
| VAFL+DP | 72.34 $\pm$ 0.59 | 84.17 $\pm$ 2.83 | 93.18 $\pm$ 0.52 | 95.94 $\pm$ 0.29 | 97.36 $\pm$ 0.14 |
| VFL-CZOFO | 75.92 $\pm$ 3.51 | 85.86 $\pm$ 2.78 | 93.34 $\pm$ 0.15 | 96.32 $\pm$ 0.22 | 97.35 $\pm$ 0.05 |

--- The following section provides the experimental details: First, to ensure consistency with the experiments in the manuscript, we did not use any compressor on VFL-CZOFO. Besides, we further employed the centralized zeroth-order estimator to enhance the convergence stability, i.e., converting Eq.2 into $ \frac{\phi(d_{h_m}) }{2 q \mu_m } \sum_{j=1}^{q} [{f_i(h_{m, i} + \mu_m u_{m, i}^j ) - f_i ({h_{m, i}}- \mu_m u_{m, i}^j } )] u_{m, i}^j$. The $\bar{\delta}$ is set to 0.01 for all trials. To achieve different accumulated privacy budgets $\bar{\epsilon}$, we reduce the sampling time $q$ of Avg-RandGradEst and the number of iterations $T$ of VFL-CZOFO, as mentioned in our response to Weakness 3. 
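The knobs mentioned in this thread (fewer iterations $T$, smaller sampling time $q$) shrink the accumulated budget because per-iteration DP guarantees compose over training. As a hedged illustration only, the sketch below uses basic (linear) composition, which is an assumption on our part; the paper may use a tighter accountant:

```python
def accumulated_epsilon(eps_per_iter, T):
    """Basic DP composition: T iterations of an eps_per_iter-DP release
    are jointly (T * eps_per_iter)-DP. Advanced composition or RDP
    accountants give tighter (smaller) totals."""
    return T * eps_per_iter

# Early stopping at half the iterations halves the accumulated budget.
print(accumulated_epsilon(0.5, 100))  # prints 50.0
print(accumulated_epsilon(0.5, 50))   # prints 25.0
```

Reducing $q$ acts on `eps_per_iter` itself (a less accurate gradient estimate leaks less per round), while early stopping acts on `T`; both shrink the product.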
--- Rebuttal 2: Comment: Thank you very much for the clarifications and comments. I will raise my score to 5.
Two Heads are Better Than One: A Simple Exploration Framework for Efficient Multi-Agent Reinforcement Learning
Accept (poster)
Summary: This paper proposes a novel and compute-efficient exploration method, COIN, which incorporates curiosity-based and influence-based exploration. For influence-based exploration, COIN quantifies how each agent’s actions affect the other agents and uses the influence degree as an intrinsic reward to promote exploration. For curiosity-based exploration, COIN uses the prediction errors of each agent’s local observations and the global states to measure the model’s uncertainty about the environment. The experiments on three benchmarks, StarCraft II, MACO, and Google Football, show the superiority and effectiveness of COIN. Strengths: (1) This work designs a MARL framework which combines the two main kinds of exploration methods and achieves sufficient and efficient exploration without incurring much computational cost. (2) The exploration method proposed in this paper can be easily applied to other multi-agent methods. (3) The overall idea of the paper is clear and easy to understand. Weaknesses: (1) To be honest, it lacks a certain degree of novelty. As is well known, the methods used in this paper, like QMIX, MI, and prediction error, are all previous work; the paper mainly combines them. (2) It should explain how to extend the method in this paper to other MARL methods. (3) The existing experimental results seem insufficient, mainly in the following aspects: many result curves do not show the final convergence state, but rather an intermediate state; and there is a lack of quantitative forms of presentation, such as reward values or case studies. (4) The main body of this paper is clear to understand, but there is space for improvement. I defer some of my issues on the appendix to "Questions". Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) In Equ.(1) and (2), the trajectory $\tau_k^t$ should include states and actions. So what is the meaning of $p(a_k^t|\tau_k^t)$? (2) A higher influence-based intrinsic reward indicates that the action is more encouraged. 
Thus, in Equ.(8), why is “-” used instead of “+”? (3) Comparing Equ.(7) with Fig.2, why are their inputs different? (4) The description of the hyperparameters used in the experiments cannot be found. (5) In Fig.5(a), why is the performance of QMIX better than QMIX+Inf and QMIX+Cur? (6) Can the exploration framework of this paper be attached to other MARL algorithms? How well does it work? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: In Section 6, the authors discuss some limitations of this paper. At present, no potential negative societal impact of this work has been identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** It lacks a certain degree of novelty, and just combines previous work such as MI and prediction error. **A1:** Of course, MI and prediction error are commonly used in RL. **In this paper, our main motivation is to highlight that different kinds of exploration play different roles in different scenarios. And we provide a simple but efficient way to combine the two kinds of exploration strategies.** Compared with the related works, our proposed MI is (a) more reasonable and (b) easier to compute. More details are left in the Appendix. **Q2:** Is the exploration framework of this paper attached to other MARL algorithms? How well? **A2:** Our framework is suitable for most MARL algorithms. Our method does not need to modify the baseline model, but only sets up additional modules for computing the intrinsic rewards. These modules can be trained simultaneously with the original model. **Q3:** Many result curves do not show the final convergence state. Lack of quantitative forms of expression, such as reward values or case studies. **A3:** All random seeds of all baselines converge at the end in this paper. (1) Different random seeds may induce different outcomes with a large gap. (2) We add the smooth weight while drawing the curves. These are probably the reasons that they ``look not to converge''. **Q4:** What is the meaning of $p(a_k^t|{\tau}_k^t)$? **A4:** ${\tau}_k^t$ is defined as $({o}_k^0, {a}_k^0,...,{o}_k^t)$ in this paper (see Preliminaries), which does not contain the action at the current time step. Hence, $p(a_k^t|{\tau}_k^t)$ means the probability of taking action $a_k^t$ given the current observation and the history of observation-action pairs. **Q5:** In Equ.(8), why is “-” used instead of “+”? **A5:** We subtract (“-”) the influence-based intrinsic reward from $Q_i$ and get a pessimistic $\tilde{Q}_i$ to encourage the agents to influence each other during training. Consider the simple case of $y = r + \gamma Q' - (Q - r_{inf})$. 
Subtracting $r_{inf}$ from $Q$ is equivalent to adding $r_{inf}$ to $r$. We apply the influence-based intrinsic rewards before each $Q_{i}$ is sent to the central critic because they can then be scaled along with the ${Q}_{i}$. **Q6:** In Equ.(7), why is the input different between them? **A6:** Thanks for pointing that out. The inputs are the same, and we will rectify these typos in the revision. **Q7:** In Fig.5(a), why is the performance of QMIX better than QMIX+Inf and QMIX+Cur? **A7:** Fig.5(a) is an interesting finding. After watching the replay, we can provide an intuitive explanation. In this GRF scenario, each agent is initialized at the same location on the map. Hence, they need to explore the whole map in an acceptable number of training time steps. Meanwhile, influence-based exploration will not promote such behavior, but only focuses on the interactions among agents. This leads to too many unnecessary passes and makes the performance even worse than $\epsilon$-greedy. The parameter-sharing paradigm leads to the same behavior across agents at the beginning of the training procedure. However, curiosity-based exploration will not promote the agents to influence each other and cannot bring diversity to the agents, which makes QMIX+Cur perform worse. --- Rebuttal Comment 1.1: Comment: In this round of feedback, the author provided detailed modifications and explanations for the first round of feedback. Especially, the meaning of relevant formulas has been supplemented in depth, which enables readers to quickly and accurately grasp the content of the article. Unfortunately, the experimental part is still worth discussing: although the authors have explained convergence and some experimental results, the relevant explanations may not be sufficient. If random seeds have a significant impact on experimental results, can it indirectly indicate that the stability of the method is weak? It is strongly recommended to refine and improve the experimental section. 
Finally, I hope the authors can improve the relevant parts as soon as possible to make the article more competitive. Unfortunately, I am unable to modify the relevant scores. --- Reply to Comment 1.1.1: Comment: Thanks for recognizing our work. We appreciate the concerns raised about the experimental part, and here are the replies. 1. Indeed, random seeds can significantly influence the performance of deep reinforcement learning, regardless of training algorithms and exploration strategies [1-4]. Therefore, we followed recent studies and evaluated different methods using the mean or median of 10 random seeds. It is normal for a method to have high variance in performance across different seeds. For example, the baseline QMIX [1] in the scenario "3s5z_vs_3s6z" may have 0% and 100% win rates at two different seeds. 2. Moreover, the instability in MARL often refers to the fluctuations in performance caused by the large joint action and state space within one seed. Hence, the results cannot indicate that the stability of our method is weak. We will refine the experimental section to provide more clarity on this aspect. Thank you again for your feedback, and we will work to improve the quality of our work. [1] Monotonic value function factorisation for deep multi-agent reinforcement learning [2] Weighted qmix: Expanding monotonic value function factorisation for deep multi-agent reinforcement learning. [3] Qplex: Duplex dueling multi-agent q-learning [4] Maser: Multi-agent reinforcement learning with subgoals generated from experience replay buffer --- Rebuttal 2: Comment: Thanks for your careful review again. The discussion deadline is coming. If you have any other concerns or questions, welcome to discuss with us.
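The sign question answered in A5 of the thread above (why Eq.(8) subtracts $r_{inf}$ from $Q_i$) reduces to an identity on the TD target; a tiny numerical sketch with made-up scalar values, not the authors' implementation:

```python
import math

gamma = 0.99
r, r_inf = 1.0, 0.2     # environment reward and influence bonus (made-up values)
q, q_next = 3.0, 3.5    # current and bootstrapped Q-values (made-up values)

# Variant 1: pessimistic value Q - r_inf inside the TD error,
# matching y = r + gamma*Q' - (Q - r_inf) from the rebuttal.
td_pessimistic = r + gamma * q_next - (q - r_inf)

# Variant 2: the same bonus added to the environment reward instead.
td_bonus = (r + r_inf) + gamma * q_next - q

print(math.isclose(td_pessimistic, td_bonus))  # prints True
```

The two variants yield the same TD error, which is exactly why subtracting the bonus from $Q_i$ acts as an exploration incentive rather than a penalty.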
Summary: This paper investigates exploration methods in multi-agent reinforcement learning (MARL). Specifically, among the two main families of exploration methods, namely curiosity-based methods and influence-based methods, the paper reports that they could serve complementary roles to each other. Subsequently, the paper proposes a method that takes advantage of both types of exploration methods, demonstrating its effectiveness in 3 common MARL benchmarks. Strengths: 1. MARL exploration is an open and important problem for the field. The proposal of combining the two families of exploration methods in MARL is novel and the highlight of their complementary role is significant. 2. The experimental results overall show the effectiveness of the proposed method against strong baselines 3. The paper is well-written with the method well-explained. Weaknesses: 1. There are some inconsistencies in results across base algorithms of the proposed methods in different environments. Some additional ablation results could improve this (will be mentioned in the question section) 2. The mentioned inconsistencies are not well explained, making the results less convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The results of QMIX + ours in Aloha seem to be quite odd given the early burst and drop with the plateaued bad results. Any intuition or explanation for the phenomenon? 2. Can you elaborate more on the possible reasons behind the performance discrepancies between the results of QMIX and QPLEX when adding your method? 3. Do you have the results for QPLEX+ours in GRF? This could strengthen the results 4. Given the drastic difference between QMIX and QPLEX when combined with the proposed exploration method, do the ablation results apply to QPLEX too 5. Can you elaborate on how causal relationships could help in balancing the two terms? 6. 
More of an open question, Could this have any failure cases specific to the multi-agent setting like the noisy TV issues in the single-agent case? 7. Would these results apply in fully decentralized MARL methods without sharing parameters? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One limitation is discussed. No negative societal impact was discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** The results of QMIX + ours in Aloha seem to be quite odd given the early burst and drop with the plateaued bad results. Any intuition or explanation for the phenomenon? **A1:** In Aloha, messages collide when two adjacent agents send messages simultaneously. Each successful transmission gains a positive global reward (i.e., 0.1), and a collision gains a negative global reward (i.e., -10). Meanwhile, the agents cannot communicate with each other and do not know whether an action will cause a collision. In the beginning, they try to explore and hence get more successful transmissions. However, they incur more punishment at the same time. The severe punishment mechanism makes the agents adopt conservative strategies. That is the reason why the curve looks like this. **Q2:** Can you elaborate more on the possible reasons behind the performance discrepancies between the results of QMIX and QPLEX when adding your method? **A2:** In most scenarios, COIN can improve the performance of both QMIX and QPLEX. The discrepancies between the results may come from environmental uncertainty and the method itself. MARL scenarios are much more complicated than single-agent RL; thus, it is hard to say which method will definitely outperform the other. Even the same method with different random seeds can show hugely different performance. Hence, it is a normal phenomenon that discrepancies exist, but it is hard for us to provide specific reasons for them. **Q3:** Do you have the results for QPLEX+ours in GRF? Do the ablation results apply to QPLEX too? **A3:** Our method improves QPLEX in the GRF scenario, and the results are shown in Fig. (2) of the affiliated file. We show the ablations of our method when applied to QPLEX in Fig. (1) of the affiliated file. Our method can improve QPLEX to different extents in different scenarios, except for a failure case, ''Gather''. 
**Q4:** Can you elaborate on how causal relationships could help in balancing the two terms? **A4:** Given the causal relationships, we know the potential interdependence among agents. We can assign a lower $\beta_{inf}$ to agents that do not rely on the others and a higher $\beta_{inf}$ to agents that most agents rely on. Meanwhile, we can simplify the computation of Eq.(3) to avoid computing the unnecessary MI. **Q5:** Could this have any failure cases specific to the multi-agent setting like the noisy TV issues in the single-agent case? **A5:** There is a failure case like ``noisy TV'' in the Gather scenario. A higher global reward of 10 is received if all agents choose the goal set at the beginning of the episode, and a relatively lower reward of 5 is received if no agents choose this goal. Obviously, QPLEX fails and falls into suboptimal results, and our framework cannot bring too much improvement. **Q6:** Would these results apply in fully decentralized MARL methods without sharing parameters? **A6:** Our method can also be applied in methods without sharing parameters, and the results could be even better. In some scenarios, the parameter-sharing paradigm brings challenges and is harmful to MARL training. For example, in GRF, the parameter-sharing paradigm leads to the same behavior across all agents at the beginning of the training procedure and hampers the possibility of the agents' cooperation. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. My concerns have been addressed. I have raised my score to 6. I think the consideration of different types of exploration is useful. Even though the combination of methods seems simple, the perspective it brings has novelty. Please incorporate your discussions of the results in the rebuttal into the paper. --- Reply to Comment 1.1.1: Comment: Thanks for raising the score! We will incorporate the discussions into the paper. If you have any other questions or concerns, feel free to discuss them with us.
Summary: In this paper the authors propose a new framework to improve exploration in MARL. The method, COIN, improves exploration by combining the concepts of curiosity and influence from the literature. Strengths: This paper proposes a method that combines in a framework two popular concepts in the reinforcement learning literature, curiosity and influence, aiming to improve exploration. The authors analyse how the different components of their method affect their approach. A detailed description of how the method works is also provided, and it is then tested in a set of different environments. Weaknesses: - "Li et al. [12] point that shared parameters among agents induce similar behaviors of players which makes the model fail to learn successful policies on this challenging task. Our influence-based intrinsic reward promotes the agents to affect each other and hence brings diversity." - this is a problem caused by the famous parameter-sharing paradigm, which the authors also adopt in COIN; I feel this is a big claim that needs further evidence. Minor: - line 156: "courage"->"encourage" - line 96-97: "it shows unstable and innefective" poorly phrased - l 144: "varriance" -> variance - l 154: shouldn't it also be $\tau^~t$? - line 283: plays -> play - description of the Dec-POMDP in section 3 needs to be reviewed: for example, the observation model is missing and $\mathcal{O}$ represents the observation function; $\mathcal{N}$ inside the tuple G should be the number of agents and not the set; among others Please find my questions below. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In line 46 the authors state that their method is compute-efficient. Is there any evidence or proof for this? 2. in equation 7, does it mean that the only difference to predict states and observations is that to predict observations the agents use only the trajectories from a single time step?
If so, is this accurate if we don't consider the previous values, since MARL is a non-stationary sequential problem? 3. I have some questions about the influence of agents on others: does influence-based exploration mean that the agents are going to explore states where each agent has more influence on the actions of the others? If this is the case, how do they measure this influence? From my understanding, this is measured by summing the mutual information between all trajectories and the action of each agent individually to see the influence of each agent on the others; but does this guarantee that influence on the others is always good influence? i.e., can there be cases where we have bad influence and so these states shouldn't be explored? 4. In line 156: "Curiosity-based intrinsic rewards aim to courage the unfamiliar state, and influence-based intrinsic rewards aim to promote coordination." While curiosity-based methods clearly directly tackle exploration problems, can we say the same of influence-based methods? Following the same logic, we could also say that almost any method in MARL is a method that tackles specifically exploration problems, since most of them promote coordination. 5. the authors start the paper by mentioning that exploration is important in sparse-reward environments; in this sense, have they tried to run their method on StarCraft with sparse rewards too? Why choose only the standard StarCraft environments in this case? 6. in section 5.3 it is unclear what the authors end up picking for $\beta_{inf}$ and $\beta_{cur}$: do they get the same weight, i.e., the same value? 7. there are also other methods that work towards efficient exploration, such as [1]. These methods, for example, use sub-goals instead of curiosity, which is also a common benchmark for some exploration-based methods. Have the authors considered comparing against some of these too?
[1] https://arxiv.org/abs/2206.10607 Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Please see above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** ''Our influence-based intrinsic reward promotes the agents to affect each other and hence brings diversity." needs further evidence. **A1:** The parameter-sharing paradigm leads to the same behavior of different agents in Google Research Football (GRF). After watching the replay, we find that for the models without the influence-based intrinsic reward, different agents tend to perform the same action (same as Li et al. [1]). Meanwhile, influence-based rewards encourage each agent to influence the others' actions in the future, and hence can bring diversity. **Q2:** Is there any evidence or proof for compute-efficiency? **A2:** COIN is compute-efficient for two main reasons: (1) First, we adopt simple formulations of both MI for influence-based intrinsic rewards and prediction errors for curiosity-based intrinsic rewards. (2) Both the MI and prediction errors can be estimated by neural networks, and the summation symbol in Eq.(5) and Eq.(6) can be computed in a single forward propagation. We will provide more discussion and emphasize this in the revision. **Q3:** Is the influence measured by summing the mutual information? Does this guarantee that influence on the others is always good influence? **A3:** Yes, we define the influence of agent A on agent B as the MI, which measures how agent A's current action will influence the future trajectory of agent B. The summation of agent A's MI with all its peers becomes the influence-based intrinsic reward of agent A. Our method cannot guarantee the influence is always useful. Indeed, no method can guarantee the influence among agents is always good. Recent literature deems that promoting interaction is often more useful for finding meaningful states than exploring blindly. Meanwhile, what we want to highlight is that ''two heads are better than one'', which encourages the readers to try different kinds of strategies simultaneously rather than focusing on one single strategy.
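To make the influence definition in A3 above concrete, here is a minimal, hypothetical sketch of our own (the paper itself uses neural MI estimators, not this plug-in estimator): it computes an empirical mutual-information estimate between one agent's discrete actions and each peer's trajectory feature, and sums over peers. The function names and the toy data are illustrative assumptions.

```python
import numpy as np
from itertools import product

def mutual_information(x, y):
    """Plug-in MI estimate (in nats) between two discrete sample arrays."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for a, b in product(np.unique(x), np.unique(y)):
        pxy = np.mean((x == a) & (y == b))
        if pxy > 0:
            mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def influence_reward(action_i, peer_trajs):
    """Influence-based intrinsic reward of agent i: the sum over peers of
    MI(agent i's action; a discrete feature of the peer's future trajectory)."""
    return sum(mutual_information(action_i, traj) for traj in peer_trajs)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=5000)            # agent i's actions
dependent = a ^ (rng.random(5000) < 0.1)     # peer strongly driven by a
independent = rng.integers(0, 2, size=5000)  # peer that ignores a
r = influence_reward(a, [dependent, independent])
# r is dominated by the MI with the dependent peer; the independent peer
# contributes only a small estimation bias near zero
```

As A3 notes, a large value of this reward only signals interaction, not that the interaction is beneficial.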
**Q4:** Curiosity-based methods clearly directly tackle exploration problems. Following the same logic, we could also say that almost any method in MARL is a method that tackles specifically exploration problems since most of them promote coordination. **A4:** Curiosity-based methods encourage the agents to experience unseen joint states. However, in the MARL setting, meaningful states are too sparse and hard to find. Therefore, how to find potentially meaningful states within an acceptable number of exploration time steps is a problem. Different kinds of exploration strategies follow different assumptions, and the influence-based methods assume that the states where the agents interact with each other are more likely to be meaningful. We find that MARL scenarios are often complicated, and relying on only one kind of strategy may not achieve the desired effect. **Q5:** Have you tried to run on StarCraft with sparse rewards? Have you considered comparing against some methods with subgoals [1]? **A5:** We perform experiments on sparse rewards and compare our method with MASER [1], which is a goal-oriented method. The results are shown in Fig. (3) of the affiliated PDF. Fig. (3) depicts that our method outperforms MASER in 3m and 2s3z. Our results are slightly inconsistent with those reported in [1]. The reasons may come from the game version, random seeds, etc. We will perform more experiments to justify the superiority of our method. [1] https://arxiv.org/abs/2206.10607 **Q6:** It is unclear what the authors end up picking for the hyperparameters. **A6:** We mainly follow two principles to design the hyperparameters: 1) The two kinds of intrinsic rewards are on the same order of magnitude. 2) The two scaled intrinsic rewards are about 100 to 1000 times lower than the maximum extrinsic reward at the beginning of training. We choose different $\beta_{inf}$ and $\beta_{cur}$ for different scenarios, as shown in the Appendix.
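The two principles in A6 can be turned into a simple recipe for picking the coefficients. The sketch below is a hypothetical illustration, not the authors' exact procedure; the helper name and the 1/1000 ratio are our own assumptions.

```python
import numpy as np

def scale_intrinsic_coefficients(r_cur, r_inf, r_ext_max, ratio=1e-3):
    """Hypothetical helper: choose beta_cur and beta_inf so that
    (1) the two scaled intrinsic rewards share the same order of magnitude, and
    (2) each is about `ratio` times the maximum extrinsic reward."""
    beta_cur = ratio * r_ext_max / (np.mean(np.abs(r_cur)) + 1e-8)
    beta_inf = ratio * r_ext_max / (np.mean(np.abs(r_inf)) + 1e-8)
    return beta_cur, beta_inf

# toy raw intrinsic rewards observed early in training, with very different scales
rng = np.random.default_rng(0)
r_cur = rng.random(1000) * 5.0    # curiosity: prediction errors
r_inf = rng.random(1000) * 0.01   # influence: MI estimates
b_cur, b_inf = scale_intrinsic_coefficients(r_cur, r_inf, r_ext_max=20.0)

# after scaling, both intrinsic terms sit at ~1/1000 of the max extrinsic reward
scaled_cur = b_cur * np.mean(np.abs(r_cur))
scaled_inf = b_inf * np.mean(np.abs(r_inf))
```

This matches the stated principles in spirit only; the paper's appendix reportedly lists per-scenario values chosen by hand.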
--- Rebuttal Comment 1.1: Comment: Thanks for answering my questions. Regarding Q1, I am still not fully convinced that the proposed method will bring that much diversity. Watching replays might not be enough, as it might depend on the seeds, for example. Some sort of metric to evaluate this would be useful, or even looking at the weights of the trained networks could give a better idea of this diversity. The rest of my questions have been addressed. I believe the experiments in sparse settings and comparisons with MASER [1] are particularly important. I recommend incorporating the discussions and results of the rebuttal in the paper. I will raise my score, assuming that these will be included. --- Rebuttal 2: Comment: Thanks again for your careful review. The discussion deadline is approaching. If you have any other concerns or questions, feel free to discuss them with us.
Summary: This paper looks at two different exploration strategies within multi-agent reinforcement learning. The two exploration strategies are curiosity-based and influence-based, where the latter is unique to MARL and takes into account the mutual information of one agent's actions on the trajectory of another agent. The study then looks at combining these two forms of exploration and studies the effects of the individual intrinsic rewards as well as their combined effects on three distinct environments. A number of ablations are performed during these studies. Strengths: The paper introduces a novel algorithm which is able to explore in a MARL setting using two different additional intrinsic reward signals. It seems that such an exploration strategy is effective in the examples given at improving performance. The algorithm is relatively clean and the explanation is clear and precise. Exploration strategies are clearly a very important topic within the realm of computationally costly settings, and so such an improvement is clearly important. Weaknesses: The language in the paper needs some significant work. I would recommend passing the paper through an LLM for checking for spelling and grammar throughout, as there are a fair number of such issues. The lack of clear exploration in the hyperparameter space is certainly an important limitation of this work. While the choice of hyperparameters is motivated, it is not clear that this has actually been studied, and in particular where the mixture of two intrinsic rewards is being used together, it seems that such an investigation would be very important to have a deeper understanding of their effects. Although an ablation study has been performed, this is really a binary investigation, whereas the continuum of mixings between the rewards would be most interesting. Although there are a fair number of plots showing the learning trajectory on the test score, no summary statistics have been included.
It would be useful to have a more detailed, quantitative set of measures of improvement, in particular over the hyperparameter sweeps. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: I believe that the weaknesses highlight the questions that I have, in particular surrounding the hyperparameters and a more detailed investigation of the statistics. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: While limitations around the hyperparameters have been noted, these do not give sufficient response to the points raised above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1:** The lack of clear exploration in the hyperparameter space. The continuum of mixings between the rewards would be most interesting. Quantitative set of measures of the hyperparameter sweeps. **A1:** The relative importance of the two kinds of exploration varies across time steps. Thus, it is very hard to decide which kind of strategy will dominate. In this work, we follow two principles to design the hyperparameters: 1) The two kinds of intrinsic rewards are on the same order of magnitude. 2) The two scaled intrinsic rewards are about 100 to 1000 times lower than the maximum extrinsic reward at the beginning of the training stage. We choose different $\beta_{inf}$ and $\beta_{cur}$ for different scenarios, as shown in the Appendix. Furthermore, it is worth noting that different scenarios require different exploration strategies; when it is hard to decide which one to use, implementing both in a simple and efficient way can be a good choice. **Q2:** Checking for spelling and grammar throughout. **A2:** Thanks for your careful reviews and suggestions again. We will carefully proofread our paper and revise all typos. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the clarifications. I have now increased my score to a 6. --- Reply to Comment 1.1.1: Comment: Thanks for raising the score! If you have any other questions or concerns, feel free to discuss them with us.
Rebuttal 1: Rebuttal: Thanks for your careful reviews and discerning comments. We will explain all the mentioned questions in the following, and some new experiments are added to the affiliated file. Pdf: /pdf/97fd42a2492bd1fc4bc520b4ee8fd55320a39c43.pdf
NeurIPS_2023_submissions_huggingface
2023
Every Parameter Matters: Ensuring the Convergence of Federated Learning with Dynamic Heterogeneous Models Reduction
Accept (poster)
Summary: This work establishes a general convergence analysis for a heterogeneous federated learning (FL) algorithm that trains a shared global model using a sequence of time-varying and client-dependent local models. In particular, this work establishes sufficient conditions for the convergence of such a heterogeneous FL algorithm to the neighborhood of a stationary point of the standard FL. Based on the theoretical results, the authors propose practical suggestions for designing heterogeneous FL algorithms and conduct thorough experiments to support their claims. Strengths: Overall, this paper is well-written. It is the first to provide a general convergence result to the neighborhood of a stationary point for the standard federated learning (FL) framework, specifically for a heterogeneous FL algorithm that trains a shared global model through a sequence of time-varying and client-dependent local models. This result provides a convergence guarantee for several previously proposed heterogeneous FL algorithms. The optimality gap in the result depends on two important parameters: the minimum coverage index and the model reduction noise. These parameters offer valuable insights for designing practical heterogeneous FL algorithms. The experimental results align well with the theoretical findings presented in the paper. Weaknesses: The last term on the right-hand side of the inequality in Theorem 1 (and similarly for Theorem 2) corresponds to the average of the norms of the global parameters encountered during the optimization process. This term is not necessarily small in a straightforward manner. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can the authors provide additional discussions regarding the last term in Theorem 1 (and Theorem 2)? Specifically, it would be helpful to explore whether this term is tight or under what circumstances it can be considered small. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive and insightful comments and suggestions. Following are our responses to your questions and concerns. > Q1. The last term on the right-hand side of the inequality in Theorem 1 (and similarly for Theorem 2) corresponds to the average of the norms of the global parameters encountered during the optimization process. This term is not necessarily small in a straightforward manner. * It is true that it might be hard to directly quantify the term $E\Vert \theta_q\Vert^2$, since it depends on the exact FL problem setup. We note that this term $E\Vert \theta_q\Vert^2$ is bounded and should be examined together with its coefficients in the convergence bound, i.e., $\frac{\delta^2}{\Gamma^*} E\Vert \theta_q\Vert^2$. In particular, considering the coefficient $\delta^2$ together with $E\Vert \theta_q\Vert^2$, we have $E\Vert \delta \theta_q\Vert^2$, which relates to the pruning noise introduced in Assumption 2 and is equivalent to the bounded model reduction noise, i.e., $\|\theta_{q}-\theta_{q} \odot m_{q,n}\|^{2} \leq \delta^{2}\|\theta_{q}\|^{2}$. We also note that it is common to have a bounded model-norm term in heterogeneous federated learning, as in [1][2], or a similar term $\|w_1-w_*\|^2$ in works such as [3]. Our analysis shows that to reduce this term $\frac{\delta^2}{\Gamma^*} E\Vert \theta_q\Vert^2$, we can either increase $\Gamma_{min}$ (by designing pruning strategies) or decrease the mask noise $\delta$ (by adjusting the pruning level). We will add these discussions to the revised paper. > Q2. Can the authors provide additional discussions regarding the last term in Theorem 1 (and Theorem 2)? Specifically, it would be helpful to explore whether this term is tight or under what circumstances it can be considered small. * We will add more discussion on this term in the revision.
Although it is hard to directly tell how large this $E\Vert \theta_q\Vert^2$ is, our evaluation results demonstrate how this term affects learning performance. For instance, in the main results (Table 1 in the main paper) we can compare “Pruning-Greedy” and “Pruning-Optimised”: the former has a larger $E\Vert \theta_q\Vert^2$ and the latter has a larger $\Gamma_{min}$, and as a result the latter achieves a lower loss and converges at a faster rate. Similar results can be found in other cases in the main results (e.g. by comparing “Pruning-Medium (Greedy)” and “Pruning-Medium-Optimised” in Fig2 (a,b)) and additional results in the appendix. We will add these explanations and more insights in the revised paper. [1] Jiang, Zhida, et al. "Fedmp: Federated learning through adaptive model pruning in heterogeneous edge computing." 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 2022. [2] Xiaolong Ma, et al. "Effective Model Sparsification by Scheduled Grow-and-Prune Methods." International Conference on Learning Representations. 2022. [3] Xiang Li, et al. "On the Convergence of FedAvg on Non-IID Data." International Conference on Learning Representations. 2020. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal and for the discussion the authors plan to add. I will take it into consideration during the discussion phase with AC.
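The two quantities the rebuttal above says govern the bound, the mask noise $\delta$ of Assumption 2 (with $\|\theta_q - \theta_q \odot m_{q,n}\|^2 \le \delta^2 \|\theta_q\|^2$) and the minimum coverage index $\Gamma_{min}$, can both be computed numerically. The sketch below is our own illustration under stated assumptions (magnitude-based pruning, random client masks); the helper names are hypothetical.

```python
import numpy as np

def mask_noise_delta(theta, mask):
    """Empirical delta in Assumption 2: ||theta - theta*mask||^2 <= delta^2 ||theta||^2."""
    return np.sqrt(np.sum((theta - theta * mask) ** 2) / np.sum(theta ** 2))

def magnitude_mask(theta, keep_ratio):
    """Keep the largest `keep_ratio` fraction of weights by magnitude."""
    mask = np.zeros_like(theta)
    k = int(theta.size * keep_ratio)
    mask[np.argsort(np.abs(theta))[-k:]] = 1.0
    return mask

rng = np.random.default_rng(0)
theta = rng.normal(size=10_000)

delta50 = mask_noise_delta(theta, magnitude_mask(theta, 0.5))
delta90 = mask_noise_delta(theta, magnitude_mask(theta, 0.9))
# milder pruning keeps more of the squared mass, so delta90 < delta50

# minimum coverage index over N client masks: the smallest number of clients
# whose local mask covers any single parameter
client_masks = np.stack([rng.random(theta.size) < 0.5 for _ in range(10)])
gamma_min = int(np.min(client_masks.sum(axis=0)))
```

This mirrors the design advice in the rebuttal: raise $\Gamma_{min}$ by coordinating which parameters each client's mask covers, or shrink $\delta$ by pruning less aggressively.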
Summary: This paper studies federated learning with heterogeneous client models and non-iid client data. By assuming that the client models are pruned versions of a common global model and using the notion of minimum covering index, the paper provides the convergence of FedAvg under client model pruning. Theoretical results show that pruning techniques that more evenly update the parameters and result in smaller model distortion are preferable to aggressive pruning. Such a result is also verified through the numerical results on the MNIST, Cifar-10, and Cifar-100 datasets. Strengths: Novelty: this paper provides a first convergence analysis of FedAvg with model pruning. The theoretical proof is sound and clear. Clarity: the paper thoroughly discusses the convergence result in theorems 1 and 2 and verifies the results through numerical experiments. Weaknesses: 1. Term $E\Vert \theta_q\Vert^2$ in the bound. This term appears in all theorems and lemmas in the main paper. However, it is unclear how large this term can be. Neither theoretical discussion nor numerical justification of this term is provided. This undermines the strength of the theoretical results. 2. The connection between the theorem and the general convergence result of FedAvg under the same assumption. When $\delta = 0, \Gamma_{min} = T$, the convergence should recover the rate of standard FedAvg. However, such a discussion is missing, weakening the paper's clarity. 3. The connection between the proof of pruning and lossy compression. Assumption 2 takes the standard assumption of model compression in compressed FL (e.g., [19, 21, 22]). Yet there is a critical difference in the minimum covering index. The connection and difference between the proof techniques should be discussed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the above weaknesses. Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the limitation of the paper, that partial client participation is not considered, and how the theoretical result guides the design of an optimal model pruning strategy is also limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments. We will follow these helpful comments in our revised version. Following are our responses to your questions and concerns. > Q1. Term $E\Vert \theta_q\Vert^2$ in the bound. This term appears in all theorems and lemmas in the main paper. However, it is unclear how large this term can be. Neither theoretical discussion nor numerical justification of this term is provided. This undermines the strength of the theoretical results. * Indeed it is hard to directly quantify the term $E\Vert \theta_q\Vert^2$, since it depends on the exact FL problem setup. We note that this term $E\Vert \theta_q\Vert^2$ should be examined together with its coefficients in the convergence bound, i.e., $\frac{\delta^2}{\Gamma_{min}} E\Vert \theta_q\Vert^2$. In particular, considering the coefficient $\delta^2$ together with $E\Vert \theta_q\Vert^2$, we have $E\Vert \delta \theta_q\Vert^2$, which relates to the pruning noise introduced in Assumption 2 and is equivalent to the bounded model reduction noise, i.e., $\|\theta_{q}-\theta_{q} \odot m_{q,n}\|^{2} \leq \delta^{2}\|\theta_{q}\|^{2}$. We also note that it is common to have such a bounded model-norm term in heterogeneous federated learning, as in [1][2], or a similar term $\|w_1-w_*\|^2$ in works such as [3]. Our analysis shows that to reduce this term $\frac{\delta^2}{\Gamma_{min}} E\Vert \theta_q\Vert^2$, we can either increase $\Gamma_{min}$ (by designing pruning strategies) or decrease the mask noise $\delta$ (by adjusting the pruning level). We will add these discussions to the revised paper. * Although it is hard to directly tell how large this $E\Vert \theta_q\Vert^2$ is, our evaluation results demonstrate how this term affects learning performance.
For instance, in the main results (Table 1 in the main paper) we can compare “Pruning-Greedy” and “Pruning-Optimised”: the former has a larger $E\Vert \theta_q\Vert^2$ and the latter has a larger $\Gamma_{min}$, and as a result the latter achieves a lower loss and converges faster. Similar results can be found in other cases in the main results (e.g. by comparing “Pruning-Medium (Greedy)” and “Pruning-Medium-Optimised” in Fig2 (a,b)) and additional results in the appendix. > Q2. The connection between the theorem and the general convergence result of FedAvg under the same assumption. When $\delta = 0, \Gamma_{min} = T$, the convergence should recover the rate of standard FedAvg. However, such a discussion is missing, weakening the paper's clarity. * This is a good point. While $\delta = 0, \Gamma_{min} = T$ reduces the system model to standard FedAvg, our proof approach here is quite different due to additional steps (i.e., Lemma 1,2,3 in appendix) that are required to bound the training of heterogeneous local models, their differences at the end of each round, and the impact on global aggregation. Thus, we may not exactly recover the same convergence result as FedAvg. Nevertheless, as shown in Theorems 1 and 2: “We prove that heterogeneous FL algorithms satisfying certain sufficient conditions can indeed converge to a neighborhood of a stationary point of standard FL (with a small optimality gap that is characterized in our analysis)”. When $\delta = 0, \Gamma_{min} = T$, the radius of the error ball (i.e., the last term in the convergence bounds) indeed becomes zero, implying convergence to standard FedAvg. This discussion will be added to the revised paper. > Q3. The connection between the proof of pruning and lossy compression. Assumption 2 takes the standard assumption of model compression in compressed FL (e.g., [19, 21, 22]). Yet there is a critical difference in the minimum covering index.
The connection and difference between the proof techniques should be discussed. * There are three key differences: (1) In this paper, we introduced the concept of minimum coverage index for the first time, where we show that model compression alone is not enough to allow a unified convergence analysis/framework for heterogeneous federated learning. The minimum coverage index, together with pruning/compression noise, determines convergence in heterogeneous FL. (2) Our results show that heterogeneous FL algorithms satisfying certain sufficient conditions can indeed converge to a neighborhood of a stationary point of standard FL. This is a stronger result, as it shows convergence to standard FL rather than simply converging to an arbitrary point. A minimum coverage index of $\Gamma_{min}=0$ means that the model would never be updated, which is meaningless even if it still converges. (3) In terms of the proof techniques, additional steps (i.e., Lemma 1,2,3 in appendix) are required to bound the training of heterogeneous local models, quantify their differences at the end of each round, and analyze the impact on global aggregation. These make the proof much more complicated than the convergence analysis of standard FL. We will add more discussions on these in the revised paper. [1] Jiang, Zhida, et al. "Fedmp: Federated learning through adaptive model pruning in heterogeneous edge computing." 2022 IEEE 38th International Conference on Data Engineering (ICDE). IEEE, 2022. [2] Xiaolong Ma, et al. "Effective Model Sparsification by Scheduled Grow-and-Prune Methods." International Conference on Learning Representations. 2022. [3] Xiang Li, et al. "On the Convergence of FedAvg on Non-IID Data." International Conference on Learning Representations. 2020. --- Rebuttal 2: Comment: Dear reviewer mRuV, We would like to thank you again for the time you dedicated to reviewing our paper and your valuable comments. We believe that we have addressed your concerns.
Since the end of the discussion period is approaching and we have not heard back from you yet, we would appreciate it if you kindly let us know of any other concerns you may have, and if we can be of any further assistance in clarifying any other issues. Thanks and sincerely, Authors --- Rebuttal Comment 2.1: Comment: I thank the authors for their response; those clarifications should be added to the revised manuscript. Other than that, I think this is a paper above the accept line. I will take the response into consideration during the discussion phase with AC.
Summary: This paper focuses on the cross-device federated learning setting. The authors introduce a general theoretical framework for analyzing FedAvg with masks on local pruned models. This analysis is particularly valuable in establishing the convergence of federated schemes when the global model is distributed across multiple edge clients. Furthermore, the paper includes numerical illustrations of their algorithm, providing practical insights on its performance. Strengths: * The paper is well-written and easy to understand, but with a few points that might be improved (see "Questions" part). * The paper conducts a thorough theoretical investigation on federated learning with reduced-size models. Specifically, the authors provide novel tools to study the convergence of FL pruned models. Weaknesses: * Although interesting, some parts may seem to have been rushed. For example, assumptions and lemmas should be referenced with \ref{}. Further, I also spotted a few errors in the supplement, even though I haven't read the supplementary material in detail. It's important to review and correct these typos. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The article is interesting, could you address the following points so that I can raise my score? * It is unclear whether $\Gamma_q^{(i)}$ is defined or if it refers to $N_{q}^{(i)}$ mentioned on line 177. Similarly, does $\Gamma^*$ denote the same quantity as $\Gamma_{min}$? * The proposed update in the equation on line 134 is not exactly the same as the one implemented in the "Update.py" script. I think the mask is not applied to the gradient. * I haven't looked at the supplementary material, but it seems to me that an expectation is missing in Eq.(20). * Is it necessary to use $\delta<1$ in assumption 2? Since I don't see $1/(1-\delta)$ in your bounds, I wonder if this constraint is useful. **Requested Changes:** * Line 59: "will it converge" inside Figure 1 legend.
* Line 75: client is denoted by $i$, then later by $n$ at the end of page 2. * Line 95: "models are to share". * Lines 108: "works like Hermes can". * Definition of $I_0$ in Theorem 1 and Theorem 2 is different; there is maybe an additional $\delta^2$. * Some punctuation is missing, e.g., Eq. (9). * Conclusion seems a little heavy. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address the potential negative societal implications of their research, but this does not seem critical for this particular theoretical and numerical study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and for confirming the contributions of our paper. We provide clarifications to your questions and concerns below. > Q1. It is unclear whether $ \Gamma_q^{(i)}$ is defined or if it refers to $N_{q}^{(i)}$ mentioned on line 177. Similarly, does $\Gamma^*$ denote the same quantity as $\Gamma_{min}$? * Yes, $ \Gamma_q^{(i)}$ is defined according to Eq. (8) on line 177. Essentially $\Gamma^*$ and $\Gamma_{min}$ are the same thing. $\Gamma_{min}$ is introduced as a main concept of the paper, and $\Gamma^*$ is used during the derivation for simplicity. We will clarify this in the revision. > Q2. The proposed update in the equation on line 134 is not exactly the same as the one implemented in the "Update.py" script. I think the mask is not applied to the gradient. * Thanks for checking our code implementation! In this proof-of-concept experiment, we did not apply masks to the gradients, as directly modifying the gradients would cause an error since the PyTorch autograd framework is responsible for handling them. Instead, for each batch of training, we directly apply the mask to the weights and biases before calculating the loss and after the back-propagation (e.g., the use of the "get_sub_paras" helper function at Line 22 in main_fed_20N_AVGALL.py). This is equivalent to applying the mask to the gradients and has been used as a workaround in previous work for more easily obtaining proof-of-concept implementations. > Q3. I haven't looked at the supplementary paper, but it seems to me that an expectation is missing in Eq. (20). * Thanks for examining the appendix section and pointing this out; indeed, there should be an expectation sign in Eq. (20). We have carefully proofread the supplementary document again and fixed the typos. > Q4. Is it necessary to use $\delta<1$ in Assumption 2? Since I don't see $1/(1-\delta)$ in your bounds, I wonder if this constraint is useful. * It is necessary and useful to have this condition. 
In fact, $\delta<1$ holds by definition and directly follows from Assumption 2. Since the local model is extracted from the global model, it has to be smaller than the global model (as some parameters are pruned). According to Eq. (11), which quantifies the difference/noise resulting from applying a mask, $\delta$ must be smaller than 1 by definition. That’s why we listed it in the assumption. We will add a footnote to clarify this in the revised paper. > Definition of $I_0$ in Theorem 1 and Theorem 2 is different, there is maybe an additional $\delta^2$ * Thanks for pointing this typo out. The definition of $I_0$ in Theorem 2 should be exactly the same as that in Theorem 1 (thus the additional $\delta^2$ is absorbed into $I_0$ in Theorem 2 as well). This is a typo introduced when we were trying to simplify the auxiliary variables in both theorems. Full equations and their derivations are presented in the appendix. > Requested Changes: Some punctuation is missing, e.g., Eq. (9). Line 59: "will it converge" inside Figure 1 legend. Line 75: client is denoted by $i$, then later by $n$ at the end of page 2. Line 95: "models are to share". Lines 108: "works like Hermes can". Conclusion seems a little heavy. * Thanks for pointing out the typos, grammar, format issues, and writing suggestions; they will be addressed in the revision. --- Rebuttal 2: Comment: Dear reviewer oYBg, We would like to thank you again for the time you dedicated to reviewing our paper and your valuable comments. We believe that we have addressed your concerns. Since the end of the discussion period is approaching and we have not heard back from you yet, we would appreciate it if you kindly let us know of any other concerns you may have, and if we can be of any further assistance in clarifying any other issues. Thanks and sincerely, Authors
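The equivalence claimed in the authors' response to Q2 (re-applying the mask to the weights around an unmasked update, instead of masking the gradients) can be checked with a minimal, dependency-free sketch. The variable names below are illustrative and not taken from the paper's code:

```python
# Element-wise over plain lists to keep the sketch dependency-free.
# mask entries are 0/1, so mask * mask == mask, which drives the equivalence.
w = [0.5, -1.2, 0.3, 2.0, -0.7]      # current weights
mask = [1.0, 0.0, 1.0, 1.0, 0.0]     # pruning mask (1 = keep, 0 = pruned)
grad = [0.1, -0.4, 0.2, -0.3, 0.5]   # gradient of the loss
lr = 0.1                             # learning rate

# Route A: apply the mask to the gradient directly.
w_a = [m * wi - lr * (m * gi) for wi, m, gi in zip(w, mask, grad)]

# Route B (the workaround described in the response): mask the weights
# before the step, take an unmasked step, then re-apply the mask.
w_b = [((m * wi) - lr * gi) * m for wi, m, gi in zip(w, mask, grad)]

assert all(abs(a - b) < 1e-12 for a, b in zip(w_a, w_b))
```

Pruned coordinates stay at zero under both routes, so the two updates coincide for any 0/1 mask.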
Summary: In this work, the authors provide a general theoretical framework to analyze the convergence of Federated training schemes conducted over local models of heterogeneous network structures. Such structures can usually be obtained through different model reduction methods, such as model pruning/sparsification or model extraction. The proposed framework introduces the minimum covering index concept to conduct the analysis, representing the number of local models concurrently updating the same set of parameter indices. The paper is well written, but a couple of clarifications are necessary. Strengths: Very well-written paper with clear objectives and contributions. Generalized framework to encapsulate different model reduction algorithms. The introduction of the minimum coverage index concept and its interplay with model reduction noise can lead to very promising and interesting insights. Weaknesses: Further elaboration is needed on terms and concepts used during the theoretical analysis. Empirical evaluation needs improvement. Technical Quality: 3 good Clarity: 3 good Questions for Authors: From my understanding, your framework seems to consider reduced model structures when these structures are obtained when the global model is reduced (pruned). Will your analysis hold even in the case of local model reduction, i.e., client-side reduction, not server-side? Will the model reduction noise (assumption 2) and bounded gradient (assumption 3) still hold? Your analysis shows that local model training is performed over the entire network. How would that affect your analysis if you were to enforce training over the parameters of the reduced model (enforce masks during training)? The proof outline does not clarify how the mask is constructed. Is it always static, or can it be dynamically changing? In PruneFL, masks can periodically increase to accommodate model regrowth. 
Similarly, FedDST [1] performs local model regrowth with a fixed pruning degree during model communication. Analogously, FedSparsify [2] performs progressive model reduction (progressive sparsification), where the global or local model is progressively pruned. Can your framework capture such dynamic mask construction? Please expand your related work and discuss whether your framework can also accommodate such dynamic pruning approaches. The model settings through which the model reduction level is obtained in Table 1 are not clear. Can you please elaborate on what methods you used to perform the greedy pruning, pruning-optimized, static subnet subtraction, and homogeneous methods? Did you use structured or unstructured pruning? Random or based on weight magnitude? Can you report the noise reduction (percentage-wise) in the models learned at every model reduction level? Such an analysis could lead to key insights with respect to the index coverage vs. noise reduction tradeoff. Can your analysis be extended to activations and/or neuron pruning as part of your future work? Also, have you considered extending your framework to asynchronous federated settings? [1] Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better. Sameer Bibikar, Haris Vikalo, Zhangyang Wang, Xiaohan Chen. https://ojs.aaai.org/index.php/AAAI/article/view/20555/20314 [2] Federated Progressive Sparsification (Purge-Merge-Tune)+. Dimitris Stripelis, Umang Gupta, Greg Ver Steeg, Jose Luis Ambite. https://openreview.net/pdf?id=GLQqPTRrQMx Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review! > Q1. From my understanding, your framework seems to consider reduced model structures when these structures are obtained when the global model is reduced (pruned). Will your analysis hold even in the case of local model reduction ... How would that affect your analysis if you were to enforce training over the parameters of the reduced model? We would like to clarify that our framework indeed considers local model training over the reduced networks (i.e., the masks are always enforced during the training of local models). It is shown in Eq.(2) at line 131 and the equation at line 134 how the local models are obtained by pruning and then trained as reduced networks. A pseudocode example is provided in Algorithm 1 (page 1) in the appendix, showing the training of local models with fixed masks and their aggregation to the global model. If we understand the comment correctly, this is the client-side reduction and reduced local model training, as the reviewer pointed out. This problem formulation makes convergence difficult to analyze, which is the main contribution of this paper. Our analysis shows that under Assumptions 2 and 3 (both of which relate to reduced local models), convergence (to standard FL) will stand for as long as certain conditions are met, that is, every parameter is included and updated at least once throughout the training, which is the key idea of this paper: every parameter matters. Further, our experiment section results are generated in a way that local model training is done on reduced-size models. > Q2. The proof outline does not clarify how the mask is constructed. Is it always static, or can it be dynamically changing? .... Can your framework capture such dynamic mask construction? Please expand your related work and discuss whether your framework can also accommodate such dynamic pruning approaches. 
Our paper establishes convergence conditions in the general form, which apply to both static and dynamically changing masks. With dynamically changing masks, we denote the reduced models/networks as $\theta_{q,n,t}$, which means that the model structure (with its corresponding mask) can change between each round of communications $q$ and during each local training epoch $t$, and can be different from other local clients. We show that as long as the dynamic heterogeneous FL framework can be framed as the setting above, our convergence analysis in this paper applies. Thus, our results establish the general convergence condition covering cases of static model pruning, dynamic model reduction, and grow-and-prune types of training, e.g., [1-4]. However, indeed the current description of mask generation can be improved by expanding to more related works for better understanding. We will make such changes in the revision as the reviewer suggested. > Q3. The model settings through which the model reduction level is obtained in Table 1 are not clear. Can you please elaborate on what are the methods you used to perform the greedy pruning, pruning-optimized, static subnet subtraction, and homogeneous methods? ..... Such an analysis could lead to key insights with respect to the index coverage vs. noise reduction tradeoff. Due to page limitations, we summarized the key findings in the main paper and presented detailed settings of each model reduction method in the appendix. Specifically, we listed how local models are generated (how we define greedy pruning, pruning-optimized, static subnet subtraction, how masks are generated, etc.), their respective coverage indices, and each of their reduction levels percentage-wise (which leads to the initial model reduction noise). 
For instance, we explained that "Pruning-Optimised" works at a low model reduction level and is made up of 10 local models: 4 full-size models and 6 reduced-size models at 75% of the global model's size that cover 3 different regions (with each setting applied to 2 models) as {$S_{1},S_{3},S_{4}$}, {$S_{1},S_{2},S_{4}$}, {$S_{1},S_{2},S_{3}$}, where $S_1$ represents the region consisting of the top 25% largest-magnitude parameters and $S_4$ the smallest, and {$S_{1},S_{2},S_{3},S_{4}$} is the full model. We considered both structured and unstructured pruning, including unstructured weight pruning, structured neuron pruning, and leading subnet extraction. We will add a brief summary to further clarify this in the revised paper. > Q4. Can your analysis be extended to activations and/or neuron pruning as part of your future work? Have you considered extending your framework for asynchronous federated settings? The current setting of model reduction considers both structured and unstructured model reduction, which includes neuron pruning. In fact, in the experiment section and appendix, we have results for neuron pruning. In Table 1 of the main paper, we used “pruning” to demonstrate unstructured pruning and “static subnet subtraction” to demonstrate structured neuron pruning (continuous neuron pruning), and due to page limits, we have in the appendix results for (unstructured) neuron pruning in Tables 2 and 3 and their illustrations in Figure 1. We agree extending the framework to asynchronous federated settings would be an interesting direction for future work. We will include some discussions in the revised paper. 
[3] Tao Lin, et al. "Dynamic Model Pruning with Feedback." International Conference on Learning Representations. 2020. [4] Alam, Samiul, et al. "Fedrolex: Model-heterogeneous federated learning with rolling sub-model extraction." Advances in Neural Information Processing Systems 35 (2022): 29677-29690. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying that the masks are enforced during local model training. It was not evident in the original text; you might need to state this explicitly. I also appreciate your effort in explaining the different types of model reduction you considered in your evaluation and how your analysis encapsulates various pruning methods. Overall, the authors have addressed all of my concerns. My score remains the same. --- Reply to Comment 1.1.1: Comment: Dear reviewer 9LDk, We would like to thank you again for the time you dedicated to reviewing our paper and your valuable comments that will further improve the clarity of the paper. We are happy to see that our response has addressed all of your concerns. Thanks a lot again and with sincerest best wishes, Authors
Rebuttal 1: Rebuttal: Dear reviewers, thank you all again for your valuable time and positive suggestions that will definitely make our paper stronger. We have provided each reviewer with our responses in a Q&A format. We are happy to answer further questions if any.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Enhancing User Intent Capture in Session-Based Recommendation with Attribute Patterns
Accept (poster)
Summary: The paper presents a transformer-based method, namely Frequent Attribute Pattern Augmented Transformer (FAPAT), that considers attribute patterns as supplementary information. More specifically, FAPAT builds attribute transition graphs, mines frequent attribute patterns, and matches attribute patterns to better capture user intents. Experiments conducted on two public benchmarks as well as three industrial datasets show that the proposed method delivers noticeable performance boosts (4.5%) averaged over all experiments. The code and datasets are promised to be released after acceptance. Strengths: 1. The Introduction section is written very well to present the problem and the high-level picture of the proposed method. 2. Table 1 in the Related Work section is very straightforward and efficient in comparing a large number of existing methods and demonstrating how they differ from the proposed method. And the categorical items (temporal info, history attention, etc.) are reasonably defined. 3. The Methodology section is clear and easy to follow. 4. Experiments are thorough with a good number of baselines. Weaknesses: 1. Some of the thought process is not explained. For example, in Section 4.1.1, what is the thought process behind adopting gSpan, and how were those twenty types of patterns chosen? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. In the introduction, can you elaborate more on “We argue that the current use of item-side metadata provides little assistance in SBR models as user intent may change over time.“? 2. Typo in line 131 - "whi". 3. I am not sure what is the purpose of Section 3 - Background and Motivations, especially 3.2 and 3.3; they read like they should belong to Related Work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors discuss limitations in Appendix A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Clarifications for pattern acquisition** A1. End-to-end retrieval becomes a viable option in the absence of resource constraints. However, given the multitude of substructures (numbering in the millions or even billions) within industrial sessions, we must consider efficiency. Data mining methods, like gSpan, offer near-linear complexity, drastically trimming the pool of pattern candidates to a few thousand. This ensures efficiency and effectiveness in both training and inference. We exclusively consider motifs with three or four nodes, encompassing only those exhibiting cyclic or triangular structures. These two types of structures notably curtail randomness and enhance robustness.
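The filtering step described above (keeping only mined motifs that contain a cycle, such as a triangle) can be sketched with a simple union-find cycle check. This is our own simplified illustration, not the paper's gSpan pipeline; the function name and pattern encoding are assumptions:

```python
def has_cycle(num_nodes, edges):
    """Return True if the undirected pattern (node ids 0..num_nodes-1,
    edges as (u, v) pairs) contains a cycle; uses union-find."""
    parent = list(range(num_nodes))

    def find(x):
        # Find the set representative with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            # Both endpoints already connected: this edge closes a cycle.
            return True
        parent[ru] = rv
    return False

# A triangle motif is kept; a 4-node path (no cycle) would be discarded.
assert has_cycle(3, [(0, 1), (1, 2), (2, 0)])
assert not has_cycle(4, [(0, 1), (1, 2), (2, 3)])
```

Running such a check over the few thousand candidates produced by frequent-pattern mining is cheap, which is consistent with the efficiency argument in the rebuttal.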
Summary: This paper proposes FAPAT, which augments session-based recommendation models using item attributes. Specifically, FAPAT constructs attribute graphs and models user intent using pattern mining and an improved graph attention mechanism. Strengths: 1. The paper studies an important application task, i.e., session-based recommendation. 2. Experiments are conducted on 2 public datasets and 3 large-scale industrial datasets. Weaknesses: 1. Over-complicated method. Session-based recommendation methods that incorporate session graphs have been criticized for being overly complex. The significant increase in algorithm complexity generally yields limited improvements in performance. The method proposed in this paper, involving modules such as motif extraction and graph attention mechanisms, adds further complexity to this paradigm. However, based on the experiments in Table 2, even with such complexity, this method only outperforms the concise baseline GRU4Rec by a limited margin on the classic benchmark dataset diginetica. The results raise doubts about the necessity of such a complex approach. It could be possible to achieve comparable results by introducing attributes using a much simpler method. 2. Missing baselines. Two important baselines are missing. SASRec [1] is a highly popular baseline method for sequential/session-based recommendation, based on Transformer modules. FDSA [2] is a straightforward method based on SASRec that introduces item attributes. Both of these methods are concise and can be regarded as a basic version of the proposed model. These two methods should be included in experiments and compared against the proposed method. 3. Code is not available during the reviewing phase, making reproduction of the reported results difficult. [1] Kang et al. Self-Attentive Sequential Recommendation. ICDM 2018. [2] Zhang et al. Feature-level Deeper Self-Attention Network for Sequential Recommendation. IJCAI 2019. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to "Weakness". Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Over-complicated method** A1. Differing Perspective on GRU4Rec. I must respectfully disagree with the reviewer's perspective. The counter-example of GRU4Rec showcases remarkable performance on diginetica (mean length: 4.850). Nevertheless, its effectiveness significantly wanes on Tmall (mean length: 6.649) and our extensive industrial dataset (mean length exceeding 10). This results in substantial disparities (18.82 vs. 32.45, 73.95 vs. 92.72, 47.21 vs. 81.62, and 58.46 vs. 78.36). Such variations provide robust validation for the ingenuity and novelty driving our proposed approach. **W2. Missing baselines** A2: We extend our gratitude to the reviewer for highlighting the two baselines. We will comprehensively discuss these baselines in the final version. Regarding SASRec [1], which employs stacked Transformer layers, it can be seen as the non-pretrained S3-Rec model. The performance comparison is presented below:

| method | | diginetica | | | Tmall | | | Beauty | | | Books | | | Electronics | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Recall@10 | NDCG@10 | MRR@10 | Recall@10 | NDCG@10 | MRR@10 | Recall@10 | NDCG@10 | MRR@10 | Recall@10 | NDCG@10 | MRR@10 | Recall@10 | NDCG@10 | MRR@10 |
| SASRec | 32.15 | 17.86 | 13.52 | 13.69 | 9.47 | 8.16 | 85.18 | 70.87 | 66.21 | 67.09 | 52.10 | 47.25 | 61.14 | 45.31 | 40.08 |
| S3Rec | 33.48 | 18.58 | 14.04 | 18.24 | 12.30 | 10.46 | 89.64 | 75.56 | 70.99 | 75.00 | 58.54 | 53.23 | 74.36 | 56.03 | 50.16 |
| FAPAT | 37.42 | 21.31 | 16.39 | 32.45 | 22.02 | 18.72 | 92.72 | 76.29 | 71.09 | 81.62 | 61.08 | 54.39 | 78.36 | 56.81 | 49.80 |

FDSA [2] introduces separate channels for features, demonstrating the effectiveness of metadata. We are unable to present the results due to data security concerns. However, based on our experience, FDSA's performance lies between SASRec and S3-Rec. 
A comprehensive discussion of these two baselines will be included in the final version. **W3. Unavailable code** A3. Code cannot be uploaded currently due to security concerns. It will be released upon paper acceptance and the security department's approval. --- Rebuttal Comment 1.1: Title: Thank you for the constructive rebuttal Comment: Thank you for the additional details provided in response to my comments. **Reply to W2. Missing baselines - (1)** However, I believe it is important to address all concerns raised. Unfortunately, there's still a significant issue that hasn't been adequately addressed: the absence of a comparison with the FDSA baseline in your experiments. FDSA [2] introduces item attributes into SASRec, shares the same task formulation, and can be seen as a basic form of your model. It is therefore imperative to include it for a comprehensive performance evaluation. As previously mentioned, I'm concerned about the increased complexity of your proposed method. FDSA might help to test whether comparable performance can be achieved using a simpler method to introduce item attributes. Thus, I encourage you to include a comparison against FDSA in your final version. **Reply to W2. Missing baselines - (2)** In addition to the aforementioned comments regarding the missing baseline method FDSA, I also find the supplementary results you provided for SASRec to be unusual. It is generally accepted within the community that SASRec usually outperforms GRU4Rec on most datasets, as demonstrated in papers such as BERT4Rec [3], S^3-Rec [4], TiSASRec [5], CL4Rec [6], and HyperRec [7]. However, in the tables you've provided as a supplement, SASRec appears to underperform in comparison to GRU4Rec. This is notably uncommon and raises concerns regarding the reliability and validity of the results presented. 
I would greatly appreciate it if you could clarify these discrepancies, as proper baseline comparison is crucial for contextualizing the effectiveness of the proposed method. [2] Zhang et al. Feature-level Deeper Self-Attention Network for Sequential Recommendation. IJCAI 2019. [3] Sun et al. BERT4Rec: Sequential Recommendation with Bidirectional Encoder Representations from Transformer. CIKM 2019. [4] Zhou et al. S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization. CIKM 2020. [5] Li et al. Time Interval Aware Self-Attention for Sequential Recommendation. WSDM 2020. [6] Xie et al. Contrastive Learning for Sequential Recommendation. ICDE 2022. [7] Wang et al. Next-item Recommendation with Sequential Hypergraphs. SIGIR 2020. --- Reply to Comment 1.1.1: Title: Reply to Reviewer WQyi Comment: Thank you for engaging in a continued discussion regarding potential issues related to baselines. We are in agreement that the inclusion of more competitive baselines serves to bolster our claims and demonstrate the effectiveness of our approach. We are grateful to the reviewer for highlighting a pertinent work, FDSA, which regrettably was omitted from the current manuscript. We firmly believe that incorporating this reference will further enhance the strength and novelty of our ideas. Rest assured, we are committed to adding it in the final version. However, we would also like to address the reviewer's concerns regarding our present comparative results. It is important to acknowledge that our comparison already encompasses 13 baselines across five distinct techniques (as listed in Figure 1). This thorough evaluation constitutes a robust contribution to the field of session-based recommendations, even when contrasted with the broader literature on sequence recommendations, as you have pointed out. 
Additionally, it's worth noting that our evaluation is grounded in data of 100 million E-commerce interactions, a scale unparalleled even in papers from the industry. We emphasize that our innovation lies not in solely challenging sequence models, but rather in augmenting them by addressing potential limitations within graph neural networks. Our guiding principle is to explore alternative methods for constructing spatial structures to facilitate anonymous recommendations. Turning to the comparison between SASRec and GRU4Rec, we indeed recognize the observed performance disparities. It is important to highlight that we have adopted the cross-entropy loss as the optimization objective for all methods. While the original GRU4Rec implementation and paper suggest the use of the TOP1 and BPR loss functions, our experimentation revealed that the cross-entropy loss enhances the stability of GRU and even enables it to outperform transformers in scenarios with shorter session lengths. Nonetheless, we acknowledge that GRU still struggles to match the performance of the transformer architecture, as evident in the discrepancies in the Beauty, Books, and Electronics categories. We greatly value the reviewer's engagement in discussing and identifying distinctions among various architectures. We want to affirm our commitment to defending our stance and refuting unfounded allegations. Our intention is to foster a constructive exchange of ideas that strengthens the rigor of our work. Thank you once again for your thoughtful review, which continues to guide our revisions and improvements.
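For context on the loss functions discussed in the rebuttal above, the BPR objective for one (positive, negative) item pair is $-\log \sigma(s_{pos} - s_{neg})$. A minimal illustrative sketch (our own, not the paper's or GRU4Rec's code):

```python
import math

def bpr_loss(pos_score, neg_score):
    # Bayesian Personalized Ranking loss for a single item pair:
    # -log sigmoid(s_pos - s_neg). Smaller when the positive item
    # is scored well above the sampled negative item.
    diff = pos_score - neg_score
    return -math.log(1.0 / (1.0 + math.exp(-diff)))

# A larger positive-negative score margin yields a lower loss.
assert bpr_loss(2.0, 0.0) < bpr_loss(0.5, 0.0)
```

Unlike the full cross-entropy over the item catalog (used for all methods in the rebuttal's experiments), BPR only compares sampled pairs, which is one reason training behavior can differ between the two objectives.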
Summary: This paper presents a novel framework for session-based recommendations. The core idea of the method is to extract highly frequent attribute patterns from graphs to augment session sequence encoding. Specifically, it first leverages frequent graph pattern mining for attribute pattern retrieval. Then it applies GAT-based encoders for item representations and uses these representations as memory to facilitate session-based encoding. Finally, it converts item-side graph representations into sequences and aggregates them with session sequences for transformer-based model training. Empirical results compared with sequence-based and graph-based baselines show the advantages of the method. Strengths: The paper is well-written with respect to the high-level topic and intuitions. Good motivation of mining attribute features for sequence encoding enhancement, which is a significant problem to the community. Experiments are extensive and convincing. Experimental results on two public datasets and one industrial dataset seem promising compared with sequence-based and graph-based baselines. Comprehensive analysis in the ablation studies and the appendix. Weaknesses: Some clarifications could be clearer. For example, it mentions using gSpan to mine patterns and keeping patterns of the twenty types it shows. However, it is not clear why patterns of these twenty types are kept, e.g., whether it is common practice or whether it facilitates training. More powerful sequence-based approaches can be discussed. As far as I know, P5 [1] and M6-rec [2] are more powerful sequence-based baselines. It would be better to discuss these methods. [1] Geng, Shijie, et al. "Recommendation as language processing (rlp): A unified pretrain, personalized prompt & predict paradigm (p5)." RecSys 2022. [2] Cui et al. "M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems." Arxiv 2022. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Not being familiar with graph-based transformers, I mainly have concerns about the design choices mentioned in the method. - What is the reason for choosing the GAT-based encoder rather than GCN or GraphSAGE for learning attribute graph representations? - What are the insights and design intuition behind using attribute representations as memory to augment sequence encoding? - Why use sequences as the final representations rather than aggregating information using graph representations? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Design intuition and technical details can be explained more clearly. It would be better to provide discussion and comparison of more state-of-the-art baselines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Clarifications for pattern acquisition and baselines** A1. Among the motifs containing three or four nodes, we exclusively consider those featuring a cycle or a triangle. These two structural types play a crucial role in diminishing randomness and enhancing robustness. In response to the reviewer's suggestions, we will explore and elaborate on the recent literature. However, these references lack specificity regarding session-based recommendations, which in turn present more formidable challenges due to truncated behavioral histories and the absence of user profiles. **Q1. Reasons to choose GAT rather than GCN** A1. We tested various message-passing methods and found GAT to have higher recall. **Q2. Insights of memory augmentation** A2. Relying solely on temporal historical data remains a constraint of prior research. Simultaneously, forming graph topologies might introduce noise from random clicks, and graph neural networks may suffer from over-smoothing. Hence, we suggest incorporating the transformer architecture for the temporal perspective, while combining collaborative details via metadata and frequent attribute patterns in the spatial view. **Q3. Reasons to choose sequence models** A3. Recommendations based on session behaviors must consider temporal signals. Research is still ongoing on constructing suitable graph topologies for training and inference. Furthermore, graph neural networks may suffer from over-smoothing. --- Rebuttal Comment 1.1: Title: Reply to Reviewer BdTQ Comment: Dear Reviewer BdTQ, We hope our comprehensive rebuttal has addressed some of your concerns. With the discussion phase deadline drawing near, we kindly request the opportunity to provide further clarification or address any additional questions you may have. Your consideration is greatly appreciated. Many thanks, Authors of Submission 125 --- Rebuttal Comment 1.2: Comment: Thanks for your efforts.
My concerns regarding the design have been addressed and I will keep my rating.
Summary: This paper studies session-based recommendation in E-commerce. The authors propose to enhance user intent identification by constructing attribute transition graphs and using frequent attribute patterns as memory to augment session representations. The study employs frequent graph pattern mining algorithms to find consequential graphlets and uses these attribute patterns as accessible memory to augment session sequence encoding. It leverages multi-head graph attention to learn patterns and local session graph representations in an aligned space. The experiments are conducted on two public benchmark datasets and three large-scale industrial datasets, demonstrating notable improvement across various evaluation metrics. Strengths: 1. Session-based recommendation is an important research topic in data mining. 2. The proposed method achieves better results. 3. The authors provide sufficient comparisons with baseline methods. Weaknesses: 1. The motivations for the different model components are neither clearly stated nor discussed. The model components are introduced directly, without detailed discussion of their intuitions and the big picture. The experimental analysis also does not elaborate well on the reasons and technical findings. 2. The proposed method is complicated and its computational cost is high. Session-based recommendation has strict latency restrictions and it is unclear whether the proposed method meets these requirements. Further efficiency analysis is highly needed, which is important for supporting the main motivation. 3. The evaluation is purely offline and no online experimental result is reported. It is not clear whether the proposed method can be deployed online with acceptable effort. 4. The proposed framework is a combination of mature techniques. Thus, the technical contributions are not quite salient. More empirical analysis could be made. 5.
It would be helpful to compare the results on long-tailed/cold-start attributes and items, since the proposed method seems to be suitable for handling these cases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What is the computational cost of this work? Is there any analysis of the real training and inference time? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors mainly discussed the limitations at the technical level, but do not address the potential negative societal impact. Some discussion of recommendation fairness and diversity could be added. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Unclear motivation** A1. In E-commerce services, such as scenarios involving new users or users in private mode, session-based recommendation (SBR) is challenging because it cannot rely on user profiles. Solely relying on temporal data is a limitation of prior research. Simultaneously, constructing graph topologies might introduce noise due to random clicks, and graph neural networks may suffer from over-smoothing. In Section 6, our experimental evaluation begins by assessing next-item prediction through metrics like Recall and ranking metrics (MRR and NDCG). We underscore the importance of attribute patterns and our proposed graph-nested attention. Furthermore, we conduct two experiments focused on intent capture by estimating attribute predictions and period-item recommendations. We welcome suggestions and comments aimed at enhancing clarity, if feasible. **W2. Latency and cost** A2. Computation encompasses offline and online components. The offline facet involves pattern mining and retrieval, while the online facet entails encoding session data via attribute pattern augmentation. The offline costs can be disregarded, since they scale nearly linearly with data size, and the resulting data structures are far fewer than the original sessions. As depicted in Table 6, attributes and patterns number in the thousands, whereas sessions can reach several million. Moreover, the graph density of the resulting frequent patterns remains stable at around 1.0. In contrast, other methods (e.g., global and shortcut graphs) might exhibit densities 10x or even 100x higher. As evident from our runtime, our model operates as efficiently as transformers. **W3. Online evaluation** A3. Online assessment of recommendations entails corporate security considerations, necessitating approval from the company. **W4. Combination of mature techniques** A4. We respectfully disagree with the reviewer's perspective.
Contemporary data mining technologies find widespread utility across diverse fields. We employ these techniques to reliably extract refined patterns, utilizing them as a repository to enhance recommendations. Furthermore, we introduce graph-nested attention, showcasing its effectiveness by outperforming state-of-the-art approaches by an average of 4.5%. Our framework stands poised to enrich the SBR and graph-related retrieval research community. **W5. Long-tail and cold-start evaluation** A5. In Figure 6, we address the cold-start scenario. Short-period cases relate to brief sessions, where our model excels in stability. **Q1. Computational cost** A1. Please see W2 and A2 for the cost analysis. Due to the attention mechanism's parallelism, both training and inference can be as efficient as regular transformers. There's an additional memory attention cost, but it's capped at 12 patterns. --- Rebuttal Comment 1.1: Title: Reply to Reviewer 2VLV Comment: Dear Reviewer 2VLV, We hope our comprehensive rebuttal has addressed some of your concerns. With the discussion phase deadline drawing near, we kindly request the opportunity to provide further clarification or address any additional questions you may have. Your consideration is greatly appreciated. Many thanks, Authors of Submission 125
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a framework that effectively utilizes attribute graph patterns to enhance anonymous sequence encoding for session-based recommendations. Given the lack of personal information in session-based recommendation scenarios, making accurate item suggestions within the session is crucial. The authors performed an extensive experimental evaluation, and it seems there is an improvement. I would appreciate more information on the methodology used for the experimentation. Minor comments: The citations are clickable, which is convenient for further reading. It would be beneficial to add references to Table 1 for better context. Providing more explanation in lines 70-75 would enhance clarity and understanding. While I'm not an expert in the area, it seems that the related work section could benefit from including more recent papers. Currently, there is only one paper from 2023. Figures 5-6 are clear and easy to read, which is commendable. Strengths: 1. Extensive experimentation Weaknesses: 1. The novelty of the method is not clearly demonstrated 2. The paper is a bit hard to follow 3. Some examples of the data could be useful in the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you please list a number of current practical applications that would benefit from the proposed method? Could you provide more information about the costs associated with the model? Additionally, I'm interested in knowing about the latency, or response time, of the model. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Unclear novelty** A1. Session-based recommendation (SBR) does not have access to user profiles in E-commerce services, such as for new users or users in private mode. Solely relying on temporal data remains a limitation of prior research. Simultaneously, constructing graph topologies might introduce noise from random clicks, while graph neural networks can exacerbate over-smoothing. We thus suggest a transformer architecture for the temporal view, while aggregating collaborative data via metadata and frequent attribute patterns in the spatial view. Comprehensive experiments underscore the proposal's novelty and effectiveness. **W2. Hard to follow** A2. We commence by defining the problem and subsequently introduce the session and transition graph construction procedure. The primary methodology comprises three parts: (1) acquiring frequent attribute patterns through data mining, (2) encoding sessions via relevant pattern retrieval and memory augmentation, (3) providing recommendations through attention. In the experimental section (Section 6), we initially assess next-item prediction by computing Recall and ranking metrics (MRR and NDCG). We demonstrate the significance of attribute patterns and our proposed graph-nested attention. Furthermore, we conduct two experiments on intent capture by estimating attribute predictions and period-item recommendations. We welcome suggestions and comments to enhance clarity, if feasible. **W3. Example illustration** A3. In Figure 1 and Figure 2, we present two cases: silver ↔ silver ↔ blue ↔ blue, which illustrates the user's color intent; additionally, the brand pattern Apple ↔ Apple ↔ Samsung suggests a potential change in intent. **Q1. More practical applications** A1. Session-based recommendation (SBR) is a direct industrial use case, while our method also aids other graph-related applications, such as question answering over graphs. **Q2. Latency and cost** A2. Computation involves offline and online parts.
Offline computation includes pattern mining and retrieval, while online computation encodes session data with attribute pattern augmentation. Offline costs are negligible since they scale linearly with data size, and the resulting structures are far fewer than the original sessions. As seen in Table 6, attributes and patterns total several thousand, while sessions can number several million. The graph density of the frequent patterns remains stable at around 1.0, in contrast to methods like global or shortcut graphs, which can be 10x or 100x denser. As evident from its performance, our model runs as efficiently as transformers. --- Rebuttal Comment 1.1: Title: Thank you Comment: I have read the authors' rebuttal, thanks a lot for all the provided explanations. I will change my score accordingly. --- Reply to Comment 1.1.1: Title: Reply to Reviewer jZGH Comment: We extend our gratitude for your alignment with this endeavor and for your active engagement in the analysis. Our confidence remains steadfast in both the strength of our conceptual framework and the validity of our current results. Additionally, the potential of related work within this domain sparks our excitement. Your insightful comments and comprehensive reviews are deeply valued, broadening the reach of our audience and elevating the quality of this paper.
A Reduction-based Framework for Sequential Decision Making with Delayed Feedback
Accept (poster)
Summary: The paper studies sequential decision-making problems with delayed feedback. The authors propose a reduction that converts standard sequential decision-making algorithms into algorithms for the delayed-feedback problem, providing regret guarantees in many settings: linear bandits, tabular RL, RL with linear and general function approximation, tabular and linear zero-sum Markov games, and tabular general-sum Markov games. Strengths: - The authors improve previous results in the delayed feedback setting for multi-armed bandits, linear bandits, tabular MDPs, and tabular multi-player general-sum MGs. - The authors provide the first regret guarantees (in the delayed feedback setting) for linear MDPs, general-function-approximation MDPs, and tabular and linear two-player zero-sum MGs. - The strategy is simple: - single-agent: apply a multi-batched RL algorithm to the delayed feedback setting, running some extra steps depending on the feedback delay $\tau_k$. - zero-sum tabular and general-sum tabular: use the doubling trick to update the policy. - zero-sum linear: update the policy when there exists $h \in [H]$ such that $\det(\Lambda^k_h) > \eta \cdot \det(\Lambda_h)$. Weaknesses: ### Presentation The presentation of the paper can be improved. The paper proposes a general strategy to convert multi-batched algorithms to the delayed feedback setting, changing (if I understood correctly) when the policy is updated. This is a simple and elegant transformation, but it is not very clear from the presentation, where the authors divide the transformations into three sections without making the connection between them explicit. I suggest providing an algorithm for the general setting which changes the update depending on the setting. Moreover, there are some minor issues in the way the algorithms are presented: - provide the input parameters in a clear way (e.g., in Algorithm 1, what is $l_m$? In Algorithm 2, how can we compute $\tau_t$, since it is a random variable? In Algorithm 4, what is $i$?)
- specify the outputs of the algorithms. ### Novelty The simplicity of the strategy can be seen as a lack of novelty (see also the previous point under Presentation). I suggest making the challenges clearer. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See weaknesses. Some minor questions: - In Algorithm 2, do we know $\tau_t$, or do we wait until we receive the feedback and only then evaluate $\tau_t$? - From a technical point of view, what are the main challenges in converting regret bounds from the "classic" setting to the delayed feedback one? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper is mainly theoretical and the algorithms presented are not easily usable in practice. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reading and valuable comments! Q1: Presentation of the framework A1: In fact, the framework we propose in Algorithm 2 is already for the general setting, which covers both single-agent and multi-agent, both tabular and linear. This framework can transform any multi-batched algorithm into an algorithm in the delayed setting, regardless of the specific type of batches. The "three sections" you mentioned are a strategy for designing multi-batched algorithms, and are not part of the main framework. Q2: Parameters in Algorithm 1 A2: $\ell^m$ is the length of the policy sequence $\pi^m$ calculated by the algorithm. It is not an input parameter. For example, in the phase elimination algorithm for linear bandits in Lattimore and Szepesvári (2020), $\ell^m=T_m\approx 2^m$ (for the definition of $T_m$ see Appendix C). Q3: $\tau_t$ in Algorithm 2 A3: In Algorithm 2, we do not know $\tau_t$ in advance. In fact, we do not need to know $\tau_t$ or evaluate it during the whole algorithm. We only wait until we receive enough feedback so that the stopping condition in this batch is satisfied. In other words, the algorithm does not rely on knowledge of $\tau_t$; it is just a mathematical symbol for ease of presentation. Q4: Parameter $i$ in Algorithm 4 A4: In Algorithm 4, $i$ is used to state the trigger set $\mathcal{L}$. $\mathcal{L}$ contains all powers of 2, as long as they do not exceed $KH$. Q5: Outputs of the algorithms A5: We will clarify the outputs of the algorithms in our revision. Most of them are sequential decision making algorithms that output a sequence of policies. Thanks for your suggestion. Q6: Novelty and simplicity A6: Our framework is simple yet effective, as it can handle various settings, and using our framework we obtain many results that match or surpass previous results. We believe that simplicity and effectiveness are also a form of novelty. Besides, we also summarize our novel contributions as three points.
Please see our **common rebuttal** for more explanation. Q7: From a technical point of view what are the main challenges in converting regret bounds from the "classic" setting to the delayed feedback one? A7: The key point is to decompose the regret bound into two parts: (i) the regret bound of multi-batched algorithms in the classic setting; and (ii) the additional regret incurred by the waiting time at the end of each batch. The first part follows previous results and the second one uses the property of multi-batched algorithms and stochastic delay. Our proof is simple but effective in the sense that we can obtain a unified result for decision making with delayed feedback. Moreover, when specialized to concrete problems, the obtained results match or even improve existing results. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed replies. I continue to recommend the acceptance, hoping the authors will address the clarity issues.
Summary: This submission studies the problem of learning from delayed feedback in various online learning settings including multi-armed bandits, linear bandits, linear MDPs, RL with function approximation, and various Markov game settings (among others). Delays are assumed to be i.i.d. random variables. Additionally, they are sometimes assumed to be subexponential random variables. The authors' main result is a framework for sequential decision making with delayed feedback. Their framework makes use of "multi-batched" algorithms, which, as the name suggests, run in batches. In each batch, a multi-batched algorithm outputs a sequence of policies to be used one after another (which are trained using data from previous batches), as well as a stopping criterion which decides when to stop the current batch. After a batch is run, the new data is incorporated with the existing data and is used to produce a new sequence of policies and stopping rule for the next round. At a high level, the framework converts a multi-batched algorithm for the setting of interest (e.g., linear bandits) to one which can handle delayed feedback. The key idea is to run the policy given by the multi-batched algorithm for some extra time-steps in order to satisfy the stopping criterion. The authors are then able to bound the additional regret due to the delayed feedback by bounding the number of additional steps needed to satisfy the stopping criterion. By instantiating their framework in the various settings listed above, they can obtain provable guarantees on the learner's regret. The rates obtained improve upon existing results in some settings (e.g., linear bandits and tabular MDPs), are slightly worse in others (e.g., multi-armed bandits), and are the first of their kind in other settings (e.g., RL with general function approximation). Strengths: The problem of delayed feedback in online learning settings is a well-motivated problem with many real-world applications.
Furthermore, the authors are the first to obtain results for learning with delayed feedback in a variety of (important) settings such as RL with general function approximation, as well as improving upon existing results in others such as linear bandits. The two main strengths of this paper are the simplicity/clarity of the proposed framework and the sheer number of results the authors are able to obtain using their framework. It is impressive that results for so many important settings can be obtained by applying one simple framework. Moreover, all that is required to use their framework is a multi-batched algorithm, and such algorithms already exist in the literature for many online learning settings. Weaknesses: At times, the writing feels a bit rushed and as a result the submission may be a hard read for someone who is not an expert in RL theory. Specifically, I felt that the multi-agent results were somewhat lacking in detail. (For example, why is the CCE the "right" solution concept, what happens when all players do/don't play a CCE policy, etc.) This is probably due to the fact that the authors have lots of results which they wish to highlight in the main body of the paper. One suggestion would be to move either subsection 5.1, 5.2, or 5.3 to the Appendix, and use the extra space to provide more background/detail/hand-holding for the reader. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: I was a bit confused by the notation in line 9 of Algorithm 2. Shouldn't this read (both in words and mathematically) "Collect trajectory feedback in this batch that is observed **by** the end of the episode"? Would your framework be able to obtain stronger results if, instead of Assumption 2, the delays are assumed to be subGaussian random variables? A reference is missing to "Banker Online Mirror Descent: A Universal Approach for Delayed Online Bandit Learning" by Huang et al. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reading and valuable comments! Q1: Writing and readability A1: To comply with the page limit, we have moved the details about multi-agent Markov games to the appendix. We will polish the contents to ensure clarity and coherence for the readers in the revision. We appreciate your valuable feedback. Q2: Why do we consider the CCE in general-sum Markov games? A2: Both NE and CCE are common learning objectives in Markov games. However, finding a Nash equilibrium is computationally hard in general-sum Markov games. In this case, a coarse correlated equilibrium (CCE) is a more tractable notion of equilibrium that strictly generalizes NE. Unlike NE, a CCE can be computed efficiently in polynomial time. Q3: Notation in line 9 of Algorithm 2 A3: You are right. We will correct this in our final version. We mean that in the $k$th episode, after executing the batch policy we collect all the feedback that has been delayed until the end of the episode. Q4: Stronger results using the subgaussian assumption A4: You are right. If we use the subgaussian assumption instead of the subexponential assumption, we can get a stronger result, since we can use a different concentration inequality. In fact, the subgaussian assumption is stronger than the subexponential assumption, since any subgaussian random variable is also subexponential. Q5: Reference to Huang et al. A5: Thanks for your reminder. We will add a reference to this paper in our revision. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. As a follow-up to Q2, a third choice of equilibrium that is natural to consider is a correlated equilibrium (CE). Could your framework be adapted to learn a CE instead? As a follow-up to Q4, how do you hypothesize your regret rate in Theorem 1 would change under a subGaussian delay assumption? --- Reply to Comment 1.1.1: Comment: Thanks for your reply! Follow-up to Q2: We can also extend our framework to learn a CE.
The key is to devise an algorithm that can learn a CE with a low batch number in the undelayed environment, and then integrate it with our framework to handle the delayed environment. For instance, in the tabular Markov game setting, we can modify the CE-version of the V-learning algorithm from (Jin et al., 2021) by using a similar idea as in our Algorithm 4, and obtain a multi-batched version of the algorithm for this setting. Follow-up to Q4: If we assume that the delay is $\sigma$-subgaussian, the problem-dependent constant $C_\tau$ in (7) will instead be $\sqrt{2\sigma^2\log(3KH/2\delta)}$.
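A minimal sketch of where such a constant would come from, assuming the delay $\tau$ is $\sigma$-subgaussian with the standard tail bound $\Pr(\tau \ge t) \le \exp(-t^2/(2\sigma^2))$ and the tail probability set to $2\delta/(3KH)$:

```latex
% Sketch under the sigma-subgaussian assumption (tail probability 2*delta/(3KH)):
\[
\exp\!\left(-\frac{t^2}{2\sigma^2}\right) \;\le\; \frac{2\delta}{3KH}
\quad\Longleftrightarrow\quad
t \;\ge\; \sqrt{2\sigma^2 \log\!\left(\frac{3KH}{2\delta}\right)},
\]
% so the delay exceeds C_tau = sqrt(2 sigma^2 log(3KH/(2 delta)))
% with probability at most 2*delta/(3KH).
```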
Summary: The paper presents a new framework for handling stochastic delayed feedback in general sequential decision-making problems, encompassing bandits, single-agent Markov decision processes (MDPs), and Markov games (MGs). The authors introduce a novel reduction-based framework that converts any multi-batched algorithm for sequential decision-making with instantaneous feedback into a sample-efficient algorithm capable of managing stochastic delays. They provide various examples, demonstrating the efficacy of their framework in different scenarios, and present several new results, improving existing findings for linear bandits, tabular RL, and tabular MGs. Strengths: The paper addresses a significant and practical issue in sequential decision-making: stochastic delayed feedback. It is well-motivated with practical examples like recommendation systems, robotics, and video streaming. The paper presents a novel, reduction-based framework that shows versatility across multiple domains (bandits, single-agent MDPs, and MGs), exhibiting a comprehensive approach. The proposed solution offers theoretical advancements by improving the regret bounds for linear bandits and tabular RL and provides the first theoretical guarantees in RL with function approximation and multi-agent RL settings. The authors effectively integrate existing multi-batched algorithms into their framework, illustrating the generality of their approach. Weaknesses: The contribution seems rather incremental. The idea that multi-batched algorithms can be used in the delayed setting is not new. What appears to be new are the theorems bounding the regret when multi-batched algorithms are used for delayed feedback in a variety of settings.
The authors could strengthen their paper with some simple experiments to add to the theory. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Can you elaborate more on the specific limitations of the current state-of-the-art methods that your framework is intended to overcome? Can your framework be generalized to handle other types of delays, or is it specifically designed for stochastic delays? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Can we develop new algorithms that work even better than existing multi-batch algorithms? The paper leaves open the question of whether their results are tight for MDP and MG with delayed feedback. Thus, there is room for future work to tighten these results. The paper suggests that a multi-batched algorithm with a smaller batch number for tabular Markov games could be derived, which may provide more efficiency, but does not provide it in this paper. It remains unclear if their findings and the proposed framework will translate effectively into practical scenarios. The authors have not provided any real-world evaluations or use-cases to support their claims. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reading and valuable comments! Q1: Is the contribution rather incremental? Explain specific limitations of the current state-of-the-art methods. A1: We respectfully disagree with the reviewer's argument that our contribution is rather incremental. Please see our **common response** for more explanation. Q2: Can our framework handle other types of delays? A2: Our framework is specifically designed for stochastic delays. As we mentioned in Section 1.1, our framework achieves a better result than directly applying the reduction from the adversarial delay setting to the stochastic delay setting. Q3: Other limitations A3: For other limitations, such as designing multi-batched algorithms with smaller batch numbers, we have discussed them in our paper and leave them as future work. One possible direction is to follow the idea of Zhang et al. (2022b) and adapt their algorithm to the linear RL/Markov game setting. In terms of numerical results, we will consider adding some experiments in a future version. Thanks for your suggestion. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thanks for clarifying the contributions. After reading the rebuttal and the other reviews I wouldn't be opposed to voting for acceptance if the authors can include additional discussion about the contribution in the camera-ready paper. --- Reply to Comment 1.1.1: Comment: Thank you for dedicating your time and providing your valuable support. We **promise** that we will incorporate the discussions into the revision. We would appreciate it if you would reconsider your score in light of our clarification.
Summary: This paper provides a general framework for analysing any ‘mini-batch’ or rarely switching algorithm in the presence of delayed feedback. This approach covers bandits, finite horizon MDPs and finite horizon Markov games. The results provided match or improve on the best-known results for delayed feedback in settings that have been studied before, and provide the first bounds in some settings where delayed feedback has not been studied before. Strengths: It is nice to have a comprehensive study of delayed feedback in various settings. It is interesting that phase-based algorithms allow one to deal with delayed feedback effectively in a variety of settings. It is nice that the general framework provided in this paper allows us to recover the same bounds as for algorithms for specific settings and provides new bounds in settings where delayed feedback has not been studied previously. Weaknesses: The writing and clarity could be improved. I also found the structure quite confusing with a couple of pages dedicated to Markov games while a lot of the content needed to understand that section (e.g. related work and algorithms) relegated to the appendix. The amount of space dedicated to Markov games also meant that there was not sufficient room to discuss the results for delayed feedback in bandits/RL, nor for all details of the method/results to be provided (see the many questions below). The idea of using phase-based algorithms which switch arms less frequently for delayed feedback is not new and has been used in e.g. Lancewicki et al (2021), Howson et al (2021), Pike-Burke et al (2018), Vakili et al (2023),… This should be stated in section 4. Additionally, I believe Vakili et al (2023) also provide the same bound for the linear bandit setting with delayed feedback as is established in this paper, so this should also be checked and added. References: Vakili, Sattar, et al. "Delayed Feedback in Kernel Bandits." ICML, 2023. Pike-Burke, Ciara, et al. 
"Bandits with delayed, aggregated anonymous feedback." ICML, 2018 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The two results in e.g. theorem 1 are a bit confusing and the difference between them is not well explained. Is the second one (7) on the expected regret and the first (6) on the high probability regret? For (6), I am assuming that q is a parameter which can be chosen, can any guidance be provided for how to choose it? Does it depend on \delta? I am interested in whether experimentally these rarely switching algorithms actually perform better than algorithms tailored to the specific setting, or just versions of optimistic algorithms that only use available data? A couple of times ‘minimax optimal’/’tight’ is stated. Are there lower bounds for all these settings with delayed feedback? Can the lower bounds/best results for the non-delayed settings be added to the table of results to enable us to clearly understand what the impact of delayed feedback is? In the general algorithm provided in section 4 and algorithm 2, do we need to know the delay distribution to define the batch lengths? The example in the text would suggest that we do, but the pseudo-code does not mention it. In the introduction it is mentioned that the sub-exponential assumption on the delay distribution is only sometimes needed, yet I cannot find any discussion of when it is used for the main results. Is it needed for all results in Theorem 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Very brief discussion of limitations in conclusion Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reading and valuable suggestions! Q1. Writing and clarity A1. To comply with the page limit, we have moved the details about Markov Games to the appendix. We will arrange the contents to ensure clarity and coherence for the readers. We appreciate your valuable feedback. Q2. Novelty of our idea A2. We point out that unlike previous works that propose different phase-based algorithms and analyze them case by case, our work introduces a generic class of algorithms that can be integrated with any multi-batched algorithm in a black-box manner. We also provide a unified theoretical analysis for the proposed generic algorithm. Moreover, papers such as Howson et al (2021) and Pike-Burke et al (2018) fail to fully exploit the potential of multi-batched algorithms, and their bounds are inferior to ours. (see Table 1 for a comparison of the results) Q3. Checking Vakili et al (2023) A3. Our work does not consider the kernel setting, but we conjecture that our framework can handle this problem by designing a multi-batched algorithm for kernel bandits. We appreciate your suggestion of this related work and we will discuss it further in the revision. Q4.Explanation on results in Theorem 1 A4. Both (6) and (7) are high-probability regret bounds. Here (7) holds with probability $1 - \delta$. Thank you for pointing this out and we will clarify in the revision. The main difference is the use of concentration inequality for the number of delays. (6) uses concentration for sub-exponential delays, and (7) uses concentration in the quantile form. The parameter $q$ can be any real number in (0,1) and does not rely on other parameters such as $\delta$. Q5. Numerical experiments A5.We will consider adding numerical experiments to compare the performance with algorithms tailored to the specific setting. 
However, we note that all these algorithms are optimistic algorithms, and the main difference lies in the timing of policy update, which directly affects the regret. Q6.Lower bounds A6.Here we can give a simple lower bound. Consider the case when the delay $\tau$ is a constant. For this case a trivial lower bound is $\Omega(R\tau)$ where $R$ is the upper bound of reward in each episode ($R=1$ in bandits, and $R=H$ in episodic MDP), since if we run the algorithm for only $\tau$ episodes, then the algorithm cannot observe anything, and it cannot do better than tossing a coin. Together with existing lower bound for undelayed problems, we obtain a lower bound $\Omega(\text{undelayed lower bound} + R\tau)$. By this lower bound, we know our results are tight in bandits setting. In MDPs and MGs, we improve existing results or provide the first line of study. Q7.Do we need to know the delay distribution in advance? A7.We do not need to know the delay distribution in advance. Moreover, we do not have to predefine the length of each batch beforehand. We simply run the batched policy until we collect enough data for the batch to stop. Q8.Where is the sub-exponential assumption used? A8.The sub-exponential assumption on the delay distribution is needed when we derive regret bound (7) in Theorem 1. It is used for concentration of delays. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for answering some of my questions. However I think there are still some things that are not clear. Q2: I agree that the unifying framework the authors provide is new. However, I think that it is good scientific practice to acknowledge that the general idea of using rarely switching algorithms to mitigate the effects of the delayed feedback is not new, so it would be good if the authors mentioned that in Section 4. Q3: Vakili et al (2023) already uses a rarely switching algorithm since it is based on the BPE algorithm of Li & Scarlett (2022). 
Does the BPE algorithm thus fit into the framework in this paper? In which case what results are obtained? Also note that when the results of Vakili et al (2023) are applied to the linear kernel setting, they improve upon those of Howson et al (2021) to obtain the same $E[\tau]$ penalty as is obtained in this work. Therefore, the table on pg2 should be updated to include the results of Vakili et al (2023) for linear bandits. Q8: Does this mean that the sub-exponential assumption is not used for the regret bound in (6)? This should be mentioned. Li, Z. and Scarlett, J., 2022. Gaussian process bandit optimization with few batches. In International Conference on Artificial Intelligence and Statistics (pp. 92-107). --- Reply to Comment 1.1.1: Comment: Thanks for your reply! Q2: We will add discussions on the ideas of previous works and emphasize our contributions in the revision. Thank you for your suggestion. Q3: In the kernel bandit setting, the BPE algorithm also fits into our framework. In this case, our framework gives a $\tilde{O}(\Lambda\sqrt{T\gamma_T}+\mathbb{E}[\tau]+C_{\tau})$ regret upper bound when the delays are subexponential (For the notation $\Lambda$ and $\gamma_T$ see Li & Scarlett (2022)). We will modify Table 1 to include results of Vakili et al (2023). Thank you for your suggestion. Q8: You are right. The sub-exponential assumption is not used for the regret bound in (6). We will make this more clear in our revision.
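The constant-delay lower-bound argument sketched in A6 above admits a one-line formalization. This is only a restatement of the rebuttal's reasoning, with $\mathrm{LB}_{\mathrm{undelayed}}(T)$ standing for the known undelayed lower bound and $c$ an absolute constant:

```latex
% For constant delay \tau, nothing is observed during the first \tau episodes,
% so any algorithm incurs regret \Omega(R\tau), with R = 1 (bandits) or R = H (episodic MDPs).
\mathrm{Regret}(T)
  \;\ge\; \max\bigl\{\,\mathrm{LB}_{\mathrm{undelayed}}(T),\; c\,R\tau\,\bigr\}
  \;\ge\; \tfrac{1}{2}\bigl(\mathrm{LB}_{\mathrm{undelayed}}(T) + c\,R\tau\bigr)
  \;=\; \Omega\bigl(\mathrm{LB}_{\mathrm{undelayed}}(T) + R\tau\bigr),
% using \max\{a, b\} \ge (a + b)/2 for nonnegative a, b.
```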
Rebuttal 1: Rebuttal: ## Common rebuttal for the contributions of our paper: We respectfully disagree with the reviewers’ argument that our contribution is rather incremental. Here we describe some drawbacks of previous works, and how our work addresses them. The previous works suffer from several drawbacks: (i) they require case by case algorithm design and analysis; (ii) their regret bound is not tight; (iii) they do not explore some settings (linear MDPs and MGs). Our work addresses these challenges, and our contributions are as follows: (1)new framework: We propose **a generic class of algorithms** that can be integrated with **any** multi-batched algorithm in a black-box fashion. Meanwhile, we provide a **unified** theoretical analysis for the proposed generic algorithm. (2)improved results and new results: By applying our framework to different settings, we obtain state-of-the-art regret bounds for multi-armed bandits, and derive sharper results for linear bandits and tabular RL, which significantly improve existing results. (3)new algorithm: we design new multi-batched algorithms for multi-agent Markov games, and handle the delayed feedback in Markov games by combining our framework and the newly designed algorithm.
NeurIPS_2023_submissions_huggingface
2023
Dense and Aligned Captions (DAC) Promote Compositional Reasoning in VL Models
Accept (spotlight)
Summary: This paper studies the problem of getting models like CLIP to perform compositional reasoning. One observed problem with these models is that they devolve into "bags of objects" models, and this work seeks to address their variable-binding ability towards identifying more complex relationships. This paper takes a data-first approach, hypothesizing that the data behind these models might be flawed. The paper starts with the Conceptual Captions 3M dataset. The quality of captions is improved by: 1. Using BLIP2 + OPT to recaption the images 2. Using a GPT-Neo-2.7B model to perform "knowledge expansion". This hallucinates extra text for the given caption, though it should be noted that this is addressed by MIL losses that are introduced. 3. Using segment-everything (SAM) to get a bunch of regions, which are then fed to BLIP2. 4. Using "negative text augmentation" to change words around to form captions that don't match the image. The paper studies CLIP models that are finetuned on CC3M, using LoRA to reduce catastrophic forgetting. The CLIP losses are extended to incorporate the negative text augmentation strategy as well as to address the knowledge-expansion hallucination issue with MIL. The paper evaluates on VL-Checklist, ARO, and Elevater, which are all datasets for compositional reasoning. Linear probing results show that the model performs better on these datasets with benefits on compositionality. --- update: increasing my score from 6->7 as my main concerns were resolved. I vote to accept this paper. I think I'm in agreement with the other reviewers, except for bpWN, whose concerns I don't really understand (they feel like curious questions IMO but not a reason to reject the paper). Strengths: Overall I am a fan of this paper. I think it addresses an important question: how to make CLIP models more robust to compositional reasoning and variable binding, and it does so by addressing an often-understudied aspect, the role of data.
The experimental results and ablation study seem solid to this reviewer, and suggest that the various recaptioning / data augmentation strategies help. Weaknesses: To this reviewer, the paper seems strong overall. However, I think the collection of caption augmentations seem a bit complicated and so I'm left wanting to know which is the most effective in terms of overall "bang for your buck". For Figure 3 I'm not convinced by the use of CLIP score here. If the paper presents ways of exceeding CLIP performance on compositional data then I'm not sure if CLIPScore is a reasonable metric for evaluating captions that are truly better than CLIP. I would also prefer if these improvements were tried on a larger CLIP model, as ViT-B/32 seems rather small. See the questions below - I think also they could improve the paper if addressed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * What strategies are the most effective in terms of pretraining/inference compute? * Do these results hold across model sizes (e.g. CLIP models and recaptioning models)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: thanks for adding a limitations section! Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. In the following, we provide a response to the questions raised in the review: 1. **On the most cost-effective caption enrichment strategy:** Great question! In the ablation section 5, in Table 4 of the paper, we evaluated the contributions of different caption enhancement and enrichment strategies. From that table, we can see that all techniques proposed in DAC are effective on their own, while the biggest gains are delivered when all are combined. That said, we can observe in this table that the LLM expansion is providing the biggest leap forward (over 14% improvement to the base model) when employed with MIL losses for handling noise and negative augmentation during training (these are performed on the fly and do not require any collection). It is our belief that LLM expansion may be the most cost-effective strategy in the case we already have a VL dataset containing images paired with captions and we are interested to finetune on this dataset to enhance the model’s compositional reasoning. One of the reasons for this being cost-effective is that LLMs research develops very rapidly these days, with large and performing open source models released on a regular basis together with efficient finetuning and inference techniques for these LLMs (e.g. qLoRA and Text Generation Inference server by HF). In cases one would like to employ an unlabeled image collection, quality enhancement (captioning) would additionally be required and, as can be seen from the table, quality + LLM density expansion leads to the best average improvement overall. We will add this discussion to the paper. 2. **On CLIPscore in Fig. 3:** In order to complement the analysis in Fig. 3 that indeed utilized the CLIP score as a proxy for quality (being most cost-effective automatic metric), we conducted the following user study. We have asked 121 human subjects to review 110 images randomly sampled from CC3M. 
For each presented image we offered 3 choices to the subject asking which choice better fits the image. A and B were captions, either produced by BLIP2 for the given image or the image’s original CC3M caption (which of those is A and which is B was randomized to prevent any bias). Option C was “neither caption fits the image”. We found that 80.7% of the responses favored the BLIP2 caption, 2.2% preferred the original caption, and “neither” was chosen in 17.1% of the cases. This indicates that BLIP2 captions are indeed better aligned with human perception than the alt-text collected from the Web in CC3M. Intuitively, such better alignment to humans, who are inherently good at compositional reasoning, likely leads to the significant compositional reasoning average performance improvements observed when gradually increasing the percent of the higher quality (BLIP2) captions in the fine-tuning data (Fig 3c in the paper). Thanks for suggesting to enhance the CLIPscore analysis, we will include this additional analysis above in the paper. 3. **On experimenting on a larger CLIP model:** Thanks for this suggestion! We have performed this experiment in Table 1 of the global response PDF. As can be seen from the table, we have additionally tested DAC on the larger Vit-L/14 (bigger model and more image patch tokens of smaller size) also pre-trained and released by OpenAI. As a result, we can observe that DAC successfully improved the Vit-L/14 compositional reasoning average performance by 23.2%, similarly as for Vit-B/32, without reducing its representation power and vision and language alignment evaluated on ELEVATER. We will add this to the paper. --- Rebuttal Comment 1.1: Title: thanks! 6->7 Comment: thanks for the helpful response! increasing my score from 6->7 as I think you've addressed all my main concerns.
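As a side note on the "negative text augmentation" mentioned in the review above (changing words in a caption so it no longer matches the image): the idea can be illustrated with a minimal attribute-swap sketch. The word list and function below are hypothetical illustrations, not the actual SVLC/TSVLC implementation, which uses richer language-driven substitutions over attributes and relations.

```python
import random

# Hypothetical attribute vocabulary used only for this illustration.
COLOR_WORDS = {"red", "blue", "green", "black", "white", "yellow"}

def make_hard_negative(caption: str, rng: random.Random) -> str:
    """Swap the first attribute word for a different one, producing text
    that is still plausible language but no longer matches the image."""
    tokens = caption.split()
    for i, tok in enumerate(tokens):
        if tok in COLOR_WORDS:
            tokens[i] = rng.choice(sorted(COLOR_WORDS - {tok}))
            return " ".join(tokens)
    return caption  # no swappable word found; caller may skip this sample

rng = random.Random(0)
neg = make_hard_negative("a red car parked near a blue house", rng)
```

During training, such a negative caption would be contrasted against the image alongside the original positive caption.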
Summary: Current vision-language (VL) models suffer from "object bias" issues; this paper proposes two components to improve the compositional reasoning capability of existing VL models. It employs pretrained LLMs to augment the captions of the images in terms of quality and density, then finetunes the CLIP models on the CC3M corpus, and demonstrates promising results on two benchmarks. Strengths: 1. This paper employs a strong pretrained LLM to enhance the caption quality and quantity of the images, finetunes the pretrained CLIP on the augmented CC3M corpus, and shows promising results on two compositional reasoning benchmarks. Weaknesses: 1. An incremental work that improves the compositional capability of VL models by augmenting the text captions of images; it shows decent performance on the two benchmarks, and weak performance on the Elevater benchmark. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. I assume the first row in Table 4 is the CLIP-finetuned-on-CC3M baseline, right? If yes, it looks like hard negative captions are the key component for improving performance (row 3 in A), while quality does not look like an important component: comparing row 0 in B with the CLIP row, it gets worse. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort spent reviewing our paper. In the following, we provide a response to the questions raised in the review: 1. **On the paper contributions:** Our paper establishes caption quality and caption density to be very important factors in finetuning VL models for better compositional reasoning performance. We offer several practical approaches towards enhancing caption quality and caption density in arbitrary public VL datasets and demonstrate the effectiveness of those approaches to significantly improve (by up to 27% on some metrics) the compositional reasoning performance on the most popular VL model (e.g. CLIP) while also proposing practical methods to maintain the base model representation power (evaluated using ELEVATER), while fine-tuning to get these improvements. Moreover, as noted by other reviewers and in the paper (ll. 151-157), our proposed approach can leverage any **unlabeled** image collection for finetuning a VL model for better compositional reasoning performance without forgetting its representation capabilities (ELEVATER), thus further increasing our approach’s practical value. In fact, our best results were obtained without using the original captions, so in fully **unlabeled** mode. 2. 
**We would also like to clarify some seeming inaccuracies in the reviewer [xcfP] summary of the paper, as well as the seeming inaccuracies in summarizing the paper's strengths:** (we apologize if those stem from our misunderstanding of the respective parts of the review, we would like to do our best to reduce any misunderstanding) - We use a VL model for the caption quality enhancement (and not an LLM as stated by the reviewer); - We propose two approaches for caption density enhancement, one based on LLM that uses its intrinsic world knowledge to provide additional possible details corresponding to the situation described in the caption, while the other (seemingly missed by the reviewer) - uses semantic (over) segmentation followed by VL-model-based captioning of the expanded segment crops; - The ELEVATER benchmark was used to evaluate that VL model improved by our proposed DAC approach (in terms of compositional reasoning) has “not forgotten” its representation capabilities, and was not used as a competitive benchmark (as seems to be indicated by the reviewer). When improving large-scale pre-trained VL models (e.g. CLIP) we certainly need to retain all their known advantages and we used ELEVATER to verify the CLIP's property of aligned vision and language representations was not impacted (Table 2 in the paper). 3. **Regarding questions about Table 4 (ablations):** The first row in Table 4 (above block A) is out of the box CLIP without finetuning. Fine-tuning of CLIP on CC3M leads to performance degradation (block A, first row). Therefore, row 1 in block B (finetuning with enhanced quality only) is only fair to compare with row 1 in block A, as both finetune on CC3M. As can be seen, quality alone improves 8 points in this comparison. 
Additionally, the quality enhancement is not offered as a standalone technique; the best improvements are obtained with a combination of quality, density, and negatives, improving the base CLIP by almost 22 points and the negatives-only finetuning by close to 6 points on average, which is quite significant. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: Thanks to the authors for the response. I re-examined some details of the paper; somehow, this work still looks incremental. For example, how does it differ from one of the important baselines (SVLC)? This work is built on top of SVLC (and its codebase), which serves as one of the components in the paper, and the major difference is being built on top of the BLIP2 backbone. The gain is reasonable if this work is compared with the SVLC baseline in Table 1. --- Reply to Comment 1.1.1: Title: differences from SVLC Comment: Dear reviewer, we appreciate your concern, however: 1. Our work is **not** built on top of the **BLIP2 backbone**. In fact, our method's (DAC) results in Table 1 come from fine-tuning the **CLIP backbone**, demonstrating how our proposed DAC can significantly improve (by almost 22% on average) its compositional reasoning performance while preserving the representation power of its embedding space (Table 2) and being able to train on a completely **unlabeled** image collection (as also noted by other reviewers). 2. Our gains over SVLC are substantial and significant - we have **5.76%** average gains over SVLC on VL-checklist + ARO combined (Table 1), with up to a 20.8% gain over SVLC obtained on the VL-checklist's most challenging Relation metric. 3. Our approach significantly differs from SVLC.
We show how improving caption quality and caption density we can generate V&L training data from an **unlabelled image collection** and finetune a VL model (CLIP in our experiments) to significantly improve its compositional reasoning performance while preserving the representation power of its vision and language encoders. The caption density and caption quality enhancements were proposed in DAC and did not exist in SVLC. Their contributions to the finetuned model performance are very significant (in comparison to what is possible to obtain with SVLC) as noted in point #2 above. Moreover, SVLC relied on the existence of a **paired (labeled)** VL data collection, while DAC can work on a **completely unlabeled** image collection (without any paired text), as also noted by other reviewers. We hope the above explanation clarifies and resolves your concerns. We would be happy to provide any further clarifications as requested.
Summary: This paper highlights two problems with existing image-text datasets that make them unsuitable for use as pre-training datasets for evaluating performance on attributes and relations, and claims that the models simply act as bags of words when trained on these datasets. The first problem they identify is that often in these web-scraped datasets, the text is not actually describing the contents of the image, but instead is describing how the captioner feels about the image or unrelated information entirely. The second is that the captions often only describe a part of the image. They propose two approaches - captioning images to improve quality and density, as well as a fine-tuning approach that is able to handle the noisy captions generated by their method. Edit: I have updated my score to reflect the rebuttal Strengths: ## Originality and Significance * The paper attempts to address two important issues with pre-training on image-text datasets (quality and density), which could improve the performance of vision and language models * They use two interesting approaches to improve density of captions - using an LLM to probe for other things that can be said about a given caption (without access to an image) and based on SAM. * They evaluate performance on image classification benchmarks to see performance compared to the base model (CLIP) Weaknesses: ## Major issues * "the CLIP matching score statistics are significantly improved by replacing the original captions with BLIP captions (Fig. 3a), and qualitatively they are a better fit in many cases " The conclusion that BLIP2-based captions are better than the original captions is drawn by computing the CLIP matching score and observing that these score higher, which is expected since the BLIP2 model is based on CLIP and would understandably score these outputs higher.
This is insufficient evidence and needs to be supported either by human judgement or other approaches (it is unclear what is meant by "qualitatively they are a better fit in many cases" <-- was a study conducted to conclude this?) * LLM expander approach: "Indeed, the source caption is the only information provided to the LLM, and some facts are likely to be hallucinated and unrelated to the corresponding image." <-- is there some statistic on how often this happens? Some human analysis on a random selection of images that verifies how many expansions have hallucinated facts about the image would be useful to judge the usefulness of this approach. * The SAM expander section does not provide any information on how captions are created from the segments. How are they provided as input to the captioning model? Is it just a crop of the single segment? If it is multiple segments, how are they handled? * It is unclear to me how performance on relations (Fig 3) can improve using the SAM approach when, as far as I understand, it is only possible to obtain single objects using the pseudolabelling approach described in the paper. * It seems odd to me that BLIP-2 does so poorly on the ARO benchmark given that it is a generative model (13% on COCO and FLICKR when CLIP gets more than 3x). Especially when Table 4 Row B (first sub-row) shows performance on ARO using captions generated by BLIP and it is around 29 & 40%, and the model achieves more than 140 CIDEr on COCO captions. Could the authors please describe how BLIP2 was evaluated? (See Table 6 in [1], which shows that even a blind LM decoder can get 99% on the COCO and FLICKR splits that test for word order). My guess is that both ARO and this work use the ITM head - which would not be as good as scoring the likelihood of the caption under the LM head.
An experiment that is missing, would be to compare a model trained on BLIP-2 training data (having CC12M, COCO, VG, SBU) with regular fine-tuning vs the approaches described in this paper as the improvements could be due to access to this larger source of image-text data. ## Minor issues * Fig 3 (a) is unclear - what are the X-axis and Y-axis depicting? Also (c-e) * Negative example generation is not sufficiently described in the paper [1] Image Captioners Are Scalable Vision Learners Too. Michael Tschannen et al 2023 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: In Table 4 Row B : Could you explain why adding the LLM expansion would increase the score on COCO and FLICKR ordering subsets of ARO by 2x (row 1&2) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. We address all the reviewer comments below: 1. **On the advantage of the caption quality improvement:** We performed a user study to complement the evidence provided in the paper (CLIP score analysis - Fig 3a; qualitative examples - Fig 3b; the impact of gradual quality enhancement - Fig 3c). We have asked 121 human subjects to review 110 random images from CC3M. For each image, we gave 3 choices asking which choice better fits the image. A and B were either image's BLIP2 caption or its original CC3M caption (A/B assignment randomized to prevent bias). Option C was “neither caption fits the image”. Of all responses, 80.7% favored BLIP2 caption, 2.2% original caption, and 17.1% were “neither”. This shows that BLIP2 captions are better aligned with human perception. Intuitively, such better alignment to humans, who are inherently good at compositional reasoning, likely leads to the corresponding performance improvements resulting from caption quality enhancement (Fig 3c). 2. **On the analysis of LLM expansion:** We have conducted a human evaluation on 100 random images of CC3M. In DAC, the LLM is prompted to produce a multi-sentence caption - these sentences are then used separately in the “bag” of our MIL loss. We analyzed the correctness (w.r.t. the image) of each individual sentence from the LLM expanded caption and found that 54% of them add correct (visible on the image) and provide new information on top of the original caption, supporting the value of LLM expansion (combined with MIL to cope with noise). 3. **On the details of SAM-based expansion and why it improves relations metric:** The process of SAM expansion was summarized near the end of section 3.2 (ll.179-190). SAM [28] has a mode in which it can produce full image segmentation by generating segments from a regular grid of points positioned on the image. This mode does not require any prior information on objects' positions. 
The resulting segments were processed one by one, each used to produce a (noisy) collection of captions (one for each segment) later employed in our training through MIL (Sec. 3.6). To produce a caption from each SAM segment, we do the following. We apply morphological operations (OPEN, with rect kernel of 1% image max size) to enlarge the segment and smooth its boundaries. Following the morphology, we crop an area around the segment and feed the resulting crop into BLIP2 captioner to produce the caption for the crop. To further analyze the significant positive impact of SAM-based caption density enhancement on the compositional reasoning performance of the fine-tuned model (Fig 3e), as proposed by the reviewer bpWN, we have conducted a manual (human) evaluation on 100 random images from CC3M. On average, we measured 3.4(+-1.1) correct relations generated per image from SAM segments (correct = appear in the generated caption of the segment and verified as ‘visible’ by a human in the segment crop). Since relations (e.g. “in”, “on”, “holding”, “touching”, “sitting on”, “standing by”, etc.) typically involve overlapping or nearly overlapping objects, our segment bounding box crops and consequently the resulting captions often do capture those relations and teach them to the VL model in an explicit and focused way, thus logically resulting in the observed significant performance boost in relation-related metrics. We will gladly include these details & analysis in the final version of the paper. 4. **On how BLIP2 was evaluated:** Indeed, BLIP2 is trained using a multi-task objective, with the Image-Text-Matching (ITM) head being explicitly trained to predict if a given image entails a given text. As ARO evaluation is in fact an entailment task (testing which of the 2 texts, correct or incorrect, is more likely to be entailed by the image), it is a common practice (part of the ARO protocol) to use the ITM head for the evaluation (we will clarify this in Tab.1 caption). 
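For concreteness, the segment-to-crop step described in this rebuttal could be sketched as follows. This is a minimal numpy sketch under stated assumptions: the described pipeline uses a morphological OPEN with a rect kernel of 1% of the max image size, which is approximated here by a naive binary dilation to enlarge the segment; function names are illustrative, not from the paper.

```python
import numpy as np

def dilate(mask, k):
    """Naive binary dilation with a (2k+1)x(2k+1) rect structuring element."""
    out = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    for y, x in zip(*np.nonzero(mask)):
        out[max(0, y - k):min(h, y + k + 1), max(0, x - k):min(w, x + k + 1)] = True
    return out

def segment_crop(image, mask, kernel_frac=0.01):
    """Enlarge a SAM segment mask, then return the bounding-box crop
    around it (the crop would be fed to the BLIP2 captioner)."""
    k = max(1, int(kernel_frac * max(image.shape[:2])))
    enlarged = dilate(mask.astype(bool), k)
    ys, xs = np.nonzero(enlarged)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
```

In practice one would use OpenCV's morphological operations instead of the naive loop; the sketch only shows the mask-enlarge-then-crop flow.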
We agree that exploring the use of the LM captioning head for zero-shot inference in **encoder-decoder** VL models (reference [1] provided by the reviewer) is great concurrent work (it appeared on arXiv on June 13th 2023, months after the NeurIPS deadline) with very promising results (following training on 1B web images, much larger than CC3M). We will add this to our related work discussion. That said, improving compositional reasoning in encoder-only VL models (CLIP) still has strong merit, as these models can be computed for each modality separately and hence are significantly faster in zero-shot inference in many practical applications (compared to **encoder-decoder** counterparts, which need a decoder forward pass for every image+text pair; this is considerably slower, and even a bit unfair to compare against **encoder-only** models, which use only cosine similarity and hence allow fast sub-linear matching and the use of vector databases). 5. **On comparing to larger scale (CC12M+VG+COCO+SBU < 14M size) training:** In Tab.2 of the attached PDF, we fine-tune CLIP on a 15M subset of LAION. We report regular finetune (full-FT) and LoRA finetune (LoRA-FT). Both full-FT and LoRA-FT are close to base CLIP performance (3.35% lower) and are 25% below DAC. This shows that the gains attained by DAC are not due to distillation from larger data, but result from the proposed density and quality expansions. 6. **Axes of Fig 3a:** The x-axis is the CLIP score; the y-axis should have been the normalized probability density. We will fix the y-axis labels. 7. **The negative example generation:** was done using the public TSVLC code from Doveh et al 2023 (CVPR). We used their public official code. We will provide details in the supplementary. 8. **On the LLM expansion improving word-ordering metrics (Tab. 4):** We believe this comes from the typical paraphrasing resulting from generative sampling of LLM text outputs while prompting. 
Paraphrasing leads to word-ordering augmentation and to better modeling of the natural word-ordering distribution. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications! Comment: One of my primary concerns on reading the paper was the lack of detail on key parts of the proposed process (e.g., the SAM expander) - the explanation in section 3.2 (ll.179-190) was unclear and not sufficient to understand how the segments were used for generating captions or why they would help on relations. The authors have provided further clarifications on this and I hope they will be included in the revision of the manuscript. On the caption quality analysis, it was previously unclear what was being displayed in Fig 3(a), and the measurement of quality in terms of only CLIP score was insufficient to verify whether this step was indeed valuable. I thank the authors for the human evaluation and am glad to see that there is a clear improvement in the caption quality. In addition, would it be possible to also include an analysis of the vocabulary of the generated captions - whether it increases/decreases compared to the original captions, in terms of nouns, attributes, and relations? This would also help to get a better understanding of why the generated captions contribute to the improved compositional understanding. On the evaluation of BLIP2: The Table 1 caption states "BLIP2’s heavier encoder-decoder architecture gives it some advantage on the VL-checklist evaluation. Still, we outperform it by a large margin." This is misleading and would make the reader (such as myself) assume that it was evaluated in an encoder-decoder manner. And to clarify, I did not suggest comparison with the paper I cited, which, I am aware, appeared after the NeurIPS deadline - but merely wanted to point out that even a blind LM-only model can improve performance by a lot on ARO - so the results from this paper could be put in context of that. 
However, I agree with the authors that improving compositional reasoning on encoder-only (dual encoder models like CLIP) is valuable and this work shows how to do this. I ask the authors to include a discussion on this, to make it clear to the reader that the proposed improvements are mainly catering towards this type of model, and might not hold significance when applied to models having encoder-decoder setups unless they have results to support that their proposed methods also help for these other kinds of VLMs. The fine-tuning experiments are also great to have, and make the paper's claims stronger. --- Reply to Comment 1.1.1: Title: Thank you for the feedback! Comment: Thanks! We will certainly include all the analysis and the discussions as you suggested in the revised version of the paper!
Summary: The authors propose a method for data augmentation on image-text web corpora they call DAC. The main idea is to run both a segmentation model and an image captioning model, and then use those outputs to generate captions that are likely to describe the image using an LLM. Finally, a MIL loss + LoRA over CLIP is used to align the image with the autogenerated captions, and anti-align with some machine-generated false negatives. The authors achieve strong performance gains over vanilla CLIP (and no performance degradation in usual vision tasks) using this method when training on the CC3M dataset. Somewhat surprisingly, they actually don't even use the image captions in CC3M (L261). Strengths: - The topic of making CLIP-style models better compositional reasoners is interesting. - The proposed approach is straightforward, creative, and promising. I like that the authors used LoRA to maintain the original strong performance of CLIP (and specifically test for that). The results are strong compared to the CLIP ViT-B/32 baseline. - I admire that the authors used a public source LLM instead of OpenAI! - Ablations validate that all the pieces of the proposed pipeline play a role. Weaknesses: - I am not the biggest fan of the framing that web captions are "low quality." After all, these captions were presumably written by humans for a purpose other than ML training --- so they may be low quality for that purpose, but L43-44's blanket description (and elsewhere in the paper) I thought could be made more precise like in L160. - It would have been nice for the authors to front the fact that they actually just use unlabelled images earlier --- it's neat. - If SAM outputs are handed to BLIP in a single-segment fashion, how are the relations in figure 2 generated? e.g., "Miniature zoo of toy animals on a mat" seems to be something that would require multiple segments to parse, and I don't see how this happens given the description in L179. 
- It would have been nice to see scaling plots --- do these same results apply to larger versions of CLIP? And, do these results cleanly scale with the number of unlabelled images? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Overall, I liked this work! It gives a nice, data-bottlenecked/auditable method for transferring the compositional knowledge that /feels/ like it should be in/derivable from other models (like SAM) into CLIP, while maintaining the positives of CLIP. My biggest gripe was that the authors didn't apply the method to larger CLIP models, or give scaling results to show how this method might scale with (readily available) unsupervised image data. UPDATE: The authors have addressed my concerns and I have raised my score accordingly. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors briefly discussed limitations in a vague sense. But could more be said about the potential risks, e.g., of propagating errors that BLIP2 makes to downstream models? Does DAC result in augmented data with more (social?) biases than the original human-authored data? Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
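The "LoRA over CLIP" setup summarized in this review can be sketched minimally as a frozen linear layer plus a trainable low-rank update. This is an illustrative numpy sketch only; `rank`, `alpha`, and the initialization follow common LoRA practice and are assumptions, not details taken from the paper.

```python
import numpy as np

class LoRALinear:
    """Frozen linear layer with a trainable low-rank update: W + alpha * B @ A.
    Only A and B would be trained, which helps keep the pretrained CLIP
    behavior intact (as the review notes the authors test for)."""
    def __init__(self, W, rank=4, alpha=1.0, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W.shape
        self.W = W                                   # frozen pretrained weight
        self.A = rng.normal(0, 0.01, (rank, d_in))   # trainable down-projection
        self.B = np.zeros((d_out, rank))             # trainable, zero-initialized
        self.alpha = alpha

    def __call__(self, x):
        return x @ (self.W + self.alpha * self.B @ self.A).T
```

Because `B` is zero-initialized, the adapted layer starts out producing exactly the pretrained output, so fine-tuning departs from CLIP's original behavior only as far as the low-rank update learns to.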
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. In the following, we provide a response to the questions raised in the review: 1. **On caption quality references in L43-44:** Thanks for pointing this out! We **will revise** the L43-44 caption quality reference to be more like the way caption quality is explained in L160 as you propose. Indeed, the web-collected alt-text captions were not intended for ML training, and as noted in L160 served different purposes rather than necessarily being image descriptive or detailed - qualities that, as we demonstrate in our paper, are very important for inducing compositional reasoning into VL models. 2. **On highlighting our approach strength of using unlabeled image collections to enhance compositional reasoning:** We absolutely agree! We mentioned it in lines 151-157 near the end of section 3.1 of the paper but should have indeed highlighted this in the abstract and intro. We certainly agree with the reviewer that this is a strong suit of our DAC approach - being able to enhance the compositional reasoning performance of VL models using unlabelled image collection as input. Indeed, this has very positive potential implications for future cost-effective (no labeling cost) scaling possible with our proposed approach. Thanks for pointing this out! 3. **More details on SAM expander:** - To clarify, the “Miniature zoo of toy animals …” sentence in Figure 2 is part of the result of the LLM expander (applied to the caption input “A child playing with toy animals”), the arrows of SAM expander and LLM expander on figure 2 are of the same color and that might have created the confusion. The SAM expander on Figure 2 produced sentences like: “The elephant is standing on a white background”, “The image shows a close-up of a toy tiger”, etc. We will make arrows coming out of SAM expander and LLM expander of different colors in Fig. 2, in order to prevent this confusion. Thanks for noticing! 
- Also, we would like to provide more detail on the process of SAM expansion, beyond how it was summarized near the end of section 3.2 (ll.179-190). SAM [28] has a mode in which it can produce full image segmentation by generating segments from a regular grid of points positioned on the image. This mode does not require any prior information on objects' positions. The resulting segments were processed one by one, each used to produce a (noisy) collection of captions (one for each segment) later employed in our training through MIL (Sec. 3.6). To produce a caption from each SAM segment, we do the following. We apply morphological operations (OPEN, with rect kernel of 1% image max size) to enlarge the segment and smooth its boundaries. Following the morphology, we crop an area around the segment and feed the resulting crop into BLIP2 captioner to produce the caption for the crop. To further analyze the significant positive impact of SAM-based caption density enhancement on the compositional reasoning performance of the fine-tuned model (Fig 3e), we have conducted a manual (human) evaluation on 100 random images from CC3M. On average, we measured 3.4(+-1.1) correct relations generated per image from SAM segments (correct = appear in the generated caption of a segment and verified as ‘visible’ by a human in the segment crop). Since relations (e.g. “in”, “on”, “holding”, “touching”, “sitting on”, “standing by”, etc.) typically involve overlapping or nearly overlapping objects, our segment bounding box crops and consequently the resulting captions often do capture those relations and teach them to the VL model in an explicit and focused way, thus logically resulting in the observed significant performance boost in relation-related metrics. We will gladly include these details & analysis in the final version of the paper. 4. **On scaling:** Thank you for this suggestion! 
We have performed a data scaling ablation by measuring the compositional reasoning average accuracy for several working points by subsetting the full CC3M data. The graph plotting the accuracy (Y axis) vs data size (in millions of images) is available in Figure 1 of the global response PDF attached to this rebuttal. As we can see from the graph, our DAC approach demonstrates nice scaling properties with a noticeable increase and significant gradient (slope) when adding more data points. It does not seem to be plateauing near the end, and this suggests that further data scaling would further improve performance. It would be very interesting to see data scaling further explored in future work with a more significant investment in compute. In addition, we have performed a model size scaling experiment in Table 1 of the global response PDF. As can be seen from the table, we have additionally tested DAC on the larger ViT-L/14 (a bigger model with more image patch tokens of smaller size), also pre-trained and released by OpenAI. As a result, we can observe that DAC successfully improved the ViT-L/14 performance by 23.2%, similar to the improvement for ViT-B/32, without sacrificing its ELEVATER performance. We will add these scaling ablations to the paper. 5. **On additional limitation discussions:** Thanks for this suggestion! We agree and **will add** more potential limitations to our paper limitations section as suggested by the reviewer. Generally, the safety of our approach relies on the safety of the underlying models used for quality and density expansions, which is an active field of research for those models and all VL models in general. And we hope and expect the research community to produce safer and safer models with each generation going forward. Also, during our manual analysis of the produced captions mentioned above, we have not observed social biases in the produced captions, but cannot guarantee they will never occur; therefore we will also add this to potential limitations. 
Thanks! --- Rebuttal Comment 1.1: Title: Thanks! Comment: Thanks for the thorough response! Many of my comments have been addressed, thanks for the updates. I will raise my scores accordingly. That scaling plot is quite promising! --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thank you for the prompt response and the useful insights and suggestions. We agree that the scaling plot shows great promise and thank you for suggesting this and other experiments.
Rebuttal 1: Rebuttal: We thank all the reviewers for their efforts in reviewing our paper and for providing helpful and insightful feedback. We are happy to see that they found our work: **interesting and creative** `(93oA, bpWN, Fh7T)` and **addressing important questions** `(Fh7T)`. Furthermore, we also thank them for highlighting that our work has **strong results and good ablations** `(BYVP, 93oA, xcfP, Fh7T)` that show that **all pieces of the method play a role** in the overall success `(93oA)`. We are pleased that the reviewers found the work **clear and well written** `(BYVP)`. In the attached PDF (referred to in individual responses as `global response PDF`), we present more information supporting our original claims and providing more clarity following the reviewers' comments. A brief summary of the main aspects of our rebuttal response: 1. We performed a survey evaluating the quality of the generated captions compared to the originals `(BYVP, bpWN, Fh7T)` 2. We performed a survey evaluating the amount of **new** information in the LLM-expanded generated text `(BYVP, bpWN)` 3. We show scaling experiments on various amounts of data `(BYVP, 93oA, bpWN)` and model size `(93oA, Fh7T)` We would like to thank the reviewers again for their work and look forward to an open and constructive conversation with the reviewers during the discussion period. Pdf: /pdf/d452a2d2aa872b42d8ccc39b7811650c49c1a9f5.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes methods to improve performance on compositional reasoning tasks — by improving the caption quality (alignment) and density (description bias). The paper observes that the poor compositional task performance is due to these limitations in pretraining / fine-tuning data. The approach itself is quite simple — data augmentation using existing Vision and Vision-Language models. The noisy augmented data is utilised for fine-tuning with an appropriate multiple-instance loss function, which results in significant improvement in downstream compositional reasoning tasks across datasets. Strengths: (1) Paper is well-written. The approach is described clearly, and obvious issues (e.g., the hallucination introduced due to LLM-based knowledge expansion) are analyzed and discussed. (2) It is clear when visualising the noisy web-scraped training data for VLM models that the captions often are not factual descriptions of the contents or activity of the scene. So, many of these samples are unsuitable for learning a good vision-text representation that can be used for reasoning about object relationships and other tasks. This was explored in detail in [76]. The proposed solution — *generating* captions that are better aligned, and fine-tuning the model on that, is simple but, somewhat surprisingly, is sufficient to improve downstream reasoning task performance. The paper also validates the generated captions with the CLIP matching score. (3) Increasing the density of captions for a given image by leveraging recent improvements in the segmentation task is a very reasonable direction to overcome people’s bias to describe only certain “interesting” elements of the image. This aspect has been discussed in the past in the image captioning literature. The paper proposes noisy augmentation techniques that improve the learned representation for downstream tasks. The key is training with MIL, which seems to account for the noise introduced into the training samples. 
(4) There are significant performance improvements in the reasoning tasks. Weaknesses: (1) Over-segmentation with SAM — could one explicitly filter out noisy captions that correspond only to object parts instead of relying on the MIL to do much of the heavy lifting? For instance, could one evaluate the image features of object parts against full objects to automatically recognise (and filter out) over-segmentations? (2) Noisy knowledge expansion with LLMs. The authors are aware of this, and discuss this. The MIL loss is introduced to handle this noise. Nevertheless, this still means that the approach is training the model with data that we know is quite noisy. Indeed, it results in performance improvement. But it might be nicer to not augment with so much noise. This is a limitation, and it would be nice to explore a better way to augment this “knowledge”. It’s an open question. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) How much worse are these noisy predicted captions compared to using GT captions? I.e., what may be the upper bound of this method if we consider high-quality captions? I'm not sure if there's a good way to evaluate this. I'm just curious, so I'm raising this point. (2) What about potential improvement in performance with dataset size? Since the approach uses existing pretrained models, at some point one would expect the performance to saturate as one keeps increasing the size of the unlabeled dataset. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the paper discusses limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. In the following, we provide a response to the questions raised in the review: 1. **On filtering over-segmentation:** Thank you for this suggestion! It is indeed interesting to explore filtering beyond the proposed MIL losses. While we would prefer to primarily leave this exciting research direction for future work, we have done some manual review analysis on 100 randomly sampled images from CC3M to check how many correct unique relations (correct = visible on the image; unique = not mentioned in any full-object segment for the same image) were contributed by the object-part segments. Interestingly, since object-part segments sometimes focus on the more interesting parts of large (as they appear on the image) objects, such as human hands for example, we observed that object-part segments contributed 0.7 correct unique relations per image on average, i.e., at least 2 relations per 3 images. This is quite significant when we process large (millions of images) unlabelled image collections, as those unique relations serve as important demonstrations, teaching these relations more thoroughly to the VL model. We will include this analysis in the revised manuscript. Thanks again for suggesting! 2. **On noise in LLM expansions:** We agree that further exploration of the effects of LLM expansion noise, as well as researching ways of reducing this noise, is a very interesting future research direction! To begin exploring these aspects, we have conducted a human evaluation of the LLM-expanded captions on a random subset of 100 images of CC3M. In our approach, the LLM is prompted to produce a multi-sentence caption - these sentences are then used separately in the “bag of possibilities” of our MIL loss. We asked humans to evaluate the correctness (w.r.t. 
the image) of each individual sentence out of the LLM expanded captions and found that 54% of them add correct (visible on the image) new information on top of the original caption, supporting the value of LLM expansion when combined with the MIL approach to cope with the noisy part of the expansions. That said, future research on this topic may include additional, e.g. image conditioned LLM expansion filtering, better (and potentially multi-hop) LLM prompting strategies, etc. We will gladly include this discussion and analysis in the paper. 3. **On potentially setting upper bounds with GT expanded captions:** This is a very interesting suggestion, thanks! As we showed in our paper, the quality (meaning utility for fine-tuning to enhance compositional reasoning performance) of GT captions in typical VL datasets (collected from the web by pairing images with their alt-text) is relatively low, and many times the alt-text captions lack sufficient detail about the image. Unfortunately, to do the proposed GT upper bound analysis, we would need human-generated higher quality and expanded captions data, which might not be easy or cheap to collect. However, taking inspiration from the great progress in LLMs, it might be possible that future work would employ similar processes to RLHF or even RLAIF in order to generate ground truth supervision for compositional reasoning quality improvement and density expansion. This is also an exciting future work direction we would gladly add to the discussion section of the paper! 4. **About improving performance with data scaling:** Thank you for this suggestion! We have performed a data scaling ablation by measuring the compositional reasoning average accuracy for several working points by subsetting the full CC3M data. The graph plotting the accuracy (Y axis) vs data size (in Millions of images) is available in Figure 1 of the global response PDF attached to this rebuttal. 
As we can see from the graph, our DAC approach demonstrates nice scaling properties with a noticeable increase and significant gradient (slope) with adding more data points. It does not seem to be plateauing near the end and this suggests that further data scaling would further improve performance. It would be very interesting to see data scaling further explored in future work with a more significant investment in compute. We will add this data scaling ablation to the paper.
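One plausible instantiation of the max-over-bag MIL objective discussed throughout these rebuttals scores the image against the best-matching caption in its noisy bag and contrasts that with the generated negatives. This is a sketch only — the loss name, signature, and temperature are assumptions, and the paper's exact MIL formulation may differ.

```python
import numpy as np

def mil_nce_loss(img_emb, bag_embs, neg_embs, tau=0.07):
    """Max-over-bag contrastive loss: the image must match at least one
    caption in its (noisy) bag of candidates, while being pushed away
    from the machine-generated negative captions."""
    def sims(a, B):
        a = a / np.linalg.norm(a)
        B = np.asarray(B, dtype=float)
        B = B / np.linalg.norm(B, axis=-1, keepdims=True)
        return B @ a
    pos = np.max(sims(img_emb, bag_embs)) / tau   # best caption in the bag
    negs = sims(img_emb, neg_embs) / tau
    logits = np.concatenate(([pos], negs))
    # stable softmax cross-entropy with the bag-max as the positive class
    m = logits.max()
    return float(np.log(np.sum(np.exp(logits - m))) + m - pos)
```

Taking the max over the bag is what lets training tolerate incorrect sentences among the LLM/SAM expansions: only the most image-consistent caption in each bag needs to be a true positive.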
GPT is becoming a Turing machine: Here are some ways to program it
Reject
Summary: Authors propose a way to prompt GPT-3 to exhibit behavior simulating execution of iterative programs. Authors propose the following prompt constructs: providing structured examples of program execution; using fragments of execution; not using self-attention on some parts of the generated text. Authors compare the results to baselines and show significant improvements Strengths: - Authors introduce a method that enables GPT-3 to mimic the execution of iterative programs. They achieve this by supplying the model with intermediate steps and outcomes. This is somewhat novel and could be useful for using LLMs to solve problems that require iterative processing. - The use of path fragments may prove beneficial in situations where the context size is insufficient for comprehensive examples. - The strategy of confining self-attention to particular segments of the output might be advantageous when the context size needs to be considered - The examples provided in the paper are well written Weaknesses: - Authors' approach requires the manual construction of prompts for each problem at hand. It is not automated and not scalable. This limits the usefulness of the approach in practice. - Authors compare their approach to simple baselines. There should be a comparison to at least chain-of-thought reasoning. - Paper is hard to read and accept as a standalone without appendices. Authors refer to the content in the appendices too much. - The significance of the work is low. It is known that LLMs can produce iterative output. Although the authors have enhanced the quality of such outputs via structured prompting, structured prompting is not entirely novel. Same can be said about using fragments in prompt/context. EDIT: I have raised my evaluation of the paper from 3 to 4. Authors have promised to address the issues I and other reviewers have raised. However, in my opinion, such changes would require a major rewrite of the paper. 
I am not confident that these changes can be done well for the publication. I have read the authors' rebuttal and the further discussion with the authors based on my questions and feedback. The authors' rebuttal addressed some of my concerns with a more in-depth discussion of IRSA's relationship to chain-of-thought reasoning and possible automated generation of IRSA prompts. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Is there a generic prompt structure that could be used for any algorithm or some class of algorithms? In Prompt 3, is the “Final List: 6, 7, 3, 5” intentional? It is not sorted. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 1 poor Limitations: It would be good if the authors explored the limitations of what can be achieved by their approach. At some point the LLM "execution" (simulation, really) of the program should fail. The points at which the simulation would fail likely depend on the input/output/context size. It may also depend on the algorithm's semantic and/or algorithmic complexity. (Those are different complexities and may affect the breaking point differently.) These questions could be explored and would be useful to know. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: For any program, we prompt the LLM with an execution path described well, but using arbitrary language (keywords and such). Therefore, an automatic way of creating IRSA prompts would be to simply generate an execution path programmatically, following the rules of the general repetitive structure and providing inline explanations of each state transformation, as well as, importantly, which conditions to check to understand whether a new iteration is necessary. Such well-explained execution paths then serve as instructions to GPT on how to execute the algorithm on new input (Prompt 1, a single execution for Bubble Sort, is an example of such a “program for GPT” that tells it how to execute the algorithm on new inputs). So, our main point is that GPT can simulate execution of programs when prompted like this, and that fragmented prompting and skip attention further assist in this. This is important in two ways: 1. Executing algorithms by LLMs has been of interest to the community (as evidenced by various benchmarks), and it turns out this is quite doable using the ideas from our paper. 2. Complex reasoning tasks often masquerade as word problems when they can be solved with known algorithms if translated into a form consumable by those algorithms, and we show that LLMs can both perform that translation and run those algorithms (Logical deduction task), which is important for evaluation of such tasks with CoT prompts. (See also the general response on this.) Baseline CoTs: CoTs can vary dramatically, so in our experiments we focused on the ones that rely on program specification to reason about the input. In our experiments, we compared with few-shot and zero-shot prompts with and without such programmatic instructions. Furthermore, in our GPT-4 comparisons, we induced execution path generation by GPT-4 with prompt A.11: GPT-4 shows its chain-of-thought reasoning on how the program should be executed. Still, it fails to match our results with IRSA. 
The reason for that is seen in the differences between A.11 and A.12 where the same prompting on different inputs induces different behaviors. With IRSA the behavior is much more consistent. As a result, GPT-3 with IRSA achieves 93% and GPT-4 with prompting in A.11, which can also execute a sort of state evolution and tracking, only reaches 63%. That is to say: GPT models can dazzle us with correct program execution sometimes, but IRSA raises the frequency of correct execution dramatically. Re Prompt 3, it is an example of a fragmented prompt, where each fragment starts with a state and ends with an explained transformation of that state. Some of these starting states are impossible, like the state just before “Final List: 6, 7, 3, 5”. However, the description of how such a state would be processed if reached is accurate. Correct transformations are what is needed for correct program execution, even if they are illustrated on unreachable states. When these transformations are executed correctly starting from the beginning of the execution, the incorrect states are never (or rarely) reached (as seen in experiments). Regarding a generic prompt structure, one can decide to always use the same set of keywords to describe state transformations. Furthermore, as we show in Prompt A.2 and A.3 in the Appendix, we can even create a prompt that will compile execution paths in a consistent language given a program description. Choice of language is up to the user, but nothing prevents them from using an existing one. We have an example of this with the dynamic programming prompt (2.4) which was generated using the strategy highlighted in Prompts A.2 and A.3. Using this prompt, other “programs for GPT” can be created following the same syntax. --- Rebuttal Comment 1.1: Title: Thanks to the authors for their response Comment: I appreciate the comments by the authors with responses to my questions. 
In my opinion a general and automated way of creating IRSA prompts would increase the contribution a lot. It is worth not just a comment in the response, but rather an actual implementation and evaluation - hopefully in future work or future versions of this paper. I don't believe that automation is as simple as the authors claim. In fact, the authors themselves show how complex it is to create IRSA prompts for problems: someone has to write an algorithm - even better, a program - then pick the important states of the program and construct it all into the IRSA prompt. This might be automatable via some program interpreter, but it is not obviously trivial. Based on the authors' comments, it seems that IRSA is really one approach to specify a very structured CoT context (prompts) to the LLMs. I think this is a key way to look at what the authors have done and how to evaluate their work. As the authors correctly observe, CoT is well known, widely adopted and practiced, and can also vary dramatically. This raises a difficulty in comparing the authors' approach to "CoT baseline(s)", since - as the authors observe - such baselines can vary a lot both in content and in results. It would be worthwhile to try to specify some distribution of CoT baselines, e.g. prompts that just ask the LLM to use CoT; prompts that provide an input-output example; prompts that provide an execution example (which gets close to IRSA); and possibly others. In the future, IRSA could be considered as one specific CoT approach to be compared against. However, this emphasizes even more the need to automate the creation of IRSA prompts, since the issue the authors raise regarding CoT prompts ("vary dramatically") is also present in IRSA prompts if they are manually written for each problem and algorithm. I remain concerned that the paper does not present the main important material in a self-contained manner. The references to Appendixes are numerous (there are 33 references to Appendixes in the paper!). 
A lot of the authors' responses also point to Appendixes. The Appendixes themselves are not well written or readable, even if readers could access them. They understandably look like paragraphs that did not fit the paper and were thrown into Appendixes. I think the readers should be able to read a paper that is clear and self-contained. --- Reply to Comment 1.1.1: Title: Thanks for reviewer tn6w's comments and insightful discussion (1) Comment: We thank Reviewer tn6w very much for the great comments and engagement in the discussion (esp. with other reviewers)! We try to address each of the remaining concerns as follows: (1-a) **Automation of IRSA prompts**. We appreciate the reviewer's perspective on the importance of automation. To address this concern, we'd like to emphasize the following points. - First, as reviewer AAhn pointed out, the topic of the paper is algorithm execution, not discovery, which is in itself an unsolved problem, except for the fact that many (if not most) practical problems rely on a small number of known core algorithms. - Second, similarly, automated prompting is a very active research area now and is not a solved problem. In fact, it is only starting to be studied by the community, and such automation is usually invented after some novel manual construction of those prompts. For example, the original CoT prompting requires manually writing down few-shot examples per task to trigger the reasoning process, sharing a manual construction process similar to ours. Therefore, we agree with the reviewer that automating IRSA prompting could be a promising research direction to explore in the future. Meanwhile, as emphasized in our general response (1) and our first paragraph in the first response to reviewer tn6w, our major contributions in this paper are slightly different from proposing an automatic way of constructing IRSA prompts. 
- Third, regarding the difficulty of automation, we recommend our second bubble sort prompt (Prompt 3 in the main text, also see the full basic IRSA in Appendix Prompt A.4) as a template for achieving the best results. This prompt structure encapsulates the benefits and main design of IRSA prompting, offering a strong reference for the potential automation process of creating IRSA prompts. Moreover, somewhat related to reviewer AAHn's comment on this point (see the last paragraph in https://openreview.net/forum?id=ARJG1kr8A7&noteId=M2edKRYFRp), when the algorithm to solve the problem is given, IRSA prompts can be crafted relatively easily. However, we agree with the reviewers that finding the correct algorithmic solution is an important future direction to explore following up on our work. Following the reviewer's suggestion, we will add a thorough discussion and implementation details on such an important issue. Furthermore, the results from fragmented prompting on bubble sort underscore our point: despite variations in the prompt (whichever fragments are used), the performance remains consistently high. This indicates that our IRSA approach has a level of robustness and that the choice of program pieces is not an obstacle for the automation process. We hope our response resolves the reviewer's concern. (1-b) **Comparison with CoT**. We appreciate the reviewer's suggestion of clarifying the relationship with CoT and a more systematic evaluation of various CoT baselines. - First, please refer to our response to reviewer n7ti for a complete response to clarify the definition of IRSA and its relationship with CoT. - Second, regarding the infinite variation of what CoT recipes may include, we agree with you that this makes CoT comparisons difficult. Part of the message of the paper is that IRSA, as a special kind of CoT, as all reviewers agree, leads to a surprising level of accuracy (but not perfection) in the execution of algorithms. 
- Third, among all the baselines the reviewer suggested, we have already compared with one of the most competitive ones: prompts asking for CoT using GPT-4 (prompts A.11 and A.12), which get 69% accuracy on LCS-Short, while our IRSA with skip attention gets 93% on CodeX. We believe such a close comparison clearly demonstrates the benefits of our IRSA prompt design. To address this concern, we will move such results from the appendix to the main text and clearly explain them in the main experiment section.
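The rebuttal thread above argues that IRSA prompts could be generated programmatically: run the algorithm once, and emit an annotated execution path that states each intermediate state, the comparison made, and whether the termination condition triggers another iteration. As a hedged illustration only (the function name, annotation wording, and state format below are our assumptions, not the paper's actual prompt syntax), such a generator for bubble sort might look like:

```python
def bubble_sort_trace(xs):
    """Emit an annotated execution path for bubble sort, in the spirit of
    programmatic IRSA-prompt generation: each step shows the comparison
    made, the resulting state, and the loop-continuation check.
    (Illustrative sketch; the paper's Prompt 1 uses its own wording.)"""
    xs = list(xs)
    lines = [f"Problem: sort the list {xs}"]
    swapped = True
    while swapped:
        swapped = False
        for i in range(len(xs) - 1):
            a, b = xs[i], xs[i + 1]
            if a > b:
                xs[i], xs[i + 1] = b, a
                swapped = True
                lines.append(f"Compared {a} and {b}: {a} > {b}, so swap. State: {xs}")
            else:
                lines.append(f"Compared {a} and {b}: {a} <= {b}, so no swap. State: {xs}")
        # Explicit continuation check, so the LLM learns when to iterate again.
        lines.append("Check: was any swap made this pass? "
                     + ("Yes, so iterate again." if swapped else "No, so stop."))
    lines.append("Final List: " + ", ".join(map(str, xs)))
    return "\n".join(lines)
```

Feeding one such worked trace as the demonstration, followed by a new "Problem:" line, matches the pattern the rebuttal describes for Prompt 1 (a single worked execution serving as a "program for GPT").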
Summary: Making LLMs follow procedural rules precisely, as is done when executing a program, has been a challenging task. In this paper, the authors introduce iterations by regimenting self-attention (IRSA), a prompting technique to make large language models (LLMs) *execute* hand-coded programs (on novel inputs) precisely. The authors propose three techniques for IRSA: 1) prompting the LLM with a step-by-step example showing many state transitions in detail, 2) prompting with fragments of the state-to-state transitions only (along with the latest state for which it predicts the transition), and finally 3) skipping attention over intermediate state-to-state transitions. LLMs are shown to be significantly more successful at tasks such as sorting arrays, finding the longest subsequence in a string, or simpler logical puzzles when prompted with IRSA. Strengths: ### Originality Recently, many new proposals for prompting LLMs, including scratchpad and Chain-of-Thought prompting, have been made. However, most approaches focus on making LLMs reason and solve problems/puzzles. Instead, in this paper, the focus is on making LLMs *follow instructions accurately* (which can then be used for solving certain puzzles). This is a novel and original direction. ### Quality The paper is well presented, including the figures and the tables. Additionally, the experiments have been conducted on many tasks to show the breadth of the approach. ### Clarity The paper is very clearly written and well presented. Specifically, I found the prompt examples very useful in understanding the paper's ideas. ### Significance Getting LLMs to precisely execute procedural rules is of interest to the research community at large. LLMs' inability to successfully tackle procedural problems (such as multiplication) of complexity beyond that seen in the training set has raised questions regarding their ability to tackle compositional problems. 
This paper provides an important perspective and a countering result that will further enrich this discussion. Weaknesses: The two main drawbacks of the paper are motivation and experiments. ### Motivation The paper does not sufficiently motivate the problem statement. Why should we care about making LLMs execute programs - perform iterative behavior? When the process is deterministic and easy to describe programmatically, why would we prefer LLMs over a typical deterministic program (one can even use programs which explicitly *show* the transition rules being applied)? The authors mention education or software engineering vaguely, but there is no concrete motivation in these use-cases (when would an LLM be more suitable for this task than a REPL-like loop with python/cpp?). ### Experiments 1) The authors do not evaluate on significantly larger sequence sizes. Since fragmented prompting seems to allow arbitrarily large sequences of state transitions, I believe the authors can indeed use IRSA on larger sequence problems. The trend between success rate and sequence length would be insightful. 2) I strongly appreciate the authors for showing the negative result in Figure A.1 (Appendix section A.3.2). This shows that despite using IRSA, the model may end up performing wrong state transitions based on their correlation to patterns in recent history. If multiple previous state transitions contain sequences where the statement "2 < x = True" appears, then when asked "2 < 1 =" the LLM has a higher likelihood of filling in True than False. This seems to directly negate the claim of this paper that we can make LLMs execute programs precisely. The model clearly seems to be affected by the prompt history, which can make it act in unreliable ways. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I would appreciate it if the authors can respond to the weaknesses raised above. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Yes the authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The versions of GPT we had available at the time of submission had a 4k token limit. The problem with LCS is that its state is proportional to the product of the sequence lengths (the DP matrix). With newer models, this will indeed be less of an issue (see the response to the previous reviewer, too). Nevertheless, the results on longer LCS problems from Big Bench still significantly beat the SOTA, and when we get a chance to run the experiments with APIs allowing higher token limits, we expect to do even better. We do not seek to recommend LLMs over deterministic programs for the execution of these programs, nor do we claim that LLMs can always execute programs precisely. Rather, we demonstrate the power of these models to perform the iterative steps necessary to execute an algorithm with high probability (empirically) on various algorithms. We describe the techniques required to trigger that sort of reasoning from the models as well as their limitations, such as incorrectly correlating patterns and difficulty solving larger problems. We explore how close we can get to accurate execution with prompting techniques that take advantage of the existing architecture. See also the general response on this question; briefly, there is direct interest in the research community in the ability of LLMs to execute programs. But, also, there is an indirect interest among those who study complex reasoning tasks that can be solved algorithmically: We’d like them to recognize that LLMs do not have to entangle language understanding, common sense reasoning and algorithmic thinking in some cryptic form; instead, prompts like ours for Logical Deduction can separate the processing of the word problem into an input to an algorithm from algorithm execution (by the LLM), making them equivalent to a combination of problem translation into a machine-readable form and a separate call to an algorithm (run on a computer). 
And people who make the benchmarks should understand that it is possible to make CoT prompts that will trigger the kind of algorithmic reasoning they are interested in investigating regarding LLMs. Currently, one direction being used to evaluate/improve LLMs on complex reasoning tasks is to use benchmarks that are solvable algorithmically. We have shown that LLMs are in fact already capable of doing this through prompting that triggers iterative execution. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal! Comment: ## Summary On one hand, the key contribution of the paper, a prompting strategy to make LLMs execute algorithms in an iterative step-by-step fashion, and the superior results compared to previous prompting strategies make the work significant and of high interest to the community. On the other hand, the paper has some drawbacks, namely 1) the presentation and 2) the lack of certain baselines and experiments. I am overall tending towards increasing my rating for the paper. The paper has its drawbacks, and it is still unclear to me *when* this approach can be employed. However, it offers the insight that LLMs can be (approximately) made to execute algorithms by simply changing the prompting strategy, and this insight will likely intrigue researchers using LLMs at large. ## Detailed response to the rebuttal > do we claim that LLMs can always execute programs precisely. Thank you for correcting me. While the paper does not state that the LLM can execute programs precisely, that is the intended goal of the method (to make the LLM follow an algorithm *precisely*). My critique is that the proposed method seems highly susceptible to patterns in local history, making it more unreliable (i.e. IRSA might not work for executing algorithms which might have a higher presence of recurrent patterns in their state-transition sequences). 
> perform the iterative steps necessary to execute an algorithm with high probability (empirically) on various algorithms I would appreciate it if the authors can specify a simple, clear use-case where we would like an LLM to perform iterative algorithmic steps. I believe a stronger (and more practical) alternative would be an LLM-in-the-loop system where such programmatic iterative steps are handled by an existing programming language like python, and parts of the algorithm which involve uncertain semantics such as language understanding/common sense reasoning are performed via API calls to the LLM (some simpler examples of such systems are ViperGPT/VisProg). I think providing a concrete use-case will ground the reader's motivation. > See also the general ... (ran on a computer). I mostly agree. > prompts like ours for Logical Deduction can separate the processing of the word problem into an input to an algorithm from algorithm execution (by the LLM), making them equivalent to a combination of problem translation into a machine-readable form and a separate call to an algorithm (ran on a computer). In my understanding this decoupling of logical deduction into problem translation and a separate call to an algorithm can already be achieved by using LLMs with code interpreters (i.e. asking the LLM to synthesize an algorithm in a programming language such as python, along with its inputs, and explicitly using the language's interpreter to run the algorithm). Do we even need IRSA for this decoupling? > We have shown that LLMs are in fact already capable of doing this through prompting that triggers iterative execution. With IRSA-style prompts, we are only testing the LLM's ability to 1) translate natural language to create inputs to an algorithm, and 2) run the algorithm step-by-step. However, the benchmarks also test 3) selecting the right algorithm or creating the right algorithm for the problem. 
I am not sure (3) is tested in the current setup, as the in-context examples use the same algorithm that we want the LLM to follow. ## Regarding Reviewer tn6w comments I agree with the reviewer's critique regarding the paper's readability. In my opinion, the presence of full-page prompts in the main paper reduces the space available for exposition and experiments, pushing the material into the appendix. I believe if the revised versions reduce the space taken by the prompts, the authors may be able to pull back some of the material from the appendix to the main draft. I also agree that in some sense IRSA seems like "extremely structured" CoT prompting, and that the presence of various gradations of CoT prompting types in the ablations would strongly improve the work. I however disagree with the reviewer on the difficulty of creating an algorithm's corresponding IRSA prompt. If an algorithm can be specified in pseudo-code, it is reasonable that a (python) program can be written to 1) consume the pseudo-code and convert it to executable code, and then 2) follow the algorithm and print all state variables at each step for some example inputs. Furthermore, the state can be pruned to show only relevant variables (those which change over the course of the algorithm's execution, etc.). I believe the key difficulty lies in crafting the algorithm itself, which I think this work does not address (and does not target addressing). --- Reply to Comment 1.1.1: Title: Thanks for reviewer AAHn's comments and insightful discussion (1) Comment: We really appreciate the great comments from reviewer AAHn, as well as the comments on the significance of our work, namely that it is `of high interest to the community` and that `this insight will likely intrigue researchers using LLMs at large`. We are glad that our clarifications further improved your appreciation of the paper and that you are leaning toward raising the grade. We try to address all remaining concerns and questions from the reviewer as follows. 
(1-a) **Precise execution**. Thanks for clarifying your argument. Re precise execution, with LLMs there are no guarantees (even GPT-4 still “hallucinates”, and the figure in Appendix A.3.2 is most likely a demonstration of this; it does not undermine our main contributions), and our main point is that the execution can actually be much closer to “precise” **than previously thought**, though never as accurate as in LLM-in-the-loop (or LLMs as translators, as we put it in the appendix). Moreover, in a setting with a `higher presence of recurrent patterns in its state-transition sequence`, the potential negative effect of local history patterns is very likely to be reduced by the fragmented and skip attention prompts (by removing or skipping some repetitive patterns/states in the context). In summary, we believe the combination of all ingredients in our proposed IRSA prompts will contribute to more precise execution and better control of LLMs. We hope this clarifies the potential confusion here and makes sure we're on the same page. (1-b) **Use case and motivation for decoupling translation and execution**. Re motivation, part of our point is to inform the communities who feel that an indication of an LLM’s higher “cognitive abilities” may be its increased ability to (appropriately) execute algorithms such as constraint satisfaction, search, or graph traversal, or even just to keep track of multiple alternatives. Because the token stream of the LLM can be used as memory, and it can follow clear instructions well, these abilities, as long as we allow CoT, are already there. So, part of why this paper should be seen by those communities is to help them decide what and how to test. Note that mixing algorithmic and common-sense/associative reasoning can be quite subtle. 
If the LLM can transform the problem into a form executable by an algorithm, and can execute it, but can also make an outside API call to execute it, then which parts of the algorithm will just be explained to the LLM and which will be executed exactly by that API will depend on the developer, and the application may require back-and-forth between the LLM’s processing and API calls. E.g., if we want an LLM to create a few initial guesses, and then reason through them to find the best answer, this may be easier to do in an LLM-only world. On the other hand, e.g., if a landscape architect wanted to come up with a prompt that would cause an LLM to iterate over possible combinations of plants for a garden given some constraints, running a CSP outside of the LLM may be more of a headache than justified, esp. given that the architect will select/massage the result anyway. (Someone said that English is the hottest new programming language, but perhaps it will be by using it to demonstrate execution paths, possibly by people without a CS degree, rather than by translating language into code). And given the success of fragmented prompting, we expect that people will come up with some pretty creative solutions that mix algorithmic and common sense reasoning. Given the above thoughts, there are plenty of exciting use cases, and we've discussed a few of them in the `Possible consequences` section in A.3.1. Reviewer AAHn mentioned an LLM-in-the-loop system that is closely related to the section `Hybrid models - LLMs as translators`. Another potential scenario could be in environments where traditional programming languages aren't feasible or where non-specialists require a more naturalistic interface for algorithm execution. For instance, a user might need to modify or execute a procedure in real-time through natural language, without knowing the specific code. Such IRSA-enabled LLMs can bridge this gap, providing both flexibility and precision. 
Following the reviewer's suggestions, we will incorporate more concrete examples in the earlier part of the paper to better ground the reader's motivation.
Summary: The paper proposed several novel prompting methods that could trigger GPT-3 to perform the iterative behavior needed for executing algorithms with loops. The main technique the paper presented, IRSA, is to use highly structured prompts that contain information about the unrolled execution trace, program states, and a detailed explanation of the motivation of a specific action, thus bringing strict attention control to LLMs and helping them reason about the procedures to get the solution. Based on the proposed IRSA method, the authors also introduced two alternative prompting methods: Fragmented prompting and Skip attention. The fragmented prompting technique strips out some of the iterations in the full execution trace, thus enabling the prompt to contain more diverse scenarios under a limited prompt length. The skip attention technique explicitly marks program state information with a special token and puts emphasis on the original prompt that serves as a demonstration to the LLM and on the last execution state from which the LLM can continue its execution, thus also bringing LLM server-side and client-side optimization opportunities. Meanwhile, the authors also briefly discussed the automatic generation of the proposed prompts using LLMs. The proposed prompting methods were evaluated on various tasks whose solutions involve loops. The evaluations show that the proposed methods can achieve state-of-the-art results on multiple tasks. Strengths: 1. The paper showed great originality and insight in that the authors identified the reason LLMs fail on tasks involving iterations, considered related Turing machine concepts, and designed several novel prompting methods incorporating highly structured information about the unrolled execution trace, program states, and a detailed explanation of the motivation of a specific action, bringing strict attention control to LLMs and helping them reason about the procedures to get the solution. 2. 
The proposed prompting methods showed state-of-the-art results on multiple loop-involving tasks. 3. The proposed Fragmented prompting method can be a promising technique that could encode diverse scenarios while keeping the prompt relatively short. This can be helpful for working with LLM APIs. 4. The proposed Skip attention prompting method can also be promising in that it emphasizes the concept of program state in the context of using LLMs as general Turing machines. This technique could also help reduce prompt length, and with adequate supporting modifications and implementations on LLMs, this can be a solid base for future work. Weaknesses: The overall presentation of the work can be relatively hard to comprehend for the readers. Especially for the presentation of the proposed Skip attention, it could be better if the authors provided a figure that briefly demonstrates the idea of the server- and client-side implementations of the method. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. From the presentation of the work, it seems like the potential of the proposed Skip attention method was not fully demonstrated due to the prompt length limitations of the LLM APIs. Can the authors further evaluate the method on other LLMs (with online APIs or a local deployment) that accept longer prompts? Is it possible to adopt some encoding scheme that further reduces the prompt length, thus enabling the method to be applied to more tasks? 2. Could you provide some quantitative results of the experiments on prompting to compile a program? For example, the success rate of triggering GPT-3 to execute iterative algorithms on a certain task such as LCS. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 2ucu for acknowledging that our work showed great originality and insights, that our prompt methods showed state-of-the-art results on multiple tasks, and that the proposed Skip attention and Fragmented prompting are promising techniques forming a solid base for future work. Below we address all the concerns and questions from the reviewer: (a) **Presentation and explaining Skip attention**. We address the flow (see global response 2) and demonstrate the ideas behind Skip attention more clearly; see the uploaded figure illustrating the implementation of skip attention. (b) **Prompt length limitation and longer context length**. Regarding the prompt length limitations, demonstrating skip attention on bigger tasks with prompt length restrictions could be explored further by utilizing shorter or more concise syntax within the IRSA prompts. In fact, skip attention allows prompt designers to address some of the length limitations that come from unrolling potentially long iterative algorithms, particularly on bigger tasks, while simultaneously avoiding confusion from accidentally generated patterns. Although the strategy should increase the number of tokens that can be generated (by effectively removing some generated text to make room for newly generated text), overall token limitations on how much the model is allowed to generate will always create a bound on the problem. That said, new models, unavailable at submission time, indeed have an up to 8 times larger token limit, which will allow us to explore LCS for longer sequences in the final paper. (LCS has space requirements proportional to the product of the sequence lengths, and IRSA needs to see the whole state.) Regarding the encoding of the state, that may perhaps be possible with run-length encoding of the DP matrix, or something similarly clever. But on the other hand, new models and architectures (with sparse attention, for instance) are extending the token limit anyhow. 
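To make the token-budget point in (b) concrete, the standard LCS dynamic program below shows why the state grows with the product of the sequence lengths: the full DP table - the state an IRSA prompt would have to keep restating - has (len(a)+1) * (len(b)+1) cells. This is a generic sketch of the classical algorithm, not the paper's actual prompt encoding:

```python
def lcs_dp(a, b):
    """Longest-common-subsequence DP. The table has
    (len(a)+1) * (len(b)+1) entries, which is the state size an IRSA
    prompt must carry through the generated text, so token limits
    bind quickly as the sequences grow."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp  # dp[m][n] is the LCS length

def state_cells(a, b):
    """Number of DP cells, i.e. the per-step state an IRSA prompt restates."""
    return (len(a) + 1) * (len(b) + 1)
```

For two 30-character strings this is already 961 cells per restated state, which illustrates why a 4k-token limit was binding for longer LCS instances.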
(c) **Prompting to compile a program**. In terms of providing examples of triggering execution by prompting to compile, we used prompt A.2 to compile an execution path in Prompt A.3 and used both to induce IRSA on LCS problems, the results being shown in Table 2. We can make this more clear as an example of using GPT-3 as an interpreter/compiler. We did not systematically study the interpretation of, for example, random programs, which could be a good topic for future work.
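The client-side skip-attention idea discussed in this review and rebuttal (keep the demonstration prompt plus only the latest marked state, dropping the intermediate transitions) can be sketched as a small loop. Everything here is our own assumption for illustration: the function names, the plain-text state lines standing in for the paper's special state token, and "Final" as a stop marker:

```python
def skip_attention_prompt(demo, latest_state):
    """Assemble the next query: the fixed IRSA demonstration plus only
    the most recent state, skipping intermediate transitions to save
    context tokens."""
    return demo + "\n" + latest_state

def run_with_skip_attention(llm, demo, initial_state, max_steps=100):
    """Iterate: each LLM call sees only the demo and the latest state;
    the generated next state replaces it. Using a 'Final' prefix as the
    stop condition is an assumption, not the paper's exact convention."""
    states = [initial_state]
    for _ in range(max_steps):
        nxt = llm(skip_attention_prompt(demo, states[-1]))
        states.append(nxt)
        if nxt.startswith("Final"):
            break
    return states
```

With a real API, `llm` would wrap a completion call; because any callable works, the loop is also easy to exercise with a fake model that just increments a counter state.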
Summary: The paper explores the use of iterations by regimenting self-attention (IRSA) to prompt GPT-3 to perform the iterative behaviors necessary for executing programs involving loops. The authors investigate three approaches to trigger the execution and description of iterations. The results suggest that IRSA leads to larger accuracy gains than using the more powerful GPT-4 for dynamic program execution. The authors highlight the potential applications of IRSA in education and discuss the implications for evaluating large language models (LLMs). While LLMs have limitations in complex reasoning tasks, prompt design plays a crucial role in their performance. Strengths: + Interesting problem and approach + Providing examples of prompts Weaknesses: - Presentation - Concern about reliability - Concern about "Turing machine" claims Technical Quality: 3 good Clarity: 2 fair Questions for Authors: There are a couple of points that could benefit from further clarification and discussion: 1. The paper mentions (Lines 71-73): "it is easy to mislead with a prompt with accidental alphabetical or numerical ordering, or some undetectable semantic bias," and "slight changes in prompts can yield dramatically different responses." However, the authors do not provide any specific syntax/structure for the IRSA query prompting scheme using the CoT paradigm. It is unclear whether changing a certain text ordering in these prompts will produce the same performance. In other words, this raises concerns about the reliability and consistency of LLMs in generating accurate outputs. Can you elaborate on the robustness of IRSA to different prompt variations and its sensitivity to prompt design choices? 2. Table 1: What does guessing mean for the longest substring, logical deduction? 3. The experimental results show significant performance improvements when using IRSA on logical deduction puzzles. However, it would be valuable to understand the generalizability of these findings. 
How does IRSA perform on other complex reasoning tasks beyond logical puzzles? 4. The paper claims that IRSA outperforms GPT-4 on dynamic program execution but lacks a thorough comparison or analysis of GPT-4's performance in the main paper. Can you provide more insights into the limitations of GPT-4 and why it fails to consistently execute code without IRSA prompting? 5. Lines 268-269: Is there proof of this? It seems a far-fetched and ambitious statement that LLMs are similar or equivalent to Turing machines. Moreover, the authors mention "... becoming a Turing machine" in the title (a huge claim), but only two other places in the paper mention "Turing machine". The authors should add more details to justify this more concretely. 6. The GitHub URL does not seem to be anonymous. 7. The names of the methods "Iteration by Regimenting Self Attention (IRSA)" and "Skip attention" are misleading. They make the reader feel that they relate to the internal attention mechanism of a GPT, but they actually deal with query/prompt engineering. 8. The paper seems to have been written in a hurry, so there are a couple of typos and some discontinuity while reading. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: * The authors make huge claims, but do not discuss the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging that our problem and approach are interesting and for noting the potential applications of IRSA in education. Below we address all the reviewer's concerns and questions. (a) **Reliability**. We are using much stricter attention control than usual CoT prompts; however, even in our case there is a certain degree of sensitivity to formatting. There is an example in Table 1 with bubble sort where two different structures for prompts, while both being much better than the baseline, yield different accuracy on the same problem (100% vs 74%). (b) **Guessing baseline**. The motivation for guessing strategies is explained in lines 174-177. For the longest substring, with a problem library of fixed-length strings, there is some most common length of the longest substrings among the problems, and guessing is described as guessing the most frequent length for every problem. For logical deduction, the dataset was balanced, with 5 potential answers for every problem, and without any bias in the dataset, guessing is correct 1 out of 5 times. A guessing strategy that exploits the most basic imbalances in the data made more sense to us than just guessing uniformly among possible values. Since almost any ML algorithm would learn those biases quickly, the difficulty of the task is not well represented by uniform guessing. (c) **Other complex reasoning tasks**. We were using the logical deduction example to address the point that many tasks in the benchmarks address problems for which there are already algorithms. If you have a natural language representation of a reasoning task, we have shown you can get the LLM to translate the problem into a canonical form using CoT reasoning. If algorithms are known and CoT reasoning is allowed, we can get the LLM to first translate the problem into a canonical form where the algorithm can be applied, and then it can even apply the algorithm itself.
If other complex reasoning tasks are of this sort (there is an algorithm that can be applied to solve it and the problem is in a natural language form such that it can be translated to a form the algorithm can work on), then we posit that it would be possible with sophisticated enough CoT prompting, using techniques described in our paper, to solve such tasks. As we discussed in the general response, this is important for the community to understand. CoT designs can be very sophisticated (as IRSA is), so comparing LLMs is highly dependent on prompting. We used Logical Deductions to show how large the difference can be, raising the question: Was that the best one can do? Do we ever know what the best is? (d) **Discussing the limitations of GPT-4**. Regarding the limitations of GPT-4, we discuss this in the Appendix (lines 599-615, section A3.3). Do you suggest it should be moved to the main paper? In the case of the LCS problem, GPT-4 and GPT-3 do not need to be prompted with code, as they can generate it. Thus one might imagine that a CoT that asks for the code to be written and then executed step by step should perform similarly to our prompts. Yet, GPT-4, while better than GPT-3 under such prompts, still only gets 63% accuracy compared to 93% using IRSA. To illustrate why, we showed how different problems get processed differently using the same prompt. Prompts A.11 and A.12 both ask GPT-4 to recall a dynamic programming algorithm in Python and write down its execution with intermediate steps to compute the length of the longest common subsequence for two sequences. Their only difference is the strings used as input for the problem. Although GPT-4 successfully recalls the algorithm in Python for the problem and makes some attempts to execute it, it is not consistent in how it shows the execution with intermediate steps.
In the case of prompt A.11, GPT-4 shows the initialization of a table, then immediately displays what the completed table looks like after iteration, and gives its answer. However, in prompt A.12, GPT-4 initializes the table and then displays a couple of steps, before jumping again to the end table and giving its answer. This inconsistent processing means that some answers may be (impressively) correct and thorough, while in other cases the LLM will just skip steps or start hallucinating. A more regimented prompt showing key fragments, or the entire execution (as in our examples), seems to be needed to get consistent results. (e) **Claims on Turing machine**. Regarding Turing machines, see the general response. (f) **Use of "attention" in names**. Regarding the use of 'attention' in the names, all of this is possible because we can direct the attention of the model. Basic IRSA does it with prompt design, and skip attention literally prevents the LLM from seeing some of its previously generated tokens, because we want it to look only at the last full state. This is doable on either the server side or the client side (see the uploaded illustration figure). The client-side solution of skip attention keeps reprocessing the prompt and calls the server again to transform the new state; the server could instead simply keep the states associated with the prompt and block attention to the text generated before the latest <state> ... </state> structure, thus saving on token quota and computation (the implementation would be similar to how stop words are implemented by the OpenAI API). (g) **Discussing the limitations**. Regarding the limitations, we discuss them in a few places. First, we show that there is variation in performance on the same task with different IRSA-styled prompts (though both are much better than the baseline, Table 1).
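The client-side variant of skip attention described above can be pictured as a simple loop (an illustrative sketch, not the authors' implementation; `llm` stands for any text-completion callable, and the `<state>...</state>` extraction is an assumption about the prompt format):

```python
import re


def run_skip_attention(llm, prompt, initial_state, max_steps=50):
    """Client-side skip attention (skip-to-state): every call sees only
    the fixed prompt plus the latest state, never the full history of
    generated text."""
    state = initial_state
    for _ in range(max_steps):
        # Reprocess the fixed prompt plus only the most recent state.
        completion = llm(prompt + "\n<state>" + state + "</state>\n")
        match = re.search(r"<state>(.*?)</state>", completion, re.DOTALL)
        if match is None:
            # No new state emitted: the model declared the end of execution.
            return state, completion
        state = match.group(1)  # discard everything but the newest state
    return state, ""
```

Because each call is Markovian in the state, token usage per call stays bounded no matter how many iterations the algorithm runs for, which is the saving discussed above.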
Second, the appendix has a section on limitations, including a worrying problem with the LLM's cryptic dependence on the ordering of statements (Fig. A.1). However, with some experience, summarized in Section A.3.2, we found it possible to find prompts that lead to significantly superior results compared to SOTA. --- Rebuttal Comment 1.1: Title: Thanks for the review; looking forward to your feedback on our rebuttal Comment: Dear Reviewer HgMc, Thank you for the time and expertise you've shared through your feedback on our paper. We've taken your comments to heart and have made appropriate revisions in response. As the author-reviewer discussion phase nears its conclusion, we wish to bring to your attention the constructive discussions we've had with the other reviewers. Through these engagements, we've made substantial progress in refining our paper and emphasizing its core contributions. Encouragingly, a consensus among the majority of the reviewers seems to be emerging, recognizing the significant contributions our IRSA prompting offers to the broader community. We understand the many commitments reviewers such as yourself have, and we truly appreciate the time and effort you've already dedicated. We genuinely hope that our revisions and responses resonate with your observations, and we are eager to incorporate any further suggestions you may have. Thank you once again for being a pivotal part of this academic journey. Your continued engagement and feedback are invaluable to us. Best, The authors of Paper 8965
Rebuttal 1: Rebuttal: We thank the reviewers for the great comments and constructive suggestions. We are encouraged that reviewers agree that the paper provides an "Interesting problem and approach", that fragmented prompting and skip attention, "which pair well together", are original and can have applications beyond prompting ("Skip attention…, I'm surprised past work like Nye et al 2021 didn't take an approach like this"), and that "methods showed state-of-the-art results on multiple loop-involving tasks." We have addressed all comments and questions from reviewers with thorough clarifications and discussions via separate responses. There are three common comments we'd like to address. 1. **Motivation/precise language on Turing machine**. TMs are a theoretical construct, involving infinite memory, able to execute arbitrary algorithms. Practical computers execute only algorithms that fit their memory. That an LLM can be Turing-complete, aside from its finite memory, is not a huge claim per se. Simple 2-tag systems are Turing-complete and GPT can emulate them (until it runs out of tokens). Schuurmans [1] has shown that an LLM with access to infinite memory can emulate TM U_{15,2}. The spirit of our title is that GPT is not only theoretically able to execute arbitrary algorithms with loops, but that there is a way to instruct these models to do so with natural language. Program execution has been of interest to the LLM communities, but the results were underwhelming, and we show that they can be much better, with two caveats. One, like computers, LLMs have limited memory (tokens), which can be ameliorated with skip attention. Two, there are no theoretical guarantees. But, empirically, given our results, it is possible to prompt an LLM into a high level of consistency in execution. To the community of researchers studying the ability of LLMs to execute algorithms, this is immediately important.
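For concreteness, a 2-tag system is nothing more than a queue-rewriting rule, which is why emulating one is within reach of regimented step-by-step prompting. A minimal sketch (the rules a→bc, b→a, c→aaa below are De Mol's well-known Collatz tag system, used here as an assumed illustration, not taken from the paper):

```python
def run_2tag(rules, word, max_steps=1000):
    """Run a 2-tag system: read the first symbol, append its production
    to the end of the word, then delete the first two symbols; halt
    when the word is shorter than two symbols."""
    word = list(word)
    for _ in range(max_steps):
        if len(word) < 2:
            break
        word.extend(rules[word[0]])  # append the production of the head
        del word[:2]                 # delete the first two symbols
    return "".join(word)


# De Mol's Collatz tag system: from "aa" (n=2) the word passes
# through "bc" and halts on the single symbol "a" (n=1).
print(run_2tag({"a": "bc", "b": "a", "c": "aaa"}, "aa"))  # prints: a
```

Each step is a trivially local rewrite, exactly the kind of regimented transition an IRSA-style prompt can demonstrate; starting from "aaa" (n=3) the word passes through a^5, mirroring the Collatz step 3 → 5.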
And, several benchmark reasoning tasks target word problems that are solvable algorithmically, as long as the wording can be translated into a machine-readable form. The community studying CoT in that context should understand that LLMs can perform both the translation and the execution (as illustrated in Logical Deductions). Defined as anything with step-by-step instructions, CoT prompting includes IRSA solutions, which are approximately equivalent to just using an LLM to translate the word problem into algorithm input and then running it on a computer (Section A.3.2, LLMs as translators). 2. **Presentation flow**. While some reviewers consider the paper "well presented" with "the prompt examples very useful in understanding the idea", others find it difficult to parse in places. The flow will be easily but significantly improved using the reviewers' suggestions. Prompt 1 is indeed sufficient to explain the basic IRSA, and the other examples can be listed after describing its features: it covers the full execution path; the keywords can be chosen arbitrarily, but should be used consistently (instead of "set" we could have used "change", "State:" could be replaced by "<state>", and so on); importantly, the basic IRSA prompt shows the conditions for starting a new iteration as well as for when all the iterations are finished. We also upload an illustrative figure on how it works on the server/client. 3. **Relationship with Nye [2]**, which also involves execution paths in their scratchpad. Nye [2] targets much simpler programs, arguing that "GPT-3 struggles to perform addition on numbers with greater than 3 digits" and that LLMs "struggle to predict the result of executing Python." They train/tune models with execution paths. Also, with low success, they attempted in-context learning (their Appendix C).
We show that GPT-3 can produce execution paths for a *fixed algorithm* on *new* inputs given just a prompt that describes how that algorithm should be executed (no tuning). In other words, we described a technique for programming LLMs, not for teaching them how to interpret (Prompt 1 is a Bubble Sort "program for GPT"). However, our compiler/interpreter Prompt A.2 does show an example of how we can prompt GPT into creating an execution path for a new program, and we used it to get an execution path for LCS that we then used in our experiments as an LCS "program for GPT". The difference between our approach to interpretation and that of Nye [2] is that we described the syntax step by step (from assignment, to memory retrieval, to the basic loop) rather than giving a few examples of programs/executions and asking the LLM to infer the commonalities. In addition, the Nye [2] Appendix C falls into a few pitfalls of non-linear exposition we described in Section A.3.2, e.g., by not explaining an action before it is performed and not clearly describing iteration decisions at the right time. An example of the IRSA-style execution path for the first example in Nye [2] Appendix C is here: https://platform.openai.com/playground/p/d5lzF0FOQG31NcyGAhE96JuZ?model=text-davinci-003. With this prompt, GPT can compute the output of their first f() function for different inputs. Furthermore, when the Python program precedes that execution path in the prompt, GPT may execute a new program, too (although a better compiler/interpreter is our Prompt A.2). E.g., the 1st program in Nye Appendix C and the execution above are used to interpret their 2nd program here: https://platform.openai.com/playground/p/yGFDaSaVtZWdDOZOP37sy7Dw?model=text-davinci-003 The reviewers correctly observe that skip attention can be used in Nye [2] models, and more generally in training new architectures. **References**: [1] Schuurmans, Dale. "Memory augmented large language models are computationally universal."
arXiv preprint arXiv:2301.04589 (2023). [2] Nye, Maxwell, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan et al. "Show your work: Scratchpads for intermediate computation with language models." arXiv preprint arXiv:2112.00114 (2021).
NeurIPS_2023_submissions_huggingface
2023
Summary: This work introduces Iterations by Regimenting Self-Attention (IRSA), a set of LLM prompting techniques for producing repetitive, algorithm-like behavior that can be useful for a range of tasks, such as carrying out a sorting algorithm or solving a logic puzzle. There are 3 techniques discussed: 1) "Basic IRSA", which is a chain-of-thought prompt that looks a bit like an execution trace of some natural-language-like pseudocode - there's a lot of repetitive structure, the current state is verbosely repeated after each step, and changes to the state are explicitly described before they happen. 2) "Fragments", which is the idea that instead of prompting with a full trace of an algorithm you can just prompt with random unordered individual steps of the algorithm to prepare the model for executing a random step. 3) "Skip Attention", where only the most recently produced state is attended to (plus the original, fragment-based prompt), since changes to the state should be independent of the history of states – this cuts down on computation and helps the LLM not get confused by patterns in its recent output. Strengths: - Skip attention and fragments (which pair well together) are great ideas, and are original as far as I know – other reviewers can correct me if I'm wrong. In many algorithms (and in fact, in the execution of interpreted code in general) only the current state matters, as opposed to the history of how the state has changed. Only showing the most recent state to the LLM makes a lot of sense. It saves on computation cost and makes long-running algorithms feasible, since there's no need to attend over the whole history of generations (which would become a huge problem as an algorithm runs for dozens or hundreds of steps). As the authors point out, this Markovian setup of not looking at the history of states also means that the LLM won't get confused by patterns in its recent history.
- The "fragments" approach is a clever way to get the LLM used to this idea of seeing somewhat random states and needing to do a single algorithmic step for each one. - Skip attention makes so much sense, I'm surprised past work like "Show Your Work" (Nye et al 2021) didn't take an approach like this, since I imagine it would work fine with executing interpreted Python programs (where the state is the set of local variables/values along with the current line number in the program, and the LLM just has to output a next set of local variables and next line number). - More generally, getting LLMs to do things that look more like rigid computation can be difficult and I think that this is a paper with a pretty good evaluation of a particular approach to this problem, and would be useful for the NeurIPS community to see. - The evaluation is reasonable and shows unsurprisingly that the skip attention method can work great and generalize to very long sequences (eg bubble sort with 25 steps). Weaknesses: - The descriptions of what IRSA is were quite difficult for me to understand. The first line of section 2, the section describing IRSA, is "Prompt 1, as well as the prompts 2, A.4, A.5, and A.6 in the Appendix, illustrate the basic IRSA." (line 66). Written as is this feels a bit overwhelming as it suggests that I need to look at 5 different prompts (including 3 in the appendix) and try to look for the common features among them to figure out the method. Also, in reality many of these references (2, A.4, A.5, A.6) are actually going to show up later on in places where they're discussed so it's okay if I don't look in detail at them now, but since I haven't been told that I feel some need to dig them all up before continuing. - A flow for section 2 that would be much more understandable to me (and I believe others) would be the following. 
This is just one suggested way of doing it and I think there are many valid ways that would be widely understandable (you don't need to do the below), but the current flow is difficult to understand: - Give a brief but concise description of the key feature of IRSA (similar to lines 71-73 right now) so we're primed with looking for that *before* we're told about any prompts to look at. I might even suggest that instead of putting the CoT comparison at the end (lines 78-82), it might flow nicer to actually frame it *in terms of CoT / as an extension building on CoT* since that is a closely related framework many readers know about. In general after reading the paper I actually still find I have trouble precisely articulating what makes something count as "basic IRSA", so presenting it from the start in terms of its relation to CoT might be helpful. - Tell us to look at Prompt 1, and briefly walk us through what we're looking at / why this is IRSA. - Mention that the precise keywords/format of Prompt 1 ("EXECUTION", "Prep", "EndPrep", "Iteration", the indentations, "State:") are not important (IRSA is not a set of specific keywords to use) and point to Prompt 2 as an example of something that looks different on the surface level but is still IRSA. - At this point, you might parenthetically refer to the 3 appendix prompts as additional examples used in the evaluation that the reader can look to if they want more. - Again, to be totally clear, I'm not prescribing this format, I just find the current flow difficult to understand so I took a stab at restructuring it, but there are many other ways of doing so that would also flow well. - I'd like to see some discussion of how this relates to the paper "Show Your Work: Scratchpads for Intermediate Computation with Language Models" (Nye et al 2021), which is currently just referenced by the paper in the list of CoT-related works without specific discussion.
In that work, the authors showed that while LLMs are bad at directly predicting the output of a Python function called on certain inputs, they could instead have the LLM repeatedly output the current state (what the variables are set to) plus the next line of the program to run. That was essentially a form of CoT with extra structure. IRSA seems somewhere in between the strict state/instruction format of Show Your Work and the free-flowing reasoning of more general CoT. I think an explicit comparison to that paper (and/or any other paper that does some form of rigid CoT) is important, so it's clear how this work should be viewed in relation to others that have structured CoT. - I'm generally coming out of this paper still somewhat unsure what precisely "basic IRSA" is (i.e., without fragmenting or skip attention), and I felt I could only gesture towards some of its important features when writing the Summary section above. - It feels related to CoT and I'd like to understand it in terms of that. One section comparing CoT to IRSA says "a significant distinction lies in the number of reasoning steps, which is limited and fixed in *usual* CoT applications" (emphasis is mine) (lines 78-82) but this is not always true (e.g. in the Show Your Work paper above – so does that make Show Your Work an instance of IRSA, with the present paper proposing a general framework encompassing it?). - (minor weakness) Section 2.4 sounds interesting but is largely confined to the appendix. It doesn't really flow with the rest of the story and as far as I can tell isn't used in the evaluation later. But I'm a bit torn because it is actually quite cool and maybe it doesn't hurt to have as just an aside (though maybe with a slightly more clear, verbose explanation). I don't terribly hold this one against the paper, it just feels a little out of place.
**Overall** Though it is difficult to follow the flow of this paper in places (it took me quite a while to understand), and I'm still not totally clear on what makes something "basic IRSA" or how it relates to prior work like Show Your Work, I think that, in particular given the contributions of skip attention and fragments, this would still be valuable work for the NeurIPS community to see. The positives outweigh the negatives in my view, but with revisions around the points mentioned above I would be more supportive. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - (as discussed in Weaknesses) How does IRSA relate to the "Show Your Work" paper? - (as discussed in Weaknesses) How, precisely, does IRSA relate to CoT? - While full LLM prompts can be useful, using 3 full pages for three full-page LLM prompts is a lot, and you might consider abbreviating some of these prompts to keep the key bits (while leaving the full version in the appendix). - "world" -> "word" typo in the bolded text of line 85 - "the Prompt 2 Appendix" (line 233) seems like some sort of typo; is this Prompt 2 or Prompt A.2 or something else? - For Table 1, instead of saying Prompt 1 and Prompt A.4, I would say things like "Base IRSA (Prompt 1)" or something, so that at a glance you can understand these results without having to remember / look up what exactly Prompt 1 and Prompt A.4 are. It'd also be helpful for entries like "Longest substring" to actually say which variant of IRSA was used for this result (or to put that in the caption, or as separate columns, or do anything else that makes it easy at a glance to see which IRSA method you're talking about). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors address limitations adequately Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for acknowledging that our contributions of skip attention and fragments would be "valuable work for the NeurIPS community", that our approaches are novel and original, and that our evaluation reasonably shows the skip attention method can work great and generalize to very long sequences. Below, we address all the comments and questions from the reviewer. (a) **Writing flow**. Thanks for the thoughtful comment! Following your suggestion, we will split line 66 to indicate that Prompt 1 is enough to demonstrate basic IRSA and that the further prompts are just other examples of basic IRSA. We like the idea of explaining Prompt 1 and emphasizing that IRSA is not about which specific keywords or structures are used, but about the consistent and repetitive use of those chosen within a single prompt. In other words, we will restructure Section 2 to flow closer to the reviewer's suggestion. Regarding our use of full prompts, our concern was that partial prompts may confuse the reader, since they may not be able to understand the basic idea without going to the appendix to see the rest (not to mention the confusion that may arise once the reader moves on to the fragmented prompts). With the additional page allowed for the final version, if accepted, we will make better use of the space for the full prompts and their explanations, following your suggestions. (b) **Typos and captions**. Thank you for your notes on readability; we will address those. (c) **Relation to "Show Your Work"**. See also the general response regarding the flow as well as the relationship with "Show your work...", Nye et al.; we also give examples of IRSA prompts for their, much simpler, problems.
Note that we primarily focus on ways to explain to an LLM how to execute a given algorithm, rather than on teaching it how to execute Python; the former is an instance of programming an LLM in its "native language", the latter an instance of interpreting or compiling, which we also touched upon in the paper. However, it is also true that skip attention can be used in Nye et al. or other models. We appreciate the comment on applicability beyond our work. It also appears that Nye et al. could have had better results even just by regimenting their prompts and training data a bit more strictly, as we show in the [prompts](https://platform.openai.com/playground/p/d5lzF0FOQG31NcyGAhE96JuZ?model=text-davinci-003) in the general response. For example, their first example in Appendix C would be better given like this:
```
Program:
def f(v0):
    v0 += 0
    v4 = 2
    while v4 > 0:
        v4 -= 1
        v0 *= 2
    return v0
Call: output = f(3)
BEGIN
We first pass the input of f(3) to the function. It is assigned to v0.
state: v0=3
command: v0 += 0
What is v0 in the state? 3. v0 is set to v0+0=3+0=3. New state is:
state: v0=3
command: v4 = 2
We set v4 to 2 and keep v0 as is. New state is:
state: v0=3, v4=2
Iteration:
command: v4 -= 1
What is v4 in the state? 2. v4 is set to v4-1=2-1=1. New state is:
state: v0=3, v4=1
command: v0 *= 2
What is v0 in the state? 3. v0 is set to v0*2=3*2=6. New state is:
state: v0=6, v4=1
check for iteration end. v4 is 1. And 1>0 is true, so we need more iterations.
Iteration:
command: v4 -= 1
What is v4 in the state? 1. v4 is set to v4-1=1-1=0. New state is:
state: v0=6, v4=0
command: v0 *= 2
What is v0 in the state? 6. v0 is set to v0*2=6*2=12. New state is:
state: v0=12, v4=0
check for iteration end. v4 is 0. And 0>0 is false, so we end the iteration.
Final state is:
state: v0=12, v4=0
What is v0? Answer: 12.
END
```
In fact, as the second playground link shows, following this with their second program prompts GPT into executing it.
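As a quick sanity check (ours, added for this response, not part of the paper or of Nye et al.), the Python program in the trace above can simply be run; its result matches the trace's final state:

```python
def f(v0):
    v0 += 0          # no-op assignment from the traced program
    v4 = 2           # loop counter
    while v4 > 0:
        v4 -= 1
        v0 *= 2      # doubles v0 on each of the two iterations
    return v0


output = f(3)
print(output)  # 12, matching "state: v0=12, v4=0" and "Answer: 12"
```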
--- Rebuttal Comment 1.1: Comment: I appreciate the authors' thorough response and I'm glad to hear about the flow revisions. I really appreciate the extensive discussion of how this work relates to Nye et al (and the executed playground examples!); this clarifies things for me. Having some very brief mention of this relationship in the final version's related work or intro would be useful context for where this work sits. My original review had some concerns around clarifying how IRSA relates to CoT, and what precisely makes something count as basic IRSA. I hope the authors' planned improvements to the explanations of Prompt 1 at the start of Section 2 will help readers with this. However, I wanted to spend a little more time below discussing this to make sure that it is clear in the final version, since the authors didn't explicitly bring up the CoT relation in the rebuttal (though I appreciate the time they spent on the particular Nye et al example): 1. The paper's current wording in certain places feels like it's claiming that IRSA is *distinct* from CoT prompting, when really it seems to be a particularly effective *variant* within the broader paradigm of CoT prompting. I'm assuming the authors intend the latter interpretation of IRSA's relation to CoT because in the general response to all reviewers, the authors say "Defined as anything with step-by-step instructions, CoT prompting includes IRSA solutions". In the paper, lines 78-82 and 149-158 seem to put CoT at odds with IRSA ("However, a significant distinction..." and "Although similar to CoT prompting, there are notable differences..."), and very slight rewording could clarify that these features don't actually create a distinction between CoT and IRSA, but rather are the specific features that make an instance of CoT considered IRSA. 2. Assuming the authors frame IRSA as a variant of CoT, then what makes something IRSA is, in my understanding: - A.
The prompt shows all state changes and explains each change before it occurs, using a rigid, repetitive explanation (71-72) - B. The prompt contains a condition for declaring the end of execution, so it can run for an unspecified number of iterations (80-82) - Please correct me if I'm wrong or there's more I'm missing - Assuming A and B are what define IRSA and are *both* things that make a variant of CoT considered IRSA, I think some revisions could clarify this around the two places where IRSA is detailed and CoT comes up: - In 78-82, right now CoT is brought up right between A (71-72) and B (80-82) even though both A and B are about how IRSA differs from CoT, so just putting the CoT mention either before or after would be fine. - In 149-158, the same concern applies where CoT appears between A and B. - In 149-158 the version of A is "Prompting with highly structured single execution path examples", which is a little vaguer than the 71-72 version, but I think that's okay since it's consistent with the language used in the abstract and intro. - In 149-158, after the mention of CoT, B is listed ("iterative reasoning that is repeated until the stop condition is reached") but then yet a third thing, call it C, is listed: "Furthermore, the execution path example for each task is deliberately chosen to be out-of-distribution". This leaves me a little confused about whether C is a third feature of what makes something IRSA that wasn't mentioned in the 78-82 section (in which case perhaps it should be)? Or is it just an elaboration on how B allows for more flexible examples (in which case maybe it shouldn't be listed as two separate distinctions but rather as B plus a note on the flexibility that B allows for)?
- My broader intention with all of this is that I came away from this paper with trouble listing the essential features of IRSA and how it relates to CoT (even more broadly than with how it relates to Nye et al, but I very much appreciate the authors' extensive discussion of the relation to that!) and so in this follow-up I'm trying to nail down the points that really confused me and give some thoughts/suggestions on how they could be made more clear. I think there are some super exciting ideas in this paper and I want to make sure the groundwork laid by the "basic IRSA" section is clear. Of course, let me know if any of these suggestions are coming from my own misunderstanding of the authors' intentions. Thank you again for the care you've taken in the rebuttal/revisions --- Reply to Comment 1.1.1: Title: Thanks for reviewer n7ti's comments and insightful discussion Comment: We thank reviewer n7ti a lot for their always thoughtful and insightful comments and discussions. We are happy to see that our discussion on Nye et al clarifies the reviewer's question. We will surely add them and further enrich the related work section. We now try to address the remaining concern on **clarifying the definition of basic IRSA and its relationship with CoT**. First, we thank the reviewer for pointing out the text lines to help clarify the definition of basic IRSA for a better understanding of its relationship with CoT. As pointed out by the reviewer, points A and B do indeed define basic IRSA as a highly regimented prompt that can be seen as part of the CoT family (and more broadly the art of prompting, as CoT can refer to almost anything that includes hints on how to perform a task). Second, with C, we point out the practical deviation from the usual application of CoT in in-context learning. CoT was initially demonstrated, and is usually applied, by making prompts that include a few instances of a task from the dataset in a step-by-step manner. 
However, we see that when the prompt is regimented with A and B, then it often works on out-of-domain problems (e.g., sorting letters, or objects by size instead of numbers, even though the prompt was given for numbers). In particular, in the Bubble Sort experiments, our prompt demonstrated the execution not on an example from a dataset, but on a shorter sequence. Because the prompt focuses on the logic of the algorithm execution, like programs, it can generalize to longer sequences (which is related to, but not quite the same as, point B, though the reason why this generalization is possible is the same: regimented prompting that shows an LLM what to do under different conditions, thus “programming” it to execute an algorithm). Finally, when we found this out, we decided to attempt fragmented prompting, which further deviates from typical CoT applications: instead of working out full examples in prompts, it is enough to show fragments of several, achieving better coverage with much shorter prompts. And then, fragmented prompts pointed to skip-attention or skip-to-state (which can be done with basic IRSA, too, both to save on tokens and to avoid unnecessarily long text that may or may not confuse the LLM's attention mechanism; skip-to-state draws it only to the latest state). At any rate, we're very grateful for the fruitful and engaging discussion with reviewer n7ti and for pointing out that `there are some super exciting ideas in this paper`. Your suggestions are welcome and will improve the final version of the paper. We're also hopeful that our responses may prompt a reconsideration of the paper's rating. If there are any other questions, please let us know.
Differentiable Sampling of Categorical Distributions Using the CatLog-Derivative Trick
Accept (poster)
Summary: This paper proposes a variance-reduced statistic for $\mathbb{E}_{x \sim p(x)} [f(X)]$, which could be the gradient of a loss for example. The main idea (known as Rao-Blackwellization) is simple to understand and can be summarized thusly: the quantity can be rewritten to distinguish any dimension $d$: $E_{(x_d, x_{\neq d}) \sim p(x_{d} | x_{\neq d}) p(x_{\neq d})} [f(X)] = E_{x_{\neq d} \sim p(x_{\neq d})} E_{x_{d} \sim p(x_{d} | x_{\neq d})} [f(X)]$. Notationally, the difference between the two sides is how $x_d$ is drawn. On the left-hand side, $x_d$ is drawn jointly with the other dimensions: this means a one-to-one ratio of $x_d$ to $x_{\neq d}$. On the right-hand side, $x_d$ is drawn separately from the other dimensions: this allows for a many-to-one ratio of $x_d$ to $x_{\neq d}$. In fact, when $x_d$ has finitely many outcomes, all of them can be used and the inner expectation (right-hand side) can be computed exactly. The extra computation from the "many-to-one" sampling on the right-hand side is the price to pay for the variance reduction that comes from using (averaging over) more $x_d$. And that computational price is acceptable for a categorical variable. The authors show that the variance reduction is effective for a variety of tasks, whether it is the actual estimation of $\mathbb{E}_{x \sim p(x)} [f(X)]$, or other tasks (e.g. optimization) that use the statistic for some other purpose such as stochastic optimization of a loss function. Strengths: The exposition is simple and clear, and the results are varied and convincing. Figure 2 is also appreciated, as it empirically links the variance of the gradients (right panel) to the speed of the optimization (left and middle panels), which, although it is common wisdom that has motivated certain algorithms (e.g. SVRG), is not a proven result for any optimizer. Weaknesses: No major weaknesses to indicate so far. 
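The Rao-Blackwellization idea summarized above can be illustrated with a small NumPy experiment. This is a hedged sketch, not code from the paper: the dimensions, probabilities, and the test function `f` are all hypothetical, chosen only to show that summing one coordinate out exactly (the "many-to-one" side of the identity) keeps the estimate unbiased while lowering its variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names and the function f are hypothetical): D independent
# categorical variables with K outcomes each.
D, K, N = 4, 3, 2000
probs = rng.dirichlet(np.ones(K), size=D)       # probs[d, k] = p(x_d = k)

def f(x):                                       # arbitrary scalar test function
    return np.sin(x.sum()) + 0.1 * x[0]

def sample(n):
    return np.stack([rng.choice(K, size=n, p=probs[d]) for d in range(D)], axis=1)

xs = sample(N)

# Naive Monte Carlo: one f-evaluation per joint sample.
naive = np.array([f(x) for x in xs])

# Rao-Blackwellized in dimension 0: sum x_0 out exactly, keeping the other
# coordinates of each sample fixed (K evaluations of f per sample).
rb = []
for x in xs:
    x2 = np.tile(x, (K, 1))
    x2[:, 0] = np.arange(K)
    rb.append(sum(probs[0, k] * f(x2[k]) for k in range(K)))
rb = np.array(rb)

# Both averages are unbiased for E[f(X)]; conditioning can only reduce variance.
print(naive.mean(), rb.mean(), naive.var(), rb.var())
```

By the law of total variance, the per-sample variance of the conditioned estimator is at most that of the naive one, at the cost of `K` function evaluations per joint sample, which is exactly the trade-off described in the summary.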
Technical Quality: 3 good Clarity: 3 good Questions for Authors: It is not very clear from the text that the statistic for $\mathbb{E}_{x \sim p(x)} [f(X)]$ is sometimes the gradient of a loss (e.g. the ELBO). For example, line 133 refers to a gradient without a loss, without explicitly making the connection to $f(X)$. Could the authors clarify this in the text? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No specific concern on negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the concise explanation of our method and the positive assessment. Please, find below a remark about the formulas summarizing our method and the answer to your question. **Remark** Please, note that the CatLog-Derivative trick can be expressed in the following way (using your same notation): $E_{(x_{<d},x_d,x_{>d})\sim p(x_{<d},x_d,x_{>d})} \{ [f(X)] \}=E_{x_{<d}\sim p(x_{<d})} E_{x_d\sim p(x_d\mid x_{<d})} E_{x_{>d}\sim p(x_{>d}\mid x_{<d},x_d)} \{ [f(X)] \}$ This emphasizes the relation to ancestral sampling, thus differentiating it from other approaches like local expectation gradients [1]. [1] Titsias, M.K., & Lázaro-Gredilla, M. (2015). Local expectation gradients for black box variational inference. NeurIPS. **Question** We have added the following additional details to the appendix to clarify our optimisation setup for both the DVAE and the neural-symbolic experiment. 1. Clarified that the loss function is the ELBO, which is itself an expected value. Additionally, the predicted probabilities used for the computation of the ELBO are clarified as well (Appendix A2, Modelling, end of paragraph). 2. We explicitly added the target loss function $-\log P(\sum_{i = 1}^D d_i = s)$ for the neural-symbolic optimisation to the appendix and expressed it using an expected value (Appendix A.3, Modelling, end of paragraph + Eq. A.1). # Final Comments Please, let us know if there is anything you want to discuss. --- Rebuttal Comment 1.1: Title: Response from Reviewer 8h59 Comment: I thank the authors for their response and appreciate the discussion comparing CatLog to LEG, which I was not previously aware of. I am still following that discussion. So long as the differences between CatLog and LEG are clearly explained and LEG is properly referenced, I see no reason to lower my score and there still seems to be a novel contribution with empirical results to support it.
Summary: The paper discusses improved gradient estimation through multivariate categorical probability distributions. A special case of independent random variables is discussed more thoroughly, and experiments are conducted only for this case. To summarize, for independent variables, i.e. $p(X) = \prod_{i \in [D]} p(X_i)$, the following is my understanding: 1. Assume that for applying the REINFORCE trick one would use $N$ many $D$-dimensional samples $x^{(n)}$ drawn from $p(X)$. 2. This paper proposes the following: * Create $N$ many $D$-dimensional samples as in 1. * For each variable $i$ in $[D]$ enumerate all possible values (assuming $K$ many) of variable $i$ and replace $X_i$ in each $x^{(n)}$, thus creating $K$ additional samples per variable. Strengths: 1. The idea of looking 'inside' a joint probability distribution instead of treating it as a black box is interesting. For independently distributed variables the paper makes use of this factorization. 2. Experiments (Fig. 2, 3, 4) indicate that the proposed method _only sometimes_ produces better gradient estimates than REINFORCE on equal footing. Weaknesses: 3. It would have been great to have an illustration of the case discussed in Example 4.2 to see things (and symbols) pictorially, preferably at the start of the paper. 4. As discussed in line 139, the computational complexity of this approach is $\mathcal{O}(D \cdot K \cdot N)$. This looks much more expensive than what REINFORCE has, which is just $\mathcal{O}(N)$. Although for a fair comparison in the experiments the authors do allow REINFORCE to have more samples, which is termed RLOO-F. Therefore, in the following I will only compare with such baselines.\ a. The first two rows in Figure 2 have only positive results for the proposed approach, although only with slight improvements over RLOO-F. \ b. In the last row of Figure 2, GS-F is better (Gumbel-Softmax on equal footing), with RLOO-F not so far behind. \ c. In Figure 4 the proposed method does far worse than RLOO-F! 
Unless it is allowed to sample $x^{(n)}$ individually for each $d$ in $[D]$. This is a serious shortcoming. 5. Figure 1 does not contain RLOO-F; why? 6. The discussed benchmarks are too small and old. It would have been better to discuss a more practical and large-scale problem to solve. 7. In Example 4.3 it would be good not to have $K = D = 3$, as it makes it more difficult to parse. Possibly make $K=2$ (i.e., binary variables). 8. Can there be any benefits of the proposed approach in cases where the variables are not independent? Possibly experiments can be added to put the estimator in Eq. 3.5 to the test in a future submission. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 9. How does the time complexity change due to parallelization in line 140? 10. Line 138 end: respectively w.r.t. what? 11. What about training accuracy for Figure 4? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: 12. Overall the method falls short of its promise. It needs much more computation to create additional samples, and the REINFORCE trick sometimes does even better than the proposed method when tested on equal footing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
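The per-dimension enumeration scheme the reviewer summarizes (for each sample, replace each coordinate by all $K$ possible values while keeping the rest fixed) can be sketched in NumPy for the independent case. This is an illustrative toy, not the paper's code: the sizes, logits, and test function are hypothetical, and the estimate is checked against an exact gradient computed by brute-force enumeration, which is only feasible for tiny $D$ and $K$.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Hypothetical toy instance: D independent categorical variables with K
# outcomes, parameterised by logits; we estimate d/d(logits) of E[f(X)].
D, K, N = 3, 3, 4000
logits = rng.normal(size=(D, K))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def f(x):
    return float((x == 0).sum() ** 2)           # arbitrary test function

p = softmax(logits)
# Softmax Jacobian per dimension: J[d, k, j] = d p(x_d = k) / d z_{d,j}
J = p[:, :, None] * (np.eye(K)[None] - p[:, None, :])

# Exact gradient by full enumeration over all K**D joint outcomes.
exact = np.zeros((D, K))
for x in map(np.array, product(range(K), repeat=D)):
    px = np.prod(p[np.arange(D), x])
    for d in range(D):
        # Product rule: differentiate the factor p(x_d), keep the others.
        exact[d] += (px / p[d, x[d]]) * J[d, x[d]] * f(x)

# Estimator: draw N joint samples; for each sample and each dimension d,
# enumerate all K values of coordinate d with the other coordinates fixed.
est = np.zeros((D, K))
xs = np.stack([rng.choice(K, size=N, p=p[d]) for d in range(D)], axis=1)
for x in xs:
    for d in range(D):
        for k in range(K):
            x2 = x.copy(); x2[d] = k
            est[d] += J[d, k] * f(x2)
est /= N

print(np.abs(est - exact).max())                # small for large N
```

Each sample costs $D \cdot K$ function evaluations rather than one, which is the $\mathcal{O}(D \cdot K \cdot N)$ overhead the reviewer points out; the return is an unbiased estimate whose per-dimension terms are computed exactly rather than sampled.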
Rebuttal 1: Rebuttal: We thank the reviewer for the time dedicated to review our paper. Please, find below a discussion about the mentioned weaknesses and the answers to the questions. # Discussion The improvements with respect to RLOO-F are statistically significant, as can be seen from the reported standard error bars shown in all graphs. We also want to emphasise that the improvements over RLOO-F seem to increase with increasing K (domain size of categorical distributions). This can be seen both from Figure 1 and 4. In the former, the bias for RLOO-F increases with increasing K. In the latter, IndeCateR-D is the only method capable of providing consistent solutions to the MNIST addition problem. While regular IndeCateR does not beat RLOO-F in experiment 6.4, it has to be taken into account that RLOO-F also takes $10\cdot D \cdot N$ samples in contrast to just $10$ samples for IndeCateR. IndeCateR-D equalises both number of samples drawn and function evaluations and is as such the main competitor for RLOO-F. In the other experiments, due to the mainly binary domain of each variable, IndeCateR-D did not give meaningful improvements over IndeCateR. In the binary cases, IndeCateR takes fewer samples than RLOO-F and still manages to significantly outperform it. With respect to the last row of Figure 3, IndeCateR does lose out to GS-S. However, the Omniglot dataset was run on an older GPU (GTX 1080 Ti) in contrast to the MNIST and F-MNIST datasets. We have rerun Omniglot on the same GPU (RTX 3080 Ti) as MNIST and F-MNIST and can see that IndeCateR can make better use of the power of newer AI accelerators as it now again outperforms all other methods while all methods exploit parallelisation where possible. We have added this figure to the paper. In Figure 1, all methods were given 1000 samples while IndeCateR was only given 1. 
This choice was made to show that even when competitors are given more function evaluations and more samples, IndeCateR can still outperform them. In short, Figure 1 compares IndeCateR to an even stronger estimator than RLOO-F. The choice of benchmarks follows the general and expected experimental setup from the gradient estimation literature [1, 2]. We would argue that the DVAE experiment in particular presents a significant challenge for gradient estimators as it has relatively high dimensionality in conjunction with a neural optimisation component. Regarding benchmarks. Our primary focus was on the most expected benchmarks from the gradient estimation literature, which turned out to all be cases of fully factorising distributions [1, 2]. Does the reviewer have any other particular benchmark in mind? [1] Jang, Eric, Shixiang Gu, and Ben Poole. "Categorical Reparameterization with Gumbel-Softmax." ICLR (2016). [2] Richter, Lorenz, et al. "VarGrad: a low-variance gradient estimator for variational inference." NeurIPS (2020). # Questions **How does time complexity change due to parallelisation in line 140?** Our summations can be cast as special cases of the prefix sum [3], which has a parallel implementation with complexity $O(\log N)$ [4] when summing over $N$ terms. [3] https://en.wikipedia.org/wiki/Prefix_sum [4] Ladner, Richard E., and Michael J. Fischer. "Parallel prefix computation." Journal of the ACM (1980). **Line 138 end: respectively w.r.t. what?** The respective upper bounds are for each of the three nested sums. **What about training accuracy for Figure 4?** We are interested in the generalisation performance of the classifiers, which is quantified by the test set accuracy. Training accuracy is generally not considered interesting for classification problems as it is sensitive to overfitting, hence we left it out for clarity of exposition. # Final Comments We hope that we have addressed all of your concerns. 
Please, let us know if there is anything else you want to discuss. --- Rebuttal Comment 1.1: Comment: **In the former, the bias for RLOO-F increases with increasing K:** Figure 1 does not have RLOO-F, only RLOO. **IndeCateR-D is the only method capable of providing consistent solutions to the MNIST addition problem:** Agreed (Figure 4). **RLOO-F takes more samples than IndeCateR:** Both do use the same number of function evaluations (Line 219). In RLOO-F the arguments for function evaluation are fully random, while in IndeCateR they are hand-designed. Why is it important then to signify that RLOO-F takes more 'samples'? **Figure 3, slower GPU:** One could show epochs on the x-axis instead of time, as was also done in Figure 4? While we are at it, why the inconsistency in the x-axes of Figures 3 and 4? Looking forward to your response. --- Reply to Comment 1.1.1: Title: Addressing additional questions Comment: Thank you for the additional questions and interest. We hope the following answers your concerns to a satisfactory degree. **Figure 1 does not have RLOO-F, only RLOO.** Indeed, RLOO-F is a typo in our answer. Figure 1 is supposed to just have RLOO with 1000 function evaluations. This has more function evaluations than RLOO-F would require for this experiment and hence shows we are able to outperform even stronger baselines in some cases. **Why is it important then to signify that RLOO-F takes more 'samples'?** Taking more samples introduces additional computational costs, both because of the sampling itself and any downstream operations. This computational cost can lead to significant differences in performance per unit of time (RLOO-F vs IndeCateR). **Different x-axes** In Figure 3 we wanted to emphasize the computational efficiency of IndeCateR versus other methods. However, a figure as a function of iterations is also given in the appendix. 
There was no significant difference in computational time for Figure 4, just as was the case in Figure 2 where we do show both time and iterations on the x-axis. For clarity of exposition, we hence chose to only report epochs for Figure 4. The existing difference in computational time in Figure 3 is caused by the additional costs of backpropagating through the additional samples for RLOO-F. In other words, IndeCateR has similar computational requirements to RLOO-S, but performance closer to RLOO-F.
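The $O(\log N)$ parallel prefix sum mentioned in the rebuttal above can be sketched, for instance, as a Hillis–Steele inclusive scan: each of the $\lceil \log_2 N \rceil$ steps is a single vectorised shift-and-add, mirroring the parallel depth of the hardware implementation. This is a minimal illustration, not the authors' implementation.

```python
import numpy as np

# Hillis–Steele inclusive scan: ceil(log2 N) steps, each one vectorised
# shift-and-add, matching the O(log N) parallel depth cited above.
def prefix_sum(a):
    a = np.asarray(a, dtype=float).copy()
    shift = 1
    while shift < len(a):
        # Snapshot the source slice so the overlapping in-place update
        # reads values from the previous step only.
        a[shift:] += a[:-shift].copy()
        shift *= 2
    return a

print(prefix_sum([1, 2, 3, 4]))   # inclusive prefix sums: 1, 3, 6, 10
```

On a GPU each step maps to one parallel elementwise kernel, so the wall-clock depth grows logarithmically in the number of summed terms even though the total work stays linearithmic.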
Summary: In this paper the authors provide an alternative to the log-derivative trick, e.g. the REINFORCE estimator, for categorical distributions, which is unbiased and has provably lower variance than the REINFORCE estimator; the estimator is called the CatLog-Derivative trick. For the case of D-dimensional multivariate categorical distributions where each dimension is independent, the authors introduce a simplification of the CatLog-Derivative trick called IndeCateR. Strengths: The proposed idea is elegant and simple, and the authors did a great job explaining it! Actually, I'm surprised they were able to fit the proofs inside the main text, which speaks to the conciseness of the paper. I also think the authors did a good job explaining the limitations of the base CatLog trick approach, and the variety of experiments was great as well. Weaknesses: The major weakness is that there is no experiment demonstrating the performance of the CatLog trick. While emphasis is placed on the IndeCateR trick, it would be great to see the CatLog trick in action. As a suggestion, maybe a variational hidden Markov model? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - There are a couple of typos in the paper and sentences that aren't complete, i.e. the beginning of line 177 - Figure 1 is missing y labels, making it hard to parse without reading - It would be great to have a section in the appendix doing a brief overview of alternative methods - Figure 4 seems to have problems rendering. I suggest rasterizing the figure and putting it back in the appendix. - In general, all line plots are hard to read. I suggest increasing the thickness of the lines. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes, they have addressed the limitations of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the enthusiasm and the constructive suggestions. Please find below a comment about the weaknesses and the answers to your questions. # Comment We acknowledge that we have primarily devoted attention to IndeCateR, instead of the more general CateR. Our primary focus was on the most expected benchmarks from the gradient estimation literature [1, 2], which turned out to all be cases of fully factorising distributions. We really like the suggestion of a variational HMM as a testbed for CateR and look forward to adding this to an extension of this work! # Questions **Typos** Thanks for spotting them! We have made the following modifications: - Line 41: “reparametrisation for categorical distributions, which we discuss further in the related work (Section 5)” - Line 176-177: “Firstly, two synthetic experiments (Section 6.1 and Section 6.2) will be discussed.” **Figure 1** This was indeed not clear and labels were added to the figure. **Appendix about brief overview** The page limit did prevent us from providing a more extensive overview of possibly related work. We have taken this suggestion into account and added a more complete related work section to the appendix (Appendix B.1). **Figure 4 seems to have problems rendering** Thanks for the suggestion! Indeed, there was a problem with the rendering. **In general, all line plots are hard to read. I suggest increasing the thickness of the lines** Thanks for the suggestion to improve the readability of the figures. We have taken it into account and modified them accordingly. [1] Jang, Eric, Shixiang Gu, and Ben Poole. "Categorical Reparameterization with Gumbel-Softmax." ICLR (2016). [2] Richter, Lorenz, et al. "VarGrad: a low-variance gradient estimator for variational inference." NeurIPS (2020).
Summary: The authors derive a gradient estimator for discrete random variables that sums out one dimension while keeping a sample for the other dimensions fixed. Their estimator works for generally fully factorised distributions, and they derive a variant for fully independent distributions, with which they perform experiments on the discrete VAE and a neurosymbolic task. Strengths: The paper is very easy to follow and well-motivated. The estimator is quite interesting and competitive with the strong RLOO estimator. The experiments are useful to the community, and especially showing that likelihood-ratio-based methods work for neurosymbolic methods is a useful insight. Weaknesses: I'm afraid that the paper is very limited in novelty. The approach presented is almost the same as Local Expectation Gradients (LEG) [1], but LEG is not discussed in the paper. From my understanding, this is how they compare: Similarities: - The CatLog-Derivative Trick (Eq 3.5) is the LEG (Eq 9) under a fully factorised distribution / autoregressive model. - The IndeCateR (Eq 4.1) is precisely the simplification of LEG for fully independent distributions in Eq 11 - LEG also connects the estimator to Rao-Blackwellization Differences: - LEG uses a 'pivot' sample and then sums over dimensions, while CatLog resamples for each dimension - LEG (like RAM) did not study the shared parameter setting, although, in my opinion, this is a trivial extension by backpropagation Therefore, I don't think the paper can claim a new trick, as (at best) it slightly modified an established work. That is not to say the results of this paper are not useful but need to be recontextualised, given that the method is not novel. Suggestions are a more thorough experimental setup or a focus on neurosymbolic tasks (which are somewhat understudied in the literature on discrete gradient estimation). [1] Titsias, M.K., & Lázaro-Gredilla, M. (2015). Local expectation gradients for black box variational inference. 
Advances in neural information processing systems, 28. EDIT: The author rebuttal showed that CatLog indeed has some novelty in its sampling mechanism, and I increased my score from 3 to 4. I still think this paper lacks novelty: While CatLog has some novelty, there are no experiments that use it. The only experiments are with IndeCateR, but that method is not novel (LEG and REM on independent distributions are the same as IndeCateR as the authors acknowledge in their rebuttal). IndeCateR-D has some novelty compared to LEG and REM in sampling, but this method is only tested in a single experiment and is not highlighted in the writing as the focus of the paper. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Experiments: Why not compare to RAM, as it is closest to the method? - Figure 1: What are the 2 y-scales? - Line 199: Do you assume one-hot encoded X_d? - Line 235: You claim IndeCateR uses two samples, but I do not see how that works. There are 200 dimensions, so you will get 200 samples if you resample per dimension. Or do you set N=2, and use a pivot like in LEG? - Did you evaluate the MNIST experiments on multiple runs? I do not see error bars here. These MNISTAdd experiments can have significant variance between runs, so I believe having at least 10 runs is necessary before claiming IndeCateR is the only method that can scale to 16 digits. - Clarify that the MNIST experiments are different from the multi-digit MNISTAdd experiments discussed in [1] (it's not $100x_1 + 10x_2 + x_3+100x_4+10x_5+x_6$, but rather $\sum_i x_i$). These have quite different optimisation properties (the multi-digit MNISTAdd problem has $10^{2/D}-2$ labels rather than $10D+1$) [1] Manhaeve, R., Dumančić, S., Kimmig, A., Demeester, T., & De Raedt, L. (2021). Neural probabilistic logic programming in DeepProbLog. Artificial Intelligence, 298, 103504. 
Typos: - Line 177: Unfinished sentence - Line 207: RLOO-F mentioned before its introduction - Line 219: Comma after samples - Line 265: Comma instead of point Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 1 poor Limitations: I would say the limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the dedicated time and for bringing the related work on local expectation gradients (LEG, NeurIPS 2015) [1] to our attention. We also thank them for their appreciation of the clarity, significance and quality of our work. Please, find below a discussion on the relation between CatLog and LEG and the answers to your questions. # Comparison between CatLog and LEG Thanks for pointing us to LEG! Indeed, LEG and IndeCateR are related. However, CatLog and LEG are two substantially different tricks/methods for the following reasons: 1. LEG does not make full use of the autoregressive parametrisation of the distribution, as it obtains samples (aka pivots) by first instantiating all variables and subsequently performing evaluation through the computation of a weighted average based on the Markov blanket. CatLog instead makes full use of the autoregressive parametrisation by interleaving sampling and evaluation. LEG is based on three distinct stages: sampling, evaluation and weighted averaging. In contrast, CatLog has only two intertwined stages based on sampling and evaluation. Another way/analogy to look at them is that LEG is inspired by and related to Gibbs sampling, whereas CatLog is based on ancestral sampling. Please, refer to the detailed analysis of LEG in the attached PDF, which will be included in the Appendix of our paper, and also to the table underneath summarizing the main differences. 2. The two methods have different computational complexity. Indeed, computing the weighted average in LEG requires an additional $O(D)$ cost, which contributes to an overall computational complexity of $O(ND^2K)$. Therefore, the CatLog-Derivative trick makes better use of the structure, resulting in improved efficiency over LEG, as it scales only linearly with the number of variables. 3. As the reviewer mentioned, the sampling per dimension introduced by CatLog is different from the pivot samples used by LEG. 
This theoretical difference also yields significant practical differences in performance, even in the case of an independently factorising distribution. This difference can be seen in Section 6.4. There, the ‘IndeCateR’ estimator corresponds to pivot samples (LEG), while ‘IndeCateR-D’ draws new samples per dimension (CatLog). IndeCateR-D is the only estimator that can consistently tackle the problem when increasing the dimensionality, showing that CatLog is both theoretically and practically different from LEG. **Table summarizing the main differences** | Name | Trick | Computational Complexity | Relation to Sampling | |---|---|---|---| | LEG | $\sum_{d=1}^D E_{(X_{<d},{\color{red}X_d'},X_{>d})\sim p(X_{<d},{\color{red}X_d'},X_{>d})} \{ E_{X_d\sim {\color{blue}p(X_d\mid X_{\neq d})}}[f(X)\partial_\lambda \log p(X_d\mid X_{<d})] \}$ | $O(ND^2K)$ | Gibbs sampling | | CatLog | $\sum_{d=1}^D E_{(X_{<d},{\color{red}X_d},X_{>d})\sim p(X_{<d},{\color{red}X_d},X_{>d})} \{ [f(X_{\neq d}, X_d)\partial_\lambda \log p(X_d\mid X_{<d})] \}$ | $O(NDK)$ | Ancestral sampling | We made the following modifications to the text: The attached PDF containing the formal proof of the theoretical difference between CatLog and LEG is added to the Appendix (Appendix B.2). We highlight the difference between taking samples per dimension (CatLog) and not doing so (LEG) in Section 6.4 by further detailing the difference between IndeCateR and IndeCateR-D following our above answer. # Answers to Questions **Experiments: Why not compare to RAM, as it is closest to the method?** When not taking new samples per dimension, CatLog, LEG and RAM all collapse to the same estimate for an independently factorising distribution. 
In Section 6.4, we look at both cases with (IndeCateR-D) and without (IndeCateR, RAM and LEG) sampling per dimension, and observe that drawing new samples essentially makes the difference between being able to solve the problem or not. As such, CatLog does provide a measurable improvement over RAM and LEG. **Figure 1: What are the 2 y-scales?** The left y-scale concerns the bias while the right one looks at variance. We have added these labels to the figure for clarity. **Line 199: Do you assume one-hot encoded X_d?** We do not use one-hot encoded vectors throughout the paper, only direct elements of the categorical domains. **Line 235: You claim IndeCateR uses two samples, but I do not see how that works. There are 200 dimensions, so you will get 200 samples if you resample per dimension. Or do you set N=2, and use a pivot like in LEG?** We use IndeCateR without drawing new samples for every dimension in the binary DVAE experiment. In essence, we take 2 samples of the joint distribution and for dimension $d$, we use the components $\neq d$ to perform the estimation. This choice was made based on the empirical observation that drawing new samples per dimension in the case of binary random variables did not give a measurable improvement. **Did you evaluate the MNIST experiments on multiple runs? I do not see error bars here. These MNISTAdd experiments can have significant variance between runs, so I believe having at least 10 runs is necessary before claiming IndeCateR is the only method that can scale to 16 digits.** We did evaluate the MNIST experiment across 5 runs and error bars are reported in Figure 4. It does seem that, depending on the PDF viewer, these error bars do not render properly on certain zooming levels. Please try using a different PDF viewer to see the error bars that confirm IndeCateR-D consistently beats the competitors. We will also rasterize the image in the final version of the paper to avoid the potential visualisation issue. 
**Clarify that the MNIST experiments..**

Thanks for the suggestion, we will add this clarification to the main paper.

# Final Comments

Please let us know if your concerns have been addressed; if you have any further questions, we would be happy to answer.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their insightful rebuttal. I did not consider the difference between Gibbs sampling and ancestral sampling between LEG and CatLog, and indeed there is some novelty in CatLog. The complexity bounds are also useful. I will raise the score, but only to 4. I think the paper is currently missing a clear story, taking the concerns of the reviews into account. It is written as if introducing a significant new trick, but then shows it is a variation of LEG. There are probably benefits to ancestral sampling, but this is not shown as the authors only experiment with IndeCateR. Furthermore, IndeCateR _itself_ is not novel, as the authors acknowledge (it is equivalent to both RAM and LEG). IndeCateR-D is slightly different, but is only introduced in the last experiment, and (according to the authors) did not lead to an improvement in DVAE. This could be a very strong paper, but from what I could review, I have trouble describing the core contribution. The paper could add e.g. experiments on autoregressive models to showcase the use of ancestral sampling, or highlight the differences between 'pivot' sampling and sampling per dimension. (But these are of course just suggestions.)

Please let me know if there are any misconceptions.

EDIT: For some reason, I cannot modify my review right now. I will visit this page again later to do so.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable feedback and want to specifically draw attention to the comparison between CatLog and Local Expectation Gradients (LEG) [1], as brought up by reviewer JDGY. Please find our theoretical analysis that formally proves the differences in the attached PDF. Indeed, LEG and IndeCateR are related. However, CatLog and LEG are two substantially different tricks/methods for the following reasons:

1. LEG does not make full use of the autoregressive parametrisation of the distribution, as it obtains samples (aka pivots) by first instantiating all variables and subsequently performing evaluation through the computation of a weighted average based on the Markov blanket. CatLog instead makes full use of the autoregressive parametrisation by interleaving sampling and evaluation. LEG is based on three distinct stages: sampling, evaluation and weighted averaging. In contrast, CatLog has only two intertwined stages based on sampling and evaluation. Another way to look at them is that LEG is inspired by and related to Gibbs sampling, whereas CatLog is based on ancestral sampling. Please refer to the detailed analysis of LEG in the attached PDF, which will be included in the Appendix of our paper, and also to the table underneath summarizing the main differences.

2. The two methods have different computational complexity. Indeed, computing the weighted average in LEG requires an additional $O(D)$ cost, which contributes to an overall computational complexity of $O(ND^2K)$. Therefore, the CatLog-Derivative trick makes better use of the structure, resulting in improved efficiency over LEG, as it scales only linearly with the number of variables.

3. As reviewer JDGY also mentioned, the sampling per dimension introduced by CatLog is different from the pivot samples used by LEG. This theoretical difference also yields significant practical differences in performance, even in the case of an independently factorising distribution.
This difference can be seen in Section 6.4. There, the ‘IndeCateR’ estimator corresponds to pivot samples (LEG), while ‘IndeCateR-D’ draws new samples per dimension (CatLog). IndeCateR-D is the only estimator that can consistently tackle the problem when increasing the dimensionality, showing that CatLog is both theoretically and practically different from LEG.

| Name | Trick | Computational Complexity | Relation to Sampling |
|---|---|---|---|
| LEG | $\sum_{d=1}^D E_{(X_{<d},{\color{red}X_d'},X_{>d})\sim p(X_{<d},{\color{red}X_d'},X_{>d})} \{ E_{X_d\sim {\color{blue}p(X_d\mid X_{\neq d})}}[f(X)\partial_\lambda \log p(X_d\mid X_{<d})] \}$ | $O(ND^2K)$ | Gibbs sampling |
| CatLog | $\sum_{d=1}^D E_{(X_{<d},{\color{red}X_d},X_{>d})\sim p(X_{<d},{\color{red}X_d},X_{>d})} \{ [f(X_{\neq d}, X_d)\partial_\lambda \log p(X_d\mid X_{<d})] \}$ | $O(NDK)$ | Ancestral sampling |

[1] Titsias, M. K., & Lázaro-Gredilla, M. (2015). Local expectation gradients for black box variational inference. NeurIPS.

Pdf: /pdf/805da7ba4ef1133296fcc817a4802c0b818b8658.pdf
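To make the distinction concrete, here is a small, self-contained sketch (our own illustration, not the authors' code) of the two sampling schemes for an independently factorising categorical distribution: a pivot-style estimator that reuses the same joint samples for every dimension (IndeCateR/LEG-style) versus one that draws fresh samples per dimension (IndeCateR-D/CatLog-style). The toy function `f`, all variable names, and the choice to differentiate with respect to the probabilities are our assumptions.

```python
# Illustrative sketch only (not the authors' implementation): Rao-Blackwellized
# score-function gradients for an independently factorising categorical
# distribution p(X) = prod_d p(X_d), contrasting shared "pivot" samples
# with fresh samples per dimension.
import numpy as np

rng = np.random.default_rng(0)
D, K, N = 4, 3, 256                       # dimensions, categories, samples
logits = rng.normal(size=(D, K))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

def f(x):                                  # toy cost over a D-dim assignment
    return float(np.sum(x == 0))

def sample(n):                             # n joint samples, shape (n, D)
    return np.stack([rng.choice(K, size=n, p=probs[d]) for d in range(D)], axis=1)

def grad_estimate(resample_per_dim):
    """Estimate dE[f(X)]/dprobs, shape (D, K)."""
    g = np.zeros((D, K))
    x_pivot = sample(N)                    # shared pivots (IndeCateR / LEG style)
    for d in range(D):
        # IndeCateR-D / CatLog style: fresh joint samples for each dimension d
        x = sample(N) if resample_per_dim else x_pivot
        for n in range(N):
            for k in range(K):             # enumerate X_d, keep X_{!=d} fixed
                xnk = x[n].copy()
                xnk[d] = k
                g[d, k] += f(xnk) / N      # Monte Carlo coefficient of p(X_d = k)
    return g

def grad_exact():
    """Exact gradient by enumerating all K**D assignments (reference only)."""
    g = np.zeros((D, K))
    for flat in range(K ** D):
        x = np.array([(flat // K ** d) % K for d in range(D)])
        fx = f(x)
        for d in range(D):
            w = np.prod([probs[j, x[j]] for j in range(D) if j != d])
            g[d, x[d]] += w * fx
    return g
```

In this toy setting both variants are unbiased and match the exact gradient; the practical gap between the two schemes is what Section 6.4 of the paper probes at higher dimensionality.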
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper develops a new efficient estimator for gradients of an expectation computed over a multivariate discrete random variable. The gradients are computed wrt the continuous parameters of the discrete distribution of interest. Previous work consists of two major threads:

- unbiased estimators, such as REINFORCE (aka the score function trick or log-derivative trick)
- biased estimators, such as the relaxed Gumbel-softmax trick aka "concrete" random variables [refs 15,20]

To build on these, this paper presents an approach that:

* is formally unbiased (like REINFORCE, unlike GS)
* has low variance (unlike REINFORCE)
* has no free hyperparameters (other than the number of Monte Carlo samples), unlike control variate extensions of REINFORCE

In Sec 3, the general approach, called CateR, is presented. The big idea is to Rao-Blackwellize REINFORCE across each dimension of the multivariate discrete vector X being sampled. See Eq 3.1 for the estimator. In Sec 4, a straightforward specialization of the estimator to a model that assumes each dimension is *independent*, called IndeCateR, is presented. Example 4.2 gives nice intuition. This requires nested sums over the dimensions D, the K possible values of each variable, and N sampled values of remaining variables, with runtime O(DKN).

Experiments in Sec 6 assess:

* empirical bias/variance on synthetic problems (Fig 1)
* performance in optimization on a toy problem (Fig 2)
* performance in training of discrete VAEs (Fig 3)
* a "neuro-symbolic" model that tries to compute the sum of 10 MNIST digit images, by sampling a predicted digit label (a discrete value in 0-9) for each image and then adding

Across the board, the experiments suggest the estimator is competitive in performance when computation cost is similar to its alternatives.

Strengths: I found a lot to like about this paper.
+ Attacks a significant problem: many models need such a gradient estimator and current methods (like REINFORCE) are known to be difficult due to high variance
+ Elegant yet simple method: derivable from first principles and interpretable as Rao-Blackwellization
+ Comprehensive experiments on toy data help gain intuition about when/why the method works
+ Overall message and formal mathematics are both clearly communicated throughout
+ Effort to make costs fair for all methods (in terms of num func evaluations and/or num samples) is appreciated

Thanks to the authors for their hard work.

Weaknesses: Overall I don't think there are show-stopping weaknesses here. I'll list some issues below, but I'd overall rate these as definitely worth addressing but minor.

### Improve discussion of AI accelerators

The paper throughout refers to "modern AI accelerators" without much elaboration or citation, leaving unfamiliar readers in the dark. I think there could be many different kinds of hardware accelerator the authors are thinking of...

* What does it mean that this approach can be implemented on a modern AI accelerator? Would this be true of alternatives, like REINFORCE or GS?
* How would such accelerators improve the runtime of IndeCateR from O(DKN) to O(log D + log K + log N)? Aren't they just reducing constant factor runtime by moving computation from software to hardware?

### Claim that temperature hyperparameter of GS methods is difficult to tune needs elaboration

In line 172-173, GS / concrete methods are criticized because "tuning of this [temperature] hyperparameter" is "highly non-trivial in practice". Can you provide some evidence for this claim? I don't doubt that it *could* be sensitive, but I wonder if your experiments could reveal more about this. For example, in Fig 1 or Fig 2 you could show GS with a tuned hyperparameter compared to GS with a reasonable default value.

### Is there recent work extending GS/concrete that's worth discussing/comparing?
The concrete distribution / GS methods [refs 15,20] were published about 6 years ago, in 2017. Seems like the current related work discussion doesn't really touch on any progress that might have been made since 2017. I'm not aware off the top of my head of such work, but I plan to dig in more during the discussion period to see if there's relevant work. If so, definitely seems worth citing a few more papers. If not, perhaps highlighting the lack of further progress there is interesting.

### Presentation quality decent but needs refinement to catch isolated issues

At a few spots there are incomplete thoughts or awkward phrasings, such as

* line 41
* line 177

In further revision, please read carefully to catch such issues and spare a future reader any confusion.

### DVAE experiments need more details/elaboration

* Need to motivate why the DVAE is an interesting model
* Does the original DVAE recommend a specific gradient estimator? If so, what?
* For encoder arch, are there really 3 *hidden* layers, or just 2 hidden layers and one output layer?
* Why 2 samples for IndeCateR instead of just 1 or many more? How sensitive is this choice?
* Can you define the probabilistic model and the optimization problem? (perhaps in supp.). In particular, is your likelihood treating the pixels as unconstrained real floats, as floats in 0.0 - 1.0, or binary values?
* Can you clarify the optimization algorithm used (SGD? ADAM?)?
* Were methods compared using the same initial parameter values? Or a similar sampling scheme for initialization?
* Is there any regularization applied? (e.g. L2 penalty on encoder/decoder parameters)

Technical Quality: 3 good

Clarity: 3 good

Questions for Authors:

### Q1: How did you compute bias and variance in Fig 1?

Presumably your estimated gradient is a vector, not a scalar. Did you just add or average each component's bias/variance to get the scalar metrics reported here?
### Q2: Can you try to give insight into why IndeCateR has the lowest variance in Fig 1 and 2, but GS has lower variance with DVAEs in Fig 3? What is the key difference here?

### Q3: Why do we need sampling at all in Sec 6.4? And why frame it as a binary classifier?

I'm fine with what's presented in Sec 6.4 as a proof of concept that your approach can work on this problem. But is there an *advantage* to formulating the problem by requiring each per-image predictor to produce a random sample, that is then summed? Why not just have each per-image classifier produce the expected value of the categorical distribution over digits 0-9, and sum that across images? Also, why use a binary (right/wrong) signal to supervise? Why not use a more continuous error metric, since if the true sum is 33 but I predict 32 I'd probably prefer that vastly compared to predicting 0.

Minor questions:

* Why isn't GS-S shown in Fig 2?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good

Presentation: 3 good

Contribution: 4 excellent

Limitations: Could say more about limitations in Sec 7. There's at least 1 paragraph worth of room for it.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the detailed and thoughtful review, and for the extra effort to provide feedback aimed at improving the quality of the presentation and at sharpening some statements. Please find below a discussion of the points you raised and the answers to your questions.

# Discussion about weaknesses

**AI accelerators**

We acknowledge that the statements are too vague. The main AI accelerators that we have in mind are GPUs. We are going to modify the text in the following way:

- Line 44: "can be implemented efficiently by leveraging parallelisation on modern graphical processing units (GPUs)."
- Line 139-141: "Leveraging the parallel implementation of prefix sums [1] on GPUs [2], the practical runtime can be reduced to $\mathcal{O}(\log D + \log K + \log N)$,...."
- Line 271: "these are not amenable to be run on parallel computing hardware, such as GPUs."
- Line 294: "modern GPUs."

[1] https://en.wikipedia.org/wiki/Prefix_sum
[2] Ladner, Richard E., and Michael J. Fischer. "Parallel prefix computation." Journal of the ACM (1980).

**Tuning of hyperparameters of GS**

Experimental support for this claim was not explicitly provided for simplicity of exposition. However, we did encounter the difficulties of temperature tuning during our experimental process, for instance in the benchmarks in Figure 2. In particular, many of the values in the tested range $[10, 1, 0.1, 0.01]$ led to almost random optimisation behaviour due to the sensitivity of the loss. We picked the best performing value of 0.1 and made this clearer in the appendix.

**Recent related work**

The page limit did prevent us from providing a more extensive overview of possibly related work. We have added a more complete related work section to the appendix (Appendix B.1).

**Isolated issues**

Thanks for spotting them!
We replaced them as follows:

- Line 41: “reparametrisation for categorical distributions, which we discuss further in the related work (Section 5)”
- Line 176-177: “Firstly, two synthetic experiments (Section 6.1 and Section 6.2) will be discussed.”

**DVAE experiments**

* The DVAE is a classical benchmark for gradient estimation methods, as it can easily scale in dimensionality [1, 2]. Moreover, the neural components introduce a complex loss landscape to optimise over.
* The encoder architecture indeed has 2 true hidden layers and one output layer; we rephrased the statement as follows: Line 233-235: “The encoder component of the network has two dense hidden layers of sizes 384 and 256 ending in a latent 200-dimensional Bernoulli variable.”
* Two samples were chosen for IndeCateR in order to perform a sample-equivalent comparison to RLOO, which requires at least two samples. This information was added to Appendix A.1 (Hyperparameters). It is an interesting question to analyse the sensitivity of IndeCateR to the number of samples in future work.
* Our likelihood treats the pixels as floats (probabilities) between 0.0 and 1.0. We have adjusted our explanation by adding the following to Appendix A.2, Modelling: “Finally, following the literature, the output of the decoder is interpreted as the logits for 784 binary random variables and optimised using an ELBO loss function, which is an expected value. The correct probabilities are given by the normalised and binarised pixel values of the original image.”
* We only applied the same sampling scheme for initialisation across methods and did not use any regularisation, as this could further obscure the impact of using different estimates. These details were also added to the appendix (Appendix A.2, Hyperparameters).

[1] "Categorical Reparameterization with Gumbel-Softmax." ICLR (2016).
[2] "VarGrad: a low-variance gradient estimator for variational inference." NeurIPS (2020).

# Questions

**Q1: bias/variance Fig. 1**

We followed the methodology proposed in [3], namely the cosine distance between the estimated and true gradient vectors was computed to obtain a scalar metric. This follows the intuition that, during optimisation, we are mainly interested in the direction of the computed gradients.

[3] "SIMPLE: A Gradient Estimator for k-Subset Sampling." ICLR 2022.

**Q2: comparison between IndeCateR and GS in terms of variance**

That is indeed a good observation; we speculate that the reason lies in the landscape of the loss function. In Figures 1 and 2, the loss is a direct function of the discrete random variables and behaves more erratically for different instantiations of the variables. In Figure 3, the neural decoder of the DVAE ‘smoothens’ the optimisation, which seems to favour GS.

**Q3: Why sampling? Why binary classifier?**

Regarding the question about sampling: in principle, it is possible to formulate this specific addition problem as suggested. However, our setup is more conceptual in nature. The symbolic component in a neural-symbolic system reasons over the domain of each random variable and the probabilities of those domain elements, not over a statistic. We want to showcase that it is possible to apply sampling in combination with gradient estimation to scale probabilistic neural-symbolic inference and learning tasks. These tasks are known to be hard due to their combinatorial nature [4, 5].

Regarding the question about the binary classifier: it is possible to formulate the problem as a “regression” task (where the sum is regarded as a real value). However, our purpose is again different here. The experimental setup for 6.4 mimics exactly how inference and learning would be in a neural-symbolic setting [6]. Such systems are trained on example logical statements that are true or false with a given probability, which translates to our binary supervision signal.

[4] "Scallop: From probabilistic deductive databases to scalable differentiable reasoning." NeurIPS (2021).
[5] "A-nesi: A scalable approximate method for probabilistic neurosymbolic inference." arXiv (2023). [6] "Deepproblog: Neural probabilistic logic programming." NeurIPS (2018). --- Rebuttal Comment 1.1: Title: Thanks for your comments! Revisions are appreciated. Comment: My response here is purely to reply to author comments about my original review. (I haven't looked carefully yet at relationships to LEG and other concerns raised by other reviewers, I look forward to engaging on that in the discussion period). I appreciate the detailed engagement with my questions/comments. I look forward to an improved manuscript. Based on this response, **I continue to think the paper is worth accepting.** RE "AI accelerators" meaning really GPUs: Appreciate the fixes. Thanks for the neat pointer to prefix sums. RE hyperparameters of GS : Thanks for the report of practical difficulty in selecting the value. I think steering reader in main paper to further details about this in appendix would be useful RE DVAE experiments: Thanks for the revisions. Your response helps me understand the motivation and reproducibility of these experiments much better. RE Q1: Thanks. Please clarify how you use cosine distance to get a scalar in the revisions RE Q2: Interesting, I would not have expected a neural decoder to somehow favor one method over alternatives. I wonder if this hypothesis could be scrutinized by substituting a linear decoder and seeing if the same behavior occurs RE Q3: OK, please revise accordingly so that future readers understand that you are pursuing a neurosymbolic kind of task where only boolean logical statements are provided to supervise, and that you understand the alternatives I mention are possible but not of interest to your goals. Otherwise I think readers like me will be distracted by the "why not do it this other way?" ideas like I had.
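The Q1 answer above reduces a vector-valued bias comparison to a scalar via the cosine distance between the estimated and true gradients. A minimal sketch of such a metric (the function name and exact convention are our assumptions; the paper follows the SIMPLE methodology cited as [3]):

```python
import numpy as np

def cosine_distance(g_est, g_true):
    """1 - cosine similarity between two gradient vectors (0 = same direction)."""
    g_est, g_true = np.ravel(g_est), np.ravel(g_true)
    cos_sim = (g_est @ g_true) / (np.linalg.norm(g_est) * np.linalg.norm(g_true))
    return 1.0 - cos_sim
```

Note that this metric is scale-invariant: an estimator that gets the direction right but the magnitude wrong scores a distance of 0, matching the rebuttal's point that only the gradient direction matters during optimisation.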
When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment
Accept (oral)
Summary: The paper proposes a set of metrics to measure and isolate the memory dependency of partially observable environments. The authors illustrate that some currently existing benchmarks do not sufficiently isolate the memory of an agent. Based on their analysis they propose two versions of a T-Maze environment, one of which appropriately isolates memory. Further, the authors investigate the effect of memory architecture on tasks that require long- and short-term memory. Transformer-based policies outperform recurrent policies on tasks that require long-term memory dependencies, while there seems to be no benefit on tasks that require short-term memory dependencies.

Strengths:

**Significance:** I fully agree with the authors that prior works often conflate different effects and do not sufficiently isolate the memory component. Therefore metrics that help disentangle memory from other environmental effects are crucial.

**Originality:** The authors propose novel metrics that evaluate and isolate different aspects of POMDPs in RL, such as credit assignment and long-term memory. The proposed metrics provide an important measure of the effect of memory and will be useful to facilitate future benchmark design and evaluation of new algorithms.

**Quality:** Quality of theoretical contributions is high.

Weaknesses:

**Scalability:** I credit the authors for mentioning the complexity of computing $c^M$ as a limitation, but it seems that the computation of other metrics is also limited. For example, $l_{mem}(\pi^*)$ requires access to the optimal policy. For the minimalistic T-Mazes it is straightforward to obtain this policy; how is it obtained for more complex tasks such as Psychlab? Does it involve human experts? How is it computed for procedurally generated environments, where the optimal policy varies between levels? How would this scale to more complex environments, given that human demonstrations may not necessarily be optimal?
**Relevance of proposed environments:** The authors propose two environments, namely Passive T-Maze and Active T-Maze, to disentangle credit assignment from memory. The Minigrid benchmark suite already provides a T-Maze environment that should exhibit the same characteristics as the proposed Active T-Maze, namely MiniGrid-Memory [1]. Passive T-Maze could be useful, but it is very minimalistic and only evaluates whether a small bit of information can be stored and carried across long timespans in an agent's memory. There is no mention of the observation space, but providing, say, image-based observations for Passive T-Maze would increase its complexity and enable application of vision-based methods.

**Interpretation of results:** The authors mention that LSTM starts to falter at a memory length of 250 for Passive T-Maze. Figure 2, however, shows that LSTM can solve the same environment with a memory length of 750. This suggests that some other effects might have influenced LSTM training, such as hyperparameter tuning. Maybe the authors can improve the results of LSTM by further tuning of the method?

**Clarity:** The authors should clearly state that the paper focuses on long-term memory and does not consider other aspects, such as memory capacity, or robustness of memory to noise.

**Missing relevant work:** There is prior work that derives theoretical bounds on the approximation error of history-based methods [2]. Further, other history-based approaches, such as a hierarchical Transformer memory [3] and pretrained models as a memory module in RL [4], are not cited. If applicable, adding these methods in their experiments would strengthen the paper and might yield some more interesting findings.

[1] Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. Minimalistic Gridworld Environment for OpenAI Gym, 2018. Publication Title: GitHub repository.
https://minigrid.farama.org/environments/minigrid/MemoryEnv/ [2] Gandharv Patil et al., On learning history based policies for controlling markov decision processes. 2022. [3] Andrew Lampinen et al., Towards mental time travel: a hierarchical memory for reinforcement learning agents. NeurIPS 2021 [4] Fabian Paischer et al., History compression via language models in reinforcement learning. ICML 2022 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - The authors conclude that Transformers exhibit worse sample efficiency than LSTM on short-term memory tasks. Is there an intuition why that would be the case? On Passive Visual Match, which requires short-term memory according to the authors, a GPT reaches higher return compared to LSTM. Could it be that the drop in performance is due to different observation/action spaces (continuous vs discrete) instead of long-term vs short-term memory? - In line 304 the authors claim that the success on long-term memory tasks for GPT2 is consistent with the tendency to perform well in the supervised learning setup on large datasets. The authors draw an analogy here which lacks support. How do long-term memory tasks relate to the supervised learning setup on large datasets? Do the authors imply here, that there are only long-term dependencies present in large datasets used for supervised learning? Also, Morad et al. 2023 specifically advises against drawing comparisons between supervised learning and RL. - What is the observation space for the Active/Passive T-Mazes? - In line 270 it would be good to explain what the actual task is in Passive Visual Match, prior to discussion on the reward function. - Figure 4 left shows superiority of GPT2 over LSTM, for Passive Visual Match the difference is not as pronounced, a significance test would benefit the interpretation of these results. 
- Figure 5: it would be good to clarify what is meant by "optimal policies that lack (long-term) credit assignment". I assume it refers to a Markovian policy, since the return on T-Maze is 0.5.
- Do all figures show mean and standard deviation? This should be mentioned in the figure captions.
- The last three sentences in Definition 1 could be moved to Definition 2, since they are about memory length and not context length.
- Equation in line 103: I think there should be $t - l_{ctx}(\pi)+1:k$ instead of $t - l_{ctx}(\pi)+1:t$ in the subscript after the conditional independence, right?
- The symbol k is reassigned repeatedly, which is a bit confusing; the authors may consider using different symbols.
- Will the authors make the code for reproducing their results and computing their metrics publicly available?

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.

Soundness: 2 fair

Presentation: 3 good

Contribution: 3 good

Limitations: The authors have addressed limitations regarding computation of the metrics in more complex environments. However, it is not clear from the paper how well the other metrics scale with the complexity of the environment, or how the optimal policies are obtained, i.e. does that require human demonstration? In the case of procedurally generated environments (which are commonly used nowadays), would that require extensive "labeling" by humans?

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> Does computing memory lengths involve human demonstrations?

We here clarify that computing these metrics *does not* require any human demonstrations. For some tasks like T-Maze, we obtain the precise numbers with careful analysis of the task structures. For other complex tasks like Memory Maze and proc-gen environments, as written in L576-579, it's hard for human experts to determine *exact* lengths by problem definition, although some bounds can still be derived, as shown in Table 1. We will clarify how we compute the metrics in the camera-ready version.

> Will you release code?

Yes, we have released our code in the supplementary material. We do not use a program to compute our metrics.

> How does computing memory metrics scale with the task?

The complexity depends on the history space, which grows exponentially in a finite POMDP. The worst-case complexity for reward and value memory lengths is $O(T (|\mathcal O| + |\mathcal A|)^T |\mathcal A|)$. The transition and policy memory lengths' complexity is further multiplied by the cardinality of observation and action spaces.

> The MiniGrid-Memory task already shares the same properties as the proposed Active T-Maze.

Thank you for pointing this out! We will add the MiniGrid-Memory task to Table 1. While this task has the same properties as Active T-Maze, it is easier in terms of memory and credit assignment lengths. MiniGrid-Memory (sized $13\times 13$) has its memory and credit assignment lengths of optimal policies $\le 3\times 13 = 39$, much smaller than the lengths of 250 in Active T-Maze. In addition, MiniGrid-Memory's $147$-dim observations and sparse rewards add unnecessary complexity to evaluation. Active T-Maze simplifies the problem with 2-dim observations and dense penalties. Lastly, combined with Passive T-Maze, Active T-Maze can be used to evaluate the bottleneck of memory or credit assignment. These advantages motivate us to propose Active T-Maze.
> What is the observation space of T-Mazes?

These tasks have 2-dim discrete observations indicating the position of an agent: Oracle, Start, Junction, Goal candidates, Corridors. We have revised the paper to include these details.

> Passive T-Maze is useful but minimalistic. It only evaluates if a small bit can be stored and carried across long timespans in memory.

You are correct, and that was indeed our intention. We chose this minimalistic task to focus on "memory length" rather than "memory capacity" in terms of bits. With no prior evidence that RL agents can maintain a memory spanning $1500$ steps, this task precisely targets this capability. To avoid confusion, we have updated our abstract to state "memorizing observations 1500 steps ago" instead of "memorizing 1500 observations".

> How about trying an image-based version of Passive T-Maze?

In fact, we have conducted experiments on an image-based version of Passive T-Maze, the Passive Visual Match.

> The paper should clarify its focus on long-term memory, not capacity or robustness.

Yes, we will do it.

> Missing related work [2,3,4]. If applicable, running these methods?

Thanks for pointing them out. We find that the hierarchical chunk attention memory [3] is related and will include it in the camera-ready version. It's worth noting that [3] uses observation reconstruction for sparse-reward tasks, unlike our model-free RL. Due to the time limit, we cannot run experiments on this method. [2] and [4] are remotely related -- [2] focuses on recurrent-based RL *convergence* and [4] uses pre-trained and frozen language models for RL.

> The authors say GPT is less sample-efficient than LSTM in short-term memory tasks, but GPT outperforms LSTM in Passive Visual Match. Could the drop be due to observation/action spaces (continuous vs discrete)?

Yes, it is possible. As stated in L278 and L292, we find that GPTs are sample-efficient on Passive Visual Match, but not PyBullet, though both require short-term memory.
> Any intuition why GPT is less sample-efficient than LSTM on short-term memory tasks?

Our intuition is that GPTs have less inductive bias than LSTMs and thus generally require more data to learn.

> How do long-term memory tasks relate to the SL setup on large datasets?

We did not relate them.

> Do the authors imply that there are only long-term dependencies in large datasets for SL?

No.

> Morad et al. 2023 advises against comparing SL and RL.

We think **purely long-term memory tasks** in RL and SL are related. For example, the Copy task [Arjovsky et al., ICML 2016] is like an SL version of Passive T-Maze. LSTMs failed to solve the long-term Copy task due to gradient training issues, mirroring our findings in Passive T-Maze.

> In L270 it'd be good to explain what the actual task is.

We will add these descriptions in the camera-ready version.

> Fig 5: what is an optimal policy that lacks (long-term) credit assignment?

Such a policy selects actions to maximize *immediate* rewards only, according to the definition of the credit assignment length. In the Active T-Maze task, such a policy happens to be Markovian, but it can have memory in general. This is a good catch, and we will clarify it in the camera-ready.

> Do all figures show mean and std dev? This should be mentioned in captions.

All figures are generated by seaborn.lineplot, which shows the mean and its 95\% confidence interval, but not the std dev. We will add these descriptions to the captions.

> The last 3 sentences in Def. 1 can be moved to Def. 2, as they are on memory length and not context length.

We will separate the last three sentences from the paragraph to create a new paragraph for Def. 2.

> Question on equation in L103?

Actually, these two are equivalent. For random variables $X,Y,Z$, $X\perp Y \mid Z$ is equivalent to $X \perp Y,Z \mid Z$, where $X = a_t$, $Y = h_{t-l_{ctx}(\pi)+1:t-k}$, and $Z = h_{t-k+1:t}$ in this case.

> The symbol k is reassigned, maybe using other symbols?

We will rename these symbols.
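The equivalence invoked in the answer on the L103 equation above is a standard fact about conditional independence; it can be justified in one line for any random variables $X, Y, Z$ (notation ours):

```latex
X \perp Y \mid Z
\;\Longleftrightarrow\;
p(x \mid y, z) = p(x \mid z) \quad \forall (x, y, z) \text{ with } p(y, z) > 0
\;\Longleftrightarrow\;
X \perp (Y, Z) \mid Z,
```

since conditioning on the pair $(Y,Z) = (y,z)$ together with $Z = z$ is exactly the same event as conditioning on $Y = y$ and $Z = z$ separately. Instantiating $X = a_t$, $Y = h_{t-l_{ctx}(\pi)+1:t-k}$, and $Z = h_{t-k+1:t}$ gives the two forms of the subscript discussed for line 103.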
--- Rebuttal Comment 1.1: Comment: I greatly appreciate the additional experiments on LSTMs given the limited time window for the rebuttal. Overall, the response clarified most of my concerns, which is why I decided to increase my rating to 6.
Summary: This work explores the impact of memory and credit assignment in decision transformer architectures, presenting several significant claims. Regarding memory, the authors establish an upper bound on the memory length required for an optimal policy. In terms of credit assignment, they provide a lower bound on the number of future steps needed. Memory length is defined as the number of steps that an action distribution depends on, representing a more recent and shorter dependency than the entire input length. This length is defined in relation to the value function, transition, and reward. Credit assignment length of the policy is defined as the number of future steps required for a greedy action to yield a higher discounted sum of rewards compared to a non-greedy action.

Strengths:
- The definitions of memory length and credit assignment length appear intuitive and sound.
- Theorem 1 appears clear and mathematically sound.
- A toy environment is presented to illustrate scenarios that are either heavily reliant on memory or credit assignment. This experiment is intuitive and sound. In the memory task, transformers perform optimally in long memory length tasks, while LSTMs struggle with long-term memory. However, in more complex tasks, the results are not as clear-cut. These results are significant to the DT community.
- For credit assignment tasks, transformers do not outperform LSTMs and exhibit poor performance at longer credit assignment lengths. They also demonstrate lower sample efficiency compared to LSTMs. Again, this result is significant to the DT community.
- Overall, the experiments are well-designed, clear, and straightforward, providing a substantial contribution to the field of decision transformer research. This work is highly novel and is the first to specifically investigate the benefits of transformers in either memory or credit assignment.
Weaknesses:
- Can the authors provide any intuition on the failure of both transformers and LSTMs in the long-term credit assignment tasks? I find it interesting that they both have the same performance trajectory.
- It would be nice to see experimental evaluation of Transformers and LSTMs in additional environments other than the toy tasks for better context.

Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> This work explores the impact of memory and credit assignment in decision transformer architectures.

We would like to clarify that our experiments evaluate the *online model-free RL* setting, not the offline RL setting that is related to Decision Transformers (DT). Nevertheless, extending our evaluation to include DT is an interesting direction for future work!

> Can the authors provide any intuition on the failure of Transformers and LSTMs in the long-term credit assignment tasks?

We find that in the Active T-Maze task, which has a credit assignment length of 250, the Transformer-based agent does not reach the Oracle (return lower than 1) but still reaches the Junction (return higher than 0). This indicates that the Transformer-based agent may fail to explore enough to reach the Oracle, although the exploration should be relatively easy (just taking the initial action to move left). For the LSTM-based agent, the return can *sometimes* be negative, indicating that it does not consistently reach the Junction and may face additional issues beyond credit assignment. Similar issues seem to occur in the Key-to-Door tasks, where both agents do not reach the key in the initial phase.

> It would be nice to see experimental evaluation of Transformers and LSTMs in additional environments other than the toy tasks for better context.

We agree with this limitation, and we plan to evaluate agents on more complicated environments in future work.
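To make the credit-assignment-length intuition in this thread concrete, here is a toy calculation (our own illustrative numbers and function names, not taken from the paper or its code): a greedy action pays a small immediate reward, while the far-sighted action pays off only after a long delay, and under discounting the greedy action eventually dominates.

```python
def discounted_return(rewards, gamma=0.99):
    """Discounted sum of a reward sequence."""
    return sum(r * gamma ** t for t, r in enumerate(rewards))

def far_sighted_wins(delay, gamma=0.99):
    """Hypothetical comparison: the greedy action yields +0.1 now;
    the far-sighted action yields +1 only after `delay` steps
    (all other rewards are 0)."""
    greedy = [0.1]
    delayed = [0.0] * delay + [1.0]
    return discounted_return(delayed, gamma) > discounted_return(greedy, gamma)
```

With `gamma=0.99`, the far-sighted action wins at a delay of 100 steps but loses at a delay of 249: at T-Maze-scale horizons the discounted payoff of the correct early action falls below even a tiny immediate reward, which is one way to see why long-horizon credit assignment is hard.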
Summary: This work aims to answer how the parametrization of the policy in RL affects performance on sparse-reward tasks that require memory and credit assignment. In particular, the authors are interested in when transformers perform well. The authors define, in the POMDP setting, their own criteria for quantifying policy memory and credit assignment. The authors go through a list of previous benchmarks to clarify which benchmarks require which, and design their own experiment which decouples the two effects. The authors conclude that while transformers do well in pure memory tasks, they do not show better credit assignment capabilities compared to LSTMs.

Strengths:
- The question that was posed was quite unique and interesting.
- The authors do a convincing empirical study of which settings transformers perform better in, and the conclusions are aligned with empirical intuition.
- The conducted analysis of previous benchmarks is extensive, and the authors have a good experiment design.

Weaknesses:
- The motivation for memory length is reasonable as a quantitative metric, but I wonder if there are other quantitative criteria for judging memory and credit assignment. For policies with memory (i.e. recurrent), the definition seems less relevant since the policy can remember parts of all the information starting from time 0. On the other hand, the credit assignment length definition was novel and I found it quite relevant.

Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors:
- One of the key discussions that seems to be missing is whether or not the policy has the ability to successfully abstract states that matter. For example, both in works along the lines of MDP compression with bisimulation and in Approximate Information States, they hypothesize a reduced-order model of the POMDP that the policy could possibly have that is sufficient for predicting rewards.
- I wonder, aside from credit assignment and history, whether transformers are predicting states that truly matter for the task as well (which is partly a question of credit assignment, but less of a question of "when" something mattered). For example, consider a pixel-domain setting where an agent is controlled, but the background constantly has something going on (e.g. humans may play tennis in the park while ducks are flying in the background). In this case excelling at pure memory is actually disadvantageous because the ducks are completely irrelevant to the task.
- In operations research there are classes of "submodular functions", and these have provably efficient greedy algorithms that are quantifiably suboptimal. I wonder how this is connected to credit assignment length.

Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> For policies with memory (i.e. recurrent), the definition seems less relevant since the policy can remember parts of all the information starting from time 0.

Recurrent policies can *take* all the information starting from time 0 as inputs, but do not necessarily *remember* it starting from that point. As mentioned in L88-91, the context lengths of recurrent policies can be infinite, but their memory lengths can be very short, due to training issues such as vanishing or exploding gradients. In this sense, our definition of memory length remains relevant to recurrent policies, with their context lengths being infinite.

> Lacking discussion on state and history abstraction, for example, bisimulation and approximate information states.

Thank you for pointing this out. This work focuses on the architectural aspect (e.g., LSTMs and Transformers) of RL rather than the objective aspect (e.g., different abstraction methods). We view these two aspects as orthogonal and believe they can complement each other to enhance RL performance. We will include some related work on abstraction and clarify our focus in the camera-ready version.

> Are Transformers predicting states that truly matter for the task?

We evaluate model-free RL agents equipped with memory architectures. Unlike model-based RL, which explicitly predicts next states, model-free RL purely aims to maximize returns. Therefore, we do not think the Transformer-based RL agents used in our experiments face the issue of predicting irrelevant parts of states, regardless of their memory capability.

> In operations research there are classes of "submodular functions", and these have provably efficient greedy algorithms that are quantifiably suboptimal. How is this connected to credit assignment length?

After examining the definition of submodular functions, we do not see a direct connection between them and credit assignment length.
We are open to further insights or clarification on this connection, as it may lead to an intriguing avenue for future research. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for the response, my scores are the same.
Summary: In this paper the authors consider the effectiveness of transformers for solving two kinds of RL problems: credit assignment and memory. To achieve their goal, the authors develop rigorous definitions for memory and credit assignment so that different RL tasks can be understood and compared in these terms. The authors then review many well-known RL problems from the literature and classify them according to their definitions. Going one step further, the authors then propose a new RL task where the roles of memory and credit assignment can be disambiguated. Finally, the authors perform empirical experiments on many of the previously analyzed RL tasks to understand when transformers shine and when they fall short.

Strengths: The paper is accessible, rigorous, and timely. Great work.

Weaknesses: 1. If I were to name anything, it would be that I wish the authors could have provided a little more on what they think potential solutions could be to the credit assignment problem when using transformers. The paper stands on its own without this, but the authors have clearly thought about the problem and their insights could be valuable.

Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. I'm curious, given the time and thought the authors have put into this problem, what they think may be a solution. Can we simply slap on a specific submodule to transformers to augment their credit assignment capabilities? Or do you think we need to develop fundamentally new architectures? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have adequately addressed limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

> What could potential solutions to the credit assignment problem be when using Transformers?

We think the credit assignment problem might be easier to solve with Transformers by using them to explicitly redistribute reward signals [Liu et al., 2019; Ferret et al., 2020]. Specifically, Transformers can efficiently redistribute distant reward signals to the time step when the corresponding action occurs. Their strength in handling long-term memory makes them suitable for learning such temporal dependencies with appropriate objectives beyond model-free RL.

> Can we simply slap on a specific submodule to Transformers to augment their credit assignment capabilities? Or do you think we need to develop fundamentally new architectures?

We conducted new experiments on *multi-layer* Transformers in credit assignment tasks. Although they aid in medium-term credit assignment, they do not help with long-term credit assignment. Please see the **general response** for details. This result suggests that simply adding a specific submodule to Transformers is unlikely to improve long-term credit assignment, indicating the potential need for new architectures. Please let us know if there are any further questions.

--- Rebuttal Comment 1.1: Comment: Thank you for your responses. I remain positive about the paper.
Rebuttal 1: Rebuttal: # General Response

First, we would like to thank all four reviewers for their positive and constructive feedback on our work! During the rebuttal period, we conducted additional experiments to address some of the questions posed by the reviewers.

**mXpQ (see Figure 1 in the PDF)**

> Can we simply slap on a specific submodule to Transformers to augment their credit assignment capabilities?

We showed that (single-layer) Transformers cannot help long-term credit assignment. To further confirm this conclusion, we ran *multi-layer* Transformers on these tasks. In both Active T-Maze and Key-to-Door, we find that both 2-layer and 4-layer Transformers greatly outperform the 1-layer Transformer in tasks with medium-term credit assignment (length $\le 200$), while still failing at long-term credit assignment (length $\ge 250$). This finding suggests that it is *unlikely* that simply increasing the size of Transformers will enable them to excel in long-term credit assignment.

**2AdR (see Figure 2 in the PDF)**

> LSTM starts to falter at a memory length of 250 for Passive T-Maze. But Fig 2 shows that LSTM can solve it with a memory length of 750. This suggests that hyperparameter tuning might influence LSTM training. Will the results of LSTM be improved by further tuning?

First, we want to clarify that although the LSTM reaches returns higher than $0.5$ in Passive T-Maze with a memory length of $750$, it does not *solve* the task, which requires a return of $1.0$. Your point on tuning may be valid for medium-term memory lengths, but for long-term memory tasks, our new results show that tuning is unlikely to help the LSTM. During the rebuttal period, we conducted new experiments with the LSTM-based agent on tasks with memory lengths of 1250 and 1500. The LSTM-based agent reaches returns below 0.5. These results confirm our conclusion that the LSTM-based agent cannot solve long-term memory tasks.
**2AdR**

> Can you do a significance test to show the superiority of GPT2 over LSTM in Figure 4?

Following the reviewer's suggestion, we performed a Welch's t-test on Passive Visual Match success rates (Figure 4 in the main paper). The p-values for the memory lengths of 500 and 750 are 0.698 and 0.038, respectively. These results indicate that there is significant evidence to reject the null hypothesis at the memory length of 750, but not at 500. Therefore, we clarify that the advantages observed with GPT at a memory length of 500 predominantly pertain to short-term memory, rather than long-term memory.

Pdf: /pdf/77ee07ad853c6df79f4f62c58c18a5297ce1d79e.pdf
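For readers who want to reproduce this kind of check: in practice one would call `scipy.stats.ttest_ind(a, b, equal_var=False)`, but the Welch statistic itself is simple enough to sketch in pure Python (illustrative code with made-up sample values, not the authors' analysis script):

```python
import math

def welch_t(a, b):
    """Welch's t-statistic and Welch-Satterthwaite degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # unbiased sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Unlike Student's t-test, the degrees of freedom are estimated per pair of samples, which is appropriate when the two agents' success rates have different variances across seeds.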
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Linguistic Binding in Diffusion Models: Enhancing Attribute Correspondence through Attention Map Alignment
Accept (oral)
Summary: This paper presents an approach called "SynGen" to improve attribute binding between nouns and their modifiers for text-to-image latent diffusion models, specifically Stable Diffusion. The authors propose a two-part loss on attention maps that (1) encourages the attention maps of nouns and their modifiers to be similar, and (2) encourages the attention maps of each noun/modifier to be different from those of other tokens in the prompt.

Datasets and baselines: They run experiments on three datasets: the attribute binding contrast set (ABC-6K) from the Structured Diffusion paper (baseline 1), data from the Attend-and-Excite paper (baseline 2), and a newly proposed challenge set called Diverse Visual Modifier Prompts.

Evaluation: (1) Human evals for concept separation and visual appeal. They find that SynGen outperforms baselines. (2) Qualitative analysis with sample prompts and generations for SynGen and the two baselines, demonstrating that SynGen overcomes failure cases of the other two models. (3) Ablations on the loss components and loss weighting.

Strengths:
1. This paper tackles an important and challenging drawback of text-to-image diffusion models: their struggle to faithfully generate objects with the right attributes.
2. This paper proposes an interesting two-part loss that imposes localized pairwise constraints on the attention maps between image patches and token embeddings. The first constraint enforces that the same image patches attend to both the noun and its modifiers. The second constraint enforces that image patches attending to these nouns/modifiers do not attend to any other tokens in the prompt.
3. This paper evaluates on a large set of prompts, including those from prior work, as well as proposing its own challenge set.
4. This paper attempts to better understand the effects and side-effects of the proposed losses via ablations.

Weaknesses:
1. The proposed method appears to be strongly grounded in the framework of the Attend-and-Excite paper.
The writing doesn't highlight this connection while introducing the approach in Section 2, beyond a minor footnote. Section 2 would probably need some revision to better draw this connection.
2. Considering that the paper is tackling a very specific problem, the general "concept separation" rating collected in the human eval seems pretty weak. Why not collect finer-grained scores for: number of objects in the prompt, number of objects from the prompt generated, and number of objects from the prompt generated with the correct attributes? Wouldn't this give a much better sense of task success and could potentially allow computing recall?
3. The loss function ablation mentions that sometimes objects are omitted from the generated image. Is combining the proposed losses with the global constraint defined in the Attend-and-Excite paper a viable option? Since the framework is identical, wouldn't combining the losses have mitigated this issue? The Attend-and-Excite paper talks about attribute binding as well, so this appears to be a variant to try.

Nits:
1. Maybe more explicitly state that this is an inference-only guidance procedure?
2. There's a typo on line 112 for the equation: $\nabla_{z_t}\mathcal{L}$, not $\nabla z_t\mathcal{L}$. Also $z'_t$, not $z't$.

Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Was there a reason to not include numerical modifiers in the study? 2. Are there unexpected side-effects of such constraints on model generalization? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: One limitation that was perhaps overlooked is the lack of standardized evaluation. Having to rely on different crowdsourcing tasks and qualitative comparisons makes it much harder to track progress.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

**W1: Writing doesn't highlight the connection to A&E, especially in Section 2:**

Thank you for pointing out that the connection to A&E did not come across clearly. We will revise Section 2 to give just credit and clarify the relation to A&E.

**W2: "Concept separation" is too general. Consider collecting finer-grained scores for: number of objects in the prompt, number of objects in the corresponding generated image, and number of objects with correct attributes in the corresponding generated image. This provides a better sense of success and allows computing recall.**

Thank you for this important suggestion. Following it, we collected finer-grained ratings as requested for 120 images for all four baselines on the DVMP dataset. In the table below, the top row shows the entity neglect rate (the number of objects raters found missing from the generated images, divided by the actual number of objects in the prompts). The bottom row shows the improper binding rate (the number of attributes raters found missing or incorrectly bound, divided by the actual number of attributes in the prompts). On this dataset, SynGen is on par with A&E in terms of entity neglect, and better in terms of improper binding. We will provide more analysis in the final version.

| | SynGen | A&E | Structured | Standard SD |
|-------------------------------|--------|------|------------|-------------|
| Entity Neglect Rate (lower is better) | **20.45** | 22.94 | 31.67 | 33.78 |
| Improper Binding (lower is better) | **46.27** | 57.51 | 66.84 | 64.14 |

**W3: Is combining the proposed losses with A&E's global constraint a viable option?**

Yes, we believe that should work well. We note that A&E uses Gaussian smoothing and 'Iterative Latent Refinement', which we find hurt our task, but both losses can be used. We will discuss this in the paper together with the response to W1.
**Nits:** We will address the noted typo, as well as state more explicitly that no training is needed.

**Q1: Why didn't you include numerical modifiers in the study?**

We agree that this is an interesting future direction for SynGen. Our preliminary work showed that controlling the number of instances of an object has some fundamental differences from the task discussed here. In terms of attention maps, a numerical modifier behaves very differently from modifiers like color, because it induces different properties of the attention maps. For instance, in an image with "two trees" you expect the attention map to usually have two non-overlapping blobs. Due to these differences, we decided to leave numerical modifiers for future work.

**Q2: Are there unexpected side-effects of such constraints on model generalization?**

As stated in the Limitations section, we observe that the visual appeal of generated images degrades with the number of modifiers in the prompt. However, SynGen's decline is remarkably less pronounced compared to existing models (see Figure 12 in the appendix).

**The lack of standardized evaluation makes it difficult to track progress:**

We agree with the reviewer that this is an important topic. In fact, we spent significant effort developing automated evaluation metrics, and our best attempt is recorded in Section G and Table 4 in the appendix. However, multimodal models notoriously fail at groundedness, and human agreement is low. One such example can be seen in the evaluation of the StructureDiffusion paper. There, in Table 1, the automatic measure only agrees with the human evaluation ~47% of the time, where 33% is random. The automatic measure we devise reaches better human agreement (43.5%, where 25% is random), but it is still low. We thus opted to keep it in the appendix, to provide a way of tracking progress while not over-emphasizing these results.
We believe that, given these limitations, it is futile at the current state of the research to rely on automatic evaluation for this task. Rather, we opted for high-quality and fine-grained human evaluation. Following this comment, we will discuss the need for automated metrics in the paper and encourage the community to develop some. We will share the raw data of our experiments so that future work can compute new metrics on our data.

--- Rebuttal Comment 1.1: Title: Re: Author response Comment: Thanks for the additional experiments and evaluation! There are lots of details brushed under the rug regarding the full evaluation setup, but I'm happy to give the benefit of the doubt that these details will be shared and that the evaluation is robust. I've raised my score to reflect this.

--- Reply to Comment 1.1.1: Comment: Thank you so much for the support and trust. We will provide all details of the experiments and make our code public so that the experiments are easy to replicate.
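As a rough illustration of the two-part loss structure discussed in this thread, here is a toy sketch. All names are hypothetical and the total-variation-style distance is our own simplification, not SynGen's actual loss or distance function; it only mirrors the described structure: a positive term pulling paired (noun, modifier) attention maps together and a negative term pushing them away from unpaired tokens.

```python
def two_part_attn_loss(maps, pairs):
    """Toy two-part loss over per-token attention maps (hypothetical sketch).
    maps:  dict token -> flattened attention map (list of floats summing to 1)
    pairs: (noun, modifier) pairs, e.g. extracted from a dependency parse
    Minimizing this pulls paired maps together (positive term) and pushes
    paired tokens away from all unpaired tokens (negative term)."""
    def dist(p, q):  # total-variation-style distance between two maps
        return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

    positive = sum(dist(maps[n], maps[m]) for n, m in pairs)
    paired = {tok for pair in pairs for tok in pair}
    negative = -sum(
        dist(maps[t], maps[u])
        for n, m in pairs
        for t in (n, m)
        for u in maps if u not in paired
    )
    return positive + negative
```

At inference time, such a loss would be differentiated with respect to the latent $z_t$ and used as a guidance step, analogous to the $\nabla_{z_t}\mathcal{L}$ update mentioned in the first review.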
Summary: A frequent issue in text-prompted image generation (and in many other grounded language scenarios) is that a model will treat text akin to a bag of words and ignore syntactic relationships, such as which adjective attaches to which noun -- this is referred to as a problem with linguistic binding. The paper proposes addressing this problem in diffusion models by adding an extra step to the diffusion process that nudges the image being generated so as to respect modifier relationships extracted from a syntactic parse of the prompt. This intervention operates in terms of cross-attention maps (i.e. attention relationships established between tokens and parts of the image), with the intuition that related tokens should map to the same region of the image, and unrelated tokens to different regions. The paper also introduces a new challenge dataset aimed at diagnosing problems with linguistic binding in image generation (the proposed method performs well on this challenge set).

Strengths: The issue of improper linguistic binding is one that plagues not only attempts at image generation, but many other model uses of grounded language. It is an important problem that has not been adequately resolved by prior work, whereas this paper has demonstrated substantial steps toward overcoming the issue without sacrificing generated image quality. All of this speaks to the significance of this work, which has the potential to see broad and immediate application without the need to undertake costly new model training efforts. The paper is clearly written, and makes effective use of figures to illustrate the problem and how the proposed method resolves it. There is thorough and convincing human evaluation on both existing data and a new challenge dataset designed specifically to reveal instances of problems with binding.

Weaknesses: There are not many weaknesses to point to.
This is not a weakness of the paper per se, but my main remaining (aesthetic) dissatisfaction with the method is its reliance on an external parser, and the possibility of cascading failure that any such pipeline-based system entails. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: none Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Limitations are well addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive review of this work. The reviewer raises an important point in designing future systems using SynGen. Future work will need to take into account failures of the parser and develop ways to handle them. --- Rebuttal Comment 1.1: Comment: I have read the other reviews, and would like to thank the authors for their responses. With this information in mind, I will maintain my score. Some questions have been raised regarding similarity to the Attend-and-Excite paper. My view on this is that Attend-and-Excite is different in that it doesn't address the attribute binding problem (except indirectly by decreasing attribute neglect). Attribute binding seems to be somewhat resistant to being solved by indirect methods, which contributes to my favorable assessment of the more direct approach taken in this paper.
Summary: The paper focuses on attribute binding/leakage/neglect problems in text-to-image models. The authors propose SynGen, a method that utilizes dependency trees combined with cross-attention map optimization to achieve better attribute binding results. Extensive experiments are conducted to show the effectiveness of SynGen compared to previous methods.

Strengths:
- The paper addresses an important compositional problem in text-to-image generation.
- The method is intuitive and effective. SynGen can obtain stronger attribute-object associations using dependency trees compared to Structure Diffusion, the positive loss design is well-motivated for solving the problem, and the negative loss design enforces constraints on the cross-attention maps beyond A&E.
- The experiments look comprehensive, considering all sources of datasets and including the even more challenging DVMP prompts. The prompts in DVMP are more challenging and realistic than the previous "A and B" format prompts.

Weaknesses:
- I am concerned with the efficiency of incorporating so many negative losses in the diffusion process. What's the speed of SynGen compared to the original SD?
- Using tree-based methods for binding may fail for more complicated and practical prompts. In reality, SD users write much longer prompts that describe the components at different levels. However, I think the contribution of SynGen still matters a lot, as the community needs time to develop from methods for short prompts to more generalized methods.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: Apart from the question in weaknesses: 1. Would SynGen still work for prompts like "an apple that is blue" or "a red apple on the left and another on the right"? 2. I may have missed this part, but how do you deal with the padding tokens? 3. Table 2 shows that positive-only and positive+negative have lower visual-appeal percentages than negative-only. Does that imply that the positive loss harms the visual appeal of the images?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: What is the speed of SynGen compared to the original SD?** SynGen is about the same speed as A&E (~10% slower), which is about half the speed of vanilla SD. We did not invest in performance tuning, so we assume speed can be improved considerably once we do. We will add this information to the Limitations and appendix. **W2: "Using tree-based methods for binding may fail.... However, the contribution of SynGen matters a lot as the community … develops more generalized methods.":** We agree. Importantly, once better binding is available, it can be easily integrated into the generation process using our SynGen approach. **Q1: Would SynGen work for prompts like "an apple that is blue" or "a red apple on the left and another on the right"?** (1) “An apple that is blue” was not supported in the submitted code, but we already have it working well in the most recent version. The same holds for “An apple that’s extremely blue” and “An apple that is red and yellow in appearance”. (2) “A red apple on the left and another on the right”: Unfortunately not. There are two issues here. First, regarding "left of”: spatial relations are not well handled by SD, and SynGen is not designed to fix that problem. Several recent papers proposed ways to improve spatial relations in SD, and we assume that combining them with SynGen may help. Second, regarding "another”: SynGen will attribute ‘red’ to ‘apple’, but ‘another’ is an implicit mention of a second red apple, which requires commonsense to identify and is beyond the scope of our work. More generally, we work with a dependency graph, a syntactic scheme which captures some useful aspects of the structure of the prompt but is limited to syntactic relations. More elaborate semantic graph schemes can be easily incorporated into our pipeline in the future. **Q2: How do you deal with the padding tokens?** We only extract the cross-attention maps of tokens that are in the prompt.
That is, we ignore the start, end, and padding tokens. **Q3: Does Table 2 imply that positive loss harms visual appeal of images?** The table demonstrates that to address the binding problem we need both the positive and the negative loss. If we were to use only the negative term, the images would be more visually appealing, but as the table shows, it would not address improper binding. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank the authors for addressing my questions. I would like to maintain my score for now.
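The positive/negative loss design discussed in this review and rebuttal can be sketched roughly as follows. This is an illustrative reconstruction, not SynGen's released code: the function names, the per-token map dictionary, and the use of cosine similarity between attention maps are all assumptions for exposition (the actual method may use a different distance between attention distributions).

```python
import numpy as np

def attn_similarity(a, b):
    # Cosine similarity between two flattened cross-attention maps.
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def syngen_style_loss(attn_maps, bound_pairs):
    # attn_maps: {token_index: HxW cross-attention map over image patches}
    # bound_pairs: modifier-noun index pairs extracted from a dependency parse
    tokens = sorted(attn_maps)
    # Positive term: pull the maps of syntactically bound pairs together.
    pos = sum(1.0 - attn_similarity(attn_maps[i], attn_maps[j])
              for i, j in bound_pairs)
    # Negative term: push every other token pair's maps apart.
    bound = set(map(frozenset, bound_pairs))
    neg = sum(attn_similarity(attn_maps[i], attn_maps[j])
              for i in tokens for j in tokens
              if i < j and frozenset((i, j)) not in bound)
    return pos + neg
```

At inference time, a loss of this shape would be minimized over the latent at selected denoising steps; the reviewer's efficiency concern (W1) stems from the quadratic number of negative pairs in this sum.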
Summary: This paper approaches the binding problem in text-to-image diffusion models, which is marked by the inability of the models to appropriately identify which modifiers are attached to which nouns in the input text. This often results either in images that mix up the attributes of the different nouns mentioned in the text, or in images that completely disregard some of the attributes or fall back to statistically likely combinations that are not mentioned in the text. The paper proposes an inference-time optimization based on first identifying which modifiers are associated with which nouns, and then optimizing a loss function that explicitly enforces the cross-attention matrices of corresponding (i.e., related by a modifier-noun relationship) tokens in the text to be similar, while those of every other pair in the sentence are pushed to be dissimilar. This results in a diffusion model that adheres more faithfully to the input text and produces images that are deemed better more often than competing baseline approaches that attack the same problem. Strengths: ## Originality, Quality, Significance * The paper provides a novel lightweight method using off-the-shelf syntactic parsers to enforce better binding of modifiers to nouns in text-to-image diffusion models. The qualitative results seem promising, and the human evaluation results suggest that the proposed method outperforms previous approaches that mitigate the binding issue. ## Clarity The paper is easy to follow. The qualitative examples demonstrate the different kinds of issues faced by the baselines and the improvements brought by the proposed approach. Weaknesses: * Some qualitative examples on the collected DVMP dataset, with a categorization of failure cases based on having uncommon modifiers vs. different numbers of modifiers (performance on 1, 2, 3, and more modifiers), would have been good to see.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: ## Clarifications * The metric used in the human evaluation in Table 1 is not sufficiently clear. Aren't the values in each column supposed to sum to 100? * What is the Complex Concepts Prompts dataset mentioned in Fig 5? ## Suggestions * It would be easier for the reader to follow if the examples in Figures 4, 5, and 6 were reorganized so that the references to them in the text are chronological. Currently the reader has to keep going back and forth across the figures. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Provide qualitative examples having uncommon modifiers vs. different numbers of modifiers:** Following this suggestion, we will show qualitative examples of SynGen and our baselines on prompts with a varying number of modifiers, specifically with 2, 3, 4, 5, and 6 modifiers. Note that the DVMP dataset does not contain prompts with just a single modifier. In this context, we note that Figure 12 in the appendix quantitatively compares SynGen with A&E over several numbers of modifiers in the DVMP dataset. **Q1: Should values in Table 1 sum to 100%?** Yes. The current sum of ~99.8 was due to a rounding error. We’ll fix it in the final version. **Q2: What is the Complex Concepts Prompts dataset mentioned in Fig 5?** This was an earlier name we considered for our DVMP data. Thank you for noting this mistake. **Suggestion 1: Reorganize Figures 4, 5, 6 for chronological reference in the text:** We agree with your suggestion and will rearrange the figures to be congruent with the text describing them. Thank you for helping us improve the paper. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications Comment: Thanks, and looking forward to the qualitative examples and categorization of errors! Maintaining the score for now.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
3D-Aware Visual Question Answering about Parts, Poses and Occlusions
Accept (poster)
Summary: This paper introduces the concept of 3D-aware Visual Question Answering (VQA) and presents a new dataset called "Super-CLEVR-3D". It also proposes a model called "PO3D-VQA" that combines probabilistic neural symbolic program execution with deep neural networks using 3D generative representations of objects. The experimental results show that the proposed model outperforms existing methods, but there is still a significant performance gap compared to 2D VQA benchmarks, highlighting the need for further research in 3D-aware VQA. Strengths: + Originality: The authors propose a new dataset, Super-CLEVR-3D, that extends an existing 2D dataset with 3D-aware questions, introducing a novel benchmark for evaluating 3D-aware VQA models. The paper presents the PO3D-VQA model, which combines probabilistic neural symbolic program execution with deep neural networks using 3D generative representations of objects, providing a novel approach for addressing the 3D-aware VQA task. + Quality: Thorough experimental results demonstrate the superiority of the proposed model, PO3D-VQA, over existing methods. + Clarity: The paper provides clear motivation, defines the task of 3D-aware VQA, and describes the proposed dataset and model in a detailed and comprehensible manner. + Significance: The paper addresses the importance of understanding the 3D structure of visual scenes in VQA, introduces a new dataset, and showcases improvements in accuracy, advancing the field of VQA. Weaknesses: - The paper addresses the VQA problem in 3D scenes but only takes images as input. Why not use point clouds or multi-view images, which are more suitable for a 3D scenario? - Where does the ground-truth information about pose and occlusion come from? Is it included in the Super-CLEVR dataset? - The model design of PO3D-VQA is somewhat weak. It looks like the combination of Neural Meshes and P-NSVQA.
- I wonder about the performance of only using language and of using an oracle object representation (ground-truth class and pose labels). In this way, we can show the reasoning ability of the model, or whether accurate object detection alone is sufficient to solve this task. - How does the proposed model compare with scene-graph-based methods, since the method first parses the image into scene representations? What is the advantage of the neural symbolic method over deep graph networks? - There is only limited discussion of dataset limitations: while the Super-CLEVR-3D dataset is introduced as an extension of an existing dataset, the paper does not extensively discuss the limitations or potential biases of the dataset. Providing insights into the dataset construction process, potential challenges, and potential mitigations would strengthen the validity of the findings. There is also only limited discussion of scalability: while the experimental results show improvements over existing methods, the paper does not thoroughly discuss the scalability of the proposed model. Understanding how the model's performance scales with larger and more complex scenes would be valuable for assessing its practical applicability. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Have you considered analyzing the failure cases of existing 2D VQA models on 3D-aware questions? This could provide insights into the limitations of 2D reasoning and highlight the unique strengths of the 3D-aware approach. Have you evaluated the performance of the model on larger-scale scenes or real-world datasets? If not, what challenges do you anticipate in scaling up the model to such scenarios? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Please refer to the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the time and effort you put into reviewing our paper. Here we address the questions you raised. **Q1: Why not use point clouds or multi-view images, instead of single-view images, as input?** A1: We would like to highlight the focus of our paper: 3D-aware VQA from 2D images, i.e., answering questions about 3D information that can be inferred from 2D images. We chose to use 2D images instead of 3D data because (1) in real-world applications, it is much harder to obtain point cloud or multi-view data than single-view 2D images; therefore, understanding 3D information from 2D images is an important ability of computer vision models. (2) While point clouds or multi-view images may provide rich information, the three types of questions that we focus on in this paper, i.e., pose, parts, and occlusions, can be inferred from single-view images. Therefore, we argue that it is important and valuable to understand 3D information from 2D images. While 3D input data can be an extension in future work, it is beyond the scope of this paper. We will add this to the revision of the paper. **Q2: Where does the ground-truth information about pose and occlusion come from? Is it included in the Super-CLEVR dataset?** A2: (a) The pose information is provided in the scene annotation files of the Super-CLEVR dataset, which annotate the position, rotation (pose), and attributes of each object in the scene. (b) The occlusion annotations are not directly provided in the Super-CLEVR dataset, but they can be easily computed by re-rendering the scene based on the scene annotation files. To get the occlusion annotation, we render each object in the scene separately to obtain a single-object image for each object, which is guaranteed to be unoccluded; we then compare the unoccluded masks in these rendered single-object images with the object masks of the multi-object images provided in Super-CLEVR.
If the single-object mask differs from the multi-object mask by more than a threshold (15 pixels), we consider the object/part to be occluded. We will add this description to the revised paper. **Q3: The model design of PO3D-VQA is somewhat weak. It looks like the combination of Neural Meshes and P-NSVQA.** A3: We would like to politely disagree and argue that the model is not a simple concatenation of neural meshes and P-NSVQA. As discussed in Sec 4.2, non-trivial technical improvements have been made. Existing mesh-based pose estimation models [36, 47] are for single-class images: i.e., all objects in the image are from the same class and the object class is known during inference. However, in VQA settings, the scene contains multiple objects from multiple classes, posing a greater challenge to correctly classify the objects and accurately solve the 6D poses. Therefore, we adopt the analysis-by-synthesis pipeline used in Neural Meshes and significantly extend the model by introducing 3D-NMS, greedy proposal generation, and post-filtering to address the multi-class and multi-object problem. In our paper, we also discuss the advantage of our method over the naive method that uses object detectors like Faster R-CNN to detect and classify the object category. Moreover, besides the technical contributions, our work is the first to integrate 3D geometric neural meshes with VQA, which is shown to be effective and achieves strong results compared with existing methods. We would like to highlight this contribution and stress the importance of 3D geometry in VQA. **Q4: Performance using oracle object representation, in order to show the reasoning ability of the model.** A4: Thank you for the suggestion. We replace the perception with the oracle object representation provided in the annotations and run the reasoning module (P-NSVQA) based on it.
The model achieves 99% using the oracle representations, suggesting the strong ability of the symbolic reasoning module. **Q5: What is the advantage of the neural symbolic method over deep graph networks?** A5: We agree that graph models are widely adopted in VQA, as shown in previous works (Hu et al., 2019). However, compared with these models, neural symbolic methods like P-NSVQA or NSVQA have been shown to be more effective and achieve state-of-the-art performance on both the CLEVR dataset and the Super-CLEVR dataset. In addition, the modular symbolic methods have better interpretability (step-by-step reasoning), data efficiency, and robustness in out-of-distribution settings, as shown in previous works [38, 31, 51]. **Q6: Have you evaluated the performance of the model on larger-scale scenes or real-world datasets? If not, what challenges do you anticipate in scaling up the model to such scenarios?** A6: Please refer to the **general response** about results on real images, as well as challenges in running experiments on large-scale datasets. **Q7: Discussion of failure cases of baseline models.** A7: We show such examples in Figure 5 of the main paper; here we also provide two more examples in Fig-4 of the rebuttal PDF. The 2D models struggle to locate the small parts and to determine whether an object is occluded when the occlusion area is small. The PNSVQA-Project method makes similar mistakes because the pose estimator is not precise enough. **Q8: Discussion of the dataset limitations.** A8: Regarding potential biases in the dataset: as the original Super-CLEVR dataset is designed to study VQA domain generalization, the distribution of the objects/attributes, question redundancy, and the compositionality of concepts are balanced. We built our Super-CLEVR-3D on the $default$ version of Super-CLEVR, where the concepts are well balanced. We will include more statistics about the dataset distribution in the supplementary materials.
[1] Hu, Ronghang, et al. "Language-conditioned graph networks for relational reasoning." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
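The rebuttal's A3 names 3D-NMS as one of the extensions for multi-class, multi-object scenes. A minimal sketch of what greedy non-maximum suppression over predicted 3D object centers could look like; this is an illustrative reconstruction, not the authors' implementation, and the `radius` suppression criterion is an assumption (the paper's 3D-NMS may use a different overlap measure).

```python
import numpy as np

def nms_3d(centers, scores, radius=1.0):
    # centers: (N, 3) predicted 3D object centers; scores: (N,) detection scores.
    # Greedily keep the highest-scoring proposal, then suppress any remaining
    # proposal whose 3D center lies within `radius` of an already-kept one.
    order = np.argsort(scores)[::-1]
    kept = []
    for i in order:
        if all(np.linalg.norm(centers[i] - centers[k]) > radius for k in kept):
            kept.append(int(i))
    return kept
```

Suppressing in 3D rather than on 2D boxes is what lets nearby-but-depth-separated objects survive; as A4 in the later rebuttal notes, it still cannot separate same-class objects whose 3D centers nearly coincide.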
Summary: 1. The paper extends 2D VQA to the task of 3D-aware VQA, which requires understanding the 3D structure of visual scenes, including knowledge of 3D object pose, parts, and occlusions. 2. It introduces the Super-CLEVR-3D dataset, which contains questions about object parts, their 3D poses, and occlusions. 3. The proposed model, PO3D-VQA, is a 3D-aware VQA model with two key techniques: probabilistic neural symbolic program execution, and deep neural networks paired with 3D generative object representations for robust visual recognition. Strengths: 1. Proposed an interesting 3D-aware VQA task. 2. Introduced a valuable Super-CLEVR-3D dataset. 3. Developed a novel PO3D-VQA method that significantly outperforms prior methods. 4. Detailed mathematical explanation of the components of the method. 5. Comprehensive analysis and discussion. Weaknesses: 1. Lack of some statistics about the constructed Super-CLEVR-3D dataset. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. What type of parts are included in the part questions? 2. Is there a threshold for an object to be considered occluded? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: 1. As described in the paper, the work is currently limited by synthetic scenes. 2. As described in the paper, the method is sensitive to pose prediction errors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful and positive feedback on our paper. Please find our responses below: **Q0 & A0: Please refer to the general response for generalization to real images.** **Q1: Lack of some statistics about the constructed Super-CLEVR-3D dataset.** A1: We include the dataset statistics, as well as the list of question templates and the object-part list, in the supplementary materials. We will revise and include important statistics in the main paper, especially the distribution of the questions, objects and attributes (classes, shape, material, color, size), parts, and occlusion ratios of objects and parts. Here we list the distribution of all attributes of objects in these tables.

| | car | bus | motorbike | aeroplane | bicycle |
|------------|--------|--------|-----------|-----------|---------|
| Percentage | 23.56% | 19.78% | 19.64% | 16.96% | 20.05% |

| | red | brown | cyan | green | purple | blue | gray | yellow |
|------------|--------|--------|--------|--------|--------|--------|--------|--------|
| Percentage | 12.28% | 12.48% | 12.57% | 12.41% | 12.57% | 12.67% | 12.49% | 12.52% |

| | small | large |
|------------|--------|--------|
| Percentage | 55.34% | 44.66% |

| | metal | rubber |
|------------|--------|--------|
| Percentage | 50.04% | 49.96% |

**Q2: What type of parts are included in the part questions?** A2: Thanks for bringing this up; we will include more details in our revised paper. The object parts are mined from the UDA-Part dataset [32], where each 3D object model is annotated with parts. We cleaned the UDA-Part annotations by removing the noisy annotations and the extremely small parts which can hardly be seen when rendered onto images. The final list of parts for each object type is included in the supplementary materials. **Q3: Is there a threshold for an object to be considered occluded?** A3: Yes, there is a threshold.
When generating the questions, we consider an object to be “not occluded” if the number of occluded pixels is zero; we consider an object to be “occluded” when the number of occluded pixels is larger than 15 (at 640 by 480 image resolution). We will include the threshold in the implementation details.
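The occlusion check described across these rebuttals (compare the object's unoccluded single-object render against its mask in the full scene; flag occlusion above a 15-pixel threshold) can be sketched as below. This is a hedged reconstruction for illustration, not the authors' code; the function name and boolean-mask representation are assumptions.

```python
import numpy as np

OCCLUSION_THRESHOLD = 15  # pixels, per the rebuttal, at 640x480 resolution

def is_occluded(single_object_mask, scene_mask):
    # single_object_mask: boolean mask from rendering the object alone
    #   (guaranteed unoccluded, per the annotation procedure described above).
    # scene_mask: boolean mask of the same object in the multi-object render.
    # Pixels visible in the solo render but missing from the scene render
    # are the ones hidden by other objects.
    hidden_pixels = np.count_nonzero(single_object_mask & ~scene_mask)
    return hidden_pixels > OCCLUSION_THRESHOLD
```

Note this yields the ternary labeling used for question generation: exactly zero hidden pixels means "not occluded", more than the threshold means "occluded", and the in-between band is left ambiguous.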
Summary: This work introduces a framework for Visual Question Answering (VQA) that incorporates understanding of the 3D structure of scenes, a significant leap from traditional 2D-based models. To evaluate the task, they propose a new dataset, Super-CLEVR-3D, designed specifically for 3D-aware VQA, containing questions that necessitate compositional reasoning about 3D object parts, poses, and occlusions. To tackle these queries, they also put forth a new model, PO3D-VQA, combining probabilistic neural symbolic program execution for reasoning with deep neural networks for robust 3D scene parsing. Experiments show the proposed PO3D-VQA model outperforms existing techniques, especially on more complex questions, underscoring the need for 3D understanding in future VQA research. Despite the improvements, the noticeable performance gap compared to 2D VQA benchmarks indicates that 3D-aware VQA remains a critical area of exploration. Strengths: I believe the studied direction, 3D-aware 2D-VQA, is important for our community, especially when we want to tackle real-world tasks. Both the proposed VQA dataset and the framework, PO3D-VQA, take a good step towards the big goal. All the techniques used, as shown in the method and Figure 2, are sound and composed in a reasonable way. Overall, the paper is easy to follow. Results in Table 1 demonstrate the strength of the proposed method, which yields significant improvements, and also indicate there is more to do in the future. Some ablation studies are included in Section 5.4. Weaknesses: As the paper claims a 3D-VQA dataset contribution, I am curious how the model transfers from the dataset to real images. For example, taking a picture with multiple real 3D models and running the model to check the sim-to-real performance. Also, an odd part is that the current motivation for this paper is 3D navigation and manipulation, but few of the defined problems are related to navigation or manipulation.
- Can the proposed model accurately locate parts of 3D objects that can help with manipulation? - Are there questions directly related to navigation and manipulation in 3D space? The current question-answers in Figure 1, in my mind, are more related to previous VQA and less related to the 3D VQA that really matters. Besides, there is no failure-case understanding and analysis. For the proposed pipeline, the failure cases should include both failures of the perception module and failures of the reasoning module. Providing failure cases can help people understand the limitations of the proposed system without weakening the contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the concerns raised above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It seems the current framework cannot correct itself if the perception model generates wrong outputs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. Here we address the questions: **Q0 & A0: Please refer to the general response for generalization to real images.** **Q1: About 3D navigation and manipulation: Are there questions directly related to navigation and manipulation in 3D space?** A1: There are no questions about navigation and manipulation in our dataset, and we will clarify this in our final paper. In the submission, we talked about 3D navigation and manipulation because this is one of the motivations as well as the long-term goals of our work. By introducing 3D VQA we make an initial step in that direction by designing 3D questions that the current 2D baseline models are not able to answer well. We also note that for manipulation and navigation, we would need to annotate parts that are useful for these interactions, which is a direction that we plan to pursue in future work. **Q2: Can the model accurately locate parts of 3D objects that can help manipulate?** A2: In our 3D VQA setting, the model can locate the parts accurately in most cases. In Fig-3 of the rebuttal PDF, we visualize the part localization results predicted by our model on the Super-CLEVR-3D dataset. But as we stated in Q1, our 3D VQA dataset is just a first step, and the parts needed for 3D manipulation or navigation might be different from those annotated in our dataset. **Q3: The current question-answers in Figure 1 seem more related to previous VQA and less related to the 3D VQA that really matters.** A3: As stated in our paper, the questions in the dataset require a comprehensive understanding of the 3D structure of scenes, such as 3D poses, occlusion relationships, and the hierarchical relationship between objects and parts. From our experiments, we show that the 2D baseline methods cannot handle these questions without 3D understanding.
In this rebuttal, we add z-direction relationship and depth-related questions on new images, where the 2D models are not able to determine the precise height (in the z-direction) and depth (see Fig-2 in the rebuttal PDF). **Q4: Failure case analysis: do the errors come from the perception module or the reasoning module?** A4: We checked the failure cases and provide two visualizations in Fig-5 of the rebuttal PDF. We observe that most of the failure cases are due to errors of the 6D pose estimator, typically when multiple objects from the **same class** are overlapping. As the mesh-based 6D pose estimator detects objects and their 6D poses by class (Fig-3 II of the main paper), it may fail to parse all objects from the same category when they are too close to each other. 3D NMS can effectively improve dense-scene parsing when objects are from different categories, but conceptually it cannot help when objects are from the same category in a dense scene. We argue that 6D pose estimation in dense scenes is still a challenging problem, and many current works on 6D pose estimation still focus on simple scenes with single objects (Ma et al. 2022; Xiang et al. 2014; Ze and Wang 2022). **Q5: It seems the current framework cannot correct itself if the perception model generates wrong outputs.** A5: As our reasoning module is a probabilistic neuro-symbolic method (P-NSVQA [31]), it allows the model to output correct answers even when the vision module makes errors. We also introduce new probabilistic reasoning functions for filtering poses and occlusion in the reasoning module. As studied in [31], with the same perception module, P-NSVQA outperforms NSVQA on all test sets, which means it revises errors from the vision predictions. We believe that the probabilistic reasoning module can also help our 3D perception model revise some errors.
But as with most modular VQA models, errors from the vision model can inevitably impact the final model's accuracy and performance; on the other hand, the modular design is good for interpretability. [1] Ma, Wufei, et al. "Robust category-level 6d pose estimation with coarse-to-fine rendering of neural features." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [2] Xiang, Yu, Roozbeh Mottaghi, and Silvio Savarese. "Beyond pascal: A benchmark for 3d object detection in the wild." IEEE Winter Conference on Applications of Computer Vision. IEEE, 2014. [3] Ze, Yanjie, and Xiaolong Wang. "Category-level 6d object pose estimation in the wild: A semi-supervised learning approach and a new dataset." Advances in Neural Information Processing Systems 35 (2022): 27469-27483. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: I thank the authors for the effort put into the rebuttal. I encourage the authors to include all the discussion here in the revision, especially the real-image and failure-case material. It would be better to include some real cases showing that your pipeline can answer correctly when the vision module outputs errors. --- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: Thanks for your comments. We will add the discussion about real images and failure cases to our revised paper. Also, we will add some cases to show how errors from the vision module can be corrected by the probabilistic reasoning module.
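The rebuttal's A5 argues that probabilistic program execution lets the reasoner tolerate perception errors by propagating attribute probabilities rather than hard labels. A tiny sketch of that idea; this is not P-NSVQA's actual code, and the function names and per-object probability layout are assumptions purely for exposition.

```python
import numpy as np

def soft_filter(object_probs, attribute, value):
    # object_probs: list of dicts mapping attribute -> {value: probability},
    # one dict per detected object (the perception module's soft outputs).
    # Returns each object's probability of matching the filter, instead of
    # committing to a hard subset of objects.
    return np.array([obj[attribute].get(value, 0.0) for obj in object_probs])

def soft_count(match_probs):
    # Expected number of matching objects under the per-object probabilities.
    return float(match_probs.sum())
```

Because the filter output stays soft, a mildly miscalibrated detection (say, a red object scored at only 0.6) still contributes to downstream counts and comparisons, which is the sense in which the reasoning module can "revise" vision errors rather than inherit a wrong hard decision.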
Summary: This paper seeks to improve VQA models' understanding of the 3D structure of images, particularly parts, poses, and occlusions. There are two main contributions: 1. Super-CLEVR-3D: a dataset that contains questions about parts, poses, and occlusions. 2. PO3D-VQA: a 3D-aware VQA model that combines neurosymbolic program execution for reasoning with 3D generative representations. Experimental settings: the proposed model is tested on the proposed dataset Super-CLEVR-3D. Strengths: 1. The dataset is a good contribution for studying 3D vision-language reasoning. 2. The proposed model, PO3D-VQA, is well explained and motivated, along with the addition of a 6D parsing module and 3D NMS for better scene understanding. 3. The proposed model is a good proof of concept for the combination of neurosymbolic program execution and features learned using deep learning. 4. Relevant baselines are used, and the experimental setting is sound and well explained. Weaknesses: 1. The proposed model is only tested on the Super-CLEVR-3D dataset. 2. There aren't any questions about the "z" direction, i.e., above/below relationships, as the objects in [31] are always on the same surface. Because of this, it is questionable to call the dataset "3D", as only 2D relationships are covered. The dataset is missing templates/questions about depth, distance between objects, etc. 3. The Super-CLEVR-3D dataset contains only 5 categories, all of which are forms of ‘vehicles’. This could also lead to an overestimation of the object detection module that is used in the model. 4. The method could be tested on several other 3D-aware datasets: for instance GQA [Hudson et al., CVPR 2019], or this work https://arxiv.org/abs/2209.12028. Also see Q2, Q3, Q4 for more questions about evaluations. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Has jointly training the pose estimator and attribute classifier been explored? Could training them jointly limit the erroneous pose prediction instances? 2.
How is the performance on the original Super-CLEVR dataset or on the CLEVR dataset? 3. Does the method generalize well to cases where parts/poses/occlusions are not mentioned? Can the proposed model be reliably used for VQA outside of the Super-CLEVR setting? 4. Can this model be used for real-world images with questions about parts/poses/occlusions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed [line 348]. Discussions along the lines of Q2/3/4 above would also be useful when describing the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the detailed discussions and the appreciation of our work. We would like to address the concerns and questions you raised: **Q0 & A0: Please refer to the general response for generalization to real images.** **Q1: There aren't questions about the “z” direction (“above/below” relationships) as the objects are always on the ground.** A1: We thank the reviewer for the suggestion. As suggested, we create new images where some airplanes are flying in the air (while the cars are kept on the ground plane). Examples of the images are shown in Fig-2 of the rebuttal PDF. For the new images, we ask “z-questions” about height relationships and depth. To be more specific, the height-relationship questions ask about “above/below” relationships that can be used to query the objects (e.g. “What shape is the cyan thing above the blue shiny car?”); for the depth questions, we add comparisons of depth between two objects where the size and bounding box location are not sufficient for the prediction (e.g. “Is the aeroplane closer to the camera than the school bus?”). In this way, we create a subset containing 100 images and 379 questions and test our PO3D model on this subset. We test the PO3D-VQA model directly on the new dataset without retraining. On this dataset, our PO3D model achieves 90.33% accuracy on height-relationship questions and 78.89% accuracy on the distance-based questions, suggesting that our model is not limited to objects on the ground plane and can successfully handle questions about height. As the baseline models only use the bounding box to determine the spatial relationship between objects, they are not able to determine the height relationships. **Q2: The Super-CLEVR-3D dataset contains only 5 categories, all of which are forms of vehicles.** A2: Our model is trained from 3D CAD models that have detailed part annotations.
Such well-annotated CAD models are difficult to obtain; therefore, we are limited to the object categories covered in the UDA-Part dataset [32]. While there are large-scale 3D object model collections like Objaverse [A], defining and annotating parts on these meshes is challenging and time-consuming. However, we argue that the 5 categories are already very challenging for existing models, as suggested in [31, 32]. In our work, we not only improve the performance greatly over the existing methods but also extend the dataset with different types of questions, which is a non-trivial contribution. **Q3: Could jointly training the pose estimator and attribute classifier bring better results?** A3: Thanks for the suggestion. We agree that it’s a valuable direction to study whether the pose estimator can benefit from joint training. However, there remains a non-trivial technical gap: the pose estimation process is an iterative render-and-compare procedure that cannot be easily integrated with an attribute classification head. Given the significant amount of effort required, we leave this as future work. **Q4: How is the performance on the original Super-CLEVR dataset or on the CLEVR dataset?** A4: We test our model on the original Super-CLEVR dataset. The accuracy of the PO3D-VQA model is 88.40%, which is lower than the SOTA methods (95% for PNSVQA). Given that the original dataset primarily focuses on 2D questions, precision in 2D localization becomes crucial. While MaskRCNN can learn detection directly from bounding-box supervision, our 6D pose estimator is trained to solve 6D pose and is not explicitly designed for 2D detection. In Fig-5 of the rebuttal PDF, we show that our model may miss some objects, especially when multiple objects are too close or overlapping. As the mesh-based 6D pose estimator detects objects class by class (Fig-3 II of the main paper), it can fail to parse all the objects of one category from a single feature map in such cases.
We will add this limitation analysis to the paper. **Q5: Does the method generalize well to cases where parts/poses/occlusions are not mentioned or to VQA outside of the Super-CLEVR setting?** A5: First, the original Super-CLEVR dataset does not contain parts/poses/occlusions questions. Our model achieves 88.40% accuracy on the original Super-CLEVR, suggesting that the model generalizes well to this setting. Second, as mentioned in the general response, the model works reasonably well on more realistic images, which indicates that our model is not limited to the Super-CLEVR setting. [A] Objaverse: A Universe of Annotated 3D Objects. arXiv 2212.08051 --- Rebuttal Comment 1.1: Title: thank you for the rebuttal Comment: Thank you for your detailed rebuttal and answers to the reviewers' questions. - The real-world experiments are interesting and should be added to the main paper even though they are preliminary / not full-scale. And I appreciate the authors' response about why scaling it to a large realistic dataset would be challenging. - Thanks for the "z-direction" experiments -- do you plan to add the new subset that you produced during the rebuttal to the proposed dataset? - The fact that performance drops for the older Super-CLEVR dataset is problematic -- perhaps the models may be overfitting to either dataset? --- Reply to Comment 1.1.1: Title: Thanks for your comments Comment: 1. Thanks. We will add a new section with a detailed description and analysis of the real-world experiments following our discussion in the rebuttal. 2. Yes, we are working on expanding the "z-direction" dataset with more images as an independent subset of our dataset. And we will add these experiments to our main paper. 3. Thanks for your comment.
On the old SuperCLEVR dataset, the inferior performance of PO3D-VQA is mainly due to the quality of the predicted 2D bounding boxes, as our model is not specifically trained with bounding box supervision, while the PNSVQA model builds directly on a MaskRCNN, which is specifically designed for 2D detection and hence gives a significant performance boost on 2D questions. In contrast, we extended our PO3D-VQA model to 2D questions by predicting the 2D boxes after first inferring the 3D scene parameters, which is arguably much more challenging than plain 2D prediction of bounding boxes. Particularly for dense scenes with multiple overlapping objects (as shown in the failure cases in our rebuttal), our PO3D-VQA model sometimes misses highly occluded objects, and these missing objects lead to the performance drop of PO3D-VQA. In order to demonstrate that the missing objects in dense scenes account for most of the performance gap, we conduct a new experiment. We compare the predicted object boxes with the groundtruth boxes using 2D IoU to detect ‘missing’ objects. Specifically, a missing object is a ground-truth object whose box does not overlap with any predicted object box (i.e. it has an IoU of 0.0 with every prediction). Note that this is a very restrictive setting, since we only flag those objects as missing that do not have any overlap with any of the detected boxes. Nevertheless, we find that by adding back these missing objects, our PO3D-VQA achieves a performance of 94.3%, which is comparable with the previous SOTA. This experiment shows that the missing objects demonstrated in the failure cases are the main reason for the performance gap between PO3D-VQA and PNSVQA on the old SuperCLEVR. We would like to point out that our PO3D-VQA parses the scene from a 3D perspective and is specifically designed for 3D-based questions. Evaluating the model on SuperCLEVR is not necessarily a fair comparison, as our model doesn’t use the 2D bounding box supervision in the older SuperCLEVR.
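As an illustration, the restrictive missing-object criterion described above can be sketched in a few lines. This is a hypothetical snippet we provide for clarity, assuming axis-aligned boxes in (x1, y1, x2, y2) format; it is not our actual evaluation code:

```python
def iou(a, b):
    # 2D intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def missing_objects(gt_boxes, pred_boxes):
    # A ground-truth object is flagged as "missing" only if it has zero
    # overlap (IoU == 0.0) with every predicted box -- the restrictive
    # criterion described above.
    return [g for g in gt_boxes
            if all(iou(g, p) == 0.0 for p in pred_boxes)]
```

Accuracy is then recomputed after adding the flagged objects back into the predicted scene.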
In future work, we aim to improve PO3D-VQA in dense scenes and to develop a hybrid neural-symbolic model that excels at both 2D and 3D questions.
Rebuttal 1: Rebuttal: # General Response We thank all the reviewers for the feedback. We are glad that all four reviews are positive and acknowledge both of our contributions: (1) the Super-CLEVR-3D dataset, i.e. a new VQA dataset that introduces an important new task (HP3a, NU9U) that requires reasoning over object parts, 3D poses, and occlusions, and (2) the originality of our proposed PO3D-VQA model (NU9U, SUrx, sJrg) for solving these tasks. We are happy that the reviewers highlighted the significance of our experimental improvements over the baselines (HP3a, SUrx, NU9U), and the clear motivation and good writing quality of the paper. Here we provide a general response about how our proposed PO3D-VQA model can be extended beyond Super-CLEVR-3D to datasets that contain real-world imagery. First, **we agree that extending our work to real images or other 3D VQA datasets is an important research topic, and we have started doing this in ongoing work**. However, we note that extending such 3D-aware VQA to these datasets is non-trivial. For real-world data, existing real-world VQA datasets (like GQA and FE-3DGQA suggested by reviewer-sJrg) lack 3D annotations for objects and 3D-related question-answer pairs (questions in existing VQA datasets refer mostly to the 2D space), which makes it hard to train and test our model on them. To enable 3D-aware VQA on these datasets, we need to generate additional 3D annotations for them. Moreover, real VQA datasets contain a large variety of object classes, with many being highly articulated (e.g. humans and animals). We are working on annotating these datasets with 3D information and scaling up the number of object classes in our PO3D-VQA model, which could allow us to train and test our model, but this requires a significant amount of effort and time.
However, to show that our PO3D-VQA model can, in principle, work on realistic images, we tested it on several more realistic image samples that were generated with objects from real datasets for which 3D annotations are available (see Fig-1 of the rebuttal file). This small preliminary experiment shows that our model can successfully answer 3D-related questions on realistic images. To give more details on this experiment: the example images are manually created using the vehicle objects (e.g. car, bus, bicycle) from ImageNet. In this experiment, the pose estimator is trained on the PASCAL3D+ dataset and can successfully predict the poses of objects in the image, as shown in (b). The attribute (color) prediction module is trained on Super-CLEVR-3D, and the object shapes are predicted by a ResNet trained on ImageNet. Our model can correctly predict answers to questions about object pose, parts, and occlusions, e.g. “Which object is occluded by the mountain bike”. With the provided examples, we hope to demonstrate that it is the difficulty of obtaining 3D annotations, rather than a limitation of our model in particular, that makes 3D-aware vision algorithms hard to apply. Additionally, we also conduct experiments on new types of questions with more complex scenes, as requested by some reviewers. We hope these can help to address the reviewers’ questions about our synthetic dataset and our proposed 3D-aware VQA model. Pdf: /pdf/ce60f33bb47c53d521e3ea66f098884dbf76a1bf.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Boosting Adversarial Transferability by Achieving Flat Local Maxima
Accept (poster)
Summary: The authors empirically observed that adversarial data lying at flat local maxima yields enhanced attack transferability. Inspired by this observation, this paper proposes a regularization that helps gradient-based attacks find adversarial data at flat local maxima. This regularization penalizes the gradient norm around the adversarial data and can be efficiently computed via the finite difference method. The empirical results validate the effectiveness of the proposed method in improving attack transferability. Strengths: 1. The motivation of the proposed method is clear. The observation in Figure 1 is interesting and inspiring. 2. The authors utilized the finite difference method to make the computation more efficient. 3. It seems that the proposed method can significantly improve attack transferability. Weaknesses: 1. The proposed method is not theoretically motivated, which degrades its soundness. The authors claimed, via an analogy, that the optimization of the perturbation equals the model training process. However, there are no theoretical results to support this claim. Since the aforementioned claim is not solidly proven, the reason that adversarial data at flat local maxima yield better transferability remains unclear. 2. The paper does not provide the standard deviation of the reported results to validate their significance. 3. The paper lacks some empirical and theoretical analyses of the effectiveness of the finite difference method in speeding up optimization. I think the optimization of the gradient norm is very important for the proposed method. Therefore, the effect of the acceleration method is worth studying. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Could the authors provide the error bar? 2. Could the authors provide a comparison between the proposed method and the previous work [1]? 3.
Could the authors discuss the effectiveness of the finite difference method in speeding up optimization, empirically and/or theoretically? What are the computational consumption and the attack success rate with/without the finite difference method? 4. What is the performance of the proposed method compared to the baseline methods evaluated on transformer-based models (i.e., vision transformers)? [1] Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets, ICLR 2020. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The motivation of the proposed method lacks theoretical support. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 2kUx, Thanks for your valuable and insightful comments. We address your concerns as follows: **Q.1.** Could the authors provide the error bar? > **A.1.** Following your suggestions, we will add all error bars to the final version of the paper. Here we provide part of the error bars as follows. From the results, it can be seen that the standard deviation of our PGN method is not very large, which indicates that our method is stable. > > |Source:Inc-v3|Inc-v4|IncRes-v2|Res-101| > |:-:|:-:|:-:|:-:| > |VMI|74.81$\pm$0.57|70.22$\pm$0.69|75.84$\pm$0.47| > |EMI|81.12$\pm$0.47|76.90$\pm$0.72|80.43$\pm$0.63| > |RAP|82.44$\pm$0.51|79.27$\pm$0.64|81.43$\pm$0.72| > |PGN |91.61$\pm$0.73|90.62$\pm$0.94|89.14$\pm$0.68| **Q.2.** Could the authors provide a comparison ... and the previous work [R1]? > **A.2.** As you suggested, we compare our PGN with SGM [R1] using Res-18 model with MI-FGSM as the backbone method. Since SGM modifies the backpropagation and is compatible with our PGN, we also integrate PGN into SGM to evaluate its generality to other attacks. As shown in the following table, our PGN consistently exhibits better attack performance than SGM, which further shows its superiority in boosting adversarial transferability. Besides, PGN+SGM outperforms PGN as well as SGM, showing its remarkable compatibility with various attacks. We will add it in the revision. > > |Source:Res-18|Res-101|Res-152|Inc-v3| > |:-:|:-:|:-:|:-:| > |MI|82.8|73.3|54.5| > |SGM|89.1|82.5|66.0| > |PGN|95.9|92.9|77.9| > |PGN+SGM|96.7|93.8|80.1| > >[R1] Wu et al. "Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets." ICLR 2020. **Q.3.** Could the authors discuss ... with/without the finite difference method? >**A.3.** As you suggested, we first theoretically analyze the acceleration effect of the finite difference (FD) method. For the baseline attack method I-FGSM, the gradient is computed only once per iteration. 
Thus, its computational complexity is $O(n)$, where $n$ represents the image size. However, upon introducing the penalty gradient term, the need arises to compute the second-order Hessian matrix, leading to a theoretical computational complexity of $O(n^2)$. To address this, we use the finite difference method as an approximation to the Hessian matrix, which requires the computation of the gradient twice in each iteration, effectively yielding a computational complexity of $O(2n)$. This theoretically promises significant improvements in computational efficiency. > > Additionally, we substantiate our theoretical analysis with comparative experiments. These experiments were conducted on an RTX 2080 Ti with a CUDA environment. We employed I-FGSM and evaluated the total running time on 1000 images (excluding data loading time) and the attack success rate on black-box models. The results are presented in the following table. Directly optimizing Eq. 4 results in better attack performance but requires high computational resources. With the finite difference method, we can closely approximate the performance of directly optimizing the second-order Hessian matrices, while significantly reducing the running time and the computational memory. Furthermore, owing to the relatively modest image size ($299\times 299\times 3$), which involves far fewer parameters than the model itself, the accelerated computing capabilities of CUDA enable the actual running time to beat the theoretical estimates. We will add this discussion in the revision. > |w/o or w/ FD|Inc-v4|IncRes-v2|Res-101| Times (Total)| Memory Size| > |:-:|:-:|:-:|:-:|:-:|:-:| > |I-FGSM (Backbone)|27.8|19.1|38.1|52 s|1631 MiB| > |Hessian matrix (w/o FD)|39.2|30.2|47.0|469 s|7887 MiB| > |Hessian matrix (w/ FD)|37.9|28.6|45.7|96 s|1631 MiB| **Q.4.** What is the performance ... the transformer-based models? >**A.4.** Please refer to the response to Q.1 of Reviewer PZXp. **Q.5.** The proposed method is not ...
transferability seems unclear. >**A.5.** In [R1], Lin et al. analogize the adversarial example generation process to the standard neural model training process, where the input $x$ can be viewed as parameters to be trained and the target model can be treated as the training set. From this perspective, the transferability of adversarial examples is equivalent to the generalization of normally trained models. Intuitively, model training and adversarial perturbation generation are both optimization problems, which motivates this analogy. Besides, this analogy has been widely adopted by numerous works, such as those exploring better optimization methods (NI [R1], VMI [R2]) or data augmentation methods (ODI [R3], DITL [R4]), which are effective in improving the transferability of adversarial examples. > > In fact, it is difficult to provide a theory to support the analogy between adversarial transferability and model generalization in the field of adversarial attacks. In this work, we are inspired by this analogy and attempted to enhance the transferability of adversarial examples from a new perspective. Hence, we try to explore flat local minima to enhance the transferability of adversarial examples. We assumed and experimentally verified that flat local optima are related to the transferability of the adversarial examples, and the loss surface maps of the generated adversarial examples in Fig. 2 also validate our motivation. Also, we will keep studying the theoretical connection between transferability and flat local minima in our future work. > > [R1] Lin et al. "Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks." ICLR. 2020. > > [R2] Wang et al. "Enhancing the Transferability of Adversarial Attacks through Variance Tuning." CVPR. 2021. > > [R3] Byun et al. "Improving the Transferability of Targeted Adversarial Examples through Object-Based Diverse Input." CVPR. 2022. > > [R4] Yuan et al.
"Adaptive Image Transformations for Transfer-based Adversarial Attack." ECCV. 2022. --- Rebuttal Comment 1.1: Title: Understand Comment: Thanks for your response. The empirical results seem to sufficiently support the effectiveness of the proposed method. However, from the theoretical perspective, this paper does not provide a rigorous theoretical guarantee of its effectiveness. Therefore, I still would like to lean toward Borderline Accept. I will not argue for its acceptance.
Summary: The paper proposes a method called Penalizing Gradient Norm (PGN) to improve the transferability of adversarial perturbations. The method is motivated by the observation that encouraging the flatness of the local landscape for adversarial examples can lead to better transferability, and thus PGN regulates the process of gradient-based adversarial attack algorithms by penalizing the magnitude of the loss gradient with respect to the input. Since such regularization requires the input Hessian, which is computationally expensive, PGN utilizes the finite difference method to approximate the Hessian matrix. Experiments on the ImageNet-compatible dataset demonstrate that the proposed method can improve the transferability of untargeted attacks in comparison to other baseline methods. Strengths: Originality: The proposed method is original and intuitive. Clarity: The general structure of the paper is very clear: moving from validating an assumption to proposing an algorithm, and finally evaluating the proposed method with empirical results. Significance: The proposed method addresses a practical security concern of deep learning models. The proposed method improves the transferability of the adversarial perturbations compared to existing gradient-based methods. Extensive empirical evaluations were performed to demonstrate the efficacy of the proposed method. Weaknesses: Reverse adversarial perturbation (RAP) is a closely related work that encourages adversarial examples to be located at a region with low loss values. To demonstrate the novelty and significance of the proposed method, the paper needs a detailed discussion of the differences and similarities compared to RAP. One of the major contributions claimed by the paper is the empirical validation that "adversarial examples located in flat regions have good transferability", and it is mainly covered in Sec. 3.2.
Putting aside the significance of the contribution, the authors should be very careful about the claims and statements in Sec 3.2. The assumption and the followed empirical validation both suffer from the lack of rigor and thus weaken the significance of the contributions. Please see the Questions section for additional discussions. Some technical details in Sec 3.3 require clarification. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The issue around Sec. 3.2 stems from the lack of rigor in Assumption 1. How is a local region defined? What is the definition of flatness, and how is it measured? Following the assumption, why is the maximum l2 norm used in (3), rather than a value averaged over the l2 norms of data points sampled around x? Consider inputs that fail to transfer, but are now transferable because of the modified objective. Are they indeed situated in a flat region? More importantly, why is (3) necessary to validate the assumption? Since the assumption is agnostic to the attack algorithm, it should hold for any adversarial attack method: I-FGSM, MI-FGSM, VMI-FGSM, etc. As such, given any attack algorithm, we should compare the flatness between adversarial examples that are transferable and those which fail to transfer. Ln 163 requires clarification: why do we have such expectations? The FD method circumvents the expensive computation of the input Hessian, and the approximation becomes accurate with decreasing values of \alpha. Why is the \alpha used in FD the same as the \alpha used in the iterative process of generating adversarial examples? To achieve an accurate approximation of the Hessian, shouldn't the stepsize used in FD be very small? \alpha = \epsilon/T seems to be a large value to me. With the current choice of \alpha, how close is the approximation (5) to the actual Hessian? Evaluation: Is PGN based on standard ifgsm, or one of the momentum variants?
Since RAP is the closest method to PGN, is RAP used in the evaluation based on the standard ifgsm as well? Ln303: The statement of "flat local minima result in better generalization" being a fact is quite strong. Also, I would suggest the authors proofread the paper. There are several very noticeable typos. For instance, even the name of the method is spelled incorrectly as "Penelizing" (Ln48, Ln55) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I suggest the authors include a brief discussion of the limitations of the proposed work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 8Bpx, Thanks for your valuable and insightful comments. We address your concerns as follows: **Q.1.** PGN v.s. RAP >**A.1.** As you suggested, we discuss the similarities and differences between the PGN and RAP methods as follows. >1) **Empirical verification**. Both RAP and PGN aim to achieve flat local optima for more transferable adversarial examples. Besides, we are the first to provide empirical verification that adversarial examples in flat local optima are more transferable. >2) **Methodology**. RAP injects worst-case perturbations into the optimization process to maximize the loss and achieve flat local optima. In contrast, PGN introduces a gradient norm penalty into the loss to achieve a flat local optimum. Hence, the method we employed differs significantly from RAP. >3) **Efficiency**. Our method is more computationally efficient. In RAP, the outer loop is 400 and the inner loop is 8. In contrast, our PGN method employs finite differences to approximate the Hessian matrix and samples multiple points to obtain a more stable gradient, which requires only 10 outer loops and 20 inner loops for the computation. Thus, our method not only improves computational efficiency but also achieves a higher attack success rate. **Q.2.** The issue around Sec. 3.2...which fail to transfer. >**A.2.** >1) Mathematically, we define the set of examples $x'$ in the $\epsilon$-neighborhood of the input image $x$ as the local region. Flatness refers to the maximum gradient value in the neighborhood of the sample. The smaller the gradient value is, the flatter the region is. To characterize the flatness of the loss surface, we define the slope $k=\frac{J(x_0)-J(x_i)}{\|x_0 - x_i\|_2}$, where $x_0$ represents the center point of the 2D map in Fig. 2, and $x_i$ represents the points sampled near the center point $x_0$. >2) During the optimization process, there might be saddle points.
If we use the averaged gradient, saddle points will reduce the averaged gradient value, making us unable to perceive the sharper regions. Hence we use the maximum gradient, which can better capture the worst-case gradient. >3) To address your concerns, we selected images (about 286 images) that can be transferred using our PGN method but cannot be transferred using MI-FGSM. We calculate the average slope $k$ of adversarial examples on the Inc-v3 model for these 286 images. **See the first table in PDF**. The adversarial examples generated by our PGN have smaller slopes, confirming that our method does craft adversarial examples in flatter local regions. >4) The goal of adding the penalized gradient norm is to make the gradient value smaller, and a smaller gradient value also means that the local region is flatter. Thus, if we verify that Eq.3 can improve the adversarial transferability, it naturally verifies that our assumption is valid, i.e., flat local optima have better transferability. >5) Our assumption is agnostic to the attack algorithm. Hence, PGN is suitable for any gradient-based attack method. To address your concerns, we combine PGN with these gradient-based attack methods to generate adversarial examples on Inc-v3 and compute their flatness (averaged over 1000 images). **See the second table in PDF**. Our PGN method can effectively improve the adversarial transferability of existing attacks and places the adversarial examples in flatter regions. **Q.3.** Ln 163 requires...expectations? >**A.3.** Intuitively, it is expected that the losses of these data points in a small neighborhood are similar. To address your concerns, we evaluated the values of $J(x_t^{adv})$ and $J(x')$ during the iteration process. In each iteration, we first calculated the value of the loss function for the adversarial example $x_t^{adv}$, then we randomly sampled a point $x'$ in the neighborhood of $x_t^{adv}$ and calculated the loss function $J(x')$.
**See the third table in PDF**. It can be observed that both loss function values are close. We can also observe that the expectation of $J(x')$ is smaller than that of $J(x^{adv})$, and maximizing a smaller value during the optimization process is more beneficial for our optimization. **Q.4.** $\alpha=\epsilon/T$ seems to be a large value >**A.4.** We conduct PGN using smaller step sizes (**see the fourth table in PDF**). We can see that smaller step sizes cannot improve adversarial transferability. This is because the smaller the step size, the closer two neighboring gradients are, causing our method to degrade to MI-FGSM. To avoid redundant hyper-parameters, we simply adopt the step size of $\alpha$ in this paper. **Q.5.** With the current...the actual Hessian? >**A.5.** To address your concerns, we compared the cosine similarity of the perturbations generated by the Hessian matrix and the finite difference method. The average cosine similarity over 1000 images is about 0.9035. It can be observed that the current finite difference can effectively approximate the adversarial perturbation generated by the Hessian matrix. **Q.6.** Evaluation: Is PGN...well? >**A.6.** For a fair comparison, our PGN and RAP both adopt MI-FGSM for evaluation. **Q.7.** Ln303: The...being a fact is quite strong. >**A.7.** Existing works [R1, R2, R3] have shown that flatter local minima often result in better model generalization from empirical and theoretical perspectives. We will rephrase that sentence as "Inspired by the observation that flat local minima often result in better generalization". > >[R1] Keskar et al. On large-batch training for deep learning: Generalization gap and sharp minima. ICLR. 2017. > >[R2] Neyshabur et al. Exploring generalization in deep learning. NeurIPS. 2017. > >[R3] Foret et al. Sharpness-aware minimization for efficiently improving generalization. ICLR. 2021.
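As an illustrative aside to A.5 (our own toy sketch, not code from the paper): on a quadratic loss the Hessian is known in closed form, so the quality of a finite-difference Hessian-vector product can be checked directly. Here `grad_fn` stands in for the loss gradient $\nabla_x J$:

```python
import numpy as np

def hvp_fd(grad_fn, x, v, alpha=1e-4):
    # Finite-difference Hessian-vector product:
    #   H(x) @ v ~ (grad(x + alpha * v) - grad(x)) / alpha
    # Two gradient evaluations instead of forming the n x n Hessian.
    return (grad_fn(x + alpha * v) - grad_fn(x)) / alpha

# Toy quadratic J(x) = 0.5 * x^T A x, so grad J(x) = A x and H = A exactly.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
grad_fn = lambda x: A @ x
x = np.array([1.0, -1.0])
v = np.array([0.3, 0.7])
approx = hvp_fd(grad_fn, x, v)
exact = A @ v  # for a quadratic, the approximation is exact up to float error
```

For non-quadratic losses the approximation error grows with the step size, which is why the choice of $\alpha$ discussed in A.4 matters.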
**Q8-Q10** > We have corrected the typos, and a discussion of the limitations will be added in the revised paper. Technical details in Sec 3.3 have been provided in the Appendix. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response. One of my major concerns with the paper is the lack of rigour, particularly in Sec 3. As such, I will maintain the initial score. --- Reply to Comment 1.1.1: Comment: Thank you for responding to our comments. In terms of your remaining concern, "The issue around Sec. 3.2 stems from the lack of rigor in Assumption 1", we will address it as follows: **1. We provide a more rigorous description of Assumption 1 here.** > **Assumption 1**: Given the maximum radius $\zeta$ for the local region and two adversarial examples $x_1^{adv}$ and $x_2^{adv}$ for the same input image $x$, if $\max _{x' \in \mathcal{B} _{\zeta}(x _1^{adv})} \| \nabla _{x'}J(x', y;\theta) \| _2 < \max _{x' \in \mathcal{B} _{\zeta}(x _2^{adv})} \| \nabla _{x'}J(x', y;\theta) \| _2$, $x _1^{adv}$ tends to be more transferable than $x _2^{adv}$ across various models. > > Here we adopt the maximum gradient in the neighborhood to evaluate the flatness of the local region, which is more rigorous. **2. Regarding "The assumption and the followed empirical validation both suffer from the lack of rigor and thus weaken the significance of the contributions", we need to clarify again as follows.** >In this work, inspired by the observation that flat local minima can bring better generalization during the model training process, we try to explore whether a flat local optimum can improve adversarial transferability from a new perspective. Hence, we first propose the assumption that an adversarial example at a flat local region tends to have better transferability. To validate Assumption 1, we introduce a regularizer to minimize the maximum gradient in the $\epsilon$-neighborhood of the original objective loss function.
Intuitively, a smaller gradient also indicates a flatter location. By optimizing this new objective loss function (Eq.3), we find that adversarial examples have better transferability. Thus, when we verify that Eq.3 can improve adversarial transferability, it naturally verifies that our assumption is valid, i.e., flat local optima have better transferability. Based on this assumption and verification, we propose a novel attack to boost adversarial transferability. **3. More clarifications on using the finite difference method for approximation** > In line 166, we have noted that existing adversarial attacks typically rely on the sign of the gradient, rather than requiring an exact gradient value. Thus, we approximate the second-order Hessian matrix using the finite difference method to accelerate the attack process. We have also compared the cosine similarity of perturbations generated by the Hessian matrix and the finite difference method and measured an average similarity of 0.9035 over 1000 images. It can be observed that the current finite difference can effectively approximate the adversarial perturbation generated by the Hessian matrix. Thank you for your effort and reviews. We are looking forward to your further reply and happy to address your concerns if any.
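As a toy illustration of the finite-difference idea discussed in point 3 above (this is not the paper's code; the quadratic loss, dimensions, and step size are illustrative assumptions): on a loss whose Hessian is known, two gradient evaluations recover the Hessian-vector product, and a cosine-similarity check mirrors the comparison reported over the 1000 images.

```python
import numpy as np

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy loss J(x) = 0.5 * x^T H x with a known symmetric Hessian H,
# so grad J(x) = H x and the exact Hessian-vector product is H v.
rng = np.random.default_rng(0)
A = rng.normal(size=(8, 8))
H = A @ A.T                       # symmetric positive semi-definite Hessian
grad_J = lambda x: H @ x

x = rng.normal(size=8)
v = rng.normal(size=8)
alpha = 1e-4                      # finite-difference step size (illustrative)

# Two gradient evaluations approximate the Hessian-vector product:
# H v ~= (grad J(x + alpha * v) - grad J(x)) / alpha,
# avoiding the O(n^2) cost of forming the full Hessian.
hv_fd = (grad_J(x + alpha * v) - grad_J(x)) / alpha
hv_exact = H @ v

sim = cosine_similarity(hv_fd, hv_exact)  # close to 1.0 on this quadratic toy
```

On a quadratic loss the finite difference is exact up to floating point; on a real network loss it is only an approximation, which is what the cosine-similarity experiment above quantifies.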
Summary: This paper aims to boost adversarial transferability by using the Penalizing Gradient Norm, which can restrict adversarial examples to flat regions. The writing is good, and it is easy to read. The experiments demonstrate that the proposed method achieves good results.  Strengths: - The motivation is clear, and the writing is good. - The analysis in Sec.3.2 is interesting, which can verify the assumption. - The proposed method is simple but effective. Weaknesses: 1. In Theorem 1, the authors briefly introduce the finite difference method, which is fundamental for efficiently approximating a second-order Hessian matrix. Although this is an interesting solution, it is better to verify in experiments whether Eq. 6 is a good approximate solution to the objective function in Eq. 4. On one hand, I think the results of solving Eq. 4 directly should be reported, which can show that the approximated solution will not affect the performance. On the other hand, the running time and complexity analysis should also be considered, which can show that this key design of PGN actually works well.  2. In Table 1, we first observe that PGN is a good solution for boosting transferability. But, we can also observe that the source model IncRes-v2 can achieve the best average score. Does this mean that the loss surface of this model is in a more smooth region? 3. I understand this paper focuses on boosting adversarial transferability. However, for black-box attacks, query-based adversarial attacks are also widely studied. Therefore, I am interested in whether the Penalizing Gradient Norm is a general method for black-box attacks, not limited to transfer attacks. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The proposed method is similar to the previous work [a]. [a] Penalizing gradient norm for efficiently improving generalization in deep learning. ICML 2022. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer xHRb, Thanks for your valuable and insightful comments. We address your concerns as follows: **Q.1.** In Theorem 1, the authors briefly ... key design of PGN actually works well. >**A.1.** As you suggested, we first theoretically analyze the acceleration effect of the finite difference (FD) method. For the baseline attack method I-FGSM, the gradient is computed only once per iteration. Thus, its computational complexity is $O(n)$, where $n$ represents the image size. However, upon introducing the penalty gradient term, the need arises to compute the second-order Hessian matrix, leading to a theoretical computational complexity of $O(n^2)$. To address this, we use the finite difference method as an approximation to the Hessian matrix, which requires the computation of the gradient twice in each iteration, effectively yielding a computational complexity of $O(2n)$. This theoretically promises significant improvements in computational efficiency. > > Additionally, we substantiate our theoretical analysis with comparative experiments. These experiments were conducted on an RTX 2080 Ti with a CUDA environment. We employed I-FGSM and evaluated the total running time on 1000 images (excluding data loading time) and the attack success rate on black-box models. (**For the results, please refer to the response to Q.3 of Reviewer 2kUx**). Directly optimizing Eq. 4 results in better attack performance but requires high computational resources. With the finite difference method, we can closely approximate the performance of directly optimizing the second-order Hessian matrices, which significantly reduces the running time and the computational memory. Furthermore, owing to the relatively modest image size ($299\times 299\times 3$) and the comparatively small number of parameters compared to the model, the accelerated computing capabilities of CUDA enable the actual speedup to surpass the theoretical estimate.
We will add this discussion in the revision. **Q.2.** In Table 1, we first ... is in a more smooth region? > **A.2.** We tested the smoothness of the loss function of the adversarial examples generated by the Inc-v3 and IncRes-v2 models, respectively. To characterize the flatness of the loss surface, we define the slope $k=\frac{J(x_0)-J(x_i)}{\|x_0 - x_i\|_2}$, where $x_0$ represents the center point of the 2D map in Fig. 2, and $x_i$ represents the points sampled near the center point $x_0$. Smaller values of $k$ indicate a flatter region. To derive comprehensive insights, we randomly selected a subset of images ($S_i$, which denotes the $i$-th image) and uniformly sampled 10 points near the center to calculate their average slope $k$. > > |Averaged slope $k$|$S_0$|$S_1$|$S_2$|$S_3$|$S_4$| > |:-:|:-:|:-:|:-:|:-:|:-:| > |Inc-v3|0.013|0.254|0.016|0.024|0.009| > |IncRes-v2|0.004|0.122|0.018|0.020|0.007| > > From the table, it can be concluded that most of the adversarial examples generated by the IncRes-v2 model have smaller slopes. We also counted the slopes of 1000 images on the Inc-v3 and IncRes-v2 models, respectively. The results show that 76.4% of the images exhibit smaller slopes on the IncRes-v2 model. The observations from these experiments confirm that the IncRes-v2 model facilitates the generation of adversarial examples that lie in smoother regions of the loss landscape. In our future work, we will investigate the possible reasons and design an effective method to train models with smoother loss surfaces as surrogate models for better transferability. Thanks for your insightful comments. **Q.3.** I understand this paper ... not limited to transfer attacks. > **A.3.** Thanks for this valuable comment. Unfortunately, our PGN might not generalize to query-based attack methods. Here, we provide the following analysis. > 1) The goal of query-based attack methods is different from ours.
The goal of our method is more concerned with improving the transferability of adversarial examples across different black-box models, while query-based methods mainly focus on the attack success rate on the source model. > 2) Query-based attack methods mainly exploit limited output information such as labels and logits, while our PGN needs to utilize the gradient of the objective loss function. Hence, our method is not inherently applicable to query-based attack methods. **Q.4.** The proposed method is similar to the previous work [R1]. > **A.4.** Indeed, both our PGN and [R1] penalize the gradient norm. However, there are significant differences between these two works, which are summarized as follows: > 1) **Different goals**. While [R1] primarily focuses on enhancing flat local minima to improve generalization during model training, our main objective is to investigate the potential impact of flat local minima on the transferability of adversarial attacks. Previous research in the field of adversarial attacks has paid limited attention to the relationship between flat local optima and adversarial transferability. Consequently, our work contributes significant new insights to the domain of adversarial attacks. > 2) **Objective function**. A distinguishing feature of our proposed method lies in its emphasis on the gradient information surrounding the adversarial example. Specifically, we approximate the second-order Hessian matrix using the finite difference method, which is crucial for our approach. The work presented in [R1] also supports and provides theoretical foundations for the feasibility of our method within the realm of adversarial attacks. > > In summary, our main motivation in this work is to explore whether a flat local optimum can improve adversarial transferability.
To the best of our knowledge, this is also the first work to use the penalized gradient norm and the finite difference method to support our motivation in the field of adversarial attacks. > > [R1] Zhao et al. "Penalizing gradient norm for efficiently improving generalization in deep learning." ICML 2022. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: I appreciate the authors' detailed response to the initial review. Having carefully considered their feedback in conjunction with the comments from other reviewers, I decided to maintain my initial rating.
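The slope statistic $k$ used in A.2 above can be sketched in a few lines (a hypothetical stand-in for the measurement code; the toy quadratic loss, sampling radius, and dimensions are illustrative assumptions):

```python
import numpy as np

def average_slope(J, x0, neighbors):
    """Average slope k = (J(x0) - J(x_i)) / ||x0 - x_i||_2 over points x_i
    sampled near the center x0; a smaller |k| indicates a flatter region."""
    slopes = [(J(x0) - J(xi)) / np.linalg.norm(x0 - xi) for xi in neighbors]
    return float(np.mean(slopes))

# Toy example: quadratic bowls J(x) = c * x @ x. Scaling c down by 10x
# flattens the surface, so the average slope shrinks by the same factor.
rng = np.random.default_rng(0)
x0 = np.ones(4)
neighbors = [x0 + 0.1 * rng.normal(size=4) for _ in range(10)]

k_sharp = average_slope(lambda x: 1.0 * x @ x, x0, neighbors)
k_flat = average_slope(lambda x: 0.1 * x @ x, x0, neighbors)
```

In the rebuttal's experiment, `J` would be the model's loss on the image and `neighbors` the 10 uniformly sampled points around the center of the 2D loss map.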
Summary: In this work, the authors first assume and empirically validate that adversarial examples at flat local minima tend to have better adversarial transferability. Based on this finding, they introduce a regularizer on the gradients in the neighborhood of the input sample to achieve flat local minima. To make the attack more computationally efficient, they propose the Penalizing Gradient Norm (PGN) attack, which approximates the second-order Hessian matrix by interpolating two Jacobian matrices. Strengths: It is the first work that empirically validates that adversarial examples at flat local minima have better adversarial transferability. The approximation of the second-order Hessian matrix is reasonable with theoretical support. The proposed method is simple yet effective. Extensive experiments have shown that PGN can significantly boost adversarial transferability compared with existing methods. The visualization in Figure 2 validates that PGN can achieve flatter local minima than existing attacks, which further supports their motivation. Weaknesses: Results on vision transformers, such as ViT, Swin, etc. would better be included. Evaluations on more defense methods, such as randomized smoothing and denoising, should be conducted. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In Line 9 of Algorithm 1, g’ and g* are only related to the i-th sampled example x’. Is it a typo? I think it should accumulate all the gradients of N sampled examples. What is the difference between the two symbols L and J? I think they are both loss functions. Definitions of these two loss functions should be clarified. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer PZXp, Thanks for your valuable and insightful comments. We address your concerns as follows: **Q.1.** Results on vision transformers, such as ViT, Swin, etc. would better be included. > **A.1.** As you suggested, we further adopt four mainstream vision transformers to evaluate the effectiveness of our method, i.e., ViT [R1], PiT [R2], Visformer [R3], and Swin [R4]. The adversarial examples are generated on Inc-v3 and the attack success rates are summarized in the following table. It can be seen that our PGN can consistently outperform the baselines on these transformers, showing its high effectiveness and generality to various architectures. We will report the complete results in our final paper. > > |Method |ViT |PiT |Visformer |Swin| > | :----: | :----: | :----: | :----: | :----: | > |MI |18.9 |18.1 |23.7 |22.2 | > |NI |20.0 |19.9 |25.5 |25.5 | > |VMI |28.4 |34.0 |41.1 |40.8 | > |EMI |26.0 |27.3 |36.9 |36.7 | > |RAP |35.2 |41.8 |50.6 |50.2 | > |PGN (ours) |**44.7** |**53.2** |**65.5** |**64.5** | > > [R1] Dosovitskiy et al. "An image is worth 16x16 words: Transformers for image recognition at scale." ICLR. 2021. > > [R2] Heo et al. "Rethinking spatial dimensions of vision transformers." ICCV. 2021. > > [R3] Chen et al. "Visformer: The vision-friendly transformer." ICCV. 2021. > > [R4] Liu et al. "Swin transformer: Hierarchical vision transformer using shifted windows." ICCV. 2021. **Q.2.** Evaluations on more defense methods, such as randomized smoothing and denoising should be conducted. > **A.2.** Following your suggestion, we further evaluate our PGN and other gradient-based attacks on three advanced defense models, i.e., Feature Distillation (FD) [R1], Randomized Smoothing (RS) [R2], and Neural Representation Purifier (NRP) [R3]. The adversarial examples are generated on the ensemble models, i.e., Inc-v3, Inc-v4 and IncRes-v2, and the attack success rates are summarized in the following table.
As we can see, our PGN consistently exhibits better attack performance than the baselines on these advanced defenses. These results further validate the superiority of our proposed PGN. We will add the complete results in our final paper. > > | Method | FD | RS | NRP| > | :----: | :----: | :----: | :----: | > | MI | 51.7 | 30.3 | 36.5 | > | NI | 53.6 | 30.7 | 38.1 | > | VMI| 67.8 | 35.6 | 44.9 | > | EMI| 74.8 | 38.8 | 46.8 | > | RAP| 79.6 | 41.7 | 49.6 | > | PGN (ours)| **85.7** | **45.2** | **51.3** | > > [R1] Liu et al. "Feature Distillation: DNN-Oriented JPEG Compression Against Adversarial Examples." CVPR. 2019. > > [R2] Cohen et al. "Certified adversarial robustness via randomized smoothing." ICML. 2019. > > [R3] Naseer et al. "A self-supervised approach for adversarial robustness." CVPR. 2020. **Q.3.** In Line 9 of Algorithm 1, g’ and g* are only related to the i-th sampled example x’. Is it a typo? I think it should accumulate all the gradients of N sampled examples. What is the difference between the two symbols L and J? I think they are both loss functions. Definitions of these two loss functions should be clarified. > **A.3.** Thanks for pointing out this typo. > 1) In fact, the symbol $\bar{g}$ is intended to accumulate the gradients of multiple sampled examples, and the correct representation should be $\bar{g} = \bar{g}+(1-\delta)\cdot g'+ \delta \cdot g^{\ast}$. Specifically, in each iteration, $\bar{g}$ is initialized to $0$. Then $N$ different examples $x'$ will be randomly sampled in the neighborhood of $x_t^{adv}$. Subsequently, the gradients $g'$ and $g^{\ast}$ related to example $x'$ will be computed. Finally, these gradients will be accumulated into $\bar{g}$ as the final gradient. We will correct this typo in Algorithm 1 in the revised version. > > 2) The symbols $L$ and $J$ represent two different loss functions.
Here, $J$ is the original loss function of the classifier $f$ (e.g., the cross-entropy loss), and $L$ is our proposed loss function that introduces a penalized gradient norm into the original loss function to achieve flat local maxima. To avoid confusion, we will add the above clarifications to the final version of the paper. We appreciate your efforts in improving the clarity and accuracy of our paper, and we will make the necessary revisions to address these issues. --- Rebuttal Comment 1.1: Comment: I have carefully read the responses and the other reviews. I think the authors have addressed my concerns. Also, the assumption is reasonable with empirical verification, which makes the proposed method novel and solid. Thus, I raise my score to 7.
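The corrected accumulation from A.3 can be sketched as follows (a hypothetical illustration, not the released implementation; the lookahead point used for $g^{\ast}$ and its step size `alpha` are placeholder assumptions):

```python
import numpy as np

def accumulated_gradient(grad_fn, x_adv, zeta, delta, N, rng, alpha=1e-2):
    """Corrected line 9 of Algorithm 1: accumulate, over N sampled neighbors,
    a convex combination of g' and g* into g_bar, then average."""
    g_bar = np.zeros_like(x_adv)
    for _ in range(N):
        # Sample x' uniformly in the zeta-neighborhood of x_adv.
        x_prime = x_adv + rng.uniform(-zeta, zeta, size=x_adv.shape)
        g_prime = grad_fn(x_prime)  # g': gradient at the sampled point
        # g*: gradient at a lookahead point along the normalized g' direction
        # (placeholder for the finite-difference step described above).
        g_star = grad_fn(x_prime + alpha * g_prime / (np.linalg.norm(g_prime) + 1e-12))
        g_bar = g_bar + (1 - delta) * g_prime + delta * g_star
    return g_bar / N  # average over the N sampled examples

# Toy check with grad J(x) = x, i.e. J(x) = 0.5 * ||x||^2.
x_adv = np.ones(4)
g_bar = accumulated_gradient(lambda x: x, x_adv, zeta=0.05, delta=0.5, N=20,
                             rng=np.random.default_rng(0))
```

With the identity gradient, the averaged `g_bar` stays close to `x_adv`, which is the expected behavior of accumulating over the neighborhood rather than using a single sample.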
Rebuttal 1: Rebuttal: Dear Reviewers and Area Chairs: We sincerely appreciate all of your precious time and constructive comments. The tables referenced in our responses are provided in the PDF file. Pdf: /pdf/c43b63a8954f8e1968febe3320dc44341fd31911.pdf
NeurIPS_2023_submissions_huggingface
2,023
Fantastic Robustness Measures: The Secrets of Robust Generalization
Accept (poster)
Summary: This paper assesses the correlation between existing measures and the robust generalization gap in various experimental settings. Specifically, the author evaluates the relationship between measures based on weight-norm, margin, smoothness, flatness, gradient-norm, and robust generalization under different conditions such as model architecture, training methods, inner maximization steps, optimizer, batch size, data augmentation, and early stopping. Through extensive experimentation, the author finds that these measures are highly sensitive to training setups and, therefore, not very effective. Strengths: 1. Conduct extensive experiments to explore the correlation between different measures and the robust generalization gap. 2. Present some new findings regarding robust overfitting. Weaknesses: 1. Lack of innovation: The author mainly focuses on existing measures and does not propose new measures. 2. Limited contribution: The author's finding that existing measures are not effective does not provide substantial contributions since the underlying mechanism of robust overfitting remains unclear. If there were a measure capable of accurately evaluating the robust generalization gap, it would also offer a clear direction to the underlying mechanism of robust overfitting. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please refer to the comments in Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful assessment of the strengths and weaknesses of our paper. Your feedback provides valuable insights that help us improve the quality and clarity of our work. Below, we provide a detailed response addressing each of the weaknesses [W#] raised with all references aligned to the main paper: **** **[W1]** We are truly sorry to hear that the innovation of the paper does not match your expectation. Here is a short concrete motivation for studying existing measures. We discover that limitations exist when extending the findings of prior works to practical scenarios due to a restricted set of models [51, 49] and different evaluations [50, 44]. To this end, we **newly propose a metric** $\pi_k$ in Equation (6) to uncover the effectiveness of measures in the adversarial training framework by training **over 1300 models** and further **verify how and when measures are correlated with the robust generalization gap.** We would very much appreciate it if the reviewer could further comment on what aspects they would judge as the most important to update in the paper. **** **[W2]** The robust generalization gap is *complex* to study directly, particularly within the complexities of real-world datasets [43, 52, 55]. In this paper, we take a significant step towards understanding robust overfitting by revisiting the measures and uncovering their connections with the robust generalization gap, which was not clearly verified due to the lack of experiments in prior works. We want to emphasize that **other reviewers**, Rev. 2BXc **"This kind of study help evaluate on a more equal footing several of the proposed correlated factors of robust generalisation. I believe this is always helpful for the research community as it helps develop intuition that might lead to novel methods"** and Rev.
vGwc **"The paper's findings can inform future experimental and theoretical work on the understanding of the robust overfitting problem,"** agreed with the contribution of our paper. Furthermore, our study **provides an original contribution** **by uncovering the high correlation between the 'x_grad_norm' measure and the robust generalization gap**—a novel observation within this research domain. We have also conducted additional experiments in Appendix "A.3 Robust Measures with Regression Analysis", which shows that the combination of 'x_grad_norm' and 'average_ce(PGD)' can more effectively predict the robust generalization gap. **** Please refer to the general responses for the common questions of other reviewers. **If there are still any unclear points, please feel free to ask detailed questions; we would be happy to provide further explanations.**
Summary: This paper studies the correlation between previously proposed robustness measures and the robustness generalization gap. It studies a large number of models with different architectures and training parameters and shows that some prior beliefs about the usefulness of popular robust generalization measures are not well justified under such comprehensive study. Strengths: 1. The paper comprehensively studies the ability of different robustness measures to predict robust generalization gap and highlights cases where their findings are at odds with prior work and previously held opinions. 2. The methodology and the experiments are clearly and well described. 3. The paper's findings can inform future experimental and theoretical work on the understanding of the robust overfitting problem. Weaknesses: 1. The paper can benefit from further discussion on its findings. For example, the authors say that “margin maximization in adversarial training methods should be revisited” but it isn’t clear why it wouldn’t work. Or that “`boundary_thickness` is more applicable for models trained with AT than to other adversarial training methods” but no intuition as to why that’d be the case is offered. Similarly, they say that “smoothness does not guarantee low robust generalization gap”, which is at odds with the prior work they cite but there is no explanation as to why this discrepancy might occur. 2. Methodologically, the paper seems quite similar to (Jiang et al., 2020). However, (Jiang et al., 2020) seem to have a more comprehensive analysis which attempts at explaining the possible causal factors for the generalization gap. I think this paper can benefit from such a deeper analysis as well. 3. The paragraph starting on line 266 sends a very conflicting message about the usefulness of `x_grad_norm` for predicting the robust generalization gap. It seems to be saying that it is both highly positively and highly negatively correlated.
Perhaps a rewrite for clarity might help. 4. Overall, I found it difficult to understand what are the main takeaways from this work. The conclusion says “Our results suggest practical guidelines for robust generalization” but I am not sure what they are. On several occasions the paper criticizes or is at odds with prior work, but does not provide sufficient evidence as to what are the causes of the discrepancies. I think the paper might benefit from sending a clearer message of what these guidelines are and why some of its findings are at odds with prior work. References: Yiding Jiang, Behnam Neyshabur, Hossein Mobahi, Dilip Krishnan, and Samy Bengio. Fantastic generalization measures and where to find them. 2020 Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. On several occasions, it is mentioned that evaluating solely on the test set is undesirable. It’s also used as an explanation for obtaining opposite results to (Yang et al., 2020) but is not fully developed. Why is evaluating the metrics solely on the test set an issue? 2. I found it a bit difficult to understand the purpose of $\pi_k$, the new metric the paper proposes. It seems to be trying to handle cases similar to Simpson’s paradox, where $k$ fixes a group that has a positive trend when considered in itself, but such that when all groups corresponding to different values of $k$ are considered, the trend becomes reversed. Would this be a valid explanation? 3. (Jiang et al, 2020) show that sharpness-based measures are some of the most predictive ones for the standard generalization gap, while this paper shows that they are not very predictive for the robust generalization gap. How do you reconcile these two seemingly opposing results? 4. In App. A.1, it is argued that there are measures with high correlation with the test robust accuracy but not the robust generalization gap. This is counterintuitive, so what would be a possible explanation?
High rank correlation with low $R^2$ for linear analysis doesn’t necessarily mean that the measures are not predictive, but rather that the relationship may be highly nonlinear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I've identified no limitations or ethical concerns. However, here are some suggestions: 1. Maybe instead of writing the mean and confidence intervals in Tabs. 1,2 and 3, you could show horizontal plots of the confidence intervals? That could make it much easier to visually compare the different effects. 2. The measures are only defined in the appendix. I understand that this is for space purposes, but it is difficult to follow your Experiments section without knowing what all these names refer to. I’d recommend saying a couple of words each time you mention a new measure, keeping the formal definitions in the appendix. I think this would improve the readability. 3. Line 199, there’s an unnecessary “to”. 4. Line 699, “maximization” should be “maximizations” Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful assessment of the strengths and weaknesses of our paper. Your feedback provides valuable insights that help us improve the quality and clarity of our work. Below, we provide a detailed response addressing each of the weaknesses [W#], questions [Q#], and limitations [L#] raised with all references aligned to the main paper: **** **[W1]** We sincerely apologize for the lack of discussion on our findings. We certainly agree that more discussions and explanations could benefit the novelty of our paper. To address your concerns, **we have undertaken a comprehensive analysis of 15 additional papers** related to “margin maximization”, “boundary thickness”, and “local Lipschitzness”. **We kindly request the reviewer to refer to our second response to "Weaknesses" in the global response for a more comprehensive explanation.** The forthcoming revised version of our manuscript will include the discussions, which will not only address the concerns but also provide a more robust foundation for our findings. Thank you for your diligence in providing these insights. **** **[W2]** Thank you for your insightful comment regarding the suggestion of experiments on causal factors. In response to your suggestion, **we have performed an additional analysis in Table S1 available in the global response PDF,** focusing on the calculation of the conditional mutual information between robust measures and the robust gap. The cardinality of the set of hyper-parameters $S$ is less than or equal to two (i.e., $|S|\leq2$), in accordance with (Jiang et al., 2020). We here highlight new findings that align well with the results outlined in the main paper. Specifically, average_ce(PGD) consistently exhibits a significantly high conditional mutual information $\hat{I}(V_\mu, V_g|U_S)$ and shows the highest value of $\kappa(\mu)=0.409$. This indicates the concrete connection between average_ce(PGD) and the robust generalization gap. 
Similarly, prob_margin(PGD) shows a high value of $\kappa(\mu)=0.387$. local_lip, estimated_sharpness, estimated_inv_sharpness, and average_flat show a low $\hat{I}(V_\mu, V_g|U_S)$ and $\kappa(\mu)$, which is consistent with our observation. Notably, pacbayes_mag_flat(PGD) shows a decent $\kappa(\mu)$, yet it exhibits a relatively low $\hat{I}(V_\mu, V_g|U_S)$ for Steps, which is the important factor in discriminating robust and non-robust models, as discussed in Line 255. x_grad_norm yields a decent $\kappa(\mu)$ compared to other measures, as observed in Table 5 and Figure 5. We appreciate your feedback, and by incorporating these deeper analyses, we believe our paper now provides a more comprehensive understanding of the underlying causal factors driving the robust generalization gap. **** **[W3]** We apologize for any confusion caused by the phrasing in the paragraph starting on line 266. Allow us to clarify the relationship regarding the usefulness of x_grad_norm in predicting the robust generalization gap. Specifically, x_grad_norm exhibits a high negative correlation with the robust generalization gap compared to other measures. In practical terms, this indicates that models characterized by a lower robust generalization gap tend to have higher x_grad_norm. We will provide clearer paragraphs in the revised version, taking into careful account all the valuable feedback provided by the reviewers. **** **[W4]** In the statement "Our results suggest practical guidelines for robust generalization," the term 'practical guidelines' is intended to emphasize that our study verifies the viability of certain measures with a larger scale of experiments. **As Reviewer 2BXc said that “[…] Many papers claim that Z can be explained by X or Y in some particular setting, while others fail to observe that,”** we believe all our observations (whether certain measures are correlated or not correlated to robust generalization) can be helpful for the research community.
We have highlighted particularly noteworthy guidelines in blue, which we believe offer substantial insights for future investigations. As this paper primarily aims at an empirical evaluation of measures' effectiveness in predicting the robust generalization gap, our contribution lies more in the realm of empirical findings, similar to [17, 37], rather than being theoretical. **Furthermore, we would like to argue for the careful usage of statements such as "Model A is superior to Model B because Model A has a better measure value than Model B," which is frequently employed in recent literature.** However, we understand the concern about the lack of discussion on why some of our findings are at odds with prior work. We hope our response to the previous weakness (W1) can resolve this issue. Again, thank you for your valuable suggestion, which will undoubtedly contribute to the depth of our paper. **** **[Q1]** As demonstrated in [23], “the most direct and principled approach for studying generalization in deep learning is to prove a generalization bound which is typically an upper bound on the test error based on some quantity that can be calculated on the training set.” Moreover, it is noteworthy that some researches often use the statement that "superior measure value lead to better generalization" to support the claim that "minimizing (or maximizing) measure value during training results in improved generalization,” which is not always true as demonstrated in [4]. Most importantly, the theoretical analyses in prior work [23, 49, 50] have often used the framework of PAC-learning that basically uses the training set for establishing upper bounds for the generalization gap. In this regard, we believe a comprehensive assessment of measures on the training dataset is essential to avoid wrong conclusions or misuses, such as employing these measures as optimization targets or early-stopping criteria.
**** *The remaining questions and limitations will be addressed in the continued official comment due to character limitations.* --- Rebuttal Comment 1.1: Title: Rebuttal by Authors [Continue] Comment: **** **[Q2]** Thank you for your interesting interpretation of the new metric $\pi_k$ that we introduced in our paper. We would like to say that your observation is valid, and indeed, $\pi_k$ can be seen as a potential solution to Simpson’s paradox. As noted in Eq. (6), $\pi_k$ has been designed to address cases where specific groups exhibit meaningful trends when considered individually. For instance, in Table 1, the global trend for boundary thickness on Training Methods shows a $\pi$ value of -0.17. However, upon a closer look at its high standard deviation, the presence of a distinct trend within a particular group (AT) becomes apparent, as shown in Figure 3. We appreciate your insightful perspective, and it undoubtedly contributes to enhancing readers' comprehension of the underlying concept of $\pi_k$. **** **[Q3]** First of all, we would like to emphasize the difference between our work and [23] (Jiang & Neyshabur et al., "Fantastic Generalization Measures and Where to Find Them"). In [23], the authors employed customized parameter-efficient neural networks, which are not actively used these days. In contrast, our work employs more practical models such as ResNets, which have been widely used in recent research [11, 46, 44, 50]. We believe this difference in model choice holds significant implications, as recent research [S12, 50] has suggested that as neural networks scale up in size, certain phenomena can undergo a reversal. Additionally, recent empirical findings by [4] also demonstrated that sharpness-based measures may not serve as reliable indicators of generalization in modern settings, particularly within the context of the standard training framework. 
Our observations align with their conclusions that sharpness plays a role as an objective to obtain well-performing models rather than serving solely as a reliable estimator. **** **[Q4]** Indeed, upon delving into prior research [23], which solely focused on predicting the generalization gap, we were also curious about the relationship between measures and test accuracy itself, rather than the gap. Our motivation for including Appendix "A.1 Estimating Test Robust Accuracy" stemmed from this inquiry, aiming to provide a comprehensive exploration that addresses the interests of readers who share our curiosity. Notably, local Lipschitzness (local_lip) demonstrates a more meaningful total $\tau$ of -0.40 compared to -0.23 in Table 1. It is worth noting that recent research [S15] has highlighted the role of local_lip as an upper bound for the difference in cross-entropy loss between benign and adversarial examples in an adversarial training framework. Consequently, based on this theoretical proof, it would be more accurate to characterize local_lip as correlated with robust accuracy rather than with the gap. Most importantly, our focus on investigating the robust generalization gap stems from the consideration that prior research [23, 49, 50] has often emphasized theoretical analyses aimed at establishing upper bounds for the generalization gap, frequently within the framework of PAC learning. As the field continues to evolve, we anticipate that future work will progressively delve into theoretical analyses offering insights into test accuracies. While we acknowledge the limitations of linear regression in capturing nonlinear relationships between variables, we adopt linear regression since it is the simplest and most intuitive approach for explaining the relationships between measures and the gap. While more complex models may indeed offer greater flexibility in modeling nonlinear interactions, they often come at the cost of interpretability. 
As we will make our trained models and codes publicly accessible, we expect the research community to explore and implement advanced modeling techniques, offering a promising direction for future research. **** **[L1]** We have attempted to summarize all the experimental results in one table, choosing to report the mean and standard deviation. However, we assure you that we will make every effort to promptly provide any additional experimental results that the reviewers may require. Thus, we reported sample figures in the PDF file in the global response, and we will add all the figures in the Appendix, as they require a large amount of space. **** **[L2]** We greatly appreciate your feedback regarding the presentation of measures in our paper. Due to the page limitations, we faced the challenge of providing comprehensive information on each measure within the main paper. To address this concern, we will add short descriptions for each measure at the point of their initial mention in Section "4 Experiments." We note that all other typos have also been fixed. Again, thank you for your careful feedback. **** **[L3-L4]** Thank you for bringing the typos to our attention. We have conducted a thorough review of the manuscript and have made the necessary corrections. We again thank you for your detailed review.
Summary: The authors propose a large-scale empirical analysis of the impact of several potential indicators of robust generalisation. This paper can be seen as a meta-analysis of many of the indicators proposed in the literature. Strengths: 1. I appreciate a paper aiming to test and/or reproduce at a larger scale of experiments the viability of certain measures as indicators of the robust generalisation gap. Many papers claim that Z can be explained by X or Y in some particular setting, while others fail to observe that. This kind of study helps evaluate on a more equal footing several of the proposed correlated factors of robust generalisation. I believe this is always helpful for the research community as it helps develop intuition that might lead to novel methods. 2. The paper is generally well-written and is quite accessible (see in “Typos and suggestions” questions/recommendations for fixing a part that can be improved; I understood out of intuition what the metric would be but it’d always be better to clarify that part since it is central to your results). Weaknesses: 1. Intro is missing references (in fact, it only has 1 !). References are not optional there in my opinion because they contextualise your paper/motivations. I don’t think refs should come later; plus on your end it’s a matter of adding 10-15 references that are already in your bibliography in section 1, so it can be easily fixed. 2. Only evaluated against PGD on CIFAR10 (that is ok in itself ! It should just be clarified in the claims). Please specify the norm ($L_\infty$) and indicate that (attack choice and dataset) when you summarise your claims (abstract/intro/conclusion) since datasets and the choice of attack likely influence the results. 3. Clarity is generally good but can be improved in 3.3. See “Typos and suggestions”. 4. Correct me if I’m wrong, but it appears the measures are evaluated on the training set, and not a validation set. If so, could that be stated explicitly somewhere ? 
I believe that’s **very** important (and should also appear clearly in summaries of your results/methodology), especially since many notions/intuitions on flatness may apply better to a validation set, which could lead to different results. See my question, where this could be a possible explanation. **Typos and suggestions:** 1. L134-135: “each $\Theta_i$ corresponds to a tuple of training parameters” to clarify that they’re not individual parameters. If they are individual parameters, then the sentence “a search space $\Theta$ = …” needs fixing because the right hand side of the equality is not a set (in fact, the right hand side needs fixing anyway to indicate a set). 2. L146: isn’t $\Theta_k$ a tuple of hyperparameters, and $\theta_i$ a particular hyperparameter within such tuples ? I believe eq 5 and/or L146 could benefit from some clarifications. Namely, about what exactly $\theta_k$ vs $\Theta_k$ is. 3. Table 1: I would also indicate in the caption what (PGD) means. Also, indicate what columns correspond to (yes it’s in the text but it’s always better when figures are as self-contained as possible in my opinion). 4. Table 3: average_ce PGD x Model architecture should be bolded too. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would assume that some measures only make sense within a given model with everything fixed except the weights. For example, flatness vs generalisation. There’s quite some literature in the non-adversarial setting arguing that flatness is correlated with generalisation. However, even within a model, with just the weights varying, it seems not to be the case. Correct me if I’m wrong but this is what Table 4, column “steps” indicates for example (since in that case, two models compared have the same loss landscape but are at a different location in weight space due to not having had as many steps). Do you have intuition why in the robust case this intuition doesn’t hold ? 
Mine is that it could be tied to some papers arguing that flatness in weight space and input space can be inversely correlated (e.g., “Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression” by Yamada et al. 2021). It could also be related to using the training set instead of a validation set to evaluate flatness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: I would specify in limitations the choice of dataset, models and attack. And if relevant, the fact all measures were evaluated on the training set if that is correct. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful assessment of the strengths and weaknesses of our paper. Your feedback provides valuable insights that help us improve the quality and clarity of our work. Below, we provide a detailed response addressing each of the weaknesses [W#], questions [Q#], and limitations [L#] raised with all references aligned to the main paper: **** **[W1]** We appreciate the reviewer's valuable feedback regarding the references in the introduction section. During the refinement of the structure and its flow, most of the sentences containing references were relocated to the Related Work section. We will ensure the inclusion of references, such as the various phenomena [32, 43, 52, 41], measures [50, 51, 44], and relevant additional sources within the Introduction. **** **[W2]** We greatly appreciate your feedback. We agree on the importance of providing clear information regarding the dataset, attack choice, and norm used in our evaluations in the main paper. This will enhance the transparency, readability, and contextual understanding of our paper. We will definitely provide a detailed explanation of these key details in the Introduction and Methodology. We again appreciate the reviewer's thoughtful input. **** **[W3]** Thank you for your detailed remarks. We will fix all the typos/suggestions and make sure to address any remaining ones. **** **[W4]** We regret any confusion that may have arisen as a result. We would like to clarify that, apart from the weight-norm measure, all other measures are indeed evaluated on the training set, as established in prior works [22, 23, 51]. Although some of the settings are presented in Appendix “B. Measures”, we acknowledge that this has not been mentioned in the main paper. In our revised manuscript, we will prominently highlight the evaluation of measures on the training set. We will clarify and add the detailed settings at the beginning of “3. Experimental Methodology” and “4. 
Experiments”. **** **[Q1]** First of all, your observation is correct. In the past, the statement that "enhanced flatness corresponds to improved generalisation" had been widely accepted in the deep learning community [13-15]. Indeed, sharpness-aware minimization has demonstrated superior generalisation performance across various models and tasks, including adversarial training [15, 49]. However, **recent research [4] has challenged this statement** by revealing that flatness does not correlate well with generalisation in the standard training framework, particularly within modern practical scenarios. Similarly, our work also finds that flatness does not correlate well with robust generalisation in the adversarial training framework. Based on these observations, **we conclude that flatness should be considered an objective to pursue during optimization, rather than a direct evaluation metric** for predicting a model's generalization performance. Additionally, we greatly appreciate your insightful observation and the reference you provided, "Adversarial Training Makes Weight Loss Landscape Sharper in Logistic Regression" by Yamada et al. 2021. This paper indeed offers an interesting observation, both theoretically and empirically, on the relationship between perturbations during adversarial training and the sharpness of the loss landscape. Based on their observation, we recognize a potential connection between the observed high values of the robust generalization gap depicted in Figure 1 and the sharpening effect on the loss landscape resulting from larger perturbations during adversarial training. **Indeed, we also confirmed that adversarially trained models tend to exhibit sharper loss landscapes when compared to standard trained models (refer to Figure S2 in the PDF in the global response).** We will definitely add a new subsection in our paper. 
In assessing the flatness measures on the test set, we conducted an evaluation for both 'estimated_sharpness' and 'estimated_invariant_sharpness'. However, as shown in the table below, there is **NO** significant correlation between test flatness and the robust generalization gap.

| $\pi_k$ | Model | Method | Steps | Optimizer | Batch | Aug | Semisup | flag | Total $\tau$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| estimated_sharpness(test) | 0.10±0.01 | 0.15±0.03 | 0.12±0.20 | 0.09±0.07 | 0.08±0.04 | 0.14±0.05 | 0.10±0.03 | 0.15±0.03 | 0.09 |
| estimated_invariant_sharpness(test) | 0.12±0.01 | 0.16±0.05 | 0.09±0.18 | 0.13±0.06 | 0.12±0.04 | 0.17±0.02 | 0.13±0.00 | 0.15±0.05 | 0.12 |

Certainly, our analyses could be extended to test examples (or validation examples), but for our study, we chose to focus on train examples. This choice aligns with the approach outlined in [23], which argues that “the most direct and principled approach for studying generalization in deep learning is to prove a generalization bound which is typically an upper bound on the test error based on some quantity that can be calculated on the training set.” Moreover, it is noteworthy that some studies use the statement that "a superior measure value leads to better generalization" to support the claim that "minimizing (or maximizing) a measure value during training results in improved generalization," which is not always true, as demonstrated in [4]. Most importantly, the theoretical analyses in prior work [23, 49, 50] have often used the framework of PAC learning, which fundamentally relies on the training set for establishing upper bounds on the generalization gap. In this regard, we believe a comprehensive assessment of measures on the training dataset is important to avoid wrong conclusions or misuses, such as employing these measures as optimization targets or early-stopping criteria. 
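For intuition on what a measure like estimated_sharpness computes, here is a minimal, purely illustrative sketch (this is not our released code; the function and toy landscapes are hypothetical): a Monte-Carlo estimate of the average loss increase under small Gaussian perturbations of the parameters. Whether `loss_fn` is built from the training or the test split is exactly the evaluation choice discussed above.

```python
import random

def estimated_sharpness(loss_fn, w, sigma=0.05, n_samples=200, seed=0):
    """Monte-Carlo sharpness sketch: mean increase of the loss when the
    parameter vector w is perturbed by Gaussian noise of scale sigma."""
    rng = random.Random(seed)
    base = loss_fn(w)
    total = 0.0
    for _ in range(n_samples):
        w_pert = [wi + rng.gauss(0.0, sigma) for wi in w]
        total += loss_fn(w_pert) - base
    return total / n_samples

# Toy quadratic "loss landscapes": the steeper bowl reads as sharper.
flat_loss = lambda w: 0.5 * sum(wi * wi for wi in w)
sharp_loss = lambda w: 5.0 * sum(wi * wi for wi in w)
w0 = [0.0] * 10
assert estimated_sharpness(sharp_loss, w0) > estimated_sharpness(flat_loss, w0) > 0
```

With a fixed seed the two estimates use identical perturbations, so the steeper bowl's estimate is exactly ten times the flatter one, isolating curvature from sampling noise.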
**** **[L1]** We sincerely hope that the responses provided above address the outlined weaknesses and limitations. Thank you once again for your insightful assessment and guidance. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed reply. > [W2] We greatly appreciate your feedback. We agree that the importance of providing clear information regarding the dataset, attack choice, and norm used in our evaluations in the main paper. This will enhance the transparency, readability, and contextual understanding of our paper. We will definitely provide a detailed explanation of these key details in the Introduction and Methodology. We again appreciate the reviewer's thoughtful input. I still believe this should be in the abstract too, given that several works show how loosely correlated robustness against different norms or types of attacks can be (it could even be a stated limitation). > Indeed, we also confirmed that adversarially trained models tend to exhibit sharper loss landscapes when compared to standard trained models (refer to Figure S2 in the PDF in global response). We will definitely add a new subsection in our paper. That will be an interesting addition indeed. > Certainly, our analyses could be extended to test examples (or validation examples), but for our study, we chose to focus on train examples. This choice aligns with the approach outlined in [23], which argues that “the most direct and principled approach for studying generalization in deep learning is to prove a generalization bound which is typically an upper bound on the test error based on some quantity that can be calculated on the training set.” Moreover, it is noteworthy that some researches often use the statement that "superior measure value lead to better generalization" to support the claim that "minimizing (or maximizing) measure value during training results in improved generalization,” which is not always true as demonstrated in [4]. 
Most importantly, the theoretical analyses in prior work [23, 49, 50] have often used the framework of PAC-learning that basically uses the training set for establishing upper bounds for the generalization gap. In this regard, we believe a comprehensive assessment of measures on the training dataset is relatively important to avoid wrong conclusions or misuses, such as employing these measures as optimization targets or early-stopping criteria. At the risk of saying something obvious, generally, the test set should be used for general final conclusions about metrics, the validation set for hyperparameter selection and early stopping (yes it's part of training but the weights aren't trained on it, it's a proxy for a test set), and of course the training set for direct optimisation / parameter tuning. The reason why is that the underlying assumption behind many of the theoretical claims mentioned is that everything is i.i.d. or equivalently that the training data is representative of the test data, and a second point is that this assumes the performance of a model doesn't come (partly) from memorisation. Nowadays, with overparametrising becoming the norm, the $\simeq 0$ training loss is evidently poorly indicative of the test loss. --- Reply to Comment 1.1.1: Comment: Thank you for your response to our rebuttal. > I still believe this should be in the abstract too, given that several works show how loosely correlated robustness against different norms or types of attacks can be (it could even be a stated limitation). We will certainly include the information about the norm ($L_\infty$) and other relevant settings in the abstract and limitation. 
> At the risk of saying something obvious, generally, the test set should be used for general final conclusions about metrics, the validation set for hyperparameter selection and early stopping (yes it's part of training but the weights aren't trained on it, it's a proxy for a test set), and of course the training set for direct optimisation / parameter tuning. The reason why is that the underlying assumption behind many of the theoretical claims mentioned is that everything is i.i.d. or equivalently that the training data is representative of the test data, and a second point is that this assumes the performance of a model doesn't come (partly) from memorisation. Nowadays, with overparametrising becoming the norm, the 0 training loss is evidently poorly indicative of the test loss. We sincerely appreciate your comment, and it has prompted us to rethink our methodology. In particular, your comment *"The reason why is that the underlying assumption behind many of the theoretical claims mentioned is that everything is i.i.d. or equivalently that the training data is representative of the test data,"* highlights a crucial aspect that we had overlooked. We now acknowledge the potential importance of analyzing the test data with measures. In light of your feedback, we have revisited our code and have initiated the process of conducting the same analysis on the test set. We anticipate having the results available before the end of the rebuttal period, but please note that it may require some time. We again appreciate your additional guidance.
Summary: The paper conducts a large-scale correlation analysis for adversarially trained models to identify which existing robustness-related measures are predictive of robust overfitting. The correlation analysis is performed over a large set of relevant dimensions (including model architectures, training procedures, attack strengths, optimizers, batch sizes, etc.). The overall experimental methodology seems fine. Strengths: The experiments uncover some interesting findings regarding which metrics can be predictive of robust generalization. A definite strength of this work is that all code and trained models are made available online -- this will enable the adversarial ML community at large to build on the analysis explored herein with new measures, training methods/settings/etc. Weaknesses: 1. A significant weakness of this work is its title. The title is non-descriptive and incendiary. For an academic publication (i.e. not a blog post, news article or social media post) this seems wholly unacceptable. Please update it to something suitably descriptive instead. 2. How related this work is with reference [23] (Jiang & Neyshabur et al., "Fantastic Generalization Measures and Where to Find Them") is severely downplayed. Essentially, this work follows almost the exact same playbook as [23]. A subsection on the relationship of this work with [23] should be included in this paper. For example, a small step in this direction is section 3.3 which already points towards the evaluation metrics being wholly adopted from [23] while changing the notation. 3. The attack used for measuring the robustness of trained models on the test set is just PGD-10 and this is not strong enough for a proper evaluation of robustness. AutoAttack (at least), and e.g., Square and Multi-Targeted attacks should've been used instead to better approximate the models' adversarial robustness. 
To root out training procedures that are affected by gradient obfuscation, using just a simple gradient-based attack (with just 10 steps; no restarts; etc.), like PGD-10 for evaluation, is insufficient. What slightly ameliorates this point is the inclusion of analysis of models from RobustBench (e.g. Figure 5). 4. Only when models are actually robust on the train set it is meaningful to measure the robustness gap; including the bottom left part of Figure 1 seems like it would just introduce noise into the whole estimation process. 5. Two important sections, including the Key Findings and Section 3.3 (which introduces the evaluation metrics) should be re-written much more clearly. For example, key finding 1 states "traditional metrics may be ineffective due to the high sensitivity with respect to training setups" -- this is very unclear? What are traditional metrics? Saying that something "may be ineffective" is so imprecise that it becomes not useful for the reader. In key finding 3, it is said that "flatness-based measures [...] perform poorly and even sharper minima can exhibit a lower robust generalisation gap" -- what does it mean for sharper minima to exhibit a lower robust generalisation gap? Do you mean something like: "neural networks which have sharper minima exhibit a lower generalisation gap in 70% of cases we evaluated?". Please rephrase. For section 3.3, I had to carefully read [23] to try to be able to parse what section 3.3 is stating -- please clarify all the writing from 147-159; e.g., the point in lines 144-145 is completely unclear by itself; the point in lines 158-159 has typos and is unclear; similarly, what does "generalization gap varies sensitively with respect to training setups" mean? Could you motivate why the pi_k metric is effective (line 152)? An explanation of why this might be the case only shows up in section 4. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Do the key findings hold when considering only trained models which have high train & test robustness? Table 1 is not very easy to read at a high-level; have you attempted colour-mapping it (i.e. like a heatmap)? Have you tried training a classifier (e.g. just an MLP) using all measures as input features to predict a discretized robust generalization gap? Depending on how well the classifier generalizes, this could be useful as a direct optimization proxy for improving the predicted robust generalization gap. It seems a bit awkward to not qualify "measures" with a prefix; for example, in [32] the considered measures are named "complexity measures". E.g., have you considered something like "robustness measures", "proxy measures" etc.? See also the section on weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are discussed briefly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's insightful assessment of the strengths and weaknesses of our paper. Your feedback provides valuable insights that help us improve the quality and clarity of our work. Below, we provide a detailed response addressing each of the weaknesses [W#] and questions [Q#] raised with all references aligned to the main paper: **** **[W1]** We are sorry to hear that the title of the paper was found unacceptable. The initial title was motivated by [23] (Jiang & Neyshabur et al., "Fantastic Generalization Measures and Where to Find Them"), but we understand the need for a more accurately descriptive title that better reflects the content and objectives of our research. **We have taken this feedback seriously and are in the process of revising the title** to ensure that it appropriately represents the scholarly nature of our work. One of the potential titles that we are currently considering is 'Evaluating Generalization Measures in the Adversarial Training Framework.' We believe this title encapsulates the core focus of our research, highlighting the evaluation of generalization measures within the context of an adversarial training framework. However, if you have any further suggestions or ideas for a more fitting title, we would greatly appreciate your input. **** **[W2]** We appreciate the reviewer's feedback regarding the relationship between our work and the reference [23]. Indeed, our study draws inspiration from the valuable insights presented in [23], which conducted an evaluation of complexity measures in the standard training framework. However, due to space limitations in the main paper, we regrettably could not thoroughly delve into the connection between our work and [23]. 
In response to the reviewer's suggestion, **we introduce the following new subsection that explicitly addresses the connection between our study and [23] based on our answer to the first weakness in the global response:** \subsection{Comparison to Jiang et al. [23]} The pioneering study [23] explored the empirical correlations between complexity measures and generalization, with a primary focus on the standard training framework. Our main contribution is delving specifically into the realm of robustness measures within the adversarial training framework—a context having different generalization tendencies even with the same measures and experimental settings, as demonstrated by prior research [43, 52]. Of significance is the observation that the metric $\psi_k$ proposed in [23] has limitations in accurately capturing the effectiveness of robustness measures due to its high variance with respect to training setups. By introducing a new metric, our work enhances the understanding of when and how robustness measures correlate with robust generalization. Moreover, while Jiang et al. [23] employed customized parameter-efficient neural networks, we adopt widely-used model architectures such as ResNets, thereby providing insights that are not only relevant to recent research but also offer more practical implications. **** **[W3]** While we acknowledge the potential benefits of AutoAttack, the constraints outlined below guided our decision to use PGD-10. **We believe that Appendix “A.4 Robust Generalization Gap with AutoAttack”**, which demonstrates the comparison between the robust generalization gaps calculated using PGD and AutoAttack, **can help alleviate some of the concerns you've raised.** However, to address your concern in detail, we explain here the three reasons why we primarily used PGD-10. 
**Firstly, the computational cost of AutoAttack is substantial.** Our experimental design involved training models across diverse adversarial settings and required adversarial examples for both training and test datasets to estimate the robust generalization gap. However, with AutoAttack, it takes 10 min/batch for WRN-34-10 on our resources. **Since we used 1300 models, we would need at least 1 year to obtain all adversarial examples even with 6 GPUs.** Thus, the resource-intensive nature of AutoAttack led to impractical computation times, making it infeasible to execute extensive experiments. **Secondly, PGD is prevalently used as a baseline during training** among various methods. Indeed, all popular adversarial training methods, such as AT, TRADES, and MART, use PGD as a baseline during training, and thus early stopping (or other similar techniques) is adopted with PGD on training or validation sets. This led us to consider PGD as a more practical choice for providing practical insights. **Lastly, the specific robustness measures we employed, namely boundary_thickness and local_lip, rely on PGD adversarial examples for their calculation.** As these robustness measures are often computed using PGD, the choice to use PGD for evaluation contributes to consistency across our experiments. **** **[W4]** In the main paper, we mainly conducted experiments on all possible models to comprehensively validate the robustness measures across a wide range of models. However, we also share your curiosity about the implications of our analysis for models with high robustness. **To verify this, we have provided “A.2. Focusing on Adversarially Robust models” in the Appendix.** Specifically, we selected models by conditioning on ‘average_ce(PGD)’ ≤ 1.5. Upon analyzing these adversarially robust models, we observed distinctive behaviors in certain robustness measures (’path_norm’, ‘average_ce’, ‘x_grad_norm’, ‘inverse_margin’, ‘prob_margin’, and ‘boundary_thickness’). 
For example, margin-based measures displayed more pronounced negative correlations, while ‘boundary_thickness’ became effective in estimating the robust generalization gap. Please refer to Appendix A.2. for more detailed explanations and graphical representations of our findings. **** *The remaining weakness and questions will be addressed in the continued official comment due to character limitations.* --- Rebuttal Comment 1.1: Title: Rebuttal by Authors [Continue] Comment: **** **[W5]** We apologize for the unclear points and thank you very much for reading carefully. **To address your concerns, we thoroughly revised the sections and sentences you highlighted.** For instance, we have revised the following sentences as follows: * Original: "Evaluating measures under the adversarial training framework with traditional metrics may be ineffective due to the high sensitivity with respect to training setups." Revised: "Given the high sensitivity of the robust generalization gap with respect to training setups, calculating the expectation of the rank correlation across a wide range of training setups yields a high variance and fails to capture a trend that appears in several groups of data." * Original: "Flatness-based measures, such as estimated sharpness, generally perform poorly and even sharper minima can exhibit a lower robust generalization gap." Revised: "Flatness-based measures, such as estimated sharpness, generally perform poorly in predicting the robust generalization gap. Rather, they show a negative correlation, indicating that models with sharper minima tend to exhibit a lower robust generalization gap." **Finally, we would like to clarify the motivation and effectiveness of $\pi_k$.** The notion of "generalization gap varies sensitively with respect to training setups" refers to the phenomenon where the range of the robust generalization gap can significantly differ based on the specific configuration of training parameters. 
This characteristic, distinct from the standard training framework [23], is evident in Figure 1, where the robust generalization gap exhibits a wide range of values, spanning from 0 to 65, across various training setups. This aligns with prior findings [37] that the performance of adversarial training is heavily affected by diverse hyperparameters, such as optimizers, batch size, and early stopping. Due to this distinct characteristic of adversarial training, the metric $\psi_k$ proposed by [23] exhibits extremely high variance, as shown in Table 1. This high variance of $\psi_k$ raises concerns about the reliability of the correlations it captures and the potential for misleading conclusions. As Rev. vGwc mentioned, this phenomenon can be explained by Simpson's paradox, where $\psi_k$ might provide misleading results or struggle to capture meaningful correlations between robustness measures and the robust generalization gap. To address these concerns, we introduced the metric $\pi_k$ to offer a deeper understanding of correlations between robustness measures and the robust generalization gap. Notably, we have observed that $\pi_k$ demonstrates significantly lower variance compared to $\psi_k$ (Table 2), and it enhances our ability to identify conditions that yield high rank correlations for specific measures (Figure 3). We will update all the responses in the revised version. We sincerely thank you for helping us improve the overall quality of our work. **** **[Q1]** Please refer to the response to [W4]. **** **[Q2]** We appreciate the reviewer's suggestion to enhance the readability of Table 1. While we have tried to visualize Table 1 well, the extensive information contained in the table poses a challenge for effective color representation. We believe that, given the comprehensive nature of the data we are presenting, visualizing it might introduce complexity. 
One approach we are considering is the incorporation of bar plots, which could provide a clearer representation of the data distribution and relationships. Thus, we reported some figures in the accompanying PDF in this global response, and we will add all of them in the Appendix, as the figures require significant space. --- Reply to Comment 1.1.1: Title: Rebuttal by Authors [Continue] Comment: **** **[Q3]** In Appendix 'A.3 Robust Measures with Regression Analysis', we have conducted a linear regression analysis to explore the predictive potential of combinations of robustness measures on the robust generalization gap. Given that 'average_ce(PGD)' consistently demonstrated the highest correlation with the robust generalization gap across diverse settings, we incorporated each measure as an independent variable. Notably, our experimentation revealed that a combination of 'x_grad_norm' and 'average_ce(PGD)' yielded the most compelling performance in predicting the robust generalization gap. To push further, we extend our exploration with an additional experiment, wherein we employed a 5-fold evaluation strategy with a linear regression model to predict the robust generalization gap. During the experiment, we also considered feature selection. Specifically, we employed forward selection to identify the most effective set of measures. In the subsequent table, we present the results of our 5-fold evaluation, reporting the average $\tau$ along with its standard deviation.

| #Measures | Selected Measures | 5-fold $\tau$ (Avg.±Std.) | Increment |
| --- | --- | --- | --- |
| 1 | ['average_ce(PGD)'] | 0.7229±0.1301 | |
| 2 | ['x_grad_norm', 'average_ce(PGD)'] | 0.7683±0.1040 | +0.0454 |
| 3 | ['x_grad_norm', 'average_ce(PGD)', 'pacbayes_mag_flat(PGD)'] | 0.8145±0.0728 | +0.0462 |
| 4 | ['x_grad_norm', 'average_ce(PGD)', 'x_grad_norm(PGD)', 'pacbayes_mag_flat(PGD)'] | 0.8219±0.0730 | +0.0073 |
| All | - | 0.8165±0.0868 | |

As we observed, 'average_ce(PGD)' is selected as a prominent predictor, followed by the selection of 'x_grad_norm'. This result is consistent with Tables 12 and 13. Furthermore, our exploration identifies 'pacbayes_mag_flat(PGD)' and 'x_grad_norm(PGD)' as additional effective measures, resulting in a higher average $\tau$ compared to using the entire feature set. We sincerely appreciate your constructive input, which motivated us to conduct these additional analyses. We will surely include this discussion in a revised version of the paper. **** **[Q4]** We appreciate your suggestion regarding the terminology used for the measures. Your suggestion of using the term 'robustness measures' is well taken, as it provides a clearer context for the nature and intent of the metrics under evaluation. In our revised version, we will adopt this terminology consistently throughout the manuscript to ensure greater clarity and precision. Thank you.
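The forward-selection procedure described in [Q3] can be sketched as follows. This is a minimal illustration on synthetic stand-in data (the actual robustness measures and model set are those from the paper and are not reproduced here):

```python
import numpy as np
from scipy.stats import kendalltau
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold

def cv_tau(X, y, cols, n_splits=5, seed=0):
    """Average Kendall's tau over K folds for a linear model on the given columns."""
    taus = []
    for tr, te in KFold(n_splits, shuffle=True, random_state=seed).split(X):
        model = LinearRegression().fit(X[tr][:, cols], y[tr])
        tau, _ = kendalltau(model.predict(X[te][:, cols]), y[te])
        taus.append(tau)
    return float(np.mean(taus))

def forward_select(X, y, max_feats=4):
    """Greedily add the measure that most improves the 5-fold tau."""
    selected, remaining, history = [], list(range(X.shape[1])), []
    for _ in range(max_feats):
        best = max(remaining, key=lambda j: cv_tau(X, y, selected + [j]))
        selected.append(best)
        remaining.remove(best)
        history.append((list(selected), cv_tau(X, y, selected)))
    return history

# Synthetic stand-in: feature 0 plays the role of the dominant measure
# (e.g. 'average_ce(PGD)'); the target mimics the robust generalization gap.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + 1.0 * X[:, 1] + 0.3 * rng.normal(size=200)
for feats, tau in forward_select(X, y):
    print(feats, round(tau, 3))
```

On this toy data the dominant feature is picked first and each greedy addition either raises the cross-validated $\tau$ or plateaus, mirroring the increments reported in the table.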
Rebuttal 1: Rebuttal: **Dear all,** We would like to thank the editor and the reviewers for their careful comments and suggestions. We summarize the reviews according to our own perspective. **** **Strengths.** We are glad that Reviewers PVgT, 2BXc, and vGwc found that our results **“uncover some interesting findings regarding which metrics can be predictive of robust generalization”** and **“is always helpful for the research community as it helps develop intuition that might lead to novel methods.”** Reviewers 2BXc and vGwc also highlighted that **“the paper is generally well-written”** and is rooted in **“clearly and well-described”** methodology and experiments. **Weaknesses.** We have welcomed all reviews and did our best to carefully address every concern. Specifically, the reviewers raised the following common concerns. 1. Lack of detailed explanations on the relationship to [23] and distinctions between our study and the findings of [23]. (Rev. PVgT and vGwc) 2. More discussion of results and contributions. (Rev. vGwc and qzPP) 3. Visibility of tables, which are not very easy to read at a high level. (Rev. PVgT and vGwc) **Our answers to the common concerns raised by the reviewers:** 1. We carefully revisited [23] (Jiang & Neyshabur et al., "Fantastic Generalization Measures and Where to Find Them") and we here introduce a summarization of the differences between our study and the findings of [23]. To summarize, **(i) our work delves into the adversarial training framework**, diverging from the standard training framework in [23]. Furthermore, **(ii) we adopt widely-used models such as ResNets**, in contrast to the customized parameter-efficient structures of [23]. Importantly, **(iii) we introduce the novel metric $\pi_k$**, addressing the limitations of the metric $\psi_k$ from [23] and facilitating a precise assessment of the correlations between measures and robust generalization. 
We will add a new subsection that explicitly addresses clarification of the relationship to [23] in the revised version. 2. To augment the comprehensiveness of our results and their contributions, **we conducted an in-depth review of an additional 15 papers (referenced as [S#])** ━will be updated in the continued comment due to character limitations━ related to “margin maximization”, “boundary thickness”, and “local Lipschitzness”. * Margin maximization: According to prior works [S1, S2], models trained with CrossEntropy serve as max-margin predictors by optimizing a lower bound of the margin within both standard and adversarial training frameworks. However, recent studies have highlighted that maximizing the margin might not necessarily be the optimal objective in adversarial training due to intricate gradient flow dynamics [S3] and the non-cognitive concept of using predicted probabilities [S4]. Moreover, recent work [S5, S6] shows that, despite the similar robustness of TRADES and AT, their margin distributions on benign and adversarial examples are extremely different. This implies that the margin cannot be the sole determinant of adversarial robustness. Our findings also support these observations by revealing that margin alone does not correlate well with robust generalization; in fact, excessive pursuit of margin maximization can potentially have an adverse impact on robust generalization. Considering other recent studies [S7, S8], margin maximization should be accompanied by a consideration of other factors such as weight regularization or gradient information. * Boundary thickness: Boundary thickness often serves as a supportive measure to validate the robustness of new training algorithms [S10]. However, empirical validation of its efficacy remains sparse, primarily due to the limited focus on AT in the original paper. 
Intriguingly, our comprehensive experiment reveals that boundary thickness exhibits a relatively low correlation with the robust generalization gap within models trained by TRADES or MART. We believe that this is caused by the limitation of the margin, because boundary thickness also solely depends on the probability margin. Considering that AT shows a more prominent margin distribution compared to TRADES [S5], this margin-based measure, boundary thickness, might be more aligned with AT (where robustness is attained through margin maximization) than with other training methodologies. Indeed, as shown in Figure 3, AT is placed as an outlier for both prob_margin and boundary_thickness. Consequently, our study cautions against the assumption that a higher boundary thickness implies better robust generalization. We emphasize that the concurrent work [S11] also points out that boundary complexities such as boundary thickness are highly abstract. * Local Lipschitzness: While the foundational work [S12] demonstrates that imposing local Lipschitzness leads to better generalization in linear classification, recent research [50] presents an opposing perspective, suggesting that within neural networks, local Lipschitzness might hurt robust generalization. However, it is noteworthy that the conclusions drawn in [50] are built on a limited set of fewer than 20 models and are based solely on test examples, warranting further investigation for comprehensive validation. In our large-scale experiments, we do not observe that local Lipschitzness itself negatively affects robust generalization. Moreover, in Appendix Tables 8 and 9, we observe that local Lipschitzness is also not perfectly aligned with robust accuracy, in contrast to [50, S15]. These findings are consistent with recent works [S13, S14], which highlight the importance of model architecture or weight norms when evaluating models with local Lipschitzness. 3. 
To make the tables easier to read at a glance, we will add visualizations of our tables by providing multiple point plots with error bars for each table in the Appendix. **Sample plots are available in the accompanying PDF in this “global” response.** **↓Accompanying PDF** Pdf: /pdf/50e4f0624e02d50a6d4cf9b3867a775238923197.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Active Bipartite Ranking
Accept (poster)
Summary: The paper proposes an elimination active learning criterion for the bipartite ranking problem. The criterion accounts for the local smoothness of a data sample’s posterior distribution. The author proves the sample complexity and the PAC learnability of the proposed algorithm under some assumptions. Strengths: 1. The proposed methodology is theoretically sound. The removal of a point in the active set corresponds to the learning objective, which is to minimize the gap between the current ROC and the optimal ROC. Beyond that, the PAC learning bound has been derived. Weaknesses: 1. The proposed method seems not to be practically sound. Most learning problems nowadays have high input dimensions. It is not clear whether the proposed algorithm can be easily extended to solve those real-world problems or if it only enjoys the nice properties for one-dimensional data. 2. It is not clear how the criterion will change after the removal of a data point. If the impact on the posterior around data point i is dependent on other points, is it still a fair criterion? Technical Quality: 3 good Clarity: 3 good Questions for Authors: My questions are listed in the weakness section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Comment: Please see the general rebuttal in regards to your first question. Indeed, due to the interconnected nature of the problem the decision to remove a point may depend on points already removed from the active set, a problem not found in multi-armed bandits; see the discussion at the end of section 3.1. For this reason a point is removed from the active set only when the number of points appearing within a certain distance of it is sufficiently small. This distance is $\Delta_{(t)}$, multiplied by some large constant. Thus any point $j$ not yet clearly distinguished from the removed point also has the guarantee that $\Delta_j \leq c\Delta_{(t)}$, for some constant $c$, thus ensuring that our criterion is consistent. See the proof of Lemma A.3 for details. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: My concerns have been addressed, and I have changed my rating accordingly.
Summary: The paper proposes an active learning algorithm for the bipartite ranking problem, in which the goal is to provide a ranking for a set of points rather than assign positive or negative labels. The proposed algorithm, called active-rank, tries to optimize the ROC curve, evaluated by the distance to the optimal one. The authors provided a detailed theoretical analysis that guarantees the distance can be arbitrarily small with high probability. The evaluation is performed on synthetic data. Strengths: - The paper appears to be technically sound, and the theoretical analysis is in depth. - According to the authors, the paper is the first rigorous framework for active bipartite ranking. Weaknesses: - The paper is difficult to follow for those who are not familiar with this topic. - The dimension of the input is restricted to 1, for which the significance is not clear to me. The authors should have clarified it more. - The experiments are only for synthetic data. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The authors assume the piecewise constant model. Is it a reasonable assumption for the bipartite ranking setting? - K is assumed to be known. How is it selected in practice? - In section 4, scenario 1 is 'K = 16'. Is it 'K = 4'? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Some limitations are discussed in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Comment: Please see the general rebuttal in regards to your first and second questions. The definition of scenario 1 was wrong; the correct version is: Scenario 1: $\mu_i = 0$ when $i\leq 11$, $\mu_{12}=\mu_{13} = 0.23$, $\mu_{14}=\mu_{15} = 0.33$, $\mu_{16}=0.35$ and $K = 16$. This scenario aims at testing the algorithm when a significant section of the feature space is constant.
Summary: This paper develops an active learning framework for the bipartite ranking problem. A selective sampling procedure or strategy is necessary for labeling data points sequentially. The proposed method is called active-rank, which aims to minimize the distance between the ROC curve of the ranking function built and the optimal one, w.r.t. the sup norm. Theoretical results as well as some associated numerical results based on synthetic data demonstrate strong empirical evidence of performance. Strengths: - The paper is well-written with clear notation. - Concrete theoretical results are provided and discussed for the proposed algorithm. - The performance of the proposed active-rank method is demonstrated by experiments that are in line with the theoretical considerations and the active sampling procedure. Weaknesses: - Given the elimination nature of the proposed method, the experiments lack behavioral observations of the proposed method on real datasets, which may be helpful for the practitioner's reference. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you describe the pros and cons for practitioners, when working on real datasets where the scenario situation described in Section 4 is not clear? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: no concern Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
null
Summary: An active bipartite ranking algorithm is proposed in this article for one-dimensional data with a posterior probability $\eta(x) = {\rm P}(Y =+1\vert X =x )$ piecewise constant on a grid of size $K$. An upper bound is provided on the number of queries needed to achieve a controlled error of type PAC$(\epsilon,\delta)$ on the ${\rm sup}$ norm between the learned ROC curve and the optimal one. Compared to the lower bound for any sampling policy also established in this paper, the upper bound suffers notably from a logarithmic dependence on the size $K$ of the grid. Experiments on synthetic data conforming to the aforementioned setting show a consistent superiority of the proposed algorithm over several baselines. However, a really close match is observed between the uniform passive sampling and the proposed active method. Strengths: * The studied problem, active bipartite ranking, is practically interesting and original. * The theoretical framework is clearly presented with a brief explanation of its relation to the best arm identification (BAI) problem. * Statistical guarantees are provided to support the proposed algorithm and to lay the ground for future investigation. * The proposed algorithm is tested on synthetic data. Weaknesses: * The data setting is quite restrictive in a one-dimensional space with the posterior probability piecewise constant on a grid of a known size $K$. * The passive sampling performs comparably to the proposed active algorithm under various scenarios conforming to the data setting underlying the proposed algorithm. * There is no testing on real data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Does there exist an upper bound for uniform passive sampling under the same setting or a similar one? 
Seeing the empirical results, I suspect that passive sampling already achieves nearly optimal performance, possibly due to the fact that the posterior is piecewise constant on a uniform partition of the space and data points inside each interval are statistically equivalent. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have discussed the extensions to smooth posteriors and to higher dimensions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Comment: I thank the authors for the discussion on the comparison to passive learning, which is helpful and worth developing in the main text. My score is increased.
Rebuttal 1: Rebuttal: We thank all reviewers for the time taken to read our paper and for their constructive feedback. Some topics come up for several reviewers. Performance of the passive approach in comparison to our active algorithm: For a uniform/passive sampling strategy to be PAC$(\epsilon,\delta)$ one would have to draw samples until the width of the confidence interval at all points $i$ is less than $\Delta_i$. Therefore, a uniform sampling strategy, with an appropriate stopping rule, would have the following tight upper bound on its expected sampling time, up to log terms, $cK\max_{i\in [K]}\frac{1}{kl(\mu_i,\mu_i + c'\Delta_i)},$ for some absolute constants $c,c'>0$. Essentially, our improvement in the active setting is to replace the $\max$ with a weighted summation across the grid. Thus, in settings where the $\Delta_i$ are relatively constant across large sections of the grid, the theoretical performance of a passive approach can be close to optimal. In contrast, in cases where a very small section of the interval is hard to rank and the rest is easy - i.e. $K\max_{i\in [K]}\frac{1}{kl(\mu_i,\mu_i + \Delta_i)}$ is much greater than $\sum_{i\in [K]}\frac{1}{kl(\mu_i,\mu_i + \Delta_i)}$ - a passive approach will fail. Such settings involve large $K$ where the gaps $\Delta_i$ on the majority of cells are large, with a relatively small number of cells with small gaps $\Delta_i$. Incidentally, we point out that this corresponds to many situations of interest in practice (in information retrieval, for a specific request, the vast majority of the documents are equally irrelevant, while the ranking of a very small fraction of relevant documents is challenging; the same phenomenon is also observed in credit-risk screening). In these cases the benefit to the practitioner will then be that they quickly focus on the interesting sections of the feature space. For such a setting one needs very small tolerance $\epsilon$ as well as large $K$. 
This results in computational difficulties for experiments and, as noted in the paper, some work is required to reveal a setting where active-rank shows a large improvement over a uniform approach. We agree with the reviewers that comparison to a passive approach is lacking; a final version of the paper would include additional discussion in line with the above. Extension to higher dimensions: Our analysis extends immediately to higher dimensions. Indeed one may consider a function piecewise constant on a $d$-dimensional grid of size $K$ and, as we can always project a $d$-dimensional grid onto the $[0,1]$ interval, our results remain unchanged. Indeed, for better illustration, our experiments are carried out in dimension 2. A more interesting question is to include a sparsity assumption with increasing dimension $d$, as this arises in many practical situations. This would fundamentally change the nature of the problem and is beyond the scope of this paper. A discussion will be added in the final version for the sake of clarity. The piecewise constant assumption and assumed knowledge of $K$: For active learning in general such an assumption is very reasonable, first and foremost as it is essential for the majority of the literature in finite multi-armed bandits. Indeed our bipartite ranking problem can be viewed as a finite multi-armed bandit where the objective of the learner is to rank the $K$ arms. Other pure exploration problems, such as best arm identification, have seen extensive interest in the case of fixed known $K$. We believe ranking the arms is as natural a question as finding the best arm and the potential for practical application is no less. In practice one can replace knowledge of $K$ with an upper bound, although this will then appear in the bounds. 
Selecting $K$ is a tricky question; ideally one would have expert knowledge, however, in the absence of this it may be better to remove the piecewise constant assumption and view the problem under a continuity assumption, e.g. a $\beta$-Hölder smoothness constraint. This is akin to the extension of the finite multi-armed bandit to the infinite-armed bandit. Now the question becomes, how does one proceed without knowledge of $\beta$? Ideas from the bandit literature may be applied to our setting, e.g. Carpentier, Valko, 2015, where $\beta$ is estimated. This question goes beyond the scope of our paper and will also be especially challenging in higher dimensions. Reviewer 5: Indeed, due to the interconnected nature of the problem the decision to remove a point may depend on points already removed from the active set, a problem not found in multi-armed bandits; see the discussion at the end of section 3.1. For this reason a point is removed from the active set only when the number of points appearing within a certain distance of it is sufficiently small. This distance is $\Delta_{(t)}$, multiplied by some large constant. Thus any point $j$ not yet clearly distinguished from the removed point also has the guarantee that $\Delta_j \leq c\Delta_{(t)}$, for some constant $c$, thus ensuring that our criterion is consistent. See the proof of Lemma A.3 for details. Reviewer 1: Line 159 is a typo; it should read "for all $j: |\mu_i - \mu_j| \geq \Delta_i$". The x-axis in figure 4 is indeed stopping time; this will be clarified. Aside from scenario one, for larger stopping times, i.e. greater than 500, active-rank outperforms the competitors, although by an admittedly small margin in some cases. The typo will be corrected to say something to this effect. Reviewer 4: The definition of scenario 1 was wrong; the correct version is: Scenario 1: $\mu_i = 0$ when $i\leq 11$, $\mu_{12}=\mu_{13} = 0.23$, $\mu_{14}=\mu_{15} = 0.33$, $\mu_{16}=0.35$ and $K = 16$. 
This scenario aims at testing the algorithm when a significant section of the feature space is constant.
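The passive-versus-active comparison above can be illustrated numerically. The sketch below uses purely illustrative means and gaps (not from the paper) and drops the absolute constants $c, c'$, evaluating the two bounds on an instance where a single cell is hard to rank:

```python
import math

def kl_bern(p, q):
    """Bernoulli KL divergence kl(p, q), with the 0*log(0) = 0 convention."""
    def term(a, b):
        return 0.0 if a == 0 else a * math.log(a / b)
    return term(p, q) + term(1 - p, 1 - q)

# Illustrative instance: K cells, all easy (large gap) except one hard cell.
K = 1000
mus = [0.5] * K                   # cell means (illustrative values)
gaps = [0.3] * (K - 1) + [0.01]   # Delta_i: a single cell with a tiny gap

# Passive bound (constants dropped): K * max_i 1/kl(mu_i, mu_i + Delta_i)
passive = K * max(1 / kl_bern(m, m + g) for m, g in zip(mus, gaps))
# Active bound (constants dropped): sum_i 1/kl(mu_i, mu_i + Delta_i)
active = sum(1 / kl_bern(m, m + g) for m, g in zip(mus, gaps))

print(passive / active)  # passive pays the worst-case rate on every cell
```

In the opposite regime, where all gaps are comparable, the max and the summands coincide up to constants, matching the observation above that passive sampling is then already close to optimal.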
NeurIPS_2023_submissions_huggingface
2,023
Summary: - This paper studies sample complexity bounds for bipartite ranking under an active learning paradigm. The paper gives an algorithm called “active-rank” to solve the variant of the problem under the assumption that the posterior is a piecewise constant function over a grid of size K. The sample complexity bounds are problem-dependent, for which the authors define an instance-dependent notion of problem complexity where each arm/grid point’s global effect needs to be treated individually. - The paper also gives a lower bound (not matching the upper bound) for the class of problem instances where the posterior differs only by a constant over the input range. The authors hope that the dependency of these bounds on $\Delta$ and the logarithmic gap in the upper bound can be removed but leave it as future work. Strengths: - Although not new, active learning is an important learning paradigm, and its application to bipartite ranking is a significant contribution. - The ideas in the paper are well-presented. The discussion about the global nature of the problem, compared to other works where only pairwise comparisons, a local property, need to be estimated with high error tolerance and probability, helped assess the novelty of this work. Weaknesses: - The results in the paper hold only for a carefully designed set of problem instances of bipartite ranking. However, the results contribute toward improving the understanding of active learning in bipartite ranking. So I wouldn’t consider this a weakness. - However, a theoretical comparison with passive ranking complexity is missing. Are there no existing works studying this? It is necessary to understand how the sample complexity compares between active and passive learning. What does uniform sampling give on the grid? Does active learning give any improvement over passive learning? 
The authors say before Assumption 2.1 that the assumption helps in understanding the power of active sampling over passive, but I did not see anywhere in the paper a theoretical comparison. - The presentation of the paper can be improved. The figures are hard to read; the axes are not labelled (Fig 4) and sometimes the legends are missing (Fig 2). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Please answer my question about the theoretical comparison of active vs. passive ranking for the 1D grid. - The explanation about Lemma 2.2 does not seem correct, even after ignoring the typos in Lines 158 and 159. Delta_i is an upper bound on how big a confidence interval you can have around $\mu_i$. The inequality should be $|\mu_i - \mu_j| \ge \Delta_i$. Only for such $j \neq i$, the scoring function has to get the sign correct. Is that what you meant? - Line 319 says, “active-rank all competitors”. What is missing? Active-rank outperforms all the competitors? That doesn’t seem true from Figure 3. - Is the x-axis in Figure 4 stopping time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Comment: Please see the general rebuttal regarding your first question. Line 159 is a typo; it should read "for all $j: |\mu_i - \mu_j| \geq \Delta_i$". The x-axis in Figure 4 is indeed stopping time; this will be clarified. Aside from scenario one, for larger stopping times, i.e., greater than 500, active-rank outperforms the competitors, although by an admittedly small margin in some cases. The typo will be corrected to say something to this effect. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the comparison with the passive approach. It helps judge the contribution of this work better. Increasing my score.
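The corrected condition discussed in this exchange (for every $j \neq i$ with $|\mu_i - \mu_j| \geq \Delta_i$, the scoring function must get the sign right) can be illustrated with a toy check. The values of $\mu$, $\Delta$, and the candidate scores below are made up for illustration and are not taken from the paper:

```python
import numpy as np

mu = np.array([0.1, 0.3, 0.35, 0.8])     # hypothetical piecewise-constant posterior on a grid
Delta = np.array([0.15, 0.1, 0.1, 0.3])  # hypothetical per-point gaps Delta_i
s = np.array([0.0, 1.0, 1.1, 3.0])       # candidate scores, order-consistent with mu

def sign_correct(mu, Delta, s):
    # For every pair (i, j) with |mu_i - mu_j| >= Delta_i, the scores must
    # agree in sign with the difference of the means; other pairs are exempt.
    K = len(mu)
    for i in range(K):
        for j in range(K):
            if j != i and abs(mu[i] - mu[j]) >= Delta[i]:
                if np.sign(s[i] - s[j]) != np.sign(mu[i] - mu[j]):
                    return False
    return True

print(sign_correct(mu, Delta, s))
```

Note that the pair (1, 2) is exempt here because $|\mu_1 - \mu_2| = 0.05 < \Delta_1$, so a scoring function may order those two points either way.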
null
null
null
null
null
null
DIFFER:Decomposing Individual Reward for Fair Experience Replay in Multi-Agent Reinforcement Learning
Accept (poster)
Summary: This paper proposes to train a mixing network and agent networks separately for multiagent reinforcement learning when the system is based on the centralized training and decentralized execution (CTDE) paradigm. The authors derive an individual reward that maintains invariance between the gradients of the mixing network's loss function and the sum of the loss functions of the agent networks. Then, the authors apply Prioritized Experience Replay to select samples for training the agent networks. The proposed method, DIFFER, is compared with QMIX, COMIX, QPLEX, and MASER on several tasks, such as SMAC, GRF, and MAMujoco. The experimental results show that DIFFER outperforms the baseline methods and demonstrate that a fair experience replay mechanism works efficiently. Strengths: - Originality: The idea of fair experience replay based on the constraint between the gradients of the mixing and the agent networks is novel. However, I found the following paper that may be related to this study: D. Hostallero et al. (2020). Inducing Cooperation through Reward Reshaping based on Peer Evaluations in Deep Multi-Agent Reinforcement Learning. In Proc. of AAMAS. - Quality: The experimental results support the claims and the proposed method. - Clarity: The manuscript is written very well and easy to follow. - Significance: While the CTDE approach has become popular in MARL, many previous studies do not pay attention to the difference between individual experiences. This study sheds light on its importance. Weaknesses: - Although the experimental results show that the proposed method works well, there is no theoretical analysis. In particular, the gradient of the loss function of the agent network is independent of the target networks, as described later. It would be better to discuss the loss function in detail. - In addition, the off-policyness of the proposed method should be discussed. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Major comments: - The individual reward (2) is interesting, but I am unsure how the TD error is calculated. I think that the TD error is given by $$ (R + \gamma \tilde{Q}_{\mathrm{tot}} - Q_{\mathrm{tot}}) \frac{\partial Q_{\mathrm{tot}}}{\partial Q_i} $$ because the term $\gamma \tilde{Q}_i - Q_i$ cancels out. Is the target network $\tilde{Q}_i$ necessary? Is my understanding correct? - I would like to know whether the used learning algorithm is off-policy or not. The experience replay buffer technique is typically applied to off-policy reinforcement learning algorithms, but the loss functions (6) and (7) suggest that the action value function is trained in an on-policy manner, like SARSA. Is on-policyness necessary for the proposed method? - Is the input of the mixing network consistent with the outputs of the agent networks when the mixing and agent networks are trained separately? - Does QMIX-divide adopt uniform sampling instead of PER for training the agent networks? If so, the performance difference between QMIX-DIFFER and QMIX-divide can be discussed based on Fujimoto et al. (2020). S. Fujimoto et al. (2020). An Equivalence between Loss Functions and Non-Uniform Sampling in Experience Replay. NeurIPS 33. Minor comments: - Line 87: $A^n$ -> $A^N$. - Line 96: $r$ -> $R$. - Line 154: Since $S$ denotes the state space, it would be better to use a different symbol. - Eq.(7) and Line 191: $\chi^{\mathrm{indi}}_j$ -> $\chi^{\mathrm{ind}}_j$. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Minor comments: The authors do not discuss this work's potential negative social impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer: Thanks for your valuable comments. **Q1: Is $\widetilde{Q_i}$ necessary?** A1: Your understanding is indeed correct. 1) In the computation of the TD error for the agent networks, there is no need to explicitly determine the value of $\widetilde{Q_i}$ because it cancels out within the formula. The TD error can be expressed as $(R + \gamma \widetilde{Q} _ {\mathrm{tot}}-Q_{\mathrm{tot}})\frac{\partial Q_{\mathrm{tot}}}{\partial Q_i}$, as you have correctly stated. 2) However, it is worth noting that the calculation of $\widetilde{Q_i}$ remains necessary for the determination of $\widetilde{Q_{\mathrm{tot}}}$. The target mixing network receives the $\widetilde{Q_i}$ values from each individual agent and produces the $\widetilde{Q_{\mathrm{tot}}}$ output. 3) While it may not be necessary to explicitly calculate individual rewards using Equation (2) in the code, incorporating this calculation can significantly enhance our understanding and improve interpretability. **Q2: Is the used learning algorithm off-policy?** A2: Yes. 1) We would like to clarify that the action value functions in our work are trained in an off-policy manner similar to Q-learning, as opposed to an on-policy approach like SARSA. This is evident in Line 100, where we define $\widetilde{Q}(\boldsymbol{o’};\theta^-)$ as the optimal target action value, represented as $\max_{\boldsymbol{a’}}Q(\boldsymbol{o’},\boldsymbol{a’};\theta^-)$. 2) As a result, in Equation 6, $\widetilde{Q_{\mathrm{tot}}}(\boldsymbol{o’},\theta_m^-)=\max_{\boldsymbol{a’}} Q_{\mathrm{tot}}(\boldsymbol{o’},\boldsymbol{a’};\theta_m^-)$ represents the optimal target team action value function, demonstrating the off-policy nature of our learning algorithm. 3) Regarding Equation 7, we apologize for the typo. The term $\widetilde{Q_{j}}(o'_j, a_j;\theta_p^-)$ is incorrect; it should be $\widetilde{Q_{j}}(o'_j;\theta_p^-)$. 
The correct expression of Equation 7 should be $$L^{\mathrm{ind}} _ {\mathrm{TD}}(\theta_p)=\sum\nolimits_{\chi^{\mathrm{indi}} _ j=(o_j,a_j,o_j',r_j)\in S'}\omega_j(r_j +\gamma \cdot \widetilde{Q_{j}}(o'_j;\theta_p^-)-Q_j(o_j,a_j;\theta_p))^2$$ We regret any confusion stemming from our oversight and appreciate the opportunity to rectify it in the revised version of our paper. **Q3: Is the input of the mixing network consistent with the outputs of the agent networks?** A3: Yes. In Algorithm 1 on page 5, specifically in Line 13-19, we provide a detailed description of a network update procedure. 1) After retrieving data from the replay buffer, the mini-batch data is fed into the agent network, resulting in the computation of individual action-values ($Q_i$). These individual action-values are subsequently forwarded to the mixing network, which combines them to obtain the team action-value ($Q_{\mathrm{tot}}$). 2) We calculate both the individual TD-loss and the team TD-loss independently. Notably, the gradient propagation and network updating steps are carried out independently for both the agent networks and the mixing network. 3) In the revised version of our paper, we intend to emphasize the consistent flow of information between the agent and mixing networks throughout the training process.  **Q4: The reason of the performance difference between QMIX-DIFFER and QMIX-divide.** A4: 1) QMIX-divide adopts all samples in the mini-batch instead of PER for training the agent networks. 2) We have thoroughly studied the paper you referred to and have identified that it indeed provides an explanation for our observed performance improvement to a certain degree. Consequently, we have decided to cite this paper in the revised version of our work and carefully discuss the implications it holds for our research. **Q5: Minor comments.** A5: We apologize for the presence of these typos and sincerely appreciate your valuable insights. 
We will promptly rectify the errors to ensure clarity and precision in our paper. Please let us know if you have any further concerns; we would be glad to have a discussion. --- Rebuttal Comment 1.1: Title: Be Glad to Tell Us Any Concern Comment: Dear Reviewer: We genuinely value your suggestions and comments. We address your comments point by point and try our best to respond to them. If there are further suggestions or questions, please let us know. We are eager to engage in a discussion with you. Best regards, The authors of DIFFER
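The cancellation discussed in A1 can be checked numerically. The sketch below assumes a hypothetical linear mixer $Q_{\mathrm{tot}} = \sum_i w_i Q_i$ (so $\partial Q_{\mathrm{tot}}/\partial Q_i = w_i$) and an illustrative choice of individual reward $r_i$ consistent with the stated identity; it is not Equation (2) from the paper, and all numbers are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
N, gamma, R = 3, 0.99, 1.0
w = rng.uniform(0.1, 1.0, N)    # hypothetical mixer weights -> dQtot/dQ_i = w_i
Q = rng.normal(size=N)          # current individual action-values
Q_tgt = rng.normal(size=N)      # target individual action-values (tilde Q_i)
Qtot, Qtot_tgt = w @ Q, w @ Q_tgt

team_delta = R + gamma * Qtot_tgt - Qtot

# One illustrative r_i that makes the individual TD error equal
# w_i * team_delta; the gamma * Q_tgt_i - Q_i part then cancels.
r = w * team_delta - gamma * Q_tgt + Q
indiv_delta = r + gamma * Q_tgt - Q

print(indiv_delta, w * team_delta)
```

Note that $\widetilde{Q_i}$ still enters through $\widetilde{Q_{\mathrm{tot}}}$ (`Qtot_tgt` above), matching point 2) of A1: the target values are needed to form the team target, but drop out of each agent's TD error.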
Summary: This is a very interesting and strong paper on MARL that seeks the strategy of learning individual policies rather than team policies in order to break symmetry and learn identity and contextual influences on the individual policies. In order to do so, it solves the technical problem of decomposing a team reward into individual rewards, and provides a proof that the method works. They then employ a replay buffer that selects individual experiences to train the agent network, akin to prioritized replay. The authors demonstrate their method on a variety of benchmarks, showing improvement. Strengths: The method is novel; the theory and experiments are solid. The writing was very clear and flowed very naturally. The authors demonstrate their method not just on multi-agent MuJoCo but also on the difficult StarCraft II micromanagement tasks. Weaknesses: I do not have any concerns. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have a limitations section that is fair and does not weaken the strengths of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer: Thanks for your valuable comments. We sincerely appreciate your positive evaluation and recognition of our research. To enhance our work’s robustness and comprehensiveness, we plan to incorporate additional experiment results, intuitive visualization, and theoretical analysis in the revised edition of our paper. These enhancements aim to strengthen the validity, clarity, and impact of our findings. Your valuable feedback has been instrumental in shaping the development of our research, and we are grateful for your contribution.
Summary: This paper introduces a method for decomposing shared rewards into individual rewards within value-based multi-agent reinforcement learning (MARL) methods. By decomposing the rewards, the method enables the prioritization of crucial individual experiences. Experimental results demonstrate the effectiveness of the proposed approach across a range of value-based MARL methods. Strengths: 1. The paper provides theoretical evidence for the optimization equivalence between value-based MARL methods and those employing decomposed rewards. 2. The concept of decomposing shared rewards into individual rewards is innovative and appears to yield positive results, showcasing the potential for significant impact. 3. The authors have made their code available, ensuring reproducibility of the experiments. Weaknesses: 1. The paper lacks a discussion of policy-based MARL methods, which is a significant gap in the related work. It would be valuable to include a comprehensive overview and comparison with policy-based approaches. 2. The absence of a comparison with policy-based methods diminishes the completeness of the evaluation. Including such a comparison would further strengthen the paper's contribution. Some related papers are [1][2]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there any intuitive visualization or illustration available that can help understand the decomposed individual rewards more clearly? 2. How does this method relate to policy-based MARL methods? It would be beneficial to discuss the potential application of individual reward decomposition to policy-based approaches. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper primarily focuses on cooperative MARL tasks. 
It would be intriguing to explore the applicability of reward decomposition in mixed cooperative-competitive settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer: Thank you for the constructive comments. **Q1: Intuitive visualization of the decomposed individual rewards.** A1: We have included visualizations displaying the individual rewards for each agent, along with corresponding screenshots, which can be found in Figure 3 of the Rebuttal PDF. The presented visualizations demonstrate that the individual rewards yielded by DIFFER align with the current game situation, further supporting the effectiveness of our approach. To provide greater clarity on the concept and implications of decomposed individual rewards, we will incorporate it in the revised version of our paper. **Q2: How does this method relate to policy-based MARL methods?** A2: 1) *Performance Comparison with Policy-Based Methods:* We notice that the details of the related papers in Weakness 2 appear to be missing, so we do not know exactly which papers you meant. We have conducted a comprehensive experiment comparing DIFFER with the classic policy-based method COMA[1]. The detailed results of this comparison can be found in the “global” response PDF, which strongly highlight the strengths and advantages of our proposed DIFFER framework. 2) *Potential Application to Policy-Based Approaches:* Our method, DIFFER, is specifically designed to address MARL problems where individual rewards are not available, and teams only have access to a shared team reward. In such scenarios, calculating individual action-values becomes a challenge. However, DIFFER provides a solution by decomposing individual trajectories, enabling the calculation and update of individual action-values. a) For policy-based methods that approximate individual action-values using individual critics, such as IPPO[2] and MADDPG[3], the traditional approach involves treating the team reward as an individual reward and using it to update individual critics. 
In this context, the decomposition of team experiences provided by DIFFER may not be directly applicable or necessary since the individual critics can approximate action-values using the team reward. b) On the other hand, for policy-based methods that do not require the approximation of individual action-values, such as MAPPO[4], our DIFFER method might not offer significant enhancements. In such cases, where there is no need to decompose team trajectories for individual action-value training, the contribution of DIFFER may not be as pronounced. In our forthcoming research, we will embark on an exploration of the application of our method, DIFFER, within the domain of policy-based methods. Please share any remaining concerns. [1] Foerster, Jakob, et al. "Counterfactual multi-agent policy gradients." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018. [2] de Witt, Christian Schroeder, et al. "Is independent learning all you need in the StarCraft multi-agent challenge?" arXiv preprint arXiv:2011.09533 (2020). [3] Lowe, Ryan, et al. "Multi-agent actor-critic for mixed cooperative-competitive environments." Advances in Neural Information Processing Systems 30 (2017). [4] Yu, Chao, et al. "The surprising effectiveness of PPO in cooperative multi-agent games." Advances in Neural Information Processing Systems 35 (2022): 24611-24624. --- Rebuttal Comment 1.1: Title: Be Glad to Tell Us Any Concern Comment: Dear Reviewer: We appreciate your comments and time! We wonder whether our response has addressed all of your questions about our work, and we are most glad to discuss anything with you. Best regards, The authors of DIFFER
Summary: The paper presents a new algorithm, DIFFER, to help individual agents learn from individual TD-errors such that the overall (team) model performance improves. The paper's proposed method differs from an existing method, MASER, in that it maintains the overall team objective. Overall model performance comparison was made against QMIX, which uses the traditional team TD-error only. Strengths: The paper adequately points out the limitation of the proposed method in that additional complexity, and thus more resources, is required. The paper also demonstrates how each complexity (i.e. individual TD-error computation and the selection phase) contributes to the final model performance compared to prior art. This helps the reader make an informed decision based on the tradeoff between complexity and utility demonstrated. The paper seems overall well-written such that it was fairly easy to follow. Weaknesses: I would like to see more comparisons of DIFFER's performance and complexity against MASER's, a closely related work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness section above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately listed the potential limitation in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer: Thanks for your valuable suggestion. 1) We conducted experiments to compare the performance of DIFFER and MASER in SMAC environments. According to your suggestion, we have conducted additional experiments to meticulously evaluate the performance of DIFFER in relation to MASER on the Google Research Football (GRF) task, which can be found in Figure 1 of the Rebuttal PDF. The comprehensive results underscore the strengths and advantages of our proposed DIFFER framework. 2) Here we explain the factors leading to the poor performance of MASER. a) It is important to acknowledge that MASER introduces a divergence in the optimization objective by seeking to maximize a combination of individual action-value and team action-value, as explained in lines 264-270. This divergence may contribute to its limitations. b) It is crucial to note that MASER predominantly focuses on addressing sparse reward multi-agent reinforcement learning (MARL) scenarios, whereas our experiments are exclusively conducted within dense reward environments. Please let us know if you have any further concerns; we would be glad to have a discussion. --- Rebuttal Comment 1.1: Title: Be Glad to Tell Us Any Concern Comment: Dear Reviewer: We sincerely appreciate your valuable feedback. We wonder whether our response adequately addresses all of your concerns. We are more than willing to provide any additional clarification or answer any questions you may have regarding our work. Best regards, The authors of DIFFER
Rebuttal 1: Rebuttal: We extend our sincere gratitude to all the esteemed reviewers for their invaluable and constructive comments. The consistently positive reviews our submission received from all the Reviewers have filled us with delight. We are greatly encouraged by their recognition of the originality (Review f7XM, U3kJ, ddQi), significant impact (Review U3kJ, ddQi), theoretical foundation (Review f7XM, U3kJ), reproducibility (Review f7XM), and clarity (Review Jgve, U3kJ, ddQi) showcased in our work. Their meticulous evaluation of our research has further bolstered our confidence in the merits and contributions of our study. We also appreciate reviewers pointing out our weaknesses. We address their comments point by point and try our best to respond to them. We hope our response addresses the reviewers' concerns. The additional experiments in the Rebuttal PDF are summarised as follows: In Rebuttal Figure 1, we show a comparison between our model DIFFER and MASER[1] on Google Research Football (GRF) tasks, highlighting the performance improvement of our method over MASER. In Rebuttal Figure 2, we show a comparison between our model DIFFER and a classic policy-based method COMA[2], highlighting the performance improvement of our method over COMA. In Rebuttal Figure 3, we present the visualization of individual rewards produced by our DIFFER method. The consistent correlation between individual rewards and the ongoing game situation serves as a testament to the efficacy and rationality of our DIFFER method. [1] Jeon, Jeewon, et al. "Maser: Multi-agent reinforcement learning with subgoals generated from experience replay buffer." International Conference on Machine Learning. PMLR, 2022. [2] Foerster, Jakob, et al. "Counterfactual multi-agent policy gradients." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018. Pdf: /pdf/8f227af75254d07c40120e9ea6181e00fd3dc22d.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Training Energy-Based Normalizing Flow with Score-Matching Objectives
Accept (poster)
Summary: The authors consider normalizing flows with linear transformations. Computing the determinant of their Jacobian is expensive, limiting their applicability when training via maximum likelihood. Therefore, the authors introduced an energy-based method to train normalizing flows with linear transformations, which circumvents the need to compute the determinant of the Jacobian. It only has to be computed once for inference, which leads to a significant speed-up in training. Compared to maximum likelihood-based training, the authors achieve similar results while getting a significant speed-up. Strengths: The idea of combining energy-based training with normalizing flows by phrasing the flow as an energy-based model is novel and interesting. It circumvents the need to compute the determinant of the Jacobian of some layers, which can save a lot of compute. It still needs to be computed for inference; however, since it is constant in the cases the authors considered, it has to be computed only once and can be stored for further use. Weaknesses: My biggest concern is the relevance of this work. Normalizing flows typically use transformations that are designed in such a way that the Jacobian determinant is easy to compute. For that reason, linear transformations are not very popular, also because they are not very expressive. The claim of the authors, that they are used in Glow and subsequent flow models for images, is true, but the linear transformation was only applied across channels, and since the number of channels is much smaller than the total number of input dimensions, i.e. the number of all pixels across all channels, they are not very costly in terms of evaluating their Jacobian's determinant. The authors apply their method to MNIST and CIFAR10, roughly matching the performance of their baseline with their method. However, both perform significantly worse than Glow and related methods. 
Hence, it is not clear whether their method gives them any advantage over these methods, either in terms of performance or speed. There are also procedures to estimate the Jacobian's determinant to cut down cost, such as the Skilling-Hutchinson estimator used by residual flows. The authors should have compared their method against such an estimator. Unfortunately, their method cannot be used for residual flows, as the determinant of the Jacobian cannot depend on the input to be treated as a normalization constant in an energy-based method. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * How does your method compare against Glow in terms of runtime? * How does using an estimator for the Jacobian's determinant compare against your method? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations were mentioned in the weaknesses section of the review. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and effort spent on the review, and would like to respond to the reviewer’s questions as follows. Please note that parts of the responses are provided in the [global comment](https://openreview.net/forum?id=AALLvnv95q&noteId=9f9rk2L3tW) due to the word limit. --- ### **Comments** --- **C1. (a)** My biggest concern is the relevance of this work. Normalizing flows typically use transformations that are designed in such a way that the Jacobian determinant is easy to compute. For that reason, linear transformations are not very popular, also because they are not very expressive. **(b)** The claim of the authors, that they are used in Glow and subsequent flow models for images, is true, but the linear transformation was only applied across channels, and since the number of channels is much smaller than the total number of input dimensions, i.e. the number of all pixels across all channels, they are not very costly in terms of evaluating their Jacobian's determinant. **Response: (a)** We value the reviewer's perspective, but respectfully hold a different viewpoint. Architectures that prioritize efficient log-Jacobian evaluations often come at the expense of limiting learnable transformations, which might compromise their capacity to capture complex feature representations, as highlighted in previous studies [r1-r3]. In contrast, our work introduces an alternative approach to address training inefficiencies. This approach does not impose restrictions on the linear transformations, and offers more flexibility in architectural design. In addition, linear transformations are well-recognized and extensively used in the current literature. This trend of utilizing linear transformations, specifically 1x1 convolution layers, was pioneered by Glow [r4], and subsequent studies [r5-r8] have gradually relaxed the constraints on the kernel sizes. 
The aforementioned literature has a common goal of incorporating linear layers with a more general form in model designs, and their experimental results have supported the notion that linear transformations play a vital role in enhancing the expressiveness of flow-based models. **(b)** In response to the reviewer's comment on the computational cost, it is important to highlight that the described scenario is a niche one, pertaining exclusively to cases with few channels and limited receptive fields within the model. While earlier methods [r4-r8] have gradually loosened this constraint, their methods necessitate careful management of the number of channels and the receptive fields to ensure computational feasibility. In contrast, our paper enables the use of arbitrary linear transformations in flow-based architectures. This key distinction allows the extension of EBFlow to more complex architectures. --- **C2.** The authors apply their method to MNIST and CIFAR10, roughly matching the performance of their baseline with their method. However, both perform significantly worse than Glow and related methods. Hence, it is not clear whether their method gives them any advantage over these methods, either in terms of performance or speed. **Response:** We would like to emphasize that our primary objective is not to achieve state-of-the-art performance. Instead, our goal is to deliver performance comparable to contemporary works while significantly enhancing training efficiency. A unique feature of our approach is its ability to incorporate arbitrary linear transformations in flow-based models without compromising this efficiency, setting our work apart from previous research. The results demonstrated in Section 5 validate that our proposed method can train flow-based models with linear layers efficiently. This leads to considerable speed improvements while maintaining competitive performance levels. 
We believe that these experimental results are sufficient to substantiate the main claim of our paper. --- **C3. (a)** There are also procedures to estimate the Jacobian's determinant to cut down cost, such as the Skilling-Hutchinson estimator used by residual flows. The authors should have compared their method with using such an estimator. **(b)** Unfortunately, their method cannot be used for residual flows, as the determinant of the Jacobian cannot depend on the input to be treated as a normalization constant in an energy-based method. **Response: (a)** The Skilling-Hutchinson estimator used by residual flows is not generally applicable to our model. In residual flows, each transformation block adheres to the constraint that its Lipschitz constant is less than 1 (i.e., $Lip<1$). Under such a constraint, the Jacobian determinant of each block can be estimated with a convergence guarantee through an infinite power series. In contrast, the flow-based architectures adopted in this paper do not enforce constraints on Lipschitzness. This suggests that the estimator presented in [r2] is not applicable to our models, since our models can exhibit $Lip\geq 1$. **(b)** We value the insights provided by the reviewer. However, we would like to highlight that our method is compatible with residual flows, while its efficiency enhancement depends on the architecture employed. The training efficiency of residual flows can potentially be improved under the case where a number of residual-flow blocks consist solely of linear transformations. In such a case, the Jacobian determinant of these residual-flow blocks can be treated as $Z(\theta)$ in EBFlow, allowing its computation to be bypassed. For instance, consider a residual-flow block $g_i(x;\theta)=x+\tilde{W}x$, where $\tilde{W}$ is the weight matrix with its spectral norm less than 1 (i.e., $||\tilde{W}||_2<1$). 
In this case, $J_{g_i}=I+\tilde{W}$ is formulated as $Z(\theta)$, enabling its determinant calculation to be circumvented. --- ### **Questions** --- (see the global comment) --- **References:**\ (see the global comment) --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: I'm still not completely convinced by the author's arguments, but I acknowledge their position. Given the mostly positive feedback from the other reviewers, I'm not opposed to accepting this paper and increase my score to a borderline accept. --- Reply to Comment 1.1.1: Comment: We would like to thank the reviewer again for the valuable review and feedback.
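The infinite power series invoked in this exchange, paired with Hutchinson-style trace probes as in residual flows, can be sketched numerically. Here `W_tilde` is an arbitrary illustrative matrix rescaled to spectral norm 0.5 (satisfying $||\tilde{W}||_2 < 1$), not a trained layer:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 5
A = rng.normal(size=(d, d))
W_tilde = 0.5 * A / np.linalg.norm(A, 2)   # enforce spectral norm 0.5 < 1

def logdet_series(W, n_terms=50, n_probes=2000):
    # log det(I + W) = sum_{k>=1} (-1)^(k+1) tr(W^k) / k  when ||W||_2 < 1,
    # with each trace estimated via Rademacher (Hutchinson) probes v: tr(M) ~ E[v^T M v].
    v = rng.choice([-1.0, 1.0], size=(d, n_probes))
    Wk_v, est = v, 0.0
    for k in range(1, n_terms + 1):
        Wk_v = W @ Wk_v                                # columns hold W^k v
        est += (-1) ** (k + 1) * np.mean(np.sum(v * Wk_v, axis=0)) / k
    return est

exact = np.log(np.linalg.det(np.eye(d) + W_tilde))
approx = logdet_series(W_tilde)
print(exact, approx)   # agree up to Monte-Carlo error
```

The series only converges under the Lipschitz constraint, which is the point made in Response (a): without $Lip < 1$ per block, this estimator is not available.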
Summary: Normalizing flows are generative models trained by maximizing the log-likelihood of the dataset under the model, which is expressed as the sum of the base log-probability of a transformed variable plus a series of log-Jacobian terms that keep track of the volume-distortion factors. Evaluating the log-Jacobian is usually the main bottleneck of normalizing flow architectures, so much so that the most popular architectures are designed to keep the log-Jacobian computations tractable. This paper introduces the use of techniques from the energy-based-models literature in order to more efficiently train normalizing flow architectures. In a normalizing flow architecture, it is common to have both elementwise non-linear layers, which give data-dependent but tractable log-Jacobian terms, and linear layers, whose log-Jacobian is data-independent but expensive to compute. The main idea introduced in this paper is to split the likelihood of a normalizing flow into the product of two terms: 1) an "energy function" that contains all the data-dependent terms and 2) a normalization constant that collects all the data-independent log-Jacobians of the linear layers. This mirrors the split in energy-based models, where the energy function is tractable but the normalization constant is intractable and is not directly evaluated. Consequently, the authors apply a series of training objectives developed to train energy-based models to this new setting, obtaining an efficient way of training normalizing flows with expensive linear layers. Strengths: The main idea is clever and original. It follows from the realization that the intractable terms of the log-Jacobian are independent of the data. This offers an interesting and well-motivated reason for borrowing techniques from the energy-based literature. The experiment section is adequate, with experiments both on toy data and on real high-dimensional datasets.
While the results are not spectacular, they certainly support the claim that the energy-based training can be used to train normalizing flows with a substantial speed-up and without a major loss of performance. The paper is well-written and the relevant literature is properly referenced. Weaknesses: Major points: In general, I am not convinced that this approach solves a major problem in the current literature. In fact, most successful flow architectures are designed to have fast log-Jacobian evaluations. Moreover, both continuous flows and the more recent flow-matching models can be used to train architectures with intractable Jacobian terms. The experiments only compare with the normalizing flow baseline and therefore do not provide evidence that this approach has a competitive advantage over continuous flows and diffusion-like models such as flow-matching models. Minor points: I find the presentation somewhat difficult to read, with several important equations and concepts scattered all over the paper. In particular, I think it would be useful to the reader to explicitly discuss the score-matching loss associated with Eq. 9, for example, by writing the concrete form of Eq. 5 as applied to Eq. 9 in the main text. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Could you clarify in which situations this model should be preferred over continuous flows and score-based diffusion models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The limitations of the approach have not been extensively discussed. I do not see any direct negative social impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
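The likelihood split described in the review's summary can be sketched numerically. The toy one-layer flow below is a hypothetical stand-in (names `flow`, `log_p`, `neg_energy` are not from the paper): all data-dependent terms go into the unnormalized part, while the linear layer's $\log|\det W|$ behaves as a constant.

```python
import numpy as np

# Toy one-linear-layer flow: log p(x) = (data-dependent terms) + log|det W|,
# where the last term plays the role of -log Z(theta) in the EBM view.
rng = np.random.default_rng(1)
D = 3
W = rng.standard_normal((D, D))            # linear layer: costly log-det

def flow(x):
    h = W @ x                              # linear layer
    u = np.tanh(h)                         # elementwise non-linearity
    log_jac_nl = np.sum(np.log(1.0 - u ** 2))  # tractable, data-dependent
    return u, log_jac_nl

def log_p(x):                              # full normalized log-density
    u, log_jac_nl = flow(x)
    log_base = -0.5 * np.sum(u ** 2) - 0.5 * D * np.log(2 * np.pi)
    return log_base + log_jac_nl + np.linalg.slogdet(W)[1]

def neg_energy(x):                         # unnormalized part only (no log-det)
    u, log_jac_nl = flow(x)
    return -0.5 * np.sum(u ** 2) - 0.5 * D * np.log(2 * np.pi) + log_jac_nl

x1, x2 = rng.standard_normal(D), rng.standard_normal(D)
```

The data-independent $\log|\det W|$ cancels in density ratios and in $\nabla_x \log p$, which is exactly the property score-matching objectives exploit.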
Rebuttal 1: Rebuttal: We would like to thank the reviewer for thoroughly summarizing our paper, and would like to respond to the reviewer’s questions as follows. --- ### **Comments** --- **C1.** Major points: **(a)** In general, I am not convinced that this approach solves a major problem in the current literature. In fact, most successful flow architectures are designed to have fast log-Jacobian evaluations. **(b)** Moreover, both continuous flows and the more recent flow-matching models can be used to train architectures with intractable Jacobian terms. The experiments only compare with the normalizing flow baseline and therefore they do not provide evidence that this approach has a competitive advantage over continuous flows and diffusion-like models such as flow-matching models. **Response: (a)** We respectfully disagree with the reviewer’s comment. Architectures that prioritize fast log-Jacobian evaluations often come at the expense of limiting learnable transformations, potentially restricting their capacity to capture complex feature representations, as highlighted in [1,2]. In contrast, our work introduces an alternative approach to address training inefficiencies. This approach does not impose restrictions on linear transformations, and offers more flexibility in architectural design. **(b)** Continuous normalizing flows (CNF) (e.g., FFJORD [3] and flow-matching models [4]) reside in a research domain distinct from the central focus (i.e., (non-continuous) normalizing flows) of our study. CNF differs from (non-continuous) normalizing flows due to its reliance on an ODE solver for simulating the sampling process. This characteristic renders CNF less suited for certain downstream applications that could be efficiently achieved through (non-continuous) normalizing flows. 
For instance, models leveraging the latent information of a flow-based model in their training objective (e.g., [5]) may require backpropagation through the inverse transformation $g$. In these scenarios, the adjoint method is required for calculating the gradients by backpropagating through the ODE solutions, which can be computationally intensive in terms of both time and memory. In addition, density estimation in CNF also necessitates the use of an ODE solver, which introduces additional errors into the approximated solution (e.g., the relative and absolute tolerances in the RK45 solver). This characteristic may render CNF less common in statistical analyses demanding accurate density estimation, such as [6] and [7]. While we acknowledge the ongoing efforts to enhance CNF, we posit that CNF and (non-continuous) normalizing flows represent distinct research avenues, with each having its unique applications.

---

**C2.** Minor points: (see above)

**Response:** We appreciate the reviewer's suggestion. However, substituting $E$ in Eq. (5) with the specific form of Eq. (9) would lead to overly lengthy equations. Separating Eq. (9) and Eq. (5) offers the advantage of preserving clarity throughout the presentation.

---

### **Questions**

---

**Q1.** Could you clarify in which situations this model should be preferred over continuous flows and score-based diffusion models?

**Response:** As explained in the response to C1 (b), (non-continuous) normalizing flows are capable of (1) exact density evaluation and (2) offering computational benefits in certain downstream applications compared to CNF, which lacks these capabilities. In a similar vein, score-based diffusion models also fall short in these two areas, making them unsuitable replacements for (non-continuous) normalizing flows.
More specifically, to explain why (non-continuous) normalizing flows are preferred, we first inspect score-based diffusion models and categorize them into three different types based on their continuity with respect to $t$ and the presence of added noise. This categorization serves to clarify the difference between them and (non-continuous) normalizing flows. First, when score-based diffusion models are interpreted as probability flow ODEs [8,9], they fall within the category of CNFs, and thus share the characteristics and limitations discussed in the response to C1 (b). Second, when diffusion models are interpreted as SDEs [10], they stand apart from normalizing flows due to their intrinsic stochasticity. The stochasticity introduces difficulty in exact density estimation, leading score-based diffusion models to rely on upper-bound approximations for their likelihood [9]. Third, discrete variants of diffusion models may also require approximations for density estimation. For example, DDPM [11] leverages an upper bound to approximate the negative log-likelihood, setting it apart from normalizing flows in this aspect. As a result, in certain scenarios, (non-continuous) normalizing flows are the favored choice, and diffusion models cannot adequately replace them.

---

**References:**\
[1] Chen et al. Residual Flows for Invertible Generative Modeling, NeurIPS, 2019.\
[2] Gresele et al. Relative gradient optimization of the Jacobian term in unsupervised deep learning, NeurIPS, 2020.\
[3] Grathwohl et al. FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models, ICLR, 2019.\
[4] Lipman et al. Flow Matching for Generative Modeling, ICLR, 2023.\
[5] Nalisnick et al. Hybrid Models with Deep and Invertible Features, ICML, 2019.\
[6] Nalisnick et al. Do Deep Generative Models Know What They Don't Know?, ICLR, 2019.\
[7] Jiang et al. Revisiting Flow Generative Models for Out-of-distribution Detection, ICLR, 2022.\
[8] Karras et al.
Elucidating the Design Space of Diffusion-Based Generative Models, NeurIPS, 2022.\
[9] Song et al. Maximum Likelihood Training of Score-Based Diffusion Models, NeurIPS, 2021.\
[10] Song et al. Score-Based Generative Modeling through Stochastic Differential Equations, ICLR, 2021.\
[11] Ho et al. Denoising Diffusion Probabilistic Models, NeurIPS, 2020.

---

Rebuttal Comment 1.1: Comment: Thank you for the reply. I do agree that there are valid reasons to use normalizing flows with discrete layers and that your work can have a relevant impact in the flow literature. I also agree with you that lifting the Jacobian-tractability constraint can open the door to more expressive flow architectures. However, this remains conjecture, since your paper does not show this increase in expressiveness either theoretically or experimentally. That said, I do think that this work is a step in the right direction and I am happy to keep my score and argue for acceptance.

---

Reply to Comment 1.1.1: Comment: We would like to extend our sincere gratitude to the reviewer for the insightful feedback and thoughtful response. In response to the new query regarding expressiveness that the reviewer raised subsequently, we wish we could present the additional experimental results directly. Unfortunately, since the rebuttal phase has concluded, we may be unable to share the results here. However, in light of the reviewer's feedback, we have conducted additional experiments and observed intriguing results. Specifically, the experimental results indicate that flow-based models involving unconstrained linear layers exhibit superior performance in comparison to models incorporating linear layers constrained by upper/lower-triangular weight matrices or those utilizing LU decomposition. The performance improvement suggests that unconstrained linear layers provide greater expressiveness than their constrained variants. These findings will be included in the final version of our paper.
We thank the reviewer again for engaging in a constructive discussion and for the reviewer’s appreciation of our method.
Summary: The authors use a connection between flows and EBMs to devise a new training method for flows via score matching. Their training procedure improves efficiency by avoiding computation of the Jacobian determinants that are independent of the values of the samples (i.e., only contribute to the normalizing constant of the PDF). Additionally, they identify two methods for improving training of the flows via score matching, namely (1) match-after-preprocessing, whereby the score matching is applied to the pre-processed variables, avoiding numerical instability from the pre-processing layers, and (2) using an exponential moving average for the weight updates. In their experiments, they show their approach significantly improves training efficiency. Strengths: - The paper identifies that computing the unnormalized PDF of a flow is significantly cheaper than the normalized PDF for linear flow layers, O(D^2) instead of O(D^3), noting that computing the Jacobian determinant is not necessary for the unnormalized PDF. This contribution is novel and valuable. - The experiments of the paper are strong, and demonstrate the utility of their approach. Weaknesses: For sample generation by inverting the flow, the inverse matrix for the linear flow layers will cost $D^3$ - this is not mentioned in the paper. This inversion may be performed once and then re-used for sample generation, but for large $D$ this could be prohibitive. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: This is the first time I've seen imputation done with flows - is this a novel contribution from the paper? How would sampling work for large $D$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: There is no discussion around inverting the flow when $D$ is large (see Weaknesses/Questions). Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the time and effort spent on the review, and would like to respond to the reviewer's questions as follows.

---

### **Comments**

---

**C1.** **(a)** For sample generation by inverting the flow, the inverse matrix for the linear flow layers will cost $D^3$ - this is not mentioned in the paper. **(b)** This inversion may be performed once, and then re-used for sample generation but for large $D$ this could be prohibitive.

**Response:** **(a)** We agree with the reviewer that indicating the $O(D^3L)$ sampling cost in the paper enhances clarity. This will be included in the forthcoming revision. **(b)** If the dimensionality $D$ is excessively large such that the inverse matrix of the linear layers cannot be explicitly calculated, a viable alternative is to adopt a stochastic sampler such as the Langevin MCMC generator discussed in Lines 327-335. The associated computational cost would depend on the number of steps (denoted as $N$) required for convergence and the time complexity of deriving the gradients of $E(x;\theta)$ (i.e., $O(D^2L)$). This results in an overall cost of $O(D^2NL)$, potentially more economical than the $O(D^3L)$ computation.

---

### **Questions**

---

**Q1. (a)** This is the first time I've seen imputation done with flows - is this a novel contribution from the paper? **(b)** How would sampling work for large $D$?

**Response:** **(a)** Imputation for flow-based models is a well-established concept, as demonstrated in the previous work [1], which explored such an application. The process of image inpainting becomes feasible when an energy function can be explicitly defined. In the case of flow-based models, a straightforward choice is selecting $E(x;\theta)=-\log p(x;\theta)$ for this purpose. Such an application of the energy function is more prevalently observed in the score-based and energy-based generative modeling literature (e.g., [2-4]).
**(b)** In response to the reviewer's inquiry, we have conducted supplementary experiments, and presented the imputation results of the FC-based models trained with DSM on the CelebA [5] and STL-10 [6] datasets. Both of the datasets have data dimensionality $D=$64x64x3. These results demonstrate EBFlow's potential in generating images with good quality. Please refer to Figure 1 of the PDF file in the [global response](https://openreview.net/forum?id=AALLvnv95q&noteId=9f9rk2L3tW) above. --- **References:**\ [1] Dinh *et al.* NICE: Non-linear Independent Components Estimation, ICLR Workshop, 2015.\ [2] Nguyen *et al.* Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space, CVPR, 2017.\ [3] Du *et al.* Implicit Generation and Modeling with Energy Based Models, NeurIPS, 2019.\ [4] Song *et al.* Score-Based Generative Modeling through Stochastic Differential Equations, ICLR, 2021.\ [5] Liu *et al.* Deep Learning Face Attributes in the Wild, ICCV, 2015.\ [6] Coates *et al.* An Analysis of Single Layer Networks in Unsupervised Feature Learning, AISTAT, 2011. --- Rebuttal Comment 1.1: Comment: Thank you for the responses to my questions and pointers to literature on imputation with flows. The additional imputation results on CelebA and STL-10 are a nice addition to the paper. I am happy with my strong recommendation for acceptance, and have no further questions. --- Reply to Comment 1.1.1: Comment: We express our gratitude again for the valuable feedback and response provided by the reviewer.
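The stochastic-sampler alternative mentioned in the response to C1 (b) above can be sketched as a plain unadjusted Langevin iteration. The snippet below is a toy illustration (the standard-normal "model" and the step size are hypothetical placeholders, not the paper's setup): each step needs only the model score, never a matrix inverse.

```python
import numpy as np

# Unadjusted Langevin dynamics: x <- x + (step/2) * score(x) + sqrt(step) * noise.
# For EBFlow, score(x) = -grad_x E(x; theta) costs O(D^2 L) per step,
# avoiding the O(D^3 L) matrix inversions needed for exact inversion.
rng = np.random.default_rng(4)

def score(x):
    return -x                              # score of a toy N(0, I) target

def langevin_sample(x0, step=1e-2, n_steps=1000):
    x = x0.copy()
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + 0.5 * step * score(x) + np.sqrt(step) * noise
    return x

samples = np.stack([langevin_sample(5.0 * rng.standard_normal(2))
                    for _ in range(200)])
```

With enough steps the chains forget their far-from-target initialization and the sample statistics approach those of the target distribution.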
Summary: This paper proposes a new flow-based approach to generative modeling called EBFlow. EBFlow uses training objectives from EBMs to reduce the training cost of flow-based methods, unlike previous approaches that mostly achieved this via different architectures or biased estimation methods. The main insight of EBFlow is that the change-of-variables formula for a series of transformations in a NF is broken into two parts: one containing only the linear-transformation terms and the other containing terms from the non-linear transformations. The linear term is then interpreted as the normalizing constant, and this interpretation aids in faster training strategies for flow-based methods. Strengths: The motivation for reducing training costs for NFs is discussed well and is a very relevant and difficult problem. Several studies have been conducted to address this, and the authors here propose a nice and novel methodology to address it. The exploration and discussion of the method as well as the motivation is thorough. The empirical analysis considered is detailed. Weaknesses: I am not sure how to interpret the empirical results. It is obvious from the plots in Fig 2 that the proposed method is much faster than the ML-based objective. However, the performance in all the tasks for the ML-based objective is usually better than or very close to EBFlow. I was curious why this is? Also, for the results reported in Tables 1 and 2, were the methods trained for the same wall-clock time? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's time and effort spent on the review, and would like to respond to the reviewer's questions as follows.

---

### **Comments**

---

**C1.** I am not sure how to interpret the empirical results. It is obvious from the plots in Fig 2 that the proposed method is much faster than the ML-based objective. However, the performance in all the tasks for the ML-based objective is usually better than or very close to EBFlow. I was curious why this is?

**Response:** The use of KL-divergence-based objectives (i.e., the ML and SML objectives) and Fisher-divergence-based objectives (i.e., the SSM, DSM, and FDSSM objectives adopted by EBFlow) exhibits a tradeoff between NLL performance and speed. KL-divergence-based objectives are expected to achieve superior NLL performance, since they explicitly minimize the NLL metric during the optimization process. On the other hand, employing Fisher-divergence-based objectives offers the advantage of efficient training by circumventing the computation of the Jacobian determinants of linear transformations. This increased speed may come at the cost of NLL performance compared to the KL-divergence-based objectives, since NLL is not explicitly minimized in Fisher-divergence-based objectives.

---

**C2.** Also, for the results reported in Tables 1 and 2, were the methods trained for the same wall-clock time?

**Response:** The training wall-clock times for the methods in Tables 1 and 2 differ, since the models are trained until convergence, as mentioned in the caption of Figure 2. We appreciate the question raised by the reviewer and will add the details in the forthcoming revision.

---

Rebuttal Comment 1.1: Comment: Thank you for the response. I have no further questions and will keep my score.

---

Reply to Comment 1.1.1: Comment: We would like to thank the reviewer again for the valuable review and response.
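The tradeoff discussed in C1 above hinges on the fact that Fisher-divergence objectives only query the model score $\nabla_x \log p(x;\theta) = -\nabla_x E(x;\theta)$, which is independent of $Z(\theta)$. A minimal denoising-score-matching sketch (toy model; all names hypothetical, not the paper's implementation):

```python
import numpy as np

# DSM (Vincent, 2011): match the model score at a noised point to the
# score of the Gaussian perturbation kernel. The log|det W| constant of
# the flow's linear layer never appears, since grad_x kills constants.
rng = np.random.default_rng(2)
D, sigma = 3, 0.1
W = rng.standard_normal((D, D))

def neg_energy(x):                     # unnormalized log-density of a toy flow
    t = np.tanh(W @ x)
    return -0.5 * np.sum(t ** 2) + np.sum(np.log(1.0 - t ** 2))

def model_score(x, eps=1e-5):          # grad_x of neg_energy (finite diff.)
    return np.array([(neg_energy(x + eps * e) - neg_energy(x - eps * e))
                     / (2 * eps) for e in np.eye(D)])

def dsm_loss(xs):
    total = 0.0
    for x in xs:
        x_t = x + sigma * rng.standard_normal(D)   # Gaussian perturbation
        target = (x - x_t) / sigma ** 2            # score of the kernel
        total += np.sum((model_score(x_t) - target) ** 2)
    return total / len(xs)

loss = dsm_loss([rng.standard_normal(D) for _ in range(4)])
```

Minimizing this loss over the model parameters never touches the normalization constant, which is what makes the per-iteration cost lower than that of the ML objective.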
Rebuttal 1: Rebuttal: This global comment includes extended discussions of the questions raised by reviewers htQa and fofM. The attached PDF file contains additional experimental results.

---

### **Reviewer htQa (Cont'd)**

---

**C8.** I think the author intends to model $g(\theta)$. Therefore, it is better to avoid using $E(x;\theta)$ to avoid confusion, as it is usually used for representing a parametric energy model.

**Response:** Using the notation $E$ is necessary since this paper aims to explore the connections between flow-based and energy-based models. Introducing the symbol $E$ emphasizes the perspective of viewing a flow-based model as a parametric energy-based model.

---

**C9.** For Table 2, the author could change the unit to batch/second for easier comparison.

**Response:** We thank the reviewer for the suggestion. The following table compares our results presented in batch/second and second/batch on the MNIST dataset.

- MNIST (FC-based)

| | ML | SML | SSM | DSM | FDSSM |
|-|-|-|-|-|-|
| sec. / batch | 1.25e-1 | 8.15e-2 | 3.02e-2 | 1.50e-2 | 7.68e-3 |
| batch / sec. | 8.00 | 12.27 | 33.11 | 66.67 | 130.21 |

---

**Q1.** Could the author explain in detail why the complexity of calculating Jacobian determinants is $O(D^3L)$ for non-linear transformations? Is it general for all kinds of non-linear functions?

**Response:** It seems that there might be some misunderstanding regarding the premise of the flow-based architecture discussed in this paper. As explained in Lines 59-61, the non-linear transformation is defined to have a computational cost of $O(D^2L)$. This complexity is determined by the sparsity of the Jacobian of $g_i \in S_n$. For instance, in Neural Spline Flows (NSF), the Jacobian determinant calculation of the non-linear transformation can be simplified to multiplying the diagonal elements of the Jacobian, resulting in a complexity that satisfies the premise stated in Lines 59-61.

---

**Q2.** What does the sentence in Lines 62-63 mean?
**Response:** In Lines 62-63, we present a number of examples of implementing the flow-based architecture satisfying the criteria stated in Lines 59-61. --- **Q3.** What is the FID score in generation tasks? **Response:** The FID score is seldom employed in the literature on normalizing flows. Contemporary studies, including [r4~r8], prefer NLL (Negative Log-Likelihood) and Bits/Dim (Bits per Dimension) as the evaluation metrics. Moreover, in the case of our related works [r3,r9], which are trained on the MNIST dataset, evaluating FID would necessitate additional adjustments, such as modifying the pre-trained Inception backbone to accommodate 28x28x1 inputs. Such modifications can significantly impact the numerical results and are rarely adopted in recent literature. --- **Q4.** What is the distribution of $p_u$? **Response:** $p_u$ can be any continuous distribution as long as its sampling process and density estimation can be easily implemented. In this paper, we follow many popular flow-based modeling methods (e.g., [r4,r10]) to choose $p_u$ as an isotropic Gaussian with zero mean and unit variance throughout all experiments. --- ### **Reviewer fofM (Cont'd)** --- **Q1.** How does your method compare against Glow in terms of runtime? **Response:** The runtimes of different losses calculated based on the Glow architecture are demonstrated in the following tables. The Glow architecture is fixed to 4 blocks to ensure the computation is manageable, and the kernel size ($ks$) of the convolutional layers within Glow varies from 1x1 to 7x7. All of the experiments are performed on an NVIDIA TITAN V GPU, and the results are reported as the average runtime for 10 independent runs. As shown in the rightmost three columns of both tables, SSM, DSM, and FDSSM exhibit great scalability with respect to the kernel size, as increasing the value of $ks$ does not lead to rapid growth in the average runtime. 
In contrast, the computational cost of the ML objective (used in the original Glow paper) increases significantly with respect to the kernel size. This phenomenon arises from the fact that transformations with larger $ks$ tend to exhibit Jacobian matrices with more non-zero elements, and thus increase the computational burden associated with the ML objective.

- $D=$ 28x28x1

| $ks$ | Baseline (ML) | SML | SSM | DSM | FDSSM |
|-|-|-|-|-|-|
| 1x1 | 0.269 | 0.201 | 0.340 | 0.189 | 0.114 |
| 3x3 | 0.380 | 0.366 | 0.344 | 0.213 | 0.122 |
| 5x5 | 0.554 | 0.460 | 0.346 | 0.213 | 0.122 |
| 7x7 | 0.859 | 0.526 | 0.351 | 0.222 | 0.123 |

- $D=$ 32x32x3

| $ks$ | Baseline (ML) | SML | SSM | DSM | FDSSM |
|-|-|-|-|-|-|
| 1x1 | 2.237 | 1.573 | 0.354 | 0.219 | 0.121 |
| 3x3 | 3.547 | 2.104 | 0.395 | 0.227 | 0.124 |
| 5x5 | 6.511 | 3.129 | 0.441 | 0.232 | 0.124 |
| 7x7 | 9.249 | 4.156 | 0.443 | 0.238 | 0.128 |

Please note that the results correspond to the training time measured in sec. / batch.

---

**Q2.** How does using an estimator for the Jacobian's determinant compare against your method?

**Response:** We presume the reviewer is referring to the estimator discussed in C3 (a). However, this estimator is inapplicable without the enforcement of an additional Lipschitz constraint, as explained in the response to C3 (a).

---

**References:**\
[r1] Behrmann et al. Invertible Residual Networks, ICML, 2018.\
[r2] Chen et al. Residual Flows for Invertible Generative Modeling, NeurIPS, 2019.\
[r3] Gresele et al. Relative gradient optimization of the Jacobian term in unsupervised deep learning, NeurIPS, 2020.\
[r4] Kingma et al. Glow: Generative Flow with Invertible 1×1 Convolutions, NeurIPS, 2018.\
[r5] Hoogeboom et al. Emerging Convolutions for Generative Normalizing Flows, ICML, 2019.\
[r6] Ma et al. MaCow: Masked Convolutional Generative Flow, NeurIPS, 2019.\
[r7] Lu et al. Woodbury Transformations for Deep Generative Flows, NeurIPS, 2020.\
[r8] Meng et al. ButterflyFlow: Building Invertible Layers with Butterfly Matrices, ICML, 2022.\
[r9] Pang et al.
Efficient Learning of Generative Models via Finite-Difference Score Matching, NeurIPS, 2020.\ [r10] Dinh et al. Density estimation using Real NVP, ICLR, 2017. Pdf: /pdf/d61160714e4b63056e0b2887db33905d2bef7382.pdf
Summary: This paper has proposed an energy-based normalizing flow model, in which the computation of Jacobian determinants for linear transformations can be skipped with score-matching objectives. This could enable deeper layers of linear transformations and make the flow model more efficient. Strengths: This paper has proposed an interesting idea about the relationship between flow models and energy-based models. The flow model can be interpreted as an energy-based model with a tractable normalization term, and the proposed score-matching objective can significantly improve the efficiency. Weaknesses: 1. The experiments lack comparisons with other energy-based flow models like [1][2][3] and recent flow models like [4][5][6]. 2. Is there any theoretical analysis of the complexity of the different objective functions? 3. The author misses citations for the very first deep-learning-based energy-based models like [7][8]. Writing weaknesses: 1. The background in Section 2.2 could be broken up into two sections, i.e., energy-based models and score-matching models. 2. It is hard to see the difference between the results in the first two rows of Figure 1. 3. In line 51, even though the transformation function in the flow model is invertible, the author should specify that $g(\cdot)$ is the inverse transformation function. 4. In line 51, using $g_i(\cdot;\theta)$ is misleading. Do different transformation functions share the same parameters? 5. I think the author intends to model $g(\theta)$. Therefore, it is better to avoid using $E(x;\theta)$ to avoid confusion, as it is usually used for representing a parametric energy model. 6. For Table 2, the author could change the unit to batch/second for easier comparison. Reference: [1] Xie, Jianwen, et al. "A Tale of Two Latent Flows: Learning Latent Space Normalizing Flow with Short-run Langevin Flow for Approximate Inference." arXiv preprint arXiv:2301.09300 (2023). [2] Xie, Jianwen, et al.
"A tale of two flows: Cooperative learning of langevin flow and normalizing flow toward energy-based model." arXiv preprint arXiv:2205.06924 (2022). [3] Nijkamp, Erik, et al. "Learning energy-based model with flow-based backbone by neural transport mcmc." arXiv preprint arXiv:2006.06897 2 (2020). [4] Chen, Ricky TQ, et al. "Residual flows for invertible generative modeling." Advances in Neural Information Processing Systems 32 (2019). [5] Maaløe, Lars, et al. "Biva: A very deep hierarchy of latent variables for generative modeling." Advances in neural information processing systems 32 (2019). [6] Vahdat, Arash, Karsten Kreis, and Jan Kautz. "Score-based generative modeling in latent space." Advances in Neural Information Processing Systems 34 (2021): 11287-11302. [7] Xie, Jianwen, et al. "A theory of generative convnet." International Conference on Machine Learning. PMLR, 2016. [8] Xie, Jianwen, et al. "Cooperative learning of energy-based model and latent variable model via mcmc teaching." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 32. No. 1. 2018. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Could the author explain in detail why the complexity of calcuating Jacobin determinants is $O(D^3 \cdot L)$ for non-linear transformation? Is it general in all kinds of non-linear functions? 2. What does the sentence in line 62-63 mean? 3. What is the FID score in generation tasks? 4. What is the distribution of $p_{u}$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: This paper has propose an interesting view of taking flow model as an eneryg-based model with score matching objectives. 
However, the experiments lack comparisons with many of the baseline models listed in the 'Weaknesses' part, as well as other evaluation metrics for generation. Additionally, the author should rephrase some sentences and formulas to avoid being misleading. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and effort spent on the review, and would like to respond to the reviewer’s questions as follows. Please note that parts of the responses are provided in the [global comment](https://openreview.net/forum?id=AALLvnv95q&noteId=9f9rk2L3tW) due to the word limit. --- ### **Comments** --- **C1.** The experiments lack comparison with other energy-based flow model like [1][2][3] and recent flow models like [4][5][6]. \ **C2.** The author misses citations for the very first deep-learning-based energy-based models like [7][8]. **Response:** The references offered by the reviewer are either not applicable to the context of this work or beyond the scope of this work. Therefore, attempting a comparison of their performance may be impractical. [1] discusses the utilization of an additional decoder structure to define and train flow-based models in latent space. On the other hand, [2] and [3] explore the training techniques of optimizing an energy-based model according to a flow-based model. It is important to note that the primary emphasis of [1-3] is not on accelerating the training process of flow-based models, which constitutes the central theme of this paper. Residual Flow [4] presents an efficient method for training i-ResNet. However, the concept of [4] differs from this work, as it is derived based on the assumption of the Lipschitz constraint. In particular, each residual-flow block in [4] is constrained to have a Lipschitz constant less than 1, which is not a requirement in the architecture adopted in this paper. Due to the fundamentally different premises, a comparison between our method and [4] would provide limited insights. Regarding [5,6], they primarily concentrate on VAE and score-based models, respectively, which are not directly related to the research field of this work. Moreover, it is crucial to highlight that the primary focus of this work lies in normalizing flows rather than energy-based models. 
As a result, references [7,8] are also outside the scope of this study. --- **C3.** Is there any theoretical analysis of the complexity of the different objective functions? **Response:** The energy function defined in Eq. (9) has a forward propagation time complexity of $O(D^2L)$. This is based on the observation that forward passing $g$ requires $O(D^2L)$ time, and calculating the determinant of the non-linear transformations requires $O(D^2L)$ time, as specified in Lines 55-63. This implies that the complexity of calculating FDSSM is $O(D^2L)$ time. Moreover, differentiation operations exhibit the same computational complexity as the forward propagation process. As a result, the SSM and DSM objectives, along with their gradients with respect to $\theta$, can be computed in $O(D^2L)$ time. On the other hand, the ML and SML objectives require a complexity of $O(D^3L)$ due to specific computational requirements. The ML objective involves computing the Jacobian determinants of the linear layers, while the SML objective necessitates the computation of inverse matrices for the linear transformations during training. These operations raise the time complexity to $O(D^3L)$ when compared to the other objectives. --- **C4.** The background in section 2.2 could be broken up into two sections, i.e., energy-based model and score-matching model. **Response:** We appreciate the reviewer's suggestion. However, splitting Section 2.2 into two distinct subsections, “Energy-based Models” and “Score-matching Models”, might not be ideal, as it undermines the main goal of Section 2.2 of this paper: explicitly distinguishing between parameterization and training objectives. More specifically, models trained using score-matching objectives can also fall under the category of energy-based models. In the current presentation, we first discuss the parameterization (i.e., Eq. (3)) and then describe its training methods (i.e., Eq. (4)-(7)). 
This arrangement provides clarity in terms of distinguishing the concepts of parameterization and training methods. Therefore, we maintain that the current presentation in Section 2.2 is more suitable. --- **C5.** It is hard to see the difference between the results in the first two rows of Figure 1. **Response:** As specified in Lines 247-249 of the manuscript, the objective of Fig. 1 is to display comparable qualitative results between the baseline and EBFlow. It is important to note that the central aim of Section 5.1 is to motivate the adoption of EBFlow by demonstrating the ability to match its performance with the baseline method. The experimental results presented in Fig. 1 substantiate this claim by showcasing similar qualitative outcomes between the baseline and EBFlow. --- **C6.** In line 51, even though the transformation function in a flow model is invertible, the authors should specify that $g(\cdot)$ is the inverse transformation function. **Response:** The definition of flow-based models in Eq. (1) suggests that $g$ is an inverse transformation function, despite this not being explicitly stated. In Eq. (1), the function $g$ transforms the data vector $x$ to the latent variable $u$. This indicates that $g$ is the inverse of the generator $g^{-1}$, which maps $u$ to $x$. --- **C7.** In line 51, using $g_i(\cdot; \theta)$ is misleading. Do different transformation functions share the same parameters? **Response:** Employing separate notations, such as denoting $\theta_i$ for each $g_i$ in Line 51, could provide greater precision. However, in the context of this paper, each $g_i$ is parameterized using a subset of $\theta$. This formulation offers notational simplicity, which contributes to the conciseness of this work. --- **C8.** I think the author intends to model $g(\theta)$. (...).\ **C9.** For Table 2, (...). **Response:** (See the global comment) --- ### **Questions** --- (See the global comment)
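The complexity contrast described in the response to C3 can be illustrated with a small sketch (our own illustration, not the paper's code; all variable names are hypothetical): a dense linear layer has a full $D \times D$ Jacobian whose exact log-determinant requires an $O(D^3)$ factorization, while an elementwise non-linearity has a diagonal Jacobian whose log-determinant is an $O(D)$ sum.

```python
import numpy as np

# Sketch of the cost asymmetry discussed in C3 (illustrative only).
rng = np.random.default_rng(0)
D = 16
W = rng.standard_normal((D, D))  # dense linear layer
x = rng.standard_normal(D)

# Linear layer: exact log|det W| needs an O(D^3) LU factorization.
sign, logdet_linear = np.linalg.slogdet(W)

# Elementwise tanh: diagonal Jacobian diag(1 - tanh(z)^2),
# so its log-determinant is just an O(D) sum.
z = W @ x
logdet_tanh = np.sum(np.log(1.0 - np.tanh(z) ** 2))

# With L such layers, the linear parts dominate: O(D^3 L) vs O(D L).
assert np.isfinite(logdet_linear) and logdet_tanh <= 0.0
```

This matches the rebuttal's point that objectives requiring the linear layers' determinants (or inverses) scale as $O(D^3L)$, while the score-matching objectives avoid the cubic term.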
Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach
Accept (spotlight)
Summary: In this work, the authors aim to develop an algorithm for online convex optimization that can adapt to the specific curvature of the loss function without knowing it in advance. The goal is to develop an algorithm that can adapt to strongly convex, exp-concave and convex loss functions dynamically, while also deriving some problem-dependent guarantees. They also ensure that their method is computationally efficient, as it only requires one gradient query per round. To do so, they present a three-layer algorithm that estimates both the type of loss function and its parameters. Crucially, this kind of multi-layer approach generally comes at the cost of logarithmic factors. These three algorithms follow pre-existing structures (MsMwC (Chen et al. 2021) for the meta algorithms and an optimistic Online Mirror Descent for the base algorithm), but the authors rely on several tricks and refinements of the analysis to exploit negative terms in the regret analysis and obtain competitive regret bounds. Finally, they also provide a novel decomposition of the regret which allows them to reduce the number of gradient queries from $O(\log^2 T)$ to $O(1)$ at each round. This result is of independent interest, as they show that it can be used with different meta algorithms, such as Adapt-ML-Prod (van Erven and Koolen 2016), to recover worst-case guarantees (whereas the main results of the paper are stated in terms of problem-dependent quantities). Strengths: This paper provides significant contributions to the field of OCO. Achieving optimal results simultaneously across several environments has been an important topic in learning theory in the past few years, and, by providing universal guarantees, this paper fits in this line of work. The presented algorithm builds upon existing results and provides non-trivial improvements to their analysis. These technical contributions are interesting and likely to be built upon in the future. 
It is worth noting that this paper is particularly well written. The body of the paper focuses on providing detailed explanations for the choice of methods, and the authors take particular care in highlighting the connections and differences with the rest of the literature. Discussions such as that in Section 3.2.1, which highlights the different approaches that could have been considered and were discarded, are not that common and should be encouraged. The proofs, deferred to the appendix, are clear and detailed. Appendix A provides several applications for the results, showing that the method can be used to bridge the gap between stochastic and adversarial online convex optimization and for two-player zero-sum games. In both cases, the proposed algorithm obtains regret bounds that at least match the state of the art up to log-factors, and sometimes improve upon it, while considering a broader setting. Weaknesses: While many comparisons with other results in the literature have been provided, it would be interesting to get comparisons with lower bounds, in particular for the small-loss bounds that depend on F and V. As the results achieved by the current method match those of Zhang et al. (2022), though with a better runtime complexity, it could also be interesting to get experimental results to compare them. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you discuss if and how you think these results could be further improved? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the constructive feedback and appreciation of our work! Below we answer your questions on the lower bounds and the possibility of further improvements. **Q1**. It would be interesting to get comparisons with lower bounds, in particular for the small-loss bounds that depend on F and V. **A1**. Thanks for the insightful suggestion. The optimal bounds for convex functions are $O(\sqrt{V_T})$ and $O(\sqrt{F_T})$. For exp-concave and strongly convex functions, the optimal results are $O(\ln V_T)$ and $O(\ln F_T)$, with an additional dimension dependence for exp-concave functions. For more details, please refer to Lemma 9 of [1] and Corollary 3 of [2]. We will add a corresponding discussion in the next version. **References:** [1] Online Optimization with Gradual Variations, COLT 2012 [2] Beyond Logarithmic Bounds in Online Learning, AISTATS 2012 --- **Q2**. Could you discuss if and how you think these results could be further improved? **A2**. Thanks for the question. An open problem left in our work is to *achieve the optimal regret bound for convex functions* (by removing the extra $\ln V_T$ factor) *while attaining the current optimal rates for exp-concave and strongly convex ones.* In the future, we will investigate whether this term can be removed, either by novel modifications and analysis of the MsMwC algorithm or by exploring the negative stability terms in other meta algorithms (e.g., Adapt-ML-Prod). We will include this discussion on future directions in the next version. --- We believe that our research occupies a nonnegligible position in universal online learning, especially given the significance of the gradient-variation quantity coupled with the novel techniques we've introduced. In the end, thanks again for recognizing the importance and contributions of our work. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your answers, I don't have any further questions at this point.
Summary: This paper puts together and designs new techniques to reach a high level of adaptivity in online convex optimization. It is a timely contribution that is useful for bringing together various algorithms optimal for different problem setups and for better adapting to the characteristics of each setup. Strengths: This is an excellent paper in terms of presenting the results and the qualitative explanation of the techniques and proofs. The adaptivity contribution of the paper is also very satisfying in terms of general knowledge of machine learning. Weaknesses: I do not see any weakness in this paper. However, I have not checked the proofs in the appendix. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: No question. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and appreciation of our work! We believe that our research occupies a nonnegligible position in universal online learning, especially given the significance of the gradient-variation quantity coupled with the novel techniques we've introduced. The gradient variation is a fundamental problem-dependent quantity in modern online learning, largely due to its profound connections with both stochastic/adversarial OCO and games. As a result, our work finds broad application scope in these fields by obtaining universal algorithms with nearly optimal guarantees therein. Moreover, the techniques we've developed, such as the cascaded correction terms, the negative terms in MsMwC, and the meta-base regret decomposition with customized surrogate functions, are poised to capture the interest of the broader community and potentially spark further innovations.
Summary: This paper proposes algorithms that enjoy data-dependent regret bounds and adapt to the loss function classes. The algorithm improves on Zhang et al., 2022 in the sense that 1) it also achieves a gradient-variation bound for convex Lipschitz losses and 2) it requires only one gradient evaluation at each round. The first is achieved by the introduction of optimism. The second is achieved by appropriately designed surrogate functions for the base learners. --- The authors have addressed my questions. I keep the original rating. Strengths: 1. The algorithm is new and improves on Zhang et al., 2022 (see the summary). 2. The paper clearly addresses the ideas behind algorithm design and analysis. Weaknesses: 1. **Comparison of complexity.** Please also compare the number of instances of the base learners required for each algorithm in Table 1. This information helps evaluate the computational complexity. 2. **Algorithm 1.** It is said that the base learners are set "as specified in Section 2," but Section 2 (preliminaries) does not give any algorithm. 1. **Presentation.** - Section 4 is presented in a hasty way. The reader may not be able to understand the ideas without reading Appendix C. - Table 1: The Lipschitzness and smoothness assumptions are missing. The assumptions may not hold in, e.g., online portfolio selection. - Ln. 35: That "the learner requires to know the function information (type and curvature) in advance to select suitable algorithms" only holds for textbook OCO algorithms and not general OCO. - The three aspects in Ln. 57--60 can be skipped as they will be addressed in detail in Ln. 79--89. - Ln. 58: The term "small-loss bound" has not been explained before. It is better to explain it, as the term is not common sense to general machine learning researchers. - Ln. 125: That $N \approx 1 + \log T + \log T$ looks weird. Perhaps there is some typo. 2. **Typos.** - Ln. 57: aspect*s* - Ln. 
219: *for* our purpose Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please address the first two weaknesses and the possible typo in Ln. 125. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This is a theory paper. All assumptions are explicitly written. The potential issue regarding the computational complexity has been raised in the weakness block above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and appreciation of our work! Given that all three questions (computational complexity, base learner setup, and the number $N$) relate to the base learner configuration, we first provide a comprehensive explanation of it, followed by the answers to your questions. --- **Base learner setup:** we are addressing a problem where both the type (strongly convex, exp-concave, or convex) and curvature coefficient (value of $\alpha$ or $\lambda$) of the functions are unknown. Ideally, the best base learner is the one that runs the algorithm for the right function type with an accurate guess of the curvature coefficient. Since both are unknown in our problem, we employ multiple base learners to hedge the uncertainty. The key to the problem is designing a meta algorithm to effectively track the best base learner. Specifically, we make *one base learner run an algorithm for convex functions* (only one is needed since there is no curvature coefficient here). For exp-concave and strongly convex functions, where the curvature coefficient's value is unknown, we employ multiple base learners for each case (the number is specified below). They all run an algorithm for the corresponding function type but with different guessed values for the curvature coefficient. Concretely, we discretize the possible range $[1/T,1]$ of $\alpha$ and $\lambda$ into $\lceil \log_2 T \rceil$ values ($\mathcal{H}$ in Line 128) and choose one of them as a guess. Overall, we need *$\lceil \log_2 T \rceil$ base learners each for exp-concave and strongly convex functions*. Therefore, we use $N = 1 + \lceil \log_2 T \rceil + \lceil \log_2 T \rceil = O(\log T)$ base learners. --- **Q1**. Please also compare the number of instances of the base learners required for each algorithm in Table 1. **A1**. Thanks for the suggestion. Previous works employ a *two-layer* structure, necessitating $O(\log T)$ base learners. 
Our method leverages a *three-layer* structure and thus requires $O(\log^2 T)$ base learners. We will include this information in the revised version. **Q2**. It is said that the base learners are set "as specified in Section 2," but Section 2 (preliminaries) does not give any algorithm. **A2**. Thanks for the question. The base learner setup is provided in Lines 125-129 and please refer to the description above Q1 for more details. We will give a more detailed formalization in the revised version. **Q3**. Ln. 125: That $N \approx 1 + \log T + \log T$ looks weird. Perhaps there is some typo. **A3**. We appreciate your observation. It is not a typo. We represent $N$ as the *sum of three components* to denote the number of base learners needed for different kinds of functions. Specifically, one for convex functions (which don't have a curvature coefficient), and $O(\log T)$ for both exp-concave and strongly convex functions due to their unknown curvature coefficient. --- We are grateful for your meticulous review and will carefully revise the mentioned presentation issues and typos to enhance the paper's readability. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thanks for the response. I will keep the rating. **Q1.** Please include this information. Thanks. **Q2.** I raised this issue simply as a matter of presentation. The statement may confuse some readers, as Lines 125--129 are buried in a paragraph in the literature review. Please consider rewriting the related sentences to make things clearer. **Q3.** Please add a few words to explain the reason for the two $\log T$ terms. --- Reply to Comment 1.1.1: Title: Thanks for the suggestions! Comment: **Response to Q1:** Thanks for the recommendation. We'll incorporate a comparison of base learner numbers from various methods in our revised version. **Response to Q2:** Thanks for the sincere advice. 
Given that an extra page is permitted in the camera-ready version, we will rewrite this part to give a more detailed explanation of the base learners. **Response to Q3:** Thanks. We'll provide further clarifications in this section. Finally, thanks again for the thorough review and valuable suggestions!
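The base-learner configuration described in this thread can be sketched as follows (a minimal illustration in our own naming: the helper `base_learner_grid` is hypothetical, and the particular powers-of-two grid covering $[1/T, 1]$ is our reading of the discretization into $\lceil \log_2 T \rceil$ values described above):

```python
import math

def base_learner_grid(T):
    # Discretize the curvature-coefficient range [1/T, 1] into
    # ceil(log2 T) guesses: H = {2^-1, 2^-2, ..., 2^-ceil(log2 T)}.
    n = math.ceil(math.log2(T))
    H = [2.0 ** -(i + 1) for i in range(n)]
    # One base learner for convex functions (no curvature coefficient),
    # plus one learner per guess for exp-concave and strongly convex.
    N = 1 + len(H) + len(H)
    return H, N

H, N = base_learner_grid(1024)
assert min(H) == 1 / 1024 and max(H) == 0.5  # grid spans down to 1/T
assert N == 1 + 2 * math.ceil(math.log2(1024))  # N = 21 = O(log T)
```

Any true coefficient in $[1/T, 1]$ is then within a factor of 2 of some grid point, which is why $O(\log T)$ guesses per function type suffice.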
Summary: A new algorithm for online learning is provided, together with improved regret bounds compared to the state of the art. The new regret guarantees are summarised in Table 1 on page 2: in the case of a strongly convex function, the regret guarantee is of $O(\min(\log V_T,\log F_T))$, where $V_T$ and $F_T$ are the gradient variation and cumulative loss of the best comparator respectively (cf. equation on lines 53 to 54). In the case of exp-concave functions, the rate is the same, whereas in the case of a convex function, the rate is $\widetilde{O}(\min(\sqrt{V_T},\sqrt{F_T}))$ instead. In terms of the regret bound, only the convex case is an improvement over the existing results ([3], $O(\sqrt{F_T})$). An additional claimed advantage is that the number of gradient queries is reduced to $O(1)$ instead of $O(\log T)$: this is valid for an improved algorithm (separate from the main algorithm presented in the paper) with additional surrogate losses, which is only presented in the appendix on page 18 (Algorithm 2). The main algorithm itself is a combination of several existing techniques: at each iteration, a set of $KN$ experts $(k,i)\in [K]\times [N]$, each consisting of a different gradient descent method, is maintained. Each expert computes the gradient update as per Optimistic OMD [3]. Then, for each fixed $k$, the experts $(k,i')$ for $i'\leq N$ are aggregated exactly as in MsMwC (cf. [1] and equation (2.3)); then the resulting experts are aggregated again via another use of MsMwC, with different parameters. The proof is a very long list of re-expressions of the regret via various existing results and additional calculations, drawing a lot of inspiration from [1], which itself improves on techniques in [2]. =================Post-Rebuttal============= After discussing with the authors, I am convinced that the presentation issues are theoretically fixable, and I still think the amount of work is adequate. 
Thus, I will keep my score and hope the authors fix all the minor errors as promised and try hard to make the paper more accessible to a broader audience. ============== **References** [1] Sijia Chen, Wei-Wei Tu, Peng Zhao and Lijun Zhang. Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization. ICML 2023. [2] Pierre Gaillard, Gilles Stoltz and Tim Van Erven. A Second-order Bound with Excess Losses. COLT 2014. [3] Lijun Zhang, Guanghui Wang, Jinfeng Yi and Tianbao Yang. A Simple yet Universal Strategy for Online Convex Optimization. ICML 2022. Strengths: This is a highly technical and long paper, and it is clear the authors know what they are doing. The parts of the proofs I managed to check are generally correct with only minor typos (though they are not reader friendly). Weaknesses: In my opinion, the main paper is poorly written. I understand that the authors are trying to explain the intuition for why they construct their algorithm in this way, but there isn't enough context or revision of existing methods to make the paper readable. The paper reads like a soliloquy of a researcher trying to solve the problem at hand: for instance, in lines 140 to 148, the authors are saying "if we used Adapt-ML [2], bad things will happen, what can be done about this?" then later in Section 3, their solution is proposed, which consists in using [3] instead, with a purportedly novel choice of optimism. See also how the authors use expressions like "Another try is to" on line 206. This is an issue not just in terms of grammar but also because it occurs in the context of a vague exposition that follows the stream of consciousness of the author rather than trying to present a cogent story. Here are examples of more concrete **issues with the presentation**: **1** There is a lot of talk in the main paper of "how to cancel out terms" which comes a long time before the part where the authors introduce their main algorithm. 
When they finally decide to do so on page 7, it is only the basic version of the algorithm (not the one with the $O(1)$ gradient calls) and it still vaguely refers to "the base learners described in Section 2", which I can only imagine refers to what is introduced in line 168 (section 3.1), but is still presented as new in line 177 "to unify all kinds of functions, we propose...." **2** The concept of "optimism" isn't properly introduced either in the main text or in the supplementary; the only definition is on line 72 in the intro: "a hyperparameter encoding historical information", which I am given to understand means slightly different things in different algorithms. A few more words of explanation here would be nice. Definitely, the base learners should be defined properly in the main text. In the current form, not even the reference to [3] is especially clear outside of Table 1. The same goes for the section names; terms such as "exogeneous negativity" are not very informative. In footnote 2, the authors even speak of "implementing $m_t$", which is definitely very informal and should be explained more clearly. **3** Note also that the Bregman divergence is not defined in lines 486 to 487 or 211-212 (page 6), despite the fact that it would make the description more complete and that it is indeed defined in the reference [1] when the same equation is introduced. **4** In the proof of Lemma 2, the first equation is hard to understand without background in the field. The only explanation is "by standard analysis of optimistic OMD". It would be nice to say that this refers specifically to Theorem 1 of [4], which is explained better later in the appendix when the argument arises again (line 767, page 30). Since it is natural for the reader to read the proof of Lemma 2 first, the more detailed explanation should be present there as well. **5** Similarly, the proof of Lemma 1 on page 14 uses equation E.1 from page 29, which is completely unnecessary as E.1. 
is derived from first principles. **6** The proof of Lemma 2 is done for "an arbitrary comparator $u\in\Delta$", but the concept of comparator isn't really defined there, though it becomes clear from context after some effort. This also seems to run counter to the authors' statement elsewhere in the paper that they "focus on the proof for fixed learning rate", which seems not to be the case in the proof of Lemma 2. **7** What does the notation $\|.\|_{U_t}$ mean on line 761? Is it explained somewhere? I understand that I am not an expert in the field and my difficulties understanding the bigger picture are mostly due to this, but I do feel like in the case of well-written papers, I am often able to make better progress, in less time, towards understanding more complex papers than this one in equally distant areas. However, I am not sure; it may be that the prerequisites are intrinsically larger. **References** [1] Sijia Chen, Wei-Wei Tu, Peng Zhao and Lijun Zhang. Optimistic Online Mirror Descent for Bridging Stochastic and Adversarial Online Convex Optimization. ICML 2023. [2] Pierre Gaillard, Gilles Stoltz and Tim Van Erven. A Second-order Bound with Excess Losses. COLT 2014. [3] Lijun Zhang, Guanghui Wang, Jinfeng Yi and Tianbao Yang. A Simple yet Universal Strategy for Online Convex Optimization. ICML 2022. [4] Zhao, Peng; Zhang, Yu-Jie; Zhang, Lijun; Zhou, Zhi-Hua. Adaptivity and Non-stationarity: Problem-dependent Dynamic Regret for Online Convex Optimization. ArXiv, May 2023. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: **1** Can you describe the base learner from [3] concisely? Perhaps rewrite the main paper in a more careful way? **2** I am a bit confused about what you mean by the $O$ notation sometimes: which constants are considered to be "$O(1)$" exactly? 
On page 31, it seems like $\gamma$ is treated as a variable (not $O(1)$), but it is less clear in the case of $D$ and $G$: at the end of the proof of Lemma 10, **the $O(1)$ notation hides a term of $4DG^2$**. If you can absorb this into the $O(1)$ notation, why do you still need to write $\gamma D^2$? I can sort of see that the reason is that there is a $\gamma$ in there, but providing a bit of context might help. **3** Do you really mean an equal sign at the second equality (first line) describing the Adaptivity term on page 30? It seems like if $1/\gamma<\frac{D}{\sqrt{1+\bar{V}_T}}$, **the equality wouldn't hold**. More worrying still, the inequality would be in the wrong direction. **4** In Lemma 3, you make the assumption that $(\ell_{t,k}-m_{t,k})^2 =\langle \ell_{t,k}-m_{t,k},p_{t,k}\rangle^2$. Could you provide more context there? **5** Could you summarize the difference between your work and [1] more concisely? Why was the base learner [3] not used directly in [1]? What would happen if we used the correct base learner but did not use the hierarchical approach from your method? **Minor mathematical errors:** **1** I think the term $-8\sum_{t=2}^T \|p_t-p_{t-1}\|$ should be $-4\sum_{t=2}^T \|p_t-p_{t-1}\|$ in Lemma 2. Indeed, the definition of "term C" excludes the factor of $1/2$ at the bottom of page 14 (as evidenced not just by the curly bracket but also by the use of Pinsker's inequality in line 498, as there would need to be a factor of 16 instead of 32 if the 1/2 were included): this factor of $1/2$ needs to be reintroduced in line 513 on page 16. It would also be nice to quote Pinsker's inequality and perhaps add a citation at line 499 (page 15). **2** In the calculations on page 15, lines 497-498, at the third equality on the first line, the $b_i$ should be in the numerator instead of the denominator. 
**3** In the second line of the series of equations for the optimality gap in the proof of Lemma 10 on page 30, I think it should be $\leq$ instead of $=$ because there is a term of $-\|x^*-\hat{x}_{T+1}\|^2\frac{1}{2\eta_T}$ which is dropped out. **4** In the statement of Lemma 5, it should be $\|p-q\|_1^2$ instead of $\|p-p\|_1^2$ **5** In the proof of Lemma 1 on page 14, at the second line of equalities after line 482, I think it should be $\leq $ and not $=$ because of a missing $\langle g_T-g_{T+1},x_T-x_{T,i*}\rangle$ term which is added to make the sum look simpler. **6** In the definitions of the iterations of $p_t$ and $q_t$ from [1] in line 486 page 14 and also in line 211 page 6, the last term should be $D(p,q_{t-1})$ instead of $D(p,q_{t})$, consistent with equation 6 in [1]. **NB**: I have used q instead of \hat{p} due to issues with markdown. **Typos** line 7: "the O notation" (missing "the") line 146: constants=> constant line 161, same thing line 205 "it is still open that whether..>" ==> "it is still an open problem to determine whether" line 217 "same of" ==> "same as" line 252 "same of" ==> "same as" line 488: "follows the similar logic of" ==> "follows a similar logic as" line 506 "which equals to" ==> "which is equivalent to" (this somewhat changes the meaning and makes the line more understandable. Also, one might want to consider turning some of these inline equations into {align} blocks). line 739 "it the optimal step size" ==> "if the optimal step size" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: The improvement compared to [3] is somewhat disappointing compared to the much larger complexity of the algorithm: it seems that the main improvement is to use a single algorithm for all types of functions and to reach similar rates, with a better rate in the case of convex functions. This is interesting theoretically, but the algorithm seems too unwieldy to be used in either practice or future work. I cannot fully vouch for the correctness, as I often had to read the supplementary line by line without fully grasping the bigger picture. It is also hard to vouch for the originality compared to [3], as I am not familiar enough with it. Nevertheless, this seems like solid work, assuming the presentation is improved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback and very careful check of our paper! We will carefully revise it according to the mentioned points. Due to the 6,000-character limit of this year's rebuttal, we first answer your questions about [Chen et al., 2023] and [Zhang et al., 2022] and then respond to presentation/typo issues and other questions in the next reply after the reviewer-author discussion begins. --- **Q1**. Can you describe the base learner from [Zhang et al., 2022] concisely? Perhaps rewrite the main paper in a more careful way? **A1**. We appreciate your question, and thanks for the suggestion. We will give a more detailed formalization in the revised version. Below we provide a comprehensive explanation. Specifically, we consider the problem where the kind (strongly convex, exp-concave, or convex) and curvature coefficient (value of $\alpha$ or $\lambda$) of the functions are unknown. Ideally, the best base learner is the one that runs the algorithm for the right function type with an accurate guess of the curvature coefficient. Since both are unknown in our problem, we employ multiple base learners to hedge against the uncertainty. The key to the problem is designing a meta algorithm that effectively tracks the best base learner. Specifically, we set *one* base learner to run an algorithm for convex functions (only one is needed since there is no curvature coefficient here). For the remaining two cases, e.g., for exp-concave functions, since the value of $\alpha$ is unknown, we discretize its potential range $[1/T,1]$ into $O(\log T)$ values ($\mathcal{H}$ in Line 128) and select one of them as a guess. In total, we require $O(\log T)$ base learners for the exp-concave case, all of which execute an algorithm for exp-concave functions but with *varying guesses* of the curvature coefficient $\alpha$. A similar arrangement also applies to strongly convex functions. --- **Q2**. 
Could you summarize the difference between your work and [Chen et al., 2023] more concisely? Why was the base learner of [Zhang et al., 2022] not used directly in [Chen et al., 2023]? What would happen if we used the correct base learner but did not use the hierarchical approach from your method? **A2**. Thanks for the question. We compare with [Chen et al., 2023] along the following aspects: * **Difference:** the two works focus on *different problems*. Specifically, ours considers obtaining gradient-variation bounds in the universal problem, while theirs studies the *stochastically extended adversarial (SEA)* model, an interpolation between stochastic and adversarial OCO. Since the base learners in [Zhang et al., 2022] are employed to deal with the universal problem, they cannot be used directly in [Chen et al., 2023]. * **Similarity:** one similarity is that both works consider three kinds of functions (strongly convex, exp-concave, and convex). Thus, our analysis of *base learners' negative terms* has some parallels with theirs, which are also *standard* in optimistic online learning. * **Application:** our method can be applied to their problem with nearly optimal universal results due to the profound connection between the gradient variation and the SEA model, resolving a *major open problem* therein (see their conclusion for the open problem). --- **Q3**. The improvement compared to [Zhang et al., 2022] is somewhat disappointing compared to the much larger complexity of the algorithm... This is interesting theoretically, but the algorithm seems too unwieldy to be used in either practice or future work. **A3**. Thanks for the feedback. While our algorithm might appear complex, it stands as the *first* to achieve gradient-variation bounds for the universal problem --- an *open problem* identified by [Zhang et al. 2022]. The improvement over [Zhang et al. 
2022] is significant, not only due to the importance of the gradient-variation quantity itself but also because the improvement from $T$ to $V_T$ is *polynomial* for convex functions, whereas it is *logarithmic* in the remaining cases (as stated in Line 69). Furthermore, our results can be applied to various learning problems, encompassing the *stochastically extended adversarial (SEA)* model and *games* (Appendix A). In doing so, we tackle a *major open question* posited by [Chen et al., 2023]. Please refer to Table 3 and Table 4 on Page 13 for an overview of our results in the aforementioned applications. The complexity of our three-layer method arises primarily because, to our knowledge, *only MsMwC* can yield the desired negative terms for cancellations (also one of our technical contributions) while securing a second-order regret bound. In the future, we will focus on obtaining the same rates with a two-layer structure. A possible solution is to explore the negative stability terms in other meta algorithms (e.g., Adapt-ML-Prod). We will include a discussion on these future directions, especially concerning computational efficiency, in the updated version. --- We believe that our work offers valuable contributions to the community. We hope the above replies address your concerns, and we would appreciate a reevaluation of our paper's score. We are happy to provide further clarifications if needed in the following author-reviewer discussions. We also take this opportunity to sincerely thank you for the careful review, including the thorough examination of the proofs! Your suggestions are very insightful and important for further improving the paper. --- Rebuttal Comment 1.1: Title: Response to other questions Comment: In this reply, we respond to presentation/typo issues and other questions, hoping to help you understand our work better. 
Since some LaTeX commands such as `\hat{}` and `\boldsymbol{}` do not render well in OpenReview, we use other notations instead or simply omit them. --- In this part, we clarify the **presentation issues**. **Q1**. The presentation order. **A1**. We truly appreciate your advice. We will reorder the content in the revision to provide readers with an overview of the entire paper before delving into the specifics. **Q2**. The concept of "optimism"; the definition of base learners; section names such as "exogenous negativity"; "implementing $m_t$". **A2**. We greatly appreciate your suggestions. We will provide more explanation of optimism when it first appears. Given that an extra page is allowed in the camera-ready version, we will include a more comprehensive formalization of the base learners. We will also carefully revise the paper to remove uninformative statements. **Q3**. The Bregman divergence is not defined. **A3**. Thanks. We will include the definition in the revised version. **Q4**. In the proof of Lemma 2, the first equation is hard to understand without a background in the field. **A4**. We appreciate your keen observation! We will cite Theorem 1 of Zhao et al. [2023] at this point. **Q5**. Referring to equation E.1 in the proof of Lemma 1 is unnecessary. **A5**. Thanks. We will remove it. **Q6**. The comparator and learning rate in Lemma 2. **A6**. Thanks for highlighting this. In Lemma 2, we provided a more general result with an arbitrary comparator and changing learning rates. This was done in the hope that the negative stability term analysis would be comprehensive enough for readers interested solely in the MsMwC algorithm. We will state this clearly to prevent misunderstandings. **Q7**. What does the notation $\|\cdot\|_{U_t}$ mean on line 761? Is it explained somewhere? **A7**. Thanks for your sharp observation. It refers to the matrix norm $\|\mathbf{x}\|_{U_t} = \sqrt{\mathbf{x}^\top U_t \mathbf{x}}$. We will provide its definition in the revised version. 
--- In this part, we answer your **questions**. **Q8**. The $O$-notation. **A8**. Thanks for raising this. The gradient norm $G$ and domain diameter $D$ are generally considered constants in constrained online learning. As a result, we incorporate them into the $O(1)$ term. Regarding $\gamma$, since it can take different values in various scenarios (e.g., please refer to Line 694 and Line 704), we do not include it in $O(1)$, even though $\gamma$ is always a constant. **Q9**. Do you really mean an equal sign at the second equality (first line) describing the Adaptivity term on page 30? **A9**. Thanks for the question. The correct derivation should use $\le$ due to $\eta_t = \min\{D/\sqrt{1+\bar{V}_t}, 1/\gamma\} \le D/\sqrt{1+\bar{V}_t}$. **Q10**. In Lemma 3, you make the assumption that $(\ell_{t,k} - m_{t,k})^2 = \langle \ell_{t,k} - m_{t,k}, p_{t,k} \rangle$. Could you provide more context there? **A10**. We appreciate your insightful question. This is not an assumption but rather a *condition* that is inherently satisfied by our algorithm. For context, this condition serves to ensure a self-contained proof for our meta algorithm and has been validated, for instance, in Line 595. We appreciate this inquiry and will refine the manuscript to underscore that this is a condition, not an assumption. --- In this part, we address your remarks on some minor **mathematical errors**. **Q11**. The constant before $\|p_t - p_{t-1}\|_1^2$ and the citation of Pinsker's inequality. **A11**. Thanks for the sharp observation! It is indeed a typo. We will rectify it by adjusting the constant to 4 and will cite Pinsker's inequality. **Q12**. In the calculations on page 15, lines 497-498, at the third equality on the first line, the $b_i$ should be in the numerator instead of the denominator. **A12**. Thanks. $b_i$ should indeed be in the numerator, and we will revise it. **Q13**. 
In the second line of the series of equations for the optimality gap in the proof of Lemma 10 on page 30, I think it should be $\le$ instead of $=$. **A13**. Thanks. We will correct it to be $\le$. **Q14**. In the statement of Lemma 5, it should be $\|p - q\|_1^2$ instead of $\|p - p\|_1^2$. **A14**. Thanks for catching this error! It is a typo, and we will correct it. **Q15**. In the proof of Lemma 1 on page 14, at the second line of equalities after line 482, I think it should be $\le$ and not $=$. **A15**. Thanks. It is a typo, and we will revise it to be $\le$. **Q16**. In the definitions of the iterations of $p_t$ and $q_t$ from [1] in line 486 page 14 and also in line 211 page 6, the last term should be $D(p, q_{t-1})$, where $q$ denotes `\hat{p}`. **A16**. Thanks. We will revise it in the follow-up version. And thanks for finding some additional typos in both the main paper and the appendix. We will revise them carefully.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes an online convex optimization algorithm that is adaptive to the environment in two ways. Specifically, it can achieve logarithmic regret for problems with good properties such as strong convexity and exp-concavity, as well as $\sqrt{T}$ regret in the worst case. Further, it achieves improved regret bounds for problems with small variation of the function gradients. In addition, it works with only one gradient call per round. Strengths: - The paper is well structured and easy to follow. - Existing studies are adequately reviewed and the paper is informative for the reader. - The paper clearly explains the structure of the proposed method, the idea of the proof, and its implications in great detail. Weaknesses: Compared to the study by Zhang et al. [2022], the magnitude of the contribution of this paper appears somewhat limited. Although the design of the algorithm is quite sophisticated, the main component is a combination of already known techniques. It would be better to have a more detailed description of the challenges in algorithm design and analysis, as well as the new techniques proposed in this paper. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can you address the concerns described in Weaknesses above? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: I have no concerns; the limitations and potential negative societal impact are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the valuable feedback. We believe that our work makes solid contributions and holds significant interest for the community. The submitted version may not have conveyed our findings clearly enough, leading to the reviewer's perception. We will carefully revise the paper to clarify the unique contributions of our work. Herein, we elaborate on our work's results and techniques to highlight its unique significance. --- **Results:** our research offers more than an enhanced gradient-variation bound for convex functions over [Zhang et al., 2022]. Below we emphasize the importance of the problem and our results. * **Importance of problem:** obtaining universal gradient-variation bounds is highly important but challenging, left as an *open problem* by [Zhang et al., 2022] and now resolved by us. The importance comes from two aspects: * The convex case, where the method of [Zhang et al., 2022] fails, is far more important than the other two cases (exp-concave and strongly convex) since the improvement from $T$ to $V_T$ is *polynomial* in the convex case, whereas *logarithmic* in the other two cases (as stated in Line 69). * The gradient variation is a *fundamental* problem-dependent quantity in modern online learning due to its implications for worst-case and small-loss guarantees, and its profound connections with adversarial/stochastic convex optimization and games. * **Importance of our results:** our results can be applied to various learning problems. In particular, applying them to the *stochastically extended adversarial (SEA)* model gives a *single* algorithm with *nearly optimal* regret bounds for different kinds of functions, thus resolving the *major open problem* left by [Chen et al., 2023] (see their conclusion for the open problem). Furthermore, our results can also be applied to *games*, providing universal guarantees therein. The applications are deferred to *Appendix A* due to page constraints. 
Please refer to Table 3 and Table 4 on Page 13 for an overview of our results in the aforementioned applications. --- **Techniques:** our method is not a combination of already known techniques but with some important innovations, as outlined in Lines 90-99. As far as we know, these techniques did not appear in previous works, which distinctly differentiate our method from that of [Zhang et al., 2022]. * The first innovation is a *novel optimism design* (elaborated in Section 3.1), demonstrating that a straightforward modification of existing methods is insufficient to tackle this problem. * The second innovation involves a *novel meta-base regret decomposition* with customized surrogate losses for base learners (detailed in Section 4), reducing gradient complexity to one per round. Note that such a regret decomposition is *inapplicable* to [Zhang et al., 2022] because their method cannot deal with the positive stability terms as we do. * The third innovation is incorporating *cascaded correction terms* within a three-layer structure (Section 3.2.2). Although the idea of correction terms is not new to other problems, such as non-stationary online learning [Zhao et al., 2021] and the multi-scale expert problem [Chen et al., 2021], our work is the *first* to introduce it to universal online learning, and the application to a three-layer structure necessitated *an extensive adaptation* of the technique. * Two interesting novel byproducts arise in our techniques. * The first one is the *negative terms in MsMwC* (Section 3.2.1), which may be of independent interest to the community. * The second one is a simple method and analysis for the worst-case universal guarantees (detailed in Proposition 1 in Section 4) and a general regret decomposition in the online ensemble structure for exp-concave and strongly convex functions. --- If our responses have properly addressed your concerns, please consider updating your score. Thanks! 
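To make the multi-base-learner construction discussed in these rebuttals concrete (a geometric grid of $O(\log T)$ curvature-coefficient guesses, one base learner per guess, with a meta algorithm tracking the best one), here is a minimal illustrative sketch. It uses a plain exponential-weights (Hedge) meta-learner purely as a simple stand-in for MsMwC, and the names `curvature_grid` and `HedgeMeta` are our own illustrations, not from the paper:

```python
import numpy as np

def curvature_grid(T):
    """Geometric grid of candidate curvature coefficients covering [1/T, 1].

    Halving repeatedly from 1 down to ~1/T gives O(log T) guesses, so at
    least one guess is within a factor of 2 of the true (unknown) value.
    """
    k = int(np.ceil(np.log2(T))) + 1
    return [2.0 ** (-i) for i in range(k)]

class HedgeMeta:
    """Exponential-weights meta-learner over base learners.

    A stand-in for the MsMwC meta algorithm: it maintains a weight per
    base learner and tracks the best one via multiplicative updates.
    """
    def __init__(self, n_experts, lr):
        self.w = np.full(n_experts, 1.0 / n_experts)
        self.lr = lr

    def combine(self, decisions):
        # Play the weight-averaged decision of the base learners.
        return float(np.dot(self.w, decisions))

    def update(self, losses):
        # Multiplicative-weights update on the base learners' losses.
        self.w = self.w * np.exp(-self.lr * np.asarray(losses))
        self.w = self.w / self.w.sum()
```

With $T = 128$ this yields 8 guesses $\{1, 1/2, \dots, 1/128\}$; in the paper's setting one such grid is built for exp-concave and one for strongly convex functions, plus a single base learner for the convex case.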
--- Rebuttal Comment 1.1: Comment: Thank you very much for your very thorough response. I now have a better understanding of the importance of this research. I maintain my positive score.
Summary: To address the drawback of Zhang et al. [2022], whose method does not enjoy gradient-variation bounds for convex functions (it uses an optimistic version of Adapt-ML-Prod with a second-order bound from [Wei et al., 2016]), the paper develops a three-layer online ensemble structure with a two-layer meta learner running MsMwC [Chen et al., 2021] under different parameter configurations, where each MsMwC-Mid is further connected to $N$ base learners to explore the unknown function information. Regret bounds of $\mathcal{O}(\ln V_T)$, $\mathcal{O}(d \ln V_T)$, and $\hat{\mathcal{O}}(\sqrt{V_T})$ are obtained for strongly convex, exp-concave, and convex loss functions, respectively. Two different levels of adaptivity with problem-dependent gradient variations are fulfilled, i.e., adaptivity to unknown function information and to benign environments. Strengths: In Zhang et al. [2022], the convex case was not addressed. The two-layer framework of Zhang et al. [2022] is generalized to a three-layer online ensemble structure. A single algorithm with gradient-variation bounds for all kinds of functions is designed. Weaknesses: 1. The derived results substantially depend on Zhang et al. [2022], [Chen et al., 2021], Zhao et al. [2021], and [Wei et al., 2016]. 2. The results of the algorithm still belong to the class of MsMwC [Chen et al., 2021]. The decomposed regret and injecting cascaded correction terms into both the top and middle layers make the work replicate published findings without adding substantial knowledge, leading to a lack of novelty. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I believe there exist other gradient-variation bounds similar to [Wei et al., 2016]. Can better results be obtained by means of them? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful feedback. Below we clarify our unique contributions and differentiate our approach from existing ones, hoping to address your concerns. --- **Q1**. The derived results substantially depend on [Zhang et al., 2022], [Chen et al., 2021], [Zhao et al., 2021], and [Wei et al., 2016]. **A1**. We acknowledge that some algorithm components of our approach are based on existing works, and thus the new ones might appear as "simple modifications" (depending on one's familiarity with online learning). However, effectively integrating them to obtain our results necessitates *non-trivial innovations*. We will include more elaborations on technical contributions in A2. Below, we emphasize the importance of the problem and our results. * **Importance of problem:** obtaining universal gradient-variation bounds is highly important but challenging, left as an *open problem* by [Zhang et al., 2022] and now resolved by us. The importance of the problem comes from two aspects: * The convex case, where the method of [Zhang et al., 2022] fails, is far more important than the other two cases (exp-concave and strongly convex) since the improvement from $T$ to $V_T$ is *polynomial* in the convex case, whereas *logarithmic* in the other two cases (as stated in Line 69). * The gradient variation is a *fundamental* problem-dependent quantity in modern online learning due to its implications for worst-case and small-loss guarantees, and profound connections with adversarial/stochastic convex optimization and games. * **Importance of our results:** our results can be applied to various learning problems. In particular, applying them to the *stochastically extended adversarial (SEA)* model gives a *single* algorithm with *nearly optimal* regret bounds for different kinds of functions, thus resolving the *major open problem* left by [Chen et al., 2023] (see their conclusion for the open problem). 
Furthermore, our results can also be applied to *games*, providing universal guarantees therein. The applications are deferred to *Appendix A* due to page constraints. Please refer to Table 3 and Table 4 on Page 13 for an overview of our results in the aforementioned applications. To summarize, while some algorithm components of our method are based on existing works, we have made non-trivial use of them and effectively resolved *two open problems* raised in the literature. We believe that our work makes solid contributions and holds significant interest for the community. --- **Q2**. The results of the algorithm still belong to the class of MsMwC [Chen et al., 2021]. The decomposed regret and injecting cascaded correction terms into both the top and middle layers make the work replicate published findings without adding substantial knowledge, leading to a lack of novelty. **A2**. We respectfully disagree with this comment. While inspired by existing works, effectively unifying them to solve our problem requires comprehensive, non-trivial use of them. Besides, there are also unique challenges in our problem, which necessitate novel technical contributions to solve. Below we discuss the mentioned techniques (MsMwC, regret decomposition, and cascaded correction terms), hoping to address your concerns. * **MsMwC:** our meta algorithm resides within MsMwC but incorporates novel techniques, including a *novel optimism design* (detailed in Section 3.1) and *negative stability terms in the analysis* (Section 3.2.1), which may be of independent interest to the community. * **Regret decomposition:** our regret decomposition in Section 4 is novel, allowing the algorithm to use only one gradient query in each round. Note that such a regret decomposition is *inapplicable* to [Zhang et al., 2022] because their method cannot deal with the positive stability terms as we do. 
* **Cascaded correction terms:** although our use of cascaded correction terms is not entirely new, the application to a three-layer structure necessitated *an extensive adaptation* of the technique. Importantly, our research focus deviates significantly from the existing literature --- our work is the *first* to introduce corrections for universal online learning, whereas prior works use them for different purposes, such as non-stationary online learning [Zhao et al., 2021] and the multi-scale expert problem [Chen et al., 2021]. --- **Q3**. I believe there exist other gradient-variation bounds similar to [Wei et al., 2016]. Can better results be obtained by means of them? **A3**. We are not sure whether we understand this question accurately. As far as we know, simply modifying [Wei et al., 2016] does not suffice to obtain gradient-variation universal regret bounds (as explained in Lines 164-176). This is due to the lack of negative terms in their regret analysis, which makes it hard to perform the cancellations we carry out in our work. --- We will revise the paper to ensure that readers understand our key contributions better. If our responses have properly addressed your concerns, please consider updating your score. Thanks! --- Rebuttal Comment 1.1: Comment: Dear Reviewer FwDP, We sincerely appreciate your helpful comments. As the period for author-reviewer discussions is coming to an end, we kindly ask whether our response has properly addressed your concerns and potential misunderstandings. Please let us know if you have any more questions, and we are happy to provide further clarification. Thanks! Best Regards, Authors
null
null
null
null
NAS-X: Neural Adaptive Smoothing via Twisting
Accept (poster)
Summary: The paper proposes NAS-X, a novel approach to estimate the posterior expectations in Reweighted Wake Sleep (RWS) architectures. Instead of a traditional estimation using self-normalized importance sampling (SNIS), and similar to Neural Adaptive Sequential Monte-Carlo (NASMC), the proposed method employs a Sequential Monte-Carlo (SMC) approach to estimate the necessary expectations. In contrast to NASMC, NAS-X uses smoothing distributions instead of filtering distributions as targets, which improves particle efficiency and reduces the variance of the estimates. This requires the estimation of twist sequences, which the paper does using the density-ratio approach SIXO. Strengths: - Efficient particle-based estimation of posteriors in sequence models is an important topic and has a long history in the machine learning community. - The paper is well-written and relatively easy to follow. I was particularly impressed by the paper’s natural motivation: starting with a concise recap of RWS, the paper clearly explains the need for smoothing SMC and its estimation via twists. - The main technical contribution (Eq.(8) and Eq.(9)) is SIXO-based smoothing SMC for posterior inference in RWS architectures. From a technical point of view this contribution is relatively simple, because all necessary pieces (RWS, NASMC, and SIXO) were already available, but the insight that the RWS updates are compatible with SIXO estimation is noteworthy and must be appreciated. - The experiments validate the proposed approach in settings with known ground truth and discrete latent variables, as well as challenging inference in Hodgkin-Huxley models. While the experiments with Gaussian-linear models and Switching Linear Dynamical Systems are important sanity checks and confirm the advantages of smoothing SMC over filtering SMC, I particularly enjoyed the real-world experiments on voltage dynamics in neural membranes. 
They not only demonstrate that NAS-X is more particle-efficient than its competitors but also that the proposed model could be useful beyond synthetic environments. Weaknesses: - My main concern with this paper is its potential impact: SMC-based inference in RWS architectures is already a niche topic and there is nothing fundamentally wrong with filtering SMC. Smoothing SMC introduces the additional complication of twist estimation, but even that has already been addressed with SIXO. My worry is that the remaining contribution likely has a relatively small audience. - Another point of concern are NAS-X’s multi-level approximations: SMC is already approximate in nature, but now, in addition to the quality and efficiency of the particles, accurate estimation of the twist sequence becomes a separate challenge. There is a trade-off between the additional information contained in future observations and a decrease in robustness due to a more complex learning problem, and I would have liked to see experiments that directly evaluate the quality of the learned twists. - The paper claims that smoothing SMC leads to lower-variance estimates compared to filtering SMC (l.32f, l.92f), but the experiments do not investigate this claim directly. Some indirect evidence is provided in Figure 1 and Figure 3, but I would have appreciated additional insights. - The description of the experimental setup could be better structured. I did find most of the information I was looking for, but it required constant jumping between different sections of the paper and between the paper and the supplemental material. The information contained in the supplemental material is also not referenced well enough in the main paper and I would encourage the authors to include more pointers to the supplemental material. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The paper generally does a good job giving credit to prior work, but the relationship between SIXO and NAS-X is not clear enough. If SIXO already performs smoothing SMC, what is the main difference between the two? Is NAS-X more than SIXO applied to RWS? What architecture is used for SIXO *without* NAS-X? - The effect of the number of training particles on the reported performance metrics remains a bit of a mystery. I would appreciate a reproduction of Figure 3(a) with 32 and 128 training particles. - The meaning of “inclusive KL”, in contrast to “exclusive KL”, is not clear enough. I found the definitions in the literature, but they should be mentioned in the paper. Typos: l.48.5 (“,”), l.52 (“goals”). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - The paper does not discuss the limitations of the proposed model. Completely absent in this work is a runtime analysis. Does NAS-X's higher particle efficiency translate to faster (absolute wall time) inference compared to NASMC/SIXO? - The paper does not discuss the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
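As background for the filtering-vs-smoothing distinction this review centers on, a minimal bootstrap particle filter (filtering SMC) can be sketched as follows. Smoothing SMC, as in SIXO and NAS-X, would additionally multiply each step's weights by ratios of learned twist functions approximating $p(y_{t+1:T} \mid z_t)$; that part is omitted here, and all function names are illustrative rather than taken from the papers' code:

```python
import numpy as np

def bootstrap_pf(rng, ys, sample_init, sample_trans, log_obs, n):
    """Bootstrap particle filter (filtering SMC) with multinomial resampling.

    Proposes from the model's own dynamics, weights by the observation
    likelihood at each step, and accumulates the standard SMC estimate
    of the log marginal likelihood log p(y_{1:T}).
    """
    zs = sample_init(rng, n)                 # z_1 ~ p(z_1), one per particle
    logZ = 0.0
    for t, y in enumerate(ys):
        logw = log_obs(t, y, zs)             # log p(y_t | z_t) per particle
        m = logw.max()                       # max-trick for stability
        w = np.exp(logw - m)
        logZ += m + np.log(w.mean())         # running log p(y_{1:t}) estimate
        idx = rng.choice(n, size=n, p=w / w.sum())
        zs = zs[idx]                         # multinomial resampling
        if t + 1 < len(ys):
            zs = sample_trans(rng, t, zs)    # propagate z_{t+1} ~ p(. | z_t)
    return logZ, zs
```

The returned `logZ` is the quantity FIVO lower-bounds in expectation; filtering vs. smoothing targets change the per-step weights (and hence the variance of this estimator), which is the trade-off the review asks to see evaluated directly.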
Rebuttal 1: Rebuttal: Thanks for your comments and feedback. Just to be clear on terminology --- in your review you refer to “RWS architectures”, but we weren’t sure if you meant model architecture or a general framework for developing new methods. We take the latter interpretation and refer to it as the “RWS approach” in our response. **My main concern with this paper is its potential impact** Thank you for this feedback; proper positioning of the work is extremely important! First, we underscore the broad applicability of our work. NAS-X is a method for model learning and inference in Markovian state-space models (SSMs). Model learning and inference in SSMs is an important topic for NeurIPS and the broader machine learning community, with many applications (e.g., neuroscience, healthcare, deep generative models) [1,2,3,4,5,6,7,8]. In addition, we believe NAS-X is of broader methodological interest than SIXO, as it outperforms SIXO on most tasks and can fit discrete latent variable models, which SIXO cannot. NAS-X is not limited in its applicability by using RWS; instead, it leverages RWS- and SIXO-inspired methods to tackle broader problems than SIXO or RWS can on their own. In many problems, we agree there is nothing wrong with filtering SMC. However, in some problems, smoothing is crucial (e.g., the Hodgkin-Huxley model). We showed that NAS-X performs better on the HH inference task with 4 particles than FIVO or NASMC does with 256 particles, a 64x improvement over filtering SMC. Our theoretical results also support this --- FIVO and NASMC’s gradient estimates are not consistent, whereas NAS-X’s gradient estimates are consistent. While there is nothing wrong with filtering SMC, in many settings there can be strong gains from using smoothing techniques. [1] Krishnan, Rahul G. et al. “Structured Inference Networks for Nonlinear State Space Models.” AAAI (2016). [2] Fox, Emily B. et al. 
“Nonparametric Bayesian Learning of Switching Linear Dynamical Systems.” NeurIPS (2008). [3] Costacurta, Julia C., et al. "Distinguishing Discrete and Continuous Behavioral Variability Using Warped Autoregressive HMMs." NeurIPS (2022). [4] Alaa, Ahmed M. et al. “Attentive State-Space Modeling of Disease Progression.” NeurIPS (2019). [5] Ghahramani, Zoubin et al. “Variational Learning for Switching State-Space Models.” Neural Computation 12 (2000). [6] Miller, Andrew C. et al. “Learning Insulin-Glucose Dynamics in the Wild.” MLHC (2020). [7] Chung, Junyoung et al. “A Recurrent Latent Variable Model for Sequential Data.” NeurIPS (2015). [8] Wu, Luhuan et al. “Practical and Asymptotically Exact Conditional Sampling in Diffusion Models.” (2023) **The relationship between SIXO and NAS-X is not clear enough** We view the RWS approach as ascending estimates of the gradients of the log marginal likelihood (LML) of a latent variable model. These gradients are expectations wrt the posterior over latents and can be approximated using biased but consistent estimates from self-normalized importance sampling (SNIS). Changing the SNIS method gives new model-fitting techniques. Thus, NASMC uses filtering SMC (also an SNIS method) to estimate gradients, and NAS-X uses smoothing SMC. NAS-X approximates smoothing SMC using twists learned via density ratio estimation, as in SIXO. This contrasts with the VI approach, which ascends unbiased estimates of a lower bound on the LML. FIVO uses filtering SMC’s estimate of the marginal likelihood to define a lower bound on the LML, and SIXO uses smoothing SMC similarly. The crucial difference is that RWS-based methods follow biased but consistent estimates of the gradients of the true LML, while VI-based methods follow unbiased estimates of lower bounds of the LML. These different approaches result in gradient estimators with different strengths and weaknesses. Both NAS-X and SIXO use (approximate) smoothing SMC as part of their algorithms. 
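For concreteness, the basic SNIS gradient estimate underlying the RWS approach described above can be sketched as follows. This is a minimal sketch using a hypothetical one-dimensional model p(x, y) = N(x; mu, 1) N(y; x, 1); all names and numbers are illustrative, not from the paper or its code.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, y, K = 0.5, 1.2, 1000

# 1. Draw K samples from a proposal q(x | y).
q_mean, q_std = y, 1.0
x = rng.normal(q_mean, q_std, size=K)

# 2. Self-normalized weights w_i proportional to p(x_i, y) / q(x_i | y)
#    (normalizing constants cancel after self-normalization).
log_p = -0.5 * (x - mu) ** 2 - 0.5 * (y - x) ** 2
log_q = -0.5 * ((x - q_mean) / q_std) ** 2
log_w = log_p - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

# 3. SNIS estimate of the LML gradient via Fisher's identity:
#    d/d(mu) log p(y) = E_{p(x|y)}[ d/d(mu) log p(x, y) ] = E[x - mu].
grad_est = np.sum(w * (x - mu))

# For this conjugate Gaussian model, the exact gradient is (y - mu) / 2,
# so the biased-but-consistent SNIS estimate should land close to it.
exact = (y - mu) / 2.0
print(grad_est, exact)
```

NASMC and NAS-X, as described above, swap this plain SNIS sampler for filtering and (approximate) smoothing SMC, respectively.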
The difference is that NAS-X uses the samples from smoothing SMC to estimate the gradients in a way that allows it to outperform SIXO and be more broadly applicable, as supported by our empirical results. **Another point of concern is NAS-X’s multi-level approximations** NAS-X's approximations provide important theoretical and practical advantages. For example, NASMC incurs bias in the gradient estimates because it approximates the posterior distribution with filtering distributions. We view introducing the twists not as a further approximation but as an attempt to fix NASMC's approximation. Our experiments and additional comparisons show NAS-X performs favorably against all baselines, providing evidence that this change is helpful in practice. In principle, incorporating twists complicates the learning problem. In practice, twist learning is robust and easy. In the PDF, we present twist parameter recovery and classification accuracy for the Gaussian SSM experiments; in this setting, the optimal twists have a known parametric form. The optimal twist parameters are recovered quickly, the classification accuracy is high, and training is stable. This suggests that, with an appropriate twist parameterization, twist learning via density ratio estimation is tractable. Twist learning was stable and successful in all other experiments as well; we'll include full results in the revision. **The paper claims that smoothing SMC leads to lower-variance estimates** In new results, we show that NAS-X has lower-variance and lower-bias gradient estimates than filtering SMC-based methods (see general response). This claim is well-established in the literature (e.g., FIVO). Furthermore, as discussed above, the advantage of smoothing SMC in the context of NASMC is also to reduce the bias of gradient estimates. **Effect of number of training particles...** We didn't have the resources to reproduce Figure 3a for 32/128 particles; see PDF for 8 and 16 particles. 
**We are constrained by space, please see general response for the rest of your concerns.** --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their insightful response, in particular the clarifications about the differences between NAS-X and SIXO, the inclusion of additional baselines, and new experiments analyzing NAS-X's gradients. I still believe that the paper proposes a relatively low-level fix that introduces non-trivial complexity and, while I appreciate the authors' comments about twist stability and robustness, I think there are many open questions regarding their parametrization and recovery in real-world settings. --- Reply to Comment 1.1.1: Comment: Thank you for your response! Do you feel that we addressed your concerns enough to merit an increase in our score? You said your main concern was the impact of the method compared to the complexity required. We tried to clarify in our response that the usefulness and impact of NAS-X exceed that of SIXO for an algorithm of exactly the same complexity (both technical and computational). Stated differently, NAS-X is a drop-in replacement for SIXO that only requires changing a few lines of code but performs significantly better and can handle discrete latents. Regardless of your choice, thank you for your feedback and discussion!
Summary: The paper tackles the problem of learning sequential latent variable models using a new method called NAS-X. It combines two previous lines of research: 1. The smoothing with twisting applied in variational inference (SIXO in particular), which approximates smoothing distributions as targets for proposal learning, instead of using filtering distributions, to avoid sample degeneracy; 2. The reweighted wake-sleep method, which can handle discrete latent variables. They used experiments to illustrate that NAS-X can learn proposals that match the true posterior marginals, and can be applied to discrete variables and handle complex tasks. Strengths: The paper is well written. The authors have done extensive experiments to show the effectiveness of their work. The proposed method is somewhat novel, as it combines two existing ideas and outperforms them. Weaknesses: The paper lacks discussions of future work and limitations of the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Section 2.1: I find the structure there a bit unclear. You first described a two-step coordinate ascent method in the first paragraph, but then it is unclear whether the formulas afterwards are talking about the first or the second step; it might make sense to be clearer there. 2. Section 2.2 SMC description: the “repeats three steps” paragraph doesn’t make it clear that it is repeatedly sampling the next time point (if I understand correctly). Maybe it is better to just be clear that each step is for a new t. 3. Section 5.2: it is unclear to me that the qualitative comparison in figure 2 brings that much insight, but maybe it at least shows how the data is generated? 4. Section 5.3.1: for figure 3(a) you showed that it is more particle efficient; I wonder if it would be helpful to also comment on the computational efficiency there? 5. 
Section 5.3.1 last paragraph: The discussion about RWS vs VI seems intriguing, but I wish it could be discussed a bit more clearly and thoroughly, with more context. Also, if we believe this to be an important enough point, maybe it is worth highlighting it in the intro or conclusion? 6. Figure 4: Why would we expect NAS-X to outperform in certain metrics but underperform in other metrics? 7. Section 5.3.2: I wonder why NASMC gets dropped from the results in this section? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I wonder if the authors could add more discussion of the limitations/future work of their proposed method? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and constructive feedback. Below we respond to the questions/concerns raised. **The paper lacks discussions of future work and limitations of the proposed method.** One limitation is that NAS-X does not have a unified objective (i.e., the proposal and model are updated using different objectives). On the other hand, both FIVO and SIXO use a unified objective, namely SMC’s lower bound on the log-marginal likelihood. While this had no practical impact on our experiments, the lack of a unified objective could be relevant in other applications. Another limitation is that SMC’s gradient estimates are consistent but biased; empirically, this doesn’t seem to prevent NAS-X from recovering proposal and model parameters in settings where we know those parameters. However, we think a thorough theoretical analysis of how the bias of the gradients depends on the quality of the twist approximation and number of particles is an exciting topic for future work. **Section 5.3.1: for figure 3(a) you showed that it is more particle efficient; I wonder if it would be helpful to also comment on the computational efficiency there?** See general response **Section 5.3.1 last paragraph: The discussion about RWS vs VI seems intriguing but I wish it could be discussed a bit more clearly and thoroughly, with more context** We can expand upon this in more detail in the revision but elaborate here briefly. We find that VI methods such as SIXO tend to learn more entropic proposals. We saw this in the Hodgkin-Huxley experiments and the Gaussian state-space model experiments. We think this property is related to alternative interpretations of the VI methods. These VI methods were originally motivated as using Monte Carlo algorithms to derive tighter lower bounds on the log-marginal likelihood, hopefully resulting in a better learning objective. 
However, an alternative interpretation is that these VI methods maximize a standard ELBO objective but with an “expanded” variational family. The expanded family is defined by *both* the chosen proposal family (q) and the Monte Carlo algorithm. As a concrete example, you can interpret the IWAE bound as maximizing the ELBO with a proposal defined by this procedure: 1. Draw K samples x_1, …, x_K from q(x | y). 2. Weight the K samples using the density ratio p(x_i, y) / q(x_i | y). 3. Draw a sample from the set of weighted samples. The IWAE bound can therefore be thought of as maximizing the ELBO but with a variational family that allows the underlying proposal (q) to propose K samples and choose the “best” one. This perspective is also explored in the paper "Energy Inspired Models" (Lawson et al. 2019). Intuitively, if the proposal (q) in IWAE has K chances to draw a high-quality sample, it is incentivized to be more entropic, since only the best sample is chosen. A similar argument holds for bounds based on more complex Monte Carlo algorithms like FIVO and SIXO. Samples from the underlying proposal (q) drive Monte Carlo algorithms that select only the best particles. This allows the proposal to generate many dubious samples provided it generates at least one good sample. Crucially, this interpretation does not hold for RWS-based algorithms (RWS, NASMC, NAS-X). RWS algorithms use self-normalized importance sampling variants like filtering and smoothing SMC to directly estimate the gradients of the log marginal likelihood. The Monte Carlo algorithms are only used to improve the gradient estimates, and cannot be interpreted as augmenting the proposal distribution. In practice, this results in proposals that are not as entropic as VI-trained proposals because they are not “aware” that their bad particles will be resampled away. We hope this clarifies things and are happy to discuss more. 
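The three-step "expanded variational family" procedure above can be sketched directly. This is a minimal sketch with a hypothetical one-dimensional Gaussian model (prior N(0, 1), likelihood N(y; x, 1)); the model, proposal, and all numbers are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
y, K = 1.2, 8

def expanded_family_sample(rng):
    # 1. Draw K samples from the base proposal q(x | y) = N(y, 1).
    x = rng.normal(y, 1.0, size=K)
    # 2. Weight by the density ratio p(x_i, y) / q(x_i | y), where
    #    p(x, y) = N(x; 0, 1) N(y; x, 1), so the posterior is N(y/2, 1/2).
    log_w = (-0.5 * x**2 - 0.5 * (y - x) ** 2) - (-0.5 * (x - y) ** 2)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # 3. Resample: keep one candidate, chosen with probability w_i.
    return rng.choice(x, p=w)

samples = np.array([expanded_family_sample(rng) for _ in range(5000)])
# The resampled draws are pulled toward the posterior mean y/2 = 0.6 and
# away from the base proposal's mean y = 1.2 -- q only needs to produce
# one good candidate per batch of K, which is why it can stay entropic.
print(samples.mean())
```

With finite K the resampled distribution retains some pull toward the base proposal; as K grows, it approaches the exact posterior.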
**Figure 4: Why would we expect NAS-X to outperform in certain metrics but underperform in other metrics?** We are not entirely sure why NAS-X would outperform on some metrics and underperform on others, but suspect it could be related to the less entropic proposals it learns. We provided both standard model learning metrics and biophysical metrics because different users may value different metrics. For example, our biophysical metrics were motivated by the ones used in [1]. This is similar to other fields like image modeling, where it has become standard practice to evaluate model samples on perceptual metrics like Frechet inception distance (FID) as well as likelihoods. This is because many models that achieve the highest likelihoods did not generate good samples, and many models with the best samples did not provide tractable likelihood evaluation (e.g., GANs). [1] Lueckmann, J.-M. et al. Flexible statistical inference for mechanistic models of neural dynamics. NeurIPS (2017). **Section 5.3.2: I wonder why NASMC gets dropped from the results in this section?** We found HH model learning under the IWAE, NASMC, RWS, and FIVO objectives to be highly unstable. Few runs survived to converge, instead NaN-ing out early. For example, 46 percent of all IWAE runs failed to converge, compared to 19 percent of NAS-X runs and 25 percent of SIXO runs. This prevented us from reliably evaluating the methods, and we concluded that they were ill-suited for the model learning task. We were able to successfully evaluate all of these methods for proposal learning (see new figures), however, and found them to be far inferior to SIXO and NAS-X. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed response! They have addressed my questions. --- Reply to Comment 1.1.1: Comment: Glad we could clarify things! Do you feel our additional results and explanation merit changing your rating or confidence score?
Summary: An algorithm based on reweighted wake-sleep (RWS) is applied to sequential latent variable models $p_\theta(x_{1:T}, y_{1:T})$. Model parameters are fit by maximizing the evidence $p_\theta(y_{1:T})$ in $\theta$, and posterior inference is performed by minimizing the forward KL divergence $KL(p_\theta(x_{1:T} \mid y_{1:T}) \mid\mid q_\phi(x_{1:T} \mid y_{1:T}))$ in variational parameters $\phi$. Gradients for each of these tasks are estimated by smoothing SMC, which can be viewed as a lower-variance alternative to self-normalized importance sampling (SNIS) in the sequential setting. This submission extends the filtering SMC approach of previous work. However, the smoothing distributions are not available and must be approximated by learning twists, which is accomplished by training a discriminator. This results in an estimate of an appropriate density ratio that allows for twisting to be used. Strengths: * The exposition is clear and the potential advantages with respect to forward KL minimization and smoothing are intuitive. * The method is general and appears to be applicable in any learning/inference setting with sequential latent variable models. The only significant design choice for the user appears to be the design of the variational distribution, and the additional computational overhead from the training of the classifier for estimating the twists is not prohibitive. * The method is novel to the best of my knowledge, and combining forward KL minimization with SIXO seems like a good idea. The forward KL objective makes it easy to handle discrete latent variables, as the second experiment shows, whereas other methods (see below) could not do so as easily. * The authors show the applicability of the method to three experiments of increasing complexity, with an emphasis on dynamical systems. Weaknesses: The biggest weaknesses of the paper are: * Lack of comparison to existing methods. 
In addition to NASMC, FIVO (Maddison et al., 2017), VSMC (Naesseth et al., 2018), and AESMC (Le et al., 2018) are closely related methods that instead target the reverse KL divergence. While these are cited in related work, they should be compared to when possible (perhaps not in discrete latent variable models, though) as they are natural competitors for these sequential latent variable models. * Lack of discussion of, and comparison to, other SMC variants. The main intuition provided in the paper for using smoothing SMC is that both SNIS and filtering SMC can result in high-variance estimates and/or particle degeneracy. Metrics (e.g. effective sample size) and figures for validating this intuition would be helpful, as this intuition is the central motivation for the use of smoothing SMC. Ablation studies incorporating these competitors/alternatives would make the importance of the contribution of this work clearer. At present, it is difficult to tell whether both the smoothing approach and the forward KL formulation contribute to the success of the proposed method, or whether in some cases just one of these does. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Was vanilla RWS compared to? Naive implementations wouldn’t use sequential structure, and would rely only on SNIS, so I’d expect all of NASMC, SIXO, and NAS-X to beat RWS easily. Nevertheless, it would be a nice baseline. * Can there be more discussion on the classifier being used to approximate the density ratio? Some precise results from the GAN literature or the literature on likelihood-free inference by ratio estimation (Thomas et al., 2016) stating formally that the optimal classification rule yields the likelihood ratio in some form would make this part of the exposition more clear; at least some citations should be added. * To what extent are the advantages of the proposed method from 1) use of the forward KL divergence and 2) use of smoothing SMC? 
Comparison with FIVO, VSMC, AESMC along with naive RWS or ELBO could really help make this clear. * Is the $\chi$ in the title on purpose? Or should there be an X as in the rest of the body? It should be consistent. * On OpenReview, the “smothing” in the title should be corrected if possible. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have made clear the class of probabilistic models that this method is designed for. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and feedback. In addition to the small changes you suggested, we also respond to your main questions and concerns: **Lack of comparison to existing methods.** In new experiments (see PDF), we compare NAS-X to several more baselines, including FIVO, SIXO, RWS, IWAE, and ELBO. In short, NAS-X outperforms or performs comparably to these baselines in all three experiments. These additional results establish that 1) we outperform methods that employ reverse KL + smoothing while being able to handle discrete latent variables (e.g., SIXO), 2) we outperform both forward KL and reverse KL-based methods that use filtering SMC (e.g., NASMC, FIVO) and 3) we outperform several baselines that ignore sequential structure (RWS, IWAE, ELBO). Additionally, in new empirical results, we show that NAS-X has lower variance and lower bias gradient estimates than these methods (see general response). Altogether, our new empirical results highlight the benefits of using smoothing SMC in conjunction with a forward-KL divergence objective for sequential latent variable models. We did not include IWAE, FIVO, or SIXO in the discrete latent experiment (rSLDS) because (as you imply) they rely on continuous reparameterization gradients. Instead, we compared to Laplace EM, a very strong baseline that analytically marginalizes out the latents. **Lack of discussion of, and comparison to, other SMC variants… motivation for the use of smoothing SMC** First, to address your concerns we included an analysis of variance and bias of gradient estimates for the Hodgkin-Huxley experiments. This includes comparisons to methods based on SNIS (e.g., IWAE, RWS) and filtering SMC (e.g., FIVO, NASMC). In short (see general response), NAS-X’s use of smoothing SMC leads to lower variance gradient estimates and lower bias. 
Second, while this is certainly problem-dependent, the claim that filtering SMC’s estimates can be high variance is relatively well-established in the literature [1,2,3,4,5]. Performing reliable inference and model learning in problems where smoothing information is highly important (e.g. neural voltage data) was a primary motivation for our work. In addition to the practical motivations, we wish to highlight theoretical motivations for using smoothing SMC in the context of RWS: 1) NAS-X’s gradient estimates are consistent when the twists are optimal, and 2) NAS-X produces unbiased gradient estimates even for finite particles, provided that the twists and proposal are optimal. Please see the general response for a detailed discussion. Importantly, NASMC does not have these guarantees because it uses filtering SMC’s intermediate particle approximations to estimate expectations wrt the true posterior, to avoid particle degeneracy. Since these intermediate particles approximate the filtering distributions and NOT the true posterior, this introduces additional bias into gradient estimates; this bias persists even in the infinite particle limit. We illustrated this in Experiment 1, where NASMC did not recover the true posterior. In contrast, NAS-X’s gradients are unbiased, in theory, when the twists and proposal are optimal. We illustrated this advantage in the Gaussian state-space model experiments, and will provide a discussion/formal proof of this result in the revision to motivate our method better. [1] Lawson, Dieterich et al. “Twisted Variational Sequential Monte Carlo.” (2018). [2] Lawson, Dieterich et al. “SIXO: Smoothing Inference with Twisted Objectives.” (2022). [3] Naesseth, Christian Andersson et al. “Elements of Sequential Monte Carlo.” Found. Trends Mach. Learn. 12 (2019): 307-392. [4] Heng, Jeremy et al. “Controlled Sequential Monte Carlo” (2019). [5] Guarniero, Pieralberto et al. "The iterated auxiliary particle filter." 
Journal of the American Statistical Association 112.520 (2017): 1636-1647. **Ablation studies incorporating these competitors [...] would make more clear the importance of the contribution of this work** Thanks for this suggestion. We wholeheartedly agree. In new ablation studies (see general response) we observe, broadly speaking, that the RWS-based methods (RWS, NASMC, NAS-X) provide lower-variance gradients and learn less-entropic proposals than their variational inference counterparts (ELBO, FIVO, SIXO). Lower-variance gradients are an important advantage as long as they are also low-bias, and our experiments show that learning the twists allows NAS-X to provide the lowest-bias gradient estimates of all methods considered. Thus, both the KL direction and the twist learning contribute to NAS-X’s performance. In addition to the performance benefits of NAS-X over its competitors, the forward-KL methods are also more versatile because they can accommodate discrete latent variables. **Was vanilla RWS compared to?** We have included comparisons to RWS and other non-sequential baselines (e.g., ELBO, IWAE). We find that in all experiments, NAS-X significantly outperforms RWS. **Can there be more discussion on the classifier being used to approximate the density ratio?** We will discuss this in more detail in the revised version. For experiment 1, we chose a parametric form that matches the analytic density ratio for the Gaussian state-space model. For experiments 2 and 3, we use RNNs that take the observations in as inputs and produce encodings for each time step. We feed these encodings and the latent states into an MLP classifier for each time step. We’ll include the citations below and expand our discussion of SIXO’s density ratio approach in the revision; please suggest any additional citations. [1] Sugiyama, Masashi et al. Density ratio estimation in machine learning. Cambridge University Press, 2012. [2] Thomas, Owen et al. 
“Likelihood-Free Inference by Ratio Estimation.” Bayesian Analysis (2016). [3] Uehara, Masatoshi et al. “Generative Adversarial Nets from a Density Ratio Estimation Perspective.” arXiv (2016). --- Rebuttal Comment 1.1: Title: Rebuttal reply Comment: Thanks to the authors for the detailed response. 1) The inclusion of FIVO as a competitor has suitably addressed my concern about limited evaluation. In the Gaussian SSM, it's clear that NAS-X outperforms FIVO. Although I think this is a simple example (and thus that it may be important to contextualize this by noting that learning the twists is (perhaps) more straightforward because of the simplicity), it's a fine toy example and shows a setting where NAS-X is clearly stronger than FIVO. The comparison of bias and variance of gradients is also a nice addition, whether it ends up in the main body or supplement. I'm updating my score from 4 to 5 to reflect these additions. 2) The "new theoretical results" alluded to by the authors in the global and individuals responses may be a nice touch, but ultimately may have limited utility: any result that relies on the condition "when the twists are optimal" seems to assume away the extremely complicated task of learning the twists (as acknowledged by the authors on line 102). While these may provide some nice motivation or intuition, in more complicated models beyond Gaussian SSM this may not be a fair assumption. --- Reply to Comment 1.1.1: Comment: Thanks for your response and engaging with our rebuttal! We really appreciate you taking the time. In regards to your first point, we wanted to emphasize that we also compared to FIVO in the HH rebuttal experiments and found it to be far inferior to NAS-X. For example, NAS-X with 4 particles achieved a log marginal likelihood lower bound of -173 nats while FIVO with 4 particles achieved -579 nats and FIVO with 128 particles achieved -173 nats. So FIVO only matched NAS-X using 32 times as many particles. 
Plots of these values are in the rebuttal PDF (Figure 4 left). Unfortunately, FIVO NaN-ed out too frequently to be reliably evaluated for HH model learning. We believe this is a significant non-toy example where NAS-X clearly outperforms FIVO. As a side note, we were also excited to see that NAS-X outperformed FIVO for the LGSSM as that is a setting where filtering methods are generally seen as sufficient. For your second point, in our practical experience, twist learning via DRE is quite easy and robust. We provide experimental evidence of this in the rebuttal PDF for the LGSSM, but all experiments were the same in this regard. We are happy to provide plots of twist performance for the other experiments if that would help. We apologize for any confusion in the unclear wording of line 102. When we said “Learning the twists can be extremely challenging” we were referring to previous methods that use a Bellman-type loss to learn the twists (iAPF [1], cSMC [2], and TVSMC [3]). These methods struggle or completely fail in complex, high-dimensional settings. Indeed, those struggles motivated our choice of DRE for twist learning, and our experience is that it works very well. Even if the true twists are never recovered exactly, we still observed orders of magnitude performance gains over not using the twists, which implies they are useful even when not 100% correct. Thanks again for your valuable feedback! [1] Guarniero, Pieralberto, et al. "The iterated auxiliary particle filter." [2] Heng, Jeremy, et al. "Controlled sequential Monte Carlo." [3] Lawson, Dieterich, et al. "Twisted variational sequential Monte Carlo."
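The classifier-based density ratio estimation (DRE) idea behind the twist learning discussed above can be sketched on a toy problem: a logistic classifier trained to distinguish samples of p1 = N(1, 1) from p0 = N(0, 1) recovers log p1(x)/p0(x) = x - 0.5 as its logit at optimality. The toy distributions and training loop below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
x = np.concatenate([rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n)])
labels = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic classifier logit(x) = a * x + b, fit by gradient descent on
# the binary cross-entropy loss.
a, b = 0.0, 0.0
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-(a * x + b)))
    grad = p - labels              # d(BCE)/d(logit), per example
    a -= 0.5 * np.mean(grad * x)
    b -= 0.5 * np.mean(grad)

# At the optimum, the logit approximates the exact log density ratio
# log p1(x) - log p0(x) = x - 0.5, i.e. a close to 1.0 and b close to -0.5.
print(a, b)
```

This is the formal fact the review asks about (Thomas et al., 2016; the GAN literature): the Bayes-optimal classifier's logit equals the log density ratio between the two sampling distributions, which is what makes discriminator training a valid way to learn twists.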
Summary: The authors propose to use the SIXO particle approximation to calculate the expectations in the reweighted wake-sleep algorithm for state-space models. Strengths: The paper is written in a clear way and all required background is well-explained (but the description of SIXO and twists is too short). The idea of replacing the variational expectations with the best possible practical approximation of the posterior is interesting. The empirical findings are well-presented. Weaknesses: I would lengthen the description of SIXO and twists. More empirical evidence and discussion of why NAS-X actually warrants better performance (as it introduces biases) would strengthen the paper. Some theoretical analysis would strengthen the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What do you think are the main limitations of NAS-X? When is the algorithm expected to perform poorly (or be outperformed by Laplace EM)? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive comments and constructive feedback. Below we respond to the questions/concerns raised. **I would lengthen the description of SIXO and twists.** Thank you for the suggestion. We have added a detailed discussion of the technical details behind SIXO and twists in the revised version of the paper. **More empirical evidence and discussion of why NAS-X actually warrants better performance.** In new experiments (see PDF), we compare NAS-X to several more baselines, including FIVO, SIXO, RWS, IWAE, and ELBO. In short, NAS-X either outperforms or performs comparably to these baselines in all experiments. These additional results establish that 1) we outperform methods that employ reverse KL + smoothing while being able to handle discrete latents (e.g., SIXO), 2) we outperform both forward KL and reverse KL-based methods that use filtering SMC (e.g., NASMC, FIVO), and 3) we outperform several baselines that ignore sequential structure (RWS, IWAE, ELBO). Additionally, in new empirical results, we show that NAS-X has lower-variance and lower-bias gradient estimates than these methods (see general response). Altogether, our new empirical results highlight the benefits of using smoothing SMC in conjunction with a forward-KL divergence objective for sequential latent variable models. **Some theoretical analysis would strengthen the paper.** We will include two theoretical guarantees in the revised paper: 1) NAS-X’s gradient estimates are consistent when the twists are optimal, and 2) NAS-X produces unbiased gradient estimates even for a finite number of particles, provided that the twists and proposal are optimal. Please see the general response for a detailed discussion of this. Importantly, NASMC does not have these guarantees because it uses filtering SMC’s intermediate particle approximations to estimate expectations w.r.t. the posterior. 
Since these intermediate particles approximate the filtering distributions and NOT the smoothing distributions, this introduces additional bias into gradient estimates. Importantly, this bias persists even in the infinite particle limit. From this perspective, you can view the twists not as introducing bias but as compensating for the bias inherent in NASMC’s approach. **What do you think are the main limitations of NAS-X?** One limitation is that NAS-X does not have a unified objective (i.e., the proposal and model are updated using different objectives). In certain settings, this could lead to a divergence of model and proposal parameters. FIVO does have a unified objective for the model and proposal, namely SMC’s lower bound on the log-marginal likelihood. SIXO shares this objective for the model and proposal but learns the twist using the separate density ratio estimation objective, as in NAS-X. While this had no practical impact on our experiments, the lack of a unified objective could be relevant in other applications. Another limitation is that SMC’s gradient estimates are consistent but biased for finite numbers of particles in practice (when twists and proposals are imperfect). We think a thorough theoretical analysis of how the bias of the gradients depends on the quality of the twist approximation and the number of particles is an exciting topic for future work. A final limitation is that NAS-X only works in the offline setting where all observations are available. It cannot be used for streaming data. **When is the algorithm expected to perform poorly (or be outperformed by Laplace EM)?** Because NAS-X is a sampling-based method, we would not expect it to perform as well as methods that analytically marginalize out discrete latent variables, such as Laplace EM [1]. However, analytic marginalization is only tractable when the number of latent variables is small. 
Our rSLDS experiments were designed to address this directly and show NAS-X matching or slightly outperforming Laplace EM even in a model with 4 discrete states. This shows that our sampling-based method performs reasonably even in settings where it is at a disadvantage. [1] Zoltowski, David, Jonathan Pillow, and Scott Linderman. "A general recurrent state space framework for modeling neural dynamics during decision-making." Proceedings of the 37th International Conference on Machine Learning, vol. 119, PMLR, 2020 --- Rebuttal Comment 1.1: Comment: I thank the authors for the response which addresses my questions. I maintain my score. I encourage the authors to explicitly discuss the limitations in the manuscript.
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed feedback. We respond to reviewers individually and provide a general response below. We have strengthened our submission with several new experiments (see figures in PDF) and theoretical analyses (see below). If the reviewers feel that the new experimental results, analyses, and clarifications resolve their concerns, we kindly ask them to consider updating their overall scores. We are happy to provide further clarifications. ### New Experimental Results #### Additional Baselines In new experiments, we show NAS-X outperforms alternative methods suggested by the reviewers (ELBO, IWAE, FIVO, SIXO, and RWS). NAS-X’s improvements over these methods highlight 1) the benefits of using smoothing SMC versus filtering SMC and self-normalized importance sampling and 2) the benefits of forward KL over reverse KL in the context of sequential latent variable models. Results include: * Figure 1. Gaussian State Space Model: NAS-X achieves a tighter lower bound and lower proposal parameter error than SIXO, FIVO, ELBO, IWAE, NASMC, and RWS. * Figure 3. r-SLDS: NAS-X attains a higher bootstrap particle filter bound than NASMC and RWS. SIXO and FIVO rely on continuous reparameterization gradients and so were excluded from this comparison. * Figure 4. Hodgkin-Huxley Inference experiments: We include new comparisons between IWAE, FIVO, NASMC, SIXO, and NAS-X for proposals trained with 4, 8, and 16 particles. These show that NAS-X substantially outperforms all other methods even as the number of training particles changes. NAS-X achieves the same performance as SIXO with 4x fewer particles, and achieves the same performance as FIVO and NASMC with 64x fewer particles. RWS results were also obtained but were too poor to fit on the plots. #### Robustness of twist learning * Figure 2 illustrates that twist learning is robust, plotting convergence to true twist parameters and twist classification accuracy in the Gaussian SSM. 
### Computational Complexity/Wall Clock Time Several reviewers asked about computational complexity. We provide a summary of our results here, and will include a full discussion in the final version of the paper. Theoretically, all methods have O(KT) time complexity, where K is the number of particles and T is the number of time steps. Practically, NAS-X and SIXO have similar wall-clock times but are slower than FIVO and NASMC, primarily because of twist training. Even if FIVO and NASMC were run with more particles to equalize wall-clock times, they would still far underperform NAS-X in log marginal likelihood lower bounds. In the accompanying PDF, we include a table (Figure 6) detailing the wall-clock speed of each method in milliseconds per step during HH inference training runs. SIXO and NAS-X take ~3.5x longer per step than NASMC and FIVO and 2.5x longer per step than RWS and IWAE. However, Figure 4 shows that FIVO, NASMC, IWAE, and RWS cannot match NAS-X’s performance even with 64 times more computation (256 particles). SIXO only matches NAS-X’s performance with 4x as many particles. Therefore, NAS-X uses computational resources much more effectively than other methods. ### Gradient variance and bias Figure 5 (left) shows NAS-X attains lower variance gradient estimates than IWAE, FIVO, and SIXO with comparable variance to RWS. We also studied the bias in Figure 5 (middle) by approximating the true gradient using SMC with 256 particles and the best proposal found in the HH inference experiments. NAS-X's gradients are lower bias than all methods except FIVO, though FIVO's gradients are also the highest variance. We believe FIVO’s gradients appear less biased because its parameters are pushed towards degenerate values where gradient estimation is “easier”. We illustrate this in Figure 5 (right), where we plot LML bounds. ### New theoretical results We present two theoretical results illustrating NAS-X’s advantages in idealized settings. 
One concerns the consistency of NAS-X’s gradients and the other concerns unbiasedness. Proposition I (informal): For any proposal and the optimal twists, NAS-X’s gradient estimate of the log marginal likelihood converges almost surely to the true gradient as the number of particles approaches infinity. This is not true for NASMC. Sketch: This follows from 1) SMC’s strongly consistent estimates of expectations of test functions with respect to a normalized target distribution (Theorem 7.4.3 Del Moral [2004]), 2) NAS-X’s use of smoothing distributions as the intermediate targets, and 3) the twist optimality. This is not true for NASMC since it uses filtering SMC’s intermediate particle approximations (to avoid particle degeneracy), which approximate the filtering distributions and not the true posterior. Proposition II (informal): For any finite number of particles, NAS-X’s gradient estimates are unbiased, provided that the proposal equals the true posterior and twists are optimal. This is not true for NASMC. Sketch: Under the stated assumptions, both NAS-X and NASMC propose particles from the true posterior. However, the different intermediate target distributions will affect how these particles are distributed after reweighting. For NAS-X, the particles will have equal weight since they are reweighted using the smoothing targets. Thus, after reweighting, the particles are still samples from the true posterior. In contrast, in NASMC, the samples are reweighted by filtering targets and will be distributed according to the filtering distributions instead of the true posterior. In the revision of this paper, we will include these statements and formal proofs. ### Limitations NAS-X lacks a unified objective; the proposal and model are updated using different objectives. FIVO and SIXO have a unified objective for model and proposal. This didn’t have a practical impact on our experiments but could be relevant in other applications. 
NAS-X also does not work in online settings where future data is unavailable. Pdf: /pdf/9954e4140022e750b4817ee9f1cd54b037747858.pdf
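The intuition behind Proposition II above (with the proposal equal to the true posterior and optimal twists, reweighting leaves the particles equally weighted) can be checked numerically. The following is a minimal sketch, not the paper's construction: it uses a one-dimensional Gaussian as a hypothetical stand-in for the posterior, and the particle count is arbitrary.

```python
import numpy as np

def log_density(x, mu=0.0, sigma=1.0):
    # Log-density of a 1-D Gaussian used as an illustrative target.
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
n_particles = 16
# Propose directly from the target, i.e., proposal == posterior.
particles = rng.normal(0.0, 1.0, size=n_particles)

# Importance weights p(x)/q(x) with q == p are identical for every particle,
# so self-normalization collapses them all to 1/K, and the reweighted
# particles remain exact draws from the target.
log_w = log_density(particles) - log_density(particles)
weights = np.exp(log_w)
weights /= weights.sum()
assert np.allclose(weights, 1.0 / n_particles)
```

When the intermediate targets are instead the filtering distributions (as in NASMC), the weights would not be uniform, which is the source of the bias the rebuttal describes.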
NeurIPS_2023_submissions_huggingface
2023
AGD: an Auto-switchable Optimizer using Stepwise Gradient Difference for Preconditioning Matrix
Accept (poster)
Summary: The paper proposes a new gradient-based optimizer called AGD. The idea is to utilize the gradient difference between the current and previous steps in the preconditioning matrix, which is related to the Hessian. Also, an auto-switching function is proposed to switch between SGD and the adaptive optimizer. As mentioned in the paper, both ideas were previously brought up, but this paper is the first to turn them into an explicit optimizer. Strengths: - Well written and motivated - Conducts a number of downstream experiments - Presents extra studies to examine the method Weaknesses: - The idea is not fully new - The details of the experiments could be explained in more depth Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Figure 3 suggests AGD is faster to converge, and Table 3 suggests AGD results in better performance. Could we make such a conclusion that it should be always used instead of SOTA optimization methods? - What is the stop criteria of different optimization techniques in Table 3? I am wondering if the improvement obtained by AGD is accurately examined or for example all methods go for the same number of iterations and AGD converges faster? You argued this issue in section 4.3 for CV task. - The performance of AGD in CV task is quite marginal. How about the convergence time? does AGD maybe converges faster? - I would have also liked to see more about autoswitching and why and how it happens in a simple toy example. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable review. **Q1: Figure 3 suggests AGD is faster to converge, and Table 3 suggests AGD results in better performance. Could we make such a conclusion that it should be always used instead of SOTA optimization methods?** AGD is the optimal method at least on the datasets and models we have tested. We believe it will also have promising performance in other scenarios. **Q2: What is the stop criteria of different optimization techniques in Table 3? I am wondering if the improvement obtained by AGD is accurately examined or for example all methods go for the same number of iterations and AGD converges faster? You argued this issue in section 4.3 for CV task.** Regarding the experiments detailed in Table 3, both the LSTM and transformer models were trained for a consistent number of epochs until convergence (200 epochs for LSTM and 55 epochs for transformers). With the exception of the optimizer hyperparameters mentioned in Appendix A.1, such as learning rate, epsilon, and delta, all other training settings remained identical across optimizers. These settings were inherited from previously published works to ensure a fair and credible comparison. **Q3: The performance of AGD in CV task is quite marginal. How about the convergence time? does AGD maybe converges faster?** The convergence speed of different optimizers can be found in Fig. 6 of Appendix A.2. For ResNet18 on ImageNet, AGD has a faster convergence speed compared to other optimizers. For ResNet20 and ResNet32 on Cifar10, after the learning rate decay, AGD demonstrates better convergence compared to other optimizers except for AdaHessian. **Q4: I would have also liked to see more about autoswitching and why and how it happens in a simple toy example.** Consider using AGD to optimize the function $f(x, y) = |x| + |y|$ from the starting point $(-4, 0.5)$, see Figure 4 in the [provided PDF](https://openreview.net/attachment?id=wLS9DFtY0I&name=pdf). 
Choose hyperparameters of $lr = 1.0$, $\delta = 1.0$, and $(\beta_1, \beta_2) = (0.0, 0.0)$. In the X-axis direction, the gradient difference is 0, so taking the maximum allows the parameters to be updated in an SGD-like manner. In the Y-axis direction, the gradient difference is 2, which is greater than 1, so the parameters are updated in an adaptive manner. Auto switch allows the model to use different update methods in different directions, and switches adaptively between SGD and adaptive methods as training progresses. We hope our responses have addressed your concerns. --- Rebuttal Comment 1.1: Title: after rebuttal Comment: Thanks for the response!
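The per-coordinate switching behavior described in the toy example above can be sketched numerically. This is a loose illustration under our own assumptions, not the paper's exact update rule: we model the auto-switch as a per-coordinate `max(|gradient difference|, delta)` denominator, seed the difference with one plain SGD step, and use the subgradient of $|x| + |y|$.

```python
import numpy as np

def grad(p):
    # Subgradient of f(x, y) = |x| + |y| (0 at the kink).
    return np.sign(p)

lr, delta = 1.0, 1.0
p = np.array([-4.0, 0.5])
g_prev = grad(p)
p = p - lr * g_prev  # first step taken as plain SGD to seed the difference
modes = []
for _ in range(3):
    g = grad(p)
    diff = np.abs(g - g_prev)        # stepwise gradient difference
    denom = np.maximum(diff, delta)  # auto-switch via max(diff, delta)
    modes.append(["SGD" if d <= delta else "adaptive" for d in diff])
    p = p - lr * g / denom
    g_prev = g
```

On the first loop iteration the x-coordinate sees a zero gradient difference and takes an SGD-like step, while the y-coordinate sees a difference of 2 > delta and takes an adaptive step, matching the rebuttal's description; the iterate reaches the minimizer (0, 0) within three iterations.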
Summary: This paper introduces a novel optimizer, namely, Auto-switchable optimizer with Gradient Difference of adjacent steps. The authors propose a different way of approximating information from the Hessian for faster and better convergence in a way that a preconditioning matrix is computed using successive optimization steps in the parameter steps. The authors evaluate their optimization algorithm across different tasks and including some widely used architectures such as Transformers and RNNs. Moreover, the authors provide a theoretical analysis of the convergence of their algorithm and some bounds. Strengths: - The paper is well written in terms of giving the appropriate context of the most commonly used optimization algorithms, nice figures for toy experiments to understand how the algorithm works. - The experimental framework idea is well designed in the sense that a good range of optimization tasks is considered to show the importance of the algorithm. - The idea of using the successive steps seems novel to me to approximate Hessian information for faster convergence. Weaknesses: - In my humble opinion, the issue with the current idea lies in terms of the number/width of the optimization steps and how far the solution lies compared to the initialization point in the parameter space. The information that one can obtain by gradient differentials in the optimization space is only getting less and less useful while we increase the dimensionality of the optimization space. That means that for smaller optimization problems where the optimal solution would lie close to the initialization point, then I can really see the benefit of AGD compared to less Hessian well-informed optimizers like Adam. One can also see this emergent problem even from the theoretical analysis of the authors. 
For instance, in Theorem 2, the authors bound the approximation of the function using $D_\infty$ which effectively is the upper bound in terms of successive optimization steps in the parameter space and also an upper bound of each weight vector compared to its distance from the optimal solution. That means that for real-world multi-dimensional optimization problems (e.g. billions of parameters) those bounds and probably the algorithm itself could be called into question. Having said that, there are other works like NTK that have shown that neural networks stay pretty close in general near the initialization regime, so, I would not consider it appropriate to ask the authors to prove these things but they could put more effort into making their experiments more convincing to ameliorate this huge concern. - Building upon my previous argument, my belief about the problem of scaling of the algorithm to bigger optimization problems is also displayed in the experiments where AGD scores significantly higher than all the other optimizers in smaller parameter spaces (e.g. 1-2 layers of LSTM) but becomes almost negligible with slightly bigger networks (e.g. compare Adam vs Proposed in Table 4, ResNet 20, 32, Table 5 MLP, DCN, compare AdamW vs AGD in Table 1 Transformer). Having said that, the authors can include experiments with larger and state-of-the-art networks (e.g. ResNet 100+ layers on ImageNet, include some experiments with some networks with 500+ million parameters transformers) to prove me wrong and compare against state-of-the-art performance reported numbers with SGD and/or Adam. I am willing to increase my score if the most important of the above concerns are addressed by the authors since I truly believe that the paper has a great potential to offer a more robust general optimization strategy to replace Adam. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Would you consider including experiments with some harder tasks such as estimation tasks, e.g. audio source separation instead of detection and retrieval tasks? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors should include my first point under the weaknesses of the paper and include a small paragraph to address this potential limitation of their algorithm. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive review. **Q1: In my humble opinion, the issue with the current idea lies in terms of the number/width of the optimization steps and how far the solution lies compared to the initialization point in the parameter space ... That means that for real-world multi-dimensional optimization problems (e.g. billions of parameters) those bounds and probably the algorithm itself could be called into question ... I would not consider it appropriate to ask the authors to prove these things but they could put more effort into making their experiments more convincing to ameliorate this huge concern.** To our knowledge, the issue exists for all optimizers. The generalization bound of Adam-like optimizers also includes $D_\infty$, as shown in Theorem 5 of [1]. The generalization bound of SGD includes $\\| x_0 - x^* \\|^2$, where $x_0$ is the starting point and $x^*$ is the optimal point, as shown in Theorem 2.1.14 of [2]. Therefore, the statement `the issue with the current idea lies in ... how far the solution lies compared to the initialization point in the parameter space` is inappropriate. In addition, we have experimentally demonstrated the effectiveness of AGD for models with a large number of parameters, as shown in Figure 1 of the [PDF provided](https://openreview.net/attachment?id=wLS9DFtY0I&name=pdf) in the [Global Response](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I). We conducted optimizer experiments on the GPT2 (124 M) and GPT2-Large (774 M) models using the OpenWebText dataset. The results indicate that AGD outperformed the AdamW optimizer in terms of convergence. We hope our results address your concern. **Q2: My belief about the problem of scaling of the algorithm to bigger optimization problems is also displayed in the experiments where AGD scores significantly higher than all the other optimizers in smaller parameter spaces (e.g. 
1-2 layers of LSTM) but becomes almost negligible with slightly bigger networks (e.g. compare Adam vs Proposed in Table 4, ResNet 20, 32, Table 5 MLP, DCN, compare AdamW vs AGD in Table 1 Transformer).** Firstly, LSTMs tend to have far more parameters than ResNets, as convolutional kernels typically have fewer parameters. This difference can be observed in Table 2 of the paper. Additionally, the improvement in metrics can vary depending on the model’s capacity and the specific dataset being used. For instance, enhancing the AUC of RecSys tasks is known to be challenging, and certain networks are often over-parameterized for smaller datasets (e.g., ResNets on Cifar10). Consequently, achieving improvement from the optimizer’s perspective can be challenging but valuable as well. Lastly, we have conducted extensive experiments of AGD on GPTs since the rise of LLMs, and the results have been highly promising, as indicated in the response to Q1. The pretraining stage of LLMs can typically be time-consuming, often spanning several months. However, we are confident that AGD has the potential to considerably expedite the training of LLMs. **Q3: Having said that, the authors can include experiments with larger and state-of-the-art networks (e.g. ResNet 100+ layers on ImageNet, include some experiments with some networks with 500+ million parameters transformers) to prove me wrong and compare against state-of-the-art performance reported numbers with SGD and/or Adam.** Please refer to the [Global Response 1](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I). Also see Q1. **Q4: Would you consider including experiments with some harder tasks such as estimation tasks, e.g. audio source separation instead of detection and retrieval tasks?** Following [3, 4], we test optimizers against common CV and NLP tasks. We will leave the harder tasks you mentioned for future work. We hope our responses have addressed your concerns. --- References: [1] Reddi, S. J. 
et al., On the Convergence of Adam and Beyond. ICLR 2018. [2] Yurii E. Nesterov. Introductory Lectures on Convex Optimization - A Basic Course, volume 87 of Applied Optimization. Springer, 2004. [3] Zhuang, J. et al., AdaBelief Optimizer: Adapting Stepsizes by the Belief in Observed Gradients. NeurIPS 2020. [4] Yao, Z. et al., ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning. AAAI 2021. --- Rebuttal Comment 1.1: Title: Response to the authors. Comment: Thanks for your response and for trying to address my concerns. Here is my response: > To our knowledge, the issue exists for all optimizers. The generalization bound of Adam-like optimizers also includes $D_\infty$, as shown in Theorem 5 of [1]. The generalization bound of SGD includes $\| x_0 - x^* \|^2$, where $x_0$ is the starting point and $x^*$ is the optimal point, as shown in Theorem 2.1.14 of [2]. Therefore, the statement the issue with the current idea lies in ... how far the solution lies compared to the initialization point in the parameter space is inappropriate. The statement is perfectly valid since the truth is that Adam and other optimizers are preferred over other optimizers only for their experimental results and not for their theoretical properties. Although Adam-like optimizers might have the same issue in proving useful bounds, that fact alone does not mean that all other theoretical bounds for Adam-like optimizers should also prove bounds which are not informative. All in all though, the authors' answer is not correlated with my criticism which was: _"In my humble opinion, the issue with the current idea lies in terms of the number/width of the optimization steps and how far the solution lies compared to the initialization point in the parameter space. The information that one can obtain by gradient differentials in the optimization space is only getting less and less useful while we increase the dimensionality of the optimization space. 
That means that for smaller optimization problems where the optimal solution would lie close to the initialization point, then I can really see the benefit of AGD compared to less Hessian well-informed optimizers like Adam."_. In my initial review I also said: _"I would not consider it appropriate to ask the authors to prove these things but they could put more effort into making their experiments more convincing to ameliorate this huge concern."_, thus, I think the authors have misinterpreted my criticism and considered that I am attacking their proof whereas I was simply talking about the fact that their algorithm might provide boosts over vanilla Adam for smaller optimization spaces. Having said that, I acknowledge and welcome that the authors have tried to run the appropriate large scale experiments to show that my concern is not valid and empirically show that their algorithm outperforms AdamW. > In addition, we have experimentally demonstrated the effectiveness of AGD for models with a large number of parameters, as shown in Figure 1 of the PDF provided in the Global Response. We conducted optimizer experiments on the GPT2 (124 M) and GPT2-Large (774 M) models using the OpenWebText dataset. The results indicate that AGD outperformed the AdamW optimizer in terms of convergence. We hope our results address your concern. Thanks for running this experiment. However, the validation losses reported on https://github.com/karpathy/nanoGPT when training these larger models on OpenWebText are 3.12 and 2.67 for GPT-2 and GPT-2-LARGE, respectively. In the authors' rebuttal, the reported baseline numbers are approximately 3.08 and 2.72 for GPT-2 and GPT-2-LARGE, correspondingly. That fact makes me skeptical about the validity of these experiments and how can a discrepancy like this be caused (I would expect both losses to be higher than the reported ones in https://github.com/karpathy/nanoGPT in the case of less training time or at least have a similar pattern). 
It could be the case that the weight decay could play a role here and Adam would also behave very similarly to AGD. > Firstly, LSTMs tend to have far more parameters than ResNets, as convolutional kernels typically have fewer parameters. This difference can be observed in Table 2 of the paper. Firstly, the above statement is absolutely incorrect; the number of parameters in convolutional and recurrent neural networks depends on the task and the context that one wants to model. It might be true that for some easy problems like image classification for 28x28 pixels, one only needs a few small convolutional kernels in order to obtain the appropriate receptive field. On the contrary, if you had an audio waveform sampled at 16kHz and you wanted to model time-dependencies over 5 second clips, one would need a much larger amount of trainable parameters for a CNN model compared to an LSTM that can implicitly carry the state for each time-step. In Table 2, the authors compare LSTM models for NLP tasks and compare them with CNNs for computer vision tasks with input-sizes of at most (224x224), thus, no generalization can be made from this Table. All in all, the authors' experiments with GPT make me skeptical and my main criticism (AGD behaves similarly to Adam except for smaller optimization spaces) remains. Thus, I cannot increase my score until I see a clear performance improvement of AGD vs Adam for a large-scale model and task. --- Reply to Comment 1.1.1: Title: Clarification to the confusion on GPT-2 Comment: Thank you for your response. I believe the main concern revolves around the performance of GPT-2. The README of nanoGPT states the following: > However, we have to note that GPT-2 was trained on (closed, never released) WebText, while OpenWebText is just a best-effort open reproduction of this dataset. This means there is a dataset domain gap. Indeed, taking the GPT-2 (124M) checkpoint and finetuning on OWT directly for a while reaches loss down to **~2.85**. 
This then becomes the more appropriate baseline w.r.t. reproduction. The numbers you mentioned are evaluations conducted on OpenWebText using OpenAI’s checkpoint, which is trained on the closed WebText dataset. It is expected that the val loss would be higher due to domain shift. The repo author also mentions nanoGPT's actual loss (**~2.85**) in [this section](https://github.com/karpathy/nanoGPT/tree/master#reproducing-gpt-2). Our result for GPT-2 with 50000 steps, approximately 3.08, is larger than 2.85. For testing, we keep the weight decay and other training settings the same, except for the learning rate. We choose lr as 6e-5/1e-4 for GPT-2/GPT2-Large, and delta as 1e-14 based on our experience with the small transformer from the paper. It is worth mentioning that though AGD is not well tuned, the results are promising. I hope this clarifies the confusion and addresses your concern about AGD's performance on large scale models.
Summary: The paper proposes a new optimizer based on finite difference approximation to obtain the inner product between the Hessian row vector and the parameter vector difference from gradients of succeeding steps and an auto-switching function to switch between SGD and the adaptive optimizer. It uses an exponential moving average of gradient instead of gradient for lower variance. Experimental results show that the proposed method performs better than existing methods in most cases. Strengths: The proposed algorithm can efficiently acquire the information of the Hessian. Experimental results show performance improvements. Weaknesses: The explanation of related works is minimum. It needs to be more concretely clarified what problem of existing second-order methods it addresses. The experiments in section 4 only report model performance and do not report computational costs. I need help finding Figures 5 and 6 in the paper. Are they missing? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What is the reason for the performance improvement? Is it due to its ability to find better local optima or the faster convergence in a limited computation budget? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: There is no numerical explanation for the computational cost. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful review. **Q1: The explanation of related works is minimum. It needs to be more concretely clarified what problem of existing second-order methods it addresses.** Second-order methods entail the computation or approximation of the Hessian, which can be computationally demanding. Despite efforts to reduce overhead, such as the Hessian diagonal approximation employed by AdaHessian, these methods are still less efficient compared to first-order methods. In response to Q2, we will present experimental results regarding the computational costs involved. The major problem of the existing second-order methods, like AdaHessian, is that they require extra memory footprints and heavy computation compared to first-order methods, which makes them unusable in practice. Please refer to [Global Response 2](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I), where we compare the resource usage experimentally. **Q2: The experiments in section 4 only report model performance and do not report computational costs.** In theory, AGD requires approximately the same amount of memory as AdamW, as they both store two optimizer states, with slightly higher computational requirements. In order to demonstrate this, we provide experimental results in [Global Response 2](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I). **Q3: I need help finding Figures 5 and 6 in the paper. Are they missing?** Thanks for pointing out this issue. Figures 5 and 6 are in Appendix A.2 due to space limitations. We will provide further clarification in the next revised version, as well as fix a typo in Figure 5. **Q4: What is the reason for the performance improvement? Is it due to its ability to find better local optima or the faster convergence in a limited computation budget?** Our findings suggest that the observed improvement in performance can be attributed to both fast convergence and good generalization. 
The fast convergence is primarily thanks to the gradient difference between the previous and current steps, which effectively incorporates Hessian information into the preconditioning matrix. Furthermore, the auto-switch mechanism we implemented can prevent abnormal updates and, compared to the traditional method of adding a small amount, can avoid introducing additional bias during updates, thereby leading to more stable training and improved generalization. We hope our responses have addressed your concerns.
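The claim that the stepwise gradient difference carries Hessian information can be verified on a quadratic. The sketch below uses our own toy objective, not anything from the paper: for $f(w) = \frac{1}{2} w^\top A w$ the gradient is $Aw$, so the gradient difference between successive steps equals the Hessian applied to the parameter difference exactly (and approximately so for general smooth objectives, by a first-order Taylor expansion of the gradient).

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = A @ A.T + np.eye(5)  # symmetric positive definite Hessian

def grad(w):
    return A @ w  # gradient of f(w) = 0.5 * w^T A w

w_prev = rng.normal(size=5)
w_curr = w_prev - 0.1 * grad(w_prev)  # one gradient-descent step

# Stepwise gradient difference == Hessian times parameter difference.
g_diff = grad(w_curr) - grad(w_prev)
hess_action = A @ (w_curr - w_prev)
assert np.allclose(g_diff, hess_action)
```

This is the same finite-difference identity behind quasi-Newton secant conditions; the rebuttal's point is that AGD gets this curvature signal from gradients it already computes, without forming or approximating the Hessian itself.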
Summary: The paper proposes a new optimizer called AGD that integrates the information of the Hessian into the preconditioning matrix and switches dynamically between SGD and the adaptive optimizer. The authors establish theoretically proven convergence guarantees in both non-convex and convex stochastic settings. The authors validate AGD on a total of six public datasets: two from NLP (IWSLT14 and PTB), two from CV (Cifar10 and ImageNet), and the rest from RecSys (Criteo and Avazu). The experimental results reveal that AGD can outperform or be on a par with the SOTA optimizers. Overall, the paper makes a significant contribution to the field of deep learning optimization. AGD is a promising new optimizer that has the potential to improve the performance of deep learning models on a variety of tasks. Strengths: • AGD can achieve a faster convergence rate over SGD • AGD can automatically switch between stochastic and adaptive optimization, depending on the progress of the optimization. This allows AGD to achieve the best of both worlds, i.e., the fast convergence of adaptive optimizers and the robustness of SGD. • AGD has a few flexible hyperparameters, making tuning for different tasks and datasets easy. • AGD was evaluated on various datasets across different domains, and it outperformed other optimizers in most cases. Weaknesses: • The convergence rate of AGD depends on the hyperparameters δ and β1. If these hyperparameters are not chosen carefully, AGD may not converge to the optimal solution. Looking forward to seeing more comprehensive robustness evaluation using different network structures and larger datasets. • AGD requires more computation than SGD. How is its efficiency for large-scale problems, e.g., large language models? I think performance on large-scale generative model optimization is a very interesting direction. Does the author have plans to try on such a task? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: • Please see weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no limitation discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful review. **Q1: The convergence rate of AGD depends on the hyperparameters δ and β1. If these hyperparameters are not chosen carefully, AGD may not converge to the optimal solution. Looking forward to seeing more comprehensive robustness evaluation using different network structures and larger datasets.** We also conducted robustness tests on $\delta$ for the ResNet18 model using the ImageNet dataset, and the results are shown in Figure 2 in the [provided PDF](https://openreview.net/attachment?id=wLS9DFtY0I&name=pdf). As can be seen, the model performance did not show significant changes within a large range of $\delta$ values. As a common parameter for adaptive optimizers, $\beta_1$ was not adjusted in this work in order to maintain consistency with other optimizers. **Q2: AGD requires more computation than SGD. How is its efficiency for large-scale problems, e.g., large language models? I think performance on large-scale generative model optimization is a very interesting direction. Does the author have plans to try on such a task?** Please refer to [Global Response 1](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I). We hope our responses have addressed your concerns.
Rebuttal 1: Rebuttal: #### Global Response 1. **AGD performance on large models** We conduct optimizer experiments on the GPT2 (124M) and GPT2-Large (774M) models using the OpenWebText dataset. Our code is based on https://github.com/karpathy/nanoGPT. We compare AGD with the default AdamW optimizer. The results are shown in Figure 1 of the provided PDF. As can be seen, AGD outperforms AdamW even after both were run for 50,000 steps. We hope our results effectively address the concerns of the reviewers. 2. **AGD's computational cost** We train a small transformer model for IWSLT14 on a single Nvidia P100. AGD is comparable to the most commonly used optimizer AdamW, while significantly better than AdaHessian in terms of memory footprint and training speed. We will add these results to the next revision.

| Optimizer | Memory (MB) | Time per Epoch (s) | Relative time to AdamW |
| - | - | - | - |
| SGDM | 5119 | 230 | 0.96× |
| AdamW | 5413 | 260 | 1.00× |
| AdaHessian | 8943 | 750 | 2.88× |
| AGD | 5409 | 278 | 1.07× |

In addition, we have included some images in the attached PDF to address specific questions raised by the reviewers. Pdf: /pdf/49c97d64bc2b0c45e9963a385fe121f36d2e8ab6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposed AGD, a novel optimiser that can dynamically switch between an adaptive optimiser and SGD. To achieve this, an auto-switching function is introduced to change the diagonal elements of the preconditioning matrix based on the gradients of adjacent steps. Both a theoretical analysis of the optimiser's convergence and experimental results on six public CV, NLP and RecSys setups are provided, demonstrating that AGD is a promising method both in theory and practice. Strengths: It is often noticed in practice that SGD with proper hyperparameters (learning rate, momentum, and weight decay) can outperform the adaptive optimisers by finding a better local optimum, while the adaptive optimisers require less effort in hyperparameter tuning and sometimes converge faster. The auto-switching strategy may help combine the advantages of both, and the experimental results and analysis demonstrate the soundness and robustness of the chosen auto-switching function. Weaknesses: 1. Optimisers can perform differently on large-scale and small-scale experiments. Although good PTB results are given in the paper, a larger-scale language modelling experiment may still be useful. 2. Some hyperparameter settings, such as those for PTB, are missing in the paper and the supplementary materials. The effects of regularisation methods, in particular L2 and dropout, are not given. This often makes a key difference to the generalisation ability on the test set and the performance of SGD, and could at least be considered in the numerical analysis section. 3. The demonstration of the faster convergence speed of AGD using the case studies in Sec. 3.2 may raise some questions, as comparisons of optimisers with different learning rates and other hyperparameters may not be entirely fair. 4. Extra memory cost, computational cost and possible limitations of the method are not discussed. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: What's the trajectory of Adam at the minimum point B in Figure 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Discussions of regularisation methods (at least with SGD) and large-scale experiments with complex model structures (e.g. a Transformer with both encoder and decoder) are missing. The analysis of the convergence speed in the numerical analysis section may not be completely fair. It would be useful to include an analysis of computation and storage costs. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive review. **Q1: Optimisers can perform differently on large-scale and small-scale experiments. Although good PTB results are given in the paper, a larger-scale language modelling experiment may still be useful.** Please refer to [Global Response 1](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I). **Q2: Some hyperparameter settings, such as those for PTB, are missing in the paper and the supplementary materials. The effect of regularisation methods, in particular L2 and dropout, are not given. This often makes a key difference to the generalisation ability of the test set and the performance of SGD and could be at least considered in the numerical analysis session.** The training settings are consistent across all optimizers, including regularization methods. The specific training details for PTB are provided in the file nlp-task/lstm/run_all_layer{1,2,3}.py, where the general dropout rate is set to 0.4 and the weight decay is set to 1.2e-6 (as mentioned in Appendix A.1). We will provide more details in the revised version. **Q3: The proof of the faster convergence speed of AGD using the case studies in Sec. 3.2 may have resulted in some questions, as the comparisons of optimisers with different learning rates and other hyperparameters may not be fair enough.** As mentioned in Sec. 3.2, all hyperparameters are kept identical, including the learning rate of 1e-3 and betas of (0.9, 0.999). One exception is SGDM, which does not converge under such a large learning rate. **Q4: Extra memory cost, computational cost and possible limitations of the method are not discussed.** AGD and AdamW both store two optimizer states, so their memory footprints should be similar. AGD incurs a slight additional computational cost, which we validate on the transformer model and find to be negligible, especially when compared to optimizers like AdaHessian. 
The results are shown in [Global Response 2](https://openreview.net/forum?id=A954O4tDmU&noteId=wLS9DFtY0I). **Q5: What's the trajectory of Adam at the minimum point B in Figure 1?** The whole trajectory until convergence is shown in Figure 3 in the [provided PDF](https://openreview.net/attachment?id=wLS9DFtY0I&name=pdf). We hope our responses have addressed your concerns.
On the Importance of Feature Separability in Predicting Out-Of-Distribution Error
Accept (poster)
Summary: This paper proposes an easy-to-use dataset-level method for predicting OOD scores. The authors analyze two desiderata in representation learning: high inter-class dispersion and high intra-class compactness. Through some experiments on CIFAR and TinyImageNet, the authors reveal that the inter-class dispersion is strongly correlated with the OOD performance while intra-class compactness does not really correlate with the OOD accuracy. Strengths: 1. The proposed method is well-motivated and explained. The authors first claim that MMD and Fr\'echet distance are not good surrogates for OOD error prediction and suggest using the dispersion score instead. Figure 3 indicates that the dispersion score is indeed better than the conventionally used distances. 2. The proposed method is easy to use and training-free. It is also flexible to different OOD data in sample size and class distributions. 3. The method outperforms previous approaches in most benchmarks. Weaknesses: 1. The experiments are all conducted in elementary and simple experimental settings, i.e., the OOD dataset is set to be some augmentations and corruptions applied to the ID set. This is not really the real-world OOD benchmark setting, as the corrupted OOD dataset still has some class-overlapping information with the ID set. Can it apply to the standard OOD benchmark in CIFAR and ImageNet? For example, the authors could use ImageNet-1k as ID and iNaturalist, Places, Textures, and SUN as OOD. With the current experimental setting with simple data augmentation, it is really hard to judge whether the method can be useful for real-world usage. 2. It would be much more interesting to have non-overlapping ID and OOD datasets in experiments. For example, the authors can train the model on CIFAR100 and use CIFAR10C as OOD or vice versa (CIFAR10 as ID and CIFAR100 as OOD). It also meets a more real-world setting, as CIFAR10/CIFAR100 is a commonly used OOD benchmark. 3. Intra-class compactness alone is not useful. 
Would it be better if intra-class compactness were combined with the dispersion score? Would that be an interesting ablation study? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see weaknesses and limitations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: My main concern is still the simple experimental setting. In my opinion, the proposed method would only be demonstrably useful if the OOD benchmark were replaced with real-world ones. I suggest the authors verify the proposed approach in standard CIFAR and ImageNet-1k benchmarks as done in [1,2,3]. Of course, it is not necessary to plot the curve of accuracy versus distance as done in Figure 1 and Figure 3 (a single correlation value would be sufficient), but it would be important to show that the method of predicting OOD error is useful in practice, especially given that the method does not rely on the class distribution. [1] React: Out-of-distribution detection with rectified activations. NeurIPS21 [2] On the importance of gradients for detecting distributional shifts in the wild, NeurIPS21 [3] RankFeat: Rank-1 Feature Removal for Out-of-distribution Detection. NeurIPS22 Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **About the OOD setting in the experiments** Thank you for pointing out the potential for misunderstanding. We would like to clarify that there are different definitions of "OOD" in various related areas. In OOD detection, OOD examples are those test samples with a different label space from the training set (semantic shift). In OOD generalization [1] and OOD error estimation, we use "OOD" to describe examples with covariate shifts or concept shifts, where their ground-truth labels are included in the label set. The difference between these two settings lies in **whether the label space is shared between the training and test data**. Therefore, it is common practice to use different "OOD" test sets in OOD detection and OOD error estimation. To avoid potential misunderstandings, we will clearly describe the OOD setting in the problem statement (Subsection 2.1) of the final version. Moreover, it is meaningless to calculate the accuracy on an OOD test set with semantic shifts, where the ground-truth labels of all instances are out of the label space. To demonstrate the robustness of the Dispersion score in complex settings, we conduct experiments in a more realistic setting, where some test samples exhibit semantic shifts and others exhibit covariate/concept shifts. The goal is to estimate the performance on those examples with covariate/concept shifts. We show the results in Table 2 (see the attached pdf). Specifically, we inject 10% extra examples from unseen classes (drawn from [300K Random Images](https://github.com/hendrycks/outlier-exposure) and CIFAR-100) on ResNet18. This table shows that our method outperforms all training-free benchmarks, and is comparable with the training-based method ProjNorm. In addition, to demonstrate the effectiveness of the Dispersion Score under natural distribution shifts, we also conduct experiments on some domain-adaptation/domain-generalization datasets, such as PACS, Office-31 and Office-Home. 
The results in Table 1 (see the attached pdf) show that our method outperforms the compared methods in these settings by a large margin, which confirms the advantage of our method. 2. **Can the combination of intra-class compactness and Dispersion Score perform better than Dispersion Score alone?** Thank you for the suggestion. In Table 3 (see the attached pdf), we show that combining dispersion with compactness cannot outperform using the Dispersion Score only. [1] Shen, Z., Liu, J., He, Y., Zhang, X., Xu, R., Yu, H., & Cui, P. (2021). Towards Out-Of-Distribution Generalization: A Survey. ArXiv, abs/2108.13624. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Thanks for the response! Most of my questions have been solved. I would like to increase the score by one level. I sincerely suggest the authors include a discussion about the difference with OOD detection and add some related literature. This would help readers to better understand the work. --- Reply to Comment 1.1.1: Comment: Thank you for checking our rebuttal and raising your score. We will add the discussion in the related work as you suggested. Sincere thanks for your valuable time on this paper!
Summary: In this paper, the authors focus on the task of predicting model performance on unseen/shifted datasets, without support of annotations. To do so, they first show the connection between feature separability and test accuracy, with an intuitive example and theoretical explanation. Based on the analysis, they propose a novel metric, Dispersion score, which measures the inter-class divergence from feature representations. They also reveal that intra-class compactness does not reflect the generalization performance, while inter-class dispersion works as a good indicator. They conduct experiments on CIFAR-C and TinyImageNet-C datasets to show the efficiency and effectiveness of the proposed metric. Furthermore, they also show the advantages of their method in some extreme cases (limited data, partial label set, class imbalance). Strengths: 1.The motivation of this work is reasonable. Intuitively, the prediction performance should be tightly tied with the quality of the learned feature. High inter-class dispersion is one of the goals of self-supervised learning for learning a good representation. The authors also provide an interesting explanation from a theoretical perspective, which is one of the highlights of this work. 2.The analysis of those methods with distribution distance is thought-provoking. In my view, the shifted distance might not be necessarily connected to the test performance, as the shifted features might not be important for the final prediction. For example, an image classifier with sufficient generalization ability would not change its predictions when the background is changed a lot. 3.The proposed method is novel and interesting. To the best of my knowledge, this is the first work to exploit the feature properties of test instances for predicting OOD error. It does not require access to the training data and only uses a forward propagation, which is much faster than existing SOTA methods. 
4. The empirical results are extensive and convincing. The authors not only show that the simple method outperforms existing training-free methods, but also compare to ProjNorm on computational efficiency (575 seconds vs. 11 seconds). The most exciting part for me is the analysis of the flexibility in OOD data, which considers some real-world settings, like class imbalance and limited data. The authors also present that a high intra-class compactness is not necessary for good prediction performance, which may provide a new insight for representation learning. Weaknesses: 1. The presentation of some figures can be improved. For example, in Figure 1b, the magnified box does not seem to match the original area. The four images of Figure 2 are a little small, which might be improved by reducing the blanks between images. 2. Why intra-class compactness does not work is not clear. Although the authors show empirically that intra-class compactness is not an effective indicator, it would be better if the authors could provide an intuitive explanation. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: see weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors discussed the limitations of the proposed score in the adversarial setting, where the feature quality is also broken by adversarial attacks. The authors also provide analysis to show the underlying reason, which is also an interesting contribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Improving the quality of some figures.** Thanks for your constructive suggestions. We will improve the quality of those figures in the revised version, as you suggested. 2. **The reason why intra-class compactness does not work.** Thank you for the thought-provoking question. Previous works in representation learning demonstrate that high intra-class compactness and inter-class separability are correlated with the final accuracy in machine learning. In this paper, we show that intra-class compactness cannot be used to indicate the prediction performance under distribution shift, while inter-class separability still works. A potential reason for the difference is the effect of covariate shifts. Covariate shift may have a significant influence on intra-class compactness, so that compactness cannot effectively reflect the final accuracy. We will explore this hypothesis in our future work. --- Rebuttal Comment 1.1: Title: To response Comment: Thank you for the detailed response. My concerns have been addressed. I also read the other reviews and your responses. You have done a good job and I believe those discussions can further enhance this work. --- Reply to Comment 1.1.1: Comment: Thank you for reading our response and keeping a positive score! We are really grateful for your time and expertise.
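The hypothesis in this reply — that covariate shift inflates intra-class spread far more than it moves class centroids — can be illustrated with a toy simulation. All data below is synthetic, and the two metrics are simplified stand-ins rather than the paper's exact definitions.

```python
import numpy as np

rng = np.random.default_rng(2)

# clean features: two tight, well-separated classes in 2-D
base = np.vstack([rng.normal(0, 0.1, (100, 2)), rng.normal(4, 0.1, (100, 2))])
labels = np.repeat([0, 1], 100)
# simulated covariate shift: additive per-sample perturbation
shifted = base + rng.normal(0, 1.0, base.shape)

def compactness(f, y):
    # mean intra-class variance (lower = more compact); a simplified proxy
    return float(np.mean([f[y == k].var() for k in np.unique(y)]))

def centroid_gap(f, y):
    # distance between the two class centroids; a simplified dispersion proxy
    mus = [f[y == k].mean(axis=0) for k in np.unique(y)]
    return float(np.linalg.norm(mus[0] - mus[1]))

c_base, c_shift = compactness(base, labels), compactness(shifted, labels)
g_base, g_shift = centroid_gap(base, labels), centroid_gap(shifted, labels)
```

Under this simulation the shift blows up the intra-class variance by orders of magnitude while the centroid separation barely moves, which matches the intuition that compactness is fragile under covariate shift while inter-class dispersion is not.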
Summary: This research focuses on predicting test accuracy on shifted datasets without access to ground-truth labels. The authors begin by analyzing a potential issue of the existing methods based on shift distance and point out that the previous distributional distances are not always highly correlated with the out-of-distribution (OOD) error. Then, they proceed with an intuitive example and theoretical explanation, showing the connection between feature separability and test accuracy. Based on this, they propose a novel metric that measures the inter-class dispersion, which is demonstrated to be an effective factor for OOD error estimation. They conducted experiments on CIFAR-C and TinyImageNet-C to validate the advantages of the proposed method. The authors further demonstrate the robustness of the proposed method against class imbalance and data shortage. Strengths: 1. The task studied in this paper is practically important. In some real-world applications, it is necessary to assess the model performance on a given unlabelled dataset. Under those scenarios, OOD error estimation becomes inevitable and valuable. 2. The proposed method is supported by both empirical observations and theoretical analysis. The motivation is clear. 3. The proposed method is novel, effective, and efficient. Previous SOTA methods like ProjNorm need to update the model, which is computationally expensive (it can be infeasible with some large models). They also show that intra-class compactness cannot work well, which motivates deeper exploitation of the properties within the feature distribution rather than the coarse-grained property on the whole dataset adopted in AutoEval. 4. It is also interesting to see that this paper further investigated the performance under some scenarios with imperfect data, e.g., class imbalance, smaller sample size, and partial OOD error prediction. To the best of my knowledge, this is the first work providing such a complete analysis. 
Compared with the previous studies, the proposed approach achieved comparable performance even under these extreme cases. 5. This paper is well-written and easy to understand. The analysis and figures provided by the authors are clear and informative. I believe readers can easily get the core idea and implement it. Weaknesses: 1. Some potential typos should be corrected in the next version. See the Questions part for more details. 2. The discussion about the pseudo labels used for cluster centroid determination can be further extended. See Questions for more details. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - I noticed that when computing the dispersion score, pseudo labels were used to determine the cluster centroids. In Appendix E, the authors compared pseudo-labeling with another clustering method, K-means. However, I am interested in whether it can be further improved by improving the quality of these pseudo labels. For example, filtering out some low-confidence predictions, adopting soft pseudo labels instead of hard ones, or considering wrong pseudo labels as noisy annotations? - I also noticed that the previous work, ProjNorm, also adopted pseudo labels for OOD error prediction. Could you please summarize the differences in how the two methods adopt pseudo labels? - About the evaluation times of the Dispersion score on TinyImageNet (Table 2). The time for ResNet50 should be longer than that for ResNet18. Is there a typo that records the times in the wrong order? - From the analysis in 4.4, does it mean that we can focus more on inter-class dispersion instead of intra-class compactness in representation learning? - Some potential typos: - Line 221, Spearson -> Pearson? - Line 240, Table 6 -> Table 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I did not see any severe limitations in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. **Can improving the quality of pseudo labels further enhance the performance of OOD error estimation?** To verify whether improving the quality of pseudo labels can further improve the estimation performance, we conduct experiments on Tiny-ImageNet with ResNet18 by filtering out low-confidence predictions under a certain threshold. From the results below, we can observe that selecting high-confidence samples even degrades the estimation performance. Furthermore, we present in Appendix D that using ground-truth labels cannot improve the estimation performance, showing that improving label quality may not be a correct direction. As we discussed in the response to reviewer 1A8L, pseudo labels in our method are used to introduce the bias of the linear layer, so a potential direction might be using the softmax output instead of the one-hot pseudo labels.

| Threshold | 0.0 | 0.05 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
| --- | - | - | - | - | - | - | - | - | - | - | - |
| | 0.966 | 0.965 | 0.966 | 0.945 | 0.917 | 0.886 | 0.875 | 0.847 | 0.821 | 0.798 | 0.786 |

2. **The difference between Dispersion and ProjNorm in adopting pseudo labels.** Thank you for the great question. Yes, both our Dispersion Score and ProjNorm adopt pseudo labels during OOD error estimation, but we note that the role of the pseudo labels in these two methods is somewhat different. In ProjNorm, pseudo labels are used in a "self-training" manner, which is commonly used in semi-supervised learning. As the number of mislabeled out-of-distribution (OOD) examples increases, the training deviation of parameters tends to expand further. In this way, the distance in parameter space can be used to measure the test accuracy. In our method, pseudo labels are used to introduce the bias of the linear layer, which also affects the final predictions. If we simply used the true labels, the Dispersion score could only measure the quality of the learned representations, instead of the final accuracy. 3. 
**Do we need to pay more attention to inter-class dispersion than to intra-class compactness in representation learning?** In this work, our findings indicate that there is no significant linear correlation between intra-class compactness and test accuracy on out-of-distribution datasets. However, there might be some other form of relationship between them, e.g., a nonlinear correlation. Therefore, it may need more analysis to explore the specific effects of intra-class compactness in representation learning, which we believe can benefit from the insights of this work. 4. **Some typos** Many thanks for your suggestions. We will fix those typos in the revised version. --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thanks for the response from the authors. My concerns have been addressed. In addition, I attached some further comments w.r.t. the response from the authors as follows: Firstly, it is great to see that the authors provided additional experiments by adopting different confidence levels w.r.t. the pseudo labels. The results indicated that filtering out samples with a confidence threshold is even detrimental to the final performance. According to my understanding, the model obtained on the training set can be poorly calibrated on the OOD dataset (usually we do not apply a calibration process in OOD error prediction), and discarding low-confidence samples with a threshold can be considered as introducing more over-confidence in disguise when calculating the dispersion score. Is this a reasonable explanation? Secondly, it is also interesting to see the results on the open-set OOD prediction setting (Table 2 of the pdf attachment). Although the current OOD error prediction task does not consider the existence of label space change, it is still interesting to see that the proposed method outperforms the baselines under the constructed open-set setting. 
--- Rebuttal Comment 1.2: Comment: Dear Reviewer HxyT, Thank you so much for recognizing this work. In our understanding, filtering out test samples by a confidence threshold has two negative impacts on OOD error estimation: information loss and high bias. Firstly, the true accuracy is calculated by two parts of information: correctly-classified samples and wrongly-classified samples. If we filter out those samples with low confidence, the calculated metric may ignore information from wrongly-classified samples, leading to the degradation of the estimation performance. Secondly, the number of test samples for calculating the metric is reduced after filtering out samples with lower confidence. Therefore, the calculated metric would be sensitive to those remaining samples that are biased toward high confidence (corresponding to high accuracy). With these two effects, the metric would tend to give overly optimistic estimations if we only use examples with high confidence. Best Regards, Authors
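The bias described in this reply — that a subset filtered by confidence over-estimates performance on the full test set — can be reproduced in a toy simulation. Everything below is synthetic; the confidence–correctness relation is a hypothetical stand-in for an over-confident model under distribution shift.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# simulated test set: confidence of the predicted class, and whether the
# prediction is correct (correctness rises with confidence, but the model
# is over-confident, as is typical on shifted data)
conf = rng.uniform(0.2, 1.0, n)
correct = rng.random(n) < np.clip(conf - 0.2, 0.0, 1.0)  # hypothetical relation

true_acc = correct.mean()          # accuracy on the full test set
keep = conf > 0.7                  # filter out low-confidence samples
filtered_acc = correct[keep].mean()

# the retained subset is biased toward high confidence, so any statistic
# computed on it over-estimates performance on the full test set
```

Here `filtered_acc` lands well above `true_acc`, matching the "information loss + high bias" explanation: the wrongly classified, low-confidence samples are exactly the ones the filter throws away.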
Summary: This work aims to predict classifier accuracy on unlabeled test samples. To achieve this goal, this paper proposes a feature separability-based dataset-level score to check whether features have high inter-class scatter. This feature separability score is calculated by measuring how far the centroids of features that share the same pseudo-label predicted by the model deviate from the center of all features on average. The experiments show that such inter-class dispersion is strongly correlated with model accuracy. Strengths: + [***Good clarity***] This work is well-written and easy to follow. The method is well presented, the visualizations are helpful, and the experimental settings are clearly introduced. + [***Measuring feature separability is well-motivated***] Under distribution shifts, the features of the source and target can be scattered differently. Using such information to reflect model accuracy seems reasonable. Weaknesses: - [***More results to illustrate the relationship between distribution gap and accuracy***] In the accuracy prediction, two metrics are proposed to measure the distribution gap. Please show their results in Figure 1 to better illustrate the motivation regarding the potential limitation of the distribution gap for accuracy estimation. - [***The definition of dispersion score is not sound***] The class cluster is based on the classifier's prediction. What if the classifier gives biased predictions on test sets? For example, on adversarial examples, the classifier maintains inter-class separability but totally misclassifies the data. Moreover, it is unclear why using the ground-truth labels to define class clusters does not give stronger correlation strength (Section D). Moreover, it is unclear whether the proposed method can handle cases where some classes do not appear or some unseen classes appear. For example, ProjNorm discusses the label shift where some classes are missing. Under such a scenario, ProjNorm is less effective than other methods. 
- [***The experimental setting is somewhat limited***] This work only provides experiments on small-scale datasets (e.g., CIFAR-10/100 and TinyImageNet). Considering the literature, results on iWILDS and ImageNet should be included. For example, DoC and ATC report results on ImageNet datasets with several natural distribution shifts, such as ImageNet-V2, ImageNet-R and ObjectNet. Without results on such realistic datasets, it is hard to conclude the robustness and effectiveness of the proposed method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - Why does using the ground-truth label to define class clusters not give stronger correlation strength (Section D)? - Under an adversarial attack, the features might still have high dispersion while the classifier achieves low accuracy. Please comment on this and discuss potential solutions to alleviate it. - Class imbalance results (Table 3) are not sufficient. How do other existing methods (e.g., DoC and ATC) perform under such class imbalance? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This work reports results for some special cases, such as class imbalance and adversarial attacks. Another potential limitation is the open-set problem, where some unseen classes arise. Also, some classes may be missing during testing. It would be better to mention and discuss both cases, as the proposed scatter score can be significantly affected. ***[Post-rebuttal]*** > I would recommend excluding the open-set results from the main paper due to the evaluation metric's limitation in assessing only the seen classes. Moreover, ATC reports results on some real-world and large-scale datasets like iWILDS and ImageNet; including such datasets would make the submission solid.
This addition would enhance the robustness of your research and highlight its practical applicability. Given that you've already showcased results on domain adaptation datasets such as PACS, Office-31, and Office-Home, I am inclined to think that the current evaluation is sufficient. > [Additional suggestions which do not impact the rating of this paper] You might consider referring to two relevant works: "Unsupervised Accuracy Estimation of Deep Visual Models using Domain-Adaptive Adversarial Perturbation without Source Samples," which also features results on domain adaptation datasets, and "Characterizing Out-of-Distribution Error via Optimal Transport." This latter work, which assumes a consistent marginal label distribution between training and test sets, stands in contrast to your method and may help highlight the merits of your approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
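The dispersion-style score described in the review's summary (per-pseudo-label centroids deviating from the global feature center) can be sketched as follows. This is a hypothetical reading only: the `dispersion_score` helper, the squared-distance formulation, and the cluster-size weighting are our assumptions, not the paper's exact formula.

```python
import numpy as np

def dispersion_score(features, pseudo_labels):
    """Size-weighted average squared distance between per-class feature
    centroids and the global feature centroid (higher = more scattered)."""
    global_center = features.mean(axis=0)
    total = 0.0
    for c in np.unique(pseudo_labels):
        members = features[pseudo_labels == c]
        centroid = members.mean(axis=0)
        total += len(members) * np.sum((centroid - global_center) ** 2)
    return total / len(features)

rng = np.random.default_rng(0)
# Two tight, well-separated clusters vs. one undifferentiated blob.
tight = np.concatenate([rng.normal(0, 0.1, (50, 2)),
                        rng.normal(5, 0.1, (50, 2))])
blob = rng.normal(0, 0.1, (100, 2))
labels = np.array([0] * 50 + [1] * 50)
# Separated clusters yield a higher dispersion score.
assert dispersion_score(tight, labels) > dispersion_score(blob, labels)
```

Under this reading, a well-separated test set yields a high score, which is the property the review's correlation claim relies on.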
Rebuttal 1: Rebuttal: Great thanks for your constructive comments and suggestions! Please find our response below. 1. **Improving Figure 1 by adding the two metrics.** Thank you for the suggestion. In the final version, we will add the corresponding metrics (shown in Table 1) to the figures. We list the numerical details below for your reference. | $Method$ | $Fr\acute{e}chet$ | $MMD$ | | ---- | ---- | --- | | $R^2$ | 0.858 | 0.804 | | $\rho$ | 0.964 | 0.943 | 2. **Can Dispersion Score work under adversarial attacks?** In this paper, our method is designed for predicting the test accuracy on OOD data, which is generally produced by natural corruption or perturbation. As an extension, we discussed the adversarial setting in Appendix H. The results show that existing methods of OOD error estimation, including ours, cannot provide meaningful performance under adversarial attacks. This is consistent with the observation described in ProjNorm, where their method predicts an error of 28.1% when the true error is 100.0%. Furthermore, we find that feature dispersion may provide an intuitive understanding of the inherent difference between adversarial and (natural) corruption robustness. As shown in Figure 7, adversarial perturbations surprisingly increase the distance between different clusters, while corruption perturbations decrease the separability of the clusters. The results reveal that adversarial perturbations affect the model predictions in a different way: assigning instances to the wrong groups and enlarging the distance among those groups. This explains why the dispersion score cannot be applied to predict performance on adversarial examples. While the limitation on adversarial examples persists (as for ProjNorm), we hope the above insight can inspire specifically designed methods for predicting adversarial errors in the future. 3.
**The performance of Dispersion Score in partial OOD error prediction.** In Appendix G, we provide an analysis for the partial setting, where the label space of the test dataset is only a subset of that of the training dataset. In Table 7, we can observe that our method achieves better performance than existing methods when some classes disappear during testing. 4. **The performance of Dispersion Score when the test set includes unseen classes.** Many thanks for your constructive suggestions. We conduct this experiment by injecting 10% extra examples from unseen classes with a pretrained ResNet18. We use Tiny-ImageNet and CIFAR-10 as the ID datasets, and 300K Random_Images and CIFAR100 as the datasets with unseen classes. The results are presented in Table 2 (see the attached pdf), which shows that our method outperforms previous methods. 5. **Why can't using the true labels outperform our method with pseudo labels?** It is because the Dispersion score with ground-truth labels only measures the quality of the learned features, regardless of the last linear layer. In an extreme case with a perfect feature extractor, the accuracy might be low due to a poor linear classifier, even though the Dispersion score with ground-truth labels is very high. Using pseudo labels, the Dispersion score incorporates the bias of the linear classifier, thereby showing better performance in predicting OOD errors. We may provide a formal analysis to show the advantages of pseudo labels in the Dispersion score in the final version or future work. 6. **More results on datasets with several natural distribution shifts.** To illustrate the effectiveness of Dispersion Score on realistic datasets, we evaluate the estimation performance via MSE on ImageNet-R with WRN-50-2. The results are shown below.
|MSE|Entropy|ATC|Frechet|ProjNorm|Dispersion| | - |- |- |- |- |- | |ImageNet-R|27.67|13.27|11.31|7.23|$\textbf{4.46}$| In addition, we also evaluate our method on datasets with more complex distribution shifts, such as PACS, Office-31 and Office-Home, whose results are shown in Table 1 (see the attached pdf). We can observe from those tables that Dispersion Score obtains better and more stable performance than the compared baselines, while the state-of-the-art method, ProjNorm, almost fails in these cases. 7. **The results of other compared methods in the imbalanced setting.** We show the results of other compared methods in Appendix F. In Table 6, we illustrate the performance of the remaining methods, including Rotation, Entropy, ConfScore, AgreeScore, ATC and $Fr\acute{e}chet$, under the imbalanced setting. The results show that our method outperforms those compared methods by a meaningful margin. --- Rebuttal Comment 1.1: Title: Follow-Up Discussion Comment: Dear Authors, Thanks for the response! - I am interested in the provided "The performance of Dispersion Score when the test set includes unseen classes". With new classes, how do you evaluate classifiers? By viewing the unseen classes as one outlier class, or just evaluating on the original seen classes? - "Why can't using the true labels outperform our method with pseudo labels?" This part is still not clear to me, especially "Dispersion score with ground-truth labels only measures the quality of the learned features, regardless of the last linear layer". Best, Reviewer 1A8L --- Reply to Comment 1.1.1: Comment: Dear Reviewer 1A8L, Thank you for the further questions and we are glad to have an in-depth discussion with reviewers. Please find our response below. 1. **How to evaluate the model performance when the test set includes examples from unseen classes?** For this setting, we only calculate the accuracy on the original seen classes.
Although we do not care about the performance on those examples from unseen classes in this task, those open-set samples may have a large impact on the metrics, which are used to estimate the prediction performance (as mentioned by Reviewers 1A8L and ALm9). Our results show that the Dispersion score can achieve SOTA performance in this setting. Please kindly let us know if there are any recommended evaluation methods for this setting. 2. **Why does the Dispersion score with ground-truth labels only measure the feature quality?** We would like to clarify that the final predictions of a deep network depend on both the learned feature $\boldsymbol{z}$ and the linear classifier $f_{\omega}$: $\boldsymbol{p}=f_{\omega}(\boldsymbol{z})$. If we calculate the Dispersion score with the ground-truth labels, the metric will not be related to the linear classifier $f_{\omega}$, as it only uses the feature extractor $f_g$ and the ground-truth labels. In this manner, the Dispersion score with the ground-truth labels does not contain the biased information of the learned classifier, thereby being suboptimal in estimating the final prediction performance. We also gave an extreme case in the previous response, which may provide an intuitive understanding of this part. Best regards, Authors --- Reply to Comment 1.1.2: Comment: Sincere thanks for your informative feedback and valuable time in the discussion. We will incorporate the new results and explanations into the final version appropriately. We are open to further discussion if there are any remaining concerns.
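The point in the reply above (predictions $\boldsymbol{p}=f_{\omega}(\boldsymbol{z})$ depend on the linear classifier, while ground-truth-label dispersion never sees it) can be illustrated with a toy numpy sketch. The `dispersion` helper and all variable names here are our own illustration, not the paper's exact formulation.

```python
import numpy as np

# Frozen features z and ground-truth labels y for a toy 4-class problem.
rng = np.random.default_rng(1)
z = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)

def dispersion(feats, labels):
    """Mean distance of per-class centroids from the global centroid."""
    center = feats.mean(axis=0)
    return float(np.mean([
        np.linalg.norm(feats[labels == c].mean(axis=0) - center)
        for c in np.unique(labels)
    ]))

# Two different linear classifiers f_w over the same frozen features.
w1, w2 = rng.normal(size=(2, 8, 4))
pseudo1 = (z @ w1).argmax(axis=1)
pseudo2 = (z @ w2).argmax(axis=1)

# Ground-truth-label dispersion takes no classifier argument at all, so
# it cannot reflect classifier bias; pseudo-label dispersion does.
assert dispersion(z, pseudo1) != dispersion(z, pseudo2)
assert dispersion(z, y) > 0
```

With ground-truth labels, swapping the classifier leaves the score unchanged by construction; with pseudo labels, the score moves with the classifier, which is the reply's argument for why pseudo labels track prediction performance better.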
Rebuttal 1: Rebuttal: ## General Response We thank all the reviewers for their time, patience, and valuable comments. We are glad that all reviewers agree that our work is *well-motivated and well-organized, clear, and concise*. Reviewers HxyT, BweV, and ALm9 appreciate that *the experimental results are strong and convincing*. We are also encouraged that reviewers find this method *novel, comprehensively analyzed (BweV, HxyT), and easy-to-use (ALm9)*. We respond to each reviewer's comments in detail, respectively. And we put three tables in the attached pdf file, which support our claims in the responses. In the revised version, we will also update the manuscript according to reviewers' suggestions, and we believe this makes our paper much stronger. Pdf: /pdf/fe494754b31c28a63ed65dfb6e84cfc4319c8655.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Image Captioners Are Scalable Vision Learners Too
Accept (oral)
Summary: This paper shows that training a ViT with the image-to-text generation (i.e., image captioning) objective is an effective way to learn good visual representations for downstream tasks. The paper systematically compares between the image captioning and the image-text contrastive learning (CLIP) objectives, where image captioning leads to ViTs that are comparable to the CLIP-ViT. Furthermore, the paper proposes parallel decoding as an additional pre-training objective to complement the conventional autoregressive decoding. Strengths: - The paper systematically demonstrates a refreshing observation: image-to-text generative learning leads to vision encoders as good as contrastive-learned ones. This opens up more research opportunities for visual representation learning using language supervision. - The pre-training details are well-controlled to ensure a fair comparison between Cap and CLIP. - The paper performs a comprehensive evaluation on the pre-trained ViT, including both image classification tasks and vision-language tasks. Table 8 is very nice in particular to compare the frozen ViTs. - It is interesting to see that Cap/CapPa outperforms CLIP on tasks that require fine-grained language understanding. - The proposed parallel prediction makes sense intuitively as a way to enforce stronger supervision on the ViT. Weaknesses: I do not find major weakness from this paper. There is a minor limitation as detailed below. More questions about the paper are listed in the next section. Minor limitation: Since Cap-ViT is pre-trained using an encoder-decoder paradigm, its representations would be more suited for similar encoder-decoder tasks (image caption, VQA) compared to CLIP-ViT. Therefore, frozen adaption may not justify the advantage of Cap-ViT on such tasks. It would be good to also report fine-tuning performance. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How does the size of the text decoder affect the representation learning performance? - What if the pre-training is performed using a pre-trained text decoder such as T5, does it improve representation learning? - Would Cap and CLIP have a complementary effect if they are combined as a multi-task pre-training objective such as in BLIP? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes the authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
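A token-level toy sketch of the two decoding objectives compared in this review's summary: conventional autoregressive decoding (Cap) versus the proposed parallel decoding (CapPa), which masks all input tokens per the paper's ablation. The token ids and the `MASK`/`BOS` values are our own illustrative assumptions, not the paper's vocabulary.

```python
# Toy caption of four token ids; MASK/BOS values are illustrative.
caption = [5, 8, 2, 9]
MASK, BOS = 0, 1

# Autoregressive decoding (Cap): input is the right-shifted caption,
# so position t predicts token t given tokens < t (plus the image).
ar_input = [BOS] + caption[:-1]
ar_target = caption

# Parallel prediction (CapPa): every input token is replaced by MASK,
# so all positions are predicted independently from the image alone.
pa_input = [MASK] * len(caption)
pa_target = caption

assert ar_input == [1, 5, 8, 2] and pa_input == [0, 0, 0, 0]
```

Both variants keep the same targets and decoder; only the input construction changes, which is why mixing them on a fraction of batches leaves model capacity and pretraining compute unchanged.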
Rebuttal 1: Rebuttal: **Minor limitation: Frozen adaption may not justify the advantage of Cap-ViT** We actually tried fine-tuning the image encoder along with the decoder for both CLIP* and Cap/CapPa models and did not obtain an improvement for any of the models. This is consistent with prior work which did not observe an improvement either for CLIP-style models when fine-tuning with the same decoder-based setup, see [2, Sec.5.7]. Frozen adaptation is also favorable from a computational perspective as no gradients have to be propagated through the encoder. \ **How does the size of the text decoder affect the representation learning performance?** An ablation of the decoder depth for Cap can be found in Table 7 (right). 3 and 12 layer decoders obtain a 1-3 point lower 10-shot classification accuracy across the majority of the eval sets compared with the (default) 6 layer decoder. \ **What if the pre-training is performed using a pre-trained text decoder such as T5, does it improve representation learning?** This is an interesting question. We investigated this setup with a pretrained T5-Base decoder (which has 12 decoder layers instead of the 6 in Cap/CapPa). First, to enable stable training we needed to reduce the learning rate from 1e-3 to 5e-4 and set the optimizer beta2 parameter to 0.95. Furthermore, training was only stable when unfreezing the cross-attention parameters. We also explored variants where we re-initialized the cross-attention parameters, and another variant where additionally all decoder parameters were trained. For all other design choices we follow the Cap-ViT B/16 setup with the 900M example schedule. None of these variants performs better than Cap overall, and the more we allow the decoder to deviate from its language-pretrained weights, the better the vision performance gets. Using a pretrained T5 decoder (10-shot linear eval.)
| model | ImageNet | CIFAR100 | Pets | Cars | |:-------------------------------|-----------:|-----------:|-------:|-------:| | Cap | 49.7 | 56.0 | 72.6 | 74.7 | | Cap (12 dec. layers) | 48.7 | 54.8 | 74.4 | 73.8 | | Cap T5 | 42.8 | 44.9 | 69.3 | 62.3 | | Cap T5 (rein. xatt.) | 43.7 | 45.7 | 68.3 | 66.9 | | Cap T5 (rein. xatt., unfreeze) | 48.6 | 55.2 | 72.0 | 75.6 | \ **Would Cap and CLIP have a complementary effect if they are combined as a multi-task pre-training objective such as in BLIP?** While there is anecdotal evidence in the literature that these losses can have complementary effects (see e.g. [32, 33, 52]) these experiments do not take pretraining compute into account and/or do not remove unnecessary network components when ablating losses. By contrast, our experiments control for these factors and show that both Cap/CapPa and CLIP lead to competitive vision encoders. It is therefore unclear how much complementary signal they provide when controlling for model capacity and pretraining compute. We leave analysis of this aspect at scale for future work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and the additional experiments. I confirm my original score of strong accept. This paper provides interesting and refreshing observations with strong experimental support.
Summary: This paper shows that the captioning loss is a competitive alternative to pretrain image backbones compared with contrastive loss (CLIP) when using the same training budget and training data. The standard next token prediction objective on the full caption sequence (Cap) is complemented with a parallel prediction loss (CapPa) which is used on a quarter of the training batches. Experiments notably show: - the benefit of CapPa over Cap in a variety of settings - for classification, CLIP models are better than CapPa models at linear probing but the gap is bridged when using MAP probing. - with LiT transfer, CapPa models perform competitively in classification and are better at vision and language tasks like VQA or image captioning. - CapPa models scale well with model size and training budget. - CapPa models are better at attribute / relation / order prediction. Strengths: - Alternating parallel prediction with autoregressive prediction improves downstream transfer. - The experimental setup is well described. - Extensive experiments with interesting insights, e.g., the CapPa backbones are most competitive when exploiting all the token embeddings (and not merely average pooling them) which makes sense as they are all used for cross-attention. Weaknesses: - The layout of the tables and figures is hard to follow: Table 2 appears before Table 1, Table 5 is not discussed anywhere, Figure 3 is placed long after it is discussed, and Table 10 (L238) does not exist. Explaining the CLIP* and 8k/16k meaning in the caption of Table 2 would also help. - How do the compute requirements (e.g. gpu memory) of the CLIP* with 8k/16k batch size compare with CapPa? - Most experiments use a proprietary dataset (WebLi) for pretraining and the code is not provided, which harms reproducibility. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Limitations are discussed in Section 5. Potential negative societal impact is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The layout of the tables and figures is hard to follow** Thank you for bringing this to our attention. We will improve the alignment of tables and figures with the text flow in the next revision. \ **How do the compute requirements (e.g. gpu memory) of the CLIP\* with 8k/16k batch size compare with CapPa?** The default Cap/CapPa model and training setup is chosen such that it closely matches the corresponding CLIP* setup in terms of actual accelerator hours (and examples seen). Table 1 in the paper reports TPUv4 hours per billion examples seen along with parameter count (which is also matched for the two model families). Below are the memory requirements per chip when training the models on 64 TPUv4 chips (the same setup as used to determine TPU hours in Table 1). The memory requirements of CapPa and CLIP* are comparable for the same batch size as well. | model | TPUv4 mem | |:------------------|-----------:| | Cap/CapPa B/16 | 19.62 GiB | | CLIP* (8k) B/16 | 20.05 GiB | | CLIP* (16k) B/16 | 31.46 GiB | \ **Most experiments use a proprietary dataset (WebLi) for pretraining and the code is not provided, which harms reproducibility.** The experiments on the publicly available LAION-400M data set (Sec. 4.4, Fig. 6) show that our most important results transfer to publicly available data. Furthermore, we only perform very simple text-based filtering of the WebLI data (which is from the public web) as done by prior work [4, 25], and we do not use any image-text similarity based filtering or other sophisticated filtering procedures. We are working on a code release and are hopeful to publish code before the conference. 
\ **Potential negative societal impact is not discussed.** We plan to include the following discussion in the paper: Our models fit in the broader context of large scale vision-language pretraining and as such share many of the benefits and issues of related models such as [40, 21, 52, 33, 50]: They produce versatile vision models which obtain strong performance on natural images, on OCR-related tasks, and also when combined with a generative language decoder. These capabilities enable many useful applications (e.g. assistive technologies, medical imaging), but also potentially harmful ones (e.g. surveillance). We generally recommend either employing the CapPa vision encoder with a new, task-specific prediction head, or using the pretrained decoder for scoring only. We do not recommend the pretrained decoder for downstream image captioning applications without further refinement, as it is trained on a large number of alt-texts from the web. Harmful biases should be carefully assessed in the context of the concrete downstream application and prediction head used. For example, when combining the encoder with a (potentially pretrained) decoder for captioning or VQA, an assessment of hallucinations, attribute binding issues and stereotypical attribution should be done. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing a rebuttal, have read other reviews, and confirm that I am inclined to accept this paper. In particular, it is valuable to see that CapPa has memory requirements similar to CLIP* (8k). I encourage the authors to improve the layout of tables and figures, include this discussion of potential negative societal impact in the paper, and release the code to improve reproducibility.
Summary: This paper revisits image captioning as a pretraining task for learning general vision encoders from web image-text pairs. Surprisingly, the empirical study shows that captioning pre-trained vision encoders is competitive or better than contrastively pre-trained ones on image recognition and vision-language tasks. Strengths: - The paper presents a novel and interesting finding: using captioning as a pre-training scheme can also achieve strong results compared with contrastive ones. The empirical study is valuable to the community. - The paper presents an in-depth analysis of different design factors, such as the use of decoders, encoders, and pre-training data. One can find many insightful discussions in the experiment section. Weaknesses: - Although captioning can be a promising scheme for pre-training, it may not be able to replace the existing contrastive pre-trained objective. For many established tasks, such as image-text retrieval or estimating the similarity between a given image-text pair, CLIP-style models are convenient and offer a more efficient computation. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The paper is well-written and technically flawless. I don't have significant concerns about this paper. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **CLIP-style models are convenient for image-text retrieval/computing image-text similarities and offer a more efficient computation.** This is indeed a downside of captioning-based approaches. On the upside, Cap and CapPa are much better than CLIP-style models in zero-shot classification tasks where word order, attribution, and relation are important (see Table 6). Furthermore, we show that LiT-tuning [56], where a text encoder is trained to match a frozen image embedding, represents an efficient way to equip Cap/CapPa with these capabilities (Table 4). --- Rebuttal 2: Comment: Thanks. I don't have further comments. I will keep my rating.
Summary: In this paper, the authors make a controlled comparison between two pretraining approaches to learning visual representations from language supervision: contrastive (CLIP-style) and generative (= image captioning, e.g. VirTex, SimVLM, BLIP, CoCa, etc.). The goal of this paper is not to advance the state-of-the-art, but rather to observe the model/data scaling behavior of these pretraining tasks. Experiments show that captioning-pretrained models can match or outperform CLIP-style contrastive models on several multi-modal tasks. Strengths: **I think this paper, in its current form, already matches the quality of a typical publication at the NeurIPS conference.** I strongly recommend acceptance; it is relevant to the conference audience and will spur interesting discussion in the community. Below I highlight the main strengths of the paper to substantiate my assessment: 1. **Paper presents contrary evidence to existing results:** The vision community is making rapid progress with vision-language models as they enable new transfer applications that can be specified using natural language. Much of this progress in the last 2-3 years was catalyzed by the development of CLIP (and concurrent works like ALIGN). Ever since, the vision community has largely gravitated towards pushing progress on multi-modal contrastive models, following the "image captioning models converge slower on web data" result from the CLIP paper. This paper presents a piece of contrary evidence that simple captioning-only models can match or outperform their contrastive counterparts. 2. **Promising alternative to poor language understanding of contrastive models:** Image captioning as a pretraining task has been studied in previous models (e.g. VirTex, SimVLM, BLIP, CoCa, etc.). The main novelty of this work is a direct comparison of captioning with the contrastive objective at scale, with controlled model capacity and training dataset size.
Hence, I think this is a timely contribution that will force practitioners and researchers to rethink the relevance of text-generative models and side-step the embarrassing failures of contrastive models, e.g. their inability to distinguish "man eating a sandwich" from "sandwich eating a man". 3. **New evaluations with captioning models show practical runtime solutions for classification/retrieval:** Captioning models are known for their slow image/text retrieval runtime since they cannot "cache" the text classifier weights once like contrastive models. However, they are better at language understanding than contrastive models, which are known to behave as bag-of-words models. Authors show evaluations related to "LiT tuning" to convert a captioning-pretrained image encoder to a contrastive model -- this helps overcome the runtime overhead of image captioning-only models. 4. **Experiments are thorough and well presented:** The paper studies a targeted comparison between captioning and contrastive pretraining approaches. The authors present a series of experiments and evaluations to support this study. All comparisons seem fair and controlled to the best of my knowledge, with differences specified wherever relevant. Modeling ablations and experiments with a different dataset (LAION-400M) make the study more self-contained. Many evaluations report error bars wherever appropriate. 5. **Excellent clarity in writing and presentation:** The motivation for this study is precisely stated in the abstract and introduction. The coverage of related work is broad and comprehensive. All technical details for empirical analysis are well-stated and easy to follow. The main paper and supplementary material have adequate implementation details to aid reproducibility. Weaknesses: I have some questions and suggestions that could make the study more comprehensive. Have the authors considered the following experiments in their study? 1.
**The exact autoregressive language model used in CLIP paper:** The CLIP paper uses a different autoregressive model, and this particular model is shown to converge slowly. Have the authors tried this exact architecture? Instead of a transformer decoder with cross-attention to image features (e.g. like VirTex), CLIP's autoregressive baseline has a SimVLM-style design wherein image features are pooled into a 2x2 grid and passed to the text model as the first four tokens. The model follows the transformer encoder design and predicts caption tokens autoregressively. 2. **Backward captioning or masked language modeling?** Have the authors considered auxiliary objectives used by prior works, such as backward captioning (VirTex) or masked language modeling (ICMLM)? These objectives can amortize the cost of the forward pass through the image encoder and provide denser gradients to the image encoder. 3. **[Related to above] multiple parallel decoders:** The above suggestion can be extended to enable the use of multiple lightweight text decoders with multiple auxiliary objectives. One can design each decoder head with reduced capacity to make all models have comparable sizes. Training with such multiple objectives should speed up convergence. 4. **Evaluation on dense prediction tasks?** If the goal of this study is to learn high-quality image encoders, then I suggest the authors include additional evaluations with dense prediction tasks like object detection and segmentation. These tasks are ubiquitous in vision and quite challenging. I suggest that the authors could train a ViTDet-style model with a frozen/fine-tunable image encoder from Cap/CapPa training. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Minor suggestion: - This paper references CLIP in many places in the text. However, it is sometimes awkward to read as "[40] showed..." or "released by [40] ...".
Ultimately, it is personal preference, but I may recommend the authors use `citet` format like "Radford et al. [40] showed that ..." for a better reading experience if they do not have a preference. - `Line 84`: GeLU -> GELU. ReLU ("Re" = "Rectified") and GELU ("GE" -> "Gaussian Error") :-) - What is the "(ok:as)" in section 4.2 title? Seems like a latex macro :-) Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have included a reasonable discussion about the limitations of this study (and their trained captioning models) in the final section of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for suggesting additional experiments, many of which we were able to address. \ \ **Performance of the exact autoregressive language model used in CLIP paper** We trained this exact model (with a ResNet-50 encoder as in the CLIP paper) for 900M examples seen and found that it performs somewhat worse than the same encoder model together with the transformer decoder architecture. However, we believe that exploring alternative, potentially simpler decoder architectures at scale is an interesting direction for future work. Comparing prefix decoder with baselines (10-shot linear eval. and scoring) | model | ImageNet | CIFAR100 | Pets | Cars | INet zs. | |:------------------------------|-----------:|-----------:|-------:|-------:|-----------:| | CLIP* (8k) R50 | 39.8 | 33.5 | 49.2 | 60.9 | 43.6 | | Cap R50 (transformer decoder) | 37.8 | 33.3 | 48.6 | 52.4 | 28.5 | | Cap R50 (prefix decoder) | 36.8 | 30.4 | 41.5 | 45.4 | 27.6 | \ **Cap/CapPa with backward captioning or masked language modeling** We explored masked language modeling in the context of parallel prediction by masking only a fraction of the tokens, but only observed improvements over pure autoregressive modeling when masking all tokens (see Sec. 4.4, Parallel prediction). As for backward captioning, we trained Cap while randomly reversing the caption with probability 0.5 (with the 900M examples schedule). This ensures that model capacity and pretraining compute remain unchanged. We do not observe improved performance (see below). While VirTex ablates backwards captioning and shows improvements, they use a separate decoder, so the ablated model has fewer parameters and FLOPs (here we control for both factors). Forward vs forward + backward captioning (ViT-B/16 encoder; 10-shot linear eval.) 
| model | ImageNet | CIFAR100 | Pets | Cars | |:----------------|-----------:|-----------:|-------:|-------:| | Cap | 49.7 | 56.0 | 72.6 | 74.7 | | Cap (fwd + bwd) | 49.2 | 56.1 | 71.7 | 73.0 | \ **Using multiple parallel decoders** To address this point, we trained a CapPa variant with two parallel decoders, one for autoregressive prediction and another one for parallel prediction, each with 3 layers instead of 6. This model matches the pretraining compute of the default Cap/CapPa models with 6 decoder layers. While this model performs better than Cap with 3 decoder layers in linear 10-shot eval, it does not clearly outperform Cap with 6 decoder layers and performs worse than CapPa on 3 out of 4 eval sets. Comparing separate decoders with baselines (for ViT-B/16 encoder; 10-shot linear eval.) | model | ImageNet | CIFAR100 | Pets | Cars | |:-----------------------|-----------:|-----------:|-------:|-------:| | Cap (3 dec. layers) | 48.7 | 53.7 | 73.5 | 73.7 | | Cap | 49.7 | 56.0 | 72.6 | 74.7 | | CapPa w/ sep. decoders | 49.5 | 54.9 | 75.8 | 79.0 | | CapPa | 50.4 | 57.4 | 76.2 | 78.5 | \ **Evaluation on dense prediction tasks** CLIP and image/text pretrained models more generally seem less popular than supervised/self-supervised vision encoders for such supervised dense prediction tasks. However, vision-language models have become popular recently for open vocabulary semantic segmentation (see e.g. [a, b, c, d]) and the field is evolving rapidly. We believe it might be interesting to explore this direction with captioning models, but we leave this for future work as it would require a substantial extension of the scope. - [a] Ding et al., Decoupling zero-shot semantic segmentation. CVPR 2022 - [b] Ma et al., Open-vocabulary Semantic Segmentation with Frozen Vision-Language Models. BMVC 2022 - [c] Liang et al., Open-vocabulary semantic segmentation with mask-adapted CLIP. CVPR 2023. 
- [d] Mukhoti et al., Open Vocabulary Semantic Segmentation with Patch Aligned Contrastive Learning. CVPR 2023 \ **Minor issues** Thank you for raising these, we will address them in the next revision of the paper. --- Rebuttal Comment 1.1: Title: Thank you for a thoughtful rebuttal Comment: I thank the authors for accepting my suggestions and spending effort in running additional experiments! The results in the rebuttal overall look promising, and I encourage the authors to report them in the supplementary material. Please find specific responses below: **Performance of the exact autoregressive language model used in CLIP paper** The authors found this model to perform worse than their `Cap` model. I view this as a positive result — authors state that the observation of Radford et al., 2021 (the captioning models converge slower) mostly holds for a specific configuration they experimented with (ResNet-50 + 12-layer transformer) but disappears when scaling to ViTs and such. The exact autoregressive architecture used by CLIP is one more factor that contributed to their observation which the authors claim to (somewhat) refute in this paper. **Cap/CapPa with backward captioning or masked language modeling** I believe this experiment could be better designed — the authors use one set of transformer weights to process forward and reversed captions. Reversed (English) captions are like a different language that is composed of English words, but follow completely opposite sentence structure and grammar rules (e.g. `subject-verb-object` becomes `object-verb-subject`). The model is modeling two essentially different languages that use the same words, which intuitively sounds like a very difficult problem — one may argue it's more difficult than learning a multilingual language model with English and other known languages (with sensible grammar). 
> > While VirTex ablates backwards captioning and shows improvements, they use a separate decoder, so the ablated model has fewer parameters and FLOPs (here we control for both factors). I appreciate the authors' thoughtfulness in controlling for params/FLOPs. Indeed (Desai & Johnson) should have controlled for this by using two transformers intact but performing forward captioning with both. I think the updated experiment is not necessary — the central message of the paper still holds without it. I thank the authors for running this experiment! **Using multiple parallel decoders:** Thank you for running this experiment, it shows that downstream performance mostly depends on the params/FLOPs of the decoders. Performance is less sensitive to how these parameters are divided across multiple decoders for auxiliary language modeling tasks... perhaps to some extent, I bet having CapPa models with six decoders of one layer each may get weak (??) **Evaluation on dense prediction tasks:** The authors' argument is persuasive — while I believe that this paper could be enriched by these evaluations, I note that they are not crucial to back the main claims presented in the paper. I would discount this concern in my final assessment. --------- **Summary:** I recommended acceptance before the rebuttal. I continue to recommend acceptance after the authors' rebuttal. The paper presents a topic that would be of broad interest to the NeurIPS audience and presents it with sound experiments and high-quality presentation. While all reviewers unanimously agree to accept, I am happy to defend this paper for acceptance if needed. Congratulations to the authors!
Rebuttal 1: Rebuttal: We thank all the reviewers for carefully reading our paper. We appreciate their thoughtful comments and the overall very favorable assessment. In particular, we liked the many suggestions for additional experiments, some of which we were able to run. We believe these experiments will make the paper stronger. Please find the detailed responses to each review below.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
On Convergence of Polynomial Approximations to the Gaussian Mixture Entropy
Accept (poster)
Summary: This paper presents new methods for approximating the entropy of a mixture of Gaussians/cross-entropy between mixtures of Gaussians. By using deterministic approximations based on power series/orthogonal polynomial expansions, the authors are able to obtain results similar to MCMC approximations for only a fraction of the computing time. Furthermore, another deterministic approach (Huber) is shown to be theoretically and empirically divergent under a simple condition. The result is applied to Nonparametric Variational Inference and is shown to exhibit better convergence behavior than the Huber approximation. Strengths: The paper is well written and the experiments are clearly presented. The theoretical claims are solid and provide useful insights and methods for approximating the entropy of GMMs. The convergence criteria clearly show the limitations of the already-used Huber method and are supported empirically. Moreover, the methods of Taylor approximation and Legendre polynomials are shown to perform well in practice and outperform Huber in some settings. Weaknesses: - I think it could be beneficial to have a presentation of NPV from the start for the sake of motivation - no convergence rates/error estimates are presented or at least discussed; I think this is also crucial even though harder to investigate Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - line 183: what do you mean by convergence rate of \beta \alpha^n + \eta? (edit: it is introduced in the appendix but is quite obscure in the main text at first read) - I can't find the result on pointwise convergence for Legendre series when the function is continuous and L^2; can you point more directly to it in the book/to another reference? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The authors have correctly addressed the main limitations of their results, namely the restriction to GMMs and the problem of scaling to high orders of approximation as well as to the number of components. Moreover, they do not indicate that their method should always be preferred to Huber's method but point out contexts where it could be the case. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **line 183 : what do you mean by convergence rate of $\beta \alpha^n + \eta$ ?** From Taylor's theorem, a function $f$ can be represented as $f=T_n+R_n$ where $T_n$ is the $n^{th}$ order Taylor series and $R_n=\mathcal{O}(\alpha^n)$ is the remainder. Rewriting this function in terms of the Taylor series, $T_n=-R_n+f= \beta\alpha^n+\eta$, we assume $\beta<0$ to force the negative sign on the remainder and use $0<\alpha <1$ to model the decay of the error. We agree the main text could benefit from a clearer description of this and will move the discussion in Section A.5 to the main body. **I can't find the result on pointwise convergence for Legendre series when the function is continuous and $L^2$** The reference we cite in the paper is Bharwy et al. [4]. They define the shifted Legendre polynomials and state (without proof) that they are an orthonormal basis on $L^2((0,L))$. For a rigorous proof of this statement, one can look at "Linear Operators in Hilbert Spaces" by Joachim Weidmann, Section 3.2. We can add a reference to this book if the reviewer believes it should be included. **no convergence rates/error estimate is presented or at least discussed, I think this is also crucial even though harder to investigate** We agree that having some statement of the convergence rate would be a nice addition, but the reviewer is correct that this is harder to investigate. We attempted to approach the Taylor series using Taylor's theorem, which says that the $n^{th}$ order Taylor series has a remainder $R_{n+1}(x)= \dfrac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}$ for some $c\in[a,x]$. Since we are expanding the Taylor series inside the expected value of the entropy term, we must compute $\mathbb{E}_{q}\left[\dfrac{f^{(n+1)}(c)}{(n+1)!}(x-a)^{n+1}\right]$; however, $c$ depends on $x$. Bounding this quantity is not fruitful because as $x\rightarrow 0$, the bound diverges. 
Furthermore, for Legendre and Taylor, we attempted other traditional approaches to finding rates of convergence but ran into similar issues. This is still ongoing research and we hope to make this contribution in future work. --- Rebuttal Comment 1.1: Comment: Thank you for your answers, and I apologize for my late response. Regarding line 183, I also think adding A.5 to the main body will be beneficial. As for the problem of approximation by Legendre series: this is precisely the point of my remark. The convergence is in L^2 and not pointwise; hence a formula such as "log (p(x)) = sum... for all x" is far from obvious (for instance, in classical Fourier theory not every continuous function on (0,1) is the pointwise sum of its Fourier series). I assume that for mixtures of Gaussians the result has a good chance of being true, but this requires some clarification. I would recommend stating those results almost everywhere instead. Overall my score remains unchanged. --- Reply to Comment 1.1.1: Comment: Based on the reviewer's latest clarification, we agree that the statement "log (p(x)) = sum... for all x" requires further details to establish pointwise convergence more clearly. Thank you for pointing this out. Pointwise convergence is not necessary for our result, as we utilize this approximation in Thm. 4.5 (just below L464 in the appendix): $$ \dots = -\sum_{i=1}^M w_i \int q_i(x)\log(p(x))dx = -\sum_{i=1}^M w_i \int q_i(x) \sum_{n=0}^\infty \dots L_{[0,a],n}(p(x))dx = \dots$$ We will instead revise the statement to say that the series is convergent almost everywhere. This result is shown in ["The Convergence Almost Everywhere of Legendre Series"](https://www.jstor.org/stable/2037625?seq=1) (Harry Pollard), which proves that "log (p(x)) = sum..." holds for a.e. x, and a.e. convergence is a sufficient condition for our proof. The series for log(p(x)) is in fact pointwise convergent everywhere except where p(x)=0; for p(x) a GMM, the density is everywhere positive in the domain, so p(x)>0. 
Pointwise convergence can be shown with standard results, for example Theorem 12 of [Differential Equations and Linear Algebra](https://www.math.utah.edu/~gustafso/s2013/3150/slides/seriesMethodsLinearDE.pdf) (Gustafson, Grant B.). Given the limited time left in the discussion period, however, we propose to change the statement to a.e. convergence as pointwise convergence is not necessary.
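As a quick numerical complement to the Weidmann reference discussed above, the orthonormality of the shifted Legendre polynomials $\tilde P_n(x)=\sqrt{(2n+1)/L}\,P_n(2x/L-1)$ on $L^2((0,L))$ can be checked mechanically. This is our own illustrative check (the interval length is arbitrary), not code from the paper; substituting $x = L(t+1)/2$ reduces each inner product to an exact polynomial integral on $[-1,1]$:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

L = 2.5  # arbitrary interval length for the demo

def inner(m, n):
    # <P~_m, P~_n> on (0, L); the change of variables x = L(t+1)/2 cancels
    # the L-dependence, leaving an exact integral of P_m * P_n over [-1, 1]
    F = (Legendre.basis(m) * Legendre.basis(n)).integ()
    return np.sqrt((2 * m + 1) * (2 * n + 1)) / 2 * (F(1.0) - F(-1.0))

G = np.array([[inner(m, n) for n in range(6)] for m in range(6)])
assert np.allclose(G, np.eye(6))  # Gram matrix is the identity: orthonormal
```

Because the product and antiderivative of Legendre series are computed symbolically, the check is exact up to floating-point rounding rather than quadrature error.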
Summary: The paper provides an improved approximation to the entropy of Gaussian Mixture Models (GMMs). The proposed approximation is a lower bound to the entropy and is more accurate compared to related methods. Strengths: 1. Accurate computation of the GMM entropy has implications on variational inference (VI) as well as information theory. The proposed approximation establishes a lower bound for the GMM entropy which can be naturally adapted in VI. 2. The authors show that the proposed approximation overcomes the divergence issue in the related methods, e.g., Huber et al. (2008), and hence provides stronger theoretical guarantees. 3. The method utilizes an interesting idea leveraging the Legendre series approximation of the logarithm. Weaknesses: 1. While the proposed approximation establishes a lower bound for the GMM entropy, it is not clear whether the bound is tight. Thus, applying the approximation to VI may not yield monotonically increasing ELBO. 2. Computation of the approximation seems very intensive, e.g., Eq. 9. and Eq. 12. The increased accuracy may not justify the computational costs. 3. The authors only provided simulation results on toy data. It could be helpful to demonstrate the benefits of the approximation in a real-life problem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How do you choose the order of the approximating polynomials? 2. How does the approximation improve the VI compared to using Huber et al.? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **While the proposed approximation establishes a lower bound for the GMM entropy, it is not clear whether the bound is tight. Thus, applying the approximation to VI may not yield monotonically increasing ELBO** The Taylor series bound on GMM entropy becomes tight as the order of the approximation increases. This is a result of convergence of the series (Theorem 4.2) and the lower bound property (Theorem 4.3). **Computation of the approximation seems very intensive, e.g., Eq. 9. and Eq. 12. The increased accuracy may not justify the computational costs.** Computation is dominated by the combinatorial sum in Eq. (11) in Lemma 4.1. For an $M$-component GMM, our $n$-th order Taylor series is dominated by $\mathcal{O}((n+M-1)!)$; see Appendix A.5 for discussion. Computation of the series in Eq. (11) can easily be parallelized to utilize GPU acceleration, offsetting the cost. In our experiments we achieve good accuracy with moderate polynomial orders, making GPU parallelization unnecessary. Empirical analysis of computation time is presented in Fig. 3. **The authors only provided simulation results on toy data. It could be helpful to demonstrate the benefits of the approximation in a real-life problem.** We recognize the desire for an application of our methods to real data and this will be a focus in future work. The focus of the present work is a convergent alternative to that of Huber et al. To support the claims made in this paper we prefer controlled synthetic evaluation, as it allows us to demonstrate divergence of the baseline, convergence of our approaches in the same setting, and straightforward evaluation of computation time. Evaluation in real-data scenarios becomes less straightforward as the data distribution is unknown and cannot be instrumented. **How do you choose the order of the approximating polynomials?** Our series approximations are convergent, meaning higher-order polynomials are more accurate. 
The user may choose the order based on computational resources and required accuracy for the problem at hand. This is not necessarily true for Huber et al. as the series may diverge and divergence is more severe at higher orders. In A.6, we reproduce the toy model from Huber et al. in which that method is well-behaved. Figure 6 shows that our Legendre series yields comparable accuracy and Figure 3 (bottom right) shows the two methods have comparable computation time. **How does the approximation improve the VI compared to using Huber et al.?** The accuracy of the Legendre polynomial is close to that of Huber et al. when the latter method is convergent. In this setting the benefit of Legendre over Huber et al. is limited, but is guaranteed to be convergent in all cases. Our Taylor series benefits from being a lower bound on $H_q(q(x))$. In ELBO, $\log(p(x))\geq L(\theta) = \mathbb{E}\_{q_\theta(y|x)}[\log p(x,y)] + H_{q_\theta(y|x)}(q_\theta(y|x))$ this lower bound on the entropy term gives a lower bound on $L(\theta)$, and thus a lower bound on $\log(p(x))$. This bound does not hold in Huber et al.'s approximation. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thank you for addressing my comments. I've increased my score accordingly.
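The lower-bound behavior discussed above can also be seen numerically. The sketch below is our own illustration, not the paper's closed-form moment computation: the GMM parameters, the choice of expansion center, and the use of shared Monte Carlo samples are all assumptions for the demo. Truncating the Taylor series of $\log$ at odd order about a center $a \geq \max p$ upper-bounds $\log$ pointwise, so the induced entropy estimates lower-bound the true entropy and tighten as the order grows, consistent with the convergence and lower-bound properties (Theorems 4.2 and 4.3):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 1-D two-component GMM (parameters are ours, not the paper's)
w = np.array([0.4, 0.6]); mu = np.array([-1.0, 2.0]); sig = np.array([0.7, 1.2])

def pdf(x):
    comps = np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / (sig * np.sqrt(2 * np.pi))
    return comps @ w

# shared Monte Carlo samples from the GMM itself
N = 100_000
xs = np.where(rng.random(N) < w[0],
              rng.normal(mu[0], sig[0], N),
              rng.normal(mu[1], sig[1], N))
p = pdf(xs)
a = p.max()  # center >= p at every sample, so u = p/a - 1 lies in (-1, 0]

def taylor_log(y, n):
    # n-th order Taylor series of log about a: log(a) + sum (-1)^(k+1) u^k / k
    u = y / a - 1.0
    return np.log(a) + sum((-1) ** (k + 1) * u ** k / k for k in range(1, n + 1))

H_true = -np.log(p).mean()
H = {n: -taylor_log(p, n).mean() for n in (3, 5, 9)}
# odd-order truncations upper-bound log pointwise, hence lower-bound the entropy
assert H[3] <= H[5] <= H[9] <= H_true
```

Because the same samples are reused for every order, the chain of inequalities holds deterministically (it follows from the pointwise bound), not merely in expectation.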
Summary: This paper looks at the convergence of an existing approximation of the GMM entropy. They show conditions, which commonly occur, in which the approximation diverges. The authors then present several new approximations in which conditions can be chosen that guarantee convergence. The authors demonstrate this both theoretically and empirically. Strengths: I have reviewed a previous version of this paper and this version is stronger in all regards. The theory is sound, the experiments are well-done, and the paper is clearly written. My concerns for previous versions of this paper have been addressed. Weaknesses: The computational complexity is a concern for this method. Also, the contribution here is somewhat limited. Other methods involving entropy often consider nonparametric estimation methods that can be adapted to different and unknown densities. Here, it is assumed that the parameters of the GMM are known. That said, the GMM is of sufficient importance in machine learning that I believe the contribution is sufficient. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: No questions. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors address the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **I have reviewed a previous version of this paper** We sincerely thank the reviewer for their effort and helpful insights on an earlier submission. The early feedback has strengthened the present work. **The computational complexity is a concern for this method** For similar polynomial orders of our approximations to Huber et al., the computation complexity is comparable (Figure 3). The computation cost may start to become a larger focus for higher polynomial orders due to the combinatorial sum in computing Eq. (11) in Lemma 4.1. However, the multivariate normal distribution terms can be evaluated outside the summation once as their values are constant, meaning the whole summation is simply scalar terms. This computation can then easily be parallelized to utilize GPU acceleration to offset cost. In our experiments we found that GPU parallelization was not necessary as adequate accuracy is achieved with moderate polynomial orders. **The contribution here is somewhat limited. Other methods involving entropy often consider nonparametric estimation methods that can be adapted to different and unknown densities. Here, it is assumed that the parameters of the GMM are known** Our work builds upon the well-established approximation introduced by Huber et al., which has over 300 citations indicating its widespread recognition and use. We highlight a crucial observation that the original approach lacks general convergence. In response, we present two novel methodologies that not only ensure convergence but also extend applicability to higher polynomial orders compared to previous approaches. Furthermore, our advancements come with no additional computational overhead for similar accuracy. We believe that this firmly establishes our contribution's broader impact and its potential benefits to a wider research audience. --- Rebuttal Comment 1.1: Comment: I have read the other reviews and the authors' responses. 
I am satisfied with their responses and am keeping my score the same.
Summary: The authors show that a common entropy approximator, based on Taylor series approximators, for Gaussian mixture models is not necessarily convergent. They propose an alternative approach based on the Taylor expansion of the logarithm, whose efficiency hinges on the fact that moments of Gaussians can be computed efficiently. Strengths: The proposed approach fills gaps in the literature and proposes novel solutions. Weaknesses: It isn't clear how practical this approach is. Many of the lemmas and theorems are known results, and the authors do not do a good job differentiating their contributions. The presentation could use some improvement. The experiments are entirely synthetic despite the possibility of applying this approximation as part of non-parametric variational inference. Computational complexity is not clearly stated. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Practical computation time is, of course, important. However, I think you should also include a formal statement of the computational complexity of all of the methods involved. - How does this approach perform when applying NPVI to a real data set? The experiments are mostly synthetic -- it would have been nice to see a real application. Minor typos: While the submission is comprehensible, the language, e.g., in the introduction and elsewhere, feels a bit unnatural. I suggest a proofreading pass to smooth it out a bit. - "i.e." and "e.g." must always be followed by commas. - The sentence in lines 73-75 is awkward. - The sentence in line 86-87 is not a sentence - "h" is not defined in (5). - The presentation is a bit confusing as it seems to present the univariate Taylor theorem as applying to multivariate functions... This is fine in (6), which is a univariate Taylor series, but the GMM problem is presented in the general multivariate case. - Notation changed from c to a in (6). Maybe be consistent? - Line 152 "we can applying". 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **"Many of the lemmas and theorems are known results..."** Key results of this paper are divergence of the Huber et al. approximation (Thm. 3.1) along with our Taylor and Legendre approximations and their convergence (Thm. 4.2 & 4.5). To the best of our knowledge these results have not previously been established. We have included some established results restated in our own context for completeness, such as convergence of Taylor (Lemma 3.2) and Legendre (Lemma 3.3) series. We do not claim these as novel contributions of the paper and provide references where appropriate. We can defer these results to the appendix if the reviewer feels it would improve the presentation. We are also happy to include any references that the reviewer feels may have been overlooked if they are provided. **Practicality of approach** Our work builds on an established approximation proposed in Huber et al., which has been accepted as practical with over 300 citations at the time of this writing. We show that this widely cited method does not converge in general and we provide two alternatives that are both convergent and applicable to higher polynomial orders than previous work. Moreover, our approach has no additional computational overhead compared to existing work (see response below and paper Fig. 3). We feel this firmly establishes that the work is both practical and beneficial to the community. **"...I think you should also include a formal statement of the computational complexity..."** Appendix A.5 outlines the complexity of the Taylor approximation and motivates our proposed limit approximation. We summarize here for convenience: complexity of the approximation is dominated by the summation over additive sequences with a fixed sum in Eqn. (11). For an $M$-component GMM, our $n$-th order Taylor series is dominated by $\mathcal{O}((n+M-1)!)$. 
We will include this statement in the main text, along with the complexity of our Legendre series, which is comparable since it depends on the same dominating summation. Huber et al. do not provide the complexity order of their approach, but we provide a summary of our calculation here for the reviewer's convenience. Recall that Huber et al.'s approximation about mean $\mu_i$ is given in Eq. (5) as $\sum_{n=0}^\infty \frac{1}{n!} \log(p(\mu_i))^{(n)} (x-\mu_i)^n$. For a $D$-dimensional r.v. $x$, the term $(x-\mu_i)^n$ requires $\mathcal{O}(D^n)$ operations. The term $\log(p(\mu_i))^{(n)}$ requires Faà di Bruno's formula, which yields a sum over all $n$-tuples of non-negative integers $(m_1,\dots, m_n)$ s.t. $1m_1+2m_2+\dots+nm_n=n$, which is combinatorial. We will include an extended form of this discussion in the final appendix. **Synthetic vs. Real-Data Experiments** We agree with the reviewer that real-data experiments would be interesting, and they will be a focus of future application of this methodology. For the present work we feel that controlled synthetic evaluation is preferable for supporting our claims in this paper. Our synthetic evaluation supports the theoretical analysis by directly showing divergence of Huber et al. even in simple settings, validates our convergence results in those same settings, and provides a computational comparison along several dimensions that can easily be controlled. **The presentation...seems to present the univariate Taylor theorem as applying to multivariate functions...** Huber et al. expand about the vector-valued random variable $x$. Because of this, Huber et al. cannot easily be calculated above second order due to the tensor arithmetic that arises (see computation above). By contrast, our approach expands about the density function $p(x)$, which is always a scalar function by definition. This distinction is what allows our approach to be computed to polynomial orders above 2 without tensor arithmetic. Divergence of Huber et al. 
in the scalar case (Thm. 3.1) is sufficient to establish that the method is not convergent in general. **Notation changed from c to a in (6). Maybe be consistent?** This notation change is intentional and explicitly noted following Eq. (6): "Note the change of c to a as the Taylor series center...".
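To make the combinatorial sum behind the complexity discussion concrete: the sum in Eq. (11) ranges over $M$-tuples of non-negative integers with a fixed sum $n$, and a standard stars-and-bars count gives $\binom{n+M-1}{M-1}$ such tuples. The quick cross-check below is our own back-of-envelope illustration, not the paper's code:

```python
import math
from itertools import product

def n_terms(n, M):
    # number of M-tuples of non-negative integers summing to n (stars and bars)
    return math.comb(n + M - 1, M - 1)

# brute-force cross-check on small cases
for n in range(6):
    for M in range(1, 5):
        brute = sum(1 for t in product(range(n + 1), repeat=M) if sum(t) == n)
        assert brute == n_terms(n, M)

# e.g. a 4-component GMM at Taylor order 10 already sums over C(13, 3) = 286 tuples
assert n_terms(10, 4) == 286
```

This tuple count grows quickly in both $n$ and $M$, which is why the rebuttal notes that the scalar terms inside the sum are the natural target for GPU parallelization.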
Rebuttal 1: Rebuttal: We thank all reviewers for their time and useful insights. We have provided individual responses to each reviewer below. For brevity we do not address minor typos, all of which will be corrected in a final version.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization
Accept (poster)
Summary: This paper introduces a novel approach for converting a single image into a 3D textured mesh without the necessity for per-shape optimization. The method employs Zero-123 for generating novel views of the input image, which are then utilized by a generalizable MVSNeRF to reconstruct the 3D shape. To tackle issues like inconsistent multi-view predictions from Zero-123 and inaccurate camera poses, the authors have developed an elevation estimation module and specialized training strategies, including 2-Stage Source View Selection and Groundtruth-Prediction Mixed Training. Strengths: 1. The paper proposes a method that reconstructs 3D shapes from a single image, eliminating the need for per-shape optimization. It does this through the effective application of feed-forward SparseNeuS. 2. The authors discuss and address several challenges associated with lifting Zero-123 predictions to 3D, such as inconsistent multi-views and inaccurate camera poses. The introduced 2-Stage Source View Selection, Groundtruth-Prediction Mixed Training, and Elevation Estimation Module appear to be effective solutions. 3. The results demonstrate that One-2345 enhances both the quality and efficiency of single-image-to-3D conversion. Additionally, the authors adeptly extend this to text-to-3D conversion using a pre-trained text-image model. The ablation studies provide strong evidence for the efficacy of the introduced modules and training strategies. Weaknesses: 1. The paper mentions the use of SparseNeuS for reconstructing 3D shapes from multi-view predictions by predicting blending weights. The colors of the 3D points are computed as the weighted sum of projected colors. However, since SparseNeuS usually takes real images as input and this work uses potentially inaccurate multi-view predictions, I am concerned that directly aggregating colors from these predictions could lead to blurry textures and artifacts. 
I suggest the authors investigate the effects of aggregating or refining colors more thoroughly. 2. There is a noticeable discrepancy between training and inference. During training, the 2-Stage Source View Selection module generates multiple nearby views from ground truth anchor views for reconstruction. However, during inference, there are no ground truth anchor views available, and Zero-123 must rely on its own predicted anchor views, which may not always be accurate. The paper does not address these challenges or offer solutions. Additionally, the “2-Stage Source View Selection and Groundtruth-Prediction Mixed Training” section could benefit from clearer explanations. 3. SparseNeuS employs post-optimization after the feed-forward pass. It would be helpful to know if One-2345 adopts a similar strategy, and, if so, how much improvement can be attributed to such a lightweight post-optimization. 4. The overall concept of integrating a feed-forward MVSNeRF with Zero-123 is not groundbreaking. Moreover, the Groundtruth-Prediction Mixed Training, though practical, lacks novelty. However, the motivation for employing a feed-forward approach to elevate Zero-123 to 3D shape reconstruction is sound. Technical Quality: 3 good Clarity: 3 good Questions for Authors: see the strengths and weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper does not discuss limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **Weighted sum of projected colors** Thanks for pointing this out. We agree that computing the point colors as a weighted sum of projected colors may not be the optimal solution in our case, considering the underlying inconsistencies. We noted that some generalizable NeRF works (e.g., MVSNeRF) use an MLP to aggregate projected colors and directly predict point colors instead of their linear weights. We have launched an experiment to explore this strategy, and training is ongoing. We will update the results after the training is complete. **Discrepancy between training and inference** We also noticed this discrepancy. The high-level motivation is that each anchor image (one of eight) and its local images (four views, second stage) are expected to control a local region of the shape, and these eight regions should be relatively independent of each other. Since the local predictions for each anchor image are relatively accurate and the second-stage local predictions are also used in training, each local region should be reconstructed accurately. As a result, even if we utilize ground-truth anchor images during training, it should be possible to generalize to inconsistent predicted anchor images. In fact, we have experiments trying to utilize predicted anchor images during training. However, we found it challenging to supervise the training. For example, if we utilize ground-truth renderings to supervise predicted anchor images, it actually leads to worse results, as shown in Figure 8 of the main paper. We will clarify “2-Stage Source View Selection and Groundtruth-Prediction Mixed Training” in our revision, given more space. **Additional post-optimization** There are two ways to add post-optimization. 
The first one is fine-tuning with multi-view predicted images as done in the original SparseNeuS. This may help if the multi-view predictions are very consistent. However, we find that it typically does not improve our results due to underlying inconsistencies among multi-view predictions, as shown in Figure 2 of the PDF (see main rebuttal). Another strategy is to leverage priors from 2D diffusion models (e.g., StableDiffusion) to further fine-tune the generated mesh as in DreamFusion. Our generated results can serve as a good initialization and accelerate the optimization. While NeRF representation and volume rendering are used in the original DreamFusion to better handle topological changes, in our fine-tuning, mesh representation and surface rendering can also be used to reduce the rendering time since there are no significant topological changes. **Integrating a feed-forward MVSNeRF with Zero-123** We would like to emphasize our contribution in proposing a new paradigm for generalizable single-image 3D generation, which overcomes the limitations of existing paradigms, such as efficiency and 3D consistency for optimization-based methods and poor generalization for 3D native generative models. It’s nontrivial to integrate a feed-forward MVSNeRF with Zero123. There still exist multiple challenges when combining them: (a) the original SparseNeuS only focuses on frontal-view reconstruction while we need 360-degree full mesh reconstruction; (b) the original SparseNeuS only considers consistent multi-view images as input instead of inconsistent multi-view predictions; (c) we need the camera poses of the input view for 3D reconstruction. We propose several critical training strategies and a pose estimation module to overcome these challenges. As highlighted by Reviewer L2hY, we demonstrate originality through our well-motivated and smart design choices and the underlying reasoning.
**”Does not discuss limitations?”** We would like to clarify that we have discussed limitations in the supplementary and will move them to the main text in our revision. --- Rebuttal Comment 1.1: Title: Updates on color prediction experiments Comment: Thank you for your insightful suggestions. We have previously initiated an experiment in which we employ an MLP to aggregate projected colors, with the aim of directly predicting point colors rather than their linear weights. Recent experimental results indicate that this approach may help mitigate certain artifacts, such as white spots that can occur when exporting meshes. We agree that pursuing more sophisticated strategies for color prediction holds promise as a valuable avenue for future research. --- Rebuttal Comment 1.2: Comment: The authors' rebuttal and additional experiments sufficiently address my main concerns. The post-optimization results in Figure 2 are interesting. I appreciate the authors taking the time to conduct these extra analyses. I am still interested to see results when replacing the blending weights with MLP prediction, as suggested in the rebuttal. This would provide further insight into the generalizability of One-2-3-45. The color blending scheme seems important for the strong generalizability shown in Figure 1. Seeing results with MLP-predicted colors could reveal if generalizability is dependent on the proposed blending approach or not. I will maintain my existing scores for now. --- Reply to Comment 1.2.1: Title: add links Comment: Dear AC, Reviewer eL63 has requested additional figures from our experiments. The email guidelines suggest that including links might not be permissible. Could you please confirm if we are allowed to share an anonymous link? Thank you, --- Reply to Comment 1.2.2: Title: Response to Reviewer eL63 Comment: Dear Reviewer eL63, Thank you for your insightful feedback. In principle, the use of MLP-predicted colors should not compromise generalizability.
Given that the projected pixel colors from all views serve as input to the MLP, it inherently has the capacity for internal "linear combination" or color blending. However, the MLP isn't strictly bound to this mechanism and might aggregate the color in more sophisticated ways. In our experiments, the inputs to the MLP comprise the coordinates of the query point, the corresponding interpolated cost volume feature, and the projected pixel colors from various views. As previously noted, employing the MLP approach yields similar results but addresses certain artifacts – like the white spots which can manifest during mesh exports. Our color-prediction MLP design is inspired by MVSNeRF, but we acknowledge there's potential for further refinement, especially by possibly integrating the view directions from all views as suggested in other related studies. We would like to provide visual results from our experiments. However, current guidelines, as per the email and the "FAQ for Authors" section, seem to restrict authors from sharing links within comments. We sincerely hope our textual explanation clarifies your queries. Should you have any further concerns or questions, please do not hesitate to let us know.
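To make the two color-aggregation schemes discussed in this thread concrete, here is a minimal NumPy sketch (the function names, array shapes, and single-layer "MLP" are hypothetical illustrations, not the authors' actual implementation): the SparseNeuS-style scheme predicts per-view blending weights and outputs a convex combination of projected colors, while the MVSNeRF-style alternative regresses RGB directly from the concatenated point features and projected colors.

```python
import numpy as np

def blend_colors(weight_logits, proj_colors):
    """SparseNeuS-style aggregation: softmax weights over the V source views,
    so each output color is a convex combination of the projected colors.
    weight_logits: (N, V) predicted per-view scores for N query points
    proj_colors:   (N, V, 3) colors projected from each view onto each point
    """
    w = np.exp(weight_logits - weight_logits.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return (w[..., None] * proj_colors).sum(axis=1)  # (N, 3)

def regress_colors(point_feats, proj_colors, W, b):
    """MVSNeRF-style alternative (illustrative single layer): an MLP sees the
    point features and all projected colors and predicts RGB directly, so it
    is not restricted to a linear blend of the inputs.
    point_feats: (N, F) e.g. interpolated cost-volume features
    W: (F + 3*V, 3), b: (3,) -- stand-ins for learned parameters
    """
    x = np.concatenate([point_feats, proj_colors.reshape(len(point_feats), -1)], axis=1)
    return 1.0 / (1.0 + np.exp(-(x @ W + b)))  # sigmoid keeps RGB in [0, 1]
```

Because `blend_colors` is a convex combination, every output channel is bounded by the minimum and maximum of that channel across the input views, which is one way inconsistent view predictions can translate into blur; the direct regression head is free to deviate from its inputs.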
Summary: The authors tackle the problem of 3D reconstruction from a single image, which is challenging due to the lack of 3D information. They propose a novel method that combines a 2D diffusion model, Zero123, with a cost-volume-based 3D reconstruction technique, SparseNeuS, to generate a 360-degree 3D textured mesh in a feed-forward pass. They also estimate the elevation of the input shape and introduce several training strategies to improve the consistency and quality of the 3D mesh. Their contributions are: * A novel method that leverages 2D prior models for 3D modeling without per-scene optimization. * An elevation estimation module that computes the camera poses required by the reconstruction module. * A series of essential training strategies that enable the reconstruction of 360-degree meshes from inherently inconsistent multi-view predictions. Strengths: * The work tackles 3D reconstruction from a single image, which is challenging and useful. The work can handle potentially any object category and generate a full 3D mesh from a single image. * The proposed method achieves high-quality 3D reconstruction in a feed-forward manner without optimization, which is faster and more efficient. The work does not require per-scene optimization but relies on a single feed-forward pass to generate the 3D mesh and as such it is much faster than existing methods. * The work leverages 2D prior models for 3D modeling, which is somewhat novel. It uses a 2D diffusion model to synthesize multi-view images of the input, and then uses multi-view 3D reconstruction techniques to obtain a 3D mesh. Weaknesses: * The evaluation section is noticeably light. The authors used 20 shapes from GSO and Objaverse to report F-Score and CLIP similarity. The number of shapes (20) is too low for any meaningful comparison considering that the method was trained with 46k 3D assets. * The paper is framed as though the work solves the image-to-3D problem, which is true to some extent.
In reality, the work solves the problem of 3D-lifting a single object, which is a much more constrained problem. I would advise the authors to change their wording to a more accurate description of the task. * The work mentions that it used 46k Objaverse assets for training, i.e. the whole Objaverse-LVIS dataset. All figures have Objaverse-LVIS assets and they keep repeating in the figures (backpack, Super Mario, horse, minion, etc). A quick search shows nearly all of them; for example, this is the backpack: https://skfb.ly/6XCoS . All others can be found by searching the website https://objaverse.allenai.org/explore . On the contrary, there are minimal results on assets not from Objaverse, which raises the question of whether the reconstructions we are seeing in the paper are assets from the training set. * The work showcases a lot of licensed assets like Pokemon, Super Mario, and others. These assets require a licensing agreement with the respective companies that hold the rights. It might be sensible to replace them. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * Would it be possible to do a human evaluation with hundreds of 3D assets comparing your results with those of Shap-E? This would enhance the quality of the current evaluation. * Would it be possible to report numerical scores for hundreds of generated 3D assets rather than 20? To the same extent, can you report what the assets used were and whether they exist in Objaverse-LVIS? * Objaverse created a licensing issue when it came out, with many artists requesting to retract their 3D assets from the dataset. Did the authors respect this by checking the no-AI flag in Sketchfab for both training and evaluation? + Comments in Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, to some extent Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **Number of shapes for evaluation** We first would like to clarify that we used 40 shapes (20 from GSO and 20 from Objaverse, excluding the LVIS subset) instead of 20 shapes for evaluation. The limited number of evaluation shapes is mainly because many baseline algorithms are very time-consuming. For example, RealFusion takes ~90 minutes to generate a single shape on an A100 GPU. It's not practical to evaluate them on hundreds of shapes (e.g., it would take two A100-weeks for RealFusion to generate 200 shapes). Faced with these challenges, recent 3D AIGC papers have yet to quantitatively evaluate their methods on large-scale datasets. For example, RealFusion evaluates the methods on only 21 shapes (7 categories) with one baseline, while we have 40 shapes and five baselines. **"Only reconstruct single objects, not single image to 3D?"** We agree that our current method mainly focuses on the generation of a single object. We have explicitly stated that in the abstract (line 6) and introduction (line 32). We will follow your suggestion to make the wording more rigorous. **”Results shown in the paper are from the training set?”** We would like to clarify that all results shown in the paper and supplementary are not seen during training of One-2-3-45, as stated in the supplementary. We split the Objaverse-LVIS dataset into training and validation sets and used only 43k of the 46k shapes for training. The backpack image is not from the LVIS subset. The images of Super Mario, the horse, and the minion are from sources other than the Objaverse dataset. **Report numerical scores for hundreds of generated 3D assets?** Since it's computationally expensive to evaluate optimization-based methods on hundreds of shapes, we add a comparison between the proposed method, Point-E, and Shap-E.
We evaluate methods on 200 shapes from the GSO dataset, which doesn't contain Objaverse-LVIS shapes. The F-scores for our method, Point-E, and Shap-E are 93.9, 91.5, and 93.3, respectively. The CLIP scores for our method, Point-E, and Shap-E are 67.6, 65.1, and 68.7, respectively. **Objaverse license** Thanks for pointing this out. We utilize shapes with CC-BY 4.0 license for training and will add attribution in our paper. --- Rebuttal 2: Comment: Reviewer yirP, Please read the rebuttal provided by authors and raise a discussion if your concerns are not well addressed. Best, AC
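For readers unfamiliar with the F-score metric used in these comparisons, it is commonly computed between point clouds sampled from the predicted and ground-truth surfaces; the following is a brief sketch under that assumption (the distance threshold `tau` is illustrative, not the threshold used in the paper):

```python
import numpy as np

def f_score(pred_pts, gt_pts, tau):
    """F-score between two point clouds: the harmonic mean of
    precision (fraction of predicted points within tau of the ground truth)
    and recall (fraction of ground-truth points within tau of a prediction)."""
    # Dense pairwise distances; fine for small clouds, use a KD-tree for large ones.
    d = np.linalg.norm(pred_pts[:, None, :] - gt_pts[None, :, :], axis=-1)
    precision = (d.min(axis=1) < tau).mean()
    recall = (d.min(axis=0) < tau).mean()
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because both precision and recall enter the harmonic mean, a method cannot score well by covering only part of the ground-truth surface or by scattering extra geometry around it.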
Summary: The proposed method overcomes the challenges of lengthy optimization time, 3D inconsistency, and poor geometry that are common in existing methods. This method uses a view-conditioned 2D diffusion model, Zero123, to generate multi-view images from a single input image, and then lifts these images to 3D space. The 3D reconstruction is based on an SDF-based neural surface reconstruction method, with several training strategies proposed to enable the creation of 360-degree meshes. This method is faster, produces better geometry, and generates more 3D consistent results than existing methods. It has been evaluated on synthetic data and real-world images, demonstrating superior mesh quality and runtime. Additionally, it can be integrated with text-to-image diffusion models to support the text-to-3D task. Strengths: The main idea of this paper is to combine the view synthesis method Zero123 with the multi-view stereo (MVS) method SparseNeuS for 3D generation. This simple approach provides interesting insights and results, as it enables fast 3D generation that visually outperforms sophisticated 2D diffusion+NeRF methods like DreamFusion, and also appears to be superior to other data-driven 3D generation methods such as Point-E and Shap-E. The contribution of this paper is valid considering the advantages of runtime and better visual results. Weaknesses: - While simple and effective, the idea of the paper is not that eye-opening in the sense that it seems to be a natural extension or improvement of Zero-123. - The performance of the proposed method is upper-bounded by Zero-123. If Zero-123 fails, it seems that there is no way for this method to generate reasonable output as well. In some sense, the problem of multi-view inconsistency is not solved but bypassed via the use of SparseNeuS. - The area is advancing fast. Follow-ups of DreamFusion such as Magic3D and Fantasia3D have already achieved big improvements.
It is unclear how the proposed method compares with more recent 2D-diffusion+NeRF methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Going forward, which is a better path for 3D generation, novel view synthesis + MVS (like in this paper) or 3D data-driven (like in Shap-E)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The issue of multi-view inconsistency is not addressed but bypassed. It would be nicer to have a method that more fundamentally addresses this issue. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **Natural extension or improvement of Zero123?** We would like to emphasize our contribution in proposing a new paradigm for generalizable single-image 3D generation, which overcomes the limitations of existing paradigms, such as efficiency and 3D consistency for optimization-based methods and poor generalization for 3D native generative models. It’s nontrivial to extend Zero123 for 3D reconstruction in a feed-forward manner. Even in Zero123’s paper, they utilize optimization-based methods for 3D reconstruction. While it seems natural to combine Zero123 and generalizable NeRF methods, there still exist multiple challenges when combining them: (a) the original SparseNeuS only focuses on frontal-view reconstruction while we need 360-degree full mesh reconstruction; (b) the original SparseNeuS only considers consistent multi-view images as input instead of inconsistent multi-view predictions; (c) we need the camera poses of the input view for 3D reconstruction. To overcome these challenges, we propose several critical training strategies and a pose estimation module. As highlighted by Reviewer L2hY, we demonstrate originality through our well-motivated and smart design choices and the underlying reasoning. **Multi-view inconsistency is not solved but bypassed.** We agree that multi-view inconsistency is still the main bottleneck and has not been fully addressed. However, our method is not tightly coupled with Zero123. Also, Zero123 is still among the first attempts at multi-view prediction. We believe that more progress will be made on 2D diffusion models for multi-view prediction, and any updates will also improve our method. Until perfectly consistent multi-view prediction becomes possible, our proposed method remains effective and can be used to deal with small inconsistencies.
**Comparison with more recent methods** We would like to point out that many 2D Diffusion+NeRF methods (including Magic3D and Fantasia3D) mainly support text conditions and cannot support the single image to 3D task natively. In the paper, we showed the result of Zero123+Stable DreamFusion, which differs from the original DreamFusion. Moreover, Magic3D has not released its code and uses an internal 2D diffusion model. We also would like to mention that we have compared the proposed method with most of the recent single-image-to-3D works: Point-E (Dec 22), RealFusion (March 23), 3DFuse (March 23), Zero123 (March 23), and even Shap-E (May 5). **Which is a better path for 3D generation** This is a great open question. It’s hard to say which path will dominate the task, as different applications may pose different requirements. We feel that multi-view prediction + 3D reconstruction (like in our paper) is a promising direction for open-world and time-sensitive applications. On the one hand, 3D native generative models (e.g., Shap-E) suffer from limited 3D data, while our method can benefit from richer 2D priors (e.g., millions of 3D shapes vs billions of 2D images), which makes open-world applications more possible. On the other hand, our method is much more efficient and better preserves 3D consistency than optimization-based methods (e.g., DreamFusion), which is important in time-sensitive applications (e.g., AR/VR, robotics, social media apps). It would be beneficial to combine various paths. For example, by using the generated results of our method as initialization, we can leverage optimization-based methods to fine-tune the results further and add more fine-grained details. With good initialization, optimization time should be significantly reduced, and 3D inconsistency issues should no longer exist.
Furthermore, if we care about the internal structure of 3D shapes, it is better to combine our method with 3D native generative models, since our method and optimization-based methods currently focus on the shape surface and cannot reconstruct the internal structure of a shape. --- Rebuttal Comment 1.1: Comment: The authors give a fair response to my reviews. My score remains borderline accept. --- Reply to Comment 1.1.1: Title: Thank you Comment: Thank you for your response and insightful reviews.
Summary: This paper tackles the task of single image 3D reconstruction, where given a single input image of an object, the proposed method aims to generate a 360-degree 3D textured mesh. Compared to prior works tackling a similar problem setting, this paper primarily aims to improve the quality of the reconstruction and reduce the time cost at inference. The proposed method contains three modules: (1) an off-the-shelf view-conditioned 2D diffusion model to generate multi-view images from a single input image, (2) a camera pose estimator, (3) an existing SDF-based multi-view reconstruction method to lift these images into 3D space. The method is evaluated qualitatively and quantitatively on synthetic and real data, and outperforms most baselines. Strengths: Originality: - This paper is well positioned in the literature. It properly summarizes and tackles existing limitations of related works with a similar task setting. - The proposed method is a smart combination of existing methods, and the proposed design choices are well motivated. Clarity: - The writing is clear and easy to follow. Quality: - The qualitative and quantitative results outperform baselines and achieve SOTA. There is still significant room for improvement, but this is acceptable considering the challenges of the task. Significance: - The proposed method presents a novel idea for the literature -- incorporating a feed-forward model into diffusion-based methods. Weaknesses: - The novelty of the proposed method is relatively limited, as it heavily relies on prior works. Specifically, two out of the three modules of the proposed method (Zero123 and SparseNeuS) are built upon existing approaches. However, the authors do demonstrate originality through their design choices and the underlying reasoning for the combination of existing ideas. - The proposed method contains several off-the-shelf or disconnected modules.
It could have been beneficial to provide more discussion and analysis on how errors of one module affect the final performance. For example, pose estimation is a task that is not completely solved; how will the pose estimation error affect the pipeline? For inference on real-world images, how will the inaccuracy of segmentation masks affect the final quality? - Although the quality of the final results achieved by the proposed method can be considered state-of-the-art, there is still significant room for improvement. From the qualitative results, most reconstructed 3D models are far from the quality of the input image. It can be helpful to expand the discussion of limitations to analyze why these artifacts can happen (e.g. blurriness, distorted shape). Technical Quality: 3 good Clarity: 3 good Questions for Authors: In general I find this paper above the acceptance bar. The questions below aim to elicit additional clarification and insights of the paper: - If allowing an additional test-time optimization during inference time, will it further improve the quality of the outputs? (Or will it get harmed by the inconsistent multi-view images?) - The authors mentioned the proposed method suffers less from the multi-face issue (the Janus problem). Is it a benefit of the prior work Zero123 or from the original design of the proposed method? - Minor: L.98 generates -> generate Description in L.150 “uses a spherical camera” and L.203 “a spherical camera model” is ambiguous. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper discusses failure cases in supp, including inconsistent multi-view images from Zero123, and artifacts on the back side. I encourage the authors to also explicitly mention these in the main paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **How do errors in one module affect the final performance?** Correct segmentation masks and pose estimation are important for the final generation quality, as ablated in Figure 10. Provided that the multi-view prediction is accurate, our pose estimation module is inherently capable of estimating precise elevation angles. For example, if we are given four nearby ground truth views, the module should be able to estimate the pose accurately in most cases. Although the median error of the current estimation module is 5 degrees, we attribute this error primarily to the inconsistency of the multi-view prediction, which can be regarded as the internal error of Zero123 and can be improved by a stronger multi-view prediction module. The current segmentation models (e.g., SAM) are very powerful. They can generate very accurate segmentation masks with a small amount of human input. As a result, we don’t feel the segmentation module can be the bottleneck. Instead, the main bottleneck remains with Zero123 and the reconstruction module. While our method is somewhat tolerant of inconsistencies in multi-view predictions, our approach may still fail for the severe failure cases of Zero123. However, our method is not tightly coupled with Zero123. Also, Zero123 is still among the first attempts at multi-view prediction. We believe that more progress will be made on 2D diffusion models for multi-view prediction, which will also enhance our method. **Reasons for the artifacts** We have observed some systematic artifacts in the original results, such as waffle-like textures on the backside. We find that these can be fixed by replacing the depth loss with the mask loss during the training of the reconstruction module. We will update our results in the revision.
For other types of artifacts (e.g., distortion and blurriness), we feel that the inconsistency of multi-view predictions mainly causes them, since the results are much better when feeding with ground-truth multi-view renderings. We also improved the mesh rendering script used in the original submission so that the rendering can better reflect the colors of the generated mesh instead of dimming it out. **Additional test-time optimization** There are two ways to add test-time post-optimization. The first one is fine-tuning with multi-view predicted images as done in the original SparseNeuS. However, we find that it typically does not improve the results due to underlying inconsistencies among multi-view predictions, as shown in Figure 2 of the PDF (see main rebuttal). Another strategy is to leverage priors from 2D diffusion models (e.g., StableDiffusion) to further fine-tune the generated mesh as in DreamFusion. Our generated results can serve as a good initialization and accelerate the optimization. While NeRF representation and volume rendering are used in the original DreamFusion to better handle topological changes, in our fine-tuning, mesh representation and surface rendering can also be used to reduce the rendering time since there are no significant topological changes. **Reasons for fewer multi-face issues** Yes, the reduced multi-face issue mainly benefits from view-conditioned 2D diffusion (e.g., Zero123). Our reconstruction module also tolerates and fixes some small inconsistencies in Zero123's predictions. **Limitations** Thanks for your suggestion. We will move the limitations from the supplementary to the main text. --- Rebuttal Comment 1.1: Comment: The authors' rebuttal responded to my concerns. After reading the rebuttal and other reviews, I would keep my current rating of Weak Accept. The authors promised several parts for revision during the rebuttal phase – if accepted, please be sure to finish before the camera ready.
E.g.: “We improved our reconstruction module and fixed some notable artifacts on the backside. We will update our results in the revision.”; “We also improved the mesh rendering script used in the original submission”. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Thanks for your reply and insightful comment! We assure you that all the improvements and changes mentioned will be included in the revision.
Rebuttal 1: Rebuttal: Dear Reviewers, Thank you for dedicating your time to review our paper and offering insightful feedback. We sincerely appreciate your efforts to help enhance the quality of our research. We are also pleased to note that all reviewers were supportive of our work: (a) Recognize our contribution in proposing a novel single-pass paradigm for 3D generation (Ph7H, L2hY, yirP, eL63), which completely differs from existing 3D AIGC paradigms of optimization-based methods and 3D native generative models. (b) Praise that our paper is well-positioned in the literature and properly summarizes and tackles existing limitations of related works (Ph7H, L2hY, AK29), including lengthy optimization time, 3D inconsistency, poor geometry, and poor input adherence. (c) Acknowledge our superior generation efficiency (Ph7H, AK29, yirP, eL63) and outperforming previous techniques (Ph7H, L2hY, AK29, eL63). (d) Acknowledge our efforts in addressing several challenges associated with lifting Zero123 predictions to 3D, and find our solutions and design choices to be effective (eL63), smart, and well-motivated (L2hY). (e) Praise our insightful ablation study, discussion, and underlying reasoning (L2hY, AK29, eL63). (f) Find our paper well-written and easy to follow (L2hY, Ph7H for introduction and related work). Pdf: /pdf/938390e05a87d2c3836911f99b4b0476aa9eaeaa.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces an efficient method for single-image 3D reconstruction that significantly improves upon previous techniques by generating a textured mesh in one feed-forward pass. The authors employ a view-conditioned 2D diffusion model, Zero123, to create multi-view images from a single input view and then lift these images into 3D space using SparseNeuS. The proposed technique is much faster than existing methods, while preserving 3D consistency. Strengths: The proposed method enables the generation of 3D meshes in one feed-forward pass after training. Due to its training-based, one-pass approach, the generation speed significantly outperforms that of many concurrent optimization-based methods. The quality of writing in the introduction and related work sections is commendable. The related work section covers the most recent work in the field and is well-organized. Weaknesses: The authors propose a multi-stage method primarily built upon the combination of multiple existing works. The work seems more engineering-oriented than technically innovative. Many aspects of the methodology, such as elevation estimation, 2-Stage Source View Selection, and Groundtruth-Prediction Mixed training, focus on bridging various components borrowed from existing baseline models. Additionally, as the method relies on Zero123 and SparseNeuS, will it inevitably inherit their limitations? Can this method circumvent the failure cases of Zero123 or SparseNeuS? The method 12345 is trained on 1,156 categories from the Objaverse-LVIS dataset. How does 12345 perform when the images fall outside these 1,156 categories? Many concurrent image-to-3D works can generalize to uncommon objects due to their large diffusion models. Could 12345's training in aligning Zero123 and SparseNeuS diminish the zero-shot capability? The authors provide limited visualization cases in their ablation study. 
Including quantitative results to support the ablation study would be beneficial, as 3D generation is typically unstable and hard to reproduce. The organization and presentation of the methodology section could be improved. The method involves multiple stages, but the authors do not strictly adhere to the method structure order and insert discussions throughout. This makes sections 3.1 and 3.3 difficult to follow. From the visualization, the mesh generation quality is not so impressive. Could the authors explain their motivation for learning SDF and RGB from two separate MLPs? Many NeRF papers typically predict these from the same network. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the Weaknesses above. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Despite the concerns raised in the weaknesses section, a positive review score is given in recognition of the effort to formulate a single-pass pipeline and for the considerable credit on the superior generation efficiency. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments and valuable suggestions. We will revise our paper based on your feedback. Here are our responses to your comments: **Perform outside 1,156 categories?** Yes, our method can generate meshes for images beyond the scope of the 1,156 training categories, as shown in Figure 1 of the attached PDF (see main rebuttal). Our 3D reconstruction module inherently leverages local correspondences for reconstruction and is thus designed to have a robust generalization ability to unseen data. This characteristic ensures that our method retains its zero-shot capability. **Circumvent the failure cases of Zero123 or SparseNeuS?** Our method demonstrates a certain level of tolerance towards the inconsistencies present in Zero123's multi-view predictions. In contrast, traditional optimization-based NeRF methods fail completely, as shown in Figure 3. We agree that our approach may also fail for the severe failure cases of Zero123. However, our method is not tightly coupled with Zero123. We believe that more progress will be made in 2D diffusion models for multi-view prediction, which will enhance our approach. **Bridging various components** It’s nontrivial to combine 2D diffusion models and 3D reconstruction, and there exist many challenges. As highlighted by Reviewer L2hY, we demonstrate originality through our well-motivated and smart design choices and the underlying reasoning. Also, we would like to emphasize our pivotal contribution in proposing a new paradigm for generalizable single-image 3D generation, which effectively addresses the limitations inherent in existing paradigms, such as efficiency and 3D consistency issues for optimization-based methods and poor generalization for 3D native generative models. **Limited examples in ablation studies** Thanks for the suggestion. We will add more qualitative examples and quantitative results in our revised version of supplementary materials. 
**Why two separate MLPs?** Many NeRF papers use two separate MLPs [1,2,3] because the SDF/density branch only takes XYZ coordinates as input, while the RGB branch may include both XYZ coordinates and view directions. It is common for them to share some feature networks (same as ours), but the final decoders are usually different. **Generation quality is not so impressive?** We improved our reconstruction module and fixed some notable artifacts on the backside. We will update our results in the revision. [1] Chen, Anpei, et al. "Tensorf: Tensorial radiance fields." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. [2] Mildenhall, Ben, et al. "Nerf: Representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106. [3] Wang, Peng, et al. "Neus: Learning neural implicit surfaces by volume rendering for multi-view reconstruction." arXiv preprint arXiv:2106.10689 (2021). --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks to the authors for the detailed rebuttal. I have no further questions at this point, and I hope the authors can improve their future version as promised. I have raised my score to weak accept.
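As a concrete illustration of the separate-decoder design discussed in the rebuttal above, here is a minimal NumPy sketch (not the authors' implementation; the `mlp`/`forward` helpers and all layer sizes are illustrative): a shared feature trunk takes only XYZ coordinates, an SDF head decodes geometry from the shared features, and an RGB head additionally consumes the view direction.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Random-weight MLP layers; illustrative only (no training happens here)."""
    return [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(layers, x):
    """ReLU on hidden layers, linear output (the SDF must be able to go negative)."""
    *hidden, last = layers
    for w, b in hidden:
        x = np.maximum(x @ w + b, 0.0)
    w, b = last
    return x @ w + b

# Shared feature trunk: XYZ coordinates only.
trunk = mlp([3, 64, 64])
# SDF head: geometry decoded from the shared features alone.
sdf_head = mlp([64, 32, 1])
# RGB head: shared features concatenated with the per-point view direction.
rgb_head = mlp([64 + 3, 32, 3])

xyz = rng.normal(size=(8, 3))        # 8 query points
view_dir = rng.normal(size=(8, 3))   # viewing direction per point

feat = forward(trunk, xyz)                                         # (8, 64)
sdf = forward(sdf_head, feat)                                      # (8, 1)
rgb = forward(rgb_head, np.concatenate([feat, view_dir], axis=1))  # (8, 3)
```

This mirrors the split the rebuttal describes: the trunk is shared (as in the cited NeRF-style methods), while the final decoders differ because only the color branch is view-dependent.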
null
null
null
null
null
null
Generalized Semi-Supervised Learning via Self-Supervised Feature Adaptation
Accept (poster)
Summary: The authors claim to have addressed the new problem of semi-supervised learning with feature distribution mismatch (FDM-SSL) and to have proposed a new method called Self-Supervised Feature Adaptation (SSFA) for this problem. The proposed SSFA consists of a semi-supervised learning module and a feature adaptation module. The semi-supervised learning module optimizes the classifier based on joint minimization of supervised/unsupervised losses (to pseudo-labels) and a self-supervised auxiliary loss (for rotation prediction) with weak/strong data augmentation. The feature adaptation module aims to generate more reliable pseudo-labels by updating the feature extraction backbone based on the self-supervised auxiliary loss on the unlabeled samples. Comparative experiments with existing semi-supervised learning methods are conducted and the effectiveness of the proposed method is claimed. Strengths: This paper addresses the FDM-SSL problem, which has been little studied. The method is simple, and its superiority over the major semi-supervised learning methods on the FDM-SSL problem is experimentally demonstrated. Weaknesses: A. Problem Novelty The authors claim in the abstract and conclusion sections that one of the novelties of this paper is that they tackle the novel problem of semi-supervised learning with feature distribution mismatch. However, this claim is not valid because previous work [a] has addressed a very similar problem. The differences between these two should be emphasized and clarified. Furthermore, there are also several closely related problems that have been considered in the past (e.g., semi-supervised domain generalization [b] and few-shot domain generalization [c]), but discussion of their relevance is lacking. [a] Aminian et al., An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift. ICML, 2022. [b] Zhou et al., Semi-Supervised Domain Generalization with Stochastic StyleMatch. NeurIPS Workshop, 2021. 
(Extended version is published at IJCV) [c] Yuan et al., A novel forget-update module for few-shot domain generalization. Pattern Recognition, 2022. B. Experiments Comparisons with existing methods for the related problems I listed in Weakness A above are missing. Furthermore, since the problem of FDM-SSL can be considered a variant of domain generalization, there should be comparisons with existing domain generalization (and more recent domain adaptation) methods. Due to the lack of such an evaluation, the effectiveness of the proposed method is not adequately demonstrated. C. Method Novelty The novelty of the proposed method is lacking. It is basically composed of common techniques in semi-supervised learning (such as weak/strong data augmentation or pseudo labeling with confidence thresholding). The feature adaptation module, which performs fine-tuning of the feature extraction backbone based on self-supervised auxiliary loss, has not been explored in semi-supervised learning, but the method itself is borrowed from [30]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have no specific questions for now. I would like to ask the authors to point out any misunderstandings I have about the above points I have raised as weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Its limitations are discussed in the concluding section. It would be interesting to add a discussion of the performance of the proposed method in the presence of another often discussed realistic distribution mismatch, class distribution mismatch (open set semi-supervised learning). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
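The rotation-prediction auxiliary task mentioned in the review summary above can be sketched as a simple data construction, shown here under toy assumptions (tiny 4x4 "images"; `make_rotation_batch` is a hypothetical helper, not the authors' code): each image is rotated by 0/90/180/270 degrees and the rotation index becomes the self-supervised classification target.

```python
import numpy as np

def make_rotation_batch(images):
    """Build the rotation-prediction self-supervised task:
    every image yields four rotated copies, labeled 0..3 by rotation."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))  # rotate by k * 90 degrees
            labels.append(k)
    return np.stack(rotated), np.array(labels)

imgs = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(float)  # two toy "images"
x, y = make_rotation_batch(imgs)
# x: (8, 4, 4) rotated views; y: (8,) rotation-class targets
```

The auxiliary head is then trained with cross-entropy on `y`, requiring no human labels, which is what lets the method use it on unlabeled data.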
Rebuttal 1: Rebuttal: Thank you for your comments. We now answer them point by point. **W A. Problem Novelty: A discussion of several closely related problems ([a], [b] and [c]) is lacking.** Our proposed FDM-SSL problem setting is different from the previous works; we summarize the similarities and differences as follows: + **Comparing with [a]**. Our FDM-SSL and [a] both address the issue of different feature distributions for labeled and unlabeled data. The main differences between FDM-SSL and [a] lie in: **(1) test distributions.** [a] assumes that the feature distributions of test and unlabeled data are the same, diverging solely from the labeled distribution. So [a] focuses on the model's performance on the unlabeled distribution. By contrast, our FDM-SSL framework imposes no constraints on the test distribution. The feature distribution of test data may encompass labeled, unlabeled, or even unseen ones during the training process. This presents a bigger challenge compared to [a]. **(2) cause of feature distribution mismatch.** [a] is more concerned with the different distributions of labeled and unlabeled data caused by *selection bias*, such as the construction of labeled and unlabeled data on the MNIST dataset in [a]. In contrast, our FDM-SSL is more concerned with the mismatch in feature distribution between labeled and unlabeled data caused by *image corruption* or *style changes*. Overall, there is a significant difference between FDM-SSL and [a]. + **Comparing with [b]**. The main differences between our FDM-SSL and Semi-Supervised Domain Generalization (SSDG) are: **(1) domain labels.** SSDG assumes that the training data has $K$ domains, providing domain labels for each sample. However, domain labels are not provided in our setting, since we assume there is no additional information for unlabeled data. Consequently, FDM-SSL is a more realistic setting with fewer constraints. 
**(2) distributions of labeled and unlabeled data.** In SSDG, each domain contains a set of labeled data and a set of unlabeled data. These labeled and unlabeled data are sampled from the same distribution. In contrast, in our FDM-SSL, there are no constraints on the feature distribution of unlabeled data. In detail, unlabeled data could potentially include samples sharing the same feature distribution as the labeled data, or there may be no unlabeled data sharing the same distribution as the labeled data. + **Comparing with [c]**. The main differences between our FDM-SSL and Few-Shot Domain Generalization (FSDG) are as follows: **(1) number of labeled samples.** FSDG often assumes a large number of labeled samples in the source domain for training, with a few labeled samples in the target domain for adaptation. In contrast, in FDM-SSL, the number of labeled samples is very limited, but there is a large number of unlabeled samples for training. **(2) distribution shift.** FSDG focuses on generalizing a model to new classes with a limited number of labeled examples, which implies a shift in the label space between the test set and the training set ($p_{test}(y) \neq p_{train}(y)$). On the contrary, FDM-SSL focuses more on robustness to feature distribution shift ($p_{test}(x) \neq p_{train}(x)$). **W B. Experiments: Comparisons with existing methods for the related problems I listed in Weakness A and more recent domain adaptation methods are missing.** Based on our answer to Weakness A, our FDM-SSL is different from domain generalization. Instead, it presents a more realistic SSL problem with fewer constraints. Given the mismatch in feature distribution among samples in FDM-SSL, coupled with the similarities between domain generalization and domain adaptation, we choose domain generalization and domain adaptation methods for comparison in our experiments. 
For domain generalization methods, considering that [a] does not have open-source code and [c] has different settings on the training set compared to SSL, we evaluate [b] in the FDM-SSL setting. The following table shows the results of [b] on the OFFICE-HOME dataset. We can observe that our method achieves better performance than [b]. In addition, [b] introduced an additional style-image generation model, AdaIN, to expand the training data, resulting in a higher training cost (much higher memory usage and training time than our method). So our SSFA is simpler and more efficient than [b].

| Method | A/ACPR | | P/ACPR | | R/ACPR | |
| ---------- | ------ | ---- | ------ | ---- | ------ | ---- |
| | L | UL | L | UL | L | UL |
| StyleMatch | 52.0 | 35.3 | 62.9 | 45.9 | 40.3 | 22.3 |
| FM-SSFA | **55.0** | **45.5** | **71.8** | **52.6** | **64.8** | **52.7** |

For domain adaptation, we choose the state-of-the-art UDA method PMTrans for comparison. Detailed experiment results and analysis can be found in the response to Q3 and Q4 for Reviewer WadC. **W C. Method Novelty** For more explanation of this issue, please refer to our reply to Q1 in the general response. **Limitations. Add a discussion of the performance of the proposed method in open set semi-supervised learning.** Following the reviewer's suggestion, we conducted experiments on open-set semi-supervised learning. For CIFAR10, we split the classes into known and unknown classes by defining animal classes as known (6 classes) and the others as unknown (4 classes). The following table shows the accuracy.

| No. of labeled samples per class | 5 | 50 |
| :--------------------: | ----- | ---- |
| FixMatch | 48.2 | 86.0 |
| FM-SSFA | **65.8** | **88.5** |

In open-set semi-supervised learning, our method can still bring improvements to the baseline, and the improvement is more significant when labeled data is scarcer. 
In addition, we also conducted experiments on the standard SSL benchmark and when the ratio is low (refer to the answer to Q1 from Reviewer epDa), and the results verify that our proposed method is adaptable and effective in many scenarios. To sum up, our proposed SSFA is a more general semi-supervised framework for handling inconsistent distributions of labeled and unlabeled data. --- Rebuttal Comment 1.1: Title: Question to authors' rebuttal Comment: Thanks to the authors for their efforts in answering my questions. Some of my questions have been resolved. I have a question about the answer to A. Problem Novelty. Despite the fact that some past work like [a] has already considered the feature distribution mismatch between labeled and unlabeled samples, we can see that this paper contains claims that contradict this point, for example, "In this work, we introduce a novel setting to formalize the feature distribution mismatch between the labeled and unlabeled samples" in the abstract. Such statements overclaim the contributions of this paper and may mislead readers, so they should be corrected. Do the authors agree with this? If yes, I would like to see a brief plan on how it could be fixed. --- Reply to Comment 1.1.1: Title: Answer to reviewer's question Comment: Thank you for pointing out that some statements about FDM-SSL are not very rigorous. We first clear up the confusion and then provide our revision plan to make the description of FDM-SSL more accurate. The Covariate-shift SSL problem discussed in reference [a] assumes that the test and unlabeled feature distributions are shifted with respect to the labeled feature distribution. Our FDM-SSL setting is more general and challenging than the Covariate-shift SSL problem, designed for a more complex and realistic scenario. 
We emphasize the differences in two aspects: + **Test distribution.** Covariate-shift SSL assumes the test data and unlabeled data are drawn from the same distribution, while FDM-SSL focuses on a more realistic scenario of different labeled, unlabeled and even unseen distributions simultaneously (see Table 1 in our paper). + **Mixed Unlabeled distribution.** In FDM-SSL, we address a scenario where unlabeled samples may come from a diverse mixture of multiple distributions, rather than a single distribution (see Section 3 in our paper). Following your suggestion, we will refine the claim of FDM-SSL in our paper by emphasizing the aforementioned points that are considered in FDM-SSL: + **Abstract Section.** We will restate the claim in lines 3-4 as follows: *In this paper, we propose a novel SSL setting that reflects a more realistic scenario. Within the novel setting, unlabeled samples are drawn from a mixed distribution that deviates from the feature distribution of labeled samples, while the test distribution covers labeled, unlabeled, and even unseen data distributions simultaneously*. + **Introduction Section.** We will restate the claim in lines 37-38 as follows: *In this study, we focus on a more realistic scenario FDM-SSL, i.e., the feature distributions of labeled and unlabeled data could be different and the feature distributions of test data could contain multiple distributions*. + **Related Work Section.** We will incorporate a comprehensive comparison between FDM-SSL and Covariate-shift SSL (as well as other related problem settings you raised) to show the differences. + **Conclusion Section.** We will restate the claim in lines 323-324 as follows: *In this paper, we focus on a realistic SSL setting, FDM-SSL, involving a mismatch between the labeled and unlabeled distributions, complex mixed unlabeled distributions and widely unknown test distributions*. 
If there is something you feel we have not adequately addressed yet, please do not hesitate to question.
Summary: The paper challenges the traditional assumption of semi-supervised learning: that the feature distributions of labeled and unlabeled data are consistent. It claims that this assumption rarely holds in realistic scenarios, as: (a) unlabeled samples could contain various corruptions; (b) unlabeled samples could contain unseen styles. For this, the authors propose Self-Supervised Feature Adaptation (SSFA), a generic framework for improving SSL performance when labeled and unlabeled data come from different distributions. SSFA consists of a semi-supervised learning module and a feature adaptation module, where the semi-supervised learning module performs semi-supervised and self-supervised learning. Strengths: (1) The paper is well written and easy to follow, and the presentation is good. (2) The proposed method is simple and effective. It consistently improves over its vanilla SSL algorithm counterpart. (3) The paper conducts extensive experiments on two realistic scenarios and an ablation study to validate the contribution of each component and the overall method. Weaknesses: (1) The proposed method has limited novelty. Incorporating self-supervised learning into semi-supervised learning, or leveraging self-supervised learning for domain adaptation, both are not new conceptions. (2) Some important technical details are missing in the experimental setting section. For example, what is the number of shared layers and the self-supervised learning algorithm by default. These are important for reproducibility. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) For Table 1, I am curious about the proposed method's performance on the standard SSL benchmark, or when the ratio is low (like 0.1); this is a measure of whether the proposed method is robust across scenarios, even if the noise is not that severe. 
(2) The ablation study shows that the number of shared layers won't affect much as long as not sharing all, I would suggest the authors to conduct experiments on more backbones and get some empirical conclusion on how to set this hyper-parameter. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitation of this work is that the performance of SSFA is affected by the shared parameters between the main task and the auxiliary task. The ablation study shows that the performances do not differ much as long as not all parameters are shared. The number of layers to share should be a hyper-parameter to tune. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your useful comments! We see that your main concerns are novelty, experimental details and performance. We address each of your concerns below. **W1. The proposed method has limited novelty. Incorporating self-supervised learning into semi-supervised learning, or leveraging self-supervised learning for domain adaptation, both are not new conceptions.** Thank you for your comments on novelty. To our knowledge, our proposed SSFA is the first to propose correcting pseudo-labels for unlabeled data through feature adaptation in SSL. Our extensive experiments in various scenarios have also verified the effectiveness of our method. Therefore, SSFA is a concise yet effective SSL framework that can be used to solve more realistic SSL problems. Please refer to our reply to Q1 in the general response for more detailed explanations. **W2. Some important technical details are missing in the experimental setting section. For example, what is the number of shared layers and the self-supervised learning algorithm by default. These are important for reproducibility.** Thanks for your suggestions. We will add more experimental details in the paper to enhance reproducibility. By default, the number of shared layers in our experiment is 2, the default self-supervised learning task is the rotation prediction task, and the corresponding default self-supervised loss is the cross-entropy loss. **Q1. For Table 1, I am curious about the proposed method's performance on standard SSL benchmark, or when the ratio is low (like 0.1), this is a measure of whether the proposed method is robust to any scenarios, even if the noise is not that much.** Following your suggestion, we conducted experiments on the standard SSL setting and on a low-ratio setting (0.1) on the CIFAR100 dataset with 400 labeled samples. The results are summarized in the following table. It can be seen that our method can still bring significant improvements over the baseline. 
This indicates that our proposed SSFA framework is robust in SSL scenarios even under a low noise ratio. The results further demonstrate that our SSFA is a generalized SSL framework.

| Method | standard SSL | | ratio 0.1 | | |
| -------- | ------------ | ---- | --------- | ---- | ---- |
| | L/UL | US | L | UL | US |
| FixMatch | 33.3 | 25.8 | 31.7 | 10.5 | 24.8 |
| FM-SSFA | **41.3** | **33.0** | **41.2** | **15.8** | **32.9** |

**Q2. The ablation study shows that the number of shared layers won't affect much as long as not sharing all, I would suggest the authors to conduct experiments on more backbones and get some empirical conclusion on how to set this hyper-parameter.** Following your suggestions, we conducted relevant experiments on WiderResNet to explore the impact of the number of shared layers on the CIFAR100 benchmark. In WiderResNet, "3 layers" means that the main task and the self-supervised task share the whole feature extractor. As shown in the following table, combined with our SSFA, the baseline shows a significant improvement. In fact, in most cases, adjusting the number of shared layers does not result in significant performance fluctuations. In rare cases, sharing all feature extractors may lead to potential risks. Based on our findings, we provide a more suitable shared-layer setting: sharing half of the parameters of the feature extractor, which ensures relatively better performance.

| Method | 400 labeled | | 4000 labeled | | 4000 labeled | |
| -------------------- | ----------- | ---- | ------------ | ---- | ------------ | ---- |
| | L | UL | L | UL | L | UL |
| FixMatch | 15.7 | 3.5 | 53.0 | 16.0 | 65.3 | 33.0 |
| FM-shared (2 layers) | **25.7** | **22.2** | **60.2** | **52.5** | **69.1** | **57.8** |
| FM-shared (3 layers) | 23.3 | 21.0 | 57.3 | 47.3 | 66.2 | 53.0 |
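The FixMatch baseline used throughout these tables rests on confidence-thresholded pseudo-labeling, which can be sketched as follows (a NumPy toy, not the authors' code; `pseudo_label_loss` is a hypothetical helper, and the 0.95 threshold mirrors FixMatch's common default): pseudo-labels come from weakly augmented views, and cross-entropy is applied to strongly augmented views only where the model is confident.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def pseudo_label_loss(weak_logits, strong_logits, tau=0.95):
    """FixMatch-style consistency loss: pseudo-labels from weak views,
    cross-entropy on strong views, masked by a confidence threshold tau."""
    probs = softmax(weak_logits)
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = conf >= tau                       # keep only confident samples
    log_p = np.log(softmax(strong_logits))
    ce = -log_p[np.arange(len(labels)), labels]
    return (ce * mask).sum() / max(mask.sum(), 1), mask

weak = np.array([[5.0, 0.0, 0.0], [0.3, 0.2, 0.1]])   # one confident, one not
strong = np.array([[4.0, 0.5, 0.5], [0.1, 0.1, 0.1]])
loss, mask = pseudo_label_loss(weak, strong)
# only the first (confident) sample contributes to the loss
```

SSFA's point, as argued in the rebuttal, is that under feature distribution mismatch these pseudo-labels become unreliable unless the feature extractor is first adapted to the unlabeled batch.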
Summary: In this work, the authors propose a generalized Self-Supervised Feature Adaptation (SSFA) framework for FDM-SSL (Feature Distribution Mismatched Semi-Supervised Learning) that does not require prior knowledge of the distribution of unlabeled data. The SSFA framework aims to address distribution mismatch by decoupling pseudo-label predictions from the current model. It consists of two modules: the semi-supervised learning module and the feature adaptation module. The authors draw inspiration from previous work on auxiliary tasks and incorporate an auxiliary self-supervised task into the SSL module to train alongside the main task. In the feature adaptation module, the current model, primarily trained on labeled data, is updated by utilizing the self-supervised task to adapt the feature extractor before making predictions on unlabeled data. This adaptation allows for the generation of more accurate pseudo-labels, which in turn assist the SSL process. Strengths: - Addition of the feature adaptation module to adapt the model (trained on labeled data) to the unlabeled data distribution before generating pseudo-labels seems to help with the distribution shift problem between labeled and unlabeled data. - Extensive empirical evidence provided to show that the proposed SSFA method significantly improves classification accuracy on the CIFAR100, OfficeHome and Office31 datasets. Domain-level and class-level feature visualizations also suggest successful feature adaptation across different domains and the learning of more distinguishable representations across different classes. Ablation studies investigating the effect of different self-supervised tasks, the number of shared layers between the auxiliary and main tasks, and the sensitivity of the method to different confidence thresholds are also useful. Weaknesses: - Lemma 1 is fine but convexity is a very strong assumption, which does not hold true in real-world deep loss functions. 
Some empirical evidence is provided, but it is somewhat counterintuitive to think that minimizing the self-supervised empirical loss will also minimize the supervised empirical loss. - Using weakly augmented samples for a supervised loss and strongly augmented samples for an unsupervised loss is not novel and was first introduced in the FixMatch paper. Technical novelty is somewhat incremental, though the feature adaptation module can still be considered a significant improvement. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: It was not very clear from the text how the feature adaptation module is being trained. The module is being trained for one iteration and then is being discarded. This part is confusing. Is the module trained one unlabeled sample at a time or using all unlabeled samples? If the module needs all unlabeled samples to be trained, this may not reflect a real-world use case scenario and will limit the applicability of the approach. Figure 1 needs to be updated to reflect these important details. What kind of self-supervised loss is used in equation 5? Please define G in Lemma 1. What does subscript m in the loss function in Lemma 1 indicate? It was not previously used in defining the loss function. Please correct the typos in Table 1. "Traditioal", Abundant/Scarce could replace Plenty/Lack. ------------- Your responses have clarified some of the concerns I had with Lemma 1. In particular, I acknowledge your explanations regarding the training process of the feature adaptation module, and the flexible nature of the self-supervised loss function. I am inclined to update my initial score to reflect these additional insights your rebuttal offered. Thanks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Conclusions allude to some algorithmic limitations yet do not discuss societal impact. No negative societal impact beyond what is already present in generic machine learning algorithms has been observed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments! Below are the responses to your comments. **W1. Lemma 1 is fine but convexity is a very strong assumption, which does not hold true in real-world deep loss functions. Some empirical evidence is provided, but it is somewhat counterintuitive to think that minimizing self-supervised empirical loss will also minimize supervised empirical loss.** Thank you for your comments on Lemma 1. We will answer the questions in two parts. **1.1. It is somewhat counterintuitive to think that minimizing self-supervised empirical loss will also minimize supervised empirical loss.** Please refer to our reply to Q2 in the general response. We provide a toy example to explain this question. **1.2. Lemma 1 is fine but convexity is a very strong assumption, which does not hold true in real-world deep loss functions.** Please refer to our reply to Q3 in the general response. We provide a theoretical analysis and empirical evidence to prove its feasibility in the real world. **W2. Using weakly augmented samples for a supervised loss and strongly augmented samples for an unsupervised loss is not novel and was first introduced in the FixMatch paper. Technical novelty is somewhat incremental, though the feature adaptation module can still be considered a significant improvement.** Thank you for your approval of the method's novelty. The consistency loss introduced by FixMatch is not the primary focus of our method. The key innovation is to propose a unified framework, SSFA, for FDM-SSL. In our paper, FixMatch serves as an illustrative example to showcase the working pipeline of SSFA, but FixMatch can be replaced by any other pseudo-label-based semi-supervised method. As you mentioned, our innovation mainly lies in the proposed feature adaptation module, which introduces a self-supervised task to achieve feature alignment. 
This feature adaptation module offers flexibility, allowing multiple self-supervised tasks such as rotation prediction tasks and entropy minimization tasks. Overall, our work focuses on proposing a universal SSFA framework to solve the new and more realistic FDM-SSL scenario. Please refer to our reply to Q1 in the general response for more details. **Q1. It was not very clear from the text about how the feature adaptation model is being trained. The module is being trained for one iteration and then is being discarded. This part is confusing. Is the module trained one unlabeled sample at a time or using all unlabeled samples? If the module needs all unlabeled samples to be trained this may not reflect a real-world use case scenario and will limit the applicability of the approach. Figure 1 needs to be updated to reflect these important details.** We are not using all unlabeled data for updating the feature adaptation module. Instead, we only utilize the unlabeled data from the current batch. Typically, the batch size is not excessively large (e.g., set to 64 in our experiment). In particular, we update the feature extractor $\theta_g$ to $\theta_g'$ using the unlabeled data from the current batch. Afterwards, the updated feature extractor $\theta_g'$ is used to extract new features for the unlabeled data to generate pseudo-labels. Notably, $\theta_g'$ does not have any other impact on subsequent model training. We will add these important details to Figure 1 in the revised version. **Q2. What kind of self-supervised loss is used in equation 5?** The self-supervised loss can be chosen flexibly depending on the self-supervised task used. In our paper, we employ different self-supervised losses corresponding to the rotation prediction task, contrastive learning task, and entropy minimization task, which are the cross entropy loss, the contrastive loss from SimCLR, and the entropy loss, respectively. **Q3. Please define G in Lemma 1. 
What does the subscript m in the loss function in Lemma 1 indicate? It was not previously used in defining the loss function.** G is a constant, denoting the upper bound of $\lVert \nabla l_s(x,y;h)\rVert$. $l_m$ is the loss function of the main task. **Q4. Please correct the typos in Table 1. "Traditioal", Abundant/Scarce could replace Plenty/Lack.** Thanks for your suggestions. We will modify the paper based on the suggestions. **Limitations. Conclusions allude to some algorithmic limitations yet do not discuss societal impact. No negative societal impact beyond what is already present in generic machine learning algorithms has been observed.** Thanks for your suggestion. We will add a section to discuss the societal impact in the paper. --- Rebuttal Comment 1.1: Comment: Your responses have clarified some of the concerns I had with Lemma 1. I also acknowledge your explanations regarding the training process of the feature adaptation module, and the flexible nature of the self-supervised loss function. I have updated my initial score to reflect these additional insights your rebuttal offered. Thanks.
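The one-step adaptation pipeline described in the rebuttal above (take one gradient step on a self-supervised loss to update the shared extractor on the current unlabeled batch, pseudo-label with the updated copy, then discard it) can be sketched roughly as follows. This is a hedged numpy stand-in, not the authors' implementation: the linear extractor, the norm-regression self-supervised loss, and all shapes are hypothetical.

```python
import numpy as np

def one_step_feature_adaptation(theta_g, ssl_grad_fn, batch_u, lr=0.1):
    """One gradient step on the self-supervised loss for the shared
    extractor; the updated copy is used only for pseudo-labelling."""
    return theta_g - lr * ssl_grad_fn(theta_g, batch_u)

def pseudo_labels(theta_g_prime, classifier_w, batch_u):
    feats = batch_u @ theta_g_prime.T          # adapted features
    logits = feats @ classifier_w.T
    return logits.argmax(axis=1)               # hard pseudo-labels

# Hypothetical toy instantiation: the SSL task drives feature norms to 1.
rng = np.random.default_rng(0)
theta_g = rng.normal(size=(8, 16))             # shared extractor weights
classifier_w = rng.normal(size=(10, 8))        # frozen classifier head
batch_u = rng.normal(size=(64, 16))            # current unlabeled batch

def ssl_grad(theta, xb):
    # gradient of 0.5 * mean((||theta x||^2 - 1)^2) w.r.t. theta
    feats = xb @ theta.T
    err = (feats ** 2).sum(axis=1) - 1.0
    return 2.0 * (err[:, None] * feats).T @ xb / len(xb)

theta_g_prime = one_step_feature_adaptation(theta_g, ssl_grad, batch_u)
q = pseudo_labels(theta_g_prime, classifier_w, batch_u)
# theta_g_prime is discarded after this; theta_g itself is what gets trained
print(q.shape)  # (64,)
```

The key property mirrored here is that only the current batch of unlabeled data is touched, and the adapted extractor exists only for the duration of the pseudo-labelling step.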
Summary: The authors propose an interesting and novel semi-supervised learning method with auxiliary self-supervised feature adaptation for addressing issues with heterogeneity in labelled and unlabelled data. In fact, the authors focus on heterogeneity across domains, which is not something traditionally considered in the SSL literature, but is more in the field of unsupervised domain adaptation. The authors combine ideas of feature adaptation from UDA with consistency regularization and pseudo-labelling from SSL to bridge the two fields. The results presented seem much better than the baseline methods. Strengths: 1. Table 1 is useful and sets the tone of the paper showing the problem settings and constraints of each type of approach. 2. The performance metrics reported in Tables 2 and 3 are very impressive compared to baselines. Weaknesses: 1. However, the caveat is that the baseline methods cannot do both SSL and UDA, which makes comparisons of performance metrics harder. However, SSFA does seem to be a novel method bridging the two fields. Minor points: 1. Typo in Table 1: traditional 2. Might be useful to point out more forcefully Eq 2 is traditional in SSL, but Eq 7 is actually used, especially as SSFA Algorithm with equation references is in Supplement. Maybe it would be useful to not call both Eq 2 and Eq 7, L_u. 3. How was the cluster representation in Figure 4 made? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. The Feature Adaptation Module seems like a 1-directional (unlabelled to labelled) and one-step transformation, as the unlabelled are aligned to the labelled. Have the authors tried using more steps in the optimization to allow more adaptation, or does that rely on the loss and step calibration? 2. The authors show the effect of different fixed thresholds in Eq 7 in Figure 5. Have the authors tried an adaptive threshold as in FlexMatch? 3. 
While the authors have compared SSFA to more recent and state-of-the-art models in SSL such as FreeMatch and SoftMatch, the authors have not compared against arguably the state-of-the-art in UDA, such as PMTrans. How does SSFA compare against PMTrans or any of the more recent leaders on the Domain Adaptation leaderboard on Office-31 and Office-Home? 4. Moreover, SSFA is especially impressive with a low number of labelled samples against UDA methods. How does it do in low labelled data settings against more recent UDA methods? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have addressed some limitations of their method Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments! We address all of your concerns point by point as below. **W1. However, the caveat is that the baseline methods cannot do both SSL and UDA, which makes comparisons of performance metrics harder. However, SSFA does seem to be a novel method bridging the two fields.** Currently, research on FDM-SSL remains limited. Considering the similarities between the task setting of FDM-SSL and the more extensively studied SSL and UDA, we also conducted experiments on SSL and UDA methods to evaluate their ability to address FDM-SSL in Table 2 and Table 3 of our paper. As you pointed out, our SSFA framework has the potential to serve as a universal baseline, bridging SSL and UDA in the future. **W2. Minor points:** 1. **Typo in Table 1: traditional** Thanks for pointing out this issue. We will correct this typo in the final version. 2. **Might be useful to point out more forcefully Eq 2 is traditional in SSL, but Eq 7 is actually used, especially as SSFA Algorithm with equation references is in Supplement. Maybe it would be useful to not call both Eq 2 and Eq 7, L_u.** Thanks for your suggestions. We will make modifications based on the suggestions provided by the reviewer. 3. **How was the cluster representation in Figure 4 made?** In Figure 4, we randomly select four categories and extract features from labeled and unlabeled samples using the model's feature extractor. Then, we use t-SNE to reduce the dimensionality and visualize these features. Figure 4 (a) shows that the features of different categories are mixed together, indicating that FixMatch lacks the ability to distinguish different categories under FDM-SSL settings. In Figure 4 (b), the categories are clearly distinguished without being affected by different domains, indicating that the SSFA module can greatly alleviate the classification error caused by feature distribution mismatch. **Q1. 
Have the authors tried using more steps in the optimization to allow more adaptation, or does that rely on the loss and step calibration?** A1: Following your suggestion, we further explored multi-step adaptation during the optimization process on the CIFAR100 benchmark (with 400 labeled samples and a $ratio$ of 1.0). As shown in the table below, even one step of adaptation brings significant improvements over the baseline. As the number of adaptation steps increases, performance can improve further, but at greater computational cost. Considering the trade-off between accuracy and computational cost, we perform one-step optimization in this paper. | Method | L | UL | US | | ------------------ | ----- | ----- | ----- | | FM | 15.7 | 3.5 | 8.5 | | FM-SSFA (1 step) | 25.7 | **22.2** | 22.5 | | FM-SSFA (5 steps) | 26.2 | 14.9 | 16.0 | | FM-SSFA (10 steps) | **41.1** | 18.7 | **33.7** | **Q2. Have the authors tried an adaptive threshold as in FlexMatch?** In our paper, we have compared SSFA to more recent and state-of-the-art SSL algorithms such as FreeMatch and SoftMatch. Both algorithms use self-adaptive thresholds similar to (but more complex than) FlexMatch. For further explanation, we ran FlexMatch in FDM-SSL and give the results on the CIFAR100 benchmark (with 400 labeled samples and a $ratio$ of 0.5) as follows. It can be seen that SSFA can further improve the performance of FlexMatch, especially on the unlabeled and unseen data distributions. | Method | L | UL | US | | -------------- | ---- | ---- | ---- | | FlexMatch | 35.8 | 2.2 | 23.6 | | FlexMatch-SSFA | **38.3** |**28.4** | **31.6** | **Q3. How does SSFA compare against PMTrans or any of the more recent leaders the Domain Adaptation leaderboard on Office-31 and Office-Home?** Following your suggestion, we evaluate PMTrans under the FDM-SSL setting on the OFFICE-HOME benchmark. As shown in the table below, PMTrans achieves very poor performance. 
In fact, we have also found that previous UDA methods (such as DANN and CDAN) may crash during training in some cases. We have mentioned the relevant reasons in the paper (lines 244 - 248). In addition, since PMTrans uses the more complex Swin Transformer as its backbone, while our method uses ResNet50, PMTrans requires longer training time and more memory than SSFA. | Method | A/ACPR (L) | A/ACPR (UL) | C/ACPR (L) | C/ACPR (UL) | P/ACPR (L) | P/ACPR (UL) | R/ACPR (L) | R/ACPR (UL) | | -------- | ------ | ---- | ------ | ---- | ------ | ---- | ------ | ---- | | PMTrans | 8.6 | 2.3 | 17.4 | 10.5 | 23.8 | 12.7 | 15.3 | 6.6 | | FixMatch | 32.4 | 23.0 | 36.9 | 30.6 | 52.9 | 32.6 | 42.2 | 31.5 | | FM-SSFA | **55.0** | **45.5** | **44.7** | **41.7** |**71.8** | **52.6** | **64.8** | **52.7** | **Q4. Moreover, SSFA is especially impressive for low number of labelled data against UDA methods. How does it do in low labelled data settings against more recent UDA methods?** Following your suggestion, we evaluate the recent UDA method PMTrans and our SSFA in low labeled data settings (1 labeled sample per class) on the OFFICE-HOME benchmark. It can be seen that in low labeled data settings, the performance of both PMTrans and FixMatch is poor. However, with our SSFA, FixMatch improves significantly. | Method | A/ACPR (L) | A/ACPR (UL) | C/ACPR (L) | C/ACPR (UL) | P/ACPR (L) | P/ACPR (UL) | | -------- | ------ | ---- | ------ | ---- | ------ | ---- | | PMTrans | 5.8 | 1.7 | 6.6 | 3.3 | 10.4 | 6.3 | | FixMatch | 4.7 | 3.5 | 10.0 | 6.6 | 6.3 | 5.7 | | FM-SSFA | **19.1** | **20.5** | **18.4** | **16.2** | **29.3** | **20.0** | --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying some of my questions and performing new experiments on state-of-the-art UDA methods. Those UDA methods were confusing, and the authors also mention that other UDA methods such as DANN and CDAN can crash during training. This makes it hard to compare the method to UDA methods. 
While this may be no fault of the method, it does make the method look like an interesting application of self-supervised features to semi-supervised learning with some improved performance results, but not quite comparable to UDA methods. --- Reply to Comment 1.1.1: Title: Thank you for the reply! Comment: Thank you for the reply. We analyze the reasons for the poor performance of UDA methods on FDM-SSL as follows: 1. UDA methods usually assume abundant source domain data for model training. However, in FDM-SSL, the labeled data is very scarce, which can potentially hinder the training process. For example, many UDA methods incorporate a domain discriminator to distinguish source and target samples. The severe class imbalance between limited labeled samples and abundant unlabeled samples poses challenges to the effective training of domain discriminators. 2. UDA typically deals with unlabeled data from a single distribution, focusing on adapting to one target distribution. In our FDM-SSL, unlabeled data can originate from multiple distributions without prior information. When employing UDA methods, the unlabeled samples are usually considered to come from a single distribution (target domain). This simplification may disturb the training of UDA methods and cause crashes when unlabeled data come from multiple distributions. Our SSFA takes a different approach. It leverages the feature adaptation module to accommodate different distributions of unlabeled data. This enables SSFA to flexibly extract features for unlabeled data from different distributions. If there is something you feel we have not adequately addressed yet, please do not hesitate to ask.
Rebuttal 1: Rebuttal: We are grateful to all reviewers and ACs for the generous effort they have invested in reviewing our work! Some important or common questions are answered. **Q1. The novelty of the method may be limited.** Thanks for your comments. According to Occam's Razor, *Entities should not be multiplied unnecessarily*. In addition, complex methods often incur increased training costs. Therefore, our method is intentionally designed to be simple yet highly effective. In our paper, we aim to alleviate the severe performance degradation caused by feature distribution mismatch between labeled and unlabeled data in SSL. We propose Self-Supervised Feature Adaptation (SSFA) from the perspective of distribution adaptation. By using a feature adaptation module to update the features of unlabeled samples, we can map input data with different feature distributions to the same feature space, thereby greatly improving the pseudo-label accuracy for unlabeled samples. In this way, our proposed SSFA is not a specific semi-supervised method, but a universal semi-supervised framework based on feature adaptation. To our knowledge, our proposed SSFA is the first framework to predict pseudo-labels for unlabeled data after feature adaptation in SSL. This method is characterized by its simplicity and remarkable effectiveness in solving more complex SSL problems. Furthermore, in experiments, we have demonstrated that the SSFA framework can be combined with a wide range of pseudo-label-based semi-supervised learning methods. Besides, there is a wide selection of auxiliary tasks for feature adaptation modules, which greatly improves the practicality of our proposed SSFA framework. We believe that this simple framework can serve as a solid baseline for further research in Feature Distribution Mismatch SSL. **Q2. 
The result of Lemma 1 is counter-intuitive.** We provide a toy example to illustrate this counter-intuitive conclusion pointed out by the reviewer: minimizing the loss of the self-supervised task in Lemma 1 indirectly minimizes that of the main task. Consider a two-layer linear network parametrized by $\theta_g$ (the shared linear layer), $\theta_c$ (the linear head for the main task) and $\theta_s$ (the linear head for the self-supervised task). The predictions for the main and self-supervised task can be denoted as $\hat y_m = \theta_c^T \theta_g x$ and $\hat y_s = \theta_s^T \theta_g x$, respectively. The main task loss is $l_m(x,y; \theta_g, \theta_c)=\frac12(y_m-\hat y_m)^2$ and the self-supervised task loss is $l_s(x,y; \theta_g, \theta_s)=\frac12(y_s-\hat y_s)^2$. Since $y_s$ is known, we update the shared feature extractor $\theta_g$ by one step of gradient descent on $l_s$. The updated feature extractor $\theta_g'$ is given by: $$ \theta_g'= \theta_g - \eta(y_s-\hat y_s)(-\theta_s x^T)= \theta_g - \eta(y_s-\theta_s^T \theta_g x)(-\theta_s x^T), $$ where $\eta$ is the learning rate. If we set $\eta=\frac{y_m-\hat y_m}{(y_s-\hat y_s)\theta_c^T\theta_s x^Tx}$, we find that the main task loss $l_m(x,y; \theta_g', \theta_c)=0$. This toy example demonstrates that it is theoretically possible and reasonable to reduce the loss of the main task to zero by optimizing the self-supervised task. **Q3. The assumptions of Lemma 1 are somewhat strong, which may not hold true in practice.** Although Lemma 1 relies on somewhat strong assumptions, its conclusion can be applied to real-world scenarios where the assumptions might not hold. We attempt to verify this point by first offering a theoretical analysis and then providing empirical evidence derived from the theory. 
+ **Theoretical Analysis**: Based on Lemma 1, for any $\eta$, by smoothness and convexity, $$l_m(x,y;h-\eta\nabla l_s(x;h))\leq l_m(x,y;h)-\eta{\langle \nabla l_m(x,y;h),\nabla l_s(x;h)\rangle}+\frac{\eta^2 \beta}2 {\lVert \nabla l_s(x;h)\rVert}^2.$$ Denote $\eta^*=\frac{\langle \nabla l_m(x,y;h),\nabla l_s(x;h)\rangle}{\beta \lVert \nabla l_s(x;h)\rVert^2}$. Plugging $\eta^*$ into the bound gives: $$l_m(x,y;h-\eta^*\nabla l_s(x;h))\leq l_m(x,y;h)-\frac{\langle \nabla l_m(x,y;h),\nabla l_s(x;h)\rangle^2}{2\beta \lVert \nabla l_s(x;h)\rVert^2},$$ namely, $$ l_m(x,y;h)-l_m(x,y;h-\eta^*\nabla l_s(x;h))\geq \frac{\langle \nabla l_m(x,y;h),\nabla l_s(x;h)\rangle^2}{2\beta \lVert \nabla l_s(x;h)\rVert^2}.$$ It can be seen that in the smooth and convex case, as the gradient inner product between the main task loss $l_m$ and the self-supervised task loss $l_s$, i.e., $\langle \nabla l_m(x,y;h),\nabla l_s(x;h)\rangle$, increases, the updated model has a smaller loss on the main task. That is, the larger the inner product, the greater the decrease of the loss function. + **Empirical Evidence**: For non-convex loss functions, we empirically show that our theoretical insights also hold. In Figure 2 of our paper, we plotted the correlation between the gradient inner product and the performance improvement of the model on the test set, where each point in the figure represents the average result of a set of test samples. From Figure 2, it can be seen that there is a positive correlation between the gradient inner product and model performance improvement. This observed phenomenon is consistent with the theoretical conclusion, that is, a strong gradient correlation clearly indicates higher performance improvements over the baseline. Pdf: /pdf/052fd2e39698bdf74135803f9746406f4d4b2809.pdf
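The toy example from Q2 above can be checked numerically. The sketch below uses hypothetical fixed values for $x$, $\theta_g$, $\theta_c$, $\theta_s$, and the targets; it only verifies the algebra of the one-step update with the stated learning rate, not the full method.

```python
import numpy as np

x = np.array([1.0, 2.0])                     # hypothetical input
theta_g = np.array([[0.5, -0.3],
                    [0.1,  0.4]])            # shared linear layer
theta_c = np.array([-1.0, 1.0])              # main-task head
theta_s = np.array([0.5, 1.0])               # self-supervised head
y_m, y_s = 2.0, 1.0                          # hypothetical targets

y_m_hat = theta_c @ theta_g @ x
y_s_hat = theta_s @ theta_g @ x

# gradient of l_s = 0.5 * (y_s - y_s_hat)^2 with respect to theta_g
grad_ls = -(y_s - y_s_hat) * np.outer(theta_s, x)

# the learning rate from the toy example in Q2
eta = (y_m - y_m_hat) / ((y_s - y_s_hat) * (theta_c @ theta_s) * (x @ x))

theta_g_prime = theta_g - eta * grad_ls
l_m_after = 0.5 * (y_m - theta_c @ theta_g_prime @ x) ** 2
print(round(l_m_after, 8))  # 0.0
```

One gradient step on the self-supervised loss, with this particular $\eta$, drives the main-task loss to zero up to floating-point error, matching the claim in the toy example.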
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper focuses on Feature Distribution Mismatch SSL (FDM-SSL), which considers a mismatch between the labeled and unlabeled data. In this task setting, the challenges include the scarcity of labeled data and the mixed distributions of the unlabeled data. The paper proposes a framework named Self-Supervised Feature Adaptation (SSFA), which adapts the feature extractor to the unlabeled distribution. Experiments show that SSFA can improve SSL performance on labeled, unlabeled, and unseen data. Strengths: 1. The paper studies a new SSL problem named Feature Distribution Mismatch SSL. The presented SSFA method is simple and effective in addressing this problem. 2. Extensive experiments have been conducted to evaluate the performance of SSFA. 3. The paper is well written and presented. For instance, Table 2 is clear; it reveals the differences between task settings. Weaknesses: 1. The proposed SSFA method is built upon existing SSL and unsupervised domain adaptation (UDA) methods. In SSFA, the SSL module and Feature Adaptation Module are inherited from the benchmark SSL and UDA methods, respectively. The optimization objectives $L_x$, $L_u$, and $L_{aux}$ are widely used in SSL methods. The major novelty is the Feature Adaptation Module, which uses the unlabeled data as input to generate pseudo labels $q_b'$ instead of the original pseudo labels $q_b$ generated from the labeled data. I acknowledge that this work presents some novelties in proposing and addressing the challenges in domain-shifted SSL. However, the method itself is straightforward in my opinion. 2. The assumptions of Lemma 1 are somewhat strong, i.e., the loss is expected to be convex and smooth, the learning rate is fixed to $\frac{\epsilon}{\beta G^2}$. In addition, the result of Lemma 1 is counter-intuitive --- 'The empirical risk of the main task can be theoretically reduced to 0 by optimizing the empirical risk of self-supervised task.' 
Due to these aspects, I kindly doubt that this theoretical result could work well in practice. Also, the derivation only guarantees that the empirical risk is decreasing along with gradient descent, but not 'reduced to 0'. Please check this. 3. I am a bit confused about SSFA's theoretical performance on 'unseen data'. In Sections Introduction and Experiments, the authors claimed that SSFA can significantly improve SSL performance on unseen data. However, the method only adapts models to the *unlabeled* data during training. Why does SSFA improve the performance on unseen data? More theoretical evidence is encouraged to be presented to better support this claim. 4. I suggest the authors show some visualizations of the gap between labeled and unlabeled data, e.g., what are the image corruptions in the experiments? What's the style change from OFFICE-31 dataset to OFFICE-HOME dataset? I find some related details in the Appendix's text. However, it would be more reader-friendly to show some visualizations of the data gap, since domain shifting is the main challenge studied by this work. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See Weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments! We see that your main concerns are novelty, additional explanations, and some details. We answer your questions point by point. **W1. I acknowledge that this work presents some novelties in proposing and addressing the challenges in domain-shifted SSL. However, the method itself is straightforward in my opinion.** Thanks for acknowledging the novelty of our work. We believe that a straightforward method offers several advantages, including simplified implementations, facilitating understanding, and serving as a solid baseline for further research in Feature Distribution Mismatch SSL. Please refer to our reply to Q1 in the global response for more details. **W2. The result of Lemma 1 is counter-intuitive. The assumptions of Lemma 1 are somewhat strong and I kindly doubt that this theoretical result could work well in practice. Lemma 1 only guarantees that the empirical risk is decreasing along with gradient descent, but not 'reduced to 0'.** Thank you for your feedback on issues related to Lemma 1. We will divide the answer to W2 into three parts. **2.1. The result of Lemma 1 is counter-intuitive --- 'The empirical risk of the main task can be theoretically reduced to 0 by optimizing the empirical risk of self-supervised task'.** Please refer to our reply to Q2 in the general response. **2.2 The assumptions of Lemma 1 are somewhat strong, potentially impeding the effectiveness of the theoretical result in practice.** Please refer to our reply to Q3 in the general response. **2.3 Lemma 1 only guarantees that the empirical risk is decreasing along with gradient descent, but not 'reduced to 0'.** According to the toy example, it is possible to indirectly reduce the main task loss to 0 by minimizing the self-supervised task loss. 
In addition, according to Lemma 1, assuming that the loss is convex and smooth, as the gradient of the self-supervised task loss decreases, the empirical risk of the main task will gradually decrease. Therefore, if we choose a proper training strategy to optimize the self-supervised task, the empirical risk of the main task can tend to 0. **W3. However, the method only adapts models to the unlabeled data during training. Why does SSFA improve the performance on unseen data?** We explain the reason why SSFA improves the performance on unseen data from two aspects. + **From the perspective of the method.** In our SSFA, unlabeled samples are aligned with the feature distribution of labeled samples through the feature adaptation module, thereby improving the model's prediction accuracy for distribution-mismatched unlabeled samples. By making correct predictions on samples from various domains, the feature extractor becomes adept at mapping data from different domains to the same feature space, gradually eliminating the impact of domain divergence. As a result, the model learns to capture domain-invariant features as the two feature spaces become indistinguishable. + **From the perspective of experiments.** As the number of domains the model is exposed to increases during training, the feature extractor tends to map more domains to the same feature space. This enhances the model's potential for generalizing to unseen domains. Therefore, the model shows a larger improvement over the baseline on unseen data. To visually illustrate this, we visualize the domain-level features generated by SSL models with/without SSFA respectively. As Figure 1 in the PDF of the global response shows, FixMatch maps samples from different domains (1 labeled, 10 unlabeled and 5 unseen domains) to different clusters in the feature spaces. In contrast, our FM-SSFA model can effectively fuse these samples. 
The fusion of features in the unseen domains and the seen domains indicates that our SSFA has a good generalization ability to map different domain features to the same feature space. **W4. I suggest the authors show some visualizations of the gap between labeled and unlabeled data, e.g., what are the image corruptions in the experiments?** Thanks for your suggestions. We will add relevant visualization images in our paper to further illustrate the feature distribution gap between labeled and unlabeled data. We show the visualizations of the gap caused by image corruption and style change in Figure 2 and Figure 3 in the PDF of the global response respectively.
Keep Various Trajectories: Promoting Exploration of Ensemble Policies in Continuous Control
Accept (poster)
Summary: This paper addresses the problem of exploration in ensemble-based reinforcement learning. It introduces a novel approach called Trajectory Aware Ensemble Exploration (TEEN) that aims to enhance exploration by increasing diversity among the sub-policies. This diversity is measured by the KL divergence between the distributions of the trajectories. The method achieves diversity by optimizing the variational lower bound, which is then incorporated into the policy gradient step with the addition of a regularizer term. To reduce estimation bias on the Q function, the method selects the minimum value from a random set of critics based on the average Q value of the actions chosen by all sub-policies. The paper provides theoretical analysis to support this design choice, demonstrating that it reduces both estimation bias and variance. The effectiveness of TEEN is validated through experimental results on MuJoCo and the DeepMind Control Suite. These results show that the proposed method successfully explores a wide range of states and surpasses the performance of existing DRL algorithms in continuous control, including those specifically designed for efficient exploration. Strengths: - It proposes a new method that improves the exploration in ensemble-based RL by increasing the diversity of the sub-policies in terms of their state-action visit distribution. - The implementation is straightforward and involves two steps on top of a DDPG-like method: - add a regularizer to the policy gradient - add a routine to select and update sub-policies and to compute the target Q value - It provides theoretical analysis to justify the choice of target Q value - Experimental results are presented to validate the efficacy of the method. Weaknesses: - There appears to be an inconsistency between Line 18 in Algorithm 1 and the text. 
Specifically, Line 18 suggests that all sub-policies are updated sequentially, while the "recurrent optimization" section indicates that only one sub-policy is updated in each gradient step. Could you clarify this discrepancy? - Theorem 1 demonstrates that the method reduces the expected Q value. However, the experimental results indicate that the algorithm tends to underestimate the Q value. It's unclear whether underestimation is preferable to overestimation. If possible, it would be beneficial to address this aspect in the analysis. - The main paper frequently refers to Appendix C, but it appears to be missing from the appendix section. - In Line 436 of the appendix, it seems that the last expression is missing a multiplier $p(s)$. Could you confirm? - There's a mismatch between Lemmas 2 and 3 in the appendix and the content presented in the main paper. - The proof for (v) in Lemma 3 does not show the variance inequality: $V[X_{1:N+1}] \geq V[X_{1:N}]$. - Formatting Issue: - The readability could be improved if Table 2 used the same row/column headings as those in Table 1. - Appendix line 483: Table index is missing. - Grammar: - Line 151: we can "increase" this equivalent optimization target. "maximize" seems more appropriate here. - Line 272: ".While purely ensemble multiple models may not certainly improve the performance." - Line 273: "it does not take effect", "evenly degrading". Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weakness section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: - The authors have discussed one limitation regarding the sensitivity to reward scaling in Sec. 7. 
- Could the authors comment on the computational overhead of the ensemble? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed feedback and insightful suggestions, please see the following for our response. > Q1: There appears to be an inconsistency between Line 18 in Algorithm 1 and the text. Specifically, Line 18 suggests that all sub-policies are updated sequentially, while the "recurrent optimization" section indicates that only one sub-policy is updated in each gradient step. Could you clarify this discrepancy? A1: Thanks for pointing this out. This line is incorrectly stated; we do update only one sub-policy in each gradient step with recurrent optimization. We will modify this part of the statement in the algorithm in the next version. > Q2: Theorem 1 demonstrates that the method reduces the expected Q value. However, the experimental results indicate that the algorithm tends to underestimate the Q value. It's unclear whether underestimation is preferable to overestimation. If possible, it would be beneficial to address this aspect in the analysis. A2: This is an open question. Underestimation may not always be preferable to overestimation for all tasks. However, in general, overestimation bias is usually considered more harmful than underestimation bias [1,2,3]. This is because overestimation bias can, in severe cases, directly destabilize the agent's learning [3], while underestimation bias can be mitigated by visiting the corresponding state-action pairs multiple times [1]. When dealing with underestimation bias, it is easy to introduce overestimation bias [1,2]. We follow the general way of dealing with overestimation bias. In this submission, our concern is how to make the value function estimate closer to the true value. > Q3: The main paper frequently refers to Appendix C, but it appears to be missing from the appendix section. A3: Sorry for that; please check the general rebuttal for the missing part. > Q4: In Line 436 of the appendix, it seems that the last expression is missing a multiplier $p(s)$. 
Could you confirm? A4: Sorry for this typo. We present the full proof of Lemma 2 here. Lemma 2 Proof. By definition, $\mathcal{I}(\rho;z) = \mathcal{H}(\rho) - \mathcal{H}(\rho|z) = \mathcal{H}(z) - \mathcal{H}(z|\rho)$. Since the latent variable $z$ is selected uniformly at random, $\mathcal{H}(z)$ is a constant depending on the number of values of $z$. Thus, we have $\mathcal{H}(z|\rho) = \mathcal{H}(z) + \mathcal{H}(\rho|z) - \mathcal{H}(\rho)$ $\propto \mathbb{E}_{(s,a,z)\sim \rho(s,a,z)}[-\log{\rho(s,a|z)}] - \mathbb{E}_{(s,a)\sim \rho(s,a)}[-\log \rho(s,a)]$ $=\mathbb{E}_{(s,a,z)\sim \rho(s,a,z)}[-\log \rho(s,a|z)] - \int -\rho(s,a)\log \rho(s,a)\, ds\, da$ $=\mathbb{E}_{(s,a,z)\sim \rho(s,a,z)}[-\log \rho(s,a|z)] - \int -\rho(s,a,z)\log \rho(s,a)\, ds\, da\, dz$ $=\mathbb{E}_{(s,a,z)\sim \rho(s,a,z)}[\log \rho(s,a)-\log \rho(s,a|z)],$ where $\rho(s,a)=\int \rho(s,a|z)p(z)\, dz = \frac{1}{n}\sum_{k=1}^{n} \rho(s,a|z_k)$. Then, $\mathcal{H}(z|\rho) = \mathbb{E}_{(s,a,z)\sim \rho(s,a,z)}[-\log \rho(z|s,a)]$ $= \mathcal{H}(z) + \mathcal{H}(\rho|z) - \mathcal{H}(\rho)$ $\propto \mathbb{E}_{(s,a,z) \sim \rho(s,a,z)}\left[\log \frac{1}{n}\sum_{k=1}^{n}\rho(s,a|z_k)-\log \rho(s,a|z)\right]$ $\propto - \mathcal{D}_{KL}\left[\rho(s,a|z)\,\Big\|\,\frac{1}{n}\sum_{k=1}^{n} \rho(s,a|z_k)\right].$ > Q5: There's a mismatch between Lemmas 2 and 3 in the appendix and the content presented in the main paper. A5: It is true that some of the symbols do not match the main paper; we will fix that. For Lemma 3, we moved some of the key formulas into the main paper, which led to the inconsistency. We hope this does not affect your understanding of this submission. > Q6: The proof for (v) in Lemma 3 does not show the variance inequality: $V[X_{1:{N+1}}] \geq V[X_{1:{N}}]$. A6: We apologize for that. For unknown reasons, our latest version was not uploaded correctly to the submission system. The main body of this submission does not rely on this result, and we will remove this part in a subsequent version.
We hope this will not affect your understanding of our article. > Q7: Formatting Issue and Grammar. A7: Thanks for your advice. We will fix these issues in the subsequent version. > L2: The computational overhead of the ensemble. A: On the issue of computational cost, we use a network with a multi-head structure [4], and with recurrent training our computational costs are greatly reduced. To demonstrate this, we measured the training time per epoch on HalfCheetah-v3 and compared against TD3, SAC, and the ensemble method SUNRISE. We list the results here. All experiments were run on a TITAN RTX 3060 in single-task mode. |Method | Epoch 1 (s/Epoch)| Epoch 2 (s/Epoch)|Epoch 3 (s/Epoch)|Epoch 4 (s/Epoch)|Epoch 5 (s/Epoch)| |---|---|---|---|---|---| |TEEN (N=5) | 19.37| 18.35| 19.32| 17.49| 16.81| |TEEN (N=10) |18.35| 16.67|19.14 |19.09 |18.25 | |TEEN (N=15) |19.32 |18.89 |18.96 |19.84 |19.31 | |TD3|5.24 |4.68 |5.4 |5.63 |4.88 | |SAC| 11.74|8.51 |8.43 |8.44 |9.23 | |SUNRISE (N=5)|106.06 |105.9 |106.24 |102.29 |109.94 | [1] Kuznetsov A, Shvechikov P, Grishin A, et al. Controlling overestimation bias with truncated mixture of continuous distributional quantile critics[C]//International Conference on Machine Learning. PMLR, 2020: 5556-5566. [2] Fujimoto, Scott, Herke Hoof, and David Meger. "Addressing function approximation error in actor-critic methods." International conference on machine learning. PMLR, 2018. [3] Van Hasselt H, Guez A, Silver D. Deep reinforcement learning with double q-learning[C]//Proceedings of the AAAI conference on artificial intelligence. 2016, 30(1). [4] Osband I, Blundell C, Pritzel A, et al. Deep exploration via bootstrapped DQN[J]. Advances in neural information processing systems, 2016, 29. --- Rebuttal Comment 1.1: Comment: I appreciate the thorough response from the authors. My concerns have been adequately addressed and I have increased the score accordingly.
--- Reply to Comment 1.1.1: Comment: We are glad that our rebuttal could address your concerns. We appreciate your review and raising your score.
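The multi-head structure cited in the computational-cost answer above (following the bootstrapped-DQN idea of [4]) can be sketched as follows. This is a hypothetical minimal NumPy illustration of a shared trunk with N lightweight Q-heads, not the authors' actual network; sharing the trunk is what keeps the per-epoch cost close to a single network even as N grows.

```python
import numpy as np

class MultiHeadQNet:
    """Shared trunk + N Q-value heads: the ensemble shares one feature
    extractor, so adding heads adds little compute compared to N full nets."""

    def __init__(self, obs_act_dim, hidden, n_heads, seed=0):
        rng = np.random.default_rng(seed)
        self.W_trunk = rng.normal(0, 0.1, (obs_act_dim, hidden))
        # One small linear head per ensemble member.
        self.W_heads = rng.normal(0, 0.1, (n_heads, hidden))

    def q_values(self, s, a):
        x = np.concatenate([s, a])
        h = np.tanh(x @ self.W_trunk)   # shared trunk features, computed once
        return self.W_heads @ h         # shape (n_heads,): one Q per head

net = MultiHeadQNet(obs_act_dim=20, hidden=64, n_heads=10)
q = net.q_values(np.ones(17), np.ones(3))  # hypothetical state/action dims
print(q.shape)  # (10,)
```

Each forward pass runs the trunk once and all heads cheaply, which is consistent with the roughly flat timings reported for N=5, 10, 15 in the table above.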
Summary: This paper presents a new ensemble RL algorithm. The main motivation of this work is to increase the variation between sub-policies, which is measured by the distance between the corresponding state-action visitation distributions. It formulates the loss function by connecting the visitation distribution distance to mutual information theory and introduces a discriminator function to measure the policy variation. It also proposes a few tricks to improve the exploration effectiveness of the proposed formulation. The method is evaluated on a few Mujoco and Deepmind control suite tasks and compared against several off-policy RL methods and one ensemble RL method. Strengths: 1. The proposed method is motivated by an intuitive idea that increasing the trajectory variation of the sub-policies may improve the exploration and performance of the ensemble policy. 2. The results show the proposed method consistently outperforms baselines across a wide span of tasks in different benchmarks. 3. The experimental analysis is helpful for understanding the secret sauce behind the proposed method and the effect of different hyper-parameters. Weaknesses: 1. The exposition of the paper needs improvement (more in the next section) 2. The evaluated control problems are relatively simple and the approach needs to be evaluated on more complex tasks. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Is the statement for the overestimation problem in DDPG properly demonstrated? Eq(4) (line 113) seems to be the formulation for discrete action space while this paper and DDPG focus on continuous action space. Please refer to [[9]](https://arxiv.org/pdf/1802.09477.pdf) for details. 2. It is unclear what the latent variable $z_k$ is at the beginning when it is first presented. It could be clearer if it just said $z_k$ is a categorical index or one-hot encoding of the sub-policy selection. 3.
Line 146 mentions using KNN to measure Eq (8) but does the proposed approach indeed use it? If so, where is it applied? 4. Eq (9) is an important step to connect state-action visitation distribution distance to mutual information. Any references or derivation for Eq (9)? 5. The Recurrent Optimization trick (Line 169) is adopted to prevent sub-policies from exploration degradation. If I understand correctly, it corresponds to Lines 18-19 in Algorithm 1 where only the sampled policy $\pi_k$ is updated. However, the gradient of the loss function (Eq (14)) should be independent among sub-policies. Could the authors explain how the sub-policy $\pi_{\phi_k}$ would change if we updated all sub-policies instead of just $\pi_{\phi_k}$ in Algorithm 1 (Lines 18-19)? 6. **Algorithm 1** helps understand the method but there is no reference to Algorithm 1 throughout the paper. 7. How is the discriminator function $q$ trained? I could not find any details about it. 8. How does the presented algorithm perform on more complex control tasks? The currently evaluated tasks are relatively simple and do not require a sophisticated amount of exploration by the policy. It would be more convincing if the approach could be benchmarked on Humanoid and dexterous hand manipulation tasks. 9. The appendix is incomplete. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations have been discussed above (i.e. the performance of the presented approach on complex tasks is unknown). Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to extend our sincere gratitude for your insightful feedback on our submission. Your valuable comments have greatly contributed to enhancing the quality of our work. In response to your suggestions, we have carefully revised the manuscript to address the points you raised. > Q1: Is the statement for the overestimation problem in DDPG properly demonstrated? Eq(4) (line 113) seems to be the formulation for discrete action space while this paper and DDPG focus on continuous action space. Please refer to [9] for details. A1: Overestimation bias occurs for similar reasons in both discrete and continuous environments, although the types of action spaces differ. We use a more explicit way of expressing overestimation bias in continuous action spaces, and this expression is also recognized and used elsewhere (e.g., TQC [2]). Please check [1] and [2] for more information. We will give a more appropriate elaboration in a subsequent version. > Q2: It is unclear what the latent variable $z_k$ is at the beginning when it is first presented. Could be more clear if it just says $z_k$ is a categorical index or one-hot encoding of the sub-policy selection. A2: In our code, $z_k$ is a categorical index selecting the k-th sub-policy (e.g., $z=1$ selects $\pi_1$ from $[\pi_1, \pi_2, \pi_3]$). The sub-policy can also be selected via a one-hot encoding given as input to the network. We do not restrict how the sub-policy is obtained from $z_k$, because there are many ways to construct this mapping, which is itself a question worth exploring. > Q3: Line 146 mentions using KNN to measure Eq (8) but does the proposed approach indeed use it? If so, where is it applied? A3: We did not use this approach, as we explain in lines 144-148. Estimating probability densities using KNN is computationally intensive and does not apply to continuous states. In this paper, we avoided a direct solution for this difficult-to-estimate distribution by transforming Eq. (8) using mutual information theory.
The equations obtained in this way are precise and simpler to implement. > Q4: Any references or derivation for Eq (9)? A4: The proof can be found in Appendix A, where $\mathcal{H}(\rho)=\mathbb{E}_{(s,a)\sim \rho}[-\log(\rho(s,a))] =\mathbb{E}_{(s,a,z)\sim \rho}\left[\log \frac{\rho(s,a|z)}{\rho(s,a)}-\log \rho(s,a|z)\right] = \mathbb{E}_{z}[\mathcal{D}_{KL}(\rho(s,a|z)\|\rho(s,a))]+ \mathcal{H}(\rho|z)$. > Q5: The Recurrent Optimization trick (Line 169) is adopted to prevent sub-policy from exploration degradation. If I understand correctly, it corresponds to Lines 18-19 in Algorithm 1 where only the sampled policy $\pi_k$ is updated. However, the gradient of the loss function (Eq (14)) should be independent among sub-policies. Could the authors explain how will the sub-policy $\pi_{\phi_{k}}$ change if we update all sub-policies instead of just $\pi_{\phi_{k}}$ in Algorithm 1 (Line 18-19). A5: If we update all sub-policies instead of just $\pi_{\phi_{k}}$ in Algorithm 1, we need a larger, gradually decaying coefficient $\alpha$, and the results of this approach can also be strong. However, since the decay schedule of $\alpha$ is difficult to determine, we use the Recurrent Optimization trick to avoid this problem, which allows us to use a fixed coefficient $\alpha$ and obtain good behavior on most tasks. > Q6: Algorithm 1 helps understand the method while there is no reference to Algorithm 1 throughout the paper. A6: Thanks for your advice; we will add this reference at the appropriate place in the paper. > Q7: How is the discriminator function q trained? I could not find any details about it. A7: We train the discriminator in parallel with the policies; the update equation is given in Eq. (12), and we will add this part to Alg. 1 in the final version. Thanks. > Q8: How does the presented algorithm perform on more complex control tasks?
The currently evaluated tasks are relatively simple and do not require a sophisticated amount of exploration by the policy. It would be more convincing if the approach could be benchmarked on Humanoid and dexterous hand manipulation tasks. A8: In this paper, we evaluate our algorithm on different exploration challenges, from the MuJoCo control suite to the DeepMind Control suite. The experimental results show that our algorithm improves both performance and sample efficiency, which suggests that these tasks do need more exploration and still benefit from a sophisticated amount of exploration. But we would love to challenge our algorithm on more complex benchmarks. Thus, we conducted experiments on the 376-dimensional Humanoid-v3, which is exceptionally difficult to solve with off-policy algorithms. We ran the experiment for 5 million (5e6) time steps with 5 seeds. The results show that our algorithm consistently exceeds the baseline algorithms. We report the average returns with mean and variance of evaluation roll-outs here, and the learning curves can be found in the global rebuttal. | 5M Step | TEEN | TD3 | SAC | | --- | :---: | :---: | :---: | | Humanoid-v3 | $8259.49\pm429.32$ | $6957.91\pm364.05$ |$7641.03\pm 261.63$| > Q9: The appendix is incomplete. A9: Sorry for that; we provide the missing parts of the appendix in the general rebuttal. [1] Thrun, S. and Schwartz, A. Issues in using function approximation for reinforcement learning. In Proceedings of the 1993 Connectionist Models Summer School, Hillsdale, NJ. Lawrence Erlbaum, 1993. [2] Kuznetsov A, Shvechikov P, Grishin A, et al. Controlling overestimation bias with truncated mixture of continuous distributional quantile critics[C]//International Conference on Machine Learning. PMLR, 2020: 5556-5566. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and new experiments with manipulation problems. I have some follow-up questions listed below: 1.
The loss function in Algorithm 1 Line 19 is inconsistent with Eq. 14 in terms of the sign of the discriminator term. Is there anything I missed? 2. The answer A5 says we need to decay $\alpha$ if we would like to update all the policies. Why? And how does it connect to the motivations for the Recurrent Optimization (lines 166-175)? Since the response addresses most of my concerns, I would like to increase my rating if the questions above are properly resolved. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the feedback provided and are gratified to note that our rebuttal addressed a significant portion of your concerns. For the remaining questions, we clarify as follows. > Q1: The loss function in Algorithm 1 Line 19 is inconsistent with Eq. 14 in terms of the sign of the discriminator term. A1: We acknowledge your astute observation. Indeed, an oversight occurred in Algorithm 1, Line 19, where we inadvertently omitted a minus sign. This will be rectified in the forthcoming version. To elucidate, the correct formulation for Line 19 of Algorithm 1 should be $$\nabla_{\phi_k}\frac{1}{|\mathcal{B}|}\sum_{(s,a,r,s')\in \mathcal{B}}(-\alpha\log q_{\zeta}(z_k|s,a)-Q_{\theta_k}(s,a)), \quad a=\pi_{\phi_k}(s),$$ which is subsequently followed by a clip operation, as described in Equation 15. > Q2: The answer A5 says we need to decay $\alpha$ if we would like to update all the policies. Why? And how does it connect to the motivations for the Recurrent Optimization (lines 166-175)? A2: If we update all the policies simultaneously, we cannot perform recurrent optimization, so we need another approach to balance exploration and exploitation. As illustrated in lines 293-295 and the previous A5 response, a large $\alpha$ can force all the sub-policies to perform diverse exploration. Therefore, one natural method is to decay $\alpha$.
To encourage exploration, one needs a large $\alpha$ at the start of training to maintain the diversity of the sub-policies. However, as training progresses, it becomes necessary to strike a balance between exploration and exploitation, which entails transitioning $\alpha$ from a high to a more moderate value. Fine-tuning this schedule can be challenging. With the recurrent optimization trick, however, we do not need to balance exploration and exploitation explicitly: recurrent optimization randomly selects one sub-policy at a time for exploration, accentuating its propensity for diverse exploration, while the other sub-policies concentrate on exploitation. Thus, the recurrent optimization trick naturally balances exploration and exploitation, and a constant $\alpha$ is wholly sufficient in our implementation. If you have further questions, we would be happy to discuss them with you.
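The recurrent optimization schedule described in this exchange can be sketched in plain Python. The 5k-step uniform resampling of the exploring sub-policy follows the rebuttal; `train_with_recurrent_optimization` and its arguments are hypothetical placeholders for the actual TD3-style training loop, and the gradient updates are elided into comments.

```python
import random

def train_with_recurrent_optimization(policies, total_steps,
                                      resample_every=5_000, alpha=0.2):
    """Cyclically pick one sub-policy to update with the diversity bonus.

    Only the sampled sub-policy pi_k receives a gradient step in each
    iteration; the fixed alpha stands in for the constant coefficient
    the authors say suffices under this scheme.
    """
    random.seed(0)
    explore_counts = [0] * len(policies)
    k = random.randrange(len(policies))  # currently selected sub-policy index
    for step in range(total_steps):
        if step % resample_every == 0:
            k = random.randrange(len(policies))  # z ~ Uniform, every 5k steps
        explore_counts[k] += 1
        # In the real algorithm: update pi_k with the gradient of
        #   -alpha * log q(z_k | s, a) - Q_k(s, a)
        # (the corrected Line 19 expression), followed by the clip of Eq. 15.
    return explore_counts

counts = train_with_recurrent_optimization(policies=list(range(5)),
                                           total_steps=50_000)
print(sum(counts))  # 50000
```

Because exactly one sub-policy is updated per step, the total gradient-step budget is independent of the ensemble size, which matches the low per-epoch overhead reported elsewhere in the rebuttal.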
Summary: This paper presents an algorithm called trajectories-aware ensemble exploration (TEEN) for sample efficient ensemble reinforcement learning. The authors point out that the existing ensemble methods do not effectively address the required diversity in exploration, which TEEN is designed to tackle better. It achieves this by encouraging diverse behaviors through exploration of the state-action visit distribution measure space. Theoretical results are also presented to show how the design principles of TEEN could encourage diversity in exploration, while the experiment results demonstrate its empirical success in popular benchmark tasks. Strengths: This manuscript presents an interesting thesis that sample efficient ensemble reinforcement learning methods require diverse exploration in sub-policies. The authors discuss this topic in detail and systematically approach the proposed solution. I view the finding of the diversity requirement of sub-policies coupled with the provided theoretical analysis and the limited yet promising experimental results as the main strength of the work. I, therefore, believe if properly executed, this can be a valuable contribution to the community. Weaknesses: One of the main limitations of the work is its limited evaluations. The authors evaluate the proposed TEEN method to answer three main research questions. But I find their evaluations lacking the experimental rigor that is needed to fully appreciate the proposed method. First, in RQ1 in evaluating the performance against baselines, the proposed method shows visible benefits in Mujoco tasks, which is promising. But my concerns are that all four tasks evaluated may contain similar exploration challenges. Given exploration is one of the key focuses of the work, I would like to see the performance benefits across different tasks that pose different exploration challenges. 
Perhaps this is what the authors are attempting to do with DeepMind control experiments, but despite the mention, I did not find such experiments in the paper or in the appendix (Appendix C is empty). Regarding RQ2 and RQ3, my concerns are with the fact that it is only based on a single task (Ant-v3). For both questions, it is not significant enough to make claims based on a single task result. Furthermore, for RQ3, while I never found the results on the DeepMind control suite, the authors mention the best $\alpha$ is 0.02. This is a very low $\alpha$ and almost amounts to having no trajectory-aware exploration. This further deepens my concerns about how generalizable the proposed method is across different tasks. Clarification is appreciated. Overall for experiments, the authors use only TD3 as the learning algorithm. I would like to see if the observed performance gains are dependent on the choice of the learning algorithm. It is also fine if there is an observable reliance, but in that case, I would like to see the contributions rephrased accordingly. I presume training the proposed model is challenging, specifically when it comes to finding the best ensemble size N and number of target values M (due to the trial-and-error approach in finding them, as in RQ2). It also appears that this training would require considerable computation. I would like to see a discussion of these limitations (and other limitations) of the method. If the stated limitations are addressed, particularly those related to the evaluation process, the proposed work can be of benefit to the community. Consequently, I would be inclined to reconsider my score if the authors can effectively rectify these issues. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Shouldn't equation 11 be $\log(N) - \mathbb{E}_{s,a,z}[-\log(\rho(z|s,a))]$? 2. Also, shouldn't the LHS and RHS of equation 11 be approximately equal and not equal? 3. Define $\mu$ and $\sigma$ in theorem 1. 4.
While I did not check the proofs carefully, it appears the authors make simplifying assumptions in the proof of Theorem 1 that are not stated in the theorem (e.g., a zero-mean distribution). Please state them properly in the theorem. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please discuss the limitations of the work, as pointed out earlier. This will help readers better appreciate the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to express our heartfelt appreciation for the invaluable feedback you provided on our submission. Your insightful comments and suggestions have been instrumental in refining our research. We clarify your concerns as follows. > W1: First, in RQ1 in evaluating the performance against baselines, the proposed method shows visible benefits in Mujoco tasks, which is promising. But my concerns are that all four tasks evaluated may contain similar exploration challenges. Given exploration is one of the key focuses of the work, I would like to see the performance benefits across different tasks that pose different exploration challenges. Perhaps this is what the authors are attempting to do with DeepMind control experiments, but despite the mention, I did not find such experiments in the paper or in the appendix (Appendix C is empty). A1: When evaluating our algorithm on MuJoCo tasks, we took that concern into account as well, which is why we attempted the DeepMind control experiments, as you infer. In Table 2, we report the average returns with mean and variance of evaluation roll-outs across all algorithms on DMControl tasks, covering 5 additional tasks. The results show that our algorithm far exceeds the baselines within a very small number of steps on most tasks. The learning curves for these tasks were moved to Appendix C. We apologize for the missing appendix; please check the general rebuttal for the missing part. > W2: Regarding RQ2 and RQ3, my concerns are with the fact that it is only based on a single task (Ant-v3). For both questions, it is not significant enough to make claims based on a single task result. A2: Regarding RQ2 and RQ3, the experiments for the other environments were moved to the appendix, as mentioned in the main paper. We are sorry for the incomplete appendix; please check the general rebuttal for the missing part. The results are also consistent with our theoretical analysis.
> W3: For the DeepMind control suite, the authors mention the best $\alpha$ is 0.02. ...... Clarification is appreciated. A3: We use a small $\alpha$ due to the small reward scale of the DeepMind control suite, whereas in MuJoCo the reward scale is 5 to 10 times that of the DeepMind control suite. In the MuJoCo control suite, the accumulated reward exceeds 5k in Ant, while in the DeepMind control suite the maximum accumulated reward for all tasks is 1000. That is why we use 0.2 for MuJoCo control tasks but 0.02 for the DeepMind control suite with its small reward scale. The ablation results also show that with a higher $\alpha$ (0.5), our algorithm shows better performance on HalfCheetah-v3, whose reward scale is twice that of Ant-v3. > W4: Overall for experiments, authors use only TD3 as the learning algorithm. I would like to see if the observed performance gains are dependent on the choice of the learning algorithm. It is also fine if there is an observable reliance, but in that case, I would like to see the contributions rephrased accordingly. A4: Yes, the gains are dependent on the choice of the learning algorithm. As we pointed out in Lemma 1 (Ensemble Sample Diversity Decomposition), the ensemble diversity $\mathcal{H}(\rho)$ can be decomposed into two parts: $\mathcal{H}(\rho)=\mathbb{E}_{z}[\mathcal{D}_{KL}(\rho(s,a|z)\|\rho(s,a))] + \mathcal{H}(\rho|z)$. The first part is the state-action visitation distribution discrepancy between the sub-policies and the ensemble policy, induced by the KL-divergence measure, and is our optimization target. The second part reflects the diversity of state-action pairs visited by the sub-policies, which depends on which algorithm is used for the sub-policy. Thanks for your valuable suggestion; we will include this in our contributions.
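The entropy decomposition invoked in A4 (and derived for Eq. (9) in the earlier rebuttal) can be checked numerically on a toy discrete space. The sketch below is a hypothetical example with uniform $p(z)$, not the paper's setup; it verifies that $\mathcal{H}(\rho)=\mathbb{E}_{z}[\mathcal{D}_{KL}(\rho(\cdot|z)\|\rho)] + \mathcal{H}(\rho|z)$ holds to machine precision.

```python
import numpy as np

# Toy check of H(rho) = E_z[ D_KL(rho(.|z) || rho) ] + H(rho|z), uniform p(z).
rng = np.random.default_rng(0)
n_z, n_sa = 3, 6                                # 3 sub-policies, 6 (s,a) pairs
cond = rng.dirichlet(np.ones(n_sa), size=n_z)   # rho(s,a|z_k), rows sum to 1
marg = cond.mean(axis=0)                        # rho(s,a) = (1/N) sum_k rho(s,a|z_k)

H = -(marg * np.log(marg)).sum()                      # ensemble entropy H(rho)
H_cond = -(cond * np.log(cond)).sum(axis=1).mean()    # conditional entropy H(rho|z)
kl = (cond * np.log(cond / marg)).sum(axis=1).mean()  # E_z[KL(rho(.|z) || rho)]

print(abs(H - (kl + H_cond)) < 1e-10)  # True: the decomposition holds exactly
```

This also makes A4's point concrete: the KL term (the optimization target) and the conditional entropy term (set by the base learner) are separate, additive contributions to the ensemble diversity.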
> W5: I presume training the proposed model is challenging, specifically when it comes to finding the best ensemble size N and number of target values M (due to the trial and error approach in finding them, as in RQ2). It also appears that this training would require considerable computations. I would like to see a discussion of these limitations (and other limitations) of the method. A5: From the experimental results (more complete results are in the PDF of the general rebuttal), increasing the number of target values M leads to underestimation, and increasing the ensemble size N can mitigate this. Thus, if we want a more accurate value function, a smaller M (no smaller than 2) and a larger N are recommended. However, a larger ensemble size can lead to considerable computation, and in most cases N=10 is enough to make the value estimate more accurate. To alleviate the computational pressure, we adopt networks with a multi-head structure [1], and we compare our computational cost with TD3, SAC, and the ensemble method SUNRISE. The results can be found in the individual rebuttal for R4. For N=10, our method takes about twice as long as SAC and one fifth of what SUNRISE takes. > Q1: Shouldn't eq. 11 be $\log (N)-\mathbb{E}_{s,a,z}[-\log(\rho(z|s,a))]$? A1: Sorry for this typo. > Q2: Also, shouldn't the LHS and RHS of equation 11 be approximately equal and not equal? A2: We are trying to optimize the equation on the left, but the direct solution is not a good estimator of $p(s|z)$, so we transform it into a form that increases the lower bound of the equation while decreasing the gap between the lower bound and the true value; see Eq. (12). > Q3: Define $\mu$ and $\sigma$ in Theorem 1. A3: Thanks for your advice. We consider $X_i\sim\mathcal{N}(\mu, \sigma)$, and we will add this definition to Theorem 1 in the new version.
> Q4: While I did not check the proofs carefully, it appears the authors make simplification assumptions in the proof of theorem 1 that is not stated in the theorem (ex: zero-mean distribution). Please state them properly in the theorem. A4: Thanks for your advice, we assume that $X_{i}\sim \mathcal{N}(\mu, \sigma)$, and we will add this assumption in theorem 1 in the new version. --- Rebuttal Comment 1.1: Title: Thanks for the detailed response Comment: The rebuttal is responsive on all major points, and the new experiments further confirm the findings (performance and ablation results) and highlight the limitations (computational cost). Therefore, I will raise my score accordingly. --- Reply to Comment 1.1.1: Comment: We appreciate your review and raising your score. Your thoughtful reviews strengthen our submissions. We are happy to continue discussions with you if there are any further questions.
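The interplay of the number of target values M and the ensemble size N discussed in this thread can be illustrated with a small sketch. This is one plausible reading of the target computation (a min over M group means of N ensemble Q-estimates); `min_of_means_target` is a hypothetical name, and the exact grouping in the paper may differ.

```python
import numpy as np

def min_of_means_target(q_table):
    """q_table has shape (M, N): M groups of N ensemble Q-estimates for one
    next state-action pair. A larger M makes the min more pessimistic
    (underestimation); a larger N tightens each group mean, mitigating it."""
    return q_table.mean(axis=1).min()

# Toy check: with two groups of estimates, the target is the smaller group mean.
q = np.array([[1.0, 3.0],
              [2.0, 4.0]])       # group means: 2.0 and 3.0
print(min_of_means_target(q))    # 2.0
```

Under this reading, adding groups (larger M) can only lower the target, while averaging more estimates per group (larger N) reduces the noise that the min operator turns into downward bias, consistent with the M/N trade-off described in A5.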
Summary: The paper presents an efficient ensemble learning strategy termed Trajectories-awarE Ensemble exploratioN (TEEN), which promotes exploration by diversifying the state-action visitation distribution of multiple sub-policies. The proposed method introduces a discrepancy measure that tries to maximize the difference in entropy of estimating the state-action visitation with and without the information of the policy (represented by the latent variable) and vice versa. Such a discrepancy is approximated by learning a discriminator that tries to differentiate between state-action pairs visited by different policies. And the log-likelihood of the discriminator is used as a regularizer for policy improvement along with maximizing the expected return of the policy. The authors employ additional strategies of updating a single policy at a time and clipping the discriminator log probabilities to ensure proper exploration. Finally, the estimation bias is controlled by calculating the target value as the minimum of the mean of the ensemble Q-values for each sub-policy. The choices are theoretically motivated and the algorithm is analyzed in simulation on Mujoco Control and DM Control Suite benchmarks. Strengths: 1. TEEN provides new insights into using the measure of the state-action visitation distribution to promote exploration along with maximizing returns in policy optimization. 2. Each design choice made is properly justified: namely updating one sub-policy at a time and clipping the regularizer gradients. Weaknesses: 1. There might be states not captured by the state-action visitation of one particular sub-policy (say $\pi_a$). In that case, the contribution of $\rho^{\pi_a}(s,a)$ to $\rho$(s,a) in eq (6) can be highly noisy and misinformative (garbage probability). Giving equal weightage to state-action visitation of every policy while calculating for the ensemble might be misleading. 2. 
It seems that the latent variable plays a key role in defining the algorithm and the discriminator. However, there is no discussion about the choice of $z_k$ and the training of the discriminator in the paper or in Alg 1. How do you choose $z_k$ and how does the training of the discriminator take place? If done in parallel with the policies, the framework follows a min-max update rule, shown to be unstable in prior works like GAIL. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How do you decide the actions during inference? 2. Can you clarify more about weakness pt. 2? 3. Section C is missing in the Supplementary. 4. I am unsure, but it seems that TEEN uses very many random exploration steps, as mentioned in the supplementary. Do you use the same for all the other algorithms? 5. If the regularization is already clipped, why is there a coefficient ($\alpha$) required? Can a comparison be shown between these two cases? Also, $\alpha=0$ means that there is only a change in Q-value computations as compared to TD3, hence there must be a result for all the environments with $\alpha=0$ and $\alpha=0.2$ to show a more convincing contribution of the regularization term. Currently, it is only shown for one env (Ant) and it is not particularly clear how much the regularization term is contributing to the exploration in addition to the ensemble of sub-policies. 6. Are the timesteps shown in the results timesteps for an individual sub-policy or for the whole ensemble, i.e. (1M/10 = 100k steps for a single sub-policy)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have mentioned the limitation as the tuning of $\alpha$.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed feedback and insightful suggestions; please see the following for our response. > W1: There might be states not captured by the state-action visitation of one particular sub-policy (say $\pi_{a}$). In that case, the contribution of $\rho^{\pi_{a}}(s,a)$ to $\rho(s,a)$ in eq (6) can be highly noisy and misinformative (garbage probability). Giving equal weightage to state-action visitation of every policy while calculating for the ensemble might be misleading. A1: If a state is not captured by the state-action visitation of one particular sub-policy, this unexplored state carries an equally large weight for every policy, which further encourages the sub-policies to explore the unexplored state rather than misleading them; meanwhile, the reward function restricts the policies from exploring aimlessly. Once the unexplored state is captured by the policy, there is no further misleading effect as training proceeds. > W2: It seems that the latent variable plays a key role in defining the algorithm and the discriminator. However, there is no discussion about the choice of $z_k$ and training of the discriminator in the paper or in Alg 1. How do you choose $z_k$ and how does the training of the discriminator take place? If done in parallel with the policies, the framework follows a min-max update rule, shown to be unstable in prior works like GAIL. A2: We sample $z_k$ from a uniform distribution every 5k steps during recurrent training, as stated in line 152: *"We fix p(z) to be uniform in this approach by randomly selecting one of the sub-policies to explore. We have $H(z) = -\frac{1}{N} \sum^{N}_{k=1} \log p(z_k) \approx \log N$, which is a constant."* Here, we fix $p(z)$ to be a prior distribution to guarantee that $H(z)$ is a constant. Otherwise, the regularization term would take a more complicated form if the distribution of z became a posterior distribution shaped by feedback.
A uniform distribution means that the sub-policies are homogeneous, contributing equally to the ensemble exploration. As you note, we train the discriminator in parallel with the policies; the update equation is given in Eq. (12), and we will add this part to Alg. 1 in the final version. The update of our algorithm is not a pure min-max update rule: TEEN updates the policy from both the value function and the discriminator that you mentioned. The value function helps stabilize the policy gradient and the exploration process, further stabilizing our discriminator. > Q1: How to decide the actions while inference? A1: Check the response for W2. > Q2: Can you clarify more about weakness pt. 2? A2: Check the response for W2. > Q3: Section C is missing in the Supplementary. A3: Sorry about that; we uploaded parts of Supplementary C in the general rebuttal. > Q4: I am unsure, but it seems that TEEN uses very high random exploration steps as mentioned in the supplementary. Do you use the same for all the other algorithms? A4: The random starting exploration time steps are set to $2.5\times10^{4}$ in this submission. This is a standard hyper-parameter [1], kept consistent across all the tested algorithms in our submission. > Q5: If the regularization is already clipped, why is there a coefficient (α) required? Can a comparison be shown between these two cases? Also, α=0 means that there is only a change in Q-value computations as compared to TD3, hence there must be a result for all the environments with $\alpha=0$ and $\alpha=0.2$ to show a more convincing contribution of the regularization term. Currently, it is only shown for one env (Ant) and it is particularly not clear how much the regularization term is contributing to the exploration in addition to the ensemble of sub-policies. A5: The reason we clip the regularization term is that when $p$ in $\log p$ is small (e.g. 0.001), it has a large gradient that causes instability in the optimization process, whereas the coefficient $\alpha$ is designed to balance exploration and exploitation: a higher $\alpha$ means the agent is encouraged to explore the environment more. See the PDF released in the general rebuttal for the evaluation with α=0. > Q6: The timesteps shown in the results are timesteps for individual sub-policy or the whole ensemble i.e. (1M/10 = 100k steps for single sub-policy)? A6: The timesteps shown are for the whole ensemble, which is 1M/10 = 100k steps per sub-policy when we use 10 sub-policies. [1] https://github.com/sfujim/TD3/blob/master/main.py --- Rebuttal Comment 1.1: Title: Further discussion would be welcome. Comment: Dear Reviewer C6dk, Thank you for your detailed review of our research. Your constructive comments are very helpful in improving our submission. We truly value your feedback and have addressed your concerns in detail in our response. Have all your questions been answered? Since we have **less than** three days left for discussion, please let us know if there is anything else you would like to discuss. If you are satisfied with our responses, we humbly hope you might consider giving the submission a **higher score** based on our response and your positive comments on Soundness, Presentation, and Contribution. Best wishes, Authors.
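The entropy argument quoted in the rebuttal above (a uniform $p(z)$ makes $H(z)$ a constant, so it adds no gradient signal) can be checked in a few lines of Python. This is an illustrative sketch, not the paper's code; `N` and the variable names are assumptions:

```python
import math

N = 10  # number of sub-policies (illustrative choice)
# Uniform prior over the latent skill variable: p(z_k) = 1/N for each k.
p = [1.0 / N] * N

# The estimator quoted in the rebuttal: H(z) = -(1/N) * sum_k log p(z_k)
H = -sum(math.log(pk) for pk in p) / N

# With a uniform prior this reduces exactly to log N, a constant,
# so the entropy term contributes nothing to the policy gradient.
assert abs(H - math.log(N)) < 1e-12
```

If $p(z)$ were instead a posterior updated from feedback, $H(z)$ would vary with training and the regularizer would no longer be a constant, which is the complication the authors mention.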
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's thorough reading and detailed comments about our submission. Your constructive reviews strengthen our draft, and some major concerns might be due to potential misinterpretations of the paper, which we would like to clarify in the responses. We provide the updated Appendix C here. **1. Performance evaluation on the DeepMind Control Suite.** In Figure 1, we present the performance curves on DMControl tasks. Our proposed algorithm TEEN also demonstrates superior performance on these tasks, which is consistent with our main paper. TEEN outperforms the other baselines by a large margin on the fish-swim task. **2. Ablation study.** To understand the impact of hyper-parameters, we conducted experiments in various environments to show the effects of the ensemble size $N$, the number of value functions $M$, and the weight $\alpha$. The results of these experiments are in line with our expectations and can be found in Figure 2. **3. Performance on the challenging Humanoid task.** To strengthen our evaluation, we run our algorithm in a challenging environment, Humanoid-v3 in the MuJoCo suite, shown in Figure 3. The state dimension of Humanoid is 376 [^1], which makes it exceptionally difficult to solve. Our algorithm TEEN shows good performance on this task. We compared TEEN with the sample-efficient algorithms TD3 and SAC. We follow the standard evaluation settings, carrying out experiments over five million (5e6) steps and running all baselines with 5 random seeds. [^1] https://www.gymlibrary.dev/environments/mujoco/humanoid/ Pdf: /pdf/8da60df481b6ad459e042a257915c64ec09c8e24.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Effective Bayesian Heteroscedastic Regression with Deep Neural Networks
Accept (poster)
Summary: The authors consider the task of properly modeling heteroscedastic observation noise with Bayesian Neural Networks (BNNs). They identify several shortcomings in prior work concerning variance modeling and propose to rely on parameterization via natural gradients instead. Together with a Laplace approximation to model epistemic uncertainty and a closed-form posterior predictive likelihood approximation, the proposed approach is evaluated on three sets of experiments. Strengths: - The paper considers a well-defined task, identifies prior weaknesses, and offers a principled solution. - The paper is well written and can be understood easily by a reader. - The experimental setups cover a wide range of domains. Weaknesses: - The individual steps, i.e., switching to natural parameters, reliance on Laplace approximations, and approximate closed-form marginalizations for the posterior predictive, are mostly minor adaptations from prior work. - Section 5.2 reads more like an afterthought that is added to the paper to have an additional experiment. - The setup description is mostly left to the prior work which introduced the experiment. - The evaluation of the results is similarly vague. E.g., Heteroscedastic VI (PP) has a huge variance in Figure 1, Exp 1, whereas it suffers no such problems in the individual replications. The Laplace approach of the proposed method not only performs a little bit worse than the others in setting two (survival-screen-A375), but struggles a lot, whereas the MAP approach does not suffer from this problem. ## Minor - None of the tables follow the NeurIPS style guide. 
Captions should be placed above a table, and tables should not contain vertical lines (Fig. 2) ## Typos - Fig 2 right and Table 1/2 switch the standard-error notation (from $\pm$ to brackets) - Between Table 1 and 2 the formatting of the method column switches from right-aligned to left-aligned Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I am not sure how to parse the sentence in l27/28. Hasn't epistemic uncertainty modeling been the main focus for now, with aleatoric uncertainty modeling being of secondary interest? Can the authors provide some intuition on what they mean here? (I might just be missing the obvious...) - Figure 2: What does the corresponding test NLL look like for the proposed approach on this task? - On the VI setups: - Can the authors speculate on the performance of VI in the naval and plant experiments? In both data sets the performance is not just a drastic outlier compared to the other models, but also with respect to prior results reported in these setups. See, e.g., the results reported in the original paper introducing this set of experiments (Hernández-Lobato & Adams, 2015), as well as numerous other BNN and GP papers on this setup that report variational inference results as part of their baselines (e.g., Bui et al., 2015; Wu et al., 2019; Haussmann et al., 2020,...). - A parallel question applies to VI on MNIST, where it performs exceptionally badly, whereas I would expect performance at least similar to the homoscedastic setup. - Regarding the runtime: VI should in practice be only slightly slower than the other methods. 
- Have the authors experimented with a simpler approach than Blundell et al.'s mixture prior, relying instead on a simple mean-field normal prior with local-reparameterization (Kingma et al., 2015) to stabilize the gradients? Then a single weight sample per forward pass should be sufficient to train the models without stability issues on both UCI and the image data sets, increasing the runtime cost to only twice that of a deterministic net. - In UCI and CRISPR, the proposed MAP approach performed almost always similarly to/better than the Laplace approach. This behavior drastically changes for the synthetic image data experiments, especially FashionMNIST. Can the authors speculate/provide a discussion on why the additional structure provided produces this huge difference? _____ Bui et al., 2016: Deep Gaussian Processes for Regression using Approximate Expectation Propagation Haussmann et al., 2020: Sampling-Free Variational Inference of Bayesian Neural Networks by Variance Backpropagation Kingma et al., 2015: Variational Dropout and the Local Reparameterization Trick Wu et al., 2019: Deterministic Variational Inference for Robust Bayesian Neural Networks Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations of the model are discussed, but the societal impact is not. Given the theoretical nature of the paper, I do not consider this lack of discussion to be a problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the overall positive evaluation of our paper and your detailed and constructive comments and questions! > Section 5.2 reads more like an afterthought that is added to the paper to have an additional experiment. Thank you for your comment. This is mostly due to lack of space. We will give the section more prominence, and elaborate more on its nature and relevance. > Evaluation of the results is vague. E.g., Heteroscedastic VI (PP) has a huge variance in Figure 1, Exp 1, whereas it suffers no such problems in the individual replications. The Laplace approach of the proposed method not only performs a little bit worse than the others in setting two (survival-screen-A375), but struggles a lot, whereas the MAP approach does not suffer from this problem. Regarding VI: during training on the mean response, some runs diverge to outlier values despite exhaustive tuning of hyperparameters and the use of common VI libraries and their recommended settings. Hence those lead to a large error (while the majority of runs achieves good results). We provide more details regarding the VI baseline below. Regarding Laplace: this is indeed curious and we will investigate further. We want to emphasize that these are challenging real-world datasets and we don't expect any single method to perform best on these. Despite that, we find it encouraging that our method performs best in two out of three settings. > I am not sure how to parse the sentence in l27/28. Hasn’t epistemic uncertainty modeling been the main focus for now with aleatoric uncertainty modeling being of secondary interest? Can the authors provide some intuition on what they mean here? In the case of heteroscedastic regression, recent approaches, e.g. the faithful and $\beta$-NLL methods, are (variations of) maximum likelihood estimation and therefore do not provide epistemic uncertainties. This observation has motivated our statement. 
On the other hand, the Bayesian community indeed focused more on epistemic uncertainties instead of aleatoric uncertainties. We will clarify this. > Figure 2: How does the corresponding test NLL look like for the proposed approach on this task? The NLL is reported in Table 2 on the left. Note that the homoscedastic baseline in Fig. 2 is different since we use the empirical Bayesian homoscedastic method in Table 2 for its better performance. > In UCI and CRISPR, the proposed MAP approach performed almost always similarly to/better than the Laplace approach. This behavior drastically changes for the synthetic image data experiments. We hypothesize that the proposed empirical Bayesian approach with Laplace particularly helps for deeper neural networks because it can adjust the prior precision/weight decay individually per layer, which is not tractable with cross-validation. On UCI and CRISPR, the networks only have one hidden layer. We therefore believe that the proposed benchmark is interesting for heteroscedastic neural network regression because it requires more complex networks than prior benchmarks. > Can the authors speculate on the performance of VI in the naval and plant experiments? In both data sets the performance is not just a drastic outlier compared to the other models, but also with respect to prior results reported in these setups. [...]. A parallel question applies to VI on MNIST where it performs exceptionally bad, whereas I would expect a similar performance at least to the homoscedastic setup. Thank you for drawing our attention to these matters. As stated above, we had training stability issues despite extensive manual tuning for VI and some runs end up diverging while others give good performance. As per your suggestion, we will improve our VI baselines with local reparameterization and simpler prior. > VI should in practice be only slightly slower than the other methods. 
Especially given the statement in the appendix that for all methods in the non-image data, CPUs were sufficient, the argument on why the hyperparameter selection was limited is not completely clear to me. Can the authors elaborate? VI runs 10 times slower due to 10 MC samples. Fewer MC samples during training led to even more convergence issues. Therefore, we tested hyperparameters manually to obtain a suitable grid for validation. Nonetheless, some outlier runs reduce the performance. As stated above, we hope to alleviate these issues with Flipout or local reparameterization. > Have the authors experimented with a simpler approach than Blundell et al.'s mixture prior, relying instead on a simple mean-field normal prior with local-reparameterization (Kingma et al., 2015) to stabilize the gradients? Thank you for the suggestion. As stated above, we will experiment with these. However, we want to point out that we used a standard implementation for VI in neural networks (`blitz-bayesian-pytorch`) and followed suggested defaults (see also Appendix D.1.2 ll. 599-607). For now, we added the MFVI performance from the suggested DVI paper for comparison in Table 8 (see rebuttal PDF). > Minor comments and Typos Thanks for pointing these out. We will make the necessary changes. We will include the above clarifications, as well as the improvements mentioned in the general response, into our manuscript. We hope that these improvements positively influence your assessment of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I keep my score of recommending acceptance. Given the stability issues you report during training, I would, however, recommend always relying on a local reparameterization from now on, as it greatly reduces variance during training. 
I do not know the cited library, but you can find implementations in most major libraries, e.g., pyro for pytorch relies on it per default [1] and tensorflow probability [2] contains implementations of it as well. Inference without it just unnecessarily reduces (or even sometimes destroys as you observed) your numerical stability without there being a theoretical justification. _____ [1] https://docs.pyro.ai/en/stable/contrib.bnn.html [2] https://www.tensorflow.org/probability/api_docs/python/tfp/layers/DenseLocalReparameterization --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We greatly appreciate the advice! As also stated above, we will improve the VI baseline in the revised manuscript incorporating your suggestion. We already conducted some preliminary experiments, in which we observe that your recommended modifications improve the stability of the training of the VI baseline. As an example, see attached preliminary results for the CRISPR experiment, where we compare our previous VI implementation, `VI (Blundell)`, to `VI (Flipout + N prior)`, where we use Flipout and a simple Normal prior, in terms of test log-likelihood. Shown are mean and standard error over 10 seeds. | | flow-cytometry-HEK293 | survival-screen-A375 | survival-screen-HEK293 | |--------------|-----------------------|----------------------|------------------------| | VI (Blundell) | -1.30 (0.590) | -0.26 (0.006) | -0.15 (0.008) | | VI (Flipout + N prior) | -0.40 (0.006) | -0.27 (0.007) | -0.26 (0.01) |
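To make the trick discussed above concrete, here is a minimal NumPy sketch of local reparameterization (Kingma et al., 2015) for one Bayesian linear layer: instead of sampling a weight matrix and computing `x @ W`, one samples the pre-activations from their induced Gaussian, which lowers gradient variance. The names and shapes are illustrative assumptions, not the implementation from any of the cited libraries:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_reparam_linear(x, w_mu, w_logvar):
    """Sample layer outputs directly (local reparameterization):
    for a mean-field Gaussian posterior over weights, the pre-activations
    are Gaussian with mean x @ w_mu and variance x^2 @ exp(w_logvar),
    so we sample them instead of sampling a weight matrix."""
    act_mu = x @ w_mu                          # mean of pre-activations
    act_var = (x ** 2) @ np.exp(w_logvar)      # variance of pre-activations
    eps = rng.standard_normal(act_mu.shape)    # one noise draw per activation
    return act_mu + np.sqrt(act_var) * eps

x = rng.standard_normal((4, 3))        # batch of 4 inputs, 3 features
w_mu = rng.standard_normal((3, 2))     # posterior means for a 3x2 layer
w_logvar = np.full((3, 2), -20.0)      # tiny posterior variance (illustrative)

out = local_reparam_linear(x, w_mu, w_logvar)
# With near-zero weight variance the output collapses to the
# deterministic forward pass x @ w_mu.
assert np.allclose(out, x @ w_mu, atol=1e-2)
```

A single such draw per forward pass is usually enough to train stably, which is the runtime argument the reviewer makes above.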
Summary: This paper extends the technique of the Laplace Approximation to incorporate heteroscedastic aleatoric uncertainty. The proposed technique achieves this by exploiting the natural parameters of the Gaussian likelihood. The main motivation is the coupling of the mean and the input-dependent variance in the Gaussian likelihood. Using the natural parameters, the paper suggests the heteroscedastic Gaussian log-likelihood, linearized Laplace Approximation, and marginal likelihood for obtaining the prior precision term. On two standard benchmarks, including UCI, and one self-made benchmark, improvements over previous methods are demonstrated. Strengths: In my opinion, the following are the strengths of this paper: - The presentation of the paper is generally clear, making its contributions easy to see. I appreciated especially section 2, which carefully addresses the problem at hand. - The contribution of the paper is relevant to the current Bayesian Deep Learning community. For example, the extension of the Laplace Approximation to consider input-dependent aleatoric uncertainty is meaningful. To do so, the paper covers a long stretch, from identifying the problem in the standard formulation of the Gaussian likelihood to showing the adaptations of the linearized Laplace Approximation, Gaussian log-likelihood, predictive distribution, and marginal log-likelihood. - Experimental results show good performance. Weaknesses: The following might be the weaknesses of the paper (all major): - The experiments only focus on regression problems, which is limited in scope for a paper on Bayesian Deep Learning. The paper mainly focuses on regression problems, e.g., all the experiments as well as the methodologies (section 2 and section 5). One way to address this point could be certain changes in the title. Would it be possible to include the term "regression" in the title? - Certain parts of the paper need revision in presentation. 1. 
Many technical terms are introduced without being explained well. Major examples are in the introduction, 2nd paragraph, e.g., feasible generalized least squares, natural parameters, standard FGLS, etc. 2. The logic flow in the introduction may not be very kind to the reader. In particular, I had a difficult time reading the 3rd paragraph. After reading the paper, I could grasp the concept well, but my feeling is that it dives too deep into the details early on. 3. I left minor comments later in the review. - The baselines are restricted to the area of heteroscedastic aleatoric uncertainty estimation, which may be limited for a broader audience. The scale of the experiments is also limited to toy-ish benchmarks. One option would be to use the uncertainty-baselines benchmarks. In all the experiments, the baselines are rather restricted. While these choices validate the main point of the paper, I think the paper could improve by also competing against generic state of the art in uncertainty estimation. Would it be possible to include at least MC-dropout, deep ensemble, and maybe stochastic HMC? (with homoscedastic aleatoric terms?) In this way, one could examine the importance of heteroscedastic aleatoric uncertainty estimation. - A related work section is missing in the paper, which makes it harder to locate this work within the state of the art. To elaborate: there is no related work section in the main body of the paper. This makes it difficult to locate this work within the current state of the art. Would it be possible to include one? (one could shorten other parts of the paper). Some important areas are the usage of natural parameters of the Gaussian distribution, and a more detailed treatment of aleatoric uncertainty, e.g., calibration methods, combination of both model and data uncertainty, etc. 
To name only a few, highly relevant works seem to be: (1) Natural-Parameter Networks: A Class of Probabilistic Neural Networks (one of the works that uses natural parameters in a more general context) and (2) estimating model uncertainty of neural networks in sparse information form (one of the works that uses natural parameters of the Gaussian distribution for the Laplace Approximation). Why the authors build on a regularization-based aleatoric uncertainty estimation method should also be mentioned. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: (minor comments/questions) - in lines 20-21: isn't active learning and reinforcement learning part of decision-making problems? - in lines 22-24: there is a lot of work on modeling data noise as a function of inputs. It might be an outdated statement. - lines 48-54 should be made easier to read. - Can you comment on the practical relevance of Problem 2.1? - Is it possible to refer to all the derivations? (if they are present in the appendix?) - line 198: is there a typo in the Jacobian term? - is it possible to comment on the complexity of the overall pipeline, when compared to the standard linearized Laplace Approximation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The limitations section exists. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and constructive questions, which will help improve our manuscript. > The paper mainly focuses on the regression problems. [...] Would it be possible to include the term "regression" in the title? Thank you for pointing out that heteroscedastic does not imply the focus on regression. We are happy to adjust the title to include "regression", for example, "Effective Bayesian Heteroscedastic Regression: Model Selection and Epistemic Uncertainties". > The baselines are restricted to the area of heteroscedastic aleatoric uncertainty estimation, which may be limited for broader audience. [...] Would it be possible to use the uncertainty-baselines benchmarks? We believe that the problem of regression, as opposed to classification, is similarly important in practice and for the broader audience, but relatively underexplored in deep learning. In fact, the uncertainty-baselines benchmarks illustrate this issue since they exclusively include classification problems. > In all the experiments, the baselines are rather restricted. [...] Would it be possible to include atleast MC-dropout, deep ensemble, and maybe stochastic HMC with homoscedastic aleatoric terms? Thanks for these suggestions. We are happy to include these as baselines. For UCI regression datasets, we include the results provided by (Wu et al, 2019; Gal et al 2016). These works use the same setup as we do, so results are comparable and already provide evidence that our approach outperforms these additional baselines, speaking to the significance of our work. However, we want to note that deep ensembles are applicable to all the baselines and our method, see Eschenhagen et al. (2021). We would therefore add an ablation that studies the effect of additional ensembling for ours and other methods. We are happy to extend these baselines to all our experiments in the camera-ready version and will assess the feasibility of an HMC baseline. 
References: - Wu A., et al. Deterministic Variational Inference for Robust Bayesian Neural Networks. In ICLR, 2019. - Gal Y., and Ghahramani Z. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In ICML, 2016. - Eschenhagen R., et al. Mixtures of Laplace approximations for improved post-hoc uncertainty in deep learning. Bayesian Deep Learning Workshop 2021. > Certain parts of the paper need revision in presentation. [...] undefined technical terms, confusing introduction. Thank you for the clear description of what was unclear. We will improve the text accordingly. > Related work section is missing in the paper, which makes it harder to locate this work within the state-of-the-art. [...] We understand that while we discuss the most recent related work in great detail (Sec. 2, Appendix B and C), we should improve the discussion of related work in a broader context. Thus, we will extend the discussion of related work in Sec. 2, making sure that we cover the baselines that we have now added, and your suggestions discussed below. > relevant works seem to be: (1) Natural-Parameter Networks: A Class of Probabilistic Neural Networks and (2) estimating model uncertainty of neural networks in sparse information form. Thank you for the additional references; we will refer to them, but want to point out that they are rather complementary to our work: (1) uses natural exponential families for posterior approximations in Bayesian neural networks and (2) could be used as an alternative to KFAC for the Laplace posterior approximation. Both only apply to homoscedastic regression but could very well be combined with our approach to be applicable to heteroscedastic regression. > Isn't active learning and reinforcement learning part of the decision making problems? Thanks for pointing out this error, we will fix it. 
> Suggestions made in the "Questions" section We will make the corresponding changes and clarifications: 1) active and reinforcement learning are part of decision-making problems; 2) correct lines 22-24; 3) clarify lines 48-54; 4) refer to derivations in the appendix; 5) clarify the Jacobian term. > Can you comment on the practical relevance of the problem 2.1? Recent works on heteroscedastic regression focus on ad-hoc regularizations that seem to work for certain problems but fall short when applied to more complex problems. Problem 2.1 is a prototype of a complex heteroscedastic regression problem that a good algorithm should be able to solve. > Is it possible to comment on the complexity of the overall pipeline, when compared to the standard linearized Laplace Approximation? The standard linearized Laplace approximation is not applicable to the heteroscedastic case due to a potentially negative definite Hessian, which our approach fixes using natural parameters. The complexity of our approximation is the same as for homoscedastic regression. This also shows in the empirical measurements in Table 9 of the rebuttal pdf, where homoscedastic and heteroscedastic inference have the same runtime. We will further clarify the computational complexities in the appendix. We hope that we could address your questions and remarks in our response and the attached pdf with additional results. If so, we would appreciate if you considered revising your score. --- Rebuttal Comment 1.1: Title: On author response Comment: I would like to thank the authors for the efforts involved. I have read the rebuttal as well as that of other reviewers. I decided to increase the score, given that the promises made during the rebuttal are properly addressed in the final revision.
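As background on the natural parameterization discussed above: for a Gaussian, the natural parameters are $\eta_1 = \mu/\sigma^2$ and $\eta_2 = -1/(2\sigma^2)$, with $\eta_2 < 0$, and the log-likelihood takes the exponential-family form $\eta_1 y + \eta_2 y^2 - A(\eta_1,\eta_2) - \tfrac12\log 2\pi$ with log-partition $A = -\eta_1^2/(4\eta_2) - \tfrac12\log(-2\eta_2)$. The sketch below checks this identity numerically; it is a generic textbook derivation with illustrative names, not the paper's implementation:

```python
import numpy as np

def heteroscedastic_nll_natural(eta1, eta2, y):
    """Gaussian negative log-likelihood in natural parameters,
    eta1 = mu / sigma^2 and eta2 = -1 / (2 sigma^2), eta2 < 0.
    Exponential-family form with log-partition
    A(eta) = -eta1^2 / (4 eta2) - 0.5 * log(-2 eta2)."""
    A = -eta1 ** 2 / (4.0 * eta2) - 0.5 * np.log(-2.0 * eta2)
    logpdf = eta1 * y + eta2 * y ** 2 - A - 0.5 * np.log(2.0 * np.pi)
    return -logpdf

# Consistency check against the usual (mu, sigma^2) parameterization.
mu, var, y = 0.3, 2.0, 1.1
eta1, eta2 = mu / var, -1.0 / (2.0 * var)
ref = 0.5 * np.log(2.0 * np.pi * var) + (y - mu) ** 2 / (2.0 * var)
assert np.isclose(heteroscedastic_nll_natural(eta1, eta2, y), ref)
```

In this parameterization the log-density is linear in the sufficient statistics $(y, y^2)$, which is what makes the Hessian of the likelihood term tractable for the Laplace approximation, as the authors argue.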
Summary: This article focuses on refining the techniques used to manage two types of uncertainties in complex regression tasks utilizing deep neural networks. The uncertainties, known as aleatoric (arising from inherent randomness in the data) and epistemic (originating from the model's limitations), are critical to address for robust and accurate predictions. The authors propose an innovative approach using the natural parameterization of the Gaussian likelihood to overcome the gradient scaling issue commonly encountered in traditional methodologies. Additionally, they introduce an efficient Laplace approximation that enhances heteroscedastic neural networks, providing epistemic uncertainties, and facilitating automatic regularization through empirical Bayes. This method outperforms earlier strategies in heteroscedastic regression, demonstrating scalability and obviating the need for hyperparameter tuning. Empirical validation on diverse datasets, including a new image dataset, UCI regression, and CRISPR-Cas13, yielded superior performance, suggesting potential applicability to other real-world datasets in the future. Strengths: - They are addressing very important shortcomings in one of the best algorithms for quantifying uncertainty using neural networks. - Their method seems mathematically solid and is supported by extensive experiments. - The code is available, and this work seems to be reproducible. - Their method is thoroughly compared with the rival algorithms. - Compared to rival methods, this method has fewer hyperparameters. Weaknesses: I cannot spot any weaknesses in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How large are the computational cost and runtime of the proposed method in comparison to the rival methods? If they are significantly heavier, this should be properly discussed in the limitations section. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of this work are discussed properly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive remarks! Below we would like to clarify your concerns regarding computational cost of our method. > How large is the computation cost and time of the proposed method in comparison to the rival methods? 1. Using the natural parameterization of the likelihood does not change the computational cost or complexity. 2. Using the Laplace posterior and predictive incurs a one-time cost of computing the approximation (Sec. 4.1) and the posterior predictive is slightly more expensive (Sec. 4.3) with $\mathcal{O}(P\sqrt{P})$ instead of $\mathcal{O}(P)$ for an MLP with fixed hidden layer sizes using KFAC. Empirically, the predictives are very fast and this is not a bottleneck. In comparison to the standard predictive, Table 9 in the rebuttal pdf shows that the posterior predictive is only 5x slower than a single forward pass without epistemic uncertainty. 3. Using empirical Bayes even **reduces** the computational cost because it only requires a single training run in comparison to validation-based selection of the regularization. This is apparent from Table 9 as well where the runtime of empirical Bayes is roughly 5 times faster when the hyperparameter tuning is included. Overall, the proposed natural Laplace method leads to faster training than rival methods when including the regularization strength selection and a slight increase in runtime of the posterior predictive. We will clarify the above points in Sec. 4.4 and hope that our answer alleviates your concerns regarding the computational cost and time of our method. If so, we would appreciate if you considered revising your score. --- Rebuttal Comment 1.1: Comment: Thank you for the extra information provided. I will modify my assessment accordingly.
Summary: This paper introduces a new method for fitting heteroscedastic regression models with neural networks. In contrast to previous work that highlights the deficiencies of MLE and indirectly hints at regularization, this paper suggests using natural gradients during training and combines Bayesian ideas for regularization and for handling uncertainty. This modeling approach accounts for both aleatoric and epistemic uncertainty through the posterior predictive distribution. Bayesian inference can be computationally expensive, so they utilize a Laplace approximation to the posterior and simplifying factorizations on the Hessian. Natural gradients avoid some of the issues with gradients that occur when training with other parameterizations ($\mathcal{N}(\mu, \sigma^2)$). Empirically, this method achieves strong results on real-world data for regression tasks as well as on image data. Strengths: - Problem is well-motivated, and the distinctions between this method and existing ones are clear - Experiments show solid results and extend the method to a setting with image data as opposed to only tabular data - Combines ideas from approximate Bayesian inference with a parameterization commonly used for Gaussian processes Weaknesses: - typo line 98: "the mean respectively the covariance matrix" --> "the mean and covariance matrix, respectively" - Structurally, it was odd to see Problem 2.1 presented so high up in the paper - I would appreciate more discussion on the differences between the natural MAP vs natural Laplace methods. In particular, the differences in the interpretations of the two models (what sorts of uncertainties they account for) and when to use which Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - How does this method perform on data that is homoscedastic (ie the model is misspecified for the task)? Does the variance function flatten out, or will it overfit the variances (or $\eta_2$)? 
- Does empirical Bayes put this method at risk of overfitting? - Is $\eta_2(\cdot)$ necessary if it gets omitted when estimating the epistemic uncertainty (line 250) and what was the reason for this choice if it could have been done at low cost? - How does this type of "Bayesian-ness" compare to the ideas presented in Stirn and Knowles (2020) or Detlefsen et al (2019) in terms of interpretation? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: There was not much discussion of this, but that is appropriate for the nature of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive evaluation of our work and your constructive comments and questions. **Weaknesses** > It was odd to see Problem 2.1 presented so high up in the paper. Currently, Section 2 serves as a theoretical comparison to the most related works. Within this context, we introduce the problem for the case-study presented in Figure 2 to highlight the conceptual difference to our approach. Given your feedback and the feedback of the other reviewers, we plan to make the discussion of related work more direct in this section, complementing our discussion in the introduction, Section 2, and Appendices B and C. We can defer the definition to the experiments. We also welcome other suggestions. > I would appreciate more discussion on the differences between the natural MAP vs natural Laplace methods. These only differ in how the regularization is optimized: for "MAP", we optimize the regularization on a validation set, while for "Laplace", we use empirical Bayes (ML-II; see Sec. 4.2) to optimize the regularization jointly on the training data. We provide the pseudocode for the optimization in Appendix E1: for "MAP" the optimization of $\delta$ and the corresponding hyperparameter update are omitted. We will make this clearer by replacing "Laplace" with "empirical Bayes (EB)". **Questions** > How does this method perform on data that is homoscedastic (ie the model is misspecified for the task)? Does the variance function flatten out, or will it overfit the variances (or $\eta_2$)? Thanks for the interesting question. We added results for this on the Skafte and image regression tasks with modified homoscedastic noise to the rebuttal pdf in Figure 8. We indeed find that our method successfully regularizes towards a homoscedastic aleatoric uncertainty. > Does empirical Bayes put this method at risk of overfitting? Since we only optimize the regularization strength, the potential to overfit is quite limited. 
Our experiments confirm this: we do not find it to overfit compared to validation-based selection of the regularization. > Is $\eta_2$ necessary if it gets omitted when estimating the epistemic uncertainty (line 250) and what was the reason for this choice if it could have been done at low cost? Yes, $\eta_2$ is necessary since the Jacobian $J_\mu(x)$ depends on it and is required to approximate the epistemic uncertainty. This is a low-cost estimate since it is available in closed form. > How does this type of "Bayesian-ness" compare to the ideas presented in Stirn and Knowles (2020) or Detlefsen et al (2019) in terms of interpretation? Stirn & Knowles (2020) and Detlefsen et al. (2019) propose non-Bayesian methods and therefore infer only heteroscedastic aleatoric uncertainties $\sigma^2(x)$. They have no epistemic uncertainty, which would require uncertainty about the model or its parameters. For a visual depiction of the additional epistemic uncertainty, see "Heteroscedastic Laplace" on the right in Fig. 1. We hope that we could clarify your questions and, if so, would appreciate it if you considered revising your score. --- Rebuttal Comment 1.1: Comment: Thanks for the extra information and clarifications. I will update my score.
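The Jacobian-based epistemic uncertainty discussed in the exchange above can be sketched with a minimal linearized-Laplace predictive (generic names and shapes are our assumptions, not the authors' implementation): the epistemic variance is the quadratic form of the mean Jacobian with the posterior covariance, and the heteroscedastic aleatoric variance is added on top.

```python
import numpy as np

def predictive_variance(j_mu, sigma_post, sigma2_x):
    """Linearized-Laplace predictive variance at one input x (illustrative sketch).

    j_mu:       (P,) Jacobian of the predicted mean w.r.t. the P parameters at x.
    sigma_post: (P, P) Laplace posterior covariance over the parameters.
    sigma2_x:   heteroscedastic aleatoric variance predicted at x.
    """
    epistemic = j_mu @ sigma_post @ j_mu   # J_mu(x) Sigma J_mu(x)^T
    return sigma2_x + epistemic

P = 4
var = predictive_variance(np.ones(P), 0.01 * np.eye(P), sigma2_x=0.5)
assert np.isclose(var, 0.5 + 0.04)  # epistemic part: 0.01 * 4
```

With no posterior uncertainty (`sigma_post = 0`), the epistemic term vanishes and only the aleatoric $\sigma^2(x)$ remains, matching the non-Bayesian baselines mentioned in the reply.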
Rebuttal 1: Rebuttal: We thank the reviewers for their time and constructive feedback on our manuscript. It is encouraging to see that the reviewers rated our paper overall positively, pointing out that our paper *"identifies prior weaknesses, and offers a principled solution"* (Reviewer 7fHN), that Reviewer HwYi *"appreciated especially the section 2, which carefully addresses the problem at hand"*, that our *"method seems mathematically solid and is supported by extensive experiments"* (Reviewer PsPP), and lastly that Reviewer 2cQi acknowledges our dedication to *"extend the method to a setting with image data as opposed to only tabular data"*. Further, as asked by the reviewers, we investigated the following additional aspects (cf. attached pdf): * **Baselines (HwYi, 7fHN)**: After confirming that the papers of Wu et al. (2019) and Gal and Ghahramani (2016) follow the same experimental setup as we do for the UCI datasets, we included their results for MFVI (Graves, 2011), DVI (Wu et al., 2019), and MC Dropout (Gal and Ghahramani, 2016) as shown in Table 8. We find that our proposed approaches still reach the overall best performance. * **Homoscedastic Data (2cQi)**: We performed additional experiments to confirm that our approach also performs well on homoscedastic data: a) we plot the mean and uncertainty estimates for a homoscedastic variant of the data generated for Figure 1 and b) compare both on a homoscedastic variant of the image regression task in Table. The results are in Figure 8 of the rebuttal pdf. They show that the proposed approach does not overfit and instead recovers the homoscedastic aleatoric uncertainty. * **Runtime (PsPP)**: In general, we would like to note that the Laplace methods do not require cross-validation, which is necessary for MAP and the remaining baselines, and therefore have an advantage in terms of runtime. 
In Table 9 of the rebuttal, we show runtimes for the image regression task for training (including hyperparameter tuning) and inference. * **Visualizations**: In addition to showing the differences between MAP and Laplace, we also plot the mean and uncertainty estimates for MC Dropout and MFVI for the example provided in Figure 1 (cf. Figure 7, rebuttal pdf)---showing a favourable fit for our method. Beyond these experiments, we will also add MC Dropout and an improved VI baseline to the remaining experiments, and add an ablation in which we instantiate our methods and baselines as deep ensembles (Reviewer HwYi). Further, we will extend Section 2 to discuss more related work---especially from the Bayesian learning literature as suggested by the reviewers, and lastly, include all clarifications provided during the rebuttal into our manuscript. - Wu A., et al. Deterministic Variational Inference for Robust Bayesian Neural Networks. In ICLR, 2019. - Gal Y., and Ghahramani Z. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In ICML, 2016. - Graves A. Practical variational inference for neural networks. In NeurIPS, 2011. Pdf: /pdf/bd7928144335f369a834baa9745b4bc0804e09cf.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Combinatorial Group Testing with Selfish Agents
Accept (poster)
Summary: The paper studies the Combinatorial Group Testing (CGT) problem in a game-theoretic setting. The setup consists of a set $[n]$ of $n$ agents, of which a set of $k$ agents, denoted by $K$, are "active". The goal is to reveal their identities in the minimum time possible via a set of queries. Each query $Q$ is a subset of $[n]$ and the feedback available to agents is information about $Q \cap K$. The authors present results using the notion of adversarial equilibrium (AE). Two scenarios are explored: one where the number of active agents ($k$) is known, and one where it is not. In the known scenario, they demonstrate AE strategies that achieve near-optimal revealing times $O(k \log(\frac{n}{k}))$. However, in the unknown scenario, the revealing time increases to the order of $n^{k-1}$ with a lower bound of $\Omega(n)$. ===================================================================== EDIT: I have improved the score from 6 to 7 after authors' response. Strengths: (+) The problem is very nicely and clearly introduced in Section 2. Also, the problem is quite interesting. The paper is well written; especially the technical aspects are well articulated and discussed. (+) The algorithm for the case where $k$ is known (BS\_Jumps) and the analysis showing that the performance is "close" to the optimal is a sound contribution. (+) CGT has been studied for decades. This new outlook leveraging game-theoretic ideas is indeed interesting (though this paper is not the first one to use it). (+) Several interesting applications are identified and discussed. Weaknesses: (-) The paper essentially builds on the ideas of Chionas et al. [2023]. It also seems like an extension of their setups, which is not entirely a bad thing; however, it limits the extent of contributions. The main algorithmic contribution of the paper is basically the \emph{BS\_Jumps} algorithm. The other algorithms are either from the literature or are straightforward. 
(-) The feedback function used is not well motivated (at least this reviewer can not make practical sense out of it). Please discuss the implications of it. (-) This paper offers theoretical results, which is good. However, there is no numerical evaluation/experiments section. I think a section dedicated to the empirical testing or validation of the proposed algorithms will surely enrich the paper. I think this is an important weakness of the paper. (-) The case where $k$ is unknown is not thoroughly (at least not as thoroughly as the case with known $k$) discussed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: (*) I do not understand the idea of "selfishness" of an agent. What constitutes the selfish behavior and what are the implications of that? Please clearly define and introduce it. (*) In Section 2.1, is it true that none of the agents have information about the query generated at time $t$ (as one agent can deviate from the prescribed strategy)? So, is it fair to say that the "algorithm" is a central entity collecting the output of each agent's decision to be included in the query or not, along with the agent's decision to deviate from the standard strategy or not? This central algorithm entity then creates a query. (*) The authors present two new algorithms but don't provide enough information about their efficiency, besides mentioning the revealing time. For instance, are there any concerns about computational complexity, memory use, or scalability to larger problems? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: (*) As I mentioned before, please discuss the motivation/limitations/implications of the specific feedback function (lines 117 -- 119). 
(*) The authors mention various applications of CGT in the introduction, but it's unclear how well their AE model or algorithms would perform in these situations based on the information provided. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1**: *The paper … builds up on ideas of Chionas et al. [2023]. It also seems like extension of their setups, which is not entirely a bad thing; however, it limits the extent of contributions. The main algorithmic contribution of the paper is basically the BS_Jumps algorithm.* Reply: We adapt their definition of Adversarial Equilibrium (AE) in our new setting with any algorithms (strategies), including non-adaptive and adaptive ones, but Chionas et al. consider only non-adaptive strategies. This adaptation of AE requires a new formalization of deviation: the deviation was fixed in advance (non-adaptive) in Chionas et al., while in our case it could be decided by the deviating agent online. There are also minor differences in the problem setting, as they consider the CR problem while we focus on general CGT. Adaptive strategies pose new significant challenges, discussed in the general rebuttal file, and as a result, our algorithms, formulas, and analysis are **entirely different**. **Weakness 2, Limitation 1**: *Feedback function used is not well motivated (at least this reviewer can not make a practical sense out of it). Please discuss the implications of it.* Reply: Our feedback function, defined in lines 117-119, is one of the simplest considered in CGT, known in the literature as ternary feedback or $(2, \log n)$-feedback in the generalized CGT nomenclature, cf. Klonowski et al. Generalized framework for group testing:... TCS 2022. If the query has a singleton intersection with the hidden set, then the feedback naturally reveals the respective agent's ID. If the intersection is empty or has size at least $2$, then the feedback is empty or a clash, respectively. We chose this feedback for presentation of our ideas because of its simple nature (e.g., no need to "decode" the ID of singletons). Probably an even more popular feedback is one where, instead of the ID of the singleton, a value $1$ or a range $\ge 1$ (so-called "beeping") is revealed. We show (Suppl.
Materials Sec. 5) how to extend our results to such weaker CGT feedbacks and to the setting used in the CR problem. Thus, our results cover a wider range of natural, well-motivated CGT and CR settings. **Weakness 3**: *The case where $k$ is unknown is not thoroughly (at least not as thoroughly as the case with known) discussed.* Reply: The case of unknown $k$ is less discussed, as we prove a strong lower bound in this case: we prove that any CGT+AE algorithm in this case has revealing time at least $\Omega(n)$. We also design and analyze an algorithm which is polynomial in $n$ for any constant $k$; see also our response to Q1 of Reviewer X2Ux regarding technical challenges. It is interesting, though, to explore cases between knowing and not knowing at all the number of active agents, e.g., having partial knowledge about $k$, an upper bound on $k$, or even considering the number of active agents as a random variable and deriving bounds for the distribution of this number. **Q1:** *I do not understand the idea of "selfishness" of an agent. What constitutes the selfish behavior and what are the implications of that? Please clearly define and introduce it?* Reply: In our setting, each agent's goal is to minimize its revealing time. Selfish behavior means that an agent could deviate from what a query says, see lines 169-176, in order to reduce its revealing time. As we prove that our algorithms are AE, no agent will deviate unilaterally without a threat of worsening their revealing time. An implication of selfish behavior in equilibrium is that the respective metric of global efficiency is compromised compared to the cooperative setting. This is quantified by the Price of Anarchy, and the loss in efficiency is depicted in Table 1. **Q2:** *In Sec. 2.1, is it true that no agent has information about query generated at time $t$ (as one agent can deviate from the prescribed strategy). 
so, is it fair to say that "algorithm" is a central entity collecting output of each agent's decision to be included in the query or not along with agents decision to deviate from standard strategy or not. This central algorithm entity then creates a query.* Reply: The reviewer's intuition is correct; our only comment is that we use the word "algorithm" in association with strategies (or computing them) at each agent, while the "centralized entity" mentioned by the reviewer is actually the **feedback function**. It was defined within the CGT setting (from line 115), but we will add a reminder after line 168 that: after each agent decides if it is present in the query or not, the feedback function's outcome is computed based on the agents' decisions and communicated to all agents. **Q3:** *The authors present two new algorithms but don't provide enough information about their efficiency, besides mentioning the revealing time. For instance, are there any concerns about computational complexity, memory use, or scalability to larger problems?* Reply: The main efficiency metric is the revealing time (query complexity), i.e., the number of the game's rounds. Total communication complexity can be derived as a function of the protocol's revealing time. In each round, each active agent communicates a single bit. The ternary feedback can be encoded by $\log n$ bits. Let $T$ be the protocol's revealing time. Thus, the total communication complexity of a protocol is $(k+\log n) \cdot T$ bits. The total number of local steps during the game can be upper bounded by $\log n \cdot T$, as each round of algorithm BS_Jumps requires updating a token, which is a logarithmic operation (see Suppl. Materials, Sec. 3). The memory needed in each agent is logarithmic, due to the simple recursion in the main algorithm, the token update, and a constant number of variables. 
This assumes that a basic arithmetic operation on values with logarithmic bit representation is done in a single local step, and that these numbers are stored in a memory unit. **Limitation 2**: applications are addressed in the general rebuttal file. --- Rebuttal Comment 1.1: Title: Thanks for the clarification Comment: Thank you for the clarification. I believe the authors addressed my questions in sufficient detail, including more information about the feedback function, explanation of terms, and the distinction and positioning of this work in the context of known works. The only remaining "concern" is the weakness I mentioned in the earlier review. "This paper offers theoretical results, which is good. However, there is no numerical evaluation/experiments section. I think a section dedicated to the empirical testing or validation of the proposed algorithms will surely enrich the paper. I think this is an important weakness of the paper." Authors have not commented on it. I will be happy to increase my score after authors comment on that. Please note that I just would like to know why no experiments are included, or why there is no good need of them here? Thanks. --- Reply to Comment 1.1.1: Title: Numerical evaluation/experiments Comment: We thank the reviewer for the comments. Here are our explanations about the experiments: We have an ongoing cooperation with a blockchain company which runs the global blockchain Tezos, where we proposed to them our algorithm BS_Jumps as a fairness mechanism for their PoS (Proof of Stake)-based consensus protocol (as described in Section 5.4 of our Supplementary Materials). They are currently considering the implementation and experimenting with our algorithm on their platform, but their policy and timing have not yet allowed us to include these results in the NeurIPS submission.
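The ternary feedback described in the rebuttal above (empty intersection, singleton, or clash) is simple enough to sketch directly; this is an illustrative toy implementation with our own naming, not the paper's code.

```python
def ternary_feedback(query: set, hidden: set):
    """Ternary CGT feedback: depends only on the intersection of the query
    with the hidden set of active agents (cf. lines 117-119 of the paper)."""
    hit = query & hidden
    if not hit:
        return "empty"            # empty intersection
    if len(hit) == 1:
        return ("id", next(iter(hit)))  # singleton: the agent's ID is revealed
    return "clash"                # intersection of size >= 2

K = {3, 7}  # active agents, chosen adversarially and unknown to the agents
assert ternary_feedback({1, 2}, K) == "empty"
assert ternary_feedback({2, 3}, K) == ("id", 3)
assert ternary_feedback({3, 7, 9}, K) == "clash"
```

The weaker "beeping" feedback mentioned in the rebuttal would return only `1` or `">=1"` in place of the revealed ID.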
Summary: This submission applies and extends a very recent game theoretic perspective on contention resolution games, introducing in particular a notion of strategy equilibrium called adversarial equilibrium, to the more general setting of combinatorial group testing. In combinatorial group testing, a (typically small) group must be revealed among a large set of candidates by feedback to certain queries. In this game theoretic setting the small group is considered as the set of players who execute strategies to include or remove themselves from the queries prescribed by an algorithm given on input. Adversarial equilibrium captures the notion that no deviation from a strategy allows a player to keep their revealing time from increasing under every adversarial choice of the small set while strictly decreasing it under at least one such choice. The submission also provides strategies for agents which satisfy an adversarial equilibrium and achieve almost the best possible bound (the lower bound is known and independent from the considered game theoretic model) on the latest revealing time of a player in the small group when the size of the small group is known, and less tight upper and lower bounds for the latest revealing time of a player achievable by adversarial equilibrium strategies. Strengths: The combinatorial group testing problem is highly relevant, also in the context of machine learning and hence fits the scope of NeurIPS well. The contribution claims to generalise some recent results accepted for publication at a strong venue (unfortunately the proceedings seem not to be available yet and hence I was not able to have a closer look at it). The quality of the technical writeup is good and I did not spot any flaws. Weaknesses: To me the motivation of requiring adversarial equilibria or even considering CGT as games played by the small set that should be revealed is not sufficiently supported in my opinion. 
In particular I would appreciate a discussion of settings of CGT in which this is natural to consider (the applications provided in the supplementary material are a step in this direction but I would ask even more explicitly why one would be interested in designing AE strategies for blockchain mechanisms; for CR this seems a bit more clear to me but this is also somewhat unsurprising as this is the context in which AE was introduced) as well as a more explicit comparison of how "adversarial aspects" have been addressed in CGT previously. -L76: it is not completely clear to me what "usefulness" is demonstrated here. By this I do not mean to say that the presented results are uninteresting, but rather that I do not understand how they demonstrate any benefit of requiring AE, since it is even shown for unknown k that this comes at the necessary cost of worse latest revealing times. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can you clarify the above points relating to the motivation behind taking a game theoretic perspective and requiring adversarial equilibrium for CGT? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we give our answers. If the reviewer believes that we have addressed adequately all of her/his concerns, we would very much appreciate it if she/he reconsiders upgrading the score for our paper. **Question 1:** *To me the motivation of requiring adversarial equilibria or even considering CGT as games played by the small set that should be revealed is not sufficiently supported in my opinion. In particular I would appreciate a discussion of settings of CGT in which this is natural to consider (the applications provided in the supplementary material are a step in this direction but I would ask even more explicitly why one would be interested in designing AE strategies for blockchain mechanisms; for CR this seems a bit more clear to me but this is also somewhat unsurprising as this is the context in which AE was introduced) as well as a more explicit comparison of how "adversarial aspects" have been addressed in CGT previously.* Response: Our CGT+AE framework could be applied to **assure fairness** in any application of CGT which assumes some autonomy of the elements. For instance, one could consider new blockchain mechanisms in which miners compete to add their blocks to the blockchain in a fair way. To do it, they could pick or be assigned a random ID, and our new framework assures that all blocks will be efficiently added, in a random order (as no miner has any incentive to deviate from the search going through random IDs). Another application that uses the full CGT setting is where we are given a distributed database with $n$ selfish servers, where some $k$ of those servers hold $k$ pieces of information that a user is looking for in the database. In the beginning, the IDs of these servers are not known to the user (in this sense, we could model it as an adversarial choice). 
Each of the $k$ servers wants to have its information found ("released") as early as possible (because it can use its resources to serve the next query when it is free and does not need to wait to "release" its information, or could even be awarded for promoting the information stored on it). A possible approach could give random IDs to the servers before running a CGT+AE search, thus also ensuring fairness in the treatment of the servers, unless they would deviate (but that is prevented, or at least discouraged, by the solution being an AE). A similar technique could be applied to master-worker systems, including Distributed/Decentralized/Federated ML and AI. In order to **avoid biases** when processing the results of the jobs in some order, the master could require the workers who are ready (in our model -- active) to submit their results in random order. To do it, the master attaches random IDs to the jobs sent to workers, and asks those of them who are ready to execute a CGT protocol which is an AE. The CGT part assures that all work is collected. Random IDs of jobs assure a random order, provided no worker deviates. The AE part assures that no matter which set of workers is ready, none of them is incentivized to deviate. There are also several other applications of CGT considered in recent ML/AI publications, to different types of searches, string mining, etc.; cf. - J. Engels, B. Coleman, and A. Shrivastava. Practical near neighbor search via group testing. NeurIPS 2021 - D.R. Kowalski and D. Pajak. Light agents searching for hot information. IJCAI 2022 We will include these applications in the proceedings version of our paper (as typically 1 extra page is available in the proceedings) if the paper is accepted. **Question 2:** *-L76: it is not completely clear to me, what "usefulness" is demonstrated here. 
By this I do not mean to say that the presented results are uninteresting but rather that I do not understand how they make a point of any benefit behind requiring AE as it is even presented for unknown k that this comes at the necessary cost of worse latest revealing times.* Response: Apologies for the confusion. We used the word "usefulness" in the wrong context. What we meant here is that our added notion of AE does make a difference compared to the classical CGT (and related settings, such as CR) because it leads to different algorithms, techniques, and performance bounds. --- Rebuttal Comment 1.1: Comment: Thank you for your response. All points raised in the weaknesses are addressed to my satisfaction and I will raise my score to recommend acceptance.
Summary: This paper introduces a game theoretic framework in the context of combinatorial group testing. In detail, in a population of $n$ elements, there are $k<n$ agents that are considered active (e.g. they are positive to some disease) and the goal is to use queries on subsets of the $n$ elements to reveal who those $k$ agents are. These active agents can strategize, i.e., choose whether they are included in the queries used to reveal the subset of active elements. The authors introduce a modified notion of adversarial equilibrium that allows adaptive queries (i.e. queries that can depend on previous queries). Strengths: * Clear and interesting outline of previous work. * Accessible language, self-contained definitions, and overall clear presentation of background and novel ideas. * The game theoretic setting seems interesting, natural, and well-motivated. Weaknesses: The main weakness is the fact that the main results have to do with an a priori known number of active elements $k$. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * What would be the main challenge in getting similar results for an unknown number $k$? * Why do you not consider non-active elements as non-strategizing? * Could you clarify the connection between CR and CGT? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are, to the best of my understanding, adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback. Below we give our answers. If the reviewer believes that we have addressed adequately all of her/his concerns, we would very much appreciate it if she/he reconsiders upgrading the score for our paper. **Weakness:** *The main weakness is the fact that the main results have to do with an a priori known number of active elements.* Response: We prove that it is in the nature of the problem that the lack of any knowledge of $k$ yields high complexity. More precisely, we prove that if $k$ is unknown, every algorithm has revealing time at least $\Omega(n)$ (please see our lower bound); in this case we show an algorithm with revealing time $O(n^k)$. Hence, for unknown $k$ the problem is provably hard. On the other hand, this question of the reviewer opens an interesting avenue for further work. That is, we can ask what happens if we have some partial knowledge about $k$, for instance, a rough upper bound on $k$, e.g., $k \leq k^*$ for some $k^*$ known to the algorithm? In this case, could one prove an upper bound on the revealing time that is, for instance, polynomial in $k^*$ and $\log(n)$? Another opportunity for efficient algorithms would be a model with an extended feedback function, e.g., returning some approximate or even exact size of the intersection of the query and the hidden set (so-called Quantitative Group Testing). However, one would need to study if there are any beneficial deviations due to such extended feedback received in each round of the game. **Question 1:** *What would be the main challenge in getting similar results for an unknown number $k$?* Response: If $k$ is unknown, the adversary has even more power, since it not only selects the configuration but also the size, and the agents are agnostic to it. 
Our current approach is, roughly speaking, to query complements of all possible configurations for increasing values of $k-1$ (starting from $k-1=1$), so that when the considered size of a configuration is correct, any deviation leads to a discoverable effect and a "punishment strategy" can be applied by non-deviating agents. Finding a way to avoid addressing all configurations of a given size in separate queries, and thus (substantially) reducing the number of queries, seems to be the main challenge in the setting with unknown $k$. We formally show that this setting is more difficult than the one with known $k$ by proving a lower bound of $\Omega(n)$ on the revealing time. Recall that in BS_Jumps$(n,k)$ we achieve equilibrium with low latency because of the knowledge of the $(k-1)$'st revelation -- this is not possible in the case of unknown $k$. **Question 2:** *Why do you not consider non-active elements as non-strategizing?* Response: The nature of CGT splits agents into $k$ active agents and the remaining $n-k$ non-active agents. However, at the beginning of the game no agent knows which other agents are active and could therefore strategize. The set of active agents is revealed as the game proceeds. But we cannot assume that, from the perspective of any (active) agent, some other agent, say $i$, is non-strategizing, because that agent may not know whether $i$ is active until the whole set of active agents is revealed. That is, the non-active agents cannot be assumed to be non-strategizing, because the protocol (AE) must be prepared for ANY of the $n$ agents to be chosen (by the adversary) as active, and must therefore prevent their potential deviations. **Question 3:** *Could you clarify the connection between CR and CGT?* Response: Formally speaking, CR is a communication problem in which each player has a packet to send successfully (i.e., alone) on the multiple-access channel to which it is connected (together with other agents).
CGT is a learning problem in which a hidden set must be discovered by asking queries and receiving feedback. The main conceptual similarity between CR and CGT is the environment -- agents decide to transmit (equiv., be included in the query) or not, and they receive feedback from the channel (equiv., the broker of the CGT process -- the feedback function). The differences are in the details: the CR problem -- as a communication problem -- may adopt communication features, such as acknowledgments (i.e., feedback received only by a single transmitter), the possibility of sending actual information to be received by others in case of a successful/alone transmission (apart from the feedback), the possibility of performing fake/jamming transmissions (even if it is an alone transmission, it does not fulfill the goal of CR because the actual packet has not been transmitted), etc. Another subtle difference is that in CR each player has to transmit alone at least once in order to fulfill its goal, whereas in CGT we may sometimes reveal the hidden set based on feedback analysis, even if not every active element occurred in a singleton intersection with a query. We also provide some intuitions about the transformation from CGT to CR in Supplementary Materials Section 5; we could copy it to the main part, space permitting. --- Rebuttal Comment 1.1: Title: Thanks Comment: Thank you for the detailed responses. I would like to raise my score to 6.
Summary: In this paper, the authors study the problem of combinatorial group testing in a new framework, where the elements to be found are selfish players. The authors propose an algorithm such that no agent is willing to deviate from the algorithm's strategy (thereby being in an adversarial equilibrium). Strengths: - The paper is well-written and very clear. The graphical example the authors provide helps in understanding the mechanism behind the algorithm. - The authors provide lower bounds for the novel problem setting and an algorithm to solve the problem. - I checked the proofs and the results seem sound and well-explained. Weaknesses: - It is not very clear to me what the application of this approach is. What kinds of applications cannot be modeled with classical CGT but can be modeled with this multi-agent setting? In which cases do these selfish agents appear? I saw that there are some examples in the appendix, but it would be helpful to spend more time describing them. - I suggest moving to the main paper an intuition for the presented lower bounds. It is always helpful to understand the hardness of the problem. - What are the main technical challenges with respect to Chionas et al.? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No potential negative societal limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. Below we give our answers. If the reviewer believes that we have adequately addressed all of her/his concerns, we would very much appreciate it if she/he reconsidered upgrading the score for our paper. **Question 1:** *It is not very clear to me what the application of this approach is. What kinds of applications cannot be modeled with classical CGT but can be modeled with this multi-agent setting? In which cases do these selfish agents appear? I saw that there are some examples in the appendix, but it would be helpful to spend more time describing them.* Response: Our CGT+AE framework is not actually competing, in terms of applications, with the classical CGT; it is rather an add-on. It could be applied to **assure fairness** in any application of CGT that assumes some autonomy of the elements. For instance, one could consider new blockchain mechanisms in which miners compete to add their blocks to the blockchain in a fair way. To do so, they could pick or be assigned a random ID, and our new framework assures that all blocks will be efficiently added, in a random order (as no miner has any incentive to deviate from the search going through random IDs). Another application that uses the full CGT setting is where we are given a distributed database with $n$ selfish servers, where some $k$ of those servers hold $k$ pieces of information that a user is looking for in the database. In the beginning, the IDs of these servers are not known to the user (in this sense, we could model it as an adversarial choice). Each of the $k$ servers wants its information to be found ("released") as early as possible (because it can use its resources to serve the next query when it is free and does not need to wait to "release" its information, or could even be rewarded for promoting the information stored on it).
A possible approach could give random IDs to the servers before running a CGT+AE search, thus also ensuring fairness in the treatment of the servers, unless they were to deviate (but that is prevented, or at least discouraged, by the solution being an AE). A similar technique could be applied to master-worker systems, including Distributed/Decentralized/Federated ML and AI. In order to **avoid biases** when processing the results of the jobs in some order, the master could require the workers who are ready (in our model -- active) to submit their results in random order. To do so, the master attaches random IDs to the jobs sent to workers, and asks those of them who are ready to execute the CGT protocol, which is an AE. The CGT part assures that all work is collected. Random IDs of jobs assure a random order, provided no worker deviates. The AE part assures that no matter which set of workers is ready, none of them is incentivized to deviate. There are also several other applications of CGT considered in recent ML/AI publications, to different types of searches, string mining, etc.; cf.: - J. Engels, B. Coleman, and A. Shrivastava. Practical near neighbor search via group testing. NeurIPS 2021 - D.R. Kowalski and D. Pajak. Light agents searching for hot information. IJCAI 2022 We will include these applications in the proceedings version of our paper (as typically 1 extra page is available in the proceedings) if the paper is accepted. **Question 2:** *I suggest moving to the main paper an intuition for the presented lower bounds. It is always helpful to understand the hardness of the problem.* Response: If our paper gets accepted, we will add these aspects to the main body of the paper, as well as the details of the above application to blockchain. (NeurIPS typically allows one more page in the proceedings.)
**Question 3:** *What are the main technical challenges with respect to Chionas et al.?* Response: The main challenge arises from the fact that our work considers adaptive strategies while the one by Chionas et al. considers only non-adaptive strategies. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed rebuttal. I followed the discussion with the other reviewers and carefully read your answer. Due to this, I continue to recommend the acceptance of the paper.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their effort and valuable feedback. Below we address the two main requests: providing more applications and describing the technical challenges compared to Chionas et al. IJCAI 2023. We address other questions in individual rebuttals to the reviewers. **More details on applications of the CGT+AE framework.** Response: Our CGT+AE framework is not actually competing, in terms of applications, with the classical CGT; it is rather an add-on. It could be applied to **assure fairness** in any application of CGT that assumes some autonomy of the elements. For instance, one could consider new blockchain mechanisms in which miners compete to add their blocks to the blockchain in a fair way. To do so, they could pick or be assigned a random ID, and our new framework assures that all blocks will be efficiently added, in a random order (as no miner has any incentive to deviate from the search going through random IDs). Another application that uses the full CGT setting is where we are given a distributed database with $n$ selfish servers, where some $k$ of those servers hold $k$ pieces of information that a user is looking for in the database. In the beginning, the IDs of these servers are not known to the user (in this sense, we could model it as an adversarial choice). Each of the $k$ servers wants its information to be found ("released") as early as possible (because it can use its resources to serve the next query when it is free and does not need to wait to "release" its information, or could even be rewarded for promoting the information stored on it). A possible approach could give random IDs to the servers before running a CGT+AE search, thus also ensuring fairness in the treatment of the servers, unless they were to deviate (but that is prevented, or at least discouraged, by the solution being an AE).
A similar technique could be applied to master-worker systems, including Distributed/Decentralized/Federated ML and AI. In order to **avoid biases** when processing the results of the jobs in some order, the master could require the workers who are ready (in our model -- active) to submit their results in random order. To do so, the master attaches random IDs to the jobs sent to workers, and asks those of them who are ready to execute the CGT protocol, which is an AE. The CGT part assures that all work is collected. Random IDs of jobs assure a random order, provided no worker deviates. The AE part assures that no matter which set of workers is ready, none of them is incentivized to deviate. There are also several other applications of CGT considered in recent ML/AI publications, to different types of searches, string mining, etc.; cf.: - J. Engels, B. Coleman, and A. Shrivastava. Practical near neighbor search via group testing. NeurIPS 2021 - D.R. Kowalski and D. Pajak. Light agents searching for hot information. IJCAI 2022 We will include these applications in the proceedings version of our paper (as typically 1 extra page is available in the proceedings) if the paper is accepted. **Main (technical) challenges with respect to the paper by Chionas et al.** Response: The main challenge arises from the fact that our work considers all possible strategies (adaptive and non-adaptive) while the one by Chionas et al. considers only non-adaptive strategies: - Thus, on the one hand, the adaptiveness considered in our work allows building a wider class of strategies that adapt to the feedback history of the game, possibly revealing the hidden set much better and faster. This opens additional opportunities for designing more efficient algorithms, but makes the proofs of lower bounds more difficult (as they need to hold for any feasible algorithm).
- However, on the other hand, it is more difficult to assure that such adaptive strategies form an equilibrium, compared to the analysis of non-adaptive strategies in Chionas et al. This is because there are many more deviating adaptive strategies than non-adaptive ones -- i.e., a deviating agent can decide about a deviation online, even after part of the hidden set has been revealed, while a non-adaptive deviating strategy must fix the changes to its strategy in advance (and they stay the same during the whole game, regardless of which other elements are in the actual hidden set). In particular, in our analysis, we could not just fix a deviating strategy and consider any hidden set (as in Chionas et al.); we also need to take into account deviations occurring after some elements from the hidden set have been revealed during the game. Other challenges, though not as critical as the above, arise from subtle differences between the Contention Resolution problem (focused on in the other work) and CGT (focused on in our work); for instance, we do not assume a feature analogous to "jamming transmissions", which was helpful in the case of non-adaptive CR to enforce a "punishment mechanism". See also our response to Reviewer X2Ux for more details on the differences between CGT (considered in this work) and the CR problem (studied by Chionas et al.). As a result of the aforementioned new challenges, our algorithms and lower bounds are **entirely different** (in terms of design, formulas, and analysis) from the ones in Chionas et al.
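For readers less familiar with the query/feedback mechanics shared by CGT and CR, the following is a minimal sketch of classical, *non-strategic* CGT with Boolean feedback. This is our own illustration, not the AE protocol or BS_Jumps from the paper; `query` and `reveal_by_halving` are hypothetical names:

```python
def query(hidden_set, subset):
    """Boolean CGT feedback: does the query intersect the hidden set?"""
    return bool(hidden_set & subset)

def reveal_by_halving(n, hidden_set):
    """Find all active elements among {0, ..., n-1} by binary splitting.

    Uses O(k * log n) queries when |hidden_set| = k -- the classical,
    non-strategic baseline; the AE framework asks what happens when
    active elements may refuse to behave as `query` assumes.
    """
    found = set()
    stack = [set(range(n))]
    while stack:
        group = stack.pop()
        if not query(hidden_set, group):
            continue                 # no active element in this group
        if len(group) == 1:
            found |= group           # singleton intersection: revealed
            continue
        items = sorted(group)
        mid = len(items) // 2
        stack.append(set(items[:mid]))   # recurse on both halves
        stack.append(set(items[mid:]))
    return found
```

With truthful agents this baseline reveals the hidden set efficiently; the punishment strategies discussed above exist precisely because strategizing agents can drop out of (or stay in) queries to their own benefit.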
NeurIPS_2023_submissions_huggingface
2023
Maximize to Explore: One Objective Function Fusing Estimation, Planning, and Exploration
Accept (spotlight)
Summary: This paper studies reinforcement learning in the general function approximation setting. The authors propose a new MEX framework, which unifies exploration and exploitation within an unconstrained optimization objective. Under structural assumptions of low GEC, they establish a regret upper bound for learning with their framework. Moreover, they conduct experiments in Mujoco under both model-based and model-free settings and achieve promising results. Strengths: Computational efficiency is indeed a crucial shortcoming of existing provably efficient algorithmic frameworks. I'm glad to see some effort to investigate how to close those gaps. The idea is clean, and the proof is sound to me. The experiments look interesting and promising. Weaknesses: (1) For the model-free setting, the objective (Eq. 3.1) may not be as easy to optimize as it looks. By Eq. 3.3, the definition of L_h^k, solving Eq. 3.1 requires solving a mini-max optimization problem. On the other hand, the constrained optimization objectives in the previous literature (like [1] and [2]) can also be directly converted to a Lagrangian form, which can likewise be regarded as one objective (with a minimax optimization problem). I don't think it is easy to conclude that the objective in this paper is indeed easier to implement than the others. (2) There is not much technical novelty. The proof techniques can be found in the previous literature. The maximization objective is quite straightforward: it just converts the confidence-interval-constrained optimization objective of the previous literature into a Lagrangian-style objective, although there are indeed some differences (here the dual parameter is fixed). [1] Jin et al., Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms [2] Du et al., Bilinear Classes: A Structural Framework for Provable Generalization in RL Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) For the model-free setting, does there exist a general choice of the $l$ function in Assumption 3.1? When Assumption 4.2 is satisfied, does the $l$ function always exist? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1: For the model-free setting, the objective (3.1) may not be as easy to optimize as it looks. By (3.3), the definition of L_h^k, solving (3.1) requires solving a mini-max optimization problem. On the other hand, the constrained optimization objectives in the previous literature (like [1, 2]) can also be directly converted to a Lagrangian form, which can likewise be regarded as one objective (with a minimax optimization problem). I don't think it is easy to conclude that the objective in this paper is indeed easier to implement than the others.** Firstly, we highlight that MEX is a maximization objective as a whole, w.r.t. value + loss. In contrast, using the Lagrangian form to solve the previous constrained optimization program gives a minimax objective w.r.t. value + dual * loss, which is indeed harder to solve than MEX. If coupled with the choice of $L_h^k$ in (3.3) [1], their objective would be even harder than minimax optimization. Secondly, we need to clarify that the definition of $L_h^k$ in (3.3) is only a specific estimator construction in theory. It is used to handle the double-sampling issue and achieve a sharper supervised guarantee (Proposition 5.1). Under the Bellman completeness assumption (Assumption 3.1), the extra infimum term can help us control the variance term involved in estimating quantities like $(E[\mathcal{E}(f)])^2$. Please refer to **General Response Q1** for more explanation of this. On the other hand, another straightforward choice of $L_h^k$ is to use the sample mean over $m$ trajectories as the estimator to achieve low variance at the cost of a worse estimation guarantee (see [2] for a similar algorithmic design). From this perspective, the infimum term in (3.3) is not necessary, and solving (3.1) only involves maximization instead of minimax optimization. Finally, in practice, we can simply choose $L_h^k$ to be the squared TD error as an approximation, which gives a pure maximization objective.
Also, for the model-based case, a simple log-likelihood estimator (3.5) is enough, which also doesn't involve minimax optimization. **Weakness 2: There is not much technical novelty. The maximization objective is quite straightforward: it just converts the confidence-interval-constrained optimization objective of the previous literature into a Lagrangian-style objective, although there are indeed some differences (here the dual parameter is fixed).** We clarify our technical novelty in the following. As discussed in our paper, MEX is not simply a Lagrangian form of the optimistic planning method proposed by existing works, e.g., [1, 2]. The theoretical gap is also non-trivial. Firstly, using Lagrangian duality to transform the constrained optimization into a minimax optimization requires both the objective function and the constraints to be convex, which in general does not hold for these optimistic planning methods. Secondly, the analysis techniques in the previous papers with optimistic planning can **NOT** be generalized to our algorithm, and new proof techniques are needed. To illustrate this, we point out that in the regret decomposition (Line 871), we make a tighter analysis of the value difference $V^\star -V_{f^k}$ (Term (i)). In traditional optimistic planning papers, the selected hypothesis $f^k$ must maximize $V_f$ s.t. $L(f)\le \beta$ for some $\beta$. As the concentration analysis shows that $L(f^\star)\le \beta$, they simply upper bound the model value difference $V^\star -V_{f^k} = V_{f^{\star}} - V_{f^k}$ by zero. In MEX, by contrast, we do not solve such a constrained optimization problem; hence we are not proving that $V^\star -V_{f^k}\le 0$ for the selected hypothesis $f^k$. Instead, our MEX objective (3.1) shows that its upper bound can be further refined as $-L(f^k) +L(f^\star)$. It turns out that this term can cancel some parts in the upper bound of the other value difference $V_{f^k} - V^{\pi_{f^k}}$ (Term (ii) in the regret decomposition).
The key here is to show the concentration-type inequality (Assumption 4.3). After this cancellation, the remaining parts of both Term (i) and Term (ii) are relatively easy to handle. In contrast, traditional optimistic planning papers upper bound the whole Term (ii) $V_{f^k} - V^{\pi_{f^k}}$, which relies on more complicated arguments. Hence our proof technique is novel compared with the previous literature. **Question 1: for the model-free setting, does there exist a general choice of the $l$ function in Assumption 3.1? When Assumption 4.2 is satisfied, does the $l$ function always exist?** Firstly, we note that the introduction of the abstract function $l$ in Assumption 3.1 is mainly due to technical considerations in our paper, since we aim to cover all existing known theoretically tractable model-free MDP instances, e.g., MDPs of the bilinear class [2], which is itself defined based upon the abstract function $l$ (Line 944). Indeed, in most cases the natural loss choice in practice is the TD error $f_h(x_h,a_h) - r_h - f_{h+1}(x_{h+1})$. Secondly, the existence of such an $l$ should actually be regarded as part of Assumption 4.2 (low GEC) for the model-free setting, since the discrepancy function $\ell$ in Assumption 4.2 is chosen as $\ell_f = (E_{x_{h+1}\sim P_h(\cdot|x_h,a_h)}[l_f(x_h,a_h,x_{h+1})])^2$ (see Eq. 5.1). In Section 5 and Appendix E, we clarify the choice of $l$ for the specific MDP instances we consider, which satisfies Assumptions 3.1 and 4.2 simultaneously. This demonstrates the existence of the desired function $l$. Thanks for your question, and we will make it clearer in the revision. [1] Jin, Chi, Qinghua Liu, and Sobhan Miryoosefi. "Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms." Advances in neural information processing systems 34 (2021): 13406-13418. [2] Du, Simon, et al. "Bilinear classes: A structural framework for provable generalization in rl."
International Conference on Machine Learning. PMLR, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the response by authors. My concerns are addressed and I would like to keep the score for acceptance. --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: Dear Reviewer J9HB, Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. Your comments greatly assist us in strengthening the overall quality of our work. Best regards, Authors
Summary: This theory paper proposes an algorithmic framework where the "hypothesis" (model for MB methods, Q function for MF methods) for each iteration is chosen by maximizing one objective: the hypothesis likelihood (i.e., NLL for MB methods, TD error for MF methods) plus the expected returns (value function at the initial state). The paper provides regret guarantees matching prior work. The paper proposes a practical variant of the method (building on top of MBPO and TD3) that shows some promise on sparse versions of benchmark continuous control tasks (while matching performance on the standard, non-sparse versions). Strengths: **Originality**: To the best of my knowledge the proposed framework is original. * One prior paper that's related to the model-based version of this paper is https://arxiv.org/abs/2110.02758. While that paper lacks the theoretical contributions of this paper, and comes at the problem from a different perspective, the resulting algorithms look surprisingly similar: both learn a model that is optimized to (1) have high likelihood and (2) produce high-return policies. I'd be curious to see if the current paper also performs well on the stochastic gridworlds used in the prior paper to showcase the benefits of the "optimistic" model. **Clarity**: I generally found the writing clear, which is especially impressive for a large theory project packed into 9 pages. The bottom of this text block has a few suggestions for improving clarity. A few high-level things * The related work section is generally well written, including a pretty thorough review of prior work. One suggestion is to explain the differences from prior work. What are the limitations that this paper will address? * I didn't get much from reading Section 5; I'd prefer that the space be spent on more intuition for the previous section, or more empirical results.
**Significance**: Overall, it seems like the proposed framework is a novel and potentially useful way of designing better RL algorithms. I especially appreciate the simplicity of the proposed approach. I think that the paper may be a bit closer to prior work than the paper makes it out to be (see comment under Originality, and the comment about Gumbel), but it still seems to represent somewhat of a departure from the conventional ways of thinking about RL algorithms. Like most new algorithmic frameworks, it seems like there may be a few kinks that have to be worked out empirically (how to choose $\eta$; is there a way of implementing the model-free version without the extra CQL term). Nonetheless, it seems like the paper could inspire more work in this direction. **Strengths** * The paper is generally well written, and includes a thorough review of much prior work. * The proposed framework is derived for both model-free and model-based algorithms. * An empirical version of the proposed method is applied to reasonable continuous control tasks. This is really nice to see in a theory paper. Weaknesses: * In a few spots, I felt like the paper's claims were not entirely substantiated: * "One Objective to Rule Them All" in the title -- Without comparisons to "all" prior methods, I don't think this is a fair claim. Perhaps a more accurate title would be "An Objective Fusing Estimation and Planning for Exploration" * "algorithms predominantly undertake three tasks" -- This seems to be describing methods based on Thompson sampling and posterior sampling, but I'm not sure that R-MAX-style methods fit this mold * "outperforms MBPO by a stable margin ... showcases greater sample efficiency" -- I'm a bit unsure what these statements are referring to. My read of Figure 1 is that there might be statistically significant gains on 2 / 8 tasks (cheetah-vel-sparse, ant-vel-sparse).
I would still vote for accepting the paper even if it only outperforms baselines on 2/8 tasks (but I think that accurately portraying the results is very important); this is especially true if we see large benefits in some tabular settings (see suggestion under Originality). * The paper alludes to "data-dependent level-sets" in a number of places, but never defines what these are. I'd recommend either including a few sentences describing similarities/differences, or moving this discussion to the appendix. * I have a few questions about some of the technical details (see below). * From a practical perspective, the model-free version of the method seems to violate conventional wisdom: many practical TD methods (e.g., TD3, MBPO) choose the Q function that has the lowest Q-values, whereas the proposed method chooses the one with the highest Q-values. Not only does this make me a bit concerned that the theoretical framework might not always work well in practice (as evinced by the need for the additional CQL regularizer), but it also makes me wonder exactly how the model-free version of the method is implemented when combined with TD3 (does it take the min or max of the Q functions)? At least for the model-free version, one thing that might help would be to show that using the CQL regularizer allows the _same_ method to achieve good results in both online and offline settings. --------------------- **Minor comments**: * In the abstract, mention that this is going to be a theory paper. I expected this to be an empirical paper until partway through the intro, when I realized that all of the citations were referring to other theory papers. As such, I found the claim that prior methods "involve impractical algorithmic components" a bit strange, as prior _practical_ methods cannot require impractical components. * L45 -- L47: I found this part a bit unclear because it was unclear what the optimization variable is (a policy, a model, data). This point is clarified later.
* L50: I found this part a bit unclear because it was unclear what the hypothesis class was over (policies, models, etc). * Some small grammar errors throughout. An automated grammar checker should catch these (e.g., dropped articles) * L80 -- This definition could be cut, as it's already included a few paragraphs earlier. Cutting this would make it possible to remove some of the whitespace hacks. * L101 -- What is "BE"? * Empirically, in many TD algorithms the TD error _increases_ as the policy improves. The standard explanation is that collecting non-zero rewards results in higher TD errors. This seems potentially related to the analysis in this paper. * L260 -- Including a table for comparing these prior methods might help. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How important is the value of $\eta$ for both the theoretical and empirical results? 2. L147 "Without loss of generality" -- Can the authors elaborate on this point? My guess is that we can ignore stochastic rewards because we could define a new MDP that has a copy of each state for each possible reward that that state could have (and update the transition probabilities accordingly). I'm not sure why it's OK to assume that the reward function is known. 3. There's a sense in which the proposed method resembles posterior sampling methods, by drawing a connection with Gumbel-softmax-style sampling. One way to sample a random variable is to _deterministically_ choose the value that maximizes likelihood + noise, for an appropriately chosen noise distribution (e.g., the Gumbel distribution for categorical distributions). I'm curious if we can interpret posterior sampling methods as a version of Eq. 3.1 where the first term ($V(x)$) is replaced by an appropriate noise distribution (which has nothing to do with the value). 4. L206: Does $V_f^\pi$ make sense for both the model-based and model-free versions of the method? 
For the model-based version, I'm interpreting this as the expected returns of policy $\pi$ under model $f$. But for the model-free version, I'm unsure if this is supposed to be the expected returns of $\pi$, or the estimate from (value function) $f$. 5. L206, "reduced to vanilla actor-critic": Is this true for the model-based version of the method? 6. Is the second term in Eq. 3.3 required? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are not discussed in the main paper. The appendix has a cursory paragraph on limitations. I would recommend including the limitations in the main text, and including a transparent discussion of benefits/limitations of the proposed method relative to prior methods. For example, the first term in Eq. 3.1 seems to sometimes cause optimization problems. Ensemble-based methods might be more stable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Originality: Compare MEX with [1] on Gridworld environments.** Based on [3], we compare MEX on Gridworld with [1] in Figure 2 of **General Response**. **Clarity: Comparison to related works and organization of Section 5.** Please see **General Response Q3** for a brief comparison. We will add the comparison and reorganize Section 5 in the revision. **Weakness 1: Some of the paper claims were not entirely substantiated.** - *About the title*: By using "all", we meant to refer to the three tasks of "Estimation", "Planning", and "Exploration", because our MEX objective can fuse all three components in a single objective. - *About "algorithms predominantly undertake three tasks"*: The tasks of estimation, planning, and exploration are the main components of most existing sample-efficient online RL algorithms, including but not restricted to Thompson-sampling-style methods. Essentially, all of these components are necessary for both model-free and model-based algorithms (TS-style methods can also be either model-free or model-based). - *About "outperforms MBPO by a stable margin,..., showcases greater sample efficiency":* We clarify that the mean return of MEX-MB is higher than that of the MBPO baseline in the sparse-reward tasks, including cheetah-vel-sparse, ant-vel-sparse, hopper-vel-sparse, and ant-vel-sparse, especially the first two. Besides, MEX-MB outperforms MBPO with fewer samples in the standard MuJoCo tasks, including walker2d, half cheetah, and ant. We will revise our statement to make it more accurate. **Weakness 2: About "data-dependent level-sets" in a number of places.** Thanks for pointing that out! We will add a concrete discussion about the "data-dependent level-sets" adopted by previous theoretical works.
**Weakness 3: The model-free version seems to violate conventional wisdom: many practical methods (e.g., TD3, MBPO) choose the Q function that has lowest Q-values.** Firstly, the method of choosing the lowest Q-value does **NOT** address the same problem that MEX wants to solve. TD3 chooses the lower value between two separate Q-networks to handle the overestimation that arises when estimating quantities like $E[\max_{a\in\mathcal{A}}\{Q(x,a)\}]$. In contrast, MEX chooses a high Q-value in order to incentivize exploration. These two methods do not contradict each other and can be implemented simultaneously. That is, when calculating the Bellman target, one chooses the lower value from two separate Q-networks. When optimizing the Q-networks, we still use the objective (6.3) that combines the TD error and the Q-value. Secondly, we note that the CQL regularizer is actually a kind of entropy regularization (Appendix H.2). We use this in order to stabilize training. We highlight that the sign of the regularizer here is also the **opposite** of that adopted by the original offline CQL paper. Finally, we clarify that our method of choosing a high Q-value does **NOT** violate conventional wisdom. Prevalent approaches in deep online RL that achieve good exploration commonly involve a bias toward high Q-values, for instance, ensembles of neural networks [62, 48, 13 cited in paper], intrinsic-motivation-driven methods [26, 3, 10 cited in paper], etc. MEX is consistent with their convention of choosing a relatively high Q-value to help exploration. **Minor comments regarding writing and organization.** Thanks for all your detailed comments! We will try to improve the readability and organization of our paper following your suggestions point-by-point. **Question 1: How important is the value of $\eta$ for both the theoretical and empirical results?** See **General Response Q2** for an explanation.
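The compatibility claimed in the Weakness 3 response above, a TD3-style pessimistic Bellman target combined with an objective that rewards high Q-values to incentivize exploration, can be sketched in a few lines. This is an illustrative NumPy toy: the tabular Q-arrays, the batch of transitions, and the coefficient `eta` are all made up for the sketch and are not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tabular stand-ins for two Q-networks over (state, action) pairs.
n_states, n_actions, gamma, eta = 4, 3, 0.99, 0.1
q1 = rng.normal(size=(n_states, n_actions))
q2 = rng.normal(size=(n_states, n_actions))

# A toy batch of transitions (s, a, r, s').
s = np.array([0, 1, 2])
a = np.array([1, 0, 2])
r = np.array([0.0, 1.0, 0.5])
s_next = np.array([1, 2, 3])

# TD3-style pessimistic Bellman target: take the LOWER of the two Q-networks
# before maximizing over next actions (combats overestimation).
q_next = np.minimum(q1[s_next], q2[s_next]).max(axis=1)
target = r + gamma * q_next

# MEX-style objective: squared TD error MINUS eta * Q-value, so minimizing the
# loss biases the learned Q upward, which incentivizes exploration.
td_error = (q1[s, a] - target) ** 2
mex_loss = td_error.mean() - eta * q1[s, a].mean()
```

The two ideas operate at different points: pessimism enters the *target*, while the exploration bonus enters the *loss*, so they compose without conflict.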
**Question 2: About the known and deterministic reward, without loss of generality.** Thanks for pointing this out! A better statement is that our result can be readily generalized to the case where the reward is stochastic and unknown. The techniques have been presented in, e.g., [2]. Thanks to these previous works, we can present our results in a simplified way. **Question 3: About using Gumbel-softmax to connect MEX to posterior sampling.** This might be correct, but it is not clear whether this could achieve the desired theoretical results if we replace the value function term with a *value-independent* noise. Also, the original Gumbel-softmax trick operates on a finite set of choices, where a Gumbel noise is added for each possible choice. But in our setup, we choose a function from a possibly infinite class. Thus, to implement a Gumbel-softmax trick, a stochastic process indexed by $f$ is required, which is unclear and complicated in practice. **Question 4: Does $V_f^{\pi}$ make sense for both the model-based and model-free versions of the method?** We clarify this as follows. As discussed in Example 2.1, for the model-free version, we use $V_{h, f}(x)$ to refer to $\max_{a\in\mathcal{A}}f(x,a)$. However, we **did not** use the notation $V_{h,f}^{\pi}$. This notation only appears in some intermediate definitions in Example 2.2 for the model-based case and does not appear in the unified algorithm and theory. We may only use the notation $V_h^{\pi}$ to refer to the value function of $\pi$ under the true model. Please check that out in our paper. **Question 5: About the reduction to vanilla actor-critic for the model-based version of MEX.** Here we only aimed to discuss the high-level relationship between the model-free version of MEX and vanilla actor-critic. We will make it clearer in the revision. **Question 6: Is the second term in Eqn. (3.3) required?** See **General Response Q1** for an explanation. [1] Eysenbach, Benjamin, et al.
"Mismatched no more: Joint model-policy optimization for model-based rl." [2] Agarwal, Alekh, and Tong Zhang. "Model-based rl with optimistic posterior sampling: Structural conditions and sample complexity." [3] Wu, Chenyang, et al. "Bayesian optimistic optimization: Optimistic exploration for model-based reinforcement learning." --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Dear authors, Thanks for the detailed response and new experiments. This definitely helps clarify my understanding of the paper. I have one request: please revise the title to be more precise. A title that implies that the proposed method is "ruling over estimation" seems presumptuous and will likely be inaccurate in a few years. Also, please make sure to revise the paper to address the discussion with many reviewers about the differences/similarities with prior work. Best, Reviewer --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: Dear Reviewer wpGc, Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. As a part of the revision, we will change the title to a more precise one and add detailed discussions of prior works. Your comments greatly assist us in strengthening the overall quality of our work. Best regards, Authors
Summary: The paper presents a unified objective for optimizing regret in online RL algorithms with theoretical justification and experimental evaluation. The objective crucially does not depend on posterior estimation or bi-level optimization, making it easier to implement in practice than previous theory-motivated algorithms. The authors provide a thorough theoretical analysis of their framework, covering both model-based and model-free approaches. Strengths: The paper is very well crafted, and provides an algorithm that is both theoretically sound and empirically impactful, which is always a great achievement. Even though the presentation is necessarily dense at times, given the theoretical nature of the main contribution, the paper is fairly accessible, with the core intuitions mostly presented well. The results and the practical algorithm presented in the empirical section at the end are surprisingly intuitive and seem straightforward to implement on top of several model-based RL algorithms, not just MBPO. Caveat: I am not necessarily qualified to comment on the correctness of all mathematical details or the impact of the theoretical contributions. I am overall familiar with most concepts used (i.e. Bellman and Eluder rank) but not a core RL theory researcher, so I might be missing some details. Weaknesses: Given the theoretical nature of the paper, the presentation is necessarily dense, which makes it hard to grasp a couple of intuitive concepts. While the authors do their best to present all main components, here are some suggestions for improvement: - The authors use generic functions several times, most prominently in Section 3, 3.2 and 3.3. They defer concrete realizations to later, but it would be helpful to provide these examples earlier so that readers unfamiliar with the exact approach have an easier time understanding the role that these functions play.
(Similarly in Section 5, 5.1) - The authors use a unified notation for model-free and model-based hypotheses. While this makes generic statements easier to provide, it led to some confusion on my part because I lost track of whether a model-free or model-based hypothesis was meant at some points. Potentially a specialized notation would clarify these whenever a concrete algorithmic framework is discussed. - While I know that this is not always the main stylistic choice in theory papers, I want to encourage the authors to explain why each assumption is used, i.e. what role it plays in the proofs. This is especially important for 3.1, which is very abstract and hard to relate to anything this early in the presentation. The role of the exploration policy is not elaborated on in the concrete implementation, etc. This is promised in line 199, but I cannot find a note on the exploration policy in Section 5. The full loss presented in 6.3 does not clarify how the additional term fulfills all the requirements introduced in Section 4. While I was able to piece together most of it, an explicit explanation here would greatly improve the ease with which the role of the assumptions can be understood. In addition, some intuitive description of what this objective optimizes would be nice to have. The experimental section is missing a discussion on how $\eta'$ is set in practice (also, why is $\eta'$ used instead of $\eta$?). 211: for seek of -> for the sake of? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Assumption 4.3 and Proposition 5: Are there requirements on the exploration policy to guarantee that sufficient coverage is obtained for the supervised learning guarantee? I might be misunderstanding the exact details here, but for the supervised guarantee to hold, it feels like some assumptions on the data are needed? Section 5 (line 279) makes it seem like the choice of $l$ depends on characteristics of the MDP.
If so, what aspects of the MDP need to be known a priori to make this choice? Line 298: Is the choice of the Hellinger distance crucial, or would other choices work here as well? 6.2 If a differentiable model of the environment is learned, would it be possible to obtain the gradient w.r.t. the value function directly, without resorting to a PG-style estimator? This would be directly implementable in MBPO, as the model is fully differentiable using reparametrization. The experimental section results are missing some standard information: what implementation of MBPO was used (this is also important to obtain hyperparameters), how were the environments chosen (especially, why is Humanoid missing), and what is denoted by the shaded area in the plots. Several times "specific MDP structure" is mentioned (lines 201, 275, 304 [I assume here it refers to the GEC assumption?]), but the authors do not fully clarify what is meant by this. What is meant by this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors do not address the limitations of their work. Potential discussion points are: the reliance on structural assumptions in the MDP is not discussed in depth, and the tuning of the parameter $\eta$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1: The presentation is necessarily dense, making it hard to grasp a couple of intuitive concepts.** Thanks for the constructive comments! We address them in the following. - *Function approximations:* we will present the concrete realizations of the general function approximations earlier so that readers can become familiar with them more easily. We will use $\mathcal{F}$ and $\mathcal{M}$ to distinguish model-free and model-based hypotheses in the revised version. - *Bellman completeness assumption:* Please refer to **General Response Q1** for an explanation. **Weakness 3: The role of the exploration policy is not elaborated for the concrete implementations.** Thanks for pointing it out. We actually referred to Appendix E for detailed discussions of each concrete example, including the choice of the exploration policy. We will add these to the main text in the revision. **Weakness 4: About the loss Eqn. (6.3) used in experiments.** We will give more elaboration in the revision. Intuitively, the whole second term of Eqn. (6.3) corresponds to the exploration term $\max_{a_1\in\mathcal{A}}Q_{1,f}(x_1,a_1)$ in Sections 3 & 4, where we also used some tricks to stabilize training. Specifically, motivated by CQL [1], we subtract $E_{x\sim\beta}E_{a\sim \mu}[Q_{\theta}(x,a)]$ from the model value $E_{x\sim\beta}E_{a\sim\pi(\cdot|x)}[Q_{\theta} (x,a)]$, where $\beta$ is the state distribution of the replay buffer, $Q_\theta$ is the parameterized Q-network, and $\mu$ is the uniform distribution over the actions. By introducing $E_{x\sim\beta}E_{a\sim\mu}[Q_{\theta}(x,a)]$, we can reduce the variance when optimizing $\theta$ for the model value, which stabilizes the training process. **Weakness 5: About the hyper-parameter $\eta'$ in the experiments.** Please refer to **General Response Q2** for an explanation.
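As a concrete illustration of the baseline subtraction described in the Weakness 4 response above, here is a minimal NumPy sketch. The quantities `q_theta`, `beta`, `pi`, and `mu` are toy tabular stand-ins for the paper's networks and distributions, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

n_states, n_actions = 5, 4
# Toy table standing in for the parameterized Q-network Q_theta(x, a).
q_theta = rng.normal(size=(n_states, n_actions))

# beta: state distribution of the replay buffer; pi: current policy pi(a|x);
# mu: uniform distribution over actions (names follow the response above).
beta = np.full(n_states, 1.0 / n_states)
pi = rng.dirichlet(np.ones(n_actions), size=n_states)
mu = np.full(n_actions, 1.0 / n_actions)

# Raw exploration term: E_{x~beta} E_{a~pi}[Q_theta(x, a)]
value_pi = beta @ (pi * q_theta).sum(axis=1)
# Subtracted baseline: E_{x~beta} E_{a~mu}[Q_theta(x, a)]
value_mu = beta @ (q_theta @ mu)

# CQL-motivated, variance-reduced exploration term.
exploration_term = value_pi - value_mu
```

Because the baseline does not depend on the policy, subtracting it leaves the policy-dependent signal intact while centering the term, which is the variance-reduction effect described above.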
**Question 1: About Assumption 4.3 and Proposition 5.** We clarify that for both model-based and model-free settings, we **do not** require any coverage assumptions on the exploration policy to obtain the supervised learning guarantee. All we consider is a purely online learning protocol. The intuition is that the expectation involved in $E_{\xi_h\sim\pi_{\exp}}$ coincides with the policy used to collect the data, thus the concentration inequality does not require coverage conditions. **Question 2: What aspects of the MDP need to be known a priori to make the choice of $l$?** We note that the technical treatment presented in line 279 is mainly due to the convention of the original GEC paper [2], where the authors mainly wanted to handle the linear mixture model in the model-free case and get a $\sqrt{T}$-regret. Indeed, most of the time the choice of loss is natural in practice. E.g., for the model-free case, $l$ would be the TD error $f_h(x_h,a_h) - r_h - f_{h+1}(x_{h+1})$. Meanwhile, for the model-based case, the loss estimator $\ell$ would just be the log-likelihood. We remark that the linear mixture model can be handled in a model-based manner and does not rely on the fact that its customized loss is known a priori. **Question 3: Is the choice of the Hellinger distance crucial, or would other choices work here as well?** We note that there exist some variants of GEC using the TV norm, but we can show that GEC-Hellinger is always smaller than or equal to GEC-TV. We do not include a detailed discussion on this since the main focus of this paper is the new algorithmic framework MEX. We will add a footnote to comment on this. **Question 4: 6.2 If a differentiable model of the environment is learned, would it be possible to obtain the gradient w.r.t. the value function directly, without using a PG-style estimator?** Yes, that would be possible.
By adopting methods that make the planning process (corresponding to the optimal value function term in MEX's objective) differentiable, e.g., [3], we can obtain the gradient w.r.t. the value function directly in a differentiable model of the environment, e.g., MBPO. **Question 5: The experimental section results miss some standard information; add more experiments on the Humanoid environment.** We used the MBPO implementation and the hyperparameters provided in *mbrl-lib* ([4]). The experiments in the paper are conducted in both the standard and the sparse-reward MuJoCo locomotion tasks. We also clarify that the shaded area in all plots in the paper corresponds to the standard deviation among five random seeds. As for the new Humanoid task, we test the performance of model-based MEX in this task and report the results in Figure 1 of **General Response**, where $\eta^\prime=1e-3$. **Question 6: The authors do not fully clarify what is meant by "specific MDP structure".** Thanks for pointing this out! We will elaborate on this in the revised version. To be specific, - Line 201: we mainly refer to the difference between Q-type and V-type problems in the literature: whether the action distribution used in the discrepancy loss $\ell$ is the same as the historical one or not. If not, we need one step of uniform exploration over the action space in the exploration policy; - Line 275: we refer to the transition and reward model. We mean that for different MDPs, our analysis carries over in a unified manner, instead of a case-by-case one; - Line 304: we mean that the MDP should have some specific structure so that the GEC condition is satisfied. **References:** [1] Kumar, Aviral, et al. "Conservative q-learning for offline reinforcement learning." Advances in Neural Information Processing Systems 33 (2020). [2] Zhong, Han, et al. "A posterior sampling framework for interactive decision making." *arXiv preprint arXiv:2211.01962* (2022). [3] Amos, Brandon, et al.
"Differentiable mpc for end-to-end planning and control." Advances in neural information processing systems 31 (2018). [4] Pineda, Luis, et al. "Mbrl-lib: A modular library for model-based reinforcement learning." arXiv preprint arXiv:2104.10159 (2021). --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for the reply. This covers all my remaining concerns. Thank you for your contribution! --- Reply to Comment 1.1.1: Title: Reply by Authors Comment: Dear Reviewer vhgd, Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. Your comments greatly assist us in strengthening the overall quality of our work. Best regards, Authors
Summary: This paper proposes an RL framework, maximize to explore (MEX), that requires solving an unconstrained optimization problem which encompasses estimation, planning, and exploration. It proposes both model-free and model-based algorithms and further extends this framework to the zero-sum Markov game setting. All these algorithms are proved to be efficient. It also designs practical algorithms based on existing deep RL baselines. Experimental results show that the proposed algorithms outperform baselines in sparse-reward environments. Strengths: This paper proposes a general framework that is adapted to various settings while enjoying favorable theoretical guarantees. Its result generalizes previous ones and is important. It is technically sound, clearly written, and well organized. Weaknesses: The idea of maximizing to explore is not novel, and previous works are not adequately cited. This line of research was first developed by [Kumar & Becker (1982)](https://ieeexplore.ieee.org/document/1102878), who proposed an estimation criterion that biases maximum likelihood estimation with the cost (or value) of decision. In recent years, several works ([Liu et al., 2020](https://proceedings.mlr.press/v119/liu20g.html); [Mete et al., 2021](https://proceedings.mlr.press/v144/mete21a.html); [Hung et al., 2021](https://ojs.aaai.org/index.php/AAAI/article/view/16961); [Mete et al., 2022](https://ieeexplore.ieee.org/document/9751189/); [Mete et al., 2022](https://proceedings.neurips.cc/paper_files/paper/2022/hash/3c601cd5866099648c6dc783e7f39858-Abstract-Conference.html); [Wu et al., 2022](https://proceedings.neurips.cc/paper_files/paper/2022/hash/5bcb807ae43ad0851a6ba6162a866404-Abstract-Conference.html)) have applied this method to different settings and proved some theoretical results. In particular, [Wu et al.
(2022)](https://proceedings.neurips.cc/paper_files/paper/2022/hash/5bcb807ae43ad0851a6ba6162a866404-Abstract-Conference.html) gives a theoretical result similar to the one for model-based MEX in this paper. Rather than completely ignoring previous efforts, it is more appropriate to propose MEX as a further generalization of previous research. The proof of the regret bound in this paper relies on the low generalized eluder coefficient assumption. However, it is not discussed whether this assumption holds in practice, especially when the hypothesis class is parameterized by neural networks. Technical Quality: 3 good Clarity: 3 good Questions for Authors: While the theory indicates that the coefficient balancing value and evidence should change over time, it is set to be a constant in practice. Why is it the case? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper has not discussed its limitations, especially about optimization aspects of the learning objective and the practicality of assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Weakness 1: Lacking discussions of previous research that has similar ideas.** Thanks very much for pointing these related works out! We will add these works and discuss them in detail in the revision. In the following, we compare our work with these works briefly. [1]: While this work first proposed an estimation criterion that biases maximum likelihood estimation with the cost/value, their algorithm is actually different from ours: by their Eqn. (6)-(8) in Section 3, their algorithm performs the estimation of the model $\alpha$ and policy optimization separately, for which they only obtained asymptotic convergence guarantees. Also, how well their decision rule explores remains unknown in theory. In contrast, MEX adopts a single optimization objective that combines estimation with policy optimization, which also ensures sample-efficient online exploration. [2,3,4,5,6]: They study Reward-Biased Maximum Likelihood Estimation (RBMLE) in multi-armed bandits ([2]), linear stochastic bandits ([3]), tabular RL ([4]), and Linear Quadratic Regulator settings (a linearly parameterized model of the MDP, [5,6]), and also obtain theoretical guarantees. These settings are special cases of our proposed framework, and our theoretical guarantees can also be specialized to these concrete cases. As we claim in this paper, our main contribution is to address the exploration-exploitation trade-off under general function approximation, which distinguishes our work from these papers. [7]: It's true that this work considers an algorithm similar to MEX, but our theory differs from theirs in both techniques and results. Our theory is based upon a unified framework of online RL with general function approximation, which covers their setup of a model-based hypothesis with kernel function approximation (RKHS).
More importantly, they derived the asymptotic regret of their algorithm based upon certain uniform boundedness and asymptotic normality assumptions, which are relatively strong conditions. In contrast, we derived a finite-sample regret upper bound for MEX, and the only fundamental assumption needed is an MDP with a low Generalized Eluder Coefficient (GEC), a condition satisfied by almost all known theoretically tractable MDP classes (and therefore covering their RKHS model). Finally, our paper further extends MEX to two-player zero-sum Markov games, where similar algorithms and theories were previously unknown to the best of our knowledge. Also, the works mentioned above do not conduct experiments in deep RL environments, while we propose deep RL implementations and demonstrate their effectiveness in MuJoCo environments. **Weakness 2: It is not discussed whether the low GEC assumption holds in practice, especially when the hypothesis class is parameterized by neural networks.** We note that Proposition 17.20 of [8] provides a general estimation result for the GEC when the function class can be embedded into an RKHS. And the original GEC paper [9] also provides many examples that go beyond linear function approximation and can capture non-linearity to some degree. **Weakness 3: It seems that the provided code for MEX-MB is wrong and is merely an implementation of MBPO.** Our code for MEX-MB is adapted from MBPO. As described in Lines 341-344, MEX-MB differs from MBPO only in the model update procedure by adding the model value gradient during model updates. The corresponding code can be found in 'MEX_MB/mbrl/models/one_dim_tr_model.py'. This also shows that our method is easy to implement with minimal computational overhead to boost performance. **Question 1: While the theory indicates that the coefficient balancing value and evidence should change over time, it is set to be a constant in practice. Why is it the case?** We note that both in the theory and the experiments, we set the coefficient as a constant.
In theory, we set it to be $1/\sqrt{K}$ instead of $1/\sqrt{k}$, where $K$ is the number of episodes for the entire online learning process. **References:** [1] P. Kumar and A. Becker, "A new family of optimal adaptive controllers for Markov chains," in IEEE Transactions on Automatic Control, vol. 27, no. 1, pp. 137-146, February 1982, doi: 10.1109/TAC.1982.1102878. [2] Liu, Xi, et al. "Exploration through reward biasing: Reward-biased maximum likelihood estimation for stochastic multi-armed bandits." International Conference on Machine Learning. PMLR, 2020. [3] Hung, Y.-H., Hsieh, P.-C., Liu, X., & Kumar, P. R. (2021). Reward-Biased Maximum Likelihood Estimation for Linear Stochastic Bandits. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7874-7882. [4] Mete, Akshay, et al. "Reward biased maximum likelihood estimation for reinforcement learning." Learning for Dynamics and Control. PMLR, 2021. [5] Mete, Akshay, Rahul Singh, and P. R. Kumar. "The RBMLE method for Reinforcement Learning." 2022 56th Annual Conference on Information Sciences and Systems (CISS). IEEE, 2022. [6] Mete, Akshay, Rahul Singh, and P. R. Kumar. "Augmented RBMLE-UCB approach for adaptive control of linear quadratic systems." Advances in Neural Information Processing Systems 35 (2022): 9302-9314. [7] Wu, Chenyang, et al. "Bayesian optimistic optimization: Optimistic exploration for model-based reinforcement learning." Advances in neural information processing systems 35 (2022): 14210-14223. [8] Zhang, Tong. Mathematical analysis of machine learning algorithms. Cambridge University Press, 2023. [9] Zhong, Han, et al. "A posterior sampling framework for interactive decision making." *arXiv preprint arXiv:2211.01962* (2022). --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: Thank you for your detailed and comprehensive rebuttal addressing my concerns. Your clarifications have sufficiently addressed my concerns. 
Taking into account your thorough rebuttal and the improvements you've outlined for your paper, I agree to raise the score from 5 to 7 and look forward to seeing the updated version of your paper. --- Reply to Comment 1.1.1: Title: Re:Re: Rebuttal by Authors Comment: Dear Reviewer 44XZ, Thank you for your review and support. We will incorporate your valuable suggestions into our paper as we revise it based on the feedback from all reviewers. Your comments greatly assist us in strengthening the overall quality of our work. Best regards, Authors
Rebuttal 1: Rebuttal: **General Response:** We thank each reviewer for their review. We provide comments on some common questions here. **Q1: Explanations on the Bellman completeness assumption (Assumption 3.1) and the choice of loss function (3.3), especially the second term, for the model-free case.** **A1:** **About the Bellman completeness assumption (Assumption 3.1)**: This assumption does have a very clear technical and intuitive interpretation. It is critical for us to get a sharper supervised estimation guarantee (Assumption 4.3) for the model-free case. In short, to estimate $(\mathbb{E}[(f_{h}(x_h,a_h) - r_h - f_{h+1}(x_{h+1}))])^2$, we cannot simply use the empirical squared TD error $(f_{h}(x_h,a_h) - r_h - f_{h+1}(x_{h+1}))^2$ because $E[X^2] = (E[X])^2 + \sigma^2$, where $\sigma^2$ is the conditional variance. The main technical consideration is that with Assumption 3.1, we can use the second term in Eqn. (3.3) to control the conditional variance $\sigma^2$. The intuition for why we need the function class to be complete under $\mathcal{P}_h$ is that if $\mathcal{P}_h f$ still falls into the function class, then we can relate the conditional variance to the infimum over the function class which appears in the second term of (3.3). Otherwise, the infimum term does not help if $\mathcal{P}_h f$ falls outside the function class. **About the loss function (3.3)**: At a high level, involving the second term of (3.3) is only a specific estimator construction under the Bellman completeness assumption. It is indeed used to handle the double-sampling issue via the Bellman completeness assumption to achieve a sharper supervised guarantee, as discussed above. On the other hand, in the model-free case, another straightforward choice is to use the sample mean over $m$ trajectories as the estimator to achieve a low variance, at the cost of a worse estimation guarantee (see [1] for a similar algorithmic design).
From this perspective, this term is not necessary. **Q2: About the hyper-parameter $\eta$ in theory and experiments.** **A2:** In our theory, we select $\eta$ to be $1/\sqrt{K}$. Our theoretical result would remain the same if we set $\eta = c /\sqrt{K}$ for any constant $c>0$. Such a choice of $\eta$ is vital to balance between exploration and exploitation so as to achieve the overall $\widetilde{\mathcal{O}}(\sqrt{K})$-regret. In the experiments, we make some adaptations of $\eta$ to the experimental setups: - Firstly, slightly different from the theory, in the experiments (Lines 322 to 337), the empirical loss is an average over $(k,h)\in[K]\times[H]$ instead of a direct summation, and $\eta'$ is multiplied in front of the model value instead of the empirical loss. Thus, equivalently, $\eta' = 1/(\eta T)$, where $T=HK$ is the number of timesteps during training and $\eta$ is the coefficient used in the theory. This trick stabilizes the training process. A higher $\eta'$ means a higher weight on exploration, which is often used in sparse-reward settings. - Secondly, since in the experiments the reward is not normalized to $[0,1]$ as in the theory, we need to scale $\eta'$ up by a constant to match the model value and the empirical loss (we often consider the squared TD error). Hence it is natural to further use $\eta'= r_{\max}/(\eta T) = r_{\max}\sqrt{K}/T$. Here $r_{\max}$ is the maximum reward in the experiment, which is around $10$, $T=1e6$, and $K=1e3$. This gives $\eta'\approx 3e-3$. After some trials around $3e-3$, we choose the $\eta'$ as specified in Appendix H.3. **Q3: Comparison with existing theoretical works.** **A3:** We will add more explanation of the difference between MEX and previous theoretical approaches in the revision, which can better position our work. In the following, we make a brief comparison.
- Compared to most version-space-based algorithms, e.g., [1, 2], our framework does not maintain a version space at each iteration and then conduct constrained optimization over that space. Therefore, our algorithm is much easier to approximate in practice. We believe such practical computational guidance is meaningful if we want to bridge theoretical work with practical work; - Compared to the posterior sampling algorithms presented in, e.g., the GEC paper [3], our framework does not require the algorithm to know a good prior distribution. In practice, the availability of such a prior distribution required by posterior sampling is not guaranteed in general. - Compared to the E2D algorithm proposed by [4], our framework only uses a maximization oracle instead of a minimax optimization subroutine. Similarly, the E2D algorithm cannot be approximated efficiently in practice. **References:** [1] Du, Simon, et al. "Bilinear classes: A structural framework for provable generalization in RL." International Conference on Machine Learning. PMLR, 2021. [2] Jin, Chi, Qinghua Liu, and Sobhan Miryoosefi. "Bellman eluder dimension: New rich classes of RL problems, and sample-efficient algorithms." Advances in Neural Information Processing Systems 34 (2021): 13406-13418. [3] Zhong, Han, et al. "A posterior sampling framework for interactive decision making." arXiv preprint arXiv:2211.01962 (2022). [4] Foster, Dylan J., et al. "The statistical complexity of interactive decision making." arXiv preprint arXiv:2112.13487 (2021). Pdf: /pdf/98b1d7935da2d150b4e1c7c7c3eca5e7941daf21.pdf
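The variance decomposition invoked in A1 above ($\mathbb{E}[X^2] = (\mathbb{E}[X])^2 + \sigma^2$) can be checked with a minimal numerical sketch. All numbers below are hypothetical and chosen purely for illustration; this is not the paper's estimator, only a demonstration of why the naive empirical squared TD error is biased upward by the conditional variance:

```python
import numpy as np

# Minimal sketch (hypothetical numbers) of E[X^2] = (E[X])^2 + Var(X):
# the empirical squared TD error estimates E[X^2], which over-shoots the
# desired squared Bellman error (E[X])^2 by the target's conditional variance.
rng = np.random.default_rng(0)

bellman_error = 0.1   # E[X]: the mean TD error we actually want squared
noise_std = 1.0       # conditional std of the stochastic TD target

x = bellman_error + noise_std * rng.normal(size=1_000_000)

naive_estimate = np.mean(x ** 2)   # approx (E[X])^2 + sigma^2 = 1.01
squared_mean = np.mean(x) ** 2     # approx (E[X])^2 = 0.01

print(f"E[X^2] ~= {naive_estimate:.3f}, (E[X])^2 ~= {squared_mean:.3f}")
```

The gap between the two printed quantities is exactly the conditional variance the rebuttal says the second term of (3.3) is designed to cancel.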
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors
Accept (poster)
Summary: This paper tackles the task of non-stationary time series forecasting with the Koopman operator. Specifically, the authors propose Koopa Blocks, which use the Fourier transform to disentangle the time series into time-variant and time-invariant components, and leverage Koopman operators to model their dynamics respectively. The method is evaluated on six real-world benchmark datasets. Strengths: 1. The paper is well-written and easy to follow. 2. This study represents a pioneering endeavor in integrating the Fourier transform and Koopman theory, with the aim of disentangling time-variant and time-invariant components within the Fourier spectral domain. The idea is both innovative and thought-provoking, setting a precedent for future investigations into deep Koopman methods. 3. The additional "Scaling Up Forecast Horizon" experiments demonstrate that Koopa is capable of accurately forecasting in situations involving length mismatches or long-term scenarios. In the realm of Koopman methods, these long-term prediction results are exceptional. 4. The proposed model is memory- and time-efficient compared to previous methods. Weaknesses: 1. The performance improvement in univariate forecasting is incremental, and the improvement in multivariate forecasting is also not significant in most cases. 2. The essence of the deep Koopman method lies in the spectrum of the Koopman operator, because the eigenvalues determine the model's behavior during long-term evolution. However, the authors did not provide any analysis or visualized results regarding the eigenvalues. This leads to the following issues: - Given the considerable forecasting length (up to 192 in this case), it is remarkable how the model manages to achieve accurate predictions without encountering issues such as explosion or decay during the forward pass.
Is it true that all the eigenvalues of the system have a modulus of one, ensuring stable and non-divergent behavior throughout the prediction process? - Intuitively, it would be expected that the time-variant Koopman operator possesses more eigenvalues with larger phase angles compared to the time-invariant operator. I wonder whether this is true in this case. What do the spectra of these two operators look like? Technical Quality: 2 fair Clarity: 4 excellent Questions for Authors: 1. The authors implement the DMD with torch.linalg.lstsq. Were there any concerns regarding gradient explosion during the autograd process through this layer? Did the authors employ any techniques to stabilize the training process? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer pXJ3 for providing a detailed review and insightful questions. **Q1:** About the performance of the proposed method. To the best of our knowledge, the time series forecasting benchmarks have been extensively explored for a long time, and PatchTST, as the SOTA forecasting model, has surpassed concurrent work by a large margin. However, Transformer-based models still suffer from the efficiency problem, which makes them less applicable for practitioners. Meanwhile, the recent revival of linear models presents a simple but effective approach, but their performance still leaves room for improvement on non-stationary time series. Therefore, we pursue **applicable and efficient time series forecasting** based on Koopman theory and achieve a **77.3%** training time and **76.0%** memory reduction on average, while exceeding the current SOTA in **34 out of 48** benchmarks. We hope this helps practitioners achieve performance on par with a heavily trained Transformer-based model while benefiting from the efficiency of linear models. **Q2**: Techniques for stabilizing the training process. As shown in our code implementation, we employ several techniques to stabilize the training process: - **Operator initialization:** We adopt the operator initialization of previous Koopman-based forecasters, where the eigenfunctions start from a standard Gaussian distribution with all-one eigenvalues. - **Explosion checking**: Since we found that explosion occasionally happens in the proposed Koopman Predictor, we introduce an explosion checking mechanism that replaces an operator encountering NaN with the identity matrix. - **Hierarchical disentanglement**: While reproducing the results of previous Koopman forecasters, we often found that training fails with explosive outputs.
As the reviewer insightfully mentioned, we analyzed this phenomenon and found that hierarchical disentanglement could alleviate the problem (please refer to $\underline{\text{Q2 and Q4 of the reviewer M3Xz}}$ for the analysis). Besides, motivated by your suggestion of analyzing the eigenvalues, we conduct spectral experiments to ablate the disentanglement and hierarchical forecasting mechanism. We use $\text{average}(||z|-1|)$ to measure the stability of the operator describing the pattern evolution. On the Exchange dataset, we plot the eigenvalues $z$ and the measurement of the time-invariant operator in the first layer in the following cases: - a. Single-layer model with only the time-invariant operator. - b. Single-layer model with time-invariant and time-variant operators. - c. Two-layer model with time-invariant and time-variant operators. The visualization results are shown in $\underline{\text{Figure 2 of the global response}}$: with the introduction of disentanglement and hierarchical stacking, we observe that the eigenvalues move closer to the unit circle, indicating a more stable operator. **Q3**: Spectral analysis of time-invariant and time-variant operators. Thanks for this valuable suggestion. We visualize the eigenvalues of the time-invariant and time-variant operators on the Exchange dataset in $\underline{\text{Figure 4 of the global response}}$, and we have the following two findings: - We do not find it obvious that more eigenvalues with larger phase angles exist in the time-variant Koopman operator than in the time-invariant operator. For example, in the time series $y_t=\alpha \sin(\omega_0 t)+\beta \sin(\omega_t t) + \epsilon_t$, where $\alpha, \beta, \omega_0$ do not change with time, $\epsilon_t$ is white noise, and $\omega_t$ varies window-wise, the first term $\alpha \sin(\omega_0 t)$ behaves time-invariantly because of its stable periodicity, while $\beta \sin(\omega_t t)$ with varying periodicity is close to our desired time-variant component.
However, no partial-order assumption is made between $\omega_0$ and $\omega_t$. Thus, the phase angles of the eigenvalues may not have explicit relations. - Time-variant Koopman operators have many repeated eigenvalues around zero, which indicates simpler evolution patterns than time-invariant operators. This is possibly because the time-variant KP learns the dynamics within one lookback window, while the other learns the dynamics underlying the whole dataset. --- Rebuttal Comment 1.1: Comment: Thanks for the response. While the concern regarding the performance gain remains significant for the paper, I recognize that the proposed method presents an inspiring idea by combining Koopman theory and spectral methods. Moreover, its high interpretability adds value to the work, making it a valuable contribution to the Koopman learning community. I will raise my rating to weak accept. --- Reply to Comment 1.1.1: Title: Thanks for Your Response and Raising the Score Comment: Many thanks to Reviewer pXJ3 for the insightful pre-rebuttal review and valuable feedback. Your detailed suggestions helped us a lot in the rebuttal and paper revision!
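The operator-stability measure $\text{average}(||z|-1|)$ used in the rebuttal above can be sketched in a few lines. The operators below are hypothetical toy matrices, not the paper's trained Koopman Predictors; the sketch only illustrates what the metric captures:

```python
import numpy as np

def operator_stability(K: np.ndarray) -> float:
    """average(||z| - 1|) over the eigenvalues z of operator K.
    Values near 0 mean eigenvalues close to the unit circle, i.e.
    non-divergent, non-decaying long-term evolution under repeated K."""
    z = np.linalg.eigvals(K)
    return float(np.mean(np.abs(np.abs(z) - 1.0)))

# A pure rotation has all eigenvalues exactly on the unit circle.
theta = 0.3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
print(operator_stability(rotation))        # ~0.0 (stable evolution)

# Scaling the operator moves eigenvalues off the circle: the forward
# pass would explode (|z|>1) or decay (|z|<1) over long horizons.
print(operator_stability(1.5 * rotation))  # ~0.5
```

A smaller value of this measure is what the rebuttal reports when disentanglement and hierarchical stacking are introduced.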
Summary: This work proposes a new architecture inspired by Koopman theory for the time-series forecasting task. To address the non-stationarity problem, this work follows the idea of disentangling local and global time-series representations and forecasting models respectively. Particularly, this work proposes to use a Fourier filter on long sequences to extract more robust and stable patterns for the global model, and a Koopa Block to hierarchically disentangle local and global representations in a residual structure. The proposed method achieves competitive performance compared with recent baselines. Strengths: 1. The structure is well organized. Figures 2 and 3 provide a clear visual representation of how the proposed method works. 2. The experiments are comprehensive. The proposed method can beat the recent method PatchTST on the commonly used long sequence forecasting benchmark. Weaknesses: 1. Although the paper's title revolves around Koopman theory, the proposed method is generic and not closely related to it. From my understanding, most of the technical contributions lie in how to extract local and global representations and design forecasting models for non-stationary time series. The work solely follows the conclusions of Koopman theory and models the latent dynamics with a linear transition model. 2. Technical novelty is limited. The general idea of using local and global representations/models to tackle non-stationary time-series tasks has been used before. This work builds upon these foundations and introduces several modifications by integrating other methods used in recent deep time series forecasting, like residual blocks and Fourier filters. 3. Section 1 needs improvement. It is clear to me how the Koopman operator addresses non-linear dynamics, but Section 1 does not clearly show how existing work on Koopman theory tackles non-stationary problems. Could the authors elaborate more on lines 40-42?
In lines 52-54, could the authors explain the motivation for replacing the reconstruction loss? In lines 30-31, the authors introduce "Non-stationary ...... in different periods". However, the proposed method does not show the capability to detect period boundaries. 4. The experiments are comprehensive, but it would be better to design experiments that revolve around the topic and the proposed method. The ablation studies successfully validate the effectiveness of the design of the local and global models, but the experiments on time and memory cost analysis might lose focus. The authors could also explore the effect of the number of blocks to verify the capability of hierarchical disentanglement. 5. It would be better to discuss the related work on non-stationary time series forecasting, like regime-switching state-space models, time-varying state-space models, etc. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How to set the hyperparameter $D$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer EGYL for the detailed and insightful review. **Q1:** Technical novelty of the proposed method. As clarified in $\underline{\text{Q5 of the reviewer M3Xz}}$, we sum up "how existing work of Koopman theory tackles non-stationary problems" and highlight our differences from previous Koopman forecasters. And we'd like to clarify that our work is not as straightforward as "integrating some other methods used in recent deep time series forecasting, like residual block, Fourier Filter": - To the best of our knowledge, no previous Koopman forecasters consider a stackable modular design and hierarchical forecasting. To address this, we propose specialized Koopman Predictors to learn the respective dynamics. - We make a scalable incorporation with a deep residual structure. It is totally different from the residual connection proposed in ResNet. Instead, we feed **the residual of the fitted input** of one layer to the next layer in the hope of learning hierarchical dynamics, and **aggregate forecasts** from all layers as the model output. - We propose a way to **extract the spectrum distribution of the dataset**, which is based on the FFT but is just the first step of our filter. The Fourier Filter is unlike canonical filters, which are implemented by truncating pre-defined frequencies. Besides, the proposed filter is specially designed for Koopman forecasters, and the extraction of the time-variant component is hardly explored in previous work. **Q2:** The motivation to stack Koopa Blocks and replace the reconstruction loss. Thanks a lot for your valuable suggestion to verify the effect of block stacking; please refer to the results and our analysis in $\underline{\text{Q4 of the reviewer M3Xz}}$. Besides, with the introduction of hierarchical forecasting, the ground truth for reconstruction would be the residual of the fitted dynamics, which is unavailable during training.
Therefore, we adopt the deep residual structure, where the forecasting objective alone works as a good optimization indicator, and we further validate this in $\underline{\text{Table 2 of the global response}}$. **Q3:** How does the proposed model detect the period boundary? Firstly, the Fourier Filter helps the model detect window-shared periodicity. Based on the disentangled time-invariant component with almost-shared periodicities, the global operator learns the lookback-forecast **transition**, which relies less on the specific period boundary and more on the evolution pattern (such as how the phase changes and the decay rate of the eigenfunctions). **Q4:** Reclarifying the position and contribution of our work. The reviewer mentioned that our work "solely follows the conclusion of Koopman theory". We agree with this argument but would also like to clarify that the position of our work is to **propose an applicable architecture based on Koopman theory for real-world forecasting**. Concretely, our contributions mainly revolve around the architecture and can be listed as follows: - A specially designed disentanglement for Koopman forecasters. - A Koopman Predictor with the reconstruction loss removed, achieving end-to-end objective optimization. - Hierarchical disentanglement and a deep residual structure for model scale-up. - An operator updating rule with reduced complexity, derived from the linear properties of the Koopman operator. **Q5:** Discussing related works on non-stationary time series forecasting. Previous works adopt series stationarization to attenuate non-stationarity for better predictability, such as Adaptive Norm, DAIN, and RevIN. Several recent works have also explored model architectures for non-stationary forecasting, especially with theoretical support addressing time-variant properties. As the reviewer insightfully mentioned, State Space Models (SSMs) share lots of similarities with Koopman forecasters.
Representatives such as Kalman filters are widely applied in control systems, and deep SSMs are good at modeling long sequences. Therefore, we'd like to discuss their similarities and differences as follows: (1) Similarities: - They portray the time series as the state transition of a system. - They are able to portray time-variant transitions. - They can introduce external driving factors to address non-stationarity caused by regime/concept drifting. (2) Differences: - Approach to describing time-variance: regime-switching state-space models assume a pre-defined number of regimes and conditions for transitions (Markov process modeling). Time-variant SSMs require exquisitely designed transitions such as the HiPPO matrix, whereas our Koopman forecaster analyzes time-variant dynamics by disentanglement and leveraging DMD. - Model implementation: SSMs are usually implemented by RNNs, with memory to model long-term dependencies. Canonical Koopman forecasters are usually incorporated with AutoEncoders. Besides, the transition learner of Koopman forecasters (e.g., a parameterized matrix) can be simpler and more interpretable. - Objective optimization: SSMs rely on variational Bayesian inference and usually use KL divergence (cross-entropy) for optimization. Koopman forecasters are traditionally optimized by reconstruction and dynamics-advancing losses (MSE), while our model is solely optimized by an end-to-end forecasting loss. To further address your concern, we also include the widely used SSMs LSSL and Regime Switching LSSL as additional baselines. Here are the results. Koopa still achieves the best performance in all benchmarks. |Average(MSE)|ECL|ETTh2|Exchange|ILI|Traffic|Weather| |-|-|-|-|-|-|-| |LSSL|0.284|1.426|1.036|4.580|0.699|0.216| |RegimeSwitching|0.164|0.361|0.176|2.841|0.495|0.189| |Koopa|**0.143**|**0.303**|**0.110**|**1.734**|**0.404**|**0.161**| **Q6:** Hyperparameter analysis of the dimension of the Koopman embedding $D$. $D$ is selected from $\{64, 128, 256, 512\}$.
We have also provided a hyperparameter sensitivity analysis in $\underline{\text{Section 3 of supplementary materials}}$, where we find the performance insensitive to the choice of $D$. --- Rebuttal Comment 1.1: Comment: - Could the authors clarify the connection between the proposed architecture design and Koopman theory, i.e., how the proposed architecture can reduce the approximation error of a non-linear dynamical system? In theory, a higher dimension means a lower approximation error w.r.t. a non-linear function. How to explain that the performance is insensitive to the choice of $D$? - Real data is complicated; different datasets usually have different issues leading to performance bottlenecks. How to demonstrate that the performance improvement over the real-world datasets stems from Koopman theory via a better-approximated non-linear function instead of deep architecture design (residual block, periodicity pattern extraction for this long-sequence forecasting benchmark, global and local disentanglement), whose effectiveness has been widely demonstrated in recent research work? - Can the authors elaborate more on my 3rd question in the weaknesses? --- Reply to Comment 1.1.1: Title: Thanks for the Reviewer's Prompt Reply (Part 1) Comment: Thanks a lot for your valuable prompt comments. We'd like to provide more elaboration on each point of your response. **1. How the proposed architecture reduces the approximation error of non-linear dynamics** Due to the complexity of non-linear dynamics and the need for non-divergent eigenvalue evolution, it is always challenging to directly apply operator learning on a non-stationary series as a single dynamical system, which motivates the following aspects of our architecture. (a) Stackable blocks learning hierarchical dynamics To the best of our knowledge, we make Koopman-based forecasters "deep" for the first time.
Each layer learns a weakly stationary process with well-behaved operators for **stable training** (please refer to $\underline{\text{Q2 of Reviewer pXJ3}}$ for our detailed analysis). Deeper layers learn the residual of the previously fitted dynamics, and layer-wise dynamics are aggregated for the final forecast, which **enhances the model capacity** for learning complex non-linear dynamics. As shown in $\underline{\text{Table 1 of the global response}}$, the difficulty of dynamics approximation in one block can be larger than in multiple blocks. (b) Time-variant operators based on the new disentanglement Inspired by Koopman theory portraying non-linear dynamics as sub-regions individually governed by local operators, we **associate dynamics sub-regions with varying windows** in the time series and calculate time-variant operators by the analytical method DMD. Though previous works also utilize time-variant operators, they do not **explicitly design a time-variant/invariant disentanglement** or **elaborate on the respective dynamics modeling**. In our architectural ablation in $\underline{\text{Table 3 of the main text}}$, we validate the effectiveness of applying the disentanglement, learning a parameterized operator globally, and analytically calculating operators locally. **2. How to explain that the performance is insensitive to the choice of $D$** Thanks a lot for your question with scientific rigor. As we clarified that stacking blocks enhances model capacity, the sensitivity analysis of $D$ was conducted under a fixed number of Koopa blocks $B=3$, which can be large enough for dynamics modeling. To further address your concern, we also check the sensitivity of $D$ under varying block numbers $B$.
| ETTh1 (MSE) | D=64 | D=128 | D=256 | D=512 | | ----------- | ----- | ----- | ----- | ----- | | B=1 | 0.400 | 0.396 | 0.393 | 0.390 | | B=2 | 0.393 | 0.392 | 0.389 | 0.388 | | B=3 | 0.388 | 0.387 | 0.386 | 0.386 | | B=4 | 0.385 | 0.385 | 0.385 | 0.384 | | Exchange (MSE) | D=64 | D=128 | D=256 | D=512 | | -------------- | ----- | ----- | ----- | ----- | | B=1 | 0.150 | 0.140 | 0.134 | 0.125 | | B=2 | 0.134 | 0.129 | 0.126 | 0.123 | | B=3 | 0.127 | 0.124 | 0.125 | 0.120 | | B=4 | 0.120 | 0.118 | 0.118 | 0.117 | It is now clearer that a larger $D$ leads to lower error when $B$ is small, but stacking blocks up to $B=3$ makes the performance insensitive to $D$. We will include these findings and update the hyperparameter sensitivity analysis in the final version of our supplementary materials. --- Reply to Comment 1.1.2: Title: Thanks for the Reviewer's Prompt Reply (Part 2) Comment: **3. How to demonstrate the performance improvement stems from Koopman theory via a better-approximated non-linear function instead of deep architecture design.** To check how much the performance is improved by the Koopman theory-inspired building block, we compare Koopa with the competitive baseline NBEATS [1] and its variants, which are all built on a deep residual architecture with three available choices of the basic block. * Trend Block: MLP learning weighting on a pre-defined polynomial basis $\{1, t, ..., t^p\}$. * Seasonal Block: MLP learning weighting on a pre-defined Fourier series basis $\{\cos(2\pi it), \sin(2\pi it)\}$. * Generic Block: MLP learning point-wise weighting from the lookback to the forecast window. These basic blocks can be replaced by our Koopa Block, which is composed of Disentanglement (Fourier Filter) and Koopman Predictor (Enc+Dec+Operator). Note that NBEATS does not explicitly design a disentanglement either, so we leverage the well-acknowledged trend-seasonal disentanglement proposed by Autoformer [2].
We also tune the periodicity selection of the Fourier series in the Seasonal Block based on the sample rate of the datasets for a fair comparison. | Datasets (MSE) | ECL | ETTh2 | Exchange | ILI | Traffic | Weather | | - | - | - | - | - | - | - | | NBeats Generic Block | 0.190 | 0.246 | 0.050 | 3.302 | 0.620 | 0.141 | | NBeats Trend Block | 0.201 | 0.258 | 0.048 | 2.610 | 0.656 | 0.142 | | NBeats Seasonal Block | 0.181 | 0.238 | 0.056 | 3.359 | 0.613 | 0.135 | | NBeats Decomp + Seasonal + Trend Block | 0.198 | **0.224** | 0.043 | 2.552 | 0.698 | 0.155 | | Koopa Block (Ours) | **0.130** | 0.226 | **0.042** | **1.621** | **0.415** | **0.126** | The results demonstrate the significant performance gain brought by the Koopman-based block (even with disentanglement) on the ECL, ILI, and Traffic datasets. Notably, Koopman theory, which employs varying localized operators, is suitable for tackling distribution shift, and the introduction of measurement function learning enhances the model capacity for non-linearity. It also differs from recent research works (global-local decomposition + respective modeling) in that the localized part there is still modeled by globally shared learnable parameters. **4. Elaborating more on the 3rd question in the weaknesses** (a) Existing works on Koopman theory tackling non-stationary problems. While Koopman theory has been widely incorporated with AutoEncoders for sequential modeling and forecasting [3, 4], **there are still few works that attend to the power of Koopman theory for non-stationary problems**. We sum up the pipeline as follows: - Introduce extra constraints to refine operator stability on non-stationary series (e.g., PCL [5]). However, the operator learned from non-stationary data can be intrinsically unstable, since the temporal statistics and evolution regime can change greatly. So the predicted time series can be over-stationary. - Use local and global operators to deal with temporal distribution shifts.
Since learning one operator for the whole dynamics is hard, KNF [6], as the most related work for non-stationary forecasting, proposes to learn a unified global operator and utilize the self-attention map within the window as local operators. It directly **adds the global and local operators** as a time-variant transition, while we **add the components** given by the respective predictors. It uses a **classical AE structure** with reconstruction and forecasting losses and employs an MLP for the coefficients on pre-defined measurement functions. (b) Elaborating more on lines 40-42. Take the Duffing oscillator shown on the left of $\underline{\text{Figure 1 of the main text}}$ as an example. It is hard to directly find one operator portraying the dynamics, but if we divide the dynamics into three localized sub-regions, several operators with simpler forms can suffice to describe them. Accordingly, for non-stationary time series with a complicated time-variant $K_t$, **we discretize it into several window-wise operators** governing different periods (sub-regions). (c) Elaborating more on lines 52-54 and lines 30-31. As clarified in $\underline{\text{Q2 and Q3}}$, we hope our response has fulfilled the reviewer's expectations, and we would be very happy to answer any further detailed questions. [1] N-BEATS: Neural basis expansion analysis for interpretable time series forecasting. [2] Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting. [3] Deep learning for universal linear embeddings of nonlinear dynamics. [4] Learning deep neural network representations for Koopman operators of nonlinear dynamical systems. [5] Forecasting sequential data using consistent Koopman autoencoders. [6] Koopman neural forecaster for time series with temporal distribution shifts. --- Reply to Comment 1.1.3: Title: Looking forward to your reply. Comment: We sincerely appreciate the time you dedicated to reviewing our paper.
Given the limited timeframe for author-reviewer discussion, please kindly let us know if our response has addressed your concerns. Following your suggestions, we improved the paper in the following aspects: - We clarify how the proposed architecture reduces the approximation error of non-linear dynamics. - We further analyze the hyperparameter sensitivity of $D$. - We add comparisons with deep architecture designs and highlight the performance gain stemming from better-approximated non-linear dynamics with Koopman theory. - We elaborate more on your 3rd question in the weaknesses. Thanks again for your valuable review. Looking forward to your reply. --- Reply to Comment 1.1.4: Title: Request for the Reviewer's attention and feedback Comment: Dear Reviewer EGYL, Thanks again for your valuable and constructive review, which has inspired us to further improve our paper substantially. Following your suggestions, we have clarified the connection between our architecture design and Koopman theory, further analyzed the hyperparameter sensitivity of $D$, demonstrated the performance gain brought by our Koopman-based blocks, and discussed all the weaknesses you mentioned in detail. We have done our best to address your concerns within the limited time and characters. We hope that this new version has addressed your concerns to your satisfaction. We eagerly await your reply and are happy to answer any further questions. We kindly remind you that the reviewer-author discussion phase will end by Aug 21st at 1 pm EDT, with just 4 hours left. After that, we may not have a chance to respond to your comments. Sincere thanks for your dedication! Authors
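The rebuttal thread above repeatedly refers to "analytically calculating operators locally" by DMD, and Reviewer pXJ3 asked about the torch.linalg.lstsq implementation. A minimal, self-contained sketch of that one-step least-squares fit (plain DMD on hypothetical data, using numpy in place of torch; not the paper's full Koopman Predictor) is:

```python
import numpy as np

def dmd_operator(Z: np.ndarray) -> np.ndarray:
    """Fit a local (time-variant) linear operator K with Z[:, 1:] ~ K @ Z[:, :-1],
    i.e. the least-squares one-step transition of the snapshot matrix (plain DMD).
    Z has shape (embed_dim, num_snapshots)."""
    X, Y = Z[:, :-1], Z[:, 1:]
    # lstsq solves X.T @ K.T = Y.T for K.T, so K @ X ~= Y in least squares.
    K_T, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)
    return K_T.T

# Recover a known linear system from its trajectory (hypothetical numbers).
K_true = np.array([[0.9, -0.2],
                   [0.1, 0.95]])
Z = np.empty((2, 50))
Z[:, 0] = [1.0, 0.5]
for t in range(49):
    Z[:, t + 1] = K_true @ Z[:, t]

K_hat = dmd_operator(Z)
print(np.allclose(K_hat, K_true, atol=1e-6))  # True
```

Within each lookback window of a non-stationary series, a fit of this form yields a different local operator, which is the sense in which the rebuttal's time-variant operators are computed analytically rather than learned as shared parameters.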
Summary: The authors address the issue that most real-world time series are non-stationary. To tackle this problem, they introduce Koopa, which combines globally learned (time-invariant) and localized (time-variant) linear Koopman operators to exploit the respective dynamics underlying different components. In practice, Koopa is composed of stackable Koopa Blocks, where each block learns operators hierarchically by taking the residual of the previous block's fitted dynamics as input. The authors evaluate the proposed model on a variety of time-series datasets and compare their method with transformer- and convolution-based deep architectures. Koopa outperforms or performs on par with the existing methods, yet has significantly lower computational costs. Moreover, the proposed method can be used in settings where the forecast horizon must be increased beyond the training set-up. Strengths: Originality: The related work section is well written. It covers both relevant model types in the field: for (a) time series forecasting, TCNs, RNNs and transformers are mentioned, and the authors position the current work in its context; for (b) Koopman operators, the authors provide an overview of the related work and how their work differs. The work proposes a novel method leveraging Fourier analysis to disentangle time-series components, eDMD for estimating the time-variant Koopman Predictor, and the time-invariant Koopman Predictor as a learnable parameter. It differentiates itself from KAEs due to its block-like structure, where the residual of the previous block is fed as input into the next block. Quality: The submission is technically sound. The authors build upon Koopman operator theory and Wold's Theorem. In addition, they experimentally validate whether their proposed sub-component of Koopa, the Fourier Filter, truly disentangles time-variant from time-invariant series, and provide an ablation on the Koopa structure.
Clarity: The work is well written, providing sufficient relevant background knowledge on Koopman operator theory also for readers who are unfamiliar with this theory. Significance: The obtained results are significant. The proposed method outperforms or performs on par with existing time-series forecasting methods. More importantly, the method is very efficient with respect to training time and memory costs compared to transformer- or convolution-based methods. Lastly, the authors showcase that their method is applicable for scaling up the forecast horizon, where standard deep learning architectures fail. In all, based on the submission the authors suggest a very promising modelling technique; however, I am not an expert in Koopman operator theory (nor the works in this sub-domain). Weaknesses: Clarity: When first mentioned, I would recommend using the complete name of the model type rather than only the acronym, for example, temporal convolutional network rather than just TCN; it makes it easier for the reader. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: For the time-variant KP, how do you choose $S$? Is the time-variant KP sensitive to $S$? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have not addressed the limitations of their work. I would suggest the authors include such a section in the final version of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer a2Yj for providing thorough, detailed comments. **Q1:** Hyperparameter sensitivity of the segment length $S$. As per your suggestion, we have checked the robustness of the proposed method with respect to the hyperparameter $S$, which varies in $\{H/8,H/4,H/2,H\}$, where the forecast length is $H$ and the lookback length is $T=2H$.

| segment length $S$ | Exchange | Traffic | ETTh2 | ILI | Weather | ECL |
| :----------------- | :-------- | :-------- | :-------- | :-------- | :-------- | :-------- |
| $H/8$ | 0.046 | **0.458** | 0.244 | **2.326** | 0.129 | 0.157 |
| $H/4$ | **0.044** | 0.481 | 0.244 | 2.351 | 0.127 | 0.149 |
| $H/2$ | 0.045 | 0.494 | **0.241** | 2.354 | 0.127 | 0.141 |
| $H$ | 0.047 | 0.494 | **0.241** | 2.331 | **0.126** | **0.132** |

We find the proposed model insensitive to $S$ on several datasets but sensitive on the ECL and Traffic datasets. This may be because these two datasets contain many variables, while we currently share $S$ across all variables. Since the prediction of some variables may be greatly affected by $S$, the forecasting performance on datasets with more variables can be more sensitive to $S$. **Q2:** About the writing issues. Thanks for your valuable suggestions. We will replace the acronyms of models with their complete names. All the changes will be included in the final version of our work. **Q3:** About the limitations of the proposed method. Thanks a lot for your concern; we discuss the limitations of our proposed method in $\underline{\text{Section 6 of supplementary materials}}$, which can be summarized as follows:
- The design space of the encoder and decoder. We will further consider the embedding of various evolution patterns for better multivariate forecasting and a deeper integration of Koopman measurement functions.
- The model interpretability. We will dig into Koopman mode decomposition to reveal the linear behavior underlying non-stationary time series.
- Incorporation with DMDc. We will consider factors outside the observations to tackle time series forecasting with covariates. --- Rebuttal Comment 1.1: Comment: Thank you for the additional ablation experiment and for voicing the limitations of the presented work. I recommend the authors include the limitations in the main paper rather than the supplementary material. I will keep my score as it is. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks a lot again for your response and every effort on the review. We will include the limitations in the final version of the main paper.
Summary: This paper addresses non-stationary time series forecasting, in which the temporal distribution of the time series changes over time. Koopman theory is introduced in this paper for its fundamental capacity for modeling dynamical systems, and a Koopman forecaster called Koopa is proposed. Concretely, in this paper, the time series is disentangled into time-variant and time-invariant components. For the time-invariant component, Koopa introduces a learnable matrix as the Koopman operator, and for the time-variant component, Koopa leverages eDMD to find the best-fitted matrix that advances the system forward. The proposed framework is technically sound but somewhat lacks novelty. Strengths: 1. Non-stationary time series forecasting is challenging, and it is reasonable to apply Koopman theory to tackle this task. 2. Disentangling time series into variant and invariant components is intuitive. 3. The methods of constructing Koopman operators for both time-variant and time-invariant components are straightforward. Weaknesses: 1. The proposed model lacks novelty. Both disentangling time series and applying Koopman theory are commonly used techniques in time series analysis. 2. The technical contribution of this paper is limited. The extraction of time-variant and time-invariant components is too simple and lacks analysis. The application of Koopman theory is simply the construction of Koopman operators. 3. Hierarchically disentangling the time series lacks intuition and motivation. What is the point of further disentangling the variant component into invariant and variant components? 4. The experiments do not involve highly non-stationary datasets. 5. Koopman operators are highly correlated with the measurement functions. In the time-variant branch, although the model constructs diverse Koopman operators, the measurement, i.e., the encoder, is static. For more technical contribution, a deeper integration of Koopman theory is needed.
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 1 poor Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer M3Xz for providing a detailed review. **Q1:** Novelty of the disentangling module. We'd like to emphasize that our proposed method differs from previous works as follows:
- It considers time-variant/invariant disentanglement, which is essential for Koopman forecasters, and few series decompositions specialize in it.
- The module is not a traditional filter. It filters components by the **spectrum distribution of the dataset**, instead of pre-defined frequency thresholds. Concretely, it collects statistics by FFT. The subset in the top percentage sorted by amplitude indicates **dominant time-invariant properties (e.g., periodicity) shared across the dataset**.

We have conducted an analysis in $\underline{\text{Figure 5 of the main text}}$, which presents the disentangling effects, and the ablation studies in $\underline{\text{Table 3 of the main text}}$ check its necessity and compare the performance of other filter choices. **Q2:** Motivation of the disentanglement. Koopa is essentially based on disentanglement, which is **motivated by Wold's Theorem**. We sum up the motivations as follows:
- Wold's Theorem implies that a weakly stationary series can be decomposed into deterministic and stochastic parts. The former (e.g., a sine wave) can be easily modeled by an operator, while the latter is the output of a linear filter with stationary processes as input.
- We utilize a data-dependent encoder to learn the underlying stationary process and use time-variant operators to portray the varying behavior of the linear filter.

The disentanglement is natural given the series dynamics and has not been explored in previous works. As an intuitive example: $y_t=\alpha \sin(\omega_0 t)+\beta \sin(\omega_t t) + \epsilon_t$, where $\alpha, \beta, \omega_0$ do not change with time, $\epsilon_t$ is white noise, and $\omega_t$ varies with the window. $\alpha \sin(\omega_0 t)$ behaves time-invariantly because of its stable periodicity, while $\beta \sin(\omega_t t)$ has varying periodicity.
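A minimal NumPy sketch of the spectrum-based filter described in Q1 above; the `top_ratio` value and the toy signal are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def fourier_filter(x, top_ratio=0.2):
    """Split a series into time-invariant / time-variant parts by keeping
    only the top-amplitude frequency components (illustrative sketch)."""
    spec = np.fft.rfft(x)
    k = max(1, int(len(spec) * top_ratio))
    top = np.argsort(np.abs(spec))[-k:]          # dominant frequencies
    mask = np.zeros(len(spec), dtype=bool)
    mask[top] = True
    invariant = np.fft.irfft(np.where(mask, spec, 0), n=len(x))
    variant = x - invariant                      # residual spectrum
    return invariant, variant

t = np.arange(256)
x = np.sin(2 * np.pi * t / 16) + 0.3 * np.random.randn(256)
invariant, variant = fourier_filter(x)
```

By construction the two parts sum back to the input series, and a dominant stable periodicity ends up in `invariant`.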
The components learned by Koopa are shown in $\underline{\text{Figure 3 of the global response}}$; we find that
* The time-invariant KP predicts invariant components with a stable period.
* The time-variant KP can predict variant components with a window-wise varying period.

**Q3:** Experiments on highly non-stationary datasets. The reviewer mentioned the paper "does not involve highly non-stationary datasets." Based on ADF statistics, there is 99% confidence to accept that a unit root is present, which indicates that these datasets are highly non-stationary: Exchange, ETTh2, and ILI. We also include Cryptos, evaluated in the non-stationary forecaster KNF. Koopa still surpasses existing forecasting models.

| MSE | Koopa | PatchTST | TimesNet | DLinear | KNF |
| - | - | - | - | - | - |
| Cryptos | **1.105** | 1.111 | 1.110 | 1.116 | 1.108 |

**Q4:** Motivation of the hierarchical disentanglement. The reviewer questioned why we "disentangle variant component into invariant and variant components". Instead, we **disentangle the residual of the fitted time-variant component**, which is motivated by the applicability of Koopman theory and the scalability of deep models:
- The operator eigenvalues determine the long-term evolution, where eigenvalue moduli away from one cause divergent or vanishing evolution. We highlight that **a stable Koopman operator only reconstructs well on weakly stationary series**. So we desire each layer to learn a weakly stationary process hierarchically and feed the residual of the fitted dynamics to the next layer for correction.
- The Koopman Predictor aims not to fully reconstruct the whole dynamics at once, but to partially describe the dynamics, so we do not force rigorous reconstruction in each layer.
- Deep decomposition has been demonstrated as an effective architecture (e.g., N-BEATS, Autoformer) and makes the module scalable for complicated patterns.

We show the effect of stacking in $\underline{\text{Table 1 of the global response}}$.
Here are the observations:
- A single-layer model always leads to the worst forecasting, which is significant on non-stationary datasets (Exchange and ILI).
- Stacking multiple Koopa blocks leads to increased performance on most datasets.

**Q5:** Reclarifying the contributions of applying Koopman theory to deep models. The reviewer stated that "Koopman theory are commonly used techniques in time series analysis." We agree with the argument but want to highlight that **incorporating the theory into the architecture of deep models** is non-trivial. We sum up previous works on Koopman forecasters as follows:
- Utilize an AutoEncoder to learn measurement functions.
- Introduce extra constraints to refine operator stability on non-stationary series.
- Use local and global operators to deal with temporal distribution shift.

Instead, our work tackles the above as follows:
- Design the Koopman Predictor with the reconstruction loss raveled out and incorporate it into a deep residual structure.
- Not employ any consistency constraint but carefully design hierarchical stackable blocks to refine stability.
- Though previous work also utilizes respective operators, they do not explicitly design and apply time-variant/invariant disentanglement.

Our work further copes with realistic problems that previous works have not explored:
- End-to-end optimization: We observe in $\underline{\text{Table 2 of the global response}}$ that the forecasting objective works as a good indicator and end-to-end optimization always achieves better forecasting.
- Operator adaptation: We derive an operator updating rule with reduced complexity for the first time, and Koopa can utilize new snapshots to scale up the forecast horizon.

**Q6**: About the encoder design. Thanks for your suggestion for an encoder design with deeper theory integration. We currently use an MLP for the sake of efficiency.
This remains an open problem, and we have tried to make the encoder not "static", such as learning the weighting on pre-defined measurements, but it did not achieve further performance improvement. We would appreciate it a lot if the reviewer could inspire us with other potential designs. --- Rebuttal Comment 1.1: Comment: Thanks for the efforts made on the response. The authors have clarified their motivations and emphasized the novelty. An additional experiment on Cryptos is included. Most of my concerns have been well addressed and I would like to raise my score to 5. --- Reply to Comment 1.1.1: Title: Thanks for Your Response and Raising the Score Comment: We would like to thank Reviewer M3Xz for providing a detailed, valuable pre-rebuttal review, which helps us a lot in the rebuttal and paper revision. Thanks again for your response and raising the score! In the fi
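As a side note, the hierarchical residual scheme discussed in Q4 of this thread can be sketched as follows. Each toy "block" here reconstructs only the single dominant frequency of its input, a hypothetical stand-in for one Koopman Predictor's partial fit of the dynamics; it is not the paper's actual layer:

```python
import numpy as np

def fit_block(residual):
    """Toy block: reconstruct only the dominant (non-DC) frequency of
    its input -- a stand-in for one layer's partial dynamics fit."""
    spec = np.fft.rfft(residual)
    keep = np.argmax(np.abs(spec[1:])) + 1   # skip the DC component
    out = np.zeros_like(spec)
    out[keep] = spec[keep]
    return np.fft.irfft(out, n=len(residual))

def stacked_fit(x, n_blocks=3):
    """Each block fits the residual left by the previous blocks; the
    final prediction is the sum of all block fits."""
    residual, total = x.astype(float).copy(), np.zeros(len(x))
    for _ in range(n_blocks):
        fit = fit_block(residual)
        total, residual = total + fit, residual - fit
    return total, residual

t = np.arange(240)
x = np.sin(2 * np.pi * t / 8) + 0.5 * np.sin(2 * np.pi * t / 5)
total, residual = stacked_fit(x, n_blocks=3)
```

On this two-tone toy signal the residual norm shrinks block by block, mirroring the idea that each layer corrects what the previous layers left unexplained.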
Rebuttal 1: Rebuttal: ## Global Response to All Reviewers We sincerely thank all the reviewers for their insightful reviews and valuable comments, which are instructive for us in improving our paper further. To cope with non-stationary time series forecasting, this paper proposes Koopman-based forecasters (Koopa) to portray the underlying time-variant properties. Inspired by Koopman theory, we design renovated Koopman Predictors and disentanglement to hierarchically reveal complicated dynamics. We delve into the architectural design for applicable forecasters on real-world non-stationary series, with end-to-end forecasting objective optimization, notably improved model efficiency, and operator adaptation to scale up the forecast horizon. **Comprehensive experiments and module analyses are included. Koopa achieves competitive performance while saving 77.3% training time and 76.0% memory on six real-world benchmarks.** The reviewers generally held positive opinions of our paper, in that the proposed method is "**novel**", "**technically sound**", "**a pioneering endeavor**", and "**the idea is innovative and thought-provoking, setting a precedent for future investigations into the research of deep Koopman methods**"; this paper is "**well-written**" and "**provides a clear visual representation**"; and the experiments are "**comprehensive**" and "**significant**". The reviewers also raised insightful and constructive concerns. We made every effort to address all the concerns by providing sufficient evidence and the requested results. Here is a summary of the major revisions:
- **Analysis of operators (Reviewer 36zV, pXJ3)**: We provide a correlation analysis between the time series and the learned operators. To address the issue of operator stability, we visualize the respective eigenvalues in the Koopman Predictors. The results present the effectiveness of our architectural design for better convergence when training operators.
- **Motivation of proposed modules (Reviewer M3Xz)**: We illustrate our motivation in both theoretical and experimental aspects. We clarify the disentanglement and block stacking based on Wold's Theorem, Koopman theory, and the scalability of deep models. By conducting ablations and evaluating the other possible designs, we verify that our model still achieves the best performance and good generality.
- **Technical novelty (Reviewer EGYL, M3Xz)**: We highlight our differences from previous works on decomposition and Koopman forecasting. We specifically design and apply the disentanglement for Koopman-based forecasters and renovate canonical KAEs, with the reconstruction branch raveled out, by incorporating them into a deep residual structure.
- **Analysis of hyperparameters (Reviewer a2Yj, EGYL)**: We newly add hyperparameter sensitivity analyses on the segment length and the dimension of the Koopman embedding. We also clarify the selection strategy for the model hyperparameters.
- **Discussion of related works (Reviewer EGYL)**: We discuss the similarities and differences between our method and related works on non-stationary time series forecasting. We especially compare Koopa with State Space Models in the aspects of implementation, optimization, and the way time-variance is described. We also conduct experiments on the full benchmarks to check their performance.

The valuable suggestions from the reviewers are very helpful for revising the paper into better shape. We'd be very happy to answer any further questions. Besides, the shared tables of the global response are listed as follows, and the figures are provided in the PDF. > Table.1 Analysis of block number.
| Block Number (MSE) | ILI | ETTm2 | Exchange | Weather | ETTh2 | ECL |
| ------------------ | :-------- | :-------- | :-------- | :-------- | :-------- | :-------- |
| 1 | 2.184 | 0.136 | 0.520 | 0.129 | 0.242 | 0.153 |
| 2 | 1.980 | 0.133 | 0.506 | 0.126 | 0.241 | **0.141** |
| 3 | 1.974 | 0.132 | 0.479 | 0.125 | **0.236** | 0.142 |
| 4 | **1.850** | **0.131** | **0.473** | **0.124** | 0.239 | 0.158 |

> Table.2 Analysis of introducing reconstruction

| ETTh2 (MSE\|MAE) | Predict 48 | Predict 96 | Predict 144 | Predict 192 |
| --------------------- | --------------- | --------------- | --------------- | --------------- |
| Koopa | **0.226** \| **0.300** | **0.297** \| **0.349** | **0.333** \| **0.381** | 0.356 \| 0.393 |
| + Reconstruction | 0.244 \| 0.311 | 0.309 \| 0.356 | 0.339 \| 0.385 | **0.354** \| **0.392** |
| **Exchange** | | | | |
| Koopa | **0.042** \| **0.143** | **0.083** \| **0.207** | **0.130** \| **0.261** | **0.184** \| **0.309** |
| + Reconstruction | 0.047 \| 0.152 | 0.102 \| 0.225 | 0.131 \| 0.263 | 0.235 \| 0.352 |
| **ECL** | | | | |
| Koopa | **0.130** \| **0.234** | **0.136** \| **0.236** | **0.149** \| **0.247** | **0.156** \| **0.254** |
| + Reconstruction | 0.183 \| 0.282 | 0.159 \| 0.260 | 0.165 \| 0.265 | 0.167 \| 0.269 |
| **Traffic** | | | | |
| Koopa | **0.415** \| **0.274** | **0.401** \| **0.275** | **0.397** \| **0.276** | **0.403** \| **0.284** |
| + Reconstruction | 0.518 \| 0.353 | 0.446 \| 0.317 | 0.445 \| 0.325 | 0.444 \| 0.322 |
| **Weather** | | | | |
| Koopa | **0.126** \| **0.168** | **0.154** \| **0.205** | **0.172** \| **0.225** | **0.193** \| **0.241** |
| + Reconstruction | 0.143 \| 0.186 | 0.163 \| 0.208 | 0.178 \| 0.226 | 0.196 \| 0.246 |

Pdf: /pdf/c5b3a80063cece7729fc9fc6c56d3254c9970e0c.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies the time series forecasting problem for weather forecasting, energy consumption, and financial assessment. To propose a model that generalizes under varying distributions, the authors propose Koopa, which is composed of modular Koopman predictors to describe and advance forward series dynamics. The motivation is that Koopman-based methods are appropriate for learning non-stationary time series dynamics. The implementation in the supplementary material is straightforward. The experimental results show the effective performance of the proposed method compared with state-of-the-art methods. Strengths: The paper proposes a novel method to solve the time series forecasting problem. The introduction of the Koopa Block is interesting. The experimental results show that the model outperforms existing methods with more efficient memory usage. Weaknesses: It would be interesting to see more analysis of the Koopa blocks. For example, in Figure 5, the authors showed that the visualized blocks are different, corresponding to different curves. But it is clear that the curves of K1, K2, and K3 share the similarity that they go down first, then go up. It would be interesting to see a correlation analysis if there is one. The experiments show the proposed method achieved better performance with efficient training time and memory usage. I am not sure if there is a convergence analysis and what the parameters are, such as the number of epochs for training and the batch size. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is there any correlation between Koopa blocks if the curves share similarity as shown in Figure 5? 2. How is the convergence of the proposed method and compared methods? 3. What are the parameters, such as the number of epochs for training and the batch size for the experiments?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no theoretical analysis but the main contribution is to propose a novel architecture. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to Reviewer 36zV for providing thorough, insightful comments. **Q1:** The convergence of the proposed method and compared methods. As shown in $\underline{\text{Figure 1 of the global response}}$, we provide the model training curves to check the convergence of our proposed model and the other methods. The training curves of the proposed model (in blue) demonstrate consistent and smooth convergence, indicating its effectiveness in converging towards an optimal solution. **Q2:** Analysis of learned operators in Koopa blocks. As shown in $\underline{\text{Figure 5 of the main text}}$, we have qualitatively shown that non-contemporaneous subseries that differ in **global tendency** are distinguished by the heatmap weights of the operators. As the reviewer insightfully mentioned, they also share **similar series variations, but with varied durations**. We find this is not easy to visualize directly, since it manifests as varying decay rates in the respective evolution of the eigenfunctions. It remains instructive for us to improve model interpretability through further analysis of the Koopman eigenfunctions. Besides, for a quantitative correlation analysis, we sample subseries monthly from the Exchange dataset and calculate the correlation between **global tendency** and **operators**:
- We use linear regression to fit each subseries and use the slope as a manifestation of the global tendency.
- We consider several properties of the learned operators: the sum of elements (Sum), the sum of eigenvalues (Trace), the maximum absolute value of the eigenvalues (2-Norm), and the Frobenius norm (F-Norm).

The Pearson correlation $\tau$ between the slopes and the operator properties is listed as follows:

| | Sum | Trace | 2-Norm | F-Norm |
| ------ | :---: | :---: | :---: | :---: |
| $\tau$ | **0.845** | 0.605 | 0.523 | 0.624 |

We observe a strong correlation between the global tendency and the sum of elements.
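The quantitative procedure just described can be sketched as follows; the subseries and operators here are random placeholders, so the resulting $\tau$ is meaningless and is not the reported 0.845:

```python
import numpy as np

rng = np.random.default_rng(0)

def slope(series):
    """Global tendency: slope of a least-squares linear fit."""
    return np.polyfit(np.arange(len(series)), series, 1)[0]

def operator_props(K):
    """Operator properties compared against the tendency."""
    eig = np.linalg.eigvals(K)
    return {"Sum": K.sum(), "Trace": eig.sum().real,
            "2-Norm": np.abs(eig).max(), "F-Norm": np.linalg.norm(K)}

# placeholder data: 12 monthly subseries and their learned operators
subseries = [rng.standard_normal(30).cumsum() for _ in range(12)]
operators = [rng.standard_normal((4, 4)) for _ in range(12)]

slopes = np.array([slope(s) for s in subseries])
sums = np.array([operator_props(K)["Sum"] for K in operators])
tau = np.corrcoef(slopes, sums)[0, 1]   # Pearson correlation
```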
This suggests that there is indeed a correlation between Koopa blocks when their curves exhibit similarity. **Q3:** About the detailed training parameters. As per your request, we provide the training parameters as follows, shared across all six benchmarks.

| Batch Size | Training Epochs | Learning Rate | Early Stopping Patience | LR Decay Strategy |
| :---: | :---: | :---: | :---: | :---: |
| 32 | 10 | 0.001 | 3 | ExponentialLR with decay rate $\gamma=0.5$ |

--- Rebuttal Comment 1.1: Title: official comment from reviewer 36zV Comment: Thanks for the responses. The authors answered my questions. I will keep my score to support the paper.
Oracle Complexity of Single-Loop Switching Subgradient Methods for Non-Smooth Weakly Convex Functional Constrained Optimization
Accept (poster)
Summary: This paper studies the switching subgradient method for solving problems with a weakly convex objective and (weakly) convex constraints. The oracle complexity for finding a nearly stationary point is derived for both the convex-constraint case and the weakly-convex-constraint case (with a good initialization). Numerical experiments validate the efficiency of the proposed algorithms on classical machine learning tasks. Strengths: The paper is overall well written. It is the first single-loop first-order method handling a weakly convex objective with (weakly) convex constraints, which is much more convenient to implement compared to double-loop algorithms. The theoretical complexity result is clearly stated and competitive, with sound proofs. Weaknesses: No obvious weaknesses; please see the questions part for some detailed comments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For the weakly-convex constrained case, if we cannot have a feasible initial point, does the algorithm SSG converge? If so, what kind of point will it arrive at? As we know, it is usually an NP-hard problem to obtain a feasible point for a nonconvex constraint. In this case, does SSG converge to a stationary point satisfying $0\in \partial (f+1_{\mathcal{X}\cap [g\leq0]})(x)$? 2. In Assumption 4.1, it is assumed that Slater's condition holds. Does it mean that we cannot include an equality constraint $g(x)=0$ in the subsequent analysis? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper and providing valuable feedback that helps us further improve our work! **Question 1: For the weakly-convex constrained case, if we cannot have a feasible initial point, does the algorithm SSG converge? If so, what kind of point will it arrive at? As we know, it is usually an NP-hard problem to obtain a feasible point for a nonconvex constraint. In this case, does SSG converge to the stationary point satisfying $0\\in\\partial(f+1_{\mathcal{X}\cap[g\leq0]})(\mathbf{x})$?** If we don't have a feasible initial point, the SSG method will still converge, but it will not necessarily converge to a nearly $\\epsilon$-stationary point under our definition. Instead, it may converge to an infeasible and nearly stationary point of $g$, namely, a point $\\mathbf{x}\in\mathcal{X}$ that satisfies $0\\in\\partial(1_{\mathcal{X}}+g)(\mathbf{x})$ but has $g(\mathbf{x})$ significantly larger than $0$. At this point, the subgradient $\partial 1_{\mathcal{X}\cap[g\leq0]}(\mathbf{x})$ the reviewer conjectured is not defined. This phenomenon can be explained by the design of the SSG algorithm. In fact, if $\mathbf{x}^{(t)}$ is very infeasible, so $g(\mathbf{x}^{(t)})>\epsilon_t$, the SSG method will keep using the subgradient of $g$ to update $\mathbf{x}^{(t)}$. Until $g(\mathbf{x}^{(t)})\leq\epsilon_t$, the SSG method is equivalent to the subgradient descent method applied to $\min_{\mathbf{x}\in\mathcal{X}} g(\mathbf{x})$. Since $g$ is non-convex, the subgradient descent method may fail to reduce $g$ to zero, so $g(\mathbf{x}^{(t)})\leq\epsilon_t$ may never happen, but the subgradient descent method can at least ensure $\mathbf{x}^{(t)}$ converges to a nearly stationary point of $g$. Please note that, in this scenario, the subgradient of $f$ is never used, so $f$ has no influence on where $\mathbf{x}^{(t)}$ converges to.
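The switching behavior described above can be sketched as follows. This is a deliberately simplified version with a constant stepsize and a fixed tolerance, omitting the projection onto $\mathcal{X}$ and the decreasing $\epsilon_t$ schedule of the actual SSG method:

```python
import numpy as np

def switching_subgradient(x0, grad_f, grad_g, g, eps, eta, T):
    """If the constraint value exceeds the tolerance, step along a
    subgradient of g; otherwise step along a subgradient of f."""
    x = np.asarray(x0, dtype=float)
    for _ in range(T):
        d = grad_g(x) if g(x) > eps else grad_f(x)
        x = x - eta * d
    return x

# toy convex instance: minimize f(x) = x_1  s.t.  g(x) = ||x||^2 - 1 <= 0
grad_f = lambda x: np.array([1.0, 0.0])
g = lambda x: x @ x - 1.0
grad_g = lambda x: 2.0 * x
x = switching_subgradient([2.0, 2.0], grad_f, grad_g, g,
                          eps=1e-2, eta=0.05, T=500)
```

Starting from the infeasible point (2, 2), the iterates are first driven toward the feasible set by subgradients of g alone (the situation discussed above, where f has no influence), and only then start minimizing f along the constraint boundary.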
From the explanation above, we also know that non-convex constrained optimization is intractable if the initial solution can be anywhere. Please consider the problem $\min_{x} -(x-1)^3 \ \text{s.t.}\ (x-1)^3+1\leq 0$. Here, $x=1$ is an infeasible point, but the gradients of the objective and the constraint functions are both zero at this point. In other words, if $x=1$ is the initial solution, all gradient-based methods will get stuck at this infeasible solution. **Question 2: In Assumption 4.1, it is assumed that Slater's condition holds. Does it mean that we cannot include the equality constraint $g(x)=0$ in the following analysis?** We can modify Assumption 4.1 to allow linear equality constraints like $\mathbf{A}\mathbf{x}=\mathbf{b}$, but not a nonlinear equality constraint like $g(\mathbf x)=0$. Under Assumption 4.1, the constraint is convex, but adding a nonlinear equality constraint will make the constraint non-convex, which will require a different analysis (e.g., the analysis in Section 5). Let us provide some details on how we can extend our algorithm and theory to allow linear equality constraints. More specifically, we can extend our results to the problem $\min_{\mathbf x\in\mathcal{X}}f(\mathbf x) \quad \text{s.t.} \quad h(\mathbf x)\leq 0,\quad \mathbf A\mathbf x=\mathbf b,$ where $f$ is weakly convex and $h$ is convex. Then we still assume Assumption 4.1 except that 4.1B is changed to *B'. There exists $\mathbf x_{\text{feas}}\in\text{int}(\mathcal{X})$ such that $h(\mathbf x_{\text{feas}})<0$ and $\mathbf A\mathbf x_{\text{feas}}=\mathbf b$.* In this case, we only need to implement Algorithm 1 with $g(\mathbf x):=\max\\{h(\mathbf x), \\|\mathbf A \mathbf x-\mathbf b\\|_\infty\\}$. It is known that Slater's condition allows the presence of linear equality constraints. This is why we only need $h$ to be strictly feasible. Using this fact, we can still obtain an upper bound on the Lagrangian multiplier similar to Lemma 4.2.
Then Theorem 4.3 can be proved in the same way as before using this new upper bound. We will include this new result in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your clear and thorough response.
Summary: The paper considers a weakly-convex constrained optimization problem and establishes the oracle complexity of the switching subgradient method for finding a nearly stationary point. Strengths: 1. Considering a single-loop algorithm, the paper can establish the complexity $O(1/\epsilon^4)$ for a case of weakly-convex constraint. This is a nice result. 2. The paper has a rigorous convergence analysis. Weaknesses: In the numerical experiments, the paper compares a single-loop method with double-loop methods but reports the evolution of objective functions over iterations. This is not fair. The evolution over time should be reported. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can the analysis be extended to the case when Assumption 4.1 C is removed (for example, when $\mathcal X$ is unbounded)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: n.a. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper and providing valuable feedback that helps us further improve our work! **Comment: In the numerical experiments, the paper compares a single-loop method with double-loop methods but reports the evolution of objective functions over iterations. This is not fair. The evolution over time should be reported.** As the reviewer suggested, we have produced the figures showing how the objective value, infeasibility and stationarity measure evolve with CPU time in each algorithm. Please find the figures in the PDF file we submitted for this rebuttal. We will also include them in the revision of this manuscript. Comparing the plots in iterations and the plots in CPU time, we find that the curves of our SSG methods and the IPP-SSG method are almost unchanged, but the curve of the IPP-ConEx method becomes worse when plotted in CPU time. This is because the SSG and IPP-SSG methods only need to compute either $\mathbf{\zeta}_f^{(t)}$ or $\mathbf{\zeta}_g^{(t)}$ in each (inner) iteration while IPP-ConEx has to compute both $\mathbf{\zeta}_f^{(t)}$ and $\mathbf{\zeta}_g^{(t)}$ per inner iteration and thus induces additional computation cost. This suggests that the SSG methods we study are more efficient than the primal-dual methods like IPP-ConEx in computation time because the former computes fewer subgradients per iteration. **Comment: Can the analysis be extended to the case when Assumption 4.1 C is removed (for example, when $\mathcal{X}$ is unbounded)?** If we do not want to assume $\mathcal{X}$ is bounded, we can replace Assumption 4.1.C with the following assumption. *C'. There exists $D\in\mathbb{R}$ such that $\\|\mathbf{x}-\mathbf{x'}\\|\leq D$ for any $\mathbf{x}$ and $\mathbf{x'}$ in $\mathcal{S}:=\\{\mathbf{x}\in\mathcal{X}|g(\mathbf{x})\leq 0\\}$.* Here, $\mathcal{S}$ is the feasible set. This assumption does not require a bounded $\mathcal{X}$. 
For example, when $\mathcal{X}=\mathbb{R}^d$, this new assumption means $g$ has a bounded $0$-sublevel set. One example satisfying this assumption is $g(\mathbf{x})=\\|\mathbf{A}\mathbf{x}-\mathbf{b}\\|_p-c\leq0$ where $\mathbf{A}$ has full column rank and $p\geq 1$. With this change to Assumption 4.1, we can still obtain the same complexity for the SSG method. We will include this new result in the revision. However, if we simply drop the boundedness assumption (Assumption 4.1.C) without making any alternative assumptions, we are unable to prove the same convergence results. This is because this boundedness assumption is critical for ensuring the Lagrangian multiplier $\widehat\lambda_t$ is bounded for all $t$ (see Lemma 4.2), which is needed to prove the desired convergence rates. This boundedness assumption is also made in other papers on non-smooth non-convex constrained optimization, e.g., [14] and [58].
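As a quick illustration of assumption C' in the $p=2$ case (our own hypothetical numerical check, not taken from the paper): full column rank gives $\sigma_{\min}(\mathbf{A})>0$, and $\\|\mathbf{A}\mathbf{x}-\mathbf{b}\\|_2\leq c$ implies $\\|\mathbf{A}\mathbf{x}\\|\leq\\|\mathbf{b}\\|+c$, hence $\\|\mathbf{x}\\|\leq(\\|\mathbf{b}\\|+c)/\sigma_{\min}(\mathbf{A})$, so the feasible set has diameter at most $D=2(\\|\mathbf{b}\\|+c)/\sigma_{\min}(\mathbf{A})$ even when $\mathcal{X}=\mathbb{R}^d$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 3))          # generic tall matrix: full column rank
b = rng.standard_normal(10)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
c = np.linalg.norm(A @ x_ls - b) + 1.0    # pick c so the 0-sublevel set is nonempty

# Full column rank => sigma_min(A) > 0, and ||Ax - b|| <= c forces
# ||x|| <= (||b|| + c) / sigma_min(A); hence the feasible set has
# diameter at most D, matching assumption C' without a bounded X.
sigma_min = np.linalg.svd(A, compute_uv=False).min()
D = 2 * (np.linalg.norm(b) + c) / sigma_min

# Empirical check: sample points in the 0-sublevel set and verify that
# all pairwise distances stay below the bound D.
samples = x_ls + 0.1 * rng.standard_normal((500, 3))
feas = samples[np.linalg.norm(samples @ A.T - b, axis=1) <= c]
diffs = feas[:, None, :] - feas[None, :, :]
diam = np.sqrt((diffs ** 2).sum(-1)).max()
assert len(feas) > 0 and diam <= D
```

The same argument fails without full column rank: any direction in the null space of $\mathbf{A}$ leaves $g$ unchanged, so the sublevel set becomes unbounded.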
Summary: This paper examines the oracle complexity of the switching subgradient method used for solving non-convex constrained optimization problems. The target functions are weakly convex, while the constraint functions are either convex or weakly convex. Remarkably, this method matches the complexity of double-loop methods while only necessitating a single-loop implementation. Strengths: The authors have produced a commendable piece of work in the domain of weakly convex optimization, a specific category of nonconvex problems noted for its favorable theoretical convergence attributes. Although there is an abundance of single-loop algorithms such as subgradient or proximal point for weakly convex optimization, much of the existing literature has focused on unconstrained or simply-constrained problems. It is therefore somewhat unexpected that there are no analogous results in functionally constrained settings. Seen in this light, the technical novelty and contribution of this paper are remarkable. Weaknesses: While the authors have provided a comprehensive review of the literature, there seems to be an overemphasis on marginally relevant references. The broad coverage of optimization beyond weakly convex settings, although interesting, does not directly contribute to the key message of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Assumption 4.1 assumes the boundedness of the solution; is this easily satisfied in practical scenarios? Moreover, should you include such a constraint in your second application? Can you extend your algorithm to weakly convex problems with linear equality constraints? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for reviewing our paper and providing valuable feedback that helps us further improve our work! **Comment: ... there seems to be an overemphasis on marginally relevant references. The broad coverage of optimization beyond weakly convex settings, although interesting, does not directly contribute to the key message of the paper.** In the revision, we will remove some marginally relevant references from the related work section, for example, those assuming smoothness, because weak convexity is more interesting when the problem is non-smooth. **Comment: Assumption 4.1 assumes the boundedness of the solution; is this easily satisfied in practical scenarios?** It depends on the application. To train a real-world machine learning model, one can artificially add a ball constraint as a regularization technique to avoid overfitting. For example, in application (15) in Section 6.1, we add the constraint set $\mathcal{X}=\\{\mathbf x\in\mathbb{R}^{d}|\\|\mathbf x\\|\leq r\\}$, which guarantees the boundedness of the feasible set. In practical scenarios, one can use cross validation to choose $r$ if necessary. If we do not want to assume $\mathcal{X}$ is bounded, we can replace Assumption 4.1.C with the following assumption. *C'. There exists $D\in\mathbb{R}$ such that $\\|\mathbf{x}-\mathbf{x'}\\|\leq D$ for any $\mathbf{x}$ and $\mathbf{x'}$ in $\mathcal{S}:=\\{\mathbf{x}\in\mathcal{X}|g(\mathbf{x})\leq 0\\}$.* Here, $\mathcal{S}$ is the feasible set. This assumption does not require a bounded $\mathcal{X}$. For example, when $\mathcal{X}=\mathbb{R}^d$, this new assumption means $g$ has a bounded $0$-sublevel set. One example satisfying this assumption is $g(\mathbf{x})=\\|\mathbf{A}\mathbf{x}-\mathbf{b}\\|_p-c\leq0$ where $\mathbf{A}$ has full column rank and $p\geq 1$. With this change to Assumption 4.1, we can still obtain the same complexity for the SSG method. We will include this new result in the revision. 
However, if we simply drop the boundedness assumption (Assumption 4.1.C) without making any alternative assumptions, we are unable to prove the same convergence results. This is because this boundedness assumption is critical for ensuring the Lagrangian multiplier $\widehat\lambda_t$ is bounded for all $t$ (see Lemma 4.2), which is needed to prove the desired convergence rates. This boundedness assumption is also made in other papers on non-smooth non-convex constrained optimization, e.g., [14] and [58]. **Comment: Moreover, should you include such a constraint in your second application?** We do not need to include such a constraint in the second application, i.e., problem (17). From the theoretical perspective, we have proved in Section B.1.1 in the appendix that (17) satisfies Assumption 5.1, which does not require a bounded $\mathcal{X}$, so we can just let $\mathcal{X}=\mathbb{R}^d$ and apply the convergence result in Theorem 5.6 based on Assumption 5.1. From the modeling perspective, (17) already has the regularization term $\text{SCAD}(\mathbf x)$ in the objective function. Hence, there is no need to add a ball constraint for the regularization purpose. **Comment: Can you extend your algorithm to weakly convex problems with linear equality constraints?** We consider two cases in this paper: (1) $g$ is convex; (2) $g$ is weakly convex. In the first case, we can extend our algorithm and theory to allow linear equality constraints. More specifically, we can extend our results to the problem $\min_{\mathbf x\in\mathcal{X}}f(\mathbf x) \quad \text{s.t.} \quad h(\mathbf x)\leq 0,\quad \mathbf A\mathbf x=\mathbf b,$ where $f$ is weakly convex and $h$ is convex. Then we still assume Assumption 4.1 except that 4.1B is changed to *B'. 
There exists $\mathbf x_{\text{feas}}\in\text{int}(\mathcal{X})$ such that $h(\mathbf x_{\text{feas}})<0$ and $ \mathbf A\mathbf x_{\text{feas}}=\mathbf b$.* In this case, we only need to implement Algorithm 1 with $g(\mathbf x):=\max\\{h(\mathbf x), \\|\mathbf A \mathbf x-\mathbf b\\|_\infty\\}$. It is known that Slater's condition allows the presence of linear equality constraints. This is why we only need $h$ to be strictly feasible. Using this fact, we can still obtain an upper bound on the Lagrangian multiplier similar to Lemma 4.2. Then Theorem 4.3 can be proved in the same way as before using this new upper bound. We will include this new result in the revision. In the second case where $g$ is weakly convex, although we can still implement Algorithm 1 with the new $g$ defined above, we unfortunately cannot extend the complexity theorem, i.e., Theorem 5.6, to linear equality constraints. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply. I will keep my rating.
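A minimal sketch of the switching idea with the equality-constraint reduction above, on a toy problem of our own (not the paper's Algorithm 1; the step size, tolerance, and stationarity measure are simplified): minimize $f(\mathbf x)=\\|\mathbf x\\|_1$ subject to $h(\mathbf x)=\\|\mathbf x\\|_2-1\leq 0$ and $x_1+x_2=1$, folding the equality constraint into $g(\mathbf x)=\max\\{h(\mathbf x),\\|\mathbf A\mathbf x-\mathbf b\\|_\infty\\}$:

```python
import numpy as np

def switching_subgradient(f, g, sub_f, sub_g, x0, eta=0.005, tol=0.05, steps=20000):
    """Sketch of a switching subgradient method: per iteration, take a
    subgradient step on f when nearly feasible, otherwise on g, so only
    one subgradient is computed per iteration (the single-loop appeal)."""
    x, best, best_f = x0.astype(float), None, np.inf
    for _ in range(steps):
        if g(x) <= tol:                      # nearly feasible: descend on f
            if f(x) < best_f:
                best, best_f = x.copy(), f(x)
            x = x - eta * sub_f(x)
        else:                                # infeasible: descend on g
            x = x - eta * sub_g(x)
    return best

# Toy problem: min ||x||_1  s.t.  ||x||_2 <= 1  and  x1 + x2 = 1,
# with the equality constraint absorbed into g via the max / inf-norm trick.
a = np.array([1.0, 1.0])
f = lambda x: np.abs(x).sum()
h = lambda x: np.linalg.norm(x) - 1.0
g = lambda x: max(h(x), abs(a @ x - 1.0))
sub_f = lambda x: np.sign(x)
sub_g = lambda x: ((x / np.linalg.norm(x)) if h(x) >= abs(a @ x - 1.0)
                   else np.sign(a @ x - 1.0) * a)  # subgradient of active piece

x_best = switching_subgradient(f, g, sub_f, sub_g, np.array([2.0, -1.0]))
# The optimal value on the feasible segment is 1 (attained, e.g., at x = (1, 0)).
```

The returned point is only near-feasible and near-optimal (within the tolerance and step-size resolution), which mirrors why the analysis targets nearly stationary points rather than exact ones.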
null
null
Rebuttal 1: Rebuttal: We would like to thank all reviewers for reviewing our submission and providing valuable feedback. We have provided answers to each reviewer's questions separately. Reviewer tsNX suggested that we report the evolution of the curves over time instead of over iterations. We have included the new figures in the uploaded PDF file with this rebuttal. Pdf: /pdf/3d437c86b0b52e14728e7a838edd3364cbfc562d.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Tuning Multi-mode Token-level Prompt Alignment across Modalities
Accept (poster)
Summary: The paper aims at improving few-shot classification based on CLIP-ViT-B/16 using prompt tuning. The authors introduced a multi-modal prompt tuning framework with token-level alignment and distribution matching. The prompts are tuned on ImageNet and then the model is benchmarked on 15 image datasets. Strengths: + The method achieves better results than baselines (e.g., CoOp, MaPLe) on a series of datasets, including both few-shot classification and transfer learning tasks. + The code is attached, making the method reproducible. + The introduced approach should be practical in real-world applications. Weaknesses: - The writing appears rushed and some sections are not easy to follow. There are also some typos and grammar issues, e.g., L135-136 "...CLIP. maximuming...". - The central insight of this work is to introduce token-level alignment in a multimodal prompt-tuning framework. However, the fine-grained alignment with optimal transportation is not new (e.g., Hierarchical Optimal Transport for Multimodal Distribution Alignment, NeurIPS19) and the multimodal prompt-tuning framework is similar to existing works like MPT. It provides limited new knowledge to the community. - In the experimental part, there are only comparisons based on base-size models. However, it is pretty essential to verify the scalability of the introduced method on larger models. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have discussed their limitations on the GPU memory during test. It may also be important to discuss their training cost. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer dk2b for the comments and suggestions. Below, we address the concerns raised in your review point by point. Please let us know if you have any further concerns or whether this adequately addresses all the issues that you raised with the paper. > The writing appears rushed and some sections are not easy to follow. There are also some typos and grammar issues, e.g., L135-136 "...CLIP. maximuming...". We will carefully improve the writing, reorganize the sections, and correct typos in the revision. > The central insight of this work is to introduce token-level alignment in a multimodal prompt-tuning framework. However, the fine-grained alignment with optimal transportation is not new (e.g., Hierarchical Optimal Transport for Multimodal Distribution Alignment, NeurIPS19) and the multimodal prompt-tuning framework is similar to existing works like MPT. It provides limited new knowledge to the community. First, we would like to note that one of the main contributions of this paper is to develop a unified multi-modal prompt tuning method under the HOT framework; e.g., most previous works can be viewed as particular cases of our framework (Table 1 in the Appendix). Second, we agree that, technically, the token-level alignment module shares a similar idea with the previous hierarchical Wasserstein alignment algorithm. However, this does not imply that the proposed methodology is not innovative. Our model differs from the previous work in terms of both the task and the objective function. The proposed ALIGN views the transport distance as the similarity between prompts, and the model is optimized by the classification loss, while the previous work focused on aligning clustered datasets and trained the model by minimizing the transport distance in an unsupervised manner. Directly employing the previous method on the prompt tuning task is not trivial. 
Third, compared to existing MPTs, which often learn a single modality-specific prompt sentence, our method provides the community with a novel alternative that prefers to learn multiple prompts, resulting in diverse concept discovery. Mathematically, we formulate the prompt tuning task as a HOT problem, which distinguishes the proposed model from previous MPTs. Last, the transport plan provides users with a visualization tool to explain the learned prompts, while the previous MPTs fail to give such interpretability. > In the experimental part, there are only comparisons based on base-size models. However, it is pretty essential to verify the scalability of the introduced method on larger models. Technically, the proposed method can be easily applied to pre-trained two-tower models. In the manuscript, we follow a series of previous works and load the pre-trained CLIP as our backbone. Extensive experiments are conducted to evaluate the superiority of the proposed model. We believe that those empirical results support this paper. We would appreciate it if the reviewers could provide some references that apply prompt tuning to large-scale vision-language models. > The authors have discussed their limitations on the GPU memory during test. It may also be important to discuss their training cost. We have reported the detailed comparison in the global comment section. Please refer to it for more discussion. --- Rebuttal Comment 1.1: Title: Following up with Reviewer dk2b Comment: Dear Reviewer dk2b, Thanks again for your effort in reviewing our paper and for giving us a great chance to improve the paper's quality. We hope that our response can address your concerns. Considering that the discussion period will end on Aug 21st, we would like to know if you have any other questions about our paper, and we are glad to have a discussion with you in the following days. 
If our response has addressed your concerns, would you mind considering re-evaluating our work based on the updated information? Best regards, Authors --- Rebuttal Comment 1.2: Title: Response Comment: Thanks for the authors' response, which addressed some of my concerns, e.g., the differences from existing works. Unfortunately, the authors did not report results on larger CLIP models, which raises concerns that the proposed method may not work on large models. There is a possibility that, since larger foundation models are strong enough, such an advanced prompt alignment method fails to yield extra gains compared to existing prompt alignment methods. So I choose to maintain my initial rating. --- Reply to Comment 1.2.1: Title: Additional results on larger size ViT-H/14 CLIP Comment: We thank Reviewer dk2b for your replies and for specifying the main concern. Like previous works, we conducted extensive experiments by loading ViT-B/16 as the CLIP model. As clarified in the previous rebuttal, technically, our method can be easily applied to CLIP-like models and improve their performance. Following the reviewer's suggestion, we have evaluated our method based on ViT-H/14, which consists of a 32-layer image encoder and a 24-layer text encoder. The results of the Base-to-New and Few-shot tasks are listed below (results are reported as the mean value over three seeds). 
Base-to-New results (Base / New / H):

| Methods | Cal | DTD | Eur |
|:---:|:---:|:---:|:---:|
| CLIP | 98.4 / 95.4 / 96.8 | 74.5 / 70.5 / 72.4 | 73.0 / 83.7 / 77.9 |
| MaPLe | 98.0 / 94.9 / 96.4 | 83.2 / 74.4 / 78.5 | 96.2 / 84.5 / 89.9 |
| ALIGN | 99.2 / 95.7 / 97.4 | 85.8 / 75.7 / 80.4 | 96.6 / 84.9 / 90.3 |

| Methods | Pets | Cars | UCF |
|:---:|:---:|:---:|:---:|
| CLIP | 96.4 / 98.9 / 97.6 | 91.2 / 97.1 / 94.0 | 82.5 / 83.0 / 82.7 |
| MaPLe | 96.7 / 98.3 / 97.4 | 90.3 / 96.8 / 93.4 | 86.3 / 84.2 / 85.2 |
| ALIGN | 97.0 / 98.8 / 97.8 | 91.6 / 97.3 / 94.3 | 86.3 / 84.5 / 85.4 |

Few-shot results (1 / 2 / 4 / 8 shots):

| Methods | Eur | Pets |
|:---:|:---:|:---:|
| CLIP | 67.0 / 67.0 / 67.0 / 67.0 | 94.5 / 94.5 / 94.5 / 94.5 |
| MaPLe | 73.5 / 75.2 / 79.7 / 85.1 | 92.1 / 92.7 / 93.8 / 94.0 |
| ALIGN | 77.3 / 78.1 / 80.0 / 89.5 | 93.9 / 94.5 / 94.9 / 94.8 |

Due to the limited time, we report the Base-to-New results on the Caltech 101, DTD, EuroSAT, Oxford Pets, Stanford Cars and UCF 101 datasets, and the Few-shot results on the EuroSAT and Oxford Pets datasets. From the results, we find that: 1) thanks to the larger ViT-H/14, the performance of all three methods improves by a large margin; 2) our method outperforms CLIP and MaPLe in most cases, which demonstrates the robustness of the proposed ALIGN across backbone networks of different sizes. We will add these results in our revision. Thank you again for the valuable suggestion, which led to a more solid submission. We hope the above results can address your concern well. We are glad to have further discussion with you. 
Please feel free to contact us if you have any questions.
Summary: This work aims to overcome the limitations of previous works in prompt tuning for vision-language models. Unlike prior approaches that focus on single-modality or holistic prompt alignment, the paper proposes a multi-mode token-level tuning framework that leverages optimal transportation to align prompt tokens across different modalities. The framework introduced in the paper relies on two crucial elements: multi-mode prompt discovery and token-level alignment. By enabling diverse semantic representations, multi-mode prompt discovery ensures a broader range of prompts. On the other hand, token-level alignment allows for a more detailed exploration of similarity between modalities. Extensive experiments demonstrate the effectiveness of the new method ALIGN. Strengths: 1. Overall, this work is well-motivated, and the paper is well-written. It focuses on a practical problem in prompt learning and proposes a novel method called ALIGN. 2. This work first leverages Optimal Transportation to address the limitations of prompt tuning. Intuitively, the new method can succeed in finding token-level vision-language prompts. Compared with the single-mode or holistic-level prompt tuning approaches, this method can better reveal the connection between visual and textual prompts, which would be a very valuable insight for the study of vision-language alignment. 3. The authors conduct extensive experiments to evaluate their method, and the results demonstrate non-trivial improvements over the existing baseline models. The comparison is clear, and their method ALIGN achieves SoTA performance in most scenarios. Weaknesses: 1. The paper lacks discussion of computing cost. It seems that your ALIGN requires more self-attention computation compared to the baselines such as VPT and TPT. Your improvements might be challenged if the difference in training/inference cost is not provided or is too big. 2. 
While the paper demonstrates its merits in token-level prompt learning, more empirical results in fine-grained tasks such as semantic segmentation should be included. However, the paper only gives classification results. I understand that implementing prompts into segmentation tasks is not easy and it merely appears in your baselines, but it would be a very strong support to your ALIGN’s effectiveness and significance. 3. Some expressions are confusing. For example, in line 249-250, there is “For each task, we optimize the number of epochs”. Do you mean “the same number of epochs”? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. How about the comparison of computing cost between your method and the baselines? 2. Is there any experimental results in inference tasks other than classification? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See "Weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer Uuq5 for providing positive feedback and helpful suggestions. Below are our responses. > Lacks discussion of computing cost and Q1: How about the comparison of computing cost between your method and the baselines? We have reported the detailed comparison in the global comment section. Please refer to it for more discussion. > Is there any experimental results in inference tasks other than classification? and more empirical results in fine-grained tasks such as semantic segmentation should be included and Q2: Is there any experimental results in inference tasks other than classification. We thank the reviewer for pointing out the potential applications of the proposed model. The proposed model introduces a novel token-level alignment for multi-modal prompt tuning tasks. It improves the classification results by aligning the semantics of the visual patches and the textual tokens. Following the previous empirical setting, we conduct extensive experiments on 4 classification tasks over 12 datasets. We appreciate the reviewer's suggestion and will leave semantic segmentation to future work. Also, we would be very happy if the reviewer could suggest relevant papers. > For each task, we optimize the number of epochs. Following previous work, we use different numbers of epochs for the 4 tasks. We will correct the statement in the revision. --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: Thank you for your rebuttal. My concerns are addressed, and I keep my rating of weak acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for the replies. We are glad that our response addresses your concerns. We will revise our work accordingly.
Summary: This paper introduces a multi-mode token-level alignment framework for multi-modal prompt tuning, which improves the representation of visual and textual modalities and can be used to improve existing methods. The task is formulated as a distribution matching problem, addressed using prompt and token-level optimal transportation (OT), providing a principled and elegant solution. The method is applied to few-shot classification, dataset transfer learning, and domain generalization, showing superior results on widely used datasets. Strengths: • The learning of multi-modal, multi-mode prompts is facilitated by establishing optimal transport (OT) at the prompt and token level. • The structure of the manuscript is solid and it's well-written overall. • The efficiency of the proposed ALIGN method for both few-shot classification and generalization has been confirmed through a series of diverse experiments. Weaknesses: • The proposed model might be memory-intensive, however, an analysis of the additional time and memory costs has not been provided. • The omission of specific details, particularly regarding the ablation analysis, somewhat undermines the impressive results. - The study does not examine the influence of prompt length and quantity on the experimental outcomes. - It remains unclear whether token-level alignment provides any enhancements compared to prompt-level alignment. • Miscellaneous issue - Figure 1 is not mentioned in the body text. - It seems that line 135 is missing a period, and the expression "maximuming" appears to be awkward. - In Table 1, CoOp shows the best result in 'Stanford Cars'-Base, so it should be highlighted instead of ALIGN Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See the comments in weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: • As pointed out in the conclusion section, this paper's method still demands substantial GPU memory. • The method suggested does not appear to be well-suited for a fully zero-shot scenario where there is no training samples. This scenario is little bit different from the Base-to-New Generalization scenario. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer hEEC for providing positive feedback and helpful suggestions. Below are our responses. > Analysis of the additional time and memory costs has not been provided. We have reported the detailed comparison in the global comment section. Please refer to it for more discussion. > Additional results of prompt length and quantity.

| Length | 2 | 4 | 8 | 16 |
|:--:|:--:|:--:|:--:|:--:|
| UCF 101 | 81.27 | 81.24 | 81.45 | 81.42 |
| Flowers102 | 83.75 | 83.84 | 83.32 | 83.57 |
| Stanford Cars | 76.80 | 76.68 | 77.14 | 76.82 |

| Number | 1 | 2 | 4 | 8 |
|:--:|:--:|:--:|:--:|:--:|
| UCF 101 | 80.84 | 81.13 | 81.27 | 81.42 |
| Flowers102 | 82.64 | 82.97 | 83.75 | 83.43 |
| Stanford Cars | 74.18 | 75.32 | 76.80 | 76.73 |

Following your advice, we have reported the results for different prompt lengths and quantities. In the manuscript, we fix the prompt length as 2 and set the number of prompts as 4 following previous works. From the added results, we find that the proposed model is robust to the prompt length and quantity, and one may obtain better results than reported by fine-tuning those hyperparameters. > Unclear whether token-level alignment provides any enhancements compared to prompt-level alignment.

| Methods | UCF 101 | Stanford Cars | Flowers102 | Oxford Pets |
|:--:|:---:|:--:|:----:|:---:|
| ALIGN w/o prompt | 80.84 | 74.18 | 82.64 | 96.51 |
| ALIGN w/o token | 81.04 | 75.32 | 83.09 | 96.61 |
| ALIGN | 81.27 | 76.80 | 83.75 | 96.79 |

We have reported the ablation results of the two introduced modules. Comparing ALIGN w/o token with ALIGN, we find that the token-level alignment indeed improves the performance. > Miscellaneous issues We thank the reviewer for the careful reading. We will correct those typos in the revision. 
> this paper's method still demands substantial GPU memory We would like to note that not only the proposed method but also its baseline methods load the pre-trained CLIP as the image and text encoders, and thus all compared methods in this paper demand substantial GPU memory. It is a common limitation among prompt-tuning methods and is beyond the scope of this paper. > The method suggested does not appear to be well-suited for a fully zero-shot scenario where there are no training samples We agree with the reviewer's concern. In fact, all compared baselines (except for CLIP) introduce to-be-learned embeddings for better continuous prompt tuning, and thus training samples are needed to train such embeddings. Besides the few-shot and Base-to-New settings, we conduct the cross-dataset experiment, where the models are trained on a source dataset and tested on a target dataset. This allows no overlapping categories between the source and target datasets and can somewhat be viewed as a fully zero-shot setting for the target dataset. Our proposed model outperforms baseline methods in most cases, showing the generalization ability of the method. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for addressing my feedback; I'm inclined to increase my score from 5 to 6 in favor of the paper's acceptance. --- Reply to Comment 1.1.1: Title: Thank you Comment: We thank Reviewer hEEC for your replies and for increasing your score. Your appreciation encourages us to improve the submission in the revision. Thank you again!
Summary: The paper proposes a prompt-learning method for adapting CLIP to few-shot classification. The proposed prompt is multimodal, i.e., both vision and language encoder is adaptable, and multimode, i.e., each modality is assigned with several prompts for diverse representation. Experiments are conducted on few-shot and base-to-new transfer settings, showing the method's superiority over previous state-of-the-art. Strengths: * Promising results are achieved, both in the few-shot setting and base-to-new transfer setting. * The presentation is mostly clear and easy to understand. Weaknesses: * Incremental novelty of the paper. Visual-language prompt learning with multimode prompts and optimal transport (OT) is explored in PLOT [17]. Also, multimodal prompts are explored in works like [27][28]. This proposed multimode multimodal prompts with OT multimode seems to combine the two types of previous methods and reveals limited insights. * Missing comparison on computation complexity. As multimode prompts require quite large parameters and computations compared to a single prompt, I want to see the comparison of parameter efficiency and time efficiency. * It is strange that there is no ablation study of each component of the models. It is important to see the individual contribution of the multi-mode prompts and the token-level alignment. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: * In Fig. 3, ALIGN performance on UCF is much superior to previous methods, but the superiority on UCF is not obvious in the Base-to-New setting, especially on the base classes. Why is that case? * In Page 3 Line 97, "Empirical findings" needs to specify the source of information or citing papers. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer S87N for the comments and suggestions. Below, we address the concerns raised in your review point by point. Please let us know if you have any further concerns or whether this adequately addresses all the issues that you raised with the paper. > Incremental novelty of the paper First, we would like to note that building a unified prompt tuning framework that enjoys the good properties of both multi-mode and multi-modal methods is novel. It is not a trivial combination (e.g., extending MPTs by learning multiple prompts and then applying OT to align the two modalities). Technically, the hierarchical OT (prompt-level and token-level OT) introduced in this paper differs from that in PLOT in terms of formulation, motivation, and alignment strategies. 1) The prompt-level OT models the visual and textual representations as two empirical distributions over the M and N modality-specific prompt embeddings, and the token-level OT further views each prompt embedding as a discrete distribution over its token points (patches or tokens). These two formulations enable the proposed model to capture hierarchical features and make fine-grained alignments. In contrast, PLOT calculates the OT distance between the N textual prompts and a set of visual patch embeddings, ignoring the textual token-level alignment, which is different from our case. 2) As discussed above, we aim to align the two domains hierarchically: the token-level OT focuses on image patches and sentence tokens, and the prompt-level OT focuses on global image and sentence features. This distinguishes ALIGN from PLOT, which focuses on global sentence features and local image patches. 3) One of the main challenges is to combine the two OTs efficiently. 
Here we naturally formulate the prompt-level and token-level alignments as a hierarchical OT problem, where the transport distance of the token-level OT acts as the cost matrix of the prompt-level OT, strengthening the connection between the two OTs. Our proposed model belongs to the MPT family and provides a new hierarchical OT perspective to improve their performance. In fact, the two existing MPTs [27][28] share a similar idea based on traditional continuous prompt tuning. This paper proposes a novel unified framework that can learn multi-mode, multi-modal prompts for MPTs and achieves consistent improvement over the existing MPTs. > Missing comparison on computation complexity We have reported the detailed comparison in the global comment section. Please see it for more discussion. > Ablation study of each component of the model. | Methods | UCF101 | Stanford Cars | Flowers102 | Oxford Pets | |:--:|:---:|:--:|:----:|:---:| | ALIGN w/o prompt | 80.84 | 74.18 | 82.64 | 96.51 | | ALIGN w/o token | 81.04 | 75.32 | 83.09 | 96.61 | | ALIGN | 81.27 | 76.80 | 83.75 | 96.79 | Following your advice, we have reported the ablation results for the introduced modules above. We find that both the multi-mode prompts and the token-level alignment contribute to the improvements. > In Fig. 3, ALIGN performance on UCF is much superior to previous methods. We thank reviewer S87N for pointing out this mismatch in Fig. 3. We carefully checked the numerical results of Fig. 3 and found a typo in the UCF101 16-shot result (85.69 should be 95.69). Note that the corrected results still outperform the baselines in all few-shot settings. We apologize for the confusion. In terms of the Base-to-New setting, CoCoOp found that CoOp usually overfits the seen set and does not generalize to the unseen set (higher base-set accuracy and lower new-set accuracy). The proposed model balances the seen and unseen sets well and achieves the highest H score. 
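The hierarchical OT idea described above (token-level OT distances serving as the cost matrix of a prompt-level OT with uniform priors) can be sketched numerically. This is a minimal illustrative sketch, not the paper's implementation: the entropic Sinkhorn solver, the `eps` regularization value, and the cosine cost are all our assumptions.

```python
import numpy as np

def sinkhorn(C, a, b, eps=0.1, n_iter=200):
    """Entropy-regularized OT plan between marginals a, b for cost matrix C."""
    K = np.exp(-C / eps)
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

def ot_distance(C, a, b, eps=0.1):
    """Transport cost <P, C> of the entropic OT plan."""
    P = sinkhorn(C, a, b, eps)
    return float((P * C).sum())

def cosine_cost(X, Y):
    """Cost matrix 1 - cos(x_i, y_j) between rows of X and Y."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
    return 1.0 - Xn @ Yn.T

def hierarchical_ot(vis_prompts, txt_prompts, eps=0.1):
    """Token-level OT distances between each (visual, textual) prompt pair
    form the cost matrix of the prompt-level OT (uniform priors)."""
    M, N = len(vis_prompts), len(txt_prompts)
    C = np.zeros((M, N))
    for i, V in enumerate(vis_prompts):       # V: (n_patches, d)
        for j, T in enumerate(txt_prompts):   # T: (n_tokens, d)
            a = np.full(V.shape[0], 1.0 / V.shape[0])
            b = np.full(T.shape[0], 1.0 / T.shape[0])
            C[i, j] = ot_distance(cosine_cost(V, T), a, b, eps)
    a = np.full(M, 1.0 / M)
    b = np.full(N, 1.0 / N)
    return ot_distance(C, a, b, eps)
```

The uniform marginals mirror the uniform priors mentioned in the rebuttal, i.e., every patch/token has an equal probability of being attended to.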
> Citing papers at line 97 We will cite the related papers [1,2] in the revision. [1] Yuhang Zang, et al. Unified vision and language prompt learning. [2] Muhammad, et al. MaPLe: Multi-modal prompt learning. In CVPR 2023. --- Rebuttal 2: Title: Following up with Reviewer S87N Comment: Dear Reviewer S87N, We deeply appreciate your thoughtful review and your time. Following your constructive suggestions, we have discussed the differences between our method and previous baselines, updated the comparison of learnable parameters and inference time, reported the missing ablation studies, and addressed the typos and citations. We have tried our best to address your concerns; please let us know if you have any other questions about our paper, and we will be more than happy to discuss them with you in the following days. If our response has addressed your concerns, would you mind re-evaluating our work based on the updated information? Best regards, Authors --- Rebuttal 3: Title: final discussions Comment: Dear Reviewer, As discussions come to an end soon, this is a polite reminder to engage with the authors in discussion. Please note we take note of unresponsive reviewers. Best regards, SAC
Rebuttal 1: Rebuttal: We thank all the reviewers for the time and expertise they have invested in these reviews and for their valuable comments. We are encouraged that all reviewers noted the impressive results achieved by the paper, and reviewers TnsF, hEEC, and Uuq5 praised the novelty and effectiveness of our method. Your comments and suggestions have helped us to improve the paper. We provide a response and clarifications below for each reviewer and hope they address your concerns. >Complexity Analysis A common concern among reviewers is the complexity analysis of the proposed method. Here we report comparisons of the number of trainable parameters (#Parameters) and inference throughput (FPS): | Methods | CoOp | CoCoOp | VPT | PLOT | UPT | MAPLE | ALIGN | |:---:|:--:|:--:|:---:|:---:|:--:|:--:|:---:| | #Parameters | 2,048 | 35,360 | 13,824 | 8,192 | 3,555,072 | 3,555,072 | 3,582,720 | | FPS | 645 | 37 | 152 | 583 | - | 282 | 62 | We find that the multi-modal prompt tuning methods (last three) overall require more trainable parameters and inference time than single-modal methods. The proposed ALIGN method aims to learn multi-mode prompts across visual and textual modalities and achieves consistent improvements over baseline methods in most cases. ALIGN requires slightly more trainable parameters than UPT and MAPLE because of the multiple prompts, and more inference time than MAPLE due to the hierarchical OT operations. The proposed model supports parallel GPU inference and thus has faster inference than CoCoOp. Here, we would like to note that the main idea is to develop a unified prompt tuning framework and provide a new hierarchical OT view for the community. We thank the reviewers for pointing out potential future work. We will keep this in mind and continuously improve our method. Pdf: /pdf/8cff87518faf8055a9cdabf2cca35f7167c6aa77.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper argues that existing prompt tuning fails to capture sample diversity, leading to sub-optimal prompt discovery. To this end, they propose a multi-mode token-level alignment for multi-modal prompt tuning. Specifically, they formulate prompt tuning as a hierarchical optimal transport problem (a distribution matching problem). Extensive experiments on few-shot image classification, transfer learning, and domain generalization show the superiority of ALIGN. Strengths: 1. **The proposed method is effective and comprehensive**. Learning and aligning a set of prompt tokens across modalities via hierarchical optimal transport is effective and simple. 2. **The results are strong and the evaluation is comprehensive**. The extensive experiments on 15 widely used image datasets under four task settings show the superiority of the proposed method. Weaknesses: 1. **Time Complexity Analysis.** Since this method introduces hierarchical optimal transport, I would like to ask whether it incurs more time overhead in the inference phase than other methods. Can you provide a detailed experimental comparison? 2. **More Cases for Visualization.** To prove that the learned prompt tokens are able to capture diverse visual concepts, Fig. 4 does not seem to support the claim well. I am more curious whether multiple prompts attend to the same visual concepts. Can you provide more cases for analysis? 3. **The difference from PLOT [1] and MAPLE [2].** Can you provide a more thorough comparison to illustrate the novelty of your work? 4. **The writing needs to be improved**. It is hard to understand the core idea from the messy introduction. Moreover, Sec. 2 Background and Sec. 4 Related Work contain substantial duplication and could be merged. [1] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models. ICLR 2023. 
[2] MaPLe: Multi-modal Prompt Learning. CVPR 2023. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: As shown in weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer TnsF for providing positive feedback and helpful suggestions. Please find our responses below. > Time Complexity Analysis We have provided a detailed experimental comparison in the global comment. Please refer to that analysis for more details. > More Cases for Visualization One of the main motivations of this paper is to mine diverse visual concepts, and thus we formulate the priors of both visual and textual prompt embeddings as uniform distributions in Eq. (3) of the manuscript, i.e., each visual patch has an equal probability of being attended to. This mathematically guarantees that the learned prompts are able to align diverse visual concepts. Empirically, we find that most prompts find their own concepts and few prompts attend to the same concepts. Following your advice, we added more visualizations in the uploaded pdf file; please take a look. We will add those results in our revision. > The difference from PLOT and MAPLE Compared to PLOT (multiple modes, single modality) and MAPLE (single mode, multiple modalities), our proposed multi-mode multi-modal token-level alignment algorithm acts as a unified framework for prompt tuning, i.e., PLOT and MAPLE can be viewed as particular cases of ours. We briefly summarize the main differences below. 1) Multi-mode prompt learning for both visual and textual modalities. Our method learns M visual and N textual prompts for the two modalities of CLIP, while PLOT learns N textual prompts and MAPLE learns a single visual prompt and a single textual prompt. This enables our model to capture diverse visual and textual concepts. 2) Token-level alignment of both visual and textual modalities. The proposed model aligns the vision and language domains by considering token-level features, i.e., the visual patch features and the textual token embeddings, resulting in fine-grained comparison. 
The MAPLE and PLOT methods either only consider prompt-level alignment or focus on visual patch features, failing to model the token-level features of both modalities. 3) Hierarchical transport framework for prompt tuning. Technically, we formulate the prompt tuning task as a hierarchical OT problem, where the visual and linguistic representations are modeled as empirical distributions over the M visual and N textual prompt embeddings (prompt-level OT), and the prompt embeddings are further modeled as empirical distributions over the patch and token features, respectively. > The writing needs to be improved Thank you for the writing suggestion; we will highlight the core idea and streamline the Background and Related Work sections in the revision.
Accelerating Molecular Graph Neural Networks via Knowledge Distillation
Accept (poster)
Summary: * This paper explores knowledge distillation (KD) on speeding up molecular GNNs, which has challenges of being regression instead of classification, and having both scalar and vector outputs * They tried a few different methods on a few different teacher-student combinations, with the best results closing 65% of the gap between the student and teacher in energy, and 21% in forces, slightly less for different student-teacher models. * Analyzed performance with respect to similarity of models, * Tried data augmentation techniques, but with no improvements Strengths: * This paper tackles an important problem that is an unstudied area: applying KD to GNNs of 3D space with regressions * Tackles challenges of KD: regression instead of classification, vector outputs in addition to scalar * Paper is clearly written, with good related background to my knowledge * Despite the weaknesses described below, I believe this is an important area for KD to extend into, and this paper makes good initial progress at it. Weaknesses: * Despite good improvement for the small model for energy, it’s still quite a ways from the performance of the large model. Force metrics are especially bad, considering force MAE is almost 2x even with KD for some models. * Beyond that, it is hard to get a sense of how good these improvements are. How does it compare to the improvement of KD in other fields? One suggestion is to plot speed vs. accuracy to see the tradeoff, which may help determine the usefulness in downstream applications (e.g. hypothetically, a small model is twice as fast but 10% less accurate, but perhaps you can make up for that by running a bit less than 2x the number of inference calls, so then that’s considered “good”). * I would like to see other teacher-student combinations - for example, why 2 sizes of PaiNN but not other models? I am curious about big gemnet on small gemnet, since gemnet is the better model in this selection. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Why are forces much worse than energy improvement? Any analysis into this? * Error bars? I’m not sure how much of Table 3 is noise * I am unsure how much it matters about how similar the teacher and student model are. Isn’t it often that you use the same architecture for the student, just smaller? * Why does vanilla KD do significantly worse on PaiNN-big to PaiNN-small? This doesn’t seem intuitive. * Data augmentation: how much data did you add? Is the amount of rattling reasonable (e.g. too much movement may be OOD for the teacher) * Suggestion for baseline: how good are your KD techniques when the student model is the exact same as the teacher? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors acknowledged limitations and potential negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We want to thank the reviewer for their comments and feedback! We are extremely happy to hear they appreciate the importance of our work, the unique challenges we address in the paper, and the quality of the presentation. We also appreciate your insightful comments. We carefully respond to your concerns below. ### Performance improvements are substantial (W1) We respectfully disagree with the assertion that the improvements are not substantial. Using the KD strategies we study, ***we consistently close >60% of the gap*** between teacher and student models for energy predictions, and 10-20% for forces. This is especially true on the COLL dataset (which we have now moved to the main text), where these numbers go ***up to 96.7% and 62.5% for energy and force prediction, respectively*** (see the COLL table in the attached pdf). Still, we want to reiterate that this ***improvement is out-of-the-box*** with respect to inference throughput, which is the bottleneck we tackle in this paper. In other words, the improvement does not come at the expense of slower models at inference, meaning that even a small improvement could be useful. Moreover, ***this is the first work in the area***, and we hope it inspires further work to close the gap further. #### Force predictions (W1+Q1) Given the amount of training data on energies (one scalar per system) vs. forces (one vector per atom), we think that learning to predict energies is a harder problem. We hypothesize that by distilling knowledge from a teacher model "knowing" more about the energy surface of a molecule, ***there is more potential to improve the energy predictions***. Because of the tradeoff between energy and force accuracy, people often develop two separate models optimized for energy and force predictions, tuning different hyperparameters (Gasteiger *et al.* (2021)). Here, we do not perform any such optimization specifically. 
But, we still achieve force improvements up to 62.5%, which combined with our energy prediction gains, constitute a good overall improvement for our initial exploration in the area. Future work can focus on better optimizing for force predictions. ### Putting performance improvements into context (W2) We believe this is hard to quantify as previous research is mostly on classification, where percentage gains are normally smaller. To put the performance into some context, closing almost 60% of the gap in energy predictions between GemNet-OC and PaiNN on OC20 gives an energy error lower than that of GemNet-dT (which is severely slower), trained on the same OC20-2M dataset (Gasteiger *et al.* (2022)). We thank the reviewer for the suggestion to plot speed vs. accuracy. We created such a plot, but due to the different scales, the performance was less visible and we instead plotted in the format of Figure 1, which conveys the same information. We also have that information in Table 2, which summarises the tradeoff between the models. We now edited Figure 1 to include the results on the COLL dataset and benchmarked the models on COLL, which we added to the appendix. ### Importance of model similarity (Q3) We can see two aspects in the question: 1) Our results do not in the end show that model similarity is important for KD. 2) You were already under the impression that model similarity shouldn't be a necessity for successful KD before our analysis. Regarding 1), we think that this is a contribution of our work; we initially thought that it would be easier to distill between more similar models, but the results suggest otherwise. About 2), we were not of this impression before seeing the final results, and although model similarity doesn't seem to be a requirement for knowledge distillation, our results (e.g. Figure 3) indeed indicate that model similarity increases with KD. 
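The model-similarity analysis referenced above (e.g., Figure 3) is based on CKA. As context, here is a minimal sketch of linear CKA between two models' feature matrices; the function name and shapes are our illustrative assumptions, not the paper's code.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between feature matrices X (n, d1) and Y (n, d2) computed
    on the same n inputs. Returns 1 for representations that are identical
    up to rotation and isotropic scaling, and values near 0 for unrelated
    representations."""
    X = X - X.mean(axis=0)          # center features per dimension
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return hsic / (norm_x * norm_y)
```

Because CKA is invariant to orthogonal transformations of the feature space, it can compare teacher and student layers of different widths, which is what makes it suitable for the kind of teacher/student similarity analysis discussed here.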
### Vanilla KD on PaiNN-big to PaiNN-small (Q4) Note that vanilla KD appears to underperform only on OC20, when distilling PaiNN-big into PaiNN-small. We believe that is because the gap between the force prediction accuracy of the two baseline models used in this scenario is already rather small, meaning the extra signal we provide during vanilla KD is not as significant. ### Data augmentation (Q5) In the case of random rattling, we added noise to all samples in the batch. To avoid adding too much noise (and going OOD), we experimented with adding noise of a fixed norm, and we tried different values starting from 0.1 Å. We have added this information in the appendix. For the synthetic data, we mixed it with the real data, with the fraction of synthetic data in a single batch being \{0%, 5%, 10%, 20%, 50%, 80%\} on average. ### Error bars (Q2) We thank the reviewer for the comment. We have run additional runs for GemNet-OC -> PaiNN on the OC20 dataset. A table can be found in the attached pdf; it has also been added to the appendix, and we reference it in the results section. The force results for the baseline model are missing, as one of the runs completely failed on the ood_both task. The force MAEs for the baseline were 45.3, 43.7, and 115.7 meV/Å. ### Other teacher-student combinations (W3) We agree that distilling into a smaller GemNet-OC model would be an interesting experiment. However, given the limited time, we have not been able to finalize these experiments. Preliminarily, it looks like energy predictions see a substantial improvement when using the n2n loss. ### Baseline suggestion (Q6) We thank the reviewer for the suggestion for another baseline, but we are not sure what the reviewer means here by *"exact same"* - distilling to the exact same model configuration, e.g. PaiNN-big to PaiNN-big. That would be considered teacher-free KD, which we haven't explored, but it seems to be an established and well-performing method [1]. 
[1] Yuan et al., Revisiting Knowledge Distillation via Label Smoothing Regularization, CVPR 2020 --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response. > We respectfully disagree with the assertion that the improvements not being substantial. To clarify, it is great that you close 60% of the gap, but it's not the improvement of the gap I'm concerned about, it's the absolute performance of the student model. While this paper makes great progress on improving the small models, it may be that the small models are still unusable due to such a high force/energy MAE - for example, the best student force MAE in Table 3 is 42.1meV, much worse than large GemNet-OC at 25.7. It's easy to have an impressive % of closing the gap when the gap is large to begin with. > we are not sure what the reviewer means here by "exact same" Yes, I meant the exact same model configuration. The purpose of this would be to study the effect of the distillation process during training, independently of the difference in model size/architecture, and it would be good to see similar results as in [1]. (This is a minor suggestion that is more of a sanity check that the KD process is working as expected) --- Reply to Comment 1.1.1: Comment: Thank you for your response and for engaging in the discussion! We respond to any remaining concerns below. ___ >While this paper makes great progress on improving the small models, it may be that the small models are still unusable due to such a high force/energy MAE The accuracy of a model is not the only consideration for downstream applications. Applications typically rather care about the trade-off between speed and accuracy. If they would not, they could just run the full DFT calculation. This is demonstrated well by model families in other, more developed fields such as vision (e.g. the EfficientNet model family) or language (e.g. the Llama model family). **Smaller molecular GNN models can be useful** for e.g. 
pre-screening materials, simulations on "easy" in-distribution data, or for running in tandem with a larger model that provides corrections when needed. As such, KD is a method for **pushing the Pareto frontier in the speed vs. accuracy space**. The student models are indeed less accurate, but they are also **3x and 8x faster**. For KD, it is most interesting to explore configurations where the student is substantially faster, which typically comes with a similar downside in accuracy. This implies a challenging problem: **Closing a large percentage of a large gap means that the absolute improvement is also large**. >It's easy to have an impressive % of closing the gap when the gap is large to begin with. We understand your perspective, but think that the opposite might also be true: obtaining a large relative improvement of a large gap can be more challenging than achieving the same when the initial gap is small, since the former is associated with a larger absolute improvement. **Either way, we provide examples for both cases, as we experiment with teacher and student models that have variable gaps in performance.**
Summary: This paper aims to improve the performance of resource-efficient GNNs for molecular simulation, an area where top models are becoming increasingly larger and more cumbersome. Several knowledge distillation approaches are proposed with the aim of regressing the smaller student model's embeddings onto those of a larger teacher model. The paper conducts a unified empirical benchmark of various distillation approaches and demonstrates that smaller molecular GNN performance can be non-trivially boosted when trained using KD from a larger teacher model. Strengths: ## Significant motivation to accelerate GNNs for molecular simulation - Improving scalability and efficiency of GNNs specialized for molecular simulations is a worthwhile research question. The best architectures for this area are increasingly becoming very large in terms of compute requirements, so improving the performance of smaller models is certainly worthwhile. - I agree that this is the first paper to propose distillation as an approach to boost smaller models - the novelty of the overall contribution also makes this paper interesting. ## Experiments are performed in a fair and unified manner under the same pipeline - This is true to the best of my understanding and without having looked at the code. - Empirically benchmarking and demonstrating to what extent various KD techniques can boost molecular GNNs is worthwhile. ## Overall clear and well-structured presentation Weaknesses: ## Limited technical novelty - Equation 3 essentially projects teacher and student node features to a common dimension and performs regression to align the embedding spaces. And perhaps this is 'all we need' for doing effective KD for molecular GNNs. - However, in my opinion, this technical idea is of limited novelty. - I will caveat this by saying that empirical studies which may not introduce new methodology are very important. I would not reject a paper for this. I think the empirical benchmark is a strength. 
- I don't think anything about Equation 3 is very specific to GNNs for molecular simulation. While this paper may be the first to apply and evaluate these ideas for molecular simulations (which is worthwhile), I don't think the technical ideas are sufficiently novel or tailored to this area. ## Missing contextualization w.r.t. general GNN distillation literature - The study is certainly valuable for its focus on molecular simulation applications. However, better contextualization and (possibly) comparison with existing work on general purpose GNN representation knowledge distillation could further improve it. - The introduction (line 37) claims that KD is limited for regression tasks, but feature-based KD or representation distillation is a generally applicable technique regardless of what downstream task one is interested in. I currently do not see a good reason to not compare to at least some standard baseline on general purpose representation distillation, or (better) to existing work on GNN representation distillation ([LSP, Yang et al.](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_Distilling_Knowledge_From_Graph_Convolutional_Networks_CVPR_2020_paper.pdf), [G-CRD, Joshi et al.](https://arxiv.org/abs/2111.04964)). Joshi et al. could also be cited for introducing CKA-based embedding similarity analysis in the context of GNNs, which this paper's analysis also uses. - Line 87 states that distillation approaches for standard GNN architectures have only concentrated on classification tasks. I do not agree with this claim, since representation distillation approaches are agnostic to the downstream task. **Overall, I felt that the paper does not give enough credit to existing, more general purpose solutions on GNN distillation in the literature. Yes, the goal of the paper is to be an empirical study specific to molecular simulations and this is a positive (see Strengths). 
However, the proposed techniques are seemingly not bespoke to molecular simulation, so better contextualization and comparison to existing work is warranted, in my opinion.** Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The Introduction or elsewhere in the Experiments could provide a key quantification of whether/how much the proposed techniques boosted GNNs over training from scratch and baseline KD techniques. Figure 1 gives a nice illustration, but it is unclear which KD technique was used to obtain each of the three results. In each of the cases, was it the baseline or one of yours? - For line 136, please elaborate a bit on what 'complications' we need to account for? Why can't we just use the scalar component of features to perform distillation in these networks? (As my reading of this paper's results seems to suggest that distilling from vector channels was not actually that useful...) - For Table 2's EFwT column, an application-oriented question: Yes, we can distill into small and efficient models like SchNet and boost their performance slightly, but is this scientifically relevant? For instance, would you use the SchNet model after your distillation-based training for running molecular simulations? - The formulation of v2v via equations 3 and 6 was a bit worrying to me. If the student's final vector feature is regressed onto the vector feature of the teacher using mean absolute/square error, won't doing this fail to account for how these quantities are equivariant/geometric vectors in 3D space? - For example, there could be a very high MSE between two rotated versions of the same set of vectors. However, if you found the optimal alignment and then computed MSE, it would be zero. - Thus, comparing two sets of geometric vectors requires first finding an optimal alignment/rotation matrix and then computing the RMSE. Just computing unaligned RMSE may not be very informative. 
This is standard practice in fields such as protein structure prediction. - Have I misunderstood this aspect of the work? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: There is some discussion on limitations and potential negative societal impact. I would have been interested in deeper discussions around both the technical aspects as well as application-specific limits of this line of work. For instance, whether GNN model expressivity, especially for geometric and equivariant GNNs, plays a role in the choice of teacher-student pairings? There may be classes of functions that certain student models can provably not learn ever (eg. distinguishing types of neighborhoods or entire geometric graphs). The paper does not propose any principles for selecting teacher-student pairings for molecular simulations. (This is related to my question on whether a distilled SchNet is ever really useful in practice, too.) Additionally, for these OCP datasets, could it be interesting to distill across tasks? What may happen if we distill from S2EF models into IS2RE models (a task with significantly lesser data), and is this interesting? Did you try this? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the detailed feedback! We are happy to hear you appreciate the importance and novelty of our work, and the quality of our empirical results. We also appreciate your insightful comments. We carefully respond to your concerns below. ### Technical novelty of our work and contextualization (W1+W2) We agree that putting our work into context is central, and we made multiple improvements to address this gap. We have ***improved the contextualization*** in our paper based on the discussion points below (see the general response for the exact changes). Following the suggestion of the reviewer, we ran ***additional experiments with LSP and GSP (see the table in the attached pdf)***. We added a reference to Joshi et al. (2021) as a paper also using CKA. We furthermore want to highlight that: 1) Eq. 3 represents a general framework for feature-based KD, which we agree is not new on its own. KD research is focused on finding new methods based on that framework, i.e., finding appropriate representations (neurons, layers, combinations of layers, relationships), transformations (projections, fusion, fission, etc.), and losses (MSE, MAE, LSP, GSP, G-CRD). Thus, a new method is not unoriginal simply because it belongs to the class of feature-based KD. In fact, the GSP and G-CRD methods mentioned (Joshi et al. (2021)) are new types of KD loss, which also fall under the framework defined by Eq. 3. 2) Here, we do not create a new KD loss but tackle the question of what type of features to distill in molecular GNNs (a separate, unique degree of freedom in our context). We define and study 5 strategies for distilling different features across molecular GNNs. They are indeed built upon the overarching framework of feature-based KD, but adapted to the requirements of this field, e.g. how to respect the physics of the problem, which types of features to use, and how to transform equivariant and directional representations. 
3) As such, they are not directly comparable to methods like LSP and G-CRD. In fact, our strategies can be regarded as general frameworks that can use LSP, GSP, or G-CRD as a KD loss. In our study, we decided to use the MSE loss as we found it more appropriate for our setup. Contrastive learning (the backbone of LSP, GSP and G-CRD) is useful for node classification, where you want to contrast different classes of nodes, but it is less clear how to use it for graph-level regression tasks on molecules (i.e. atoms are a part of the whole). We had already experimented with GSP as a loss; results are available in the appendix, where we perform ablation studies of layers, transformations and losses. MSE seemed to work significantly better. 4) We agree that as a general approach, feature-based KD is mostly agnostic to the downstream task. Yet, ***prior work on applying feature-based KD to regression tasks is quite limited. Most feature distillation methods are tailored towards classification tasks*** - e.g. still use logit information (e.g. LSP, GSP, and G-CRD). ### Definition of v2v KD (Q4) We believe this is a misunderstanding. Note that the vectorial features of *both* the student and the teacher are equivariant by construction, and PaiNN uses this to make equivariant force predictions. It is hence desirable that the directions of the vectorial features are also correct, and rotating the vectors before applying the loss would not encourage this. It is therefore ***important to use a loss that encourages vectors to align in both magnitude and direction***. To make that more clear in the text, ***we have added the following in connection to Eq. 6***: "Note that as these features are equivariant to rotations, it is important to use a loss that encourages vectors to align in both magnitude and direction - e.g. MSE." 
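As a rough illustration of why this matters (a hypothetical numpy sketch with made-up shapes, not the paper's implementation): an MSE loss between equivariant vectorial features is sensitive to direction, whereas a loss on vector norms alone would assign zero error to arbitrarily rotated features.

```python
import numpy as np

def v2v_mse(student_vecs, teacher_vecs):
    """MSE between vectorial features of shape (n_atoms, 3, n_channels).

    Penalizes mismatches in both magnitude and direction."""
    return np.mean((student_vecs - teacher_vecs) ** 2)

def norm_only_loss(student_vecs, teacher_vecs):
    """A loss on vector norms only -- blind to direction."""
    return np.mean((np.linalg.norm(student_vecs, axis=1)
                    - np.linalg.norm(teacher_vecs, axis=1)) ** 2)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(5, 3, 8))

# Rotate the teacher's vectors by 90 degrees around the z-axis.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
rotated = np.einsum("ij,ajc->aic", R, teacher)

# Rotation preserves norms, so the norm-only loss sees no error...
print(norm_only_loss(rotated, teacher))  # 0.0
# ...while the v2v MSE correctly flags the directional mismatch.
print(v2v_mse(rotated, teacher))         # > 0
```

This is why rotating the vectors before applying the loss (as the reviewer's question suggested) would remove exactly the directional signal the distillation is meant to transfer.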
### Experimental results are available in Table 3 (Q1) We are puzzled by the comment that we should "provide a key quantification of whether/how much the proposed techniques boosted GNNs over training from scratch and baseline KD techniques." We provide detailed experimental results in Table 3 (and Table 12 for COLL), which give all the details mentioned in this comment. Could you clarify what you think is missing? We indeed summarize these in Fig 1, where we depict the gap between student and teacher models. We are happy the reviewer found this figure useful. We agree that the caption of Figure 1 was not clear enough. Now it explicitly says that percentages represent our best results for each teacher-student configuration depicted. ### Clarifying line 136 (Q2) We apologize for any confusion here. ***We have made that more explicit*** by adding *"as one needs to extract and align representations corresponding to comparable features in both models."* ### SchNet is scientifically relevant (Q3) We see the point with respect to OC20, but SchNet is likely still relevant for cases with a well-sampled chemical space. Also, simple, well-established methods protect us from overfitting to highly-engineered models. We decided to use SchNet as it is still one of the most widely known models in the field. ### Discussing limitations We thank the reviewer for the very helpful suggestions regarding potential limitations. - ***"defining teacher-student pairings based on their (provable) expressivity"*** - understanding the theory behind KD is a very active area of research (how it works, when it works, etc.). It is definitely an interesting future research avenue, so ***we added that to future work***. - ***"distillation across OCP tasks"*** - This is not something we have considered, but we think you have a point that this extra data could be used. However, we don't envision this within a **KD framework**, but rather a **pretraining framework**. 
The reason for this is that the two tasks are very different, and we think it is unclear how the knowledge of a teacher should be distilled into a student. We have ***expanded our discussion into limitations/future work*** and added the aforementioned points, as well as other suggestions from the reviewers. --- Rebuttal Comment 1.1: Title: Questions clarified; score increased Comment: Thank you for the rebuttal. My questions have mostly been clarified and I'm happy to increase my score to reflect this. --- > Most feature distillation methods are tailored towards classification tasks - e.g. still use logit information... I still disagree. By definition, no feature based distillation methods use the logits. They use the features to perform distillation. > Definition of v2v KD (Q4): We believe this is a misunderstanding. Note that the vectorial features of both the student and the teacher are equivariant by construction... Thank you for the clarification, understood. > Could you clarify what you think is missing? Perhaps which KD method was the one used for each of the improvements in Figure 1. But overall, I agree. The interested reader can just spend some time reading through the results table to figure the same thing out. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for engaging in the discussion! We are very happy to see that our improvements and arguments have addressed your concerns and made a substantial difference in your evaluation! ___ >By definition, no feature based distillation methods use the logits. They use the features to perform distillation. Yes, we fully agree that, as a general method, the distillation of features is not targeted to a specific downstream task. However, we note that most methods proposed in the literature have been developed and/or evaluated with classification in mind, including the aforementioned LSP, GSP, and G-CRD, which also make use of logit information through their additional vanilla KD term.
Summary: This paper investigates the use of knowledge distillation (KD) to accelerate molecular dynamics using graph neural networks (GNNs) and improve their predictive accuracy. The authors describe their experiments with various KD protocols, such as node-to-node, edge-to-node, and vector-to-vector KD, for three different molecular GNN architectures, namely PaiNN, SchNet, and GemNet-OC. They demonstrate that KD can improve the speed of molecular GNNs with minor compromise on accuracy, without altering their architecture, by studying the OC20 catalysts. Strengths: The paper demonstrates the approach of knowledge distillation for developing faster GNN-based interatomic potentials for molecular simulations. This is an important problem to be addressed in the field. Weaknesses: There are several weaknesses of the paper, as outlined below. 1. The authors have selected three different GNNs as the teacher models. However, the performance of the GNNs is not comparable to the state-of-the-art for the same architectures on the same dataset (if I am not mistaken), as confirmed by other publications and the OC20 dashboard. 2. The authors selected only one dataset for the evaluation, which is OC20. However, OC20 is not primarily a molecular simulation dataset as I understand it. It is a dataset for energy and force prediction primarily based on DFT minimization. Although the task of force and energy prediction is similar to molecular simulations, the dynamics task is not tested. Other more complex datasets on dynamics are not included in the testing. 3. Although this is one of the first attempts at KD in molecular simulations, the improvement in performance of most of the models is only marginal and not substantial. 4. The stability of these models in molecular simulation is not tested. Further, error evaluation metrics are not exhaustively presented [1]. [1] Fu, X., Wu, Z., Wang, W., Xie, T., Keten, S., Gomez-Bombarelli, R. and Jaakkola, T., 2022. 
Forces are not enough: Benchmark and critical evaluation for machine learning force fields with molecular simulations. arXiv preprint arXiv:2210.07237. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The choice of the specific architectures is not clear. Although the authors have tried to be extensive in their selection of three different architectures, none of these architectures gives SOTA performance. Accordingly, the reasons for choosing these architectures are not clear. It would be interesting to try the approaches on SOTA architectures such as Equiformer, NequIP, or Allegro. 2. The baselines chosen seem not really fair and give mostly negative results, which suggests that they are incapable of any knowledge distillation. Fair baselines should be implemented based on regression tasks and appropriate loss functions. 3. More extensive studies on different and challenging datasets should be carried out. This is available in most of the papers on machine-learned potentials, such as 3BPA, AcAc, alanine dipeptide, or LiPS. 4. The stability of the KD-based models needs to be analyzed more carefully to understand the limitations and applicability of the models. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have not included a detailed discussion on the limitations of the study. This should be included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and are glad to hear that they acknowledge the importance of the problem we are tackling! We also appreciate your insightful comments. We carefully respond to your concerns below. ### We have an additional benchmark dataset: COLL (W2+Q3) We wish to highlight that ***we have actually run experiments on 2 distinct datasets, including a molecular dynamics dataset - COLL!*** ***Please see the discussion in our general response.*** ### OC20 is a highly relevant molecular simulation dataset (W2+Q3) We are confused about the statement that OC20 is not a molecular simulations dataset - ***it is a dataset concerned with the simulation of catalyst-adsorbate systems***. In this work, we tackle the task of predicting energy and forces, which is the most general task in OC20 and has the broadest applicability across catalysis and related fields (Chanussot *et al.* (2021)). Also, note that performing DFT minimization in high throughput searches is of high interest in the materials science domain, and inference speed is of high importance to enable covering as large a part of chemical space as possible. ***OC20 is the largest dataset for this task***, and we thus think this is a highly relevant dataset. ### The models we study were SOTA at the time of research (W1+Q1) We also want to highlight that ***the models investigated in this paper were SOTA on both OC20 and COLL at the time of research and experimentation***. Looking at the relevant S2EF leaderboard, even today there are only three models better than GemNet-OC: SCN, eSCN and EquiformerV2. However, note that the public weights of SCN and eSCN were only released in February and March this year, in parallel to our work (Aug 2022-May 2023), and the weights for EquiformerV2 were released in July 2023 (after our submission). Hence, GemNet-OC was SOTA at the time of our work (***and even today is still very representative of the SOTA***). 
The other models we study (PaiNN and SchNet) are chosen as they represent models that are faster but do not reach SOTA performance. ***To make that more clear to the reader***, we include that information in the text. ### Performance improvements are not marginal (W3) We respectfully disagree with the assertion that most of the improvements we achieve are marginal. Using the KD strategies we study, ***we consistently close >60% of the gap*** between teacher and student models for energy predictions, and 10-20% for forces. This is especially true on the COLL dataset, which the reviewer might have missed (we hope this is now more apparent after moving these results to the main text), where these numbers go ***up to 96.7% and 62.5% for energy and force prediction, respectively*** (refer to the COLL table in the one-page pdf document attached to our response). And still, we want to reiterate that this ***improvement is out-of-the-box*** with respect to inference throughput, which is the bottleneck we tackle in this paper. In other words, the improvement does not come at the expense of slower models at inference, meaning that even a small improvement is still useful. Moreover, ***this is the first work in the area***, and we hope we can inspire more work to further close the gap. ### Vanilla KD strategies are fair and not arbitrary (Q2) We are not sure why our 2 vanilla KD approaches have come across as unfair to the reviewer. Our vanilla KD methods are ***not arbitrary***. These are strategies inspired by vanilla KD from classification, adapted to our context, which have undergone significant optimization as part of our work. We agree that the results we achieve with vanilla KD are not always perfect, but ***they do prove promising in many experiments, often outperforming other techniques***. 
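For readers unfamiliar with how vanilla, logit-based KD translates to regression, one common adaptation (a hypothetical sketch; the names and the weighting scheme are our illustration, not the paper's exact objective) treats the teacher's continuous predictions as soft targets alongside the ground-truth labels:

```python
import numpy as np

def vanilla_kd_regression_loss(student_pred, teacher_pred, labels, alpha=0.5):
    """Hypothetical vanilla-style KD objective for regression:
    a convex combination of the supervised loss and a loss that
    pulls the student towards the teacher's predictions."""
    supervised = np.mean((student_pred - labels) ** 2)
    distill = np.mean((student_pred - teacher_pred) ** 2)
    return alpha * supervised + (1.0 - alpha) * distill

labels = np.array([1.0, 2.0, 3.0])
teacher = np.array([1.1, 1.9, 3.2])   # accurate but imperfect teacher
student = np.array([0.5, 2.5, 2.0])

loss = vanilla_kd_regression_loss(student, teacher, labels)  # ~0.61
```

The teacher term acts as a smoothed, dense training signal, playing the role that softened logits play in classification KD.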
***Especially on the COLL dataset***, which the reviewer might have missed (we apologize again and hope this is more apparent now with the results moved to the main text): looking at force prediction, Vanilla (2) outperforms the other methods on 2 of the 3 teacher-student configurations, and achieves only slightly worse results than n2n on the third. Also, we want to highlight that there might have been a misunderstanding about the inclusion of the two vanilla KD approaches in our study due to how we had originally phrased some parts of the text. The two vanilla KD strategies ***have not been investigated in this field before***, so analyzing their performance is as much part of our work as are the other strategies. In other words, these are additional strategies we study, and not really baselines we want to compare with or necessarily beat. However, we agree that this may not be clear from the phrases we use in the paragraph where we define these two methods. ***We make that more clear in the text (see general response)***. ### Discussing limitations and future work We agree that our previous discussion on limitations/future work was somewhat narrow, only concerning potential issues around increasing training times. Accordingly, ***we have expanded this section and provided a more comprehensive account of the limitations of our methods and future work***, mentioning future directions like combining KD strategies (e.g. n2n and v2v); extending the framework to other types of features, molecular tasks and datasets; improving the theoretical intuition behind KD; and performing a more comprehensive stability analysis of the approach. ### Stability analysis and error evaluation metrics (W4+Q4) We concur that a stability analysis would indeed be valuable. However, considering that this is a first work on KD for molecular GNNs, we have used well-established metrics and have categorized stability analysis as beyond the scope of the present research. 
We thank you for the suggestion and have ***included this aspect in our limitations and future work discussions***. In terms of error evaluation metrics, we do provide all the metrics that are relevant for the datasets and tasks we benchmark on - OC20 and COLL. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: > We have an additional benchmark dataset: COLL (W2+Q3) Thank you for drawing attention to the COLL dataset and adding it to the main manuscript. > OC20 is a highly relevant molecular simulation dataset (W2+Q3) I am not questioning the relevance of the dataset. The main issue was that the experiments were performed only on one dataset, which focused on one task. There are several papers on MD potentials etc., and there is a reason why they demonstrate it on different datasets instead of one. Because the tasks associated with each dataset are different, it is important to show whether the approach works in diverse situations. For instance, whether a trained potential is generalizable to unseen temperatures (3BPA), compositions (rMD17), pressures, and sizes (in terms of the number of atoms, for example, LiPS or water), and whether it can capture bond-breaking (AcAc dataset), etc. It is not clear whether the student potential has these capabilities. > Performance improvements are not marginal (W3) I still disagree with this. I think showing percentage improvement is not necessarily a good way to argue in this case, as another reviewer pointed out, when there is a huge percentage difference to begin with. I think the question would be whether they are acceptable potentials or not, and for this, stability analysis is important. As of now, it is not clear whether the new potential is truly usable or not. > The models we studied were SOTA at the time of research (W1+Q1) Again, I disagree. NequIP, Allegro, MACE, BOTNet, etc. were all released in 2022 and much before the submission. Equiformer V1.0 was released in ICLR 2022. 
It is possible that they were not tested on the OC20 dataset, the specific dataset that the authors have chosen. But these models are demonstrated on several other datasets. This also highlights why choosing one particular dataset and relying on the experiments on this dataset is not necessarily a good approach. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for engaging in the discussion! We are happy to hear that your concerns about us having only one dataset have been addressed after highlighting our results on the additional COLL dataset. We respond to your remaining concerns below. ### **Generalization and diversity** We agree that it is important to show whether the approach works and is generalizable in diverse situations. This is actually why we have selected to do our analyses on the OC20 and COLL datasets, in particular. **OC20 is by far the largest and most chemically diverse benchmark.** It thoroughly tests generalization to unseen system sizes and compositions, with 3/4 of the validation set being out-of-distribution data. **COLL is also a diverse benchmark dataset**, comprising highly distorted structures at high temperature, including bond formations. As such, we do believe we demonstrate extensive validation and generalization, sufficient to support our goal to showcase the potential of KD in the area. ### **Evaluation metrics and stability analysis** We also fully agree that long-term stability is an interesting and important research direction. However, it is also important to note that stability analyses are quite involved and computationally expensive. This is why **stability analysis** has been investigated mainly in cases where it is crucial, e.g. when looking at statistical properties over long simulation rollouts, and **is not something that is routinely done in papers similar to ours**. 
**Almost all of the published work in the area relies on the same metrics we use** and does not perform the suggested stability analysis, including very recent papers [1,2,3]. In this work, **we used common, established error metrics**, which are the standard for datasets like OC20, COLL, and others. As in other works, we think this is sufficient to demonstrate that our methodology is promising. Still, to recognize the significance of stability analyses, we include this as future work in our paper. [1] Passaro et al., Reducing SO(3) Convolutions to SO(2) for Efficient Equivariant GNNs, ICML 2023 [2] Duval et al., FAENet: Frame Averaging Equivariant GNN for Materials Modeling, ICML 2023 [3] Liao et al., EquiformerV2: Improved Equivariant Transformer for Scaling to Higher-Degree Representations, 2023 ### **Performance improvements** The accuracy of a model is not the only consideration for downstream applications. Applications typically rather care about the trade-off between speed and accuracy. If they would not, they could just run the full DFT calculation. This is demonstrated well by model families in other, more developed fields such as vision (e.g. the EfficientNet model family) or language (e.g. the Llama model family). **Smaller molecular GNN models can be useful** for e.g. pre-screening materials, simulations on "easy" in-distribution data, or for running in tandem with a larger model that provides corrections when needed. As such, KD is a method for **pushing the Pareto frontier in the speed vs. accuracy space**. The student models are indeed less accurate, but they are also **3x and 8x faster**. For KD, it is most interesting to explore configurations where the student is substantially faster, which typically comes with a similar downside in accuracy. This implies a challenging problem: **Closing a large percentage of a large gap means that the absolute improvement is also large**. 
We understand your perspective that percentage improvements can potentially be easier to achieve when there is an already huge percentage difference, but we think that the opposite might also be true: obtaining a large relative improvement of a large gap can be more challenging than achieving the same when the initial gap is small, since the former is associated with a larger absolute improvement. **Either way, we provide examples for both cases, as we experiment with teacher and student models that have variable gaps in performance.** And we highlight percentages of closing the gap because this is _exactly_ what KD tries to do - i.e. to reduce the gap between models. ### **SOTA models** We agree that there are many other state-of-the-art models that would be interesting to investigate. However, this work is not about benchmarking a specific model or proposing a new SOTA, but about showing that KD is a promising general method in this area. As such, **we evaluate KD on well-performing, established architectures**: GemNet-OC was the best _available_ model on the OC20 leaderboard, and PaiNN and SchNet are widely-used, more lightweight models. Importantly, these models cover a wide diversity of approaches, and KD still works consistently: GemNet-OC is based on edge representation and angles, PaiNN on vectorial representations, and SchNet on node representations. Additional models would of course make the evidence even stronger, but we think that the presented consistent improvements already show the potential of KD in this area.
Summary: The main contribution of this work is the exploration of knowledge distillation as a means to enhance the performance and scalability of GNNs for molecular simulations. The authors introduce custom KD strategies, namely node-to-node, edge-to-node, and vector-to-vector distillation, to overcome limitations of KD for regression tasks in GNNs. The performance of the KD protocols is evaluated by training student models to predict molecular properties such as energy and forces. The results show improvement in the performance of student models. Strengths: The paper is well written and the topic is of practical importance. Weaknesses: See questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: It would have been interesting to see these methods applied to models with tensor features, like e3nn, which are much heavier to train than the models in the paper. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the concise, positive review! We are glad to hear that they appreciate the presentation of our work and acknowledge its importance! ### Applying KD to tensorial models We also thank the reviewer for their suggestion to apply our methods to tensorial models like e3nn, but we kindly regard that as beyond the scope of this paper. The focus of this work was to conduct an empirical evaluation of the utility of KD in the area, which we show across 3 different architectures that represented the SOTA at the time of experimentation. Recognizing the positive findings of our work, we agree that it would be an interesting extension to also explore how they translate to tensorial models. To carry this message to the reader, ***we have added this as an additional point in our discussion on future work***.
Rebuttal 1: Rebuttal: We wish to take this opportunity to thank the reviewers for their time and effort in assessing our work. We are deeply grateful for the reviews we have received. We were extremely happy to hear that the reviewers recognized the importance of our work and the quality of our results and presentation! We also appreciate the reviewers' insightful comments and feedback, which significantly contributed to the improvement of our presentation. We have taken the time to carefully respond to all of their concerns separately. Below, we include some of the central points of the reviews, which we believe are important to clarify. ### Additional benchmark dataset: COLL We would like to highlight that we have actually run experiments on ***2 distinct datasets*** - ***OC20-2M*** (the biggest and most diverse dataset in the area) and ***COLL*** (a challenging dataset for molecular dynamics). We apologize if our ***results on COLL***, originally presented in Table 12 in the appendix due to space constraints, escaped the reviewers' attention. Recognizing the importance of such additional validation, ***we have decided to move Table 12 to the main text*** and extend our discussion of the results on the COLL dataset. We have ***also revisited Figure 1 to include results on COLL*** to make the presence of an additional benchmark dataset more apparent. We have also included the table summarizing our results on COLL in the one-page pdf document attached to our response. Note that we had previously had an error in the percentage calculations for GemNet-OC -> PaiNN-big on COLL, resulting in significantly smaller percentage improvements than those actually achieved. We have fixed that and updated the table. ### Clarification on Vanilla KD strategies We believe there might have been a misunderstanding about the inclusion of the two vanilla KD approaches in our study due to how we had originally phrased some parts of the text. 
The two vanilla KD strategies (inspired by the vanilla, logit-based KD used in classification problems (Hinton *et al.* (2015))) ***have not been investigated in this field before***, so analyzing their performance is as much part of our work as are the other strategies. In other words, these are additional strategies we study, and not really baselines we want to compare with and necessarily beat. As such, the fact that they can sometimes outperform n2n, e2n, and v2v does not undermine our general line of work - to demonstrate that KD is a viable strategy in the context of molecular GNNs. However, we agree that this may not be clear from the phrases we use in the paragraph where we define these two methods. ***To make that more clear in the text, we change***: - the caption of the paragraph from *"Baseline KD strategies"* to *"Additional KD strategies"*; - and the introduction of that paragraph from *"To validate the performance of our KD strategies, we evaluate their performance against 2 vanilla-based KD approaches suitable for regression tasks."* to *"We further evaluate two additional KD approaches inspired by the vanilla logit-based KD used in classification, which we augment to make suitable for regression tasks."* ### Other changes to the paper: - ***Additional experiments (see one-page pdf)*** - Additional seeds for GemNet-OC -> PaiNN-big (see response to reviewer 6ZfK) - Additional losses, LSP and GSP (see response to reviewer n7yW) - Additional results GemNet-OC -> GemNet-OC-small (currently training - see response to reviewer 6ZfK) - ***We have revisited parts of our methods section to improve the contextualization*** of the proposed techniques (see response to reviewer n7yW). More notably, we have added the following to the definition of *n2n*: *"Note this is a general approach that utilizes node features only, making it applicable to standard GNNs. Here, we want to enforce the student to mimic the representations of the teacher for each node (i.e. 
atom) independently, so we use a loss that directly penalizes the distance between the features in the two models, such as MSE (similar to the original formulation of feature-based KD in Romero et al. (2014)). Other recently proposed losses L_feat for the distillation of node features in standard GNNs specifically include approaches based on contrastive learning (Yang et al. (2021), Joshi et al. (2021), Yu et al. (2022), Huo et al. (2022)) and adversarial training (He et al. (2022)). We do not focus on such methods as much since they are better suited for (node) classification tasks (e.g. contrasting different classes of nodes), and not for molecule-level predictions."* - ***We have expanded our discussion on limitations and future work*** to accommodate the suggestions of the reviewers, mentioning future directions like combining KD strategies (e.g. n2n and v2v); extending the framework to other types of features, molecular tasks, and datasets; improving the theoretical intuition behind KD and its applicability; and performing a more comprehensive stability analysis of the approach. Pdf: /pdf/955e02c823eb43922f94ac3d64ac5edbe3dcad04.pdf
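To make the n2n definition quoted above concrete, a minimal sketch (hypothetical feature sizes and names; the linear projection is one common choice when student and teacher widths differ, in the spirit of Romero et al. (2014)):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical per-atom node features from a student and a (wider) teacher.
n_atoms, d_student, d_teacher = 10, 16, 64
h_student = rng.normal(size=(n_atoms, d_student))
h_teacher = rng.normal(size=(n_atoms, d_teacher))

# Learnable linear projection lifting student features to the teacher's width.
W = rng.normal(size=(d_student, d_teacher)) / np.sqrt(d_student)

def n2n_loss(h_s, h_t, W):
    """Per-node MSE between projected student features and teacher features."""
    return np.mean((h_s @ W - h_t) ** 2)

loss = n2n_loss(h_student, h_teacher, W)
```

In training, this auxiliary loss would be added to the usual energy/force objective so the student mimics the teacher's per-atom representations.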
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes new knowledge distillation strategies to enhance the hidden representations in molecular GNNs; downstream regression tasks of energy and force prediction show promising results. Strengths: 1. The paper is well written and easy to follow. 2. The experiments show that knowledge distillation can improve the regression performance. Weaknesses: 1. To handle various structures in the molecule dataset, the paper proposes to use some GNN model to extract hidden features and do feature-based KD on top of that; the novelty is limited. 2. More benchmark datasets need to be included to demonstrate the effectiveness. Also, in Table 3, vanilla KD can outperform the proposed method in certain scenarios. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: see above Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We are glad to hear that you like the presentation of our work and highlight the improvement in performance we achieve! We also appreciate your insightful comments. We carefully respond to your concerns below. ### We have an additional benchmark dataset: COLL (W2.1) We would like to highlight that we have actually run experiments on ***2 distinct datasets*** - ***OC20-2M*** (the biggest and most diverse dataset in the area) and ***COLL*** (a challenging dataset for molecular dynamics). Due to space constraints, we had to put the results on COLL in the appendix, which might have made them easy to miss. We now moved these results into the main paper. ***Please see the discussion in our general response.*** ### Vanilla KD is part of our investigation, not a baseline to beat (W2.2) We believe there might have been a misunderstanding about the inclusion of the two vanilla KD approaches in our study due to how we had originally phrased some parts of the text. The two vanilla KD strategies (inspired by the vanilla, logit-based KD used in classification problems (Hinton *et al.* (2015))) ***have not been investigated in this field before***, so analyzing their performance is as much part of our work as are the other strategies. In other words, these are additional strategies we study, and not really baselines we want to compare with and necessarily beat. As such, the fact that they can sometimes outperform n2n, e2n, and v2v does not undermine our contribution - to show that KD is a viable strategy in the context of molecular GNNs and compare different methods. However, we agree that this may not be clear from the phrases we use in the paragraph where we define these two methods. 
***To make that more clear in the text, we change***: - the caption of the paragraph from *"Baseline KD strategies"* to *"Additional KD strategies"*; - and the introduction of that paragraph from *"To validate the performance of our KD strategies, we evaluate their performance against 2 vanilla-based KD approaches suitable for regression tasks."* to *"We further evaluate two additional KD approaches inspired by the vanilla logit-based KD used in classification, which we augment to make suitable for regression tasks."* ### Novelty of our work (W1) The reviewer does not mention a reason behind their assertion that our work is of limited novelty, so we are not sure what the problem may be. ***We believe our work is a novel contribution to the field because:*** - *We introduce KD to molecular modelling*: we explore the utility of KD for molecular GNNs for the first time; - *Technical novelty*: we adapt feature-based KD to explore custom KD strategies (n2n, e2n, v2v, 2xVanilla) for the distillation of representations in equivariant and directional molecular GNNs. These are indeed built upon the established framework of feature-based KD, but they have been significantly adapted to the requirements of this field (e.g., how to respect the physics of the problem, which features to use, and how to perform distillation). - *Comprehensive empirical analysis*: we conduct extensive empirical analyses and ablation studies of our framework and its components (features to distill, transformation functions, losses, data augmentation methods). As such, it perfectly ***aligns with the definition of novelty outlined in NeurIPS' guidelines***. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I read the whole rebuttal and appreciate authors' response. I increased my rating.
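To make the feature-based KD strategies discussed above concrete, here is a minimal sketch (our own illustration, not the authors' code) of a feature-matching distillation loss: a mean squared error between student and teacher hidden node features, with an optional linear projection when the two hidden dimensions differ.

```python
def feature_distillation_loss(student_feats, teacher_feats, proj=None):
    """MSE between student and teacher per-node feature vectors.

    student_feats / teacher_feats: lists of per-node feature vectors.
    proj: optional projection matrix (list of rows, shape d_s x d_t)
          mapping student features into the teacher's feature space
          when the two hidden dimensions differ.
    """
    if proj is not None:
        student_feats = [
            [sum(f[i] * proj[i][j] for i in range(len(f)))
             for j in range(len(proj[0]))]
            for f in student_feats
        ]
    total = count = 0
    for s, t in zip(student_feats, teacher_feats):
        for a, b in zip(s, t):
            total += (a - b) ** 2
            count += 1
    return total / count
```

In a training loop this term would be added, with some weight, to the usual energy/force regression loss; the projection matrix is a learnable transformation in typical feature-based KD setups.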
k-Median Clustering via Metric Embedding: Towards Better Initialization with Differential Privacy
Accept (poster)
Summary: This paper studies the $k$-median problem in a discrete metric space. Here there is a metric space $(U, \rho)$ where $|U| = n$, and we are given a multiset $D$ of datapoints (which is a subset of $U$). The goal is to select $k$ centers $F$ from $U$ that minimize the sum of the distances of points in $U$ to their closest center. A popular approach for $k$-median is local search (aka Lloyd's algorithm) where we try to improve the solution locally by replacing one center with another; this is done until such a swap does not yield enough improvement (i.e. a factor of $1 - \alpha/k$) over the current solution. A main component of Lloyd's algorithm is initialization: a better initialization can lead to faster convergence. This paper proposes a new initialization for $k$-median that improves upon the popular $k$-median++ initialization (Arthur & Vassilvitskii, 2007). The latter gives an $O(\log k)$-approximation, whereas the new algorithm yields $O(\log \min\{k, \Delta\})$ where $\Delta$ is the ratio between the smallest and largest distances in $U$. The proposed initialization works as follows: first embed the metric into a tree metric and then approximate $k$-median on the tree metric. The former is done using a known algorithm of (Blelloch et al., 2017) and the distortion of $O(\log \min\{k, \Delta\})$ follows from a classic result of (Bartal, 1996). The latter is the main contribution of the paper: the authors give an $O(1)$-approximation algorithm that runs in time $O(n \log n)$, which is faster than the known exact algorithm (Shah, 2003), which requires $O(k n^2)$ time. The algorithm works in two stages. In the first stage (Algorithm 2), it uses a greedy algorithm to select disjoint subtrees that "cover" most of the datapoints. Then, in the second stage (Algorithm 3), one node is selected from each subtree as a center. The authors also consider $k$-median under *differential privacy (DP)* constraints.
Roughly speaking, DP requires that the output distribution of the algorithm does not change too much when we add/remove a single datapoint. To achieve DP, the authors add Laplace noise to the number of datapoints in each subtree, before running the non-private procedure described above. The noise is scaled geometrically, which the authors show is sufficient to achieve an additive error of $O(k \Delta \log n / \epsilon)$. Combined with the DP local search algorithm of (Gupta et al., 2010), this yields a final 6-approximate algorithm with an additive error of $O(k^2 \Delta \log n \log\log n / \epsilon)$, which improves upon the additive error of $O(k^2 \Delta \log^2 n / \epsilon)$ from Gupta et al.'s paper. Finally, the authors empirically compare the new initialization algorithm with k-median++ initialization and random initialization using two datasets, MNIST and a synthetic graph dataset. The experiments indeed show a reduction in the number of iterations needed for non-private k-median, and a reduction in cost in the DP case. ## Post-Rebuttal Comment Thank you to the authors for the rebuttal. While I agree that fixing the # of iterations can help validate HST vs other initializations, I would still like to encourage including experiments varying the # of iterations in the final version since this is a non-trivial procedure. (As I said earlier, it could be possible that other, worse, initialization turns out to be equally good if we increase the number of iterations a bit.) Strengths: - Clustering is one of the most important problems in unsupervised learning, and any improvement can be impactful to the area. Similarly, differentially private clustering is one of the most important private learning problems and this paper advances our understanding of the problem. - In terms of techniques, I find the paper to be interesting and novel. There have been many papers on differentially private clustering in recent years but all of them focus on something else (e.g.
focusing on Euclidean/L_p space, or small-dimensional space). This paper tackles the general metric space setting with a new approach of better initialization. This seems interesting and might lead to more work in the future. Weaknesses: - One downside is that the contributions are quite specific: - First, the quantitative improvement is quite small, i.e. reducing $O(k \log n)$ steps to $O(k \log \log k)$ steps (or the additive error from $O(k^2 \log^2 n)$ to $O(k^2 \log n \log\log n)$ in the DP case). - Second, as also stated in the paper, exact algorithms for solving $k$-median on tree metrics are already known and therefore the main contribution of this paper is in speeding this up. Furthermore, in the larger picture, this speed-up is theoretically quite small: even though the running time for the exact algorithm was $O(k n^2)$, this was already dominated by the time used by the local search, which is at least $\Omega(k n^2)$. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In your experiments, have you considered running the different DP algorithms using different numbers of iterations? In particular, is it possible that more iterations help for other (worse) initializations? ## Minor comments The following are only for improving subsequent revisions. Please do *not* respond to them during rebuttal. - Line 23: I'm not sure what "oracle" means in this context. Maybe either clarify or remove it? - Paragraph starting at Line 101: Maybe the definition of $\eta(f)$ (sensitivity) should be moved to right after the Laplace mechanism since it also uses sensitivity? - Line 117: "tn" -> "in". - Line 154: It was a bit confusing to me what "probabilistic distribution of partition" means in this context & what guarantees it gives. Maybe some clarification here would be good. - Lemma 3.4: Again, it is not super clear from the text what the expectation is over. Also adding the quantifier "for all $x, y \in U$" would be good. - Line 301: Missing reference.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer MjrB Thanks for your feedback. Questions: In your experiments, have you considered running the different DP algorithms using different numbers of iterations? In particular, is it possible that more iterations help for other (worse) initializations? Answer: We note that the major goal of our experiments is to validate that HST and DP-HST provide better initial centers than k-median++ based methods. Therefore, the number of iterations used for the subsequent local search does not affect the main comparison at the initial centers, where the HST methods perform better. Increasing the number of iterations in DP local search may not always improve the performance, because more iterations $T$ lead to a smaller privacy budget $\epsilon'=\frac{\epsilon}{4\triangle (T+1)}$ used in each iteration. Thus, there is a trade-off in DP local search. Again, we appreciate your review of our work. We hope our reply answers your questions. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for the rebuttal. While I agree that fixing the # of iterations can help validate HST vs other initializations, I would still like to encourage including experiments varying the # of iterations in the final version since this is a non-trivial procedure. (As I said earlier, it could be possible that other, worse, initialization turns out to be equally good if we increase the number of iterations a bit.) --- Reply to Comment 1.1.1: Comment: Dear Reviewer MjrB, Thanks for your reply. While the number of local search iterations is independent of the initialization quality itself, we will be happy to include some more figures with more local search iterations in the paper as kindly suggested, if an additional page is allowed. If no extra page is allowed, we can put them in the appendix. Thanks again for your review of our work and many nice suggestions. Title: Thank you
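The privacy-budget trade-off quoted in the answer can be sketched numerically (our own illustration of the stated formula, with `aspect_ratio` standing in for $\triangle$; not the authors' code):

```python
def per_iteration_budget(epsilon, aspect_ratio, num_iters):
    # Per-iteration budget eps' = eps / (4 * Delta * (T + 1)) as quoted:
    # a larger iteration count T forces a smaller budget per local-search
    # step, hence more noise per step -- the trade-off described above.
    return epsilon / (4.0 * aspect_ratio * (num_iters + 1))
```

For instance, with a fixed total budget, doubling the number of local search iterations roughly halves the budget available to each swap decision.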
Summary: The paper considers the problem of finding good initializations for k-median clustering algorithms with and without differential privacy. To this end, the authors propose NDP-HST and DP-HST, two methods based on Hierarchically Well-Separated Trees to find a good initialization of k points. The theoretical approximation guarantees are analyzed and compared to existing works. HST methods are then compared to kmedian++ and random initialization on MNIST and random graph-based datasets. Strengths: - Contains extensive theoretical analysis of approximation guarantees - The paper is written in clear language - The proposed method is relatively simple and therefore more likely to be useful in practice Weaknesses: - The figures with performance comparisons only show estimates of the means, while differential privacy requires algorithms to be randomized and thus they often have large variation in scores. To understand whether the proposed method is better rather than 'lucky', the results must include some measure of variation. - HST does not improve results after running the k-median algorithm except (seemingly) in the differentially private setting, see previous point. - It is not clear to me that Algorithm 4 guarantees differential privacy, and the proof in the appendix only states that this is "straightforward, by using the composition theorem". See question 1 Minor: - Line 117: is "tn" a typo? - Line 301 is missing a citation/reference. - On line 304 it is stated that "For non-DP tasks, we set L = 6. For DP clustering, we use L = 8". This choice should be motivated; if these values have been chosen by tuning then the other methods should be tuned as well. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why does Algorithm 4 guarantee differential privacy? From my understanding, $\hat{N_v}$ constitutes a count query that should be protected with the Laplace mechanism (or Geometric mechanism). I don't understand why $2^{(L-h_v)}$ is the sensitivity. 2.
How were the parameters for the algorithms selected? (e.g. line 304) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors do not address limitations. It might be necessary to discuss the ethical implications regarding Example 2.3. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer JWq6 Thanks for your feedback. Questions 1: Why does Algorithm 4 guarantee differential privacy? From my understanding, $\hat{N_v}$ constitutes a count query that should be protected with the Laplace mechanism (or Geometric mechanism). I don't understand why $2^{L-h_v}$ is the sensitivity. Answer: The sensitivity of the count for the nodes in each level is 1. By adding $Lap(2^{(L-h_v)}/\epsilon)$ to the count, the privacy budget is $\epsilon/2^{(L-h_v)}$ for the $h_v$-th level. In the privacy analysis, since the subtrees rooted at the nodes in a same level are disjoint, removing one data point only affects one node in each level. Thus, we can add up the privacy budgets by the composition of DP, to get the total privacy cost as $\sum_i\epsilon/2^{L-i}<\epsilon$. Question 2: How were the parameters for the algorithms selected? (e.g. line 304) Answer: Theoretically, $L$ should be $O(\log n)$ where $n$ is the size of the input. Empirically, we tested several choices of $L$ and presented the one with the best performance. However, the plots for $L=7$, for example, are very similar to those of $L=6,8$. Again, we appreciate your review of our work. We hope our reply answers your questions. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal, I will keep my rating as is. I would recommend the authors to include some measure of variation in their results but of course, the results already show an improvement compared to existing methods.
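The level-wise noise scaling and budget composition described in the answer can be sketched as follows (our own illustration, assuming levels $h$ with $0 \le h < L$; the Laplace sample is drawn as a difference of two exponentials):

```python
import random

def private_level_counts(counts_by_level, epsilon, L, rng=None):
    """Add Lap(2^(L-h)/epsilon) noise to the count at tree level h.

    Level h then consumes budget epsilon / 2^(L-h); since the per-level
    count sensitivity is 1 and a point removal touches one node per level,
    the budgets compose to sum_h epsilon / 2^(L-h) < epsilon for 0 <= h < L.
    """
    rng = rng or random.Random(0)
    noisy, spent = {}, 0.0
    for h, n in counts_by_level.items():
        scale = 2 ** (L - h) / epsilon   # Laplace scale = sensitivity / per-level budget
        # Laplace(0, b) sampled as the difference of two Exp(1/b) variables.
        noise = rng.expovariate(1 / scale) - rng.expovariate(1 / scale)
        noisy[h] = n + noise
        spent += epsilon / 2 ** (L - h)  # budget consumed at this level
    return noisy, spent
```

With $L = 8$ and levels $0,\dots,7$, the total spent budget is $\epsilon(1 - 2^{-8}) < \epsilon$, matching the geometric sum in the answer.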
Summary: The paper studies the problem of finding an initial solution for k-median with privacy, which can be improved using local search methods. This is a standard approach, and privacy is well motivated in this setting. The authors develop a new algorithm for initialization using Hierarchically Well-Separated Tree (HST) techniques. They show that this can be made private, which improves on the additive error bound, compared to prior methods. The authors evaluate the private algorithm experimentally, and show improvement compared to baselines Strengths: The problem is well motivated. The improvement in the initialization using the Hierarchically Well-Separated Tree (HST) approach is quite reasonable. The proposed methods lead to improvement in the worst case additive bounds for the k-median objective and the running time. There are several technical ideas in the algorithm and analysis, which are interesting. The algorithm is evaluated empirically, and shows some improvement over other baselines. Weaknesses: The presentation is quite poor. Many definitions and the algorithm are not very well explained. Quite a lot of notation is used, and it would help if it is defined in one place, instead of readers having to keep searching for it. Some details are missing, and the techniques in the private algorithm are fairly simple. There are some issues with the privacy model, which can be fixed, but make the writing confusing in its current form. The experimental results are not too compelling. Even in the unbalanced setting, the difference between DP-HST and DP-kmedian is not big. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: The differences between the problem definitions in (1) (2) are not completely clear. Shouldn't the k-median cost be defined with respect to all the data points? That should be same in the private and non-private versions. For the private version, can the neighboring dataset D' be one with a point i\not\in U? 
Example 2.3: if the data points in D are identified through some private features, it might not be meaningful to consider a dataset D' which has one data point changed. Is it more meaningful to consider a different privacy model in terms of feature values? lines 129-130: the algorithm of Arya et al. (which is slightly different from the way it is described in Algorithm 1) allows any swap that improves the cost, but here, only the best swap is considered. Is that a mistake, or does everything work as in local search? Sections 3.1 and 3.2 are not very easy to follow. In Definition 3.1, it would be useful to clarify what the nodes and edges of T are. In line 3 of Algorithm 2, there is a reference to Algorithm 7. It might be useful to describe it informally, and note that it is given in the supplement. line 256: it would be useful to explain why the parameter used in the noise is a bound on the sensitivity. If a data point in D changes, how does the tree change? It is not very clear if the results in Fig 3 are strong enough. Even in the unbalanced setting, the difference between DP-HST and DP-kmedian is not big. It is also not clear how general the setting for the private dataset is. There are a number of editorial things to be fixed. Some of them are: line 117: "Note that tn" line 160: "\alpha= 2-HST" line 238: "in general metric space" line 241: "for more detaileds" line 243: "consider initialization method" line 301: missing reference line 311: "both l_1 and l_2". Might be useful to complete the sentence and add metrics line 318: "HST offers better initial centers"
There are no negative social impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer vfbx, Thanks for your feedback. Question 1: The differences between the problem definitions in (1) (2) are not completely clear. Shouldn't the k-median cost be defined with respect to all the data points? That should be same in the private and non-private versions. For the private version, can the neighboring dataset D' be one with a point $i\not\in U$? Answer: The non-private k-median cost in Definitions (1) is standard in the literature, defined with respect to all the data points $U$. The private k-median cost in (2) follows [Gupta et al., 2010], defined with a subset (called "demand set") of points $D \subseteq U$, which is the subset that requires privacy protection. A concrete example is given in Example 2.3. For the private k-median, both $D$ and $D'$ are subsets of U. Hence, the neighboring set $D'$ cannot have a point $i\not\in U$. Question 2: Example 2.3: if the data points in D are identified through some private features, it might not be meaningful to consider a dataset D' which has one data point changed. Is it more meaningful to consider a different privacy model in terms of feature values? Answer: In Example 2.3, we just tried to provide an example on what the demand set $D$ might be in practice. Our privacy model focuses on user-level privacy---from the output centers of $D\subset U$, an adversary cannot identify if any user is in $D$. The general setup is still protecting the k-median centers w.r.t. a removal of one node. Thus, the privacy of specific features is different problem which is not our focus in this paper. Question 3: lines 129-130: the algorithm of Arya et al. (which is slightly different from the way it is described in Algorithm 1) allows any swap that improves the cost, but here, only the best swap is considered. Is that a mistake, or does everything work as in local search? 
Answer: We would like to make a note that the local search algorithm of [Arya et al., 2004] (their Figure 1) requires the swap to improve the cost by at least a factor of $(1-c/k)$, which is the same as in our Algorithm 1. The difference is that Arya et al. do not find the best one---typically, it does the swap whenever a swap satisfying the requirement is found. In our paper, we search over all the swaps and pick the best one with the smallest cost. Both strategies are valid. This difference does not affect the theoretical analysis (error guarantees) since the key to the iterative improvement in the cost is the $(1-c/k)$ factor. Question 4: Sections 3.1 and 3.2 are not very easy to follow. In definition 3.1, it would be useful to clarify what the nodes and edges of T are. Answer: To help understand the data structure, we specifically included Figure 1 in the paper as an illustrative example of a 3-level padded decomposition and the corresponding 2-HST. Please kindly let us know if you feel this helps. Thank you. Question 5: In line 3 of Algorithm 2, there is a reference to Algorithm 7. It might be useful to describe it informally, and note that it is given in the supplement. Answer: We mentioned Algorithm 7 and Appendix A at line 168. Question 6: line 256: it would be useful to explain why the parameter used in the noise is a bound on the sensitivity. If a data point in D changes, how does the tree change? Answer: The sensitivity of the count for the nodes in each level is 1. In our algorithm, by adding $Lap(2^{(L-h_v)}/\epsilon)$ to the count, the privacy budget is $\epsilon/2^{(L-h_v)}$ for the $h_v$-th level, which is designed for the approximation analysis. The tree is built on the universe set $U$. Hence, a data point change in $D$ will not affect the tree construction. Question 7: It is not very clear if the results in Fig 3 are strong enough. Even in the unbalanced setting, the difference between DP-HST and DP-kmedian is not big.
It is also not clear how general the setting for the private dataset is. Answer: Firstly, in all the imbalanced $D$ plots in Figure 3, DP-HST performs better than DP-kmedian++. Particularly, for the initial cost, the advantage is significant: 400 vs. 600 and 65 vs. 95, for $r=100$ and $r=1$, respectively. DP-HST also outperforms DP-kmedian++ in terms of the final cost. Secondly, as we mentioned before, our setup follows the standard setting in private clustering in discrete space, e.g. [Gupta et al., 2010]. Thanks also for the suggestions on the typos. Again, we appreciate your comments on our submission. We hope our response can well address your questions. Please kindly let us know if more clarification is needed. Thank you. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response. You might want to modify example 2.3 so it doesn't look like a feature privacy type issue, and clarify neighboring datasets. For the experiments, it's hard to see from the plots how much the improvement is, and in how many instances. Some other metric, e.g., %improvement, might be helpful. A similar point has been noted as a weakness by Reviewer JWq6 --- Reply to Comment 1.1.1: Comment: Dear Reviewer vfbx, Thanks for your response and raising the score. We will follow your nice suggestion to update Example 2.3. Regarding your second suggestion, we are happy to comply and include such tables in the paper. In the following, we compute $(cost_{HST}-cost_{kmedian})/cost_{kmedian}$, the improvement of our proposed HST based methods over kmedian++, corresponding to the 3rd and 4th columns in Figure 2 and Figure 3. A negative value means our method improves the baseline.
Initial k-median cost, non-DP | | $k=2$ | $k=5$ | $k=10$ | $k=15$ | $k=20$ | |---------------|--------|--------|--------|--------|--------| | MNIST-$l_1$ | -8.0% | -5.4% | -5.4% | -5.2% | -4.6% | | MNIST-$l_2$ | -5.0% | -5.1% | -2.4% | -2.8% | -2.1% | | graph $r=100$ | -69.8% | -0.4% | 4.3% | -2.0% | -1.1% | | graph $r=1$ | -30.8% | -14.2% | -3.1% | -2.7% | -1.2% | Initial k-median cost, DP: | | $k=2$ | $k=5$ | $k=10$ | $k=15$ | $k=20$ | |---------------|--------|--------|--------|--------|--------| | MNIST-$l_1$ | -7.3% | -3.8% | -6.6% | -7.0% | -5.0% | | MNIST-$l_2$ | -3.5% | -3.3% | -6.2% | -4.1% | -5.2% | | graph $r=100$ | -39.9% | -73.2% | -10.7% | -6.5% | -10.0% | | graph $r=1$ | -28.0% | -28.7% | -20.2% | -27.1% | -20.8% | Final k-median cost, DP: | | $k=2$ | $k=5$ | $k=10$ | $k=15$ | $k=20$ | |---------------|-------|--------|--------|--------|--------| | MNIST-$l_1$ | -0.6% | -2.2% | -4.6% | -3.8% | -3.0% | | MNIST-$l_2$ | 0.1% | -0.9% | -2.5% | -2.7% | -3.4% | | graph $r=100$ | -6.8% | -22.8% | -17.0% | -8.1% | -3.2% | | graph $r=1$ | -7.7% | -17.5% | -9.4% | -14.0% | -10.1% | We see that our proposed method is better in all cases in terms of the cost at initialization, in both non-DP and DP settings. The advantage is not as significant for the final k-median cost, but in most cases, DP-HST also outperforms on MNIST. On the graph data, DP-HST is considerably better in all cases in the final cost. Again, thanks for your review and suggestions.
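Stepping back to the swap strategy discussed in Question 3 of this thread, the best-swap local search can be sketched as follows (a minimal illustration of Algorithm 1 as described in the rebuttal, with `alpha` the improvement parameter; not the authors' code):

```python
def local_search_best_swap(points, dist, k, centers, alpha=0.5):
    """Scan all single-center swaps, apply the best one, and repeat while
    the cost improves by at least a (1 - alpha/k) factor."""
    def cost(cs):
        return sum(min(dist(p, c) for c in cs) for p in points)
    current = cost(centers)
    while True:
        best_cost, best_centers = current, None
        for i in range(len(centers)):          # center to swap out
            for q in points:                   # candidate to swap in
                if q in centers:
                    continue
                trial = centers[:i] + [q] + centers[i + 1:]
                t = cost(trial)
                if t < best_cost:
                    best_cost, best_centers = t, trial
        # Stop unless the best swap improves the cost by the required factor.
        if best_centers is None or best_cost > (1 - alpha / k) * current:
            break
        centers, current = best_centers, best_cost
    return centers, current
```

Accepting any qualifying swap (as in Arya et al.) instead of the best one only changes the inner loop's exit condition; the $(1-\alpha/k)$ factor drives the convergence analysis either way.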
Summary: The submitted paper considers fast initializations for k-median. They proceed in a fairly natural manner: (1) Do a tree embedding. (2) Run a simple constant-factor approximation on the tree embedding. A cool advantage of their simple algorithm on the tree embedding is that the authors show that they can make the initialization differentially private (DP); to me that's the main added value of the paper. They can then combine the above algorithm with a DP local search to shave off a log(n) factor in the additive error of the previous best results for DP k-median algorithms under general metrics. (The improved bound is basically obtained because they can now run the local search algorithm with a smaller number of iterations). They also perform an experimental comparison of their method with k-means++, where they perform favorably. Strengths: - The simplicity and nice algorithm that is easy to implement is a clear positive. - Nice adaptation to differentially private clustering. Weaknesses: - The paper lacks a little bit in a big and impressive contribution. Yes the DP bound is improved but it is still quite far from the lower bound... - Somewhat similar to prior works. - Extending the ideas to k-means would be very interesting as then the tree embedding would be trickier to use? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - k-median++ (at least k-means++) initialization is known to be not always great empirically. That's why the authors already in the first paper proposed a greedy variant (which is the one that is the default in the scikit implementation). Why not compare with this one? - Can you elaborate a little more on what the main differences are between this and the cited work by Cohen-Addad et al? They also use tree embeddings for fast initialization if I remember correctly. Small things: - Intro l 35: wouldn't a naive adaptation of Lloyd work in general metrics?
- l 117: "in" - l 130-132: the choice of alpha should show up in the approximation guarantee (it is not exactly 5 right?) - l 280: "Following previous work, " Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer 5iV1, Thanks for your feedback. Q1. k-median++ (at least k-means++) initialization is known to be not always great empirically. That's why the authors already in the first paper proposed a greedy variant (which is the one that is the default in the scikit implementation). Why not compare with this one? Answer: In the greedy variant of k-means++, in every step, we sample $l$ candidate centers instead of one and then pick the one that minimizes the new cost. As you mentioned, in some cases/on some data, this greedy strategy may have better performance empirically. However, it has a worse approximation guarantee, as reported in the following paper (which is also cited in our paper): A nearly tight analysis of greedy k-means++, Grunau et al., SODA 2023. Therefore, in this paper we used the "standard" k-means++ for comparisons and for a better theoretical guarantee. Q2. Can you elaborate a little more on what the main differences are between this and the cited work by [Cohen-Addad et al., 2021]? They also use tree embeddings for fast initialization if I remember correctly. Answer: The main differences are: (1) We focus on general metric spaces in this paper and our tree construction is based on padded decompositions of a general metric space, while Cohen-Addad et al. consider the Euclidean metric space and use the Quadtree embedding; (2) The tree search algorithm for picking the centers in our paper is novel and different from their approach; (3) Our HST initialization strategy works for both the non-private setting and the private setting, while their paper only considered the non-private setting. Again, we appreciate your feedback. We hope our response answers your questions. --- Rebuttal Comment 1.1: Comment: Thank you for your responses!
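The greedy variant discussed in Q1 can be sketched like this (our own hypothetical illustration, using distance-proportional sampling as suits the k-median setting; function and parameter names are ours):

```python
import random

def greedy_kmedian_pp(points, dist, k, num_candidates=3, rng=None):
    """k-median++ with the greedy modification: at each step, draw
    `num_candidates` candidates with probability proportional to their
    distance to the current centers, and keep the one whose addition
    minimizes the resulting cost. num_candidates=1 recovers the standard
    sequential sampling."""
    rng = rng or random.Random(0)
    centers = [rng.choice(points)]
    for _ in range(k - 1):
        # Distance of each point to its nearest current center.
        weights = [min(dist(p, c) for c in centers) for p in points]
        candidates = rng.choices(points, weights=weights, k=num_candidates)

        def new_cost(q):
            return sum(min(w, dist(p, q)) for p, w in zip(points, weights))

        centers.append(min(candidates, key=new_cost))
    return centers
```

The greedy step only changes how each new center is chosen among sampled candidates; the distance-proportional sampling distribution itself is the same as in standard k-median++.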
NeurIPS_2023_submissions_huggingface
2023
Three Iterations of (d − 1)-WL Test Distinguish Non Isometric Clouds of d-dimensional Points
Accept (poster)
Summary: This paper studies the completeness of the $l$-WL test for Euclidean point sets. It shows an algorithm that certifies two Euclidean point sets of dimension $d$ are isometric using the $(d-1)$-WL test, where only three iterations suffice. The results extend to the $d$-WL test, which only requires one iteration. The punchline is to study "how pair-wise distances determine the identity of all points", which is well-developed for Euclidean scenarios. Taking the plane as an example, the proposed method shows that storing the single-source all-pair distances of 2 points and the norms of the rest of the points suffices to uniquely reconstruct the whole point set. Such data can be obtained through several rounds of the WL test, which completes the claim that the WL test is able to distinguish the isometry of Euclidean point sets. Strengths: - The problem is well-motivated; most of the paper is written clearly. - The results characterize well the three parameters involved in isometry testing of Euclidean point sets: the data dimension $d$, the WL-test dimension $l$, and the number of rounds needed ($r=3$ for $l=d-1$, $r=1$ for $l=d$). It should be an interesting result to be announced. Weaknesses: I think overall the paper is not hard to follow, but the presentation can be better - The contents on pages 4 and 5 would be a lot easier to understand if there were a picture. - The algorithms deserve a box highlighting each step either in description or pseudocode, though I personally prefer just words. Another issue is that it seems knowing the distance profiles is sufficient for the isometry test. Then why do we have to resort to the WL test? The time and space consumption of the WL test is non-trivial. Since we have the two point sets at hand, just consider a simple algorithm calculating the profile tuple $(A, M_1, M_2, ...)$ and use the proposed reconstruction algorithm; what is wrong with this? I believe the space might be the same, but the time would be much better. minor: line 268, $d$-tuple?
Another suggestion on Table 1: I think the $l>d$ WL test is no longer of interest, so maybe you can replace the dots with a slash. You could consider different colors for previous work, this work, and open problems. Last, the table could also cover general $d$. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I would say the result itself is interesting; it is only that, as a theory paper, the technical novelty is quite limited, especially if this can be done even without the WL test. The connection to GNNs is in a "good to know" phase but does not seem to have application implications. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
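As a plain-language illustration of the reconstruction idea discussed in the review's summary (this is NOT the paper's algorithm): in the plane, a point's distances to two fixed anchor points pin it down up to reflection across the line through the anchors, which is why a small amount of extra information (such as the norms mentioned above) suffices to finish the reconstruction. A minimal sketch; all names are illustrative:

```python
import math

def trilaterate_2d(p1, p2, d1, d2):
    """Given two anchor points and the distances from an unknown point to
    each anchor, return the (up to two) candidate positions -- mirror
    images of each other across the line through the anchors."""
    (x1, y1), (x2, y2) = p1, p2
    ex, ey = x2 - x1, y2 - y1
    d = math.hypot(ex, ey)          # distance between the anchors
    ex, ey = ex / d, ey / d         # unit vector along the anchor line
    # a: signed projection of the unknown point onto the anchor line
    a = (d1**2 - d2**2 + d**2) / (2 * d)
    h2 = d1**2 - a**2               # squared offset from the anchor line
    if h2 < 0:
        return []                   # inconsistent distance data
    h = math.sqrt(h2)
    # base point on the anchor line, then offset perpendicularly both ways
    bx, by = x1 + a * ex, y1 + a * ey
    return [(bx - h * ey, by + h * ex), (bx + h * ey, by - h * ex)]
```

The leftover two-fold ambiguity is exactly why distances to two anchors alone are not enough: some additional datum (a third distance, or the norms) must break the reflection symmetry.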
Rebuttal 1: Rebuttal: Reviewer's comment: "I think overall the paper is not hard to follow, but the presentation can be better. The contents on pages 4 and 5 would be a lot easier to understand if there were a picture. The algorithms deserve a box highlighting each step either in description or pseudocode, though I personally prefer just words." RESPONSE: see “GENERAL RESPONSE” in the Author Rebuttal box at the beginning. Reviewer's comment: "Another issue is that it seems knowing the distance profiles is sufficient for the isometry test. Then why do we have to resort to the WL test? The time and space consumption of the WL test are non-trivial. Since we have the two point sets at hand, just consider a simple algorithm calculating the profile tuple and use the proposed reconstruction algorithm; what is wrong with this? I believe the space might be the same, but the time would be much better." RESPONSE: The reviewer is right: the point of the paper is not to provide an efficient algorithm to test isometry between two given point clouds, but to provide a characterization of the expressive power of geometric GNNs. In particular, it informs the practical decision related to the choice of parameters in the design of an MPGNN when working with d-dimensional point clouds: order d and three layers are sufficient (as far as expressivity is concerned).
Summary: The authors rigorously prove that applying the (d-1)-WL test to the distance matrices of point clouds in d dimensions is complete: that is, two point clouds are related by an isometry (and a relabeling of the points) if and only if they are not separated by the (d-1)-WL test. Strengths: The question the authors address is a fundamental theoretical question in the study of geometric machine learning. They give a rigorous and non-trivial proof that essentially solves the question. Very good work. Weaknesses: No significant weaknesses. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In lines 43-49 in the discussion of Hordan et al.: The authors of that paper discuss not only a 3-WL algorithm but also what they call in the abstract `a 2-WL-like algorithm', when d=3. To avoid confusion it may be helpful to address this claim in the discussion somehow. I had some issues with Section 6: (a) Firstly, defining MPGNNs for point clouds has been discussed in various ways in previous works: in [9]-[10] cited in the paper and in "On the expressive power of geometric graph neural networks" by Joshi et al., which should be cited as well. It would make sense to discuss the relationships to the definitions there or at least mention that they exist. (b) In the described MPGNN each x is given a `one-hot encoding'. Since there are infinitely many possible x, how exactly is this accomplished? (c) I was confused as to why Corollary 6.1 was stated for (d-1)-MPGNN instead of d-MPGNN. 
Later I saw that this is explained in lines 75-80 and is due to the differences between WL and Folklore-WL, but I think it should be reiterated in Corollary 6.1. (d) Generally, I feel like this paper does a great job regarding the WL tests themselves, but the discussion of MPNNs is somewhat short and non-convincing, and perhaps it would be better just to refer to the other papers mentioned above which discussed these issues in more depth (at the authors' discretion; I support acceptance either way). I think the proof of Lemma 3.1 can be shortened and simplified. Once the equation in line 211 is established, you can immediately show that plugging it into the right-hand side of (2), for both f(x) and f(y), gives the equality you want. With the space thus freed up, you could consider adding an illustration for the cone condition (lines 137-138). Tiny comments: Line 129: `Algorithm' should not be capitalized. Line 137: **line** on this line. Line 142: Initialization Data: instead of `it consists..' I would prefer something like `the initialization data consists..'. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer's comment: "In lines 43-49 in the discussion of Hordan et al.: The authors of that paper discuss not only a 3-WL algorithm but also what they call in the abstract `a 2-WL-like algorithm', when d=3. To avoid confusion it may be helpful to address this claim in the discussion somehow.” RESPONSE: Both algorithms in Hordan et al. explicitly use coordinates of the points. Thus, formally, they do not fall into our framework. However, the 3-WL algorithm of Hordan et al., after additional observations, can be used to show that 3-WL, as defined in our paper, is complete in R^3 after 2 iterations (we discuss this in Section 5, where we also improve their result by showing that 3-WL is complete in R^3 after 1 iteration). At the same time, we do not see how to convert the ‘2-WL-like’ algorithm of Hordan et al. into an algorithm in our setting. Nevertheless, we will mention their ‘2-WL-like’ algorithm with these remarks in the introduction. Reviewer's comment: “I had some issues with Section 6: (a) Firstly, defining MPGNNs for point clouds has been discussed in various ways in previous works: in [9]-[10] cited in the paper and in "On the expressive power of geometric graph neural networks" by Joshi et al., which should be cited as well. It would make sense to discuss the relationships to the definitions there or at least mention that they exist” RESPONSE: We were not aware of the paper by Joshi et al. It is certainly relevant and we will cite it. Other than that, we plan to completely rewrite this section as explained in point (d) below. Reviewer's comment: “(b) In the described MPGNN each x is given a `one-hot encoding'. Since there are infinitely many possible x, how exactly is this accomplished?" RESPONSE: We were thinking of having a one-hot encoding only with respect to the “atomic types” of mutual distances that can be achieved in the given cloud of points S. There is a finite number of them. 
Hence, two tuples x and y in S achieve the same one-hot encoding if and only if they have the same atomic type of mutual distances. Reviewer's comment: “(c) I was confused as to why Corollary 6.1 was stated for (d-1)-MPGNN instead of d-MPGNN. Later I saw that this is explained in lines 75-80 and is due to the differences between WL and Folklore-WL but I think it should be reiterated in Corollary 6.1" RESPONSE: Sure, we can do that, but please see our response to the next point. Reviewer's comment: “(d) Generally, I feel like this paper does a great job regarding the WL tests themselves, but the discussion of MPNNs is somewhat short and non-convincing, and perhaps it would be better just to refer to the other papers mentioned above which discussed these issues in more depth (at the authors' discretion, I support acceptance either way)" RESPONSE: After careful consideration, we believe that the reviewer is right. This section is too short and is not adding anything essentially new to the paper. In case we get accepted, we will add a more succinct explanation saying that, by using previously established techniques, it is possible to construct efficient MPGNNs that achieve the expressive power of the WL test on clouds of points. Reviewer's comment: “I think the proof of Lemma 3.1 can be shortened and simplified. Once the equation in line 211 is established, you can immediately show that plugging it into the right-hand side of (2), for both f(x) and f(y), gives the equality you want." RESPONSE: We thank the reviewer for the suggestion: indeed, it seems to make the proof shorter and simpler. Reviewer's comment: “With the space thus freed up, you could consider adding an illustration for the cone condition (lines 137-138)" RESPONSE: That’s a good idea, thanks. We will definitely add such a figure in the final version of the paper. --- Rebuttal Comment 1.1: Comment: I am happy with the authors' answers. Thanks!
Summary: The expressive power of GNNs has long been a central topic in the GNN community. The WL test serves as a fundamental algorithm guiding the design of expressive GNNs. For each k, it has been shown that there always exist non-isomorphic graphs that cannot be distinguished by k-WL, and thus k-WL is incomplete for any k. However, things become different for geometric graphs, which are point clouds lying in a finite d-dimensional Euclidean space, where isomorphism is characterized by isometry. This paper makes a significant contribution by proving that k-WL is complete for distinguishing (k+1)-dimensional geometric graphs. Moreover, a constructive method shows that three iterations suffice, which contrasts with a well-known result that there exist non-isomorphic graphs that cannot be distinguished by standard k-WL within o(n) iterations, where n is the number of nodes. The authors further proved that k-WL can distinguish k-dimensional geometric graphs in only one iteration. Strengths: 1. **Fundamental problem**. I believe characterizing the upper and lower bounds on k such that k-WL is complete for distinguishing d-dimensional geometric graphs is a fundamental problem. This is due to a series of reasons. - First, k-WL is a very elegant algorithm and applies to geometric graphs straightforwardly. - Second, there have been debates on whether equivariant architectures that use coordinate-wise information are necessary for learning geometric graphs, or whether purely distance-based information suffices. This paper answers this question timely and affirmatively. Overall, the problem formulation is very *clean* and is likely to have a decent impact in the geometric deep learning community. 2. **Strong theoretical result**. After going through the proof technique, I feel that the proof is non-trivial, even though the presentation is very clear, well organized, and rigorous. After checking several of the proofs, I am confident that the proofs are correct. 
- Relation to prior work: to my knowledge, the theoretical result seems to be new. While I mainly focus on standard GNN expressivity theory, I have read several works related to this paper, such as Pozdnyakov & Ceriotti et al., Hordan et al., and Zhang et al. - Regarding the completeness of the theoretical result: this paper only gives upper bounds on the required k for distinguishing d-dimensional geometric graphs. However, the bound is tight for 2- or 3-dimensional data. Other cases are highlighted in the limitations part and left as open problems. In this sense, the contribution seems to be sufficient for acceptance. 3. **Great presentation**. This paper is very well written and easy to read. The organization is great, the proof sketch is carefully written, and the proofs in the Appendix are also well written (I couldn't even find a typo). I really enjoyed reading this paper. Weaknesses: 1. This is mainly a theoretical paper, without any experimental evaluation. However, I understand that showing experimental results is not very necessary given that this paper focuses on a fundamental theoretical problem. 2. Regarding the theoretical results, I have several concerns and questions. While none of them are major weaknesses, I would like the authors to answer these questions and revise the manuscript accordingly in the camera-ready version. - The authors focus on the setting where all points are distinct, i.e., S is a set rather than a multiset. The proof in Lines 146-157 requires such a condition. But I think a similar result should hold when S is a multiset. The authors may add a brief remark to illustrate this point. - The authors wrote that Hordan et al. have proved that, in the same setting, the geometric 3-WL test is in fact complete. However, I cannot find this result in their paper. Instead, they consider a different algorithm that requires the coordinates as input (although the output is invariant under isometric transformations). Moreover, they also considered higher dimensions. 
So could the authors give an explanation for the paragraph in Lines 43-49 (in case I missed something)? 3. The discussion of related work seems to be a bit insufficient. I can list several works which I think are relevant to this paper. - Martin Furer. Weisfeiler-Lehman Refinement Requires at Least a Linear Number of Iterations. This paper proved that, for any k, there exist non-isomorphic graphs that cannot be distinguished by standard k-WL within o(n) iterations, where n is the number of nodes, and can be distinguished by standard k-WL within $\Theta(n)$ iterations. Moreover, the same pair of graphs can be distinguished within far fewer iterations by increasing k. These results surprisingly parallel your results. - Bohang Zhang, et al. Rethinking the Expressive Power of GNNs via Graph Biconnectivity. This paper studied the expressive power of a generalized distance WL test, which is basically the same as the 1-WL in this paper but uses different distance metrics. 4. Minor issue: in Line 54, the citation [12] seems to be wrong. Do you mean [13]? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See the weakness part above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations have been clearly stated in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
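For readers less familiar with the WL machinery discussed throughout this review, the following is a minimal sketch of 1-WL color refinement on a complete geometric graph whose edges are labeled by pairwise distances. It is purely illustrative background, not the paper's k-WL construction, and all names are our own:

```python
import math

def wl1_distances(points, rounds=3):
    """1-WL color refinement with pairwise distances as edge labels.
    Returns the sorted multiset of colors after `rounds` refinements.
    Different multisets certify that the two clouds are not isometric;
    equal multisets are inconclusive for 1-WL in general."""
    n = len(points)
    # round distances to tame floating-point noise
    dist = [[round(math.dist(p, q), 9) for q in points] for p in points]
    colors = [0] * n  # start from a uniform coloring
    for _ in range(rounds):
        # signature = own color + multiset of (edge label, neighbor color)
        sigs = [(colors[i],
                 tuple(sorted((dist[i][j], colors[j]) for j in range(n) if j != i)))
                for i in range(n)]
        # relabel signatures with small integers (canonical order)
        palette = {s: c for c, s in enumerate(sorted(set(sigs)))}
        colors = [palette[s] for s in sigs]
    return sorted(colors)
```

For example, two congruent triangles receive identical color multisets, while a non-congruent triangle is separated after one round — the completeness question the paper studies is how far this idea carries for k-tuples in d dimensions.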
Rebuttal 1: Rebuttal: Reviewer's comment: "This is mainly a theoretical paper, without any experimental evaluation. However, I understand that showing experimental results is not very necessary given that this paper focuses on a fundamental theoretical problem." RESPONSE: Indeed, as the referee points out, the aim of the paper is to give theoretical guarantees and proofs. It is nevertheless worth pointing out that relevant experiments have in fact been performed by other groups, as described in Table 1 (third column from the right) of the preprint [Li, Z., Wang, X., Huang, Y., & Zhang, M. (2023), Is Distance Matrix Enough for Geometric Deep Learning? arXiv preprint arXiv:2302.05743, (version 4)]. We shall highlight these experiments further in the camera-ready version of the paper, in case it is accepted. Reviewer's comment: "The authors focus on the setting where all points are distinct, i.e., S is a set rather than a multiset. The proof in Lines 146-157 requires such a condition. But I think a similar result should hold when S is a multiset. The authors may add a brief remark to illustrate this point." RESPONSE: We thank the referee for pointing this out. While some of the wording of the proofs references the hypothesis that S is a set, the proof strategy works without changes for the extension to the case where S is a multiset. In case we get accepted, we shall add this extension of the theorem statements for the final version. Reviewer's comment: "The authors wrote that Hordan et al. have proved that, in the same setting, the geometric 3-WL test is in fact complete. However, I cannot find this result in their paper. Instead, they consider a different algorithm that requires the coordinates as input (although the output is invariant under isometric transformations). Moreover, they also considered higher dimensions. So could the authors give an explanation for the paragraph in Lines 43-49 (in case I missed something)?" 
RESPONSE: Although the geometric 3-WL algorithm of Hordan et al. uses coordinates as inputs, it can be turned into a proof that 3-WL, as defined in our paper (which only uses pairwise distances), is complete in R^3 after 2 iterations. We discuss this in more detail at the beginning of Section 5, and we plan to add a reference to this in the introduction (lines 43-49) for a final version of the paper. We thank the reviewer for pointing out that Hordan et al. also consider higher dimensions. We will add a remark that the algorithm of Hordan et al., although it explicitly uses coordinates, can be turned into a proof (modulo the Barycenter lemma that we establish in our paper) that d-WL is complete in R^d after 2 iterations. Reviewer's comment: "The discussion of related work seems to be a bit insufficient. I can list several works which I think are relevant to this paper: - Martin Furer. Weisfeiler-Lehman Refinement Requires at Least a Linear Number of Iterations. This paper proved that, for any k, there exist non-isomorphic graphs that cannot be distinguished by standard k-WL within o(n) iterations, where n is the number of nodes, and can be distinguished by standard k-WL within $\Theta(n)$ iterations. Moreover, the same pair of graphs can be distinguished within far fewer iterations by increasing k. These results surprisingly parallel your results. - Bohang Zhang, et al. Rethinking the Expressive Power of GNNs via Graph Biconnectivity. This paper studied the expressive power of a generalized distance WL test, which is basically the same as the 1-WL in this paper but uses different distance metrics." RESPONSE: We were not aware of the above papers. They are certainly very relevant and we will cite them, and we thank the referee for pointing them out to us. Reviewer's comment: "Minor issue: in Line 54, the citation [12] seems to be wrong. Do you mean [13]?" RESPONSE: Indeed, the referee is correct, we will change this citation accordingly. 
--- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for your thoughtful response. My concerns have been addressed and I would be happy to see this paper accepted.
Summary: The paper addresses the question of testing the existence of a one-to-one mapping between two point clouds such that distances are preserved. This information can be used, for example, to build better-structured graph neural network architectures to optimize their performance. The starting point is that the distances between elements of a point cloud can be used to label the edges of a graph connecting them. Hence, the problem boils down to the detection of isometry between two graphs, for which the Weisfeiler-Lehman test is the classic tool. However, the latter makes it possible to conclude that two clouds are not isometric, but not necessarily that they are, depending on the computational cost. This paper sheds light on the issue in terms of the dimension of the ambient space. Strengths: The document is fairly well written and the contributions are clearly stated. Since the results are theoretical in nature, they essentially consist of a succession of proofs, but these seem to be rigorously executed. Not being a specialist in the field, I can't really appreciate the impact of such a theoretical result. This brings me to my next comment on the weaknesses. Weaknesses: The stated result, i.e., the ability to answer affirmatively whether isometry between point clouds is proven from the WL test, is in itself very interesting. It squarely fits into "understanding ML results". However, the authors stressed the importance of their results for improving the design of neural network architectures. This is unfortunately not clarified, and it would be nice if the authors could comment more on that. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The proofs are sometimes very verbose and would benefit from geometric illustrations rather than prosaic text. The proof of Lemma 3.1 (which seems connected to the Konig-Huygens theorem) can easily go in the appendix to save space. 
As it is, I find it difficult to follow the entire logic of the proofs beyond a line-by-line follow-up. Corollary 6.1 seems constructive. It would be helpful to describe the algorithm in pseudo-code and maybe illustrate the proposed MPGNN. I believe this will make the paper more accessible to a wider audience. Since I don't have the necessary knowledge, I can hardly judge the impact and practical importance of such a result. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer's comment: "the authors stressed the importance of their results for improving the design of neural network architectures. This is unfortunately not clarified, and it would be nice if the authors could comment more on that." RESPONSE: What we mean is simply that, when working with point clouds in dimension d, our results inform the decisions regarding the choice of parameters of the geometric GNN: as far as expressivity is concerned, order d and three layers are sufficient. Questions: The proofs are sometimes very verbose and would benefit from geometric illustrations rather than prosaic text. The proof of Lemma 3.1 (which seems connected to the Konig-Huygens theorem) can easily go in the appendix to save space. As it is, I find it difficult to follow the entire logic of the proofs beyond a line-by-line follow-up. Corollary 6.1 seems constructive. It would be helpful to describe the algorithm in pseudo-code and maybe illustrate the proposed MPGNN. I believe this will make the paper more accessible to a wider audience. Since I don't have the necessary knowledge, I can hardly judge the impact and practical importance of such a result. RESPONSE: see “GENERAL RESPONSE” in the Author Rebuttal box at the beginning.
Rebuttal 1: Rebuttal: GENERAL RESPONSE: Many thanks for your comments and suggestions. Regarding presentation, we acknowledge that adding a figure to illustrate the main idea behind the proof of our main result, as well as giving a high-level description of the underlying algorithm in a framed box, will greatly improve the presentation. In case our paper is accepted we will definitely implement all these. We now proceed to respond to the individual comments.
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
MultiFusion: Fusing Pre-Trained Models for Multi-Lingual, Multi-Modal Image Generation
Accept (poster)
Summary: This paper presents MultiFusion, a multilingual multimodal image generation model, which can be effectively trained by fusing existing pre-trained visual models, language models, and stable diffusion models. Using a multilingual autoregressive language model as a bridge, MultiFusion follows MAGMA to enable multimodality by learning adapters. Before connecting the language model with the stable diffusion module, it learns semantic embeddings with a contrastive learning objective in a parameter-efficient setup. Finally, it connects the language model with the diffusion model using monomodal data, i.e., an image or caption. Experimental results demonstrate that the trained MultiFusion model can generate high-quality images from multimodal interleaved prompts. Besides, with the modular design and the fusion of pre-trained models, the training can be quite efficient compared to training from scratch. Several analyses, such as that of attention manipulation, also provide insights into multimodal language models. Strengths: - The paper makes a clever combination of pre-trained models and adapter learning techniques, including 1) MAGMA for cross-modal adaptation/fusing, 2) contrastive learning before fusing the LM with the diffusion module, 3) cross-attention learning of SD to align the conditioning with the new embedding space. These operations delicately combine the pre-trained modules into an end-to-end multimodal text-to-image model. - Experimental results show that MultiFusion can produce high-quality images conditioned on multimodal and multilingual inputs, with a wide range of applications and use cases. - The analysis of attention manipulation is quite interesting. Weaknesses: - It would be great to improve the presentation of the paper, especially the methods and implementation details. Although Figure 1 presents an overview of the architecture, I have to guess some of the implementation details and carefully find clues in a large amount of text. 
Some suggestions: (1) you could provide some figures to show the details of how the adapters connect the pre-trained models and how you learn them; (2) you could also clarify the training tasks and data in tables. - Existing works have explored how to learn adapters to connect pre-trained modules. For example, Flamingo learns gated adapter modules to connect language models with visual models, and generates text conditioned on multimodal inputs. The paper provides a full solution to the problem with careful design, but it is kind of an integration of existing adapter methods. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Are all the outputs of the language model connected to the diffusion model or only the last token? E.g., Input: [img_tok1] [img_tok2] [text_tok1] [text_tok2] [text_tok3] -> output vectors: [h1] [h2] [h3] [h4] [h5]. Are all five vectors passed to the diffusion model or only the last one? - Why is an autoregressive model used as the encoder? I understand the CLIP model has disjoint text and image encodings, but why not use bidirectional models as the multimodal encoder? Any explanations? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: As mentioned in the paper, the model always produces variations of input images, which limits its applications in image editing. I think it is worth mentioning this limitation, which provides further understanding of MultiFusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
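The "contrastive learning objective" step mentioned in the summary above can be pictured with a CLIP-style symmetric InfoNCE loss, where matching image/text pairs sit on the diagonal of a similarity matrix. The following is a generic pure-Python sketch, not MultiFusion's actual training code; all names and the temperature value are illustrative:

```python
import math

def info_nce(img_embs, txt_embs, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings: each image's
    positive is its own caption (the diagonal); every other caption in
    the batch is a negative, and vice versa."""
    def normalize(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]

    imgs = [normalize(v) for v in img_embs]
    txts = [normalize(v) for v in txt_embs]
    # cosine-similarity logits, scaled by the temperature
    logits = [[sum(a * b for a, b in zip(i, t)) / temperature for t in txts]
              for i in imgs]

    def xent(rows):  # cross-entropy with the diagonal as the target class
        loss = 0.0
        for k, row in enumerate(rows):
            m = max(row)  # subtract the max for numerical stability
            log_z = m + math.log(sum(math.exp(x - m) for x in row))
            loss += log_z - row[k]
        return loss / len(rows)

    transposed = [list(col) for col in zip(*logits)]
    return (xent(logits) + xent(transposed)) / 2
```

Correctly paired batches should score a lower loss than mispaired ones, which is the signal used to align the language-model embedding space with the diffusion model's conditioning space.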
Rebuttal 1: Rebuttal: We thank reviewer 2Kwy for their feedback and suggestions. Below we address each weakness and question separately. Tab. 5-7, as well as Fig. 11 and 12, can be found in the supplementary PDF of this review. ### W1) Architectural details We thank the reviewer for the suggestions and incorporated this feedback in our global response regarding system design. The adapter layers are added to each attention and feed-forward layer of the transformer. Following the method proposed by MAGMA [16], the adapters are trained autoregressively on a combination of large-scale image-text datasets (cf. Tab. 5 and 6). The image tokens are prepended to the text tokens and the language modeling loss for next-token prediction is computed over the text only. We adjusted Fig. 1 and its caption to clarify these details. Additionally, we supply further information on the training data sizes, parameter counts, and GPU hours for all components in Tab. 5, along with details on data splits, languages, and modalities in Tab. 6. We will add both tables to the final version of the paper. ### W2) Pre-existing work on adapters We agree with the reviewer’s assessment that MultiFusion builds on previous work for multimodal adapter tuning. Nonetheless, we want to highlight that MultiFusion’s contribution lies in the investigation of building a cohesive system for a complex downstream task utilizing these pre-trained components, thus introducing a novel method of expressing prompts and steering image generation for diffusion models, and significantly reducing the computational costs required for this system. ### Q1) Embedding tokens Indeed, the hidden representations of all tokens are passed to the diffusion model. In the example outlined by the reviewer, this would mean 5 embedding vectors are used for image generation conditioning. 
### Q2) Autoregressive encoder We argue that decoder models perform better on tasks such as manipulation with natural language (“Subject X with background Y and style Z”) (cf. qualitative results) or correct feature attribution (“Red X, Blue Y”) (cf. MCC-250 results). These capabilities can be attributed to the natural breaking of permutation equivariance [54], compared to bidirectional models relying entirely on positional embeddings. Furthermore, as demonstrated in Flamingo [55], multimodal decoders can reason over multiple images, enabling further flexibility in the model’s capabilities (e.g., elements from multiple images can be trivially composed together, as demonstrated in Fig. 2 and Fig. 5). We acknowledge that bidirectional models may outperform autoregressive ones on other embedding tasks [56], but argue that an autoregressive model is better suited for the tasks studied in MultiFusion due to the benefits outlined above. [16] Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, Anette Frank. MAGMA - Multimodal Augmentation of Generative Models through Adapter-based Finetuning. EMNLP, 2022. [54] Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. The Impact of Positional Encoding on Length Generalization in Transformers. arXiv preprint arXiv:2305.19466, 2023. [55] Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc et al. Flamingo: a visual language model for few-shot learning. In Proceedings of the Conference on Neural Information Processing Systems (NeurIPS), 2022. [56] Michihiro Yasunaga, Armen Aghajanyan, Weijia Shi, Richard James, Jure Leskovec, Percy Liang, Mike Lewis, Luke Zettlemoyer, and Wen-tau Yih. Retrieval-augmented multimodal language modeling. arXiv preprint arXiv:2211.12561, 2022. --- Rebuttal Comment 1.1: Comment: Concerns are addressed. Thank you. The overall score is updated.
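To make the adapter placement described in W1 above more concrete, here is a generic bottleneck-adapter sketch in pure Python. This is not the actual MAGMA/MultiFusion implementation; the zero-initialization of the up-projection is a common adapter convention we assume for illustration, not a detail stated in the rebuttal:

```python
import random

def make_adapter(d_model, d_bottleneck, seed=0):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add.
    The up-projection is zero-initialized so the adapter starts as the
    identity, leaving the frozen transformer layer's output unchanged
    at the beginning of training."""
    rnd = random.Random(seed)
    w_down = [[rnd.gauss(0.0, 0.02) for _ in range(d_bottleneck)]
              for _ in range(d_model)]
    w_up = [[0.0] * d_model for _ in range(d_bottleneck)]

    def adapter(h):  # h: list of token hidden states, each of length d_model
        out = []
        for vec in h:
            mid = [max(sum(vec[i] * w_down[i][j] for i in range(d_model)), 0.0)
                   for j in range(d_bottleneck)]
            up = [sum(mid[j] * w_up[j][k] for j in range(d_bottleneck))
                  for k in range(d_model)]
            out.append([v + u for v, u in zip(vec, up)])
        return out

    return adapter
```

In a MAGMA-style setup, such a module would sit after each (frozen) attention and feed-forward layer, and only the adapter weights would receive gradients during multimodal fine-tuning.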
Summary: In this paper, the authors present a novel approach to expressing complex concepts with arbitrarily interleaved multimodal and multilingual input. Their approach leverages pre-trained models and allows an efficient fusion of different components without training a model from scratch. Strengths: 1. The paper is well written and easy to follow. 2. The experiments are well designed, and the approach allows one to use existing pre-trained models while reducing the need to train a system from scratch. 3. The results on various benchmarks are promising and would invite more discussion in this line of work. Weaknesses: The motivation to attempt such a problem is rather weak. Under what circumstances would one want to have interleaved multimodal input to generate images? Is it because we want to control the input? If so, why not compare the proposed approach with similar models such as ControlNet and DreamBooth? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my comments in Weaknesses Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and subsequently explain the motivation of MultiFusion in more detail. ### W1) Motivation and comparison. **Motivation** There exist several motivations for using interleaved multimodal inputs. As demonstrated by the experiments in the paper, reference images often contain more fine-grained and detailed information than textual descriptions can provide (for example, for the style of an image as in Fig. 6b). Furthermore, visual inputs (alongside textual ones) are universally understood by users, further broadening model accessibility. On the other hand, text offers more abstract control of the generation and is able to express complex concepts. Consequently, combining the two modalities offers the best of both worlds, substantially increasing model expressibility. **ControlNet and DreamBooth** DreamBooth particularly focuses on generating specific subjects rather than using images as arbitrary reference information. Furthermore, DreamBooth requires computationally expensive fine-tuning for each subject, whereas MultiFusion can use image inputs natively during inference – again increasing accessibility. Similarly, ControlNet focuses on a completely different problem than MultiFusion in providing dedicated control over scene composition using low resolution inputs such as edge maps. Moreover, ControlNet requires training a dedicated model for each control modality. In fact, we believe DreamBooth and ControlNet to be orthogonal approaches and see use cases where either may be used in combination with MultiFusion.
Summary: In this paper, the authors introduce MultiFusion, a novel approach that enables the expression of complex and nuanced concepts in text-to-image diffusion models (DM) through arbitrarily interleaved inputs of multiple modalities and languages. The “fusion” concept is at the core of the whole work: to fuse modalities together, pre-trained models (an LLM and a Stable Diffusion backbone) are fused together. Experimental results highlight the efficient transfer of capabilities from individual modules to the downstream image generation module. Notably, MultiFusion empowers the image generation module to effectively utilize multilingual, interleaved multimodal inputs, even when trained solely on monomodal data in a single language. The contributions of this work include the fusion of modalities for image generation, experimental evaluations, and the introduction of a benchmark dataset for further analysis and comparison regarding the multimodal compositionality of the models. Strengths: 1. **Innovative model fusion approach**: The paper introduces an innovative approach by combining a partially frozen multilingual Language Model (LLM) with a stable diffusion backbone. This fusion results in an interesting multilingual and multimodal encoder capable of seamlessly interleaving between input items, treating them as a modality-agnostic sequence. 2. **Multilingual alignment investigation**: The authors conduct an investigation into the model's multilingual capabilities by translating the prompts from the DrawBench dataset. This exploration demonstrates an understanding of the importance of multilingual alignment. While there is a question regarding the accuracy of the translations, the authors acknowledge the potential benefit of utilizing literal translations in training the multilingual encoder, even though nuances in meaning may not be fully captured. This highlights the authors' attention to addressing the challenges and complexities of multilingual representation. 
3. **Contribution of benchmark dataset**: The authors contribute to advancing research in multimodal compositionality by producing and sharing the MCC-250 dataset. This benchmark dataset, described in detail in the supplementary material, serves as a valuable resource for assessing the compositionality of multimodal inputs, specifically comprising English text and images. The production and release of this dataset demonstrate the authors' dedication to promoting reproducibility, comparison, and further progress in the field of multimodal compositionality. Weaknesses: 1. **Lack of clear architectural design and novelty**: The paper suffers from a lack of clarity in explaining and justifying its design choices. While references are provided, the underlying motivations and problem-solving aspects of these choices are not adequately explained. While Figure 1 attempts to illustrate the model structure, it is not accompanied by a clear rationale and explanation for the chosen modules and their interactions in the text. Enhancing the clarity of the architectural design would elevate the novelty and originality of the proposed approach. It is suggested to provide a high-level description that guides the reader in understanding the motivations behind specific choices. By focusing on the "why" rather than the low-level details, readers can, for example, grasp the purpose of unlocking only the biases in the LLM. Currently (line 130-131), it is unclear if this is a crucial step to obtain good results while keeping a parameter-efficient regime, or if it is marginal in that regard. The supplementary material can be utilized to provide additional low-level details for interested readers. 2. **Lack of clarity in Figure 4 and semantic search paragraph**: While Figure 4a demonstrates higher similarities of translated prompts in the authors' method compared to competitors, it does not provide insights into the similarities between the reference and other negative samples. 
This additional information is crucial to establish the range of similarities that can be considered as genuinely low. Furthermore, Figure 4b indicates that the AltDiff competitor generates potentially more consistent images in each language, suggesting that the embedding similarity between references and translations may not be entirely representative. Clarifying these aspects would enhance the understanding of the results and provide a more comprehensive evaluation of the proposed method's performance. 3. **Missing standard deviation in tables**: Including standard deviation in the results would provide important information about the variability and statistical robustness of the findings. By incorporating this measure, the paper would strengthen the reliability and credibility of the reported results. 4. **Performance comparison and insights**: In Figure 4b, the AltDiff method demonstrates better performance, raising questions about the potential benefits of adding more languages to the MultiFusion method. While the authors suggest that alignment remains similar despite MultiFusion being fine-tuned using only English data, additional experiments are needed to provide substantial evidence that adding more languages to MultiFusion indeed yields improved results. Further investigation and insights in this area would enhance the value and understanding of the proposed method. In offering these critical observations, I would like to emphasize that my intention is not to be harsh, but rather to provide constructive feedback. I acknowledge that explaining such a complex pipeline can be challenging and that significant effort has been invested in this work. However, I strongly believe that there is room for improvement in describing the architectural choices and highlighting the strengths of the paper, and I’ve tried my best to give possible suggestions in this regard. 
I’m convinced that addressing these aspects would significantly improve the quality and impact of the paper, but I don’t think this is something that could be fixed within the rebuttal period. In any case, I remain open to reconsidering my recommendation if any relevant insights emerge during the discussion. Technical Quality: 1 poor Clarity: 1 poor Questions for Authors: **Lack of Chinese column in Figures 4a and 4b**: It is unclear why this column is missing or was included solely for the competitors if the proposed method was not tested on this language. There's a typo on line 139: extracted Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 1 poor Contribution: 2 fair Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. Tab. 5-7 as well as Fig. 11 and 12 can be found in the supplementary PDF of the rebuttal. We agree on the importance of design justification. We reiterate the design-choice details for clarity in our global response under the section "Reiteration on system design". ### W1) Architectural design Previous work has demonstrated that context-sensitive LM text encoders improve the expressiveness of downstream image generation models [40,2]. Accordingly, we model the backbone of MultiFusion’s (MF) encoder as a 13B autoregressive transformer [10] trained on a multilingual corpus. We justify this decision over a bi-directional architecture by arguing that decoder models outperform bi-directional models on tasks such as manipulation with natural language (“Subject X with background Y”) (cf. qualitative results) or correct feature attribution (“Red X, Blue Y”) (cf. MCC-250 results). These capabilities have previously been attributed to the natural breaking of permutation equivariance [54], compared to bidirectional models relying entirely on positional embeddings. Following MAGMA [16], we add an image prefix and dedicated adapters to enable multimodal capabilities. We argue that adapters are a suitable architectural choice for multimodal prompts, as previous research has already performed extensive ablations on adapter architectures and demonstrated their improved understanding of multimodal inputs over other methods [16]. Our choice of semantic embeddings was guided by the intuition that a focus on the semantics of a text prompt would best capture the information relevant to image generation and thus simplify learning the mapping from embeddings to image outputs. We decided to obtain high-quality semantic embeddings through parameter-efficient bias tuning [4] instead of full model finetuning, based on S-GPT [33]. 
Early experiments have confirmed higher rates of convergence (based on visual inspection of generated outputs) for experiments using semantic embeddings. Consequently, bias tuning is an essential condition and not an optional architecture choice for successfully fusing an image generation model. In line with previous research [13], we finetune the cross-attention parameters of SD on LAION aesthetics. We adjusted Fig. 1 and its caption as well as section 3 to clarify details on the architecture and to better reflect the design choices. We would like to clarify that the expected level of implementation details in the main body is highly subjective. We aim to strike a balance between high-level motivation and low-level details to satisfy the majority of readers. Indeed, several of the other reviewers specifically asked for more low-level information in the main body of the paper. [54] arXiv:2305.19466 ### W2) Fig. 4a and semantic search + W4) Comparison and insight We agree that the addition of a baseline similarity between uncorrelated sentences is useful for the interpretation of Fig. 4a. In fact, the baseline is roughly equivalent to the performance of CLIP reported on zh prompts, suggesting that the regular CLIP model is not aligned for this language and further reinforcing the increased performance of MultiFusion. We explicitly marked the baseline in Fig. 4a. We believe there to be a misunderstanding in the assessment of Fig. 4b, which we would like to address. We do not believe that the results allow for a strong statement of AltDiffusion (AD) outperforming MF or vice versa due to the significant overlap in error bars. In fact, a good alignment of AD’s images is to be expected as the model’s image generative training is explicitly done on (aligned) multilingual data. The key insight from this experiment lies in MF achieving comparable performance despite the image generation being trained only on English data. We attribute this capability to better-aligned embeddings. 
These alignment results suggest that aligned multilingual data on a downstream task is not necessary to achieve alignment. Rather, good embedding alignment of the backbone model in combination with readily available monolingual task-specific data is sufficient for multilingual alignment on that task. Investigating more languages is indeed an interesting avenue for future work. However, the error bars already indicate that the performance of AD is not significantly better than MF’s, thus demonstrating MF’s potential benefit in low-resource domains with only a few or even no image-text pairs available. This is, however, out of the scope of the current contribution and, therefore, also not a claim we make in the paper. ### W3) Standard deviations We agree that standard deviations (std) generally improve the interpretability of results, which is why we included error bars in Fig. 4. In line with the literature, we do not report std for FID and CLIP in Table 1. We did not initially include std for the empirical analysis in Tab. 2 as we simply reported the binary success rate over the entire benchmark. By considering the per-prompt success rate over multiple samples, we now also report standard deviations over differing object compositions and will extend Tab. 2 accordingly. All models exhibit comparatively high stds (ca. 30 pp for 2 objects), suggesting a fair amount of outliers for which the models perform significantly better/worse than on average. We believe the investigation of these tasks to be a promising avenue for future work and adjusted the discussion of the results accordingly. ### Q1) Lack of Chinese columns The models presented in Fig. 4 were evaluated on the languages that they have been trained on. Thus, we do not provide scores for AD on German and MF on Chinese. Further, we argue that it is crucial to include scores for MF on German and AD on Chinese, as these are the respective secondary languages of the models, i.e. 
the ones with the most aligned training data. We adjusted the caption of Fig. 4 to better reflect this specific aspect of the experiment. --- Rebuttal Comment 1.1: Comment: I appreciate the author's response, and I thank them for that. While I remain somewhat unconvinced by the paper's contribution and robustness, considering the perspectives of other reviewers and the thorough response, I am inclined to adjust my rating to a borderline reject (4).
Summary: This work proposed a novel method to build multilingual multimodal generation models that support prompts composed of interleaved text and image. It combines a strong pre-trained multilingual language model with the image generation model from Stable Diffusion (SD) and achieves new capabilities such as prompting with text and image combined. It also shows that the new model does better in composition generations, as one can provide reference images as part of the prompt. In this work, a multilingual language model is trained in the first place (13B encoder-decoder structure model trained on 400B tokens), which itself is a strong multilingual model. It then adds an adapter module to the LLM model to support input in image format, following methods proposed in MAGMA. Finally, it aligns the trained encoder with the diffusion model taken from Stable Diffusion, with 15M text-image pairs. As a result, it can support multilingual, multimodal prompts for image generation, without training on a massive text-image pair dataset. In experimentation, it showed that using both text and image as prompts can be beneficial, especially in composition generations. It also shows superior performance to the existing multilingual text-image generation model AltDiffusion, possibly due to better alignment in multilingual embeddings. Further, the support of taking images as prompts can enable various applications such as negative prompting with images, image composition, image variation, and style modification. 
 Strengths: * Novel and efficient method: fusing different pre-trained models works very well, which can bootstrap existing models such as Stable Diffusion to support different input formats, avoiding the heavy cost of training a model from scratch. * Enables prompting using both image and text and generates better images both in terms of metrics such as FID and human evaluation, compared to baselines that only take text prompts. * Better results on composition generations from considering reference images in prompts. * Well written overall and addressed limitations of the work very well. Weaknesses: * Some of the details such as model parameter size, training data source and size are not presented in the main paper (included in appendix), which can be less clear when interpreting results presented in the experimental section. It would be better to point those factors out when comparing with baselines in the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. As you mentioned in the limitation section, the model cannot produce images that are exact or close to the image in the prompt. I wonder if there can be simple modifications made that can achieve this? 2. How is the interleaved data being used? Does it affect the results if you change the order of the interleaved data? 3. Have you tried other methods in addition to adapters to fuse the image and text modalities? 4. Have you done any ablation studies on the semantic embeddings? 5. On the MSCOCO dataset for generation, where does the image prompt come from (referring to results in Table 1)? 6. Have you compared the multilingual LM trained to other ones (mT5 etc.) in multilingual benchmarks? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors addressed the limitations relatively well in the paper: 1. The generated image cannot be an exact copy of the prompt images; 2. Sometimes the image prompts need to be carefully chosen and do not always work; 3. It suffers from the same shortcomings (such as inappropriate content) as other generation models trained on very large-scale crawled datasets (LAION). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and subsequently address each limitation and question separately. [Tab. 5-7, as well as Fig. 11 and 12, can be found in the supplementary PDF of this review] ### W1) Training details We supply further information on the training data sizes, parameter counts, and GPU hours for all components in Tab. 5, along with details on data splits, languages, and modalities in Tab. 6. We will add both tables to the final version of the paper. ### Q1) Image replication from prompt Direct replication of an input image would most likely require a different architecture for encoding embeddings. While this in itself is an interesting research question, our approach aims to enable a more fine-grained conditioning of the image generation process through multimodal prompts that can be arbitrarily interleaved with one or more images. Nonetheless, our diffusion model can easily be combined with existing diffusion-based image editing techniques that can faithfully reconstruct and subsequently alter an image [58, 59]. In this case, MultiFusion would facilitate multimodal image editing, which we believe to be a promising avenue for future research. ### Q2) Interleaved input data and order influence The interleaved inputs are concatenated into one input sequence and subsequently fed into the LM, which outputs embeddings for conditioning MultiFusion’s image generation U-Net. Changing the order of interleaved inputs will change the embedding produced by the encoder and thus affect the conditioning of the denoising process, leading to a different output. This can be attributed to the autoregressive generation of embeddings with causal attention by the LM. The effect is particularly important for image prompts, where the relationship between multiple concepts is not specified in natural language. Thus, we provide qualitative examples of reversed image prompts in Fig. 12, which we add to the appendix. 
We can observe the effects of autoregression and causal attention in all three examples of Fig. 12, showcasing that the object or background of the first input image has the highest influence on the output image. ### Q3) Ablations on multimodal architecture We limited our investigation to adapters for multimodal fusion. Previous research has already performed extensive ablations on adapter architectures and demonstrated their improved understanding of multimodal inputs over other methods [16]. Based on these findings, we argue that adapters are a suitable architectural choice for the task at hand. We adjusted Section 3 accordingly to better reflect these design choices. ### Q4) Ablations on semantic embeddings Our choice of semantic embeddings was guided by the intuition that a focus on the semantics of a text prompt would best capture the information relevant for image generation and thus simplify learning the mapping from embeddings to image outputs. We decided to obtain high-quality semantic embeddings through parameter-efficient bias tuning [4] instead of full model finetuning, based on the work of [33]. Early experiments have confirmed higher rates of convergence (based on visual inspection of generated outputs) for experiments using semantic embeddings. Consequently, we do not report ablation results without semantic fine-tuning, as it is an essential condition and not an optional architecture choice for successfully fusing an image generation model. We adjusted section 3 of the paper accordingly to provide more clarity on this behavior. ### Q5) MS-COCO image prompt The reference image used for the experiment in Tab. 1 is the ground truth image from COCO. We realize this is strong supervision, most likely not available in a real-world use case. However, the key takeaway of the experiment is the additional and more fine-grained information provided by an image input over text alone. 
We argue this conclusion to be reasonable, given the fact that MultiFusion indeed does not replicate the input image but produces a variation with more aligned details. We modified the discussion of the experiment to reflect the strong supervision signal provided by the input image. ### Q6) Comparison to multilingual LMs This is an interesting comparison that we believe to be relevant for future work! We focused our empirical study on multilingual reasoning with MultiFusion in comparison to current state-of-the-art image generation approaches (as highlighted in L209 and Fig. 5a). At the same time, we believe our method to be robust enough to be reproduced with other multilingual LMs. However, we think that limitations may be encountered when using a bidirectional (e.g. mT5) instead of an autoregressive model, similar to the issue highlighted in response to 2Kwy [54]. [4] Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel. BitFit: Simple parameter-efficient fine-tuning for transformer-based masked language-models. In Proceedings of the Annual Meeting of the Association for Computational Linguistics, 2022. [16] Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, Anette Frank. MAGMA - Multimodal Augmentation of Generative Models through Adapter-based Finetuning. EMNLP, 2022. [33] Niklas Muennighoff. SGPT: GPT sentence embeddings for semantic search. arXiv preprint arXiv:2202.08904, 2022. [54] Amirhossein Kazemnejad, Inkit Padhi, Karthikeyan Natesan Ramamurthy, Payel Das, and Siva Reddy. The Impact of Positional Encoding on Length Generalization in Transformers. arXiv preprint arXiv:2305.19466, 2023. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed information and updates included in the rebuttal. Thanks for addressing all my comments and questions. I have read all the information and would like to keep the original score for recommendation.
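The order sensitivity explained in the Q2 answer above can be illustrated with a toy numpy sketch. The embeddings below are hypothetical stand-ins, not MultiFusion's actual encoder: with a causal mask, earlier tokens are contextualised against different prefixes depending on input order, so once two attention layers are stacked, the final text-token embedding depends on which image prefix came first (real decoders additionally use positional information, which strengthens this effect).

```python
import numpy as np

def causal_attention(x):
    """One self-attention layer (queries = keys = values = x) with a causal
    mask, so each position attends only to itself and earlier positions."""
    T, d = x.shape
    scores = (x @ x.T) / np.sqrt(d)
    scores[np.triu(np.ones((T, T), dtype=bool), k=1)] = -np.inf  # mask future tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
img_a = rng.normal(size=(3, 8))  # stand-in embeddings for image-A prefix tokens
img_b = rng.normal(size=(3, 8))  # stand-in embeddings for image-B prefix tokens
text = rng.normal(size=(2, 8))   # stand-in embeddings for text tokens

# Same inputs, two orderings of the interleaved sequence.
seq_ab = np.concatenate([img_a, img_b, text])
seq_ba = np.concatenate([img_b, img_a, text])

# After two stacked causal layers, the final text-token embedding differs:
# the intermediate tokens were contextualised against different prefixes,
# so reordering the image inputs changes the conditioning of the U-Net.
deep_ab = causal_attention(causal_attention(seq_ab))
deep_ba = causal_attention(causal_attention(seq_ba))
order_sensitive = not np.allclose(deep_ab[-1], deep_ba[-1])
```

This also matches the qualitative observation in Fig. 12 that the first input image has the largest influence: every later token is contextualised against it.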
Rebuttal 1: Rebuttal: We thank all the reviewers for their detailed and helpful feedback. We are encouraged that they found our solution for expressive image generation to be well-motivated (8eHc), novel (sjMA, g96C), and well-written (sjMA, 5v9N). Reviewers highlighted the efficient fusion of pre-existing models for simple, sample-efficient finetuning of a diffusion model (sjMA, 5v9N, 2Kwy), and improved capacity for flexible expression of complex concepts from combining complementary strengths in multimodal prompt interweaving (8eHc, sjMA, g96C, 2Kwy). We are pleased about the recognition of the importance and value of compositional robustness benchmarking (8eHc, g96C) as well as multilingual alignment evaluation (g96C). Based on the reviewers’ suggestions, we provide further information on architectural choices, datasets, as well as the overall training procedure. In the supplementary PDF, we supply further information on the training data sizes, parameter counts, and GPU hours for all components in Tab. 5, along with details on data splits, languages, and modalities in Tab. 6. Further, we share additional qualitative (Fig. 11) and quantitative (Tab. 7) ablations on attention manipulation as well as qualitative examples of reversing the order of image inputs (Fig. 12). We consolidate common concerns and responses here and reply to the remaining comments individually in the hope of addressing them accordingly. **@8eHc, sjMA, g96C, 2Kwy** ### Data: We use proprietary datasets for both multimodal and LM training. However, we acknowledge that downstream capabilities are derived from these models and hope that the information in Tables 5 and 6 can provide additional insight. ### Reiteration on architecture design: Previous work has demonstrated that text encoders based on context-sensitive LMs improve the expressiveness of downstream image generation models [40,2]. 
Accordingly, we model the backbone of MultiFusion’s encoder as a 13B autoregressive transformer [10] trained on a multilingual corpus (cf. Tab. 5 and 6). We justify our choice of an autoregressive decoder model over a bi-directional architecture by arguing that decoder models outperform bi-directional models on tasks such as manipulation with natural language (“Subject X with background Y”) (cf. qualitative results) or correct feature attribution (“Red X, Blue Y”) (cf. MCC-250 results). These capabilities have previously been attributed to the natural breaking of permutation equivariance [54], compared to bidirectional models relying entirely on positional embeddings. Following the method proposed by MAGMA [16], we add an image prefix and dedicated adapters to enable multimodal capabilities. The adapters are added to each attention and feed-forward layer of the transformer and are trained autoregressively on a combination of large-scale image-text datasets (cf. Tab. 5 and 6), while the parameters of the language model remain frozen. We argue that adapters are a suitable architectural choice for multimodal prompts with arbitrarily interleaved sequences of text and image tokens, as previous research has already performed extensive ablations on adapter architectures and demonstrated their improved understanding of multimodal inputs over other methods [16]. Our choice of semantic embeddings was guided by the intuition that a focus on the semantics of a text prompt would best capture the information relevant to image generation and thus simplify learning the mapping from embeddings to image outputs. We decided to obtain high-quality semantic embeddings through parameter-efficient bias tuning [4] instead of full model finetuning, based on the work of [33]. The finetuning follows the supervised contrastive learning objective outlined in section 4.1.1 of [33]. 
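As a rough sketch of the bias-only (BitFit-style [4]) finetuning mentioned above, shown here on a toy PyTorch model rather than the authors' proprietary 13B LM, the setup amounts to freezing every weight matrix and leaving only the bias terms trainable:

```python
import torch
from torch import nn

def freeze_all_but_biases(model: nn.Module) -> nn.Module:
    """BitFit-style setup: mark only bias terms as trainable, giving a
    very parameter-efficient finetuning regime for a pre-trained model."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")
    return model

# A small stand-in transformer; the actual 13B backbone is proprietary.
layer = nn.TransformerEncoderLayer(
    d_model=16, nhead=2, dim_feedforward=32, batch_first=True
)
toy_lm = freeze_all_but_biases(nn.TransformerEncoder(layer, num_layers=2))

# Only a small fraction of parameters would receive gradients
# during the semantic (contrastive) finetuning step.
trainable = sum(p.numel() for p in toy_lm.parameters() if p.requires_grad)
total = sum(p.numel() for p in toy_lm.parameters())
```

An optimizer would then be built only over `(p for p in toy_lm.parameters() if p.requires_grad)`, so the frozen weights stay untouched.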
In the final step, we finetune the cross-attention parameters of SD on the LAION aesthetics dataset following the standard diffusion objective, which is in line with previous research [13]. We adjusted Fig. 1 and its caption, as well as section 3, to clarify details on the architecture and to better reflect the design choices. **@8eHc, sjMA** ### Ablations: ***Attention Manipulation***: Attention Manipulation is required to counteract the fact that images are encoded by an order of magnitude more tokens than short text prompts. We show representative examples of how attention manipulation strengthens the influence of the text prompt on the generated output in App. D, Fig. 9 as well as Fig. 11 of the rebuttal. Further, we compute additional FID scores (cf. Tab. 7) for the multimodal prompt ablating the attention manipulation weight on the text prompt, showcasing that with increasing weight, the FID scores approach those of text-only prompting. The experiment empirically verifies that a higher attention manipulation weight on text prompts increases their influence on the generated image. ***Semantic Finetuning***: Early experiments have shown higher rates of convergence (based on visual inspection of generated outputs) for experiments using semantic embeddings. Consequently, we do not report ablation results without semantic fine-tuning, as it is an essential condition and not an optional architecture choice for successfully fusing an image generation model. We adjusted section 3 of the paper accordingly to provide more clarity on this behavior. [2] Yogesh Balaji, et al. eDiff-I. arXiv:2211.01324 [4] Elad Ben Zaken, et al. BitFit. arXiv:2106.10199 [10] Tom Brown, et al. Language models are few-shot learners. arXiv:2005.14165 [16] Constantin Eichenberg, et al. MAGMA. arXiv:2112.05253 [33] Niklas Muennighoff. SGPT. arXiv:2202.08904 [40] Teven Le Scao, et al. BLOOM. 
arXiv:2211.05100 [54] Amirhossein Kazemnejad, et al. The Impact of Positional Encoding on Length Generalization in Transformers. arXiv:2305.19466, 2023. Pdf: /pdf/962ee4b2729139a46e094118c550cdc03b3840ed.pdf
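The attention-manipulation idea discussed in the ablations above can be sketched in numpy. The paper's exact operator is not reproduced here; the snippet assumes one common formulation, biasing the pre-softmax logits, which multiplies the unnormalised attention mass on a prompt token by its weight and thereby lets a short text prompt compete with the much larger number of image tokens:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def weighted_attention_map(q, k, token_weights):
    """Cross-attention map where a per-prompt-token weight rescales the
    attention each query pays to that token: adding log(w_i) to the logits
    multiplies the pre-softmax attention on token i by w_i."""
    scores = (q @ k.T) / np.sqrt(k.shape[-1])
    scores = scores + np.log(token_weights)[None, :]
    return softmax(scores)

rng = np.random.default_rng(1)
q = rng.normal(size=(4, 8))    # stand-in latent-image queries
k = rng.normal(size=(12, 8))   # stand-in prompt keys: 10 image tokens + 2 text tokens
text_idx = np.arange(10, 12)

plain = weighted_attention_map(q, k, np.ones(12))
boost = np.ones(12)
boost[text_idx] = 4.0          # up-weight the short text prompt
boosted = weighted_attention_map(q, k, boost)

# Attention mass on the text tokens strictly increases for every query.
mass_plain = plain[:, text_idx].sum(axis=-1)
mass_boost = boosted[:, text_idx].sum(axis=-1)
```

This mirrors the reported FID trend: as the weight on text tokens grows, the conditioning (and hence the output) approaches that of text-only prompting.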
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents an approach for creating a model that can take interleaved sequences of images and multi-lingual text as input, and generate novel images as output, by fusing together pre-trained models: (1) a ResNet image encoder from CLIP (2) an encoder-decoder text Transformer LM and (3) a Stable Diffusion (SD) image decoder. Most of the weights of the models are frozen, with some fine-tuning of adapter layers, the biases of the LM, and the cross-attention layers of the SD U-Net. Multimodal training is done on a combination of large scale image captioning, and VQA datasets, using the standard text-conditioned diffusion objective. The encoder-decoder text model was pre-trained on multi-lingual data, making the resulting image generation model also multi-lingual. Strengths: S1) The aim of this work, enabling an image generation model to take both images and text (in multiple languages) as input, is exciting and well-motivated by some of the qualitative examples in the paper: multi-modal inputs give complementary info, and multi-lingual text capabilities should broaden model accessibility. S2) I found the experiments on compositional robustness (MCC-250), with the improved results from the combination of this text encoder and the image inputs, interesting and think it has the potential to be a timely addition to the ongoing conversation about the role of the pre-trained text-encoder in compositional robustness of image generation (but see suggestions on baselines below). Doing a human evaluation user study was also a real strength of these experiments. S3) The qualitative results were compelling, particularly Figure 5 in the main text and Figure 4 in the appendix. Weaknesses: W1) The experimentation was a bit thin. 
- Although there is definitely a shortage of current benchmarks for the new capabilities presented by this model, the contribution of the paper would be stronger if it were able to reappropriate existing benchmarks or create new ones to evaluate some of these capabilities (e.g. negative prompting with images, multimodal image composition). - The method has a few steps (e.g. contrastive fine-tuning on a natural language inference dataset; training on a large number of multimodal datasets, both VQA and captions; and using attention manipulation), but I couldn't find any ablations on these components. This, in combination with the lack of details on the [apologies, the rest of this sentence was missing earlier] datasets, makes me worried about whether the overall approach will benefit future work. - The quantitative results that are presented here would be more convincing with a few (hopefully) easy-to-run variants of the current settings (another classifier-free-guidance weight; ablating image inputs in the MCC-250 experiment); see questions below. The compositional robustness results are interesting, but giving an image as input is a pretty strong (and potentially unrealistic) source of supervision. [update after response] : I still feel that point a) above, about capability evaluation, is a weakness, but the author response definitely helped address the other points. Thank you! W2) The method relies on proprietary datasets and models for the language model (and possibly also for the image datasets, see questions below). I don't think this would be a crucial weakness except that almost no information is given about these datasets and models, even in the appendix. 
Given that the LM is frozen when doing the multimodal training, and that the capabilities of the fused system (with respect to multi-linguality, and the compositional robustness experiments) seem very likely to me to depend on the properties of this LM, more openness (ideally, using a publicly-released multimodal encoder-decoder transformer, like mT5-XXL, which also has 13B parameters) would really enhance the scientific value of this paper. - The encoder is described as a "13B transformer encoder-decoder similar to GPT-3", but GPT-3 is a decoder-only model, trained with a language modeling objective. - The LM dataset is described only as "400B tokens of English, German, French, Italian, and Spanish", and it's unclear whether the multimodal training data includes datasets other than the ones listed in lines 17-18 of the appendix. - The German-English versions of SNLI and MNLI used for the semantic embedding objective also seem to be proprietary. W3) The writing was somewhat unclear. In particular, a lot of details about the model (the pre-trained models used, the training data for the full approach) were unspecified in the main text, although outlined in the appendix (Section A). Some details about the experiments were also unclear, see questions. [update after response]. The response effectively addressed both W2 and W3 -- thanks! Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1) It would be helpful to give any of the following results that are available: - Scores for Table 1 with guidance scale 1.0, as SD seems to surpass MF as guidance scale decreases. - Results for Table 2 that also remove the image input (i.e. use just the text), to see if there's a benefit in compositional robustness from using the pre-trained encoder-decoder model (this would be very cool if so!). - Ablations of any components of the approach, e.g. 
the semantic embedding fine-tuning, the attention manipulation, or some of the datasets used in multimodal training (e.g., how much value do the VQA datasets add). Q2) Could more information be given about the encoder-decoder LM, in particular what objective was it pre-trained with (e.g. a denoising objective? prefix-LM)? What is in the training data (e.g. web pages? books? is there paired data across languages, or all monolingual corpora)? Q3) What data is used to train the multimodal adapters (to input images) and the SD U-net? It wasn't totally clear to me from the appendix. In particular, are other multimodal datasets used beyond the ones listed in lines 17-18 of the appendix? "such-as" and "like" make it seem like there could be others -- can you say anything about them, if so? Q4) What image is being provided as input in the Table 1 results? Is it the ground-truth image? If so, could you give some intuition for this? The results are much better with multimodal and image, but ground-truth image would be really strong supervision (and probably also explain why multimodal is worse). Not crucial for the author response, but it would also really improve the paper to clarify: - How are the translations of SNLI/MNLI (line 24 of the appendix) generated? - how is the contrastive training on the entailment dataset done? - give details of the multimodal training (e.g. learning rates, dataset size, GPU hours) for both the adapters and the SD U-net. - I didn't understand Fig 4a, as it seems that similarity needs a translation in two language pairs (e.g. en--de) but the categories here are single language. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: I felt that the limitations section was pretty solid in qualitatively outlining weaknesses of the approach, although I'd appreciate experiments to quantitatively support the claim that attention manipulation can help prevent image context overriding text context. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
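The fusion recipe summarized at the top of this review (frozen pre-trained backbones plus small trainable adapters) follows the standard adapter-based fine-tuning pattern. A minimal numpy sketch, using hypothetical stand-ins (`backbone`, `adapter`, weight shapes) rather than the paper's actual components:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, bottleneck = 16, 4

# Frozen backbone weights: never updated during multimodal fusion.
W_frozen = rng.standard_normal((dim, dim)) / np.sqrt(dim)
# Trainable adapter weights: a small bottleneck with a residual connection.
W_down = rng.standard_normal((dim, bottleneck)) / np.sqrt(dim)
W_up = np.zeros((bottleneck, dim))  # zero-init so the adapter starts as identity

def backbone(x):
    """Stand-in for a frozen pre-trained layer."""
    return np.maximum(x @ W_frozen, 0.0)

def adapter(h):
    """Residual bottleneck adapter: h + up(relu(down(h)))."""
    return h + np.maximum(h @ W_down, 0.0) @ W_up

x = rng.standard_normal((2, dim))
h = backbone(x)
# With the up-projection zero-initialized, fusion training starts exactly
# from the frozen model's behavior and only gradually departs from it.
print(np.allclose(adapter(h), h))  # True
```

Only the adapter parameters (and, per the paper, LM biases and SD cross-attention) would receive gradients; the zero-init residual is a common choice so the fused system initially matches the frozen model.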
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and insightful questions and will address them in the following. We strongly believe that the changes made during the review process have improved the quality of the paper. [Rebuttal Pdf contains Tab. 5-7 as well as Fig. 11 and 12]

### Q1) Further Results

1. For guidance scale 1.0, Stable Diffusion (SD) achieves an FID score of 26.09 and MultiFusion (MF) achieves FID scores of 32.81 (Text), 24.22 (Multimodal), and 18.93 (Image). These results indeed show that SD does surpass MF for guidance scales <= 2. However, in line with previously reported results for Stable Diffusion, model performance significantly degrades for scales < 2. Further, CLIPScores for guidance scale 1.0 are 0.27 for SD and 0.25 for MF, which is in line with results for guidance scales 8.0 - 2.0. We will extend Tab. 1 with these results for the final version of the paper.
2. We appreciate the reviewer’s suggestion to further illustrate the benefit of multimodal over text-only input and extend Tab. 2 accordingly. Due to the limited time and resources available during the rebuttal, we conducted the user study on a smaller scale with 1 sample per prompt. For the final version of the paper, we will provide the complete study at full scale and extend Tab. 2.

| Methods | Zero obj. | One obj. | One obj. w/ correct color | Two obj. | Two obj. w/ correct colors |
|--|--|--|--|--|--|
| Stable Diffusion [%] | 0.49 | 99.50 | 90.25 | 45.63 | 28.57 |
| Composable Diffusion [%] | 2.93 | 97.01 | 88.83 | 36.65 | 25.12 |
| MultiFusion [%] | 0.67 | 99.32 | 93.57 | 62.37 | 54.24 |
| MultiFusion (text) [%] | 0.8 | 99.00 | 80.00 | 43.00 | 24.00 |

One can observe that MultiFusion text-only is roughly on par with the other text-only models. Consequently, it is indeed the multimodal inputs, and not the change in encoder architecture, that provide the performance increase.
3. 
Early experiments have shown higher rates of convergence (based on visual inspection of generated outputs) for experiments using semantic embeddings. Consequently, we do not report ablation results without semantic fine-tuning, as it is an essential condition and not an optional architecture choice for successfully fusing an image generation model. We adjusted Section 3 of the paper accordingly to provide more clarity on this behavior. Similarly, Attention Manipulation is required to counteract the fact that images are encoded by an order of magnitude more tokens than short text prompts. We show representative examples of how attention manipulation strengthens the influence of the text prompt on the generated output in App. D Fig. 9 as well as Fig. 11 of the rebuttal. Further, we compute additional FID scores (cf. Tab. 7) for the multimodal prompt, ablating the attention manipulation weight on the text prompt, showcasing that the higher the weight, the closer the FID score is to a text-only prompt. This quantitatively verifies that a higher attention manipulation weight on text prompts increases their influence on the generated image. Our multimodal dataset is based on the findings and extensive ablations of MAGMA [16], thus we did not deem it crucial to perform our own dataset ablations.

### Q2) LM Details

We thank the reviewer for pointing out the inconsistency in the abstract regarding the LM's architecture. As stated in the main body of the paper, MultiFusion uses an auto-regressive transformer, i.e. a decoder-only architecture similar to GPT-3. We adjusted the appendix accordingly. We supply further information on the training data sizes for all components in Tab. 5, along with details on data splits, languages, and modalities in Tab. 6. We will add both tables to the final version of the paper. Only the semantic bias training uses paired multi-lingual data. All other training data is sourced from monolingual corpora. 
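The attention manipulation discussed in Q1 up-weights the short text prompt against the much longer image-token prefix. The paper's exact mechanism is described in its App. D; the sketch below shows one common way such reweighting can be realized (scaling pre-softmax attention logits on text-token keys), offered purely as an illustration and not as the paper's implementation:

```python
import numpy as np

def reweighted_attention(scores, text_mask, w):
    """Add log(w) to the pre-softmax logits of text-token keys, then
    renormalize. With far more image tokens than text tokens, w > 1
    shifts attention mass back toward the text prompt."""
    scores = scores + np.where(text_mask, np.log(w), 0.0)
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = np.zeros(8)                      # 6 image tokens, 2 text tokens, equal logits
mask = np.array([False] * 6 + [True] * 2)
attn = reweighted_attention(scores, mask, w=4.0)
print(attn[mask].sum() > attn[~mask].sum())  # True: text tokens now dominate
```

This matches the qualitative claim in the rebuttal: raising the text weight pulls the output toward what a text-only prompt would produce.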
### Q3) Multimodal training

The multimodal components (image prefix and adapters) are trained autoregressively on a combination of large-scale image-text datasets (cf. Tab. 5 and 6). Following MAGMA [16], the image tokens are prepended to the text tokens, and the language modeling loss for next-token prediction is computed over the text only. The cross-attention parameters of SD are finetuned on the LAION aesthetics dataset following the standard diffusion objective. We added further clarification to the paper regarding these training details.

### Q4) Reference Image

The reference image used for the experiment in Tab. 1 is indeed the ground truth image from COCO. We agree with the reviewer that this is strong supervision, most likely not available for a real-world use case. However, the key takeaway of the experiment is the additional and more fine-grained information provided by an image input over text alone. We argue this conclusion to be reasonable, given the fact that MultiFusion does indeed not replicate the input image but produces a variation with more aligned details. We modified the discussion of the experiment to reflect the strong supervision signal provided by the input image.

### Additional questions

The translations of SNLI and MNLI were automatically generated using the DeepL API, which we clarified in the appendix of the paper. Further training details are included in Tables 5 and 6. The scores in Fig. 4a are indeed based on paired data. In this case, we report the average pair-wise similarity of the displayed language with all other languages. We adjusted the caption of the Figure accordingly to provide further clarification. [16] Constantin Eichenberg, Sidney Black, Samuel Weinbach, Letitia Parcalabescu, Anette Frank. MAGMA - Multimodal Augmentation of Generative Models through Adapter-based Finetuning. EMNLP, 2022. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response, which addressed many of my concerns! 
I've raised my score from a 5 to a 6.
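The MAGMA-style training described in the Q3 answer (image tokens prepended as a prefix, language-modeling loss computed over text positions only) amounts to masking the prefix out of the next-token loss. A minimal numpy sketch with made-up sizes, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_img, n_txt, vocab = 5, 4, 10        # hypothetical prefix/text lengths
logits = rng.standard_normal((n_img + n_txt, vocab))
targets = rng.integers(0, vocab, size=n_img + n_txt)

# Cross-entropy per position: the image prefix conditions the model
# but contributes nothing to the loss.
log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
nll = -log_probs[np.arange(len(targets)), targets]
mask = np.arange(len(targets)) >= n_img   # True only for text positions
loss = nll[mask].mean()
print(loss > 0, mask.sum() == n_txt)  # True True
```

In frameworks like PyTorch the same effect is usually achieved by setting the prefix targets to the loss function's ignore index.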
Probabilistic Inference in Reinforcement Learning Done Right
Accept (poster)
Summary: The authors propose a model-based RL algorithm for MDPs with unknown rewards and transition dynamics, which approximates the posterior probability of an action being optimal in a given state to ensure efficient exploration. The approach is mostly contrasted with model-free RL as inference, but there is also an experiment comparing with a Thompson sampling-based baseline. EDIT AFTER DISCUSSION I revised my score from 3 to 7. Strengths: The presentation is clear and the proposed method is easy to understand. Weaknesses: I think the abstract is rather misleading and the authors are missing a large portion of existing literature on the topic. RL as inference, as presented by Levine [30], is a model-free algorithm. Any shortcomings of this approach notwithstanding, it is simple and computationally efficient, as model-free methods generally are. The abstract of this submission promises that the authors fix some problems with RL as inference, when in fact they just use a model to compute optimal actions instead. Sure, in some sense one could argue that this is better, but not exactly novel. The RL as inference framework is thus only tangentially relevant to the proposed method and the authors should compare with model-based approaches instead. There is some discussion of the relationship to PSRL in Section 6, but way too little and too late. I don't do model-based RL myself, so I'm not really in a position to evaluate how the proposed method relates to existing approaches in that space. I also don't really know the right papers to use as baselines, but here are a few articles I found with a quick search in case that's helpful. Asmuth, J. and Littman, M. Learning is planning: Near Bayes-optimal reinforcement learning via Monte-Carlo tree search. In UAI, 2011. Asmuth, J., Li, L., Littman, M. L., Nouri, A., and Wingate, D. A Bayesian sampling approach to exploration in reinforcement learning. 
In UAI 2009, Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009, pp. 19–26. AUAI Press, 2009. Guez, A., Silver, D., and Dayan, P. Efficient bayes-adaptive reinforcement learning using sample-based search. In NIPS, 2012. Strens, M. J. A. A Bayesian framework for reinforcement learning. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML 2000), pp. 943–950, 2000. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Where does this paper stand with respect to existing literature on Bayesian approaches to model-based RL? Only once this question is answered exhaustively would it be possible to review this paper. I don't see how that's going to be possible within this cycle, though. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Unclear, as this paper is not properly positioned with respect to existing work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. In this rebuttal we hope to convince you that our work is adequately positioned with respect to existing works on Bayesian model-based RL, and that its framing in the light of ‘RL as inference’ is relevant. __Where does this paper stand with respect to existing literature on Bayesian approaches to model-based RL?__ Thanks for the pointers to existing model-based literature! Bayesian model-based RL is indeed a rich line of research, inspired by Thompson (1933) and dating back to the 2000s (e.g., the 2nd and 4th references from the reviewer). These early works laid the algorithmic foundations for the PSRL algorithm analyzed by Osband et al. (2013, 2017), which remains state-of-the-art both theoretically and empirically, along with the more recent K-learning. For completeness, we also point out that there are information-theoretic approaches (e.g., information-directed sampling [Russo and Van Roy, 2014], [Hao and Lattimore, 2022]) that provide elegant insights but are challenging to implement in practice (because of the difficulty of estimating mutual information). Finally, another line of research has adopted a tree-search approach to Bayesian RL (e.g., the 1st and 3rd references from the reviewer), yet these approaches do not provide learning guarantees and are beyond the scope of this paper. All in all, we believe that PSRL, RLSVI, and K-learning are the most relevant related works on no-regret Bayesian model-based RL, and this is why we focus our theoretical (Section 6) and empirical (Section 8) comparison on them. [Russo and Van Roy, 2014] Learning to Optimize via Information-Directed Sampling. [Hao and Lattimore, 2022] Regret Bounds for Information-Directed Reinforcement Learning. __RL as inference, as presented by Levine [30], is a model-free algorithm…__ “RL as inference” in Levine, 2018 is not an algorithm but a framework. 
Since it is not an RL algorithm, it is neither model-free nor model-based, although both model-free and model-based algorithms can in principle be derived from it. Our paper derives a _new_ framework that overcomes the (quite serious) shortcomings of the previous one. From our framework we can also instantiate both model-free and model-based algorithms, since we can use standard RL tools to solve the variational optimization problem we propose. We derive one particular model-based algorithm, VAPOR, but our framework is not bound to be model-based. For instance, VAPOR-lite (Section 8 and Appendix E.3), which optimizes in the space of policies instead of occupancy measures, is a model-free, policy-gradient-based algorithm that approximates our probabilistic inference framework. __…they just use a model to compute optimal actions instead…__ We wish to clarify that one of our key insights is to sample (according to its posterior probability) the optimal action at any state _conditioned on the state being optimal_. We refer to this as state-action optimality (Definition 1) and prove that sampling from it yields efficient exploration and provides a principled view of RL as inference. We point out the subtle importance of such conditioning, without which simply following ‘action optimality’ is well known to lead to inefficient exploration (see our illustrative example in Section 3.1). --- Rebuttal Comment 1.1: Title: Clarifying questions Comment: Having read the rebuttal and the other reviews, I think there's something interesting here, but the presentation of the paper makes it very difficult to judge what's novel, what problems it solves, and what the limitations of the proposed approach are. 
The paper is full of lengthy derivations and vague terms like "genuine statistical inference", "rigorous Bayesian treatment", or "false posterior", but there's little analysis of the objectives produced and how they compare to existing methods, and effectively no discussion of shortcomings and limitations. Before going further, some clarifying questions:

1. It seems that the key step is to optimize in the space of occupancy measures. Is this approach novel? Or is anything about the optimization algorithm used novel? If not, what's the closest existing algorithm that operates in the space of occupancy measures?
2. How much does the entropy term matter? It seems to be what's gained by the lengthy derivations. Did you run ablations without this term, just using the unconditional expected reward with an exploration bonus?
3. Why does VAPOR always go right in Table 1? It has both entropy and exploration bonuses, so how can it produce a deterministic policy?
4. In VAPOR-lite, once you throw away the entropy term, are you just adding a heuristic exploration bonus to the entropy-regularized actor-critic, or does it do anything more?

--- Reply to Comment 1.1.1: Title: Response to Reviewer MCkd (1/2) Comment: Thanks for your clarifying questions! $\newcommand{\PG}{\mathbb{P}_{\Gamma^\star}}$ Our contributions can be centered around the new object that we uncover as key for inference and control, $\PG$:

- We start by formalizing what it means for a ‘state-action pair to be optimal’. This event is at the core of the ‘RL as inference’ framework but had never been properly analyzed, leading to serious exploration shortcomings. Crucially, we reveal that the posterior probability of this event, $\PG$, is an occupancy measure.
- We prove that $\PG$ suffices for principled exploration (i.e., extracting a policy from it has a guaranteed regret bound).
- Since computing $\PG$ is intractable, we propose a variational optimization problem (VAPOR) that tractably approximates it. 
- We solve this optimization problem in two different ways: exactly (giving a tabular model-based algorithm with regret guarantees) and approximately (giving a scalable model-free policy-gradient algorithm).
- We show that both TS and K-learning can also be directly linked to $\PG$, thus shedding new light on these algorithms, tightly connecting them to our variational approach, and unifying these approaches within our framework.

We will also make sure to expand on the limitations of our work in the revised version, in particular the challenge in solving the VAPOR optimization problem, the open problem of relaxing Assumption 1 for the regret analysis, and whether there is a tighter way to approximate VAPOR with policy gradients. --- Rebuttal 2: Title: Updated evaluation Comment: Following the discussion, here's my updated evaluation. I think there's an interesting contribution in this paper that could be published, but there are serious issues with presentation that make me hesitate to recommend acceptance. I don't agree with the framing of the paper, I think there's a substantial amount of discussion of existing literature that's missing, and, most importantly, the limitations of the proposed method are not discussed clearly enough. I think the key contribution is a tractable optimization objective, VAPOR, that approximates the optimal policy, correctly taking into account the epistemic uncertainty. The key insight is to optimize in the space of occupancy measures, targeting the occupancy measure corresponding to the optimal policy. I hesitate to call it Bayesian inference, since there aren't really any observations or posteriors in the usual sense. The "variational approximation" of VAPOR is also hardly justified. The solution of VAPOR is shown to be an upper bound on the expected reward of the optimal policy, but that of itself doesn't guarantee anything. The bound on KL between the VAPOR policy and the optimal policy is self-referential. 
Ultimately, the justification of the algorithm is derived from the regret analysis, which is fine, but that's specifically not about the inference interpretation. I suppose it is debatable whether this whole procedure is in some sense Bayesian inference, but claims in published papers should not be debatable. Separately from the framing issue, the applicability of VAPOR is quite limited (I discuss the proposed extensions below). Specifically, there are two significant limitations: 1. VAPOR is only applicable in the tabular setting, and its optimization space grows with the product of states, actions, and time steps. 2. VAPOR can only handle epistemic uncertainty over the reward distribution, and not the transition dynamics. I think those limitations are acceptable, but they need to be stated very clearly throughout the paper. As is, they are absent from the abstract and not prominently stated in the paper. The extension to uncertain transition dynamics relies on very restrictive assumptions. It's bad enough to require Dirichlet priors, but I think assuming independence is even worse. I don't mind a result like this being included, but again it should be very clear what the limitations are, along with some examples of priors which do and don't satisfy those assumptions. It's even less clear to me what happens to those assumptions later in the process, as beliefs are updated. VAPOR-lite, applicable in a non-tabular setting, is described very briefly and only in the appendix. If the authors want to claim that VAPOR is applicable outside of the tabular setting, VAPOR-lite needs to be given a prominent spot in the main body of the paper, with adequate analysis. As is, the discussion of existing literature is lacking, but I think the authors' responses cover the shortcomings. They just need to be included in the paper in some coherent form. Finally, this is all assuming that the theoretical analysis of the convergence bounds is correct. 
I'm not competent to check it, so I'm relying on the other reviewers for that. Overall, I think there's a good contribution here, but the authors oversell the paper quite a bit. Either walking the claims back or providing additional evidence for the questionable claims would work, but I think the latter would necessitate another round of reviewing. --- Rebuttal Comment 2.1: Title: Response to Reviewer MCkd Comment: We appreciate the reviewer’s involved discussion. We wish to clarify some claims made by the reviewer. - “*There aren't really any observations or posteriors in the usual sense*”. Our object of interest is $\mathbb{P}_{\Gamma^\star}$, the **posterior** probability of state-action optimality conditioned on **observed data**. Computing this exactly requires Bayesian inference. - The variational approximation and regret analysis are “*debatable / hardly justified*”. **RL is both inference and control**: it is important to analyze the control behavior resulting from our inference framework, and we believe regret is a relevant measure of performance. Our approach is ‘variational’ because we replace exact inference of an intractable posterior $\mathbb{P}_{\Gamma^\star}$ with a (convex) optimization problem. - “*VAPOR can only handle epistemic uncertainty over reward distribution, and not the transition dynamics*”. Lemma 7 establishes an equivalence relationship which implies that VAPOR can handle epistemic uncertainty in the transition dynamics. - “*The bound on KL between VAPOR policy and the optimal policy is self-referential*”. The bound relates the KL divergence of the policies to the absolute difference between the value functions. Thus it draws a connection between optimism (familiar to RL practitioners) and quality of approximation as measured by KL divergence (familiar to variational Bayes researchers). 
- “*The solution of VAPOR is shown to be an upper bound on the expected reward of the optimal policy, but that of itself doesn't guarantee anything [...] The justification of the algorithm is derived from the regret analysis*”. The upper bound property and the concentration property is what guarantees the regret bound. The upper bound is ‘optimism’, an idea that is very standard in reinforcement learning. - “*VAPOR is only applicable in the tabular setting, and its optimization space grows with the product of states, actions, and time steps.*” This is basically the case for all RL algorithms (without suitable parametrization) too. The point is to properly analyze our framework and obtain a guaranteed regret bound (on which we focus our paper), then transfer to more complicated setups with suitable parametrization of the policy or occupancy measure (which we briefly touch upon with VAPOR-lite, which is promising for more thorough future investigation). - It is “*bad/worse to assume independent Dirichlet transition priors*" $\alpha$. This is the canonical prior for transitions in Bayesian RL, see “Bayesian Reinforcement Learning: A Survey” (Ghavamzadeh et al., 2016). This is because visiting a state-action pair simply increments by 1 the appropriate entry of the vector $\alpha$ — note the ease of the update because the Dirichlet is the **conjugate** prior of the categorical distribution (which models transition probabilities of discrete-state-action-MDPs). We hope this clarifies “*what happens to those assumptions later in the process, as beliefs are updated*”. We also note that the state-of-the-art Bayesian regret analyses (including PSRL, K-learning, RLSVI) assume it — it is an open question in the community how to obtain a $L \sqrt{S A T}$ Bayes regret bound without this assumption.
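The conjugate update the authors describe (visiting a state-action pair increments one entry of the Dirichlet parameter vector $\alpha$) is mechanically simple. A small numpy sketch for a toy tabular MDP with independent Dirichlet priors per $(s, a)$, as an illustration of the standard construction rather than the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
S, A = 3, 2                        # toy discrete MDP sizes
alpha = np.ones((S, A, S))         # independent Dirichlet(1, ..., 1) priors

def update(alpha, s, a, s_next):
    """Conjugate update: one observed transition increments one entry."""
    alpha[s, a, s_next] += 1

def sample_model(alpha):
    """Posterior sample of the transition kernel: one Dirichlet draw per (s, a)."""
    return np.array([[rng.dirichlet(alpha[s, a]) for a in range(A)]
                     for s in range(S)])

update(alpha, s=0, a=1, s_next=2)
P = sample_model(alpha)
print(np.allclose(P.sum(axis=-1), 1.0))  # True: rows are valid distributions
print(alpha[0, 1, 2] == 2.0)             # True: the visited entry was incremented
```

This is the posterior-sampling primitive underlying PSRL and related Bayesian-regret analyses mentioned in the discussion.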
Summary: The paper views reinforcement learning as a Bayesian variational inference problem, and proposes VAPOR, an algorithm which computes an approximately optimal occupancy measure and its corresponding policy. Bayesian regret of VAPOR is analyzed to have a sub-linear bound. In numerical experiments, VAPOR shows good performance in simple GridWord and DeepSea environments. For more complex Atari environments, a further approximated VAPOR-lite is used and is compared with its entropy-regularized counterpart. Strengths: - While the idea of "RL as inference" has been considered, this paper proposes a new approach which aims to directly approximate the optimal occupancy measure in the Bayesian setting. This approach has the potential to better approximate Bayesian optimal reinforcement learning policy which might handle hard exploration situations. - The motivation of the algorithm is explained well. The main idea of VAPOR is based on inequality (5) in Lemma 4. This inequality provides an upper bound of the optimal Bayesian performance, and the objective of VAPOR is basically finding the policy achieving the upper bound. Some analysis of the approximation is provided in Lemma 5. - The performance of VAPOR is analyzed in terms of Bayesian regret in Theorem 1 with known transitions, and then the analysis is extended in Theorem 2 under the independent Dirichlet assumption on the transition dynamics. Bayesian regret is shown to be sub-linear in both cases. - VAPOR shows good performance compared with Thompson sampling based methods in simple GridWord and DeepSea environments. - For more complex Atari environments, a further approximated VAPOR-lite is used and is compared with its entropy-regularized counterpart. Weaknesses: - Although Lemma 5 provides a bound on the KL-divergence between the VAPOR solution and the optimal occupancy measure, the bound is in terms of the gap of the VAPOR approximation of the optimal Bayesian performance in (5). 
So this bound is basically using a property of VAPOR itself to bound another property of VAPOR, and neither of them can be evaluated. Therefore, beside numerical experiments, we have no idea how loose the upper bound (5) can be. - Since the considered dynamics are time-inhomogeneous, the space of occupancy measures VAPOR is solving over has dimension growing in the time horizon. This means that VAPOR might not be able to handle problems with a long horizon, and this contradicts the statement at the end of Section 5 that VAPOR produces stationary policies; it does not, due to the time-inhomogeneous nature of the occupancy measure. - In VAPOR, $\sigma$ seems to be a hyper-parameter, but it's not clearly defined. In particular, $\sigma$ seems to be a constant in most lemma and theorem statements, but $\sigma$ looks like a multi-dimensional vector with entries $\sigma_l(s, a)$ in (2) and (4). This confusion makes the description of the algorithm inconsistent and it might lead to some errors. - The issue of $\sigma$ mentioned above might lead to a major error in the regret analysis and the VAPOR algorithm. In the proof of Lemma 12, equation (14) implies $c_l(s, a) = \sigma_l(s, a) \sqrt{n^t_l(s, a) \vee 1}$. According to (14), $c_{\max} = \max_{l, s, a} \sigma_l(s, a) \sqrt{n^t_l(s, a) \vee 1}$. Therefore, unless $\sigma_l(s, a)$ decreases at least at the rate of $1/\sqrt{n^t_l(s, a) \vee 1}$, the value of $c_{\max}$ will not be a constant. But from the statement of Theorem 1, $\sigma$ seems to be treated as a constant, which is inconsistent with the proof of Lemma 12. It seems like the uncertainty measure $\sigma$ needs to follow a decay procedure. Gradually decreased uncertainty is generally required for a policy to have sub-linear regret, because non-decreasing randomness would lead to linear regret due to the constant noise in the policy. - Lemma 7 is very counter-intuitive. 
If it's true in general, uncertainty in the transition dynamics can be easily handled by replacing it with its mean. This would establish a kind of certainty equivalence for MDPs, which seems too strong to be true. I tried to follow the proof of Lemma 7 and found several issues. Those issues may be correctable, but even if Lemma 7 is correct, it is probably too strong due to Assumption 1. Although Assumption 1 has been used in the literature, this assumption is restrictive and is not applicable in most applications. For example, any MDP with an unknown but fixed (over times/steps) transition function violates the assumption, because the transition functions are not independent over times/steps. - VAPOR-lite sounds promising with its application in complex environments like Atari games, but the paper only provides very limited information on VAPOR-lite and there are no details available for its implementation. The numerical results are also very limited, with no results on individual games and no comparison with popular existing algorithms. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: - As described in the weaknesses, is $\sigma_l(s, a)$ a constant or does it follow a decay process? This seems critical to the algorithm and its analysis. It would be great if the authors could clarify the role of $\sigma_l(s, a)$ and correct the proofs one way or another. - There are several issues in the lemmas that lead to the proof of Lemma 7. - In the proof of Lemma 13, summations are missing from the Bellman operators. It seems like the authors may want to use some short-hand notations for those summations; what are the definitions of those notations? It's not possible to follow the proof without proper descriptions and definitions. - In the proof of Lemma 14, the second sentence claims to be conditioned on $\phi$, but $\hat{\mathcal B}_l$ is defined using the distribution of $\hat \phi$ instead of $\phi$. Are there some typos in $\hat \phi$ and $\phi$? 
Either way, the proof requires more steps transitioning from $\phi$ to $\hat\phi$ to be true. - In the proof of Lemma 18, why can one bring the expectation $\mathbb E_\phi$ inside the Q-function? This seems to be a consequence of the independent Dirichlet assumption, but one needs to show that it leads to some kind of independence for the Q-function. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 3 good Limitations: Besides possible errors in the analysis, the assumption makes the analysis in the paper limited, and there is no discussion of this aspect. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review. When we encounter a reviewer that has clearly taken the time to read the paper in detail, as you have, and given the paper a low score, that is a signal to us that the paper is unclear in the highlighted areas — so thank you for bringing these issues to our attention! In this rebuttal we hope to clarify one-by-one your inquiries on the technical soundness, which we will incorporate in the revised version. __Q1: Is $σ_l(s,a)$ a constant or does it follow a decay process?__ Thanks for flagging that $σ$ was overloaded as a vector following a decay process (Lemma 3) and as a constant (Theorem 1). In short, the correspondence is $σ^{\text{vector}}_l(s,a) = \frac{σ^{\text{constant}}}{\sqrt{n_l(s,a)}}$. First, the constant $σ^{\text{constant}}$ controls how noisy the rewards are (larger noise corresponds to a larger constant); we will replace it with the notation $c_0$ to avoid confusion. Second, we assume that the posterior uncertainty decays with data (of size $n$) at a particular $\frac{c_0}{\sqrt{n}}$ rate, which is a quite natural rate (also considered in the PSRL, RLSVI, and K-learning analyses) that arises commonly in practice — it holds, for instance, for Gaussian or bounded reward noise. As such, the condition of Lemma 12 should now read: ‘We _assume_ that the uncertainty $σ^t_l(s,a)$ decays at least as fast as $\frac{c_0}{\sqrt{n^t_l(s,a)}}$’. 
__Q2: Clarifications on the proof of Lemma 7.__ - __Proof of Lemma 13.__ We will clarify the dot product notation $^\top$ in line 737 that gives the summation, i.e., for any step $l \in [L]$ the Bellman operator $B_l: \mathbb{R}^{S_{l+1} \times A} \rightarrow \mathbb{R}^{S_{l} \times A}$ is defined for any $Q_{l+1} \in \mathbb{R}^{S_{l+1} \times A}$ and $(s,a) \in \mathcal{S}_l \times \mathcal{A}$ as $(B_l Q_{l+1})(s,a) := r_l(s,a) + P_l(\cdot \mid s,a) ^\top \max_{a'} Q_{l+1}(\cdot, a') := r_l(s,a) + \sum_{s' \in \mathcal{S}_{l+1}} P_l(s' \mid s,a) \max_{a'} Q_{l+1}(s', a')$. - __Proof of Lemma 14.__ Thanks for flagging that it should indeed be $\hat\phi$ instead of $\phi$ in the conditioning of the second sentence. We will expand the proof with more detail, with the steps broken out clearly (due to the rebuttal character limit we do not include it here, but we can add it as an Official Comment if the reviewer wishes). We also recall that the proof is not our contribution, as it follows exactly the steps of the proof of Lemma 3 of RLSVI (Osband et al., 2019). - __Proof of Lemma 18.__ We understand your question as: why can we use $\mathbb{E}(P Q) = \mathbb{E}(P) \mathbb{E}(Q)$? This is due to the time-inhomogeneity of the MDP, which implies that $P$ at time $l$ and $Q$ at time $l+1$ are independent, since the future return from a fixed state-action pair cannot be influenced by the dynamics that get the agent to that state-action. __KL-divergence bound of Lemma 5.__ The result shows that the weighted KL-divergence between $\mathbb{P}_{\Gamma}^{\star}$ and its variational approximation is bounded by how much the VAPOR objective is optimistic. Thus it draws a connection between optimism (familiar to RL practitioners) and quality of approximation as measured by KL divergence (familiar to variational Bayes researchers). For this we included Lemma 5, which we do not feel is a major piece of the paper; it is more about building intuition. 
__'Stationary' terminology.__ Following some prior works we used the term ‘stationary’ to describe a strategy that does not depend on the episode number K (unlike many frequentist algorithms that have a log(K) dependence in the algorithm). In other words, stationary algorithms are entirely a function of the beliefs, not of the episode number. We will clarify this definition in the paper. __Lemma 7 is counter-intuitive.__ Yes indeed, we believe that Lemma 7 is actually an important contribution of the paper! Intuitively speaking, it doesn’t really matter to the algorithm where the uncertainty comes from (between r and P), just that there is uncertainty at some far away state-action that must be reduced by visiting it. Note that the amount of uncertainty moved from the transitions into the rewards is large, O(L) for finite horizon L, so it’s not a ‘free lunch’ so to speak. We also emphasize that the result holds only for Dirichlet posteriors over P, so quite a lot of structure is required on the problem for it to hold. However, Dirichlet posteriors over P are very natural for transition functions so we believe it is useful in practice. In fact, similar results have been used in the literature for TS, RLSVI, and K-learning. Our contribution is to make it general (i.e., it holds for any algorithm) and extend to the full sub-Gaussian case rather than just the Bernoulli reward case. __Assumption of time-inhomogeneous dynamics.__ This assumption is not inherent to the VAPOR algorithm, but rather to existing $L\sqrt{SAT}$ Bayesian regret analyses. It is straightforward to instantiate the VAPOR algorithm under time-homogeneous dynamics (as we do in the GridWorlds of Fig. 2 and 6). The assumption is solely used to present clean theoretical results (which require the property that the transition function and the value function at the next state are conditionally independent). In fact, the analyses of K-learning, PSRL, RLSVI also require this assumption. 
Note that the example you gave (an MDP with unknown but fixed transition function $P$) can be converted into a time-inhomogeneous MDP by unrolling (essentially copying the states $L$ times), picking up an additional factor of $\sqrt{L}$ in the regret bound. In practice, this does not seem to matter much (most of these algorithms perform well in either case), but we take it for ease of analysis. __VAPOR-lite limited information.__ We will include many more details on the implementation given in Appendix E.3 (e.g., detailed pseudo-code and hyper-parameters) and add the learning curves for each individual Atari game. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed responses. From the response, Theorem 1 indeed requires an additional assumption to be correct. The likely non-verifiable assumption not only makes the results much weaker; there are also many unspecified details missing for this possible assumption. If the assumption considers the vector-version $σ_l(s,a)$ under the prior distribution as in Lemma 3, they are still time-independent, and therefore cannot decay with $t$. If the assumption involves some time-dependent $σ^t_l(s,a)$ which might come from the posterior, the paper needs to add more details on the posterior distribution and some analysis of its evolution under the proposed policy. I feel like these issues require a major revision and another round of reviews. --- Reply to Comment 1.1.1: Title: Response to Reviewer QdEN Comment: We thank the reviewer for engaging. We believe that the reviewer has misunderstood our use of standard and familiar tools in Bayesian inference and hope to correct that misunderstanding below. We do not require any additional assumptions for Theorem 1. The only fact that we rely on is that in Bayesian inference the posteriors concentrate as we gather more data. This is entirely standard, e.g., [1] and references therein. 
The only question is how fast the posteriors concentrate: Under our already stated assumptions, the variance of the posteriors concentrates like 1/(#data samples). Concretely, as the agent navigates the environment it visits state-actions at *different schedules*, so the posteriors concentrate (uncertainty decays) at the same 1/(#data samples) *rate*, but at *different schedules*. As is usual in RL, we have assumed sub-Gaussian additive reward noise. That implies the following: - If each state-action reward posterior starts with some $\kappa$ uncertainty (the prior) and the reward noise has variance $c_0^2$, and for simplicity take $\kappa \gg c_0$ (if not, it can only help the posteriors concentrate faster), - Then at time $t$ the uncertainty has decayed to $c_0^2 / n^t(s,a)$, where $n^t(s,a)$ is the number of times the agent has visited state-action $(s,a)$ before episode $t$ (i.e., the number of data samples of the reward from state-action $(s,a)$). This rate is all we need to prove the regret bound. It is ‘time-dependent’, but only in the sense that as time progresses the agent is gathering more data and the posteriors (or confidence sets) are concentrating; this is entirely standard in RL, in both frequentist and Bayesian approaches (otherwise how would the agent ever learn?). The sub-Gaussian assumption is standard in the literature and arises commonly in practice. For instance, in both Atari and DeepSea (the experiments we ran) the assumption holds because the rewards are bounded. In slightly more detail: - Theorem 1 reads: For known $P$ and $c_0$-sub-Gaussian additive reward noise, it holds that $\mathcal{BR}(\text{VAPOR}, T) \leq c_0 \sqrt{S A T}$. 
- The *definition* of $c_0$-sub-Gaussian additive reward noise means that: at any time $t$, step $l$, state-action pair $(s,a)$, $\mathbb{E}^t [\exp(x (r_l(s,a) - \mathbb{E}^t r_l(s,a)))] \leq \exp \left(x^2 c_0^2 / 2 (n_l^t(s,a) \vee 1) \right), \quad \forall x \in \mathbb{R}.$ - This means that at any time $t$, we can *instantiate* the uncertainty $\sigma_l^t(s,a)$ in the VAPOR optimization problem as: $\sigma_l^t(s,a) = \frac{c_0}{\sqrt{(n_l^t(s,a) \vee 1)}}$. [1] Conjugate Bayesian analysis of the Gaussian distribution, Murphy, 2007.
Summary: The paper provides a Bayesian treatment of the reinforcement learning as probabilistic inference framework for discrete state/action spaces. As an extension to the standard formulation, posterior probabilities of state-action optimality are considered and formally defined. For tractability, an approximation of the probabilities via variational inference is derived. Further, the case of unknown dynamics is considered and links to other approaches are discussed. The authors evaluate their method on a grid world problem, DeepSea, and (in a reduced form) on Atari. Strengths: The problem considered in the paper is interesting for use in state-of-the-art reinforcement learning. While it is not obvious at first sight why a Bayesian treatment is advantageous, the simple decision problem makes that very clear. By providing a formalization of the Bayesian formulation and the variational approximation, as well as details about the algorithm, the contribution of the paper is very good. The paper is well-written and easy to follow. While I just briefly looked at the appendix, all theorems seem to be proven and assumptions listed. Weaknesses: There is no code provided in the supplementary material or paper. In these days, I would expect this for an accepted paper at NeurIPS. Future work and limitations are not sufficiently discussed in my opinion. For the Atari evaluation, only a reduced version of the algorithm is considered (without weighted entropy). Information on how to solve the VAPOR objective is only given in the appendix. The paper seems very crowded; for example, the table beside Lemma 6 is a bit confusing. It is good, however, that it has a lot of content/contribution. Minor: The quantity $\lambda$ does not seem to be well-defined, as it is used as a function but not defined this way. It took me a while to understand what it really is. This should be written in a more consistent way. line 110: Add "pair" after state-action. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How strong is the assumption for VAPOR that $r_l$ is sub-Gaussian (in Lemma 3)? Is it possible to apply the method to continuous state/action spaces? What are the runtimes for the experiments? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The formulation is limited to discrete state/action spaces and does not seem to be scalable. No runtimes are provided. It would have been very interesting to see how much time it takes to solve the two-player zero-sum game in every iteration. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and suggestions (such as expanding on the future work and limitations), which we will incorporate in the revised version. __No code.__ We will include more details on the implementation of both VAPOR and VAPOR-lite, such as detailed pseudo-code (including the code snippet of the VAPOR optimization problem that we solve using CVXPY) and hyper-parameter list. __$\lambda$ notation.__ We will clarify on lines 71-74 that the notation $\lambda_l \in \mathbb{R}_+^{S \times A}$ means that $\lambda_l$ is a function $S \times A \rightarrow \mathbb{R}_{+}$ for every $l \in [L]$. __Q1: How strong is the assumption for VAPOR that $r_l$ is sub-Gaussian (in Lemma 3)?__ We consider the sub-Gaussian mean reward assumption in the main paper since it is very standard in the literature (e.g., K-learning [O’Donoghue, 2021], PSRL [Osband and Van Roy, 2017], RLSVI [Osband et al., 2019]) and it allows us to present clean results and regret bounds. This assumption is not required _computationally_ for VAPOR (only for the analysis): in Appendix C.3 we extend Lemma 3 to the general case, and we refer to Appendix C.5 for the resulting VAPOR optimization problem. __Q2: Is it possible to apply the method to continuous state/action spaces?__ Our new framework of ‘RL as inference’ may be instantiated and approximated in various ways, targeting theoretical guarantees or scalability, which dictates which standard RL tools to use to solve the variational optimization problem we propose. In particular, we derive a tabular, model-based algorithm VAPOR with theoretical guarantees (Algorithm 1). Meanwhile, by optimizing in the space of policies instead of occupancy measures, VAPOR-lite is a model-free, policy-gradient-based algorithm that approximates our probabilistic inference framework, so it can readily leverage existing policy-gradient techniques suitable for continuous state/action spaces. 
__"Only a reduced version of the algorithm is considered (without weighted entropy)".__ We will make sure to add more details on VAPOR-lite in the revised version. We will clarify that VAPOR-lite does consider weighted entropic regularization (but in the space of policies instead of occupancy measures, which is a weaker but more scalable form of regularization). As such, VAPOR-lite retains the core algorithmic novelty of VAPOR of weighting optimism and entropy regularization on a per state-action basis. __Q3: What are the runtimes for the experiments?__ Although VAPOR requires solving a convex optimization problem at each iteration, which makes it slower than dynamic programming approaches like Thompson Sampling, the problem is an exponential cone program that can be solved efficiently using modern optimization methods. We point out that we do not need to solve the two-player zero-sum game since the minimization over $\tau$ conveniently admits a closed-form solution (see Equation 4). In our experiments, with the CVXPY solver ECOS (and 1e-8 absolute tolerance), solving the VAPOR optimization problem took a few seconds on the largest DeepSeas, which was sufficient for our purposes. There is a natural trade-off between a less accurate optimization solution (i.e., better computational complexity) and a more accurate policy (i.e., better sample complexity), which can be balanced with the choice of CVXPY solver, its desired accuracy, its maximum number of iterations, etc. We will include runtime details and discussion in the revised version. As for VAPOR-lite, its computational cost is essentially the same as the replay actor-critic baseline (it only needs to compute the uncertainty signal σ with an ensemble of reward predictors). --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions. After reading the other reviews and discussions, I have decreased my confidence as the other reviewers probably have a deeper understanding of the paper. 
I still think the paper is well written and the technical contributions seem good to me, but there might be some issues as mentioned by reviewers QdEN and MCkd. I especially agree with them that limitations are not sufficiently discussed in the paper.
Summary: This paper undertakes a rigorous Bayesian treatment of the posterior probability of state-action optimality. It proposes a variational approach to approximate the state-action optimality. The proposed method involves a tractable convex optimization problem and is provably efficient. This paper also conducts experiments showing that the proposed method compares favorably with previous methods. Strengths: This paper, for the first time, undertakes a rigorous Bayesian treatment of the posterior probability of state-action optimality. It proposes a novel method having deep connections to previous work. Related work has been adequately cited. This paper seems to be technically sound, though I did not check the proof in Appendix. All claims are well supported by theoretical analysis and experimental results. It is also clearly written and well organized. I believe the result presented in this paper is significant. What I find most interesting is that the proposed method finds a balance between optimism and entropy regularization, and the resulting reward bonus is the product of surprise and uncertainty. Weaknesses: This paper formally treats the posterior probability of state-action optimality and demonstrates its connection to Thompson sampling. Similar analysis has been given in [O’Donoghue et al.](https://openreview.net/pdf?id=S1xitgHtvS). This should be mentioned in the paper. The proposed method is novel. However, its connection to previous work, especially K-learning, should be discussed in more depth. Specifically, K-learning could be understood as a variational approximation of the policy induced by the state-action optimality ([O’Donoghue et al.](https://openreview.net/pdf?id=S1xitgHtvS)). VAPOR proposed in this paper additionally optimizes the stationary state distribution, which, to my understanding, complicates the optimization by introducing the need for forward message passing. It is not discussed in this paper whether and why this is beneficial. 
In Figure 2, this paper compares the stationary state distribution induced by VAPOR and TS. I believe it should additionally provide a comparison with K-learning, as K-learning also approximates the policy induced by the state-action optimality. This paper provides a performance comparison in environments like DeepSea and Atari. As these are Bayesian approaches, the paper should also compare their Bayesian regret by simulating MDPs randomly generated from the prior. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is there a theoretical benefit for optimizing both the policy and stationary state distribution? Is there an intuitive explanation for the balance between optimism and entropy regularization? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: This paper has not discussed the limitations of the proposed method and the theoretical analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable comments and for highlighting connections to previous work, on which we will expand in the revised version. We answer their queries below: __Discussion of O’Donoghue et al.__ Thanks for highlighting that O’Donoghue et al., 2020 argue that TS solves at each episode an inference problem that samples from “the _joint_ [posterior] probability over all the binary [_action_] optimality variables”. While this is stated without proof or a rigorous definition of this joint quantity (which appears quite convoluted at first glance), our paper reveals that it can be concisely expressed: it represents the policy induced by the posterior probability of _state-action_ optimality (Def 1) (which is an occupancy measure). Thanks to this new definition, we can write a formal link between TS and RL as inference (Lemma 8). __Connection to K-learning.__ As you say, O’Donoghue et al., 2020 show that K-learning can be derived as an approximate inference procedure, although this hinges on the _assumption_ that inference is over the moment generating function of $Q^\star$ (Equation (7) of O’Donoghue et al., 2020). On the other hand, VAPOR’s derivation through our inference perspective is free from any assumption on the distribution of the optimality variables. Interestingly, we can show that VAPOR with equal temperature variables $\tau$ approximately recovers K-learning, thus shedding a new light on K-learning as a variational approximation of our probabilistic inference framework with an additional constraint of equal $\tau$ variables. __Suggestions of additional experimental insights.__ Thank you for the two suggestions (i.e., plot the occupancy measure of K-learning in Figure 2, and compare the Bayesian regret of the approaches by simulating MDPs randomly generated from a prior), which we will incorporate in the revised version. 
__Q1: Is there a theoretical benefit for optimizing both the policy and stationary state distribution?__ We show that the core quantity that formalizes RL as inference from a Bayesian viewpoint — the posterior probability of state-action optimality — is an occupancy measure (Lemma 1). Our variational approximation VAPOR naturally optimizes in this space. Intuitively, having a forward message passing enables the agent to capture prior episode information (i.e., to condition on prior steps in the episode being optimal), which is key to taking consistent actions (see our example in Figure/Table 1). Directly optimizing over occupancy measures is difficult in practice, thus we also propose VAPOR-lite which instead optimizes over the policies, akin to standard policy gradient methods (in this case, the optimization problem is no longer concave but good maxima can be found). __Q2: Is there an intuitive explanation for the balance between optimism and entropy regularization?__ This is a great question, in particular because it naturally falls out of our approach. One interpretation of having both exploration mechanisms work in tandem may be that optimism is providing a guidance on _which_ areas of the state space to visit next (i.e., uncertain states with high intrinsic reward). Entropy regularization, meanwhile, could be seen as providing guidance on _how_ to reach such desired states, where stochastic trajectories are preferred (which adds some local coverage/exploration). --- Rebuttal 2: Title: Acknowledgement Comment: I have read through all reviews and rebuttals and decided to keep my original score for acceptance.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper identifies "the posterior probability of state-action optimality" denoted $\mathbb{P}_{\Gamma^*}$ as a key object for inference and control and provides a variational optimization approach to estimate it. Clear presentation, insightful analysis, and experiments on GridWorld, DeepSea, and Atari are provided. Strengths: Outstanding paper in all respects. A fresh and principled approach to RL as inference. High quality execution both on the theory and experimental side. Weaknesses: None Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: There is no dedicated section for limitations, but limitations are sufficiently discussed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 10: Award quality: Technically flawless paper with groundbreaking impact, with exceptionally strong evaluation, reproducibility, and resources, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the complimentary review! We are delighted that you think our work is valuable. We welcome any further comments.
null
null
null
null
null
null
Diversify Your Vision Datasets with Automatic Diffusion-based Augmentation
Accept (poster)
Summary: The paper proposes a generative data augmentation method, ALIA, which utilizes large image captioning and language models to summarize the domain description and employs language-guided image editing methods to create augmented training data. Experimental results show that ALIA outperforms recent data augmentation methods on fine-grained classification tasks. Strengths: - S1. The paper is well-written and easy to follow. - S2. The idea of using prompts to extract domain information and class-agnostic descriptions is novel. - S3. The empirical result of the iWildCam experiment is promising. It is impressive that the task model trained on additional ALIA-generated images outperforms the one that uses real samples. Weaknesses: - W1. Limited technical contribution. Although the idea of using prompts to extract domain information and class-agnostic descriptions is novel, most building parts of ALIA are off-the-shelf methods, for example, BLIP, GPT-4, Img2Img and Instruct Pix2Pix. - W2. The proposed method relies heavily on pre-trained models. As mentioned in the limitation section, ALIA is likely not as effective for unseen test domains. If ALIA performs poorly on a target dataset, no feedback can be sent to fine-tune ALIA on the dataset. - W3. ALIA is only tested on three datasets, where the dataset classes are mostly similar (see Q1). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Q1. While being a generic data augmentation method, it seems that each tested dataset is comprised of similar objects, e.g., CUB contains birds, FGVC-Aircrafts contains aircraft and iWildCam contains animals. Does ALIA assume the dataset classes to be similar? On a more diverse dataset like ImageNet, can the large language model summarize good domain descriptions and create valid edits? - Q2. In line 118. The authors constrain the number of domain descriptions to be less than 10. Can the authors explain more about the choice of the number of domain descriptions? 
How does it affect the quality of the augmented samples? - Q3. In lines 191-192. The authors mention generating twice as much data as the original data to ensure “enough” data is available after filtering. How do we know whether the amount of filtered data is “enough” or not? Should we generate more data if the original dataset size is smaller? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I find no negative societal impact in this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your feedback and agree that our method is dependent on the ability of the pretrained models we use. We address your other questions and concerns below. **Datasets.** Our datasets consist of similar objects because we focused on the difficult setting of fine-grained classification. This choice was to see if diffusion could maintain the fine-grained features of the class while changing the spurious features. That being said, ALIA does not need the classes to be similar, and could be applied to improve performance on datasets like ImageNet. Furthermore, we agree that more datasets are better, and thus we added an additional contextual bias dataset, Waterbirds, which shows that ALIA outperforms everything but the real data baseline. **Choice of number of domain descriptions.** Our choice of fewer than 10 domain descriptions was due to compute constraints, since we generate two edits for each image in the dataset for each prompt. While the number of domain descriptions doesn't affect the quality of the augmented samples, having a very small number of prompts could potentially affect the diversity of the resulting augmented set. **Amount of data to generate.** The choice to generate twice as much data as the dataset (2 edits per image) is largely due to the real data baseline. Because we set aside real data to compare to (+Real), we needed all generated data techniques to add in the same amount of data per class as this real data baseline. In practice, one could edit a small percentage of their dataset and see what percentage of images were filtered out, then use that to determine roughly how many images to edit in order for the filtered dataset to be of the desired size. For example, if one wanted to add in 10 samples per class and profiling on a small generated dataset shows that around 50% of the images get filtered out, at least 20 images should be edited per class. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. 
The authors addressed most of my concerns by providing additional results on the Waterbirds dataset and insights on choosing different numbers of domain descriptions and the amount of data to generate. In summary, the proposed method demonstrates a good utilization of LLM and generative models for data augmentation. I decided to increase my score from 6 to 7. In the final version, I recommend the authors include additional experiments to validate that the number of domain descriptions doesn't affect the effectiveness of the augmented samples. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are glad we have addressed most of your concerns. We agree that an ablation on the number of domain descriptions would provide useful insight into the method and will strive to include those results by the camera ready.
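The filter-rate profiling heuristic described in the rebuttal above can be written as a small helper. This is a hypothetical sketch illustrating the arithmetic, not the authors' code; the function name and signature are assumptions.

```python
import math

def edits_needed(target_per_class: int, filter_rate: float) -> int:
    """Minimum number of images to edit per class so that, after a
    fraction `filter_rate` of the edits is filtered out, at least
    `target_per_class` augmented images survive per class."""
    kept_fraction = 1.0 - filter_rate
    return math.ceil(target_per_class / kept_fraction)

# Rebuttal example: 10 samples per class wanted, ~50% of edits filtered out.
print(edits_needed(10, 0.5))  # 20
```

The same helper generalizes the rebuttal's example to any target size and profiled filter rate.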
Summary: The paper introduces ALIA to generate domain descriptions of the dataset from captions of each image. These descriptions are then used to generate more data using Stable Diffusion. There is also a filtering process to remove corrupted images or those with minimal edits. ALIA improves performance over the baselines on several datasets. Strengths: - The paper was well written and easy to follow. - Fine-grained classification is an interesting and difficult problem. It is also nice to see a different setting compared to existing papers trying to improve robustness on IN-Sketch/R etc. Furthermore, the method performs well compared to the baselines. - I like the discussion on filtering failed edits. It is important but usually not given a lot of attention. Weaknesses: - Data and computational efficiency - How many new images were generated for the baselines and ALIA? It would be useful to have a plot of how the performance of the methods changes with generated data and compute time. Augmentation methods like CutMix and RandAug have the advantage of being relatively efficient and more scalable than generation methods. This may matter if the difference in accuracy is not as large, e.g. between RandAug and ALIA in Fig 5. - Background domain from test images - ALIA seems to be informed of the test domain, at least for iWildCam, while the other baselines are not. Domain adaptation methods that make use of unlabelled target images, or test-time adaptation methods, may be more relevant baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Other than the questions above, - There are existing datasets with contextual bias e.g. Waterbirds, CelebA. I was wondering why the authors chose to create one with the Aircraft benchmark. - It is interesting that img2img does not work well for the Aircraft dataset; do the authors have any intuition why? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful review. We hope that we have adequately addressed your concerns below: **Data and Computational Efficiency.** We generate 2 edits per image per prompt in the training set for ALIA. Depending on the output of the LLM, the number of prompts varies from 4 to 10. Using the HuggingFace implementation of Img2Img editing on a GeForce RTX 2080 Ti, we can produce 2 edits of a given image in approximately 2 seconds, resulting in a generation time of around 15 minutes for the smallest dataset (Airbus VS Boeing) and around 7 hours for the largest dataset (iWildCam Subset). We will include the time it took to generate our datasets in the appendix. While our method does require a lengthy generation time if the dataset is large, our primary focus in this paper was to test the quality of our augmentations rather than the efficiency. **Background Domain from Test Images.** For iWildCam we do use unlabeled test images to generate the domain prompts for editing, but crucially we only use images from the background class, so we do not obtain any data of the animals in the test domain. While there do exist domain adaptation baselines which can make use of this test data, our paper is focused on data augmentation methods specifically. **Choice of Contextual Bias Benchmark.** We chose the Aircraft dataset because it is a challenging fine-grained dataset, with contextual bias that, unlike Waterbirds, was not artificially created. However, we do have results for the Waterbirds dataset, which are included in the global response and pdf. As you can see, ALIA outperforms all baselines aside from adding real data in the case of 100% contextual bias. **Img2Img Failure on Aircraft dataset.** Our hypothesis as to why img2img does not work well for the Aircraft dataset is the lack of patterns in the image. Many of the planes are displayed against a uniform blue background and are mostly white themselves.
We saw a similar effect when trying to edit simple cartoons, or plane pictures taken randomly from Google. That being said, we predict that this problem will go away with better diffusion models or models trained on more images similar to the plane images. We will add this explanation to the appendix. --- Rebuttal Comment 1.1: Title: Thanks for the clarifications! Comment: Thanks for the additional results on waterbirds, runtime and clarifications. It is nice to see a paper focusing on non ImageNet tasks. I also appreciate the insights on how edits can affect datasets differently and the discussion on filtering failed edits. I have read the other reviews and responses too and have decided to increase my score to 7. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are delighted that our rebuttal addressed all your questions/concerns and appreciate your raised score!
Summary: The paper proposes a method to augment an existing vision dataset with samples that are likely reflective of task-relevant variations that are potentially missing in the same. To do this, the authors propose a method, ALIA (Automated Language-guided Image Augmentation), which utilizes off-the-shelf image-generation and language model pipelines to first generate diverse domain descriptions using a captioning + LLM pipeline, which can then be used to edit an existing image to generate diverse variations of the same. Specifically, ALIA consists of three steps – (1) prompting an LLM to generate domain descriptions (likely capturing visual variations) based on a collection of generated captions associated with the original dataset, (2) using an image editing pipeline (Img2Img / Instruct Pix2Pix) to generate edits and (3) automated filtration steps to ensure the generated images are visually consistent and can reliably augment the existing dataset. From experiments conducted across three benchmarks – domain generalization (iWildCam), fine-grained classification (CUB) and classification in the presence of contextual bias (custom split of FGVC-Aircraft) – the authors show that ALIA-guided expansion of the data distribution is most effective in improving performance over a vanilla baseline and other prior augmentation strategies considered. Additional ablations outline the extent to which prompting, filtration and the editing mechanism impact performance. Strengths: The following points outline the strengths associated with the submission. - The paper is generally well-written and easy to follow. The authors do a good job of motivating the base observations – (1) circumventing additional data curation by using generated images and (2) adopting conditional text-guided edits as a more structured way to generate samples to augment an existing dataset.
The introductory section does a good job of outlining the necessity of individual components in ALIA – the necessity of invoking task-agnostic language descriptions to guide edits, avoiding fine-tuning of individual large-scale models and including an automated filtration step. - In my opinion, compared to prior work (as noted by the authors as well), the novelty of the proposed approach lies in automating the pipeline to generate augmented samples with minimal interventions. While it remains to be seen to what extent “avoiding fine-tuning” will translate to more complex settings (images with multiple objects) and tasks (structured prediction), ALIA seems like a novel step in this direction of expanding data distributions by ensuring broad coverage of variations likely to be seen in test-time settings (modulo the obvious caveats acknowledged by the authors in Section 6). - The proposed method seems to work and leads to improvements over a vanilla baseline, other diffusion-guided (conditional / unconditional) editing schemes and prior augmentation strategies, and is often competitive with an oracle setting where one has access to data from the test distribution. With the exception of the points raised under weaknesses, the significance of ALIA lies in the fact that it is a straightforward and timely combination of existing techniques in language modeling and image generation / editing / personalization, and works fairly well off-the-shelf across multiple settings. Additionally, I particularly like the filtration process – designed by first identifying potential failure modes and subsequent methods to counter those.
- The ablations (coupled with underlying hypotheses wherever applicable) are useful and provide actionable insights about the extent to which components in ALIA are sensitive to the data and the model at hand – for instance, while InstructPix2Pix is more useful compared to Img2Img, in the contextual bias settings, it leads to artifacts for iWildCam edits. These observations are likely going to be useful for future work attempting to build on top of ALIA. Weaknesses: The following points outline the weaknesses associated with the submission. Most of these points are either associated with the significance and completeness towards the intended goal. - Given that the description of a “domain” is left somewhat open-ended in the current version (L83-85), the paper would benefit from including the discussion (supported perhaps by quantitative results) surrounding – (1) what kind of semantic / stylistic variations are missing in the base-dataset and (2) whether ALIA explicitly counters those scarcities by introducing relevant edits. While the examples provided for the contextual bias settings in Figure 7 help highlight how over-represented settings are pruned (and under-represented ones are highlighted), a discussion surrounding all different settings would significantly strengthen the submission. - Following up on the previous point, in addition to above, including an analysis informing the extent to which one needs to augment an existing dataset would be useful as well – is it necessary to consider an expansion of 20-100% in most settings? It may not matter as much for small-scale settings, but for someone intending to build on top of ALIA, it may be useful to know if one runs into a “diminishing return” scenario for different datasets at any point – i.e., if gains obtained by adding more diverse samples become increasingly marginal beyond a point. 
- L215-216 states that domain descriptions for the background images from the test-set were used to generate augmented samples in iWildCam. Coincidentally, iWildCam is the only setting where +ALIA > +Real. Since assuming access to the specific “kind” of target domain variations during training is not entirely fair (one does not know the test-time variation a priori), the paper would benefit from relaxing claims surrounding “beating real data” for this specific setting. - For the contextual bias experiments, it might be beneficial to consider settings where spurious correlations have been studied heavily (for instance, the NICO [A] benchmark). Not only would that involve a more diverse “base” dataset, but it would also help compare with more sophisticated algorithms (a subset of which are designed to explicitly counter spurious correlations). This is motivated by the fact that supplementing with “targeted” data may not always be the best solution in contextual bias settings. [A] – Towards Non-I.I.D. Image Classification: A Dataset and Baselines Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The points outlined under strengths and weaknesses influence my rating. Regarding weaknesses, my suggestions are intended more towards improving completeness and significance of the results presented in the current submission. Among these, I think (1), (3) and (4) are crucial weaknesses which are perhaps central to the claims of the paper. Addressing these would definitely help me in improving my rating of the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: The only potential negative societal impact that I foresee is that models trained on ALIA augmented datasets can “unintentionally” inherit biases present in the base models (LLM, Diffusion) in the ALIA pipeline. While this may not matter as much for the datasets being considered for experiments in the submission, it is perhaps crucial for other work attempting to build on top of ALIA for sensitive situations. Adding a discussion (with obvious disclaimers) would improve the current submission. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
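The three-stage pipeline summarized in this review (caption + LLM domain descriptions, language-guided editing, filtration) can be sketched as follows. This is a hedged illustration of the control flow only: every callable is a hypothetical stand-in for the captioner, LLM, diffusion editor, and filter, not the authors' actual implementation.

```python
from typing import Callable, Iterable, List

def alia_pipeline(
    images: Iterable[str],
    caption: Callable[[str], str],                        # stage 1: image captioner
    summarize_domains: Callable[[List[str]], List[str]],  # stage 1: LLM summary of captions
    edit: Callable[[str, str], str],                      # stage 2: language-guided editor
    keep: Callable[[str], bool],                          # stage 3: filtration
) -> List[str]:
    """Sketch of the three ALIA stages; callables are placeholders."""
    images = list(images)
    # (1) caption every training image, then summarize into domain prompts.
    captions = [caption(img) for img in images]
    prompts = summarize_domains(captions)
    # (2) generate edits of each image for each domain prompt.
    edits = [edit(img, p) for img in images for p in prompts]
    # (3) keep only edits that pass the quality filter.
    return [e for e in edits if keep(e)]

# Toy stand-ins just to exercise the control flow:
out = alia_pipeline(
    ["img_a", "img_bad"],
    caption=lambda img: f"a photo ({img})",
    summarize_domains=lambda caps: ["in a forest", "near water"],
    edit=lambda img, p: f"{img} {p}",
    keep=lambda e: "bad" not in e,
)
print(out)  # ['img_a in a forest', 'img_a near water']
```

Swapping the lambdas for a real captioner, an LLM prompt-summarization call, and a diffusion editing pipeline would recover the structure the review describes.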
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and suggestions, we hope to have addressed each of your concerns with the following: **More clarity around domain shift.** We agree that more clarity around what each domain shift is, in addition to quantitative evidence that ALIA improves accuracy under this domain shift, is crucial to showing the efficacy of our method. Below we describe the shifts in each dataset (if available) as well as evidence that ALIA improves performance in each of these cases. Note that Cub2011 has no explicitly defined domain shift. * *iWildCam:* This dataset is constructed such that the locations of the camera trap differ from train to test; specifically, the locations containing (1) a lake in the distance or (2) a dirt trail with trees are two locations not present in the training set. A sample of images from these locations is included in the global rebuttal PDF. We see that our prompt generation technique does produce domain descriptions that describe the test domain, such as “a camera trap photo of a {} near a large body of water in the middle of a field.”, “a camera trap photo of an {} in a forest in the dark.”, and “a camera trap photo of a {} walking on a dirt trail with twigs and branches.” We further see in Figure 4 of the paper that when evaluating on these two locations, we get high performance compared to other approaches. * *Airbus VS Boeing:* The domain shift in this dataset is the existence of Airbus planes on grass and Boeing planes on road in the test set. More explicitly, examples considered in-domain are Airbus on road, Boeing on grass, Airbus in the sky, and Boeing in the sky, which appear in both the training and test set. An exact breakdown of the number of samples in each group is given in the Appendix.
As shown in the global response and PDF, our method is able to improve on the in-domain performance of all augmentation methods, while also beating the baseline and traditional data augmentation methods on the out-of-domain performance. * *Waterbirds:* as described in the global rebuttal, the domain shift here is the presence of Landbirds on Water and Waterbirds on Land in the test set. As shown in the global response and PDF, our method is able to roughly match the in-domain accuracy of the other methods while drastically improving out-of-domain accuracy over all other augmentation methods. **Amount of Generated Data VS Accuracy.** Although we did not have time to investigate this for all datasets, we were able to experiment with how accuracy changes with the amount of generated data added on Cub2011. As shown in the results (contained in the global rebuttal PDF), ALIA is able to achieve accuracy gains up to 1000 images (20% of the original training set), at which point accuracy starts to decline. In contrast, images generated from text alone see a decrease in accuracy almost immediately, reaching below the accuracy of the original training set at 2000 images. We will work to get similar plots for the other datasets by the camera ready. **Claims of beating real data.** We agree that we should provide more context to the claim of ALIA beating real data, especially considering that we do have knowledge of the test domain. We will update our manuscript accordingly. **Spurious correlation datasets.** While we agree that NICO is a more diverse benchmark that simulates real world domain shifts, we chose not to use it in our evaluation because the images seemed as though they could be easily mimicked with Stable Diffusion. In these cases, the best approach would likely use Txt2Img data, as it is more diverse than image editing. --- Rebuttal Comment 1.1: Title: Thanks for the response! Comment: Thanks for responding to my concerns and providing additional experimental results.
My primary concerns surrounding the shifts ALIA tackles and the impact of the amount of generated data were adequately addressed by the rebuttal. Most concerns from other reviews seem to have been addressed (with supporting discussion as well) adequately as well. I would encourage the authors to include (additional) discussions from the reviews in the revised version. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are glad that our rebuttal resolved the concerns you had. Thank you for the suggestion of analyzing the amount of generated data added; we think it has provided much more clarity on our method.
Summary: This paper explores improving robustness to variations in domains with the use of modern large vision and language models. They propose ALIA (Automated Language-guided Image Augmentation) which generates automated descriptors of data domains and augments training data with language-guided image editing. They use a model trained on the original dataset as a quality filter to ensure class-relevant information is maintained. They show this approach leads to nice gains on fine-grained datasets or data with significant clutter, for both classification and detection. After reading the authors' rebuttal and discussing with them, I will maintain my score of 7. Strengths: It’s clever to combine recent large models in this way, using image captioning to generate appropriate background or context caption templates based on what is seen in the dataset, and allowing language-guided editing to provide diversity in a more realistic manner than randaug or cutmix based on those templates. In particular, the use of semantic features and visual features (via classifier confidence) to filter failures both where the image is only minimally changed or where the image is changed in a way that corrupts class information is quite nice. The experiments, across several datasets and dimensions of challenges including domain shift and contextual bias, were nicely done and visualizations were clear and informative. Weaknesses: Since the domain descriptions are built from the existing dataset, it seems that this method may not work as well for cases where the dataset in question is not representative of all potential domains seen in practice, or where the dataset has significant bias or gaps. Additionally, since the diverse augmented data is conditioned on training examples and the category in question often is minimally changed, there is still limited diversity for rare species with few examples during training, particularly rare pose diversity.
It would be nice to explore this more deeply on a large-scale fine-grained dataset with significant imbalance, such as iNaturalist. The biggest weakness of this paper is the lack of clarity and transparency about some of the data choices made. In particular, it wasn’t clear what protocols were used when selecting subsets of iWildCam for train and test. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Could performance be further improved by including domain descriptors for test data (assuming access to test data at inference time, but not test labels) for cases with domain shift between train and test? In some cases, context is highly important and valuable species identification information for human experts, and certain species would never be seen, e.g., perched on a birdfeeder, out at night, or in the water. Have you considered learning class-appropriate subsets of language templates per-species, to reduce potentially confusing and unrealistic generated context? I noticed that you considered only a subset of iWildCam classes and camera locations; what was the reasoning behind the choice of subsampling? Did you artificially balance the dataset at all (i.e., removing rare categories or capping common ones) or did you keep a realistic shift in subpopulation distribution across different domains? How were the real test-domain images sampled for comparison: uniformly across the test camera sites and categories of interest, or otherwise? Did you make sure that the test data was identical when comparing (i.e., removing the added test data from evaluation for all models)? It would be good to be very explicit about all of these choices whenever you use a non-standard split of a dataset. Does performance ranking across baselines change if we consider different metrics? For example, if we consider top-1 accuracy or break down performance per class on iWildCam instead of looking at the macro F1? Does this method improve more for rare classes or common ones?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations section is well done, and points out several clear potential failure modes of this method. More detailed analysis of failure cases would be appreciated. For which classes does this method help, and for which does it hurt? Are there any consistent patterns in data that the model finds difficult to predict accurately, even after augmentation? It would help the reader build additional intuition for where gaps remain and in which cases this method is ready for deployment. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
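The two failure-mode filters this review highlights (minimal edits and class-corrupting edits) could be expressed as a per-edit decision like the sketch below. The thresholds, signature, and exact rule are illustrative assumptions, not the paper's specification.

```python
def keep_edit(similarity_to_original: float,
              confidence_on_true_class: float,
              predicted_class: str,
              true_class: str,
              max_similarity: float = 0.95,
              min_confidence: float = 0.5) -> bool:
    """Drop an edit if (a) it is nearly identical to the source image
    (a minimal edit), or (b) a classifier trained on the original data
    no longer confidently recognizes the class (a corrupted edit)."""
    if similarity_to_original >= max_similarity:
        return False  # failure mode 1: edit barely changed the image
    if predicted_class != true_class or confidence_on_true_class < min_confidence:
        return False  # failure mode 2: class information corrupted
    return True
```

In practice, the similarity would come from a semantic feature comparison of the edit against its source image and the confidence from the original-data classifier; both thresholds here are placeholders.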
Rebuttal 1: Rebuttal: We thank you for your kind words about our work! We agree that ALIA is not suitable for cases where there are unseen domains which are significantly different from the domains seen in training, or cases where the class-relevant features need to be significantly augmented (e.g. pose change). As we will detail below, our subset of iWildCam does include class imbalance, but since the animals are mostly well represented by Stable Diffusion, we are unsure of how well ALIA can handle class imbalance for rare species. Due to time constraints we were not able to test ALIA on iNaturalist, but we will strive to have results for the camera ready. **Including Domain Descriptors for Test Data.** Yes! For our iWildCam experiment, we did exactly this: took the unlabeled background images for the new domains and used those to generate the domain descriptions. This is an especially useful tool in cases of a large domain shift like sim-to-real transfer, and the user can also explore using their own constructed prompts if they anticipate a particular test domain but have no data for it. **Class-Specific Domain Descriptions.** This is another great idea that we have been exploring. For instance, one could generate domain descriptions per class and use these descriptions to generate diversity within a class, or inversely could use these to uncover potential biases in the dataset and direct the augmentation to a subset of classes. There are a lot of interesting avenues around selecting which images to augment with which prompts, and we see this as exciting future work. **iWildCam Subset Construction.** We apologize for not being clearer on how and why this dataset was constructed. We chose to construct a subset of iWildCam because (1) the experimentation and generation time for the entire WILDS dataset was prohibitively expensive and (2) some classes didn’t have enough examples for us to split into a train, val, test, and extra set.
While we did not artificially balance the classes, we did want classes which would have at least 40 examples in each split. In an effort to keep this constrained setting as close to the original iWildCam dataset as possible, we constructed the train/val/test/extra splits such that each split contains non-overlapping locations and we did not subsample within locations to balance the class support. Since iWildCam has a train, id_test, and test split already, we sampled the locations with at least two classes to put into our subset, ensuring that the train subset was sampled from the train set, the val subset was sampled from the id_test set. We split the sampled locations from the iWildCam test set into two disjoint groups, which formed the test set and extra set for our subset. These new splits were set before experimentation, so all methods were trained, validated, and evaluated on the same data. We have included a breakdown of the class counts of each split below. We will also include this description in our final manuscript. Please let us know if you have any further questions surrounding the construction of this dataset. | Split | Background | Cattle | Elephant | Impala | Zebra | Giraffe | Dik-Dik | | :--- |:----: |:---: | :----: | :---: | :---: | :----: | :---: | | Train | 2210 | 801 | 366 | 981 | 720 | 460 | 514 | | Val | 2006 | 206 | 53 | 140 | 119 | 58 | 244 | | Test | 127 | 1416 | 2003 | 4553 | 144 | 47 | 213 | | Extra | 402 | 506 | 91 | 625 | 85 | 47 | 468 | **iWildCam Metrics.** Below we include the accuracy for our iWildCam experiments, which again shows that our method outperforms all baselines as well as real data. 
Since we do not add in the same number of images per class in our experiment, it would be hard to draw useful conclusions on which classes our method does well for, but our intuition is that the edited data for classes which Stable Diffusion cannot recreate well will likely be of lower quality even after filtering, and thus may result in worse performance. We will strive to provide further quantitative analysis on the failure modes of ALIA by the camera ready. | Method | Accuracy | | :--- | :---: | | Baseline | 67.51(6.15) | | +CutMix | 77.56(4.77) | | +RandAug | 72.97(3.87) | | +Txt2Img | 75.59(3.36) | | +ALIA | **84.87(1.92)** | | +Real | 74.10(3.37) | --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: Thank you to the authors for the clarifications on data subset construction. This line of research on improving specialized OOD performance with augmentation from generated images is distinct from the more common in-distribution challenge presented by datasets like ImageNet. This work demonstrates a nice step forward in exploring what works well for OOD diversification, despite restricting the realism and possibly usefulness of the test case by focusing on common categories that are well represented in Stable Diffusion (I would still be very interested in seeing iNaturalist results in the camera ready). Because of this, I will keep my score of 7 for this work. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are glad that our rebuttal resolved your questions. We are currently working on implementing ALIA for iNaturalist and will strive to have the results for the camera ready!
Rebuttal 1: Rebuttal: We thank all reviewers for their detailed reviews. We are delighted that they found the paper easy to follow and saw value in our discussions surrounding failed edits and different data augmentation techniques. As requested, we have run a few more experiments to ensure a comprehensive evaluation. We have plotted the resulting data tables and included all plots in the attached PDF. ### Waterbirds [Contextual Bias] Waterbirds [1] is a synthetic dataset that introduces contextual bias by taking species of landbirds and waterbirds from the CUB-200 [2] dataset and pasting them onto forest and water backgrounds from the Places [3] dataset. For the training and validation sets, all landbirds appear on forest backgrounds and all waterbirds appear on water backgrounds, while the test set has an even representation of backgrounds and bird types. Further experimental details are contained in the PDF. As shown below and in Figure 1 of the PDF, ALIA is able to roughly match the in-domain accuracy of the other methods while drastically improving out-of-domain accuracy over all other augmentation methods. Note that we don't bold the +Real numbers in the table because it is considered an oracle baseline. | Method | ID Accuracy | OOD Accuracy | Class Balanced Accuracy | | :--- | :----: | :---: | :----: | | Baseline | 97.34(0.23) | 27.14(3.06) | 62.24(1.58) | | +CutMix | **98.01(0.33)** | 28.84(5.29) | 63.42(2.49) | | +RandAug | 97.61(0.51) | 30.32(6.31) | 30.32(6.31) | | +Txt2Img | 97.61(0.11) | 29.66(3.81) | 63.64(1.92) | | +ALIA | 96.17(0.47) | **46.63(3.96)** | **71.40(1.84)** | | +Real | 91.45(0.45) | 89.40(0.49) | 90.43(0.47) | [1] Sagawa, et al. "Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization" [2] Wah, et al. "The Caltech-UCSD Birds-200-2011 dataset" [3] Zhou, et al.
"Places: A 10 million image database for scene recognition" ### Number of generated images VS accuracy on Cub2011 While our main experiments restrict the number of generated images added to the dataset, one of the benefits of diffusion generated augmentation is that one can create infinite amounts of data. Figure 2 in the PDF depicts the accuracy on CUB as a function of the number of generated images added (200/400/1000/2000/4000), where the grey line is the baseline accuracy on the original 4994 images. ALIA is able to achieve accuracy gains up to 1000 images (20% of the original training set), at which point accuracy starts to decline. In contrast, images generated from text alone see a decrease in accuracy almost immediately, reaching below the accuracy of the original training set at 2000 images. We suspect this is because much of the text to image data is from a different distribution than the regular training data, and thus adding small amounts can increase robustness while large amounts cause the model to overfit to this distribution. Since image to image edits use language to edit the training images directly, these images are less likely to be out of distribution as compared to the text to image data. ### Airbus VS Boeing ID/OOD accuracy breakdown In order to show that ALIA does improve accuracy on the unseen domain, we break down the results from Figure 6 of the main paper into in-domain accuracy and out of domain accuracy. Note that we don't bold the +Real numbers in the table because it is considered an oracle baseline. As shown below as well as in Figure 3 of the PDF, ALIA is able to improve on the in-domain performance of all augmentation methods while beating the baseline and traditional data augmentation methods on the out-of-domain performance. 
| Method | ID Accuracy | OOD Accuracy | Class Balanced Accuracy |
| :--- | :---: | :---: | :---: |
| Baseline | 85.79(1.99) | 31.68(0.60) | 64.64(0.75) |
| +CutMix | 85.96(1.30) | 32.11(1.50) | 64.96(1.35) |
| +RandAug | 87.23(1.07) | 30.17(0.09) | 64.87(0.65) |
| +Txt2Img | 81.52(1.45) | **41.62(0.21)** | 65.85(1.12) |
| +ALIA | **88.11(0.84)** | 36.20(0.36) | **68.84(0.76)** |
| +Real | 87.07(1.16) | 44.34(2.25) | 71.81(1.29) |

Pdf: /pdf/f20962a0fc27e93b6291650022af971849ef5139.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a data augmentation method. It first leverages advanced image captioning and large language models to generate prompts. Next, it uses diffusion-based models to generate more images. In addition, it trains a model to filter out failed or overly similar cases during image generation. Extensive experiments over 3 specialised tasks demonstrate the effectiveness of the proposed method. Strengths: 1. This paper is generally clear and easy to follow 2. The proposed method is simple and technically sound 3. The proposed method effectively improves classification performance on the three reported tasks Weaknesses: This paper studies images generated by diffusion models on "specialised" classification tasks. However, images generated by diffusion models have already been shown to be effective at improving "general" classification tasks [8, 35]. Therefore, the novelty and contribution of the proposed method seem incremental. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. It has been shown in [8, 35] that images generated by diffusion models can help improve image classification performance. The authors claim that "we use diffusion models to do image editing with text rather than generating images from text alone, resulting in augmentations that closely resemble the training data without finetuning." However, stable diffusion models (text2image models) should be able to generate more diverse data than text-editing diffusion models (e.g. InstructPix2Pix). Why, then, are text-editing methods better? Experimental results comparing data augmented via text2image and text-editing methods are expected. 2. This paper aims to improve performance on specialised tasks that have very little training data. However, some of these tasks may contain out-of-domain data that may be difficult to generate with diffusion models. A discussion of these cases is expected. 3. 
It is mentioned "We fine-tune a ResNet50 for the CUB [40] and iWildCam [15] datasets and a ResNet18 for the Planes [21] dataset". Why use different networks for different datasets? 4. In the Appendix, the img2img-generated images of Airbus and Boeing look very strange. A discussion of these results is expected. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your review; you raise important questions about how our work differs from the several works that explore using diffusion-generated data to improve "general" classification tasks. We aimed to address this in the introduction of our paper, and we hope to provide more clarity below. **Motivation for Image Editing/OOD data.** We agree that text2image models produce more diverse images than image-editing methods like InstructPix2Pix, and thus in a setting where the training/test images are well represented in the training set of Stable Diffusion, using images generated from text alone is likely to result in bigger gains than edited data. While this is an exciting avenue to explore, we are interested in how well these text-conditioned generative models can do when the task is out-of-domain; that is, when text-to-image models cannot produce images similar to those in the training set. In these settings, it is unclear if/how one can utilize these powerful models to improve accuracy. The key insight of our work is that grounding the augmentation in the training data itself is an effective way to utilize the domain-level knowledge of Stable Diffusion (e.g. background, weather, lighting, etc.) to vary existing images while maintaining the class-relevant features. As shown qualitatively, images generated from text alone look dissimilar to real training images, while image edits result in images that are less diverse but more similar to the types of images seen in this specialized task. On datasets of various sizes, we show quantitatively that this yields an augmentation technique that beats both traditional augmentation strategies and Txt2Img augmentation. **Why use ResNet18 for Airbus vs. Boeing.** We chose ResNet18 because the dataset is rather small and it made the experiments much less compute-intensive. We will update the manuscript with the ResNet50 results for the camera-ready. 
**The generated images for Airbus and Boeing look very strange.** Our hypothesis for why img2img does not work well on the Aircraft dataset is the lack of patterns in the images. Many of the planes are displayed against a uniform blue background and are mostly white themselves. We saw a similar effect when trying to edit simple cartoons, or plane pictures taken randomly from Google. That said, we predict that this problem will go away with better diffusion models, or models trained on more images similar to the plane images. We will add this explanation to the appendix. --- Rebuttal Comment 1.1: Title: general vs specialized Comment: Reviewer u22M, I'm curious how you define the difference between "general" classification tasks and "specialized" classification tasks?
null
null
null
null
null
null
Unbiased Compression Saves Communication in Distributed Optimization: When and How Much?
Accept (poster)
Summary: The authors consider the distributed convex optimization problem in the centralized setting. In this setting, the authors provide new lower bounds on the total communication cost under the assumption that nodes send unbiased independent compressed vectors to a master. They also provide an improved analysis of the current state-of-the-art ADIANA method in the general convex and strongly convex regimes. Strengths: The paper provides two important things to the distributed optimization community: 1. While the upper bounds obtained by the ADIANA and CANITA methods are well known, the authors finally provide lower bounds in the described setting. 2. The improved analysis of ADIANA is very surprising. I checked the proof. I'm not 100% sure, but the improved analysis seems to be correct. I believe that it is an interesting contribution on its own. All in all, I think that this paper deserves to be published in NeurIPS. Weaknesses: However, I would like to point out a very important weakness that I want the authors to fix. The authors compare the total communication cost of compressed methods with the accelerated (Nesterov's) method. They claim that it is possible to improve the complexity by $\min [n, \kappa].$ I agree with this fact but only under the assumption that **the local smoothness constants are equal to the global smoothness constant.** The authors ignore the fact (or I missed it in the text; if so, my apologies) that the local smoothness constants can be $n$ times larger than the global one. The authors define the local $L$-smoothness constant as $$f_i(y) \leq f_i(x) + \langle\nabla f_i(x), y - x\rangle + \frac{L}{2} \|x - y\|^2 \quad \forall i$$ while the non-compressed methods define it as $$f(y) \leq f(x) + \langle\nabla f(x), y - x\rangle + \frac{L}{2} \|x - y\|^2.$$ The $L$-constant of $f_i$ can be $n$ times larger than the $L$-constant of $f$! I want to ask the authors to clarify this clearly in the paper. Ideally, in the abstract and in the contributions. 
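A minimal example of this gap (an illustration we add here under the definitions above, not taken from the submission): take
$$f_1(x) = \frac{n}{2} \|x\|^2, \qquad f_i \equiv 0 \ \text{for } i \geq 2, \qquad f(x) = \frac{1}{n}\sum_{i=1}^n f_i(x) = \frac{1}{2} \|x\|^2.$$
Here $f$ is $1$-smooth, while the local definition must hold for every $f_i$ and is therefore driven by $f_1$, forcing $L = n$.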
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and the valuable question. As the reviewer accurately points out, the convergence analysis presented in our paper is contingent on the assumption about local smoothness constants. When the local and global smoothness constants differ, it remains uncertain whether our assertion regarding the potential communication cost reduction by independent unbiased compressors, on the order of $\Theta(\sqrt{\min\\{n,\kappa\\}})$, remains valid. Despite our earnest efforts, we have been unable to settle this open question within the rebuttal period. Consequently, to address the reviewer's valid concern, we will revise our abstract and contribution sections to explicitly clarify that our claims hold under the assumption that the local smoothness and the global smoothness constants are constrained by a common upper bound $L$. For example, we will revise the statements in Line 18-19 in the abstract as "Our results reveal that using independent unbiased compression can reduce the total communication cost by a factor of up to $\Theta(\sqrt{\min\\{n,\kappa\\}})$ under the assumption that all local smoothness constants are constrained by a common upper bound $L$". Additionally, we will also revise the statements in Line 100-101 in the introduction as "independent unbiased compression can decrease total communication costs by up to $\Theta(\sqrt{\min\\{n,\kappa\\}})$ when all local smoothness constants are constrained by a common upper bound $L$." On the other hand, we hope the reviewer can understand that **it is not uncommon in the distributed optimization literature to assume that all local functions share a common smoothness upper bound**, which can significantly simplify the convergence analysis. 
For example, most algorithms listed in Table 1 in our paper, such as CGD [20], ACGD [25], DIANA [32], ADIANA [25], CANITA [26], and NEOLITHIC [13], assume $L$-smoothness across all local functions. We hope this response resolves the reviewer's concerns. We are looking forward to the follow-up discussion with the reviewer, and are more than happy to address any further comments or questions. --- Rebuttal Comment 1.1: Title: Respond Comment: Thank you for the rebuttal! I didn't mean that you would start reproving everything for the "global smoothness" case. I only kindly asked you to add comments in the paper that the comparison between your results and the classical AGD method is only valid under the assumption "that all local smoothness constants are constrained by a common upper bound." From the rebuttal, it is clear that we are on the same page! The authors addressed all my questions, and I will keep my score. BTW, I've recently found the paper \[1\]. They consider a slightly different setup, bidirectional communication. However, in their Contribution C, they say that they improve CANITA's $1 / \varepsilon^{1/3}$ rate to $\ln 1/ \varepsilon.$ In your paper, you have a gap between the upper bound and the lower bound ($1 / \varepsilon^{1/3}$ vs $\ln 1/ \varepsilon$). It seems that this paper closes that gap, so your lower bound is tight. \[1\]: https://arxiv.org/pdf/2305.12379.pdf --- Reply to Comment 1.1.1: Comment: We really appreciate your follow-up feedback. We'll definitely clarify the smoothness issue in the revision, as stated in our rebuttal. We are also happy to discuss the recent 2Direction paper with the reviewer. As mentioned by its authors, the rate of 2Direction in the low-accuracy regime improves the $\Theta\left(\frac{1}{\epsilon^{1/3}}\right)$ term to $\Theta\left(\log\frac{1}{\epsilon}\right)$. 
However, when simplifying to our setting where $r=0$, $L=L_{\max}$, $\alpha=1$, their dominant term in the high-precision case (namely, when $\epsilon$ is sufficiently small), $\left(1+\frac{\omega^{1/2}}{n^{1/6}}+\frac{\omega^{3/4}}{n^{1/4}}+\frac{\omega}{\sqrt{n}}\right)\frac{\sqrt{L\Delta}}{\sqrt{\epsilon}}$, is worse than $\left(1+\frac{\omega}{\sqrt{n}}\right)\epsilon^{-1/2}$, the major term in our lower bound. Therefore, despite this inspiring work, it remains open to find an algorithm that, in the generally-convex case, can tightly match both the $\ln(1/\epsilon)$ term and the $1/\sqrt{\epsilon}$ term simultaneously. Finally, we thank the reviewer for pointing out this recent work (which came out after our submission) and we will also comment on it in later revisions.
Summary: The authors study the extent to which unbiased compression can reduce the total communication cost of optimizing smooth convex functions. Additionally, they refine the analysis of ADIANA ([25]) to show that the bounds are near-optimal. Strengths: Quantifying how much unbiased compression helps is an interesting topic. While I believe that nearly anyone (that uses compression) works with unbiased compressors in the distributed case, it is interesting to learn by how much it helps compared with biased compressors. Weaknesses: It is not very surprising to me that, without independence, unbiasedness cannot help in general. Further, I do not recall seeing works that do not use independence to get the error cancellation, *other than works that use correlated compression to drive the error lower than that of i.i.d. compressors* (e.g., Correlated quantization for distributed mean estimation and optimization, ICML 2022). Also, the writing is hard to follow and seems contradictory in places. For example, you write, "Compared to lower bounds when using unbiased compression without independence [13], our lower bounds demonstrate significant improvements when n and κ are large". However, the upper and lower bounds of [13] seem to match in your Table 1, while your lower bound seems *weaker* than that of [13]. In general, there cannot be two matching sets of upper and lower bounds. Some of the claims also seem inflated; I don't view $O(\omega \epsilon^{-O(1)})$ as being near-optimal against a lower bound of $\Omega(\omega\log\epsilon^{-1})$ -- the dependency on $\epsilon$ is exponentially worse. The improvement in the analysis of ADIANA is only applicable to some parameter range, as it shaves an additive factor. (As a side note, it took me a minute to understand that by **ADIANA (Ours)** you don't actually mean that it's your *algorithm*, but just a refined analysis.) 
Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Can you please clarify the differences in the asymptotics with [13]? (Also, what is specified in the table seems to be different than in their paper, where they have an additional additive term.) Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: Seems fine Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. We have clarified all questions as clearly as possible, and will be glad to address any further comments or questions. **1. Not surprising that unbiasedness cannot help without independence** We respectfully disagree with this opinion. Much of the literature takes for granted that compression always saves total communication cost, regardless of the relationship between compressors. While the reviewer's intuition is correct, it is essential to provide rigorous proofs, as we do in this paper, to turn intuition into fact. It would be inappropriate to make definitive claims without in-depth mathematical justification. **2. Rare works do not use independence** Please see Point 3 in the "global response". **3. Contradictions in lower bounds** Please note that our lower bound/optimal complexity is proved in a different setup from [13]. Specifically, [13] does not assume the independence of compressors. In contrast, we impose the independence of compressors and only consider the lower bound and convergence of algorithms under this additional assumption. Due to the difference in setups, our results are different from and do not contradict those of [13]. Furthermore, as supported by our upper bound, our lower bounds are nearly tight (at least in the strongly convex case) and thus represent the optimal complexities. Given that ours and [13] are both optimal complexities in their respective setups (with or without independence), our complexities improve upon [13] by factors depending on $n$, $\kappa$, and $\omega$ (also see the response in point 7). This reveals the nontrivial advantage brought by the independence of compressors. We hope this clarifies the reviewer's confusion. **4. Inflated claims** To resolve the reviewer's concern, we will replace the statement "nearly optimal" with "nearly optimal in the strongly convex case" throughout the paper to avoid inflated claims. 
The polynomial gap exists in the generally convex (GC) case, whereas our result in the strongly convex (SC) case is polynomially tight. However, it is worth noting that even with this gap, our result is still the **state-of-the-art** in the GC case. The existence of this tiny gap does not affect the major conclusion of our paper: independent unbiased compression saves communication costs by up to a factor of $\sqrt{\min\\{n,\kappa\\}}$ while unbiased compression alone cannot. The established result in the GC case still outperforms the complexity in [13], by up to a factor of $\sqrt{\min\\{n,\kappa\\}}$, when $\epsilon\lesssim (\frac{1+\omega/\sqrt{n}}{1+\omega})^6$ so that the $\epsilon^{-1/3}$ term is dominated. **5. ADIANA improvement** Our improvement over the existing ADIANA analysis is significant. - First, the additive term $\omega^{3/4}/n^{1/4}\sqrt{\kappa}$ shaved by us is not trivial and can **dominate** in certain regimes. When $\min\\{n,\kappa^2/n\\}\gtrsim \omega\gtrsim n^{1/3}$, the term $\omega^{3/4}/n^{1/4}\sqrt{\kappa}$ dominates $\sqrt{\kappa}$, $\omega\sqrt{\kappa} /\sqrt{n}$, and $\omega$, and thus the rate of ADIANA in [25] becomes $\omega^{3/4}/n^{1/4}\sqrt{\kappa}\ln(1/\epsilon)$. Compared to our complexity in this case, the ratio is $$\frac{\mathrm{ADIANA}[25]}{ \mathrm{ADIANA [Thm3]}} \asymp \frac{ \omega^{3/4}/n^{1/4}\sqrt{\kappa} }{ \omega+(1+\omega/\sqrt{n} )\sqrt{\kappa} }\asymp \min\left\\{\frac{\sqrt{\kappa}}{(n\omega)^{1/4}}, \frac{\omega^{3/4}}{n^{1/4}},\frac{n^{1/4}}{\omega^{1/4}}\right\\}\gtrsim 1.$$ Here $\lesssim$, $\gtrsim$, and $\asymp$ denote (in)equalities that hold up to a numeric constant. Note that the ratio can be as large as $\min\\{\kappa^{3/8}/n^{1/4} ,n^{1/8}\\}$ if the compressors satisfy $\omega\asymp \min\\{\sqrt{n}, \sqrt{\kappa}\\}$. 
Therefore, when $\kappa \gtrsim n$ and $\omega\asymp \sqrt{n}$, our improved rate is faster than the original one by $n^{1/8}$, which is significant as $n$, the number of workers, can be large. - Second, shaving the additive term directly leads ADIANA to tightly match the lower bound in the SC case under independent unbiased compression. To our knowledge, this milestone has not been attained or even explored in the literature. - Third, we establish the first convergence result for ADIANA in the GC case, which the existing analysis does not provide. In addition, our convergence rate outperforms all existing literature. **6. ADIANA (Ours)** We will rephrase it as ADIANA (Thm3). **7. Asymptotic differences with [13]** First, [13] studies stochastic optimization where gradients are noisy with $\sigma^2$-bounded variances (see [13, Assumption 3]). Our work instead considers deterministic distributed optimization where gradients are noiseless, corresponding to the case $\sigma =0$. The results of [13] apply to noiseless gradients by setting the $\sigma$-dependent terms to zero. This is why we removed the $\sigma$-dependent terms in our table. We are not sure what the reviewer means by "asymptotic". We conjecture the reviewer refers to the simplified rates of [13] with the additional $\sigma$-dependent terms removed. In this case, the main difference between ours and [13] lies in the independence of compressors. In [13], compressors are not assumed to be independent, and those complexities also apply to non-iid compressors. 
As we compared in Sec 5 in detail, our rate is smaller than theirs, as justified by the factor $$\frac{(1+\omega)\sqrt{\kappa}}{\omega+(1+\omega/\sqrt{n})\sqrt{\kappa}}=\left(\frac{1}{1+\omega}+\frac{\omega}{1+\omega}\left(\frac{1}{\sqrt{n}}+\frac{1}{\sqrt{\kappa}}\right)\right)^{-1}\asymp\min\\{1+\omega,\sqrt{n},\sqrt{\kappa}\\}.$$ When adopting compressors with $\omega \gtrsim\min\\{\sqrt{n},\sqrt{\kappa}\\}$, our rate is $\min\\{\sqrt{n},\sqrt{\kappa}\\}$ times faster. The improvement mainly comes from exploiting the independence across compressors, while [13], though more generally applicable to correlated compressors, inevitably sacrifices convergence speed. --- Rebuttal Comment 1.1: Title: Can we have your response to our rebuttals? Comment: Dear Reviewer JbF9, The reviewer-author discussion period will end **tomorrow**. Could you please let us know whether our rebuttal has resolved your concerns? If not, could you please point them out so that we can address them as best we can? Thank you very much for your time and effort in reviewing our work. --- Rebuttal 2: Title: Need more clarifications? Comment: Dear reviewer JbF9, We thank you for your valuable comments. We have made detailed responses to address your concerns, but we have not received your replies to our clarifications yet. We thus kindly ask whether our responses have addressed all your concerns. If not, we are more than happy to provide more clarifications. Best, The authors of paper 5699
Summary: This paper investigates the communication savings of unbiased compression in distributed optimization. The main contributions are: i) a communication lower bound for distributed optimization algorithms with (not necessarily independent) unbiased compression; ii) a communication lower bound for distributed optimization algorithms with (independent) unbiased compression; iii) a refined analysis of ADIANA showing the tightness of these bounds; iv) discussions of the importance of independence to the communication savings. Strengths: i) Overall, this paper is well written and easy to follow. ii) The investigated problem is important to distributed optimization, and the results are convincing. Weaknesses: i) Section 1 claims that quantization and sparsification are modeled as unbiased compressors in some papers. But they can be biased too. Please clarify. ii) The error-cancellation explanation of how independence helps unbiased compression makes sense. But, if $x_i$ at different nodes are quite different, do we still need independence to achieve the communication saving? iii) Table 1 considers both convex and strongly convex cases. Are there any difficulties in analyzing the non-convex case? iv) Some bounds in Table 1 are summations of two terms, for example, $\ln(1/\epsilon)$ and $\epsilon^{-1/2}$, $\epsilon^{-1/3}$ and $\epsilon^{-1/2}$, etc. Is it possible to reduce them to one term? v) Which compressors satisfy Assumption 2 (unbiased compressor) and Assumption 3 (independent compressor)? Please comment below the assumptions. vi) The communication saving is discussed via comparing the lower bounds. However, on some datasets some algorithms might perform much better than the lower bounds. In this case, the comparisons would yield different conclusions. Please comment on this. vii) I give a score of 6 at this stage, and would like to change it after the rebuttal period. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: i) Section 1 claims that quantization and sparsification are modeled as unbiased compressors in some papers. But they can be biased too. Please clarify. ii) The error-cancellation explanation of how independence helps unbiased compression makes sense. But, if $x_i$ at different nodes are quite different, do we still need independence to achieve the communication saving? iii) Table 1 considers both convex and strongly convex cases. Are there any difficulties in analyzing the non-convex case? iv) Some bounds in Table 1 are summations of two terms, for example, $\ln(1/\epsilon)$ and $\epsilon^{-1/2}$, $\epsilon^{-1/3}$ and $\epsilon^{-1/2}$, etc. Is it possible to reduce them to one term? v) Which compressors satisfy Assumption 2 (unbiased compressor) and Assumption 3 (independent compressor)? Please comment below the assumptions. vi) The communication saving is discussed via comparing the lower bounds. However, on some datasets some algorithms might perform much better than the lower bounds. In this case, the comparisons would yield different conclusions. Please comment on this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. All questions have been clarified as best we can. We are glad to address any further comments or questions. **1. Section 1 claims that quantization and sparsification are modeled as unbiased compressors in some papers. But they can be biased too.** Thanks for pointing this out. In Section 1 (Line 34~36) we wrote: *In literature [3, 20, 15], these compression techniques are often modeled as a random operator $C$, which satisfies the properties of unbiasedness $\mathbb{E}[C(x)] = x$ and $\omega$-bounded variance $\mathbb{E}[\\|C(x) - x\\|^2] \leq \omega\\|x\\|^2$.* We did not mean that all these compressors are modeled as unbiased ones, and we will address the existence of biased compressors in later revisions to prevent misunderstanding. Specifically, we will add the following comment directly after Line 37: *Besides, some of these compressors can also be modeled as biased estimators with $\mathbb{E}[\\|C(x)-x\\|^2]\leq(1-\delta)\\|x\\|^2$ where $\delta\in(0,1]$ [13,39,40].* **2. If $x_i$ at different nodes are quite different, do we still need independence to achieve the communication saving?** As long as independence holds, the aggregation of compressed messages always benefits from error cancellation because all the cross terms in the expansion of $\mathbb{E}[\\|\frac{1}{n}\sum_{i=1}^n (C_i(x_i)-x_i)\\|^2]$ are zero in expectation, so that $\mathbb{E}[\\|\frac{1}{n}\sum_{i=1}^n (C_i(x_i)-x_i)\\|^2]\leq\frac{\omega}{n^2}\sum_{i=1}^n\\|x_i\\|^2$. As for non-independent compressors, in the worst case the compression error of the aggregated messages can approach the upper bound obtained from Cauchy's inequality, i.e., $\mathbb{E}[\\|\frac{1}{n}\sum_{i=1}^n (C_i(x_i)-x_i)\\|^2]\approx \frac{\omega}{n}\sum_{i=1}^n\\|x_i\\|^2$, which can be greater than that of independent compressors by up to a factor of $n$. **3. 
Difficulties in analyzing the non-convex case.** Unfortunately, in the non-convex scenario we have not found any algorithm that belongs to the considered family and is capable of achieving provable savings in total communication cost. One close work is (MARINA: Faster Non-Convex Distributed Learning with Compression, ICML 2021), which exhibits an extraordinary convergence rate but requires transmitting full (i.e., uncompressed) gradients occasionally, placing it outside the algorithm class we consider. As for ADIANA, it is specially designed for the convex case and may not even converge in the non-convex case. **4. Can summations like $\ln(1/\epsilon)$ and $\epsilon^{-1/2}$, or $\epsilon^{-1/3}$ and $\epsilon^{-1/2}$, etc., be reduced to one term?** Each term can be the dominating one depending on the magnitude of the precision $\epsilon$ and the setup-related constants including $L$ and $\Delta$. For example, in the lower bound of the generally-convex case: \begin{equation} \tilde{\Omega}\left(\omega\ln\left(\frac{1}{\epsilon}\right)+\left(1+\frac{\omega}{\sqrt{n}}\right)\frac{\sqrt{L\Delta}}{\sqrt{\epsilon}}\right), \end{equation} the $\ln(1/\epsilon)$ term can dominate when the parameters satisfy $\omega=\sqrt{n}\gg L\Delta/\epsilon$, and the $1/\sqrt{\epsilon}$ term can dominate in a high-precision regime where $\frac{\ln(1/\epsilon)}{1/\sqrt{\epsilon}}$ is sufficiently small. In the upper bound of CANITA in the generally-convex case: \begin{equation} \mathcal{O}\left(\omega\frac{\sqrt[3]{L\Delta}}{\sqrt[3]{\epsilon}}+\left(1+\frac{\omega^{3/4}}{n^{1/4}}+\frac{\omega}{\sqrt{n}}\right)\frac{\sqrt{L\Delta}}{\sqrt{\epsilon}}\right), \end{equation} the $1/\sqrt[3]{\epsilon}$ term can dominate when $\omega=n^{1/3}\gg \left(\frac{L\Delta}{\epsilon}\right)^{1/6}$, and the $1/\sqrt{\epsilon}$ term can dominate in a high-precision regime where $\epsilon$ is sufficiently small. 
Therefore, these summations should not be reduced to one term without further conditions. **5. Which compressors satisfy Assumption 2 (unbiased compressor) and Assumption 3 (independent compressor)?** We will add the following comment below the assumptions: *In fact, many compressors in the literature satisfy Assumption 2, see, e.g., standard dithering [3, Lemma 3.1], natural compression [14, Thm 3], natural dithering [14, Thm 8], ternary quantization [48]. As long as the worker-associated compressors have unrelated sources of randomness (e.g., using different seeds), they further satisfy Assumption 3.* We also refer the reviewer to Point 3 in the "global response", where we show examples in which independent compressors become dependent when using the same random seed. **6. The communication saving is discussed via comparing the lower bounds. However, on some datasets some algorithms might perform much better than the lower bounds. In this case, the comparisons would yield different conclusions.** As described, our lower bounds consider the worst case over objective functions satisfying $L$-smoothness and convexity. Thus our lower bounds provide insights and lay down the standard for comparing theoretical convergence rates that rely merely on $L$-smoothness and convexity. We will comment on this under Theorem 1: *It is worth noting that theoretical results can differ from practical observations due to the particularities of certain datasets; such datasets may exhibit additional structure with which algorithms enjoy faster convergence than what is theoretically justified without any additional conditions. However, such studies are beyond the scope of our work and we leave them for future work.* We thank the reviewer again for the careful and valuable comments. We hope these responses clarify the reviewer's questions. 
We are looking forward to the follow-up discussion with the reviewer, and are more than happy to address any further comments or questions. --- Rebuttal Comment 1.1: Comment: I have read the response and increased my score to 7. --- Reply to Comment 1.1.1: Title: Thanks very much for raising the score Comment: We are very happy that your concerns have been addressed. Thanks very much for your valuable feedback and comments. --- Rebuttal 2: Title: Need more clarifications? Comment: Dear reviewer MRty, We thank you for your valuable comments. We have made detailed responses to address your concerns, but we have not received your replies to our current clarifications yet. We thus kindly ask if our responses have addressed all your concerns. If not, we are more than happy to provide more clarifications. Best, The authors of paper 5699
Summary: The paper analyzes the communication cost of unbiased compression algorithms for distributed optimization and proves that unbiased compression alone cannot reduce communication cost, since whatever is saved via compression is lost due to an increase in the number of communication rounds needed for convergence. The authors then show that if, in addition to being unbiased, the compressors are also independent, then the lower bound on the number of communication rounds is reduced by a factor of $\Theta(\sqrt{\min(n, \kappa)})$ where $n$ is the number of nodes and $\kappa$ is the condition number of the function being minimized. They then improve the convergence analysis of ADIANA, an existing algorithm for communication compression in distributed optimization, and show that the lower bounds can be matched up to a log factor. Simulations on distributed least squares and distributed logistic regression corroborate their analysis and validate their claims. Strengths: 1. The authors identify an important gap in the existing literature on communication compression, where emphasis is primarily laid on the unbiasedness of compressors without also considering the effect of independence. They systematically build and explain their analysis, providing theoretical proofs for all their claims. The theoretical analysis coupled with the clear explanation can make this an important and impactful addition to the literature. 2. Considering the total communication cost (per-round cost x number of rounds) makes the analysis more realistic and complete than previous works which just show savings in either per-round cost or in the number of rounds. Moreover, by proposing a general lower bound for all distributed optimization algorithms with unbiased and independent communication compression, this work clarifies the target communication cost for all such communication compression algorithms, which can make them easier to evaluate and analyze theoretically. Weaknesses: 1.
The writing is a bit heavy on notation and the analysis may be hard to follow for readers unfamiliar with the literature, especially since there is no clear example or intuition provided in the main paper to indicate why independence may help. 2. It is a bit unclear why ADIANA is chosen for analysis. The transition from deriving a general lower bound to analyzing a specific algorithm feels a bit abrupt. If possible, it would be good to provide some intuition for the kind of properties that would make a communication compression algorithm likely to be close to the lower bound, thereby justifying the choice of ADIANA for analysis. This is a minor point however, as I understand that such an intuition may not be available at this time, so it is okay if that is the case. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. To address the first point under weaknesses above, I would recommend moving the content of Appendix A, which provides intuition on why independence helps, to the main paper. Algorithm 1 can be moved to the Appendix to make space, since it appears to be just a reproduction of the ADIANA algorithm from [25]. 2. Can you also provide an example of a compression scheme where the compressors are unbiased but not mutually independent and provide some intuition for its sub-optimality or show it through any of the experiments in Section 6? 3. Were any changes made to the ADIANA algorithm at all in your analysis or experiments or was the algorithm the same as that in [25] everywhere? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have identified the major limitation of the analysis being limited to convex functions only and have marked that as an issue to be addressed in future work which is fine with me. I have identified a couple of more minor limitations under Weaknesses above and look forward to the authors' responses to those. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and valuable comments. Below are our answers to the concerns you proposed. **1. I would recommend moving the content of Appendix A which provides intuition on why independence helps to the main paper. Algorithm 1 can be moved to the Appendix to make space, since it appears to be just a reproduction of the ADIANA algorithm from [25].** Thanks for the great suggestions! We put the intuition argument in the Appendix mainly because of the page limit. We will re-organize the paper as the reviewer suggested in the revision. **2. Can you also provide an example of a compression scheme where the compressors are unbiased but not mutually independent and provide some intuition for its sub-optimality or show it through any of the experiments in Section 6?** - A proper example can be the scaled random sparsification where each worker randomly chooses $s$ entries out of the total $d$ entries, scales their values to guarantee unbiasedness, and transmits them to the server; see Appendix B in our paper for reference. When each worker samples $s$ entries randomly and independently, the compressors are mutually independent and unbiased. When all workers **share the same random seed** to sample $s$ entries, the compressors are unbiased but correlated with each other. - According to the above example, it is evident that if compressors are independent, then the indices of transmitted entries across all workers are likely to be uniformly diverse, so the server can observe more entries and the compression-incurred information distortion is milder. In contrast, when using the same random seed, all compressors will sample entries with the same indices, and the server will observe much fewer entries in this scenario. This explains the intuition behind the sub-optimality of mutually dependent unbiased compressors.
- The sub-optimality of the dependent case can be viewed through Lines 342~347 in Section 6, where we can see clearly through Figure 1 that ADIANA with random-$s$ compressors with shared randomness behaves much worse than ADIANA with identical independent compressors. - Unbiased compressors with the same seed are still widely used in practice due to their compatibility with the All-Reduce operation, a highly effective protocol for distributed gradient aggregation used by default in PyTorch and TensorFlow; see Point 3 in the "global response" for more details. For example, unbiased compressors with the same seed are used in references [R1] and [R2] listed below. [R1] T. Vogels et al., PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization, arXiv:1905.13727 [R2] C. Xie et al., CSER: Communication-efficient SGD with Error Reset, arXiv:2007.13221. **3. Were any changes made to the ADIANA algorithm at all in your analysis or experiments or was the algorithm the same as that in [25] everywhere?** At the level of algorithmic design, the only difference in ours is to remove the proximal mapping, as we consider the smooth case. However, **the improvement in the analysis is significant**. We adopt effective choices of parameters (such as $\theta$, $\alpha$, $\beta$) to obtain an improved rate and thus attain optimality in the strongly convex case. Furthermore, the analysis in the generally convex case is brand new; to our best knowledge, no convergence result of ADIANA exists in the generally convex case. **4. Justifying the choice of ADIANA to achieve the lower bound** We choose to improve upon ADIANA mainly because **its previous convergence rate (in the strongly convex case) is the closest to our lower bound among all existing algorithms**.
Another intuition is that DIANA-type algorithms are known to **benefit from the independence of compressors** in the sense that they provably outperform the best achievable rate proved by [13] when using non-independent compressors. While we believe there can be other algorithms with similar performance, we simply choose to improve upon ADIANA to verify the tightness of our lower bounds, which is important in addressing the *how much* question we are concerned with. We thank the reviewer again for the careful and valuable comments. We hope these responses can clarify the reviewer's questions. We are looking forward to the follow-up discussion with the reviewer, and are more than happy to address any further comments or questions. --- Rebuttal Comment 1.1: Title: Re Comment: Thank you for the detailed response. The example and intuition provided for unbiased but dependent compressors makes sense to me and I think adding these points to the paper would definitely help readers understand why incorporating independence can lead to tighter bounds and lower communication cost in practice. I do not have any other questions or concerns and as I have already recommended acceptance, I will keep my score. --- Reply to Comment 1.1.1: Title: Thanks very much for the follow-up comments Comment: We are very happy that your concerns have been addressed. Following your suggestions, we will add these examples and intuitions to the main paper in the revision. Thanks very much for your valuable feedback and comments.
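The shared-seed versus independent contrast for scaled random-$s$ sparsification discussed in this thread can be made concrete with a minimal Python sketch; the function and parameter names ($d$, $s$, $n$, `rand_s_sparsify`) are ours, chosen only for illustration:

```python
import random

def rand_s_sparsify(x, s, rng):
    """Scaled random-s sparsification: keep s of the d coordinates chosen
    uniformly at random and rescale by d/s, so that E[C(x)] = x (unbiased)."""
    d = len(x)
    kept = rng.sample(range(d), s)
    out = [0.0] * d
    for i in kept:
        out[i] = x[i] * d / s
    return out

d, s, n = 100, 5, 20
x = [1.0] * d  # same local vector on every worker, for illustration

# Independent compressors: each worker draws its own coordinates.
indep = [rand_s_sparsify(x, s, random.Random(w)) for w in range(n)]
seen_indep = {i for c in indep for i in range(d) if c[i] != 0.0}

# Shared-seed (dependent) compressors: every worker picks identical indices.
shared = [rand_s_sparsify(x, s, random.Random(123)) for _ in range(n)]
seen_shared = {i for c in shared for i in range(d) if c[i] != 0.0}

# With a shared seed the server observes exactly s distinct coordinates;
# with independent seeds it observes many more (about d*(1-(1-s/d)^n)
# in expectation), so the information distortion is milder.
print(len(seen_shared), len(seen_indep))
```

Unbiasedness holds in both cases, since each coordinate survives with probability $s/d$ and is rescaled by $d/s$; independence only changes how the surviving indices align across workers.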
Rebuttal 1: Rebuttal: We thank all the reviewers for their careful review and valuable feedback. We have addressed each question raised by the reviewers in the separate rebuttals below and are glad to address any further concerns. As several matters have been brought up by multiple reviewers, we present a global response to these concerns below. **1. Paper organization** We thank all the reviewers for suggestions on reorganizing some parts of our manuscript and adding comments or examples for further details. However, the page limit imposes an unavoidable sacrifice of content that we are eager to present in the main text. As a result, we have to present details of important concepts and theorems and refer the rest (e.g., typical examples, standard concepts, customary definitions) to our **Appendix in the Supplementary Material**. We will reorganize our main text by carefully following the advice provided by the reviewers (e.g., making the modifications stated in the separate rebuttals) once more space is allowed when preparing the camera-ready draft. **2. Concerns related to the lower bounds.** We find that some concerns of the reviewers are about the understanding of lower bounds. To precisely understand our results, we remark that the lower bound complexity should always be interpreted in the sense of **the limit of best-performing algorithms in convergence (or communication costs) when facing the worst-case instances under pre-specified assumptions on objectives and compressors**. In other words, we aim to investigate the theoretically best-achievable performance under the pre-specified assumptions on objectives and compressors. As a result, the conclusion naturally binds to the setup specified by those assumptions. Once the setup is modified (e.g., the compressors or objective functions are assumed to further enjoy certain properties), the corresponding **worst-case** instance and the best achievable performance also vary.
Consequently, when new assumptions are introduced, some algorithms can surpass the present lower bounds, which does not contradict our established results since they are under different setups. On the other hand, the communication saving based on the lower bounds concerns the worst cases under **the exact settings in our paper**, and any additional assumption on the problem will change the setting and hence may affect our conclusions. **3. Concern on the use of correlated compressors. Rare works do not use independence.** We respectfully disagree with the opinion that "rare works do not use independence". Non-IID unbiased compressors can offer practical advantages over IID compressors in many scenarios. Specifically, non-IID random sparsification compressors are more compatible with the **All-Reduce** operation, which is a highly effective protocol for distributed gradient aggregation used by default in PyTorch and TensorFlow. However, IID sparsification compressors output compressed gradients with indices of non-zero entries poorly aligned across workers, hindering the efficacy of all-reduce, as illustrated in Fig. 2 in the pdf attached to this "global response". In fact, as noted in [R1, page 7], **gradient compression provides limited benefits if it is not compatible with all-reduce**. To enable unbiased compression to work with all-reduce, compressors across workers must share the same random seed, making them non-IID, as shown in Algorithm 4 of the well-known PowerSGD paper [R2]. In addition, the CSER algorithm also uses a shared random seed across compressors [R3, Page 5]. [R1] S. Agarwal et al., *On the Utility of Gradient Compression in Distributed Training Systems*, arXiv:2103.00543 [R2] T. Vogels et al., *PowerSGD: Practical Low-Rank Gradient Compression for Distributed Optimization*, arXiv:1905.13727 [R3] C. Xie et al., *CSER: Communication-efficient SGD with Error Reset*, arXiv:2007.13221. **4.
More figures** We provide two additional figures related to our rebuttal in the attached pdf. **Figure 1.** In Fig. 1, we display the results of an additional experiment, where we compare ADIANA with three different compressors and the non-compression Nesterov's accelerated algorithm using a more practical dataset, CIFAR-10. **Figure 2.** In Fig. 2, we illustrate why random sparsification compressors with shared randomness are more compatible with All-Reduce than those with independent randomness. We thank all the reviewers again for the valuable comments. We are looking forward to the follow-up discussion, and are more than happy to address any further comments or questions. Pdf: /pdf/4b94cb1bb5ac7bcabc410ee4c3ed97107acfc10e.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: Several communication compression strategies have been proposed in the last few years to improve distributed optimization. The authors consider the tradeoffs between lowering the communication costs per round vs the number of communication rounds and understand how they affect total communication costs. They theoretically formulate this and prove that under some assumptions, unbiased compression might not save total communication. However, if the compressors used by all workers are further assumed independent, they prove that total communication cost can be decreased. They also prove lower bounds on certain classes of algorithms by refining the analysis for ADIANA. Strengths: The results are provably improved. The presentation is clear and easy to follow. Weaknesses: The lower bounds are proved only for certain classes of algorithms. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: It would be better if some experiments from the appendix were moved to main body. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments. All questions have been clarified below. **1. The lower bounds are proved only for certain classes of algorithms.** We agree that our lower bounds only apply to the class of algorithms described in Section 2.3. However, these settings are broad enough to cover most first-order algorithms in the literature, e.g., [38, 25, 20, 61], as well as all algorithms listed in Table 1. Therefore, our established lower bounds remain valuable and sufficient to address questions Q1 and Q2 raised in the introduction. **2. It would be better if some experiments from the appendix were moved to main body.** We temporarily put these experiments into the Appendix because of the page limit. We will reorganize them once more space is allowed. Specifically, we will move Figures 4 and 5 to Section 6 next to Figure 2 and modify the description in Lines 339~340 to: *We set $n=400$ and separately choose independent random-$\lfloor d/20\rfloor$ compressors (Figure 2), independent natural compression (Figure 4) and independent random quantization compressors (Figure 5) as the compressors used in the compression algorithms.* We thank the reviewer again for the careful and valuable comments. We hope the above response can clarify the reviewer's questions. We are looking forward to the follow-up discussion, and are more than happy to address any further comments or questions.
Summary: The paper explores the conditions under which unbiased compression reduces the communication cost in distributed optimization. Specifically, the paper first presents a theoretical formalization of the total communication cost (TCC) in distributed optimization. With this formulation, the paper proves that an unbiased compressor alone cannot necessarily save TCC. Then the paper proves lower bounds on the convergence complexity and shows that independent unbiased compressors provably save TCC. The paper also improves ADIANA and provides the upper bound of the TCC that can be reduced. Experiments are also provided to support the theoretical findings. Strengths: Originality: The paper focuses on how to save TCC, which is a more practical concern in distributed optimization. Though the paper relies on the findings (independence of unbiased compressors) of previous works, it provides new findings (on the upper bound of the TCC that can be reduced) and applicable modifications of existing methods to match the bound. Overall the paper seems novel to me. Quality: The claims in the paper are well supported by theoretical analysis. Clarity: The paper is well organized and easy to read. The authors give a good introduction and clearly list the questions (Q1, Q2) they need to answer. They also provide detailed intuition on why independent unbiased compressors save communication. Weaknesses: Overall the paper looks good to me. One concern is that the current dataset is relatively small. It would be better if the authors can provide experiments on a larger real dataset to better prove the effectiveness of the theoretical findings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and valuable suggestions. The datasets used in our current manuscript follow the experimental settings in prior works (e.g., [25, 26]) in the literature. Following the reviewer's suggestion, we conducted additional experiments with a larger dataset, CIFAR-10. The new experimental results are shown in Fig. 1 in the pdf attached to our “global response” to all reviewers. We are looking forward to the follow-up discussion with the reviewer, and are more than happy to address any further comments or questions.
Summary: The paper investigated the overall communication costs for federated learning algorithms. A lower bound on the per-round communication cost is presented, then an analysis of an existing algorithm, ADIANA, is provided to obtain an improved upper bound. Strengths: The general ideas for the settings and the results are clearly presented. Besides, the presented bounds are clean compared to related works in this field. Weaknesses: There is an absence of several crucial details, or perhaps typos, which makes it difficult to concretely understand the results. For example, one central concept required in this work is the unbiased compressor, which based on the current manuscript, applies to all real input values. However, if we require it to output a discrete variable with a bounded number of bins, such a compressor may not exist as it is not clear how the unbiased condition could hold for all $x\in\mathbb{R}$. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: There are possible places of improvement in terms of clarity. For example: 1. Generally, it would be better if any concept is defined before it is formally used. For example, in the current manuscript, the definition of “linear spanning” is lost in the texts. 2. Similarly, the assumption of fixed communication load per round could be introduced before the definition of $T_{\epsilon}$ in Sec. 2.4, otherwise, the meaning of minimization in its definition is not clear. 3. For strongly convex results, it might be cleaner to replace $L/\mu$ with $\kappa$. 4. The notation of $U^{int}_{\omega}$ (or $U_\omega$) can be removed (or moved to appendices) if not used. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitation is appropriately discussed in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments. All questions have been clarified as best we can. We are glad to address any further comments or questions. **1. There is an absence of several crucial details, or perhaps typos, which makes it difficult to concretely understand the results.** The example about unbiased compressors is addressed in the next point. We apologize if there are more typos or absences of details that affect the readability. We hope that the reviewer can point them out directly if possible, so that we can give specific responses. **2. Concept of unbiased compressor.** The concept of unbiased compressor defined in Assumption 2 is standard in the literature. It is consistent with existing concepts of unbiased compressors, such as Eq. (2) in reference [40] on EF21, Definition 2 in reference [26] on CANITA, Definition 1 in reference [25] on ADIANA, Lemma 2 in reference [49] on ECQ-SGD, Assumption 3 in reference [R1] on compressed push-pull, Assumption 4 in reference [13] on NEOLITHIC, Definition 4 and Remark 1 in reference [R2] on DIANA, and Lemma 3.1 in reference [3] on QSGD. Many examples fall into this family of compressors; see Examples 1 and 2 in references [25, 26]. [R1] Z. Song et al., Compressed Gradient Tracking for Decentralized Optimization Over General Directed Networks, IEEE TSP 2022. [R2] Samuel Horváth, Dmitry Kovalev, Konstantin Mishchenko, Sebastian Stich, and Peter Richtárik. Stochastic distributed learning with gradient quantization and variance reduction. *arXiv preprint arXiv:1904.05115,* 2019. **3. It would be better if any concept is defined before it is formally used. The definition of “linear spanning” is lost in the texts.** We agree. We temporarily put some definitions in the Appendix because of the page limit. For example, the formal definition of “linear spanning” is stated in **Appendix D**.
We will move the definition of "linear spanning" in Definition 2 in Appendix D to subsection 2.3, prior to the definition of the algorithm class in Definition 1. **4. The assumption of fixed communication load per round could be introduced before the definition of $T_\epsilon$ in Sec. 2.4, otherwise, the meaning of minimization in its definition is not clear.** The convergence complexity $T_\epsilon$ is defined as the minimal number of communication rounds needed to guarantee that an algorithm finds an $\epsilon$-accurate optimum. The notion is customary in the literature on communication compression (see, e.g., [20, 25, 32, 26]) and is not explicitly related to the per-round communication load. We chose to state $T_\epsilon$ first as it is widely used and self-explanatory. The definitions of per-round communication workload and total communication costs are less investigated in the literature. Therefore, we chose to write an entire section to motivate and argue for them. As suggested by the reviewer, the comparison in terms of $T_\epsilon$ is more meaningful for algorithms with the same per-round communication cost. We thank reviewer gfiB for pointing out this potential source of confusion. We will introduce the matter of the fixed per-round communication cost before $T_\epsilon$ when preparing the camera-ready draft. Specifically, we will move the following assumption in Lines 198~199: *Let each worker be equipped with a non-adaptive compressor with the same fixed per-round communication cost, i.e., the compressor outputs compressed vectors of the same length (size)* to an additional remark as follows, directly after Remark 1 in subsection 2.4.
**Remark 2.** Though the definition of $T_\epsilon$ can be independent of the per-round communication cost, which is specified through the degree of compression $\omega$ (i.e., the choice of compressors $C_i$), we further assume here that the $C_i$'s equipped by each worker are **non-adaptive compressors with the same fixed per-round communication cost**, i.e., each compressor outputs compressed vectors of the same length (size). **5. For strongly convex results, it might be cleaner to replace $L/\mu$ with $\kappa$.** We agree. Thanks for the advice. **6. The notation of $U_\omega^{ind}$ (or $U_\omega$) can be removed (or moved to appendices) if not used.** These notations are convenient for describing our theoretical results (e.g., in lines 177, 228, 259, 263) and highlighting the difference between ours and [13]. We thank the reviewer again for the careful and valuable comments. We hope these responses can clarify the reviewer's questions. We are looking forward to the follow-up discussion with the reviewer, and are more than happy to address any further comments or questions. --- Rebuttal 2: Title: Need more clarifications? Comment: Dear reviewer gfiB, We thank you for your valuable comments. We have made detailed responses to address your concerns, but we have not received your replies to our current clarifications yet. We thus kindly ask if our responses have addressed all your concerns. If not, we are more than happy to provide more clarifications. Best, The authors of paper 5699 --- Rebuttal Comment 2.1: Title: Consistency between unbiased compressor and fixed communication Comment: We would like to thank the authors for the response. My concern about the unbiased compressor remains, as the communication in this work is measured in bits (as far as I understand, as stated on line 208 or in Figure 1), but the well-known results in [25,26] deliver continuous variables, which require infinitely many bits to make them unbiased.
I suppose the authors may have intended to apply an additional quantization step, which would introduce some bias, but this is unclear from the manuscript. Especially when the authors assume a fixed and deterministic load per round (instead of variable length), the only class of compression functions C that enables decodability has to map the input to a fixed finite set, and any input larger than the maximum in that set cannot be mapped to a distribution with unbiased expectation. So for the results to be meaningful, a modification of at least some of the assumptions in the current manuscript is needed. Here are the possibilities. 1. The end-to-end compression function is not unbiased. 2. The communication load can be non-deterministic. 3. The compressor only applies to a subset of the real inputs. 4. There is a random seed shared between all machines and the randomness of C is taken over the shared value. 5. There is an allowed exceptional event with a small probability that the assumptions on the compressors can be violated. It would also be helpful if a concrete example of an unbiased estimator is presented that requires finitely many bits for communication. --- Reply to Comment 2.1.1: Title: Author response (Part 1/2) Comment: We really appreciate the reviewer's deep thought and sharp observation on the concept of unbiased compressors. Below are our responses. **1. No finite-bit unbiased compressor over the entire real space.** We agree with the reviewer. With finite bits, a compressor cannot estimate an arbitrarily large value in an unbiased manner and thus cannot facilitate the compression of all real numbers/vectors. **2. It is a common issue in the literature.** The reviewer's sharp insights indeed expose a common issue existing in a large body of literature.
For most literature on unbiased quantization (such as Q-SGD [3] and natural compression [14]), while the quantization schemes therein are defined over the entire real space, they are only applied to values represented by float32 or float64. Apparently, quantizing arbitrarily large (or small) values in the entire real space would result in infinitely many bits, which contradicts the purpose of saving communication. Thus it makes more sense to quantize values originally represented with float32 or float64 to values with much fewer bits. Our work follows this convention. While we define the unbiased compressor over the entire real space, we apply it to input values represented with float32 or float64. Since the input values have already been represented with finite bits, the output values of our defined unbiased compressor also use finite bits, which is consistent with our setting that "each communication round has fixed and deterministic load". This convention is also followed by other literature such as [R1], which was accepted at NeurIPS 2022, to compare the amount of saved communicated bits in comparison to other baselines. [R1] Wang, B., Safaryan, M. and Richtárik, P., 2022. Theoretically better and numerically faster distributed optimization with smoothness-aware quantization techniques. Advances in Neural Information Processing Systems, 35, pp. 9841-9852. --- Reply to Comment 2.1.2: Title: Author response (Part 2/2) Comment: **3. A refined definition for unbiased compressors** While we are following the convention in the literature, we are glad to provide a more precise definition for the unbiased compressors studied in the manuscript to resolve the reviewer's concern. In fact, an unbiased compressor with finite bits exists if and only if the following conditions hold: - **The valid input value must be bounded.** It is clear that using finite bits cannot provide an unbiased estimate for arbitrarily large (or small) values.
- **The valid non-zero input must be bounded away from zero.** Since the compression uses finitely many bits, there exists a smallest distance that bounds all possible non-zero input values away from 0. Given the above conditions, we will modify the classical definition of the unbiased compressor as follows: **Assumption 2’** We assume unbiased compressors $\\{C_i\\}_{i=1}^n$ satisfy \begin{equation} \mathbb{E}[C_i(x)]=x,\quad\mathbb{E}[\\|C_i(x)-x\\|^2]\le\omega\\|x\\|^2, \end{equation} for a constant $\omega\ge0$ and any input $x\in\mathcal{X}\subset\mathbb{R}^d$, where $\mathcal{X}$ satisfies the following condition: **There exist positive constants $0<\epsilon_m<M$ such that it holds for all $ x=(x_1,\cdots,x_d)^\top\in\mathcal{X}$ that $|x_i|\in\\{0\\}\cup[\epsilon_m,M]$, $i=1,\cdots,d$.** We believe it is a proper definition because - **It is well justified in practice.** The inputs of compressors are produced by machine computations; therefore, their maximum value and non-zero lower bound are constrained by the computation precision of machines. For instance, when performing computations using float32, the resulting values are bounded above by approximately 3.4e38 and away from zero by at least about 1.18e-38 if non-zero. - **It is consistent with unbiased compressors proposed in previous works.** Natural compression [14] is a well-established unbiased compression scheme, following which we can provide a concrete example of unbiased compressors satisfying Assumption $2^\prime$: Letting $\mathcal{C}(0)=0$ and \begin{equation} \mathcal{C}(t)=\left\\{ \begin{array}{ll} \mathrm{sign}(t)\cdot2^{\lfloor\log_2|t|\rfloor}, &\text{with probability } p(t),\\\\ \mathrm{sign}(t)\cdot2^{\lceil\log_2|t|\rceil}, &\text{with probability } 1-p(t), \end{array} \right.
\end{equation} where $t\ne0$ and the probability $p(t)=\frac{2^{\lceil\log_2|t|\rceil}-|t|}{2^{\lfloor\log_2|t|\rfloor}}$, it's clear that $d(2+\lceil\log_2(\lceil\log_2(M)\rceil-\lfloor\log_2(\epsilon_m)\rfloor+1)\rceil)$ bits are sufficient for constructing such a compressor that satisfies Assumption $2^\prime$ with $\omega=1/8$. - **It does not affect the theoretical results of our work.** -- **Upper bound**. Since the convergence criterion $\epsilon$ is typically much larger than the machine precision $\epsilon_m$ (e.g., 1.18e-38), the distortion incurred by finite bits has little effect on the ideal computations in the real space. In this case, all inputs of the compressors, which are calculated by machine computations, naturally fall into the valid input set $\mathcal{X}$ we consider. As a result, the machine precision $\epsilon_m$ does not affect our established upper bound for ADIANA. In fact, it is a convention in the optimization literature to ignore the effect of machine precision on the convergence rate and complexity. -- **Lower bound**. In our paper, we established the lower bound for $\inf_{A\in\mathcal{A}}\sup_{C_i \in \mathcal{U}\_\omega} T(A, \\{C_i\\})$, where $\mathcal{A}$ is the class of algorithms defined in Definition 1, $\mathcal{U}\_\omega$ is the class of unbiased compressors defined in Assumption 2, and $T(A, \\{C_i\\})$ is the convergence complexity achieved with algorithm $A$ and compressors $C_i$'s. When we consider the new class of unbiased compressors $\mathcal{U}\_\omega^\prime$ specified by Assumption $2^\prime$, it is easy to see that $\mathcal{U}\_\omega \subseteq \mathcal{U}\_\omega^\prime$.
Letting $T_L$ be the lower bound of $\inf_{A\in\mathcal{A}}\sup_{C_i \in \mathcal{U}\_\omega} T(A, \\{C_i\\})$, it naturally holds that \begin{equation} T_L \le\inf_{A\in\mathcal{A}}\sup_{C_i \in \mathcal{U}\_\omega} T(A, \\{C_i\\}) \le \inf_{A\in\mathcal{A}}\sup_{C_i \in \mathcal{U}'\_\omega} T(A, \\{C_i\\}). \end{equation} In other words, our established lower bound also holds for the new class of unbiased compressors defined by Assumption $2^\prime$. Since the upper bound and lower bound are not affected by the newly introduced compressor assumption up to the machine precision $\epsilon_m$, they still nearly match each other. That is, the modified family of unbiased compressors does not affect our theoretical results. **Summary** We really appreciate the sharp observations and useful suggestions from the reviewer. It is actually a common issue in the literature, and it can be practically addressed by considering finite-bit inputs only. We hope this resolves the reviewer's concerns, and we look forward to follow-up discussions.
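As an illustration of the natural-compression example under Assumption $2^\prime$ above, here is a minimal sketch (our own illustrative code with hypothetical helper names, not the authors' implementation) of the stochastic rounding rule $\mathcal{C}(t)$ and the claimed per-coordinate bit count:

```python
import math
import random

def natural_compress(t, rng=random):
    """Unbiased stochastic rounding of t to a signed power of two
    (the natural-compression rule C(t) sketched in the rebuttal)."""
    if t == 0:
        return 0.0
    mag = abs(t)
    lo = 2.0 ** math.floor(math.log2(mag))  # 2^{floor(log2 |t|)}
    hi = 2.0 ** math.ceil(math.log2(mag))   # 2^{ceil(log2 |t|)}
    if lo == hi:                            # |t| is already a power of two
        return math.copysign(lo, t)
    # p(t) = (2^{ceil} - |t|) / 2^{floor}; since hi = 2*lo here, this choice
    # gives E[C(t)] = lo*p + hi*(1-p) = |t|, i.e. the estimate is unbiased.
    p = (hi - mag) / lo
    return math.copysign(lo if rng.random() < p else hi, t)

def bits_per_coordinate(M, eps_m):
    """Per-coordinate bit count claimed in the rebuttal:
    2 + ceil(log2(ceil(log2 M) - floor(log2 eps_m) + 1))."""
    exp_range = math.ceil(math.log2(M)) - math.floor(math.log2(eps_m)) + 1
    return 2 + math.ceil(math.log2(exp_range))
```

For float32-style bounds ($M \approx$ 3.4e38, $\epsilon_m \approx$ 1.18e-38), `bits_per_coordinate` evaluates to 10 bits per coordinate, consistent with an 8-bit index over the 255-value exponent range plus sign and zero handling.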
Dynamic Regret of Adversarial Linear Mixture MDPs
Accept (poster)
Summary: This paper studies adversarial RL under episodic linear mixture MDPs with adversarial full-information rewards and a stationary unknown transition kernel. The paper first proposes a new algorithm to deal with the adversarial rewards and then provides an upper bound on the dynamic regret of the proposed algorithm. It then establishes a lower bound on the dynamic regret and claims optimality in terms of the number of episodes $K$ and the non-stationarity measure $P_T$. Strengths: 1. This paper provides a novel algorithm to deal with adversarial RL without prior knowledge. The main steps of the algorithm seem reasonable, and the paper provides detailed insight into each step. 2. This paper provides an upper bound on the dynamic regret, and the dependence on $K$ in the upper bound is optimal. Weaknesses: 1. The algorithm and results of this paper only apply to the finite-state case, which makes the results less significant. 2. The proof of the lower bound lacks rigor in several aspects. Firstly, the proof of the lower bound is divided into two separate cases, which is not in line with the definition of a lower bound. Ideally, the authors should construct one hard instance that encompasses both cases. Secondly, in the proof of the second case (dynamic regret case), the proof merely takes the maximum between the results of the two cases with respect to the range of $\Gamma$. 3. Given the existence of numerous studies employing alternative approaches to address non-stationarity and yielding diverse results, I think more comprehensive and detailed discussions of non-stationary RL are needed. For example, although line 108 indicates that [1] is not applicable to the adversarial setting when the comparators are arbitrarily chosen, people are usually concerned with the case where the comparators are the optimal policy. In this case, does the non-stationary RL discussed in [1] cover the adversarial RL in this paper?
It is important to discuss the relationship between the findings of [1] and this paper in such cases. [1] also achieves optimal regret with $K^{2/3}\Delta^{1/3}+\sqrt{K}$, where $\Delta$ denotes another non-stationary measure. Why does the dependence on $K$ differ? From my understanding, this discrepancy seems to arise from distinct non-stationary measures. [1] Chen-Yu Wei and Haipeng Luo. Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach. In Proceedings of the 34th Conference on Learning Theory (COLT), pages 4300–4354, 2021. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: As listed in weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: This work does not pose any negative impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. We will address your concerns and questions below. --- In particular, we believe there are some misunderstandings about the distinction between non-stationary *stochastic* MDPs (studied in other works) and non-stationary *adversarial* MDPs (this paper), as well as about the lower bound argument. These confusions could potentially be due to our lack of sufficient explanations in the paper. We take this opportunity to make the clarifications below. If your concerns are appropriately addressed, please consider updating your score. Thanks! --- Q1: "The algorithm and result of this paper only apply to the finite-state case, which makes the result less significant." A1: We clarify that our algorithm can be applied to MDPs with large state spaces. In Appendix B, we show that the computational complexity is independent of the number of states $S$. Additionally, our regret bound is also independent of $S$ (please kindly check the supplementary version, in which we have refined the analysis and obtained bounds independent of $S$). --- Q2: "The proof of the lower bound lacks rigor in several aspects. Firstly, the proof of the lower bound is divided into two separate cases, which is not in line with the definition of a lower bound. Ideally, the authors should construct one hard instance that encompasses both cases. Secondly, in the proof of the second case (dynamic regret case), the proof merely takes the maximum between the results of the two cases with respect to the range of $\Gamma$." A2: We respectfully disagree with this comment. It is a typical strategy to divide the entire problem into several distinct cases to establish a lower bound, which allows us to find hard instances more conveniently. This is both reasonable and sound.
Let's consider a problem denoted as $A$, which consists of two cases, $A_1$ and $A_2$, where the difficulty of case $A_1$ is represented by $L_{A_1}$ and the difficulty of case $A_2$ by $L_{A_2}$. In this context, it is evident that problem $A$ cannot be easier than either case $A_1$ or case $A_2$, since all the difficult instances of cases $A_1$ and $A_2$ are encompassed within problem $A$. Consequently, the difficulty of problem $A$ is at least equal to the maximum difficulty between cases $A_1$ and $A_2$. Thus, we can assert that the difficulty of problem $A$ is at least $\max(L_{A_1}, L_{A_2}) \geq (L_{A_1} + L_{A_2}) / 2 = \Omega(L_{A_1} + L_{A_2})$. We will add more explanations in the revised version. --- Q3: "more comprehensive and detailed discussions of non-stationary RL are needed." A3: Thank you for the suggestion. We will certainly enrich the discussion of the existing literature. However, it's crucial to emphasize that non-stationary MDPs can be divided into two main threads: (i) non-stationary *stochastic* MDPs, which most previous works are concerned with; and (ii) non-stationary *adversarial* MDPs, which this paper focuses on. The two settings, along with their respective results and algorithmic components, are very different and generally incomparable. They can be viewed as two distinct models for addressing non-stationary online learning and decision-making processes. In fact, these two settings are typically studied independently, even in simplified scenarios such as full-information and bandit online learning. For a more detailed discussion on the significant differences between these two setups, **please refer to A1 for Reviewer Xpcd**. Furthermore, we note that all the relevant works (to the best of our knowledge) on non-stationary adversarial MDPs are covered in our paper.
We will add additional discussions on other related works, including those concerning non-stationary stochastic MDPs, to provide readers with a more comprehensive understanding of the literature. Thank you. --- Q4: "does the non-stationary RL discussed in [1] cover the adversarial RL when the comparators are optimal policies in this paper?" A4: No, the reduction-based framework in [1] can handle non-stationary **stochastic** MDPs, but it *cannot* be applied to non-stationary **adversarial** MDPs even when the comparators are set as optimal policies. Indeed, Wei and Luo (2020) propose a generic reduction, which requires the base learner to satisfy a certain property enjoyed by typical UCB-type algorithms. When a new instance of the base algorithm surpasses this optimistic estimator, it can be inferred that the environment has undergone changes, prompting a restart of the algorithm to disregard prior information. However, this approach of constructing an optimistic estimator using a UCB-type algorithm can only be applied effectively in a **stochastic** setting. In the **adversarial** setting, where no model assumptions are made and comparators can be *arbitrary*, this approach encounters significant difficulties. This difference highlights the significant challenges inherent in handling non-stationary adversarial MDPs. In fact, **as discussed in A2 for Reviewer Xpcd**, non-stationary adversarial online learning can sometimes be much harder than non-stationary stochastic online learning, even in the multi-armed bandit setting. Thanks for your question. We will clarify this point in the revised version. --- Q5: "It is important to discuss the relationship between the findings of [1] and this paper in such cases" A5: We clarify that [1] studies non-stationary stochastic MDPs, which are significantly different from our non-stationary adversarial MDPs, as mentioned in A4.
Consequently, both our problem setting and the corresponding results are not directly comparable. We should have discussed this point more, and we will revise the paper to make it clear in the next version. Thanks! --- Rebuttal Comment 1.1: Comment: I thank the authors for answering all my questions. Most of my concerns have been sufficiently addressed. Hence I raise my score. Please incorporate the discussion on related work from the rebuttal into the final version. I also saw the latest response to Reviewer Xpcd and find that Zhong et al. (2021) also considers non-stationary adversarial settings. However, in A1 for Reviewer Xpcd, the authors claim that Zhong et al. (2021) only considers the non-stationary stochastic setting. I think the authors should also clarify this. --- Reply to Comment 1.1.1: Comment: Thanks for your comment and for raising the score! We will certainly integrate the discussion on related works from the rebuttal phase into the final version. Regarding [Zhong et al., 2021], we sincerely apologize for the oversight in our initial summary of related works and also thank Reviewer Xpcd for highlighting this contribution. However, as mentioned in the new response to Reviewer Xpcd, [Zhong et al., 2021] mainly address the non-stationary stochastic setting, though they indeed also explore the non-stationary adversarial setting. Their contribution to the non-stationary adversarial setting essentially extends the algorithm and results in [Fei et al. 2020] to accommodate non-stationary transition kernels with linear function approximation, *without touching the essential difficulty of handling non-stationarity in adversarial MDPs*. In contrast, we aim to **improve** the regret bound of [Fei et al. 2020] directly. To achieve this goal, we design a meta-base two-layer structure rather than the restart mechanism used by [Fei et al. 2020] and [Zhong et al., 2021] to handle the adversarial rewards.
While we acknowledge that our result does not strictly surpass prior results, this paper **makes the first non-trivial step towards obtaining optimal dynamic regret in the non-stationary adversarial MDP setting** (our result is indeed optimal in certain regimes). We believe the methodology and techniques in this work will inspire subsequent research in this area to obtain the optimal regret bound for all regimes. For a more comprehensive comparison, please refer to our response to Reviewer Xpcd. We'd also like to delve into further discussions to address any remaining concerns that the reviewer may have. Thanks!
Summary: This paper explores the problem of reinforcement learning in non-homogeneous MDPs with adversarial full-information reward feedback and unknown transition kernels. The authors propose a new algorithm that has advantages in dynamic regret and does not require prior knowledge as input. The algorithm achieves optimal performance in certain cases. The paper provides a theoretical foundation for reinforcement learning in non-homogeneous MDPs by studying the upper and lower bounds of dynamic regret. Strengths: This work investigates the dynamic regret in linear mixture MDPs with adversarial rewards, which is a problem of great significance and relevance. The authors make notable contributions by offering a precise upper bound on regret as well as a lower bound for this problem. To derive the upper bound, the authors introduce new techniques, including the fix-share mechanism and a multiplicative stability lemma, which hold potential value and may be of independent interest to researchers in related fields. Weaknesses: - The authors have not included a highly related work titled "Optimistic Policy Optimization is Provably Efficient in Non-stationary MDPs," which addresses the more challenging setting of linear mixture MDPs with adversarial rewards and non-stationary transitions. - When comparing this work with both (Fei et al., 2020) and "Optimistic Policy Optimization is Provably Efficient in Non-stationary MDPs," it is worth noting that this work demonstrates improvements in terms of the parameters K and P_T. However, it introduces an additional dependency on S_T. Since S_T can be linear in K in the worst-case scenario, the resulting regret bounds may not strictly improve upon the previous findings. - Considering the omission of the aforementioned related work and the potential limitations of the current results in terms of the additional dependency on S_T, it may be valuable for the authors to discuss these aspects in the paper. 
This would help readers gain a comprehensive understanding of the work's contributions and its relationship to the existing literature. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - While the fix-share mechanism and multiplicative stability lemma used in this work aim to enhance the regret upper bound, it is important to assess whether these techniques strictly improve upon existing works. The authors should provide a clear analysis or comparison demonstrating how their approach surpasses or builds upon the achievements of previous methods. If the improvements are not strictly superior but rather offer refinements or alternative perspectives, they should be clearly stated to avoid potential misconceptions. - In addition, offering further explanations on why the reduction-based framework in (Wei and Luo, 2020) is not applicable in adversarial linear mixture MDPs, even with full-information feedback, would add value to the paper. Discussing the specific challenges or characteristics of adversarial linear mixture MDPs that render the reduction-based framework ineffective or impractical will enable readers to grasp the unique complexities of the problem domain. By providing these explanations, the authors can highlight the significance of their proposed approach and its suitability for addressing the specific challenges posed by adversarial linear mixture MDPs. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 1 poor Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your helpful review. We will answer your questions individually. If your concerns are properly addressed, please consider updating the score. Thanks! --- First, the reviewer may have an important misunderstanding of the two different settings: (i) non-stationary *stochastic* MDPs (other works) and (ii) non-stationary *adversarial* MDPs (this paper). This confusion could be due to our insufficient discussions. We clarify the salient differences below. Q1: a related work [Zhong et al. 2021], which is more challenging A1: Thanks for bringing up this paper, which also studies the non-stationarity issue in MDPs and is thus related to our work. We will undoubtedly cite and discuss it in the next version. However, it’s crucial to point out that the setting of [Zhong et al., 2021] is *fundamentally different* from ours. - They study non-stationary **stochastic** MDPs, where the reward is assumed to be *stochastically* generated by parametric models with parameters continuously drifting. - In contrast, we study non-stationary **adversarial** MDPs, allowing rewards to be *adversarially* chosen. The objective is to be competitive with a sequence of time-varying comparator policies. So we respectfully disagree with the comment that the setup in [Zhong et al., 2021] is more challenging than ours -- they are actually *incomparable*. Below we highlight more detailed differences. For simplicity, we consider the fixed transition kernel scenario. - In non-stationary **stochastic** MDPs, rewards are generated **stochastically** according to some parametric models that may vary over time, for example, $r_k(s, a) = \phi(s, a)^\top \theta_k^* + \text{noise}$, and the aim is to optimize regret against the drifting parameters $\theta_1^*,...,\theta_K^*$. - Contrarily, non-stationary **adversarial** MDPs do *not* make any stochastic assumption over the rewards $r_k$.
Instead, they compete with an *arbitrary* feasible sequence of time-varying comparator policies, which could be chosen with hindsight as the oracle comparators that best fit the underlying environments with an optimal balance between bias (due to various factors like sample randomness) and variance. Algorithmic ingredients to handle non-stationarity are also significantly different (sliding window/restart/weights for stochastic MDPs; two-layer structures for adversarial MDPs). To summarize, the two settings and their respective algorithms/results are *incomparable*. They can be viewed as two distinct models for non-stationary online learning. Actually, in some cases the adversarial setting would be even harder; see A2 for discussion. We will include those discussions in the next version to enhance the literature review. Thanks! ---- Q2: "why ... framework in (Wei and Luo, 2020) is not applicable in adversarial linear mixture MDPs" A2: This framework [Wei and Luo, 2020] can handle non-stationary **stochastic** MDPs, but cannot be applied to our adversarial case. Indeed, their reduction requires the base learner to satisfy a UCB-like property. When a new instance of the base algorithm surpasses this optimistic estimator, the MASTER algorithm will suspect that the environment has changed and perform a restart. However, this strategy is challenging to apply in the adversarial setting, since the construction of a UCB-type base learner is difficult when there are no statistical model assumptions. Indeed, non-stationary stochastic/adversarial online learning and decision making are usually examined independently, even in simplified scenarios such as full-information and bandit online learning. Actually, in certain instances, the non-stationary adversarial setting can be *more challenging* than the non-stationary stochastic one. Consider the multi-armed bandits problem.
- It is extremely difficult to handle its non-stationary *adversarial* model; actually, achieving an optimal bound without knowing the non-stationarity level is provably *impossible* for adaptive adversaries [Marinov and Zimmert, 2021], and remains open for oblivious adversaries. - In contrast, it's feasible to derive an optimal strategy for the non-stationary *stochastic* setting without the non-stationarity level using the technique of [Wei and Luo, 2021]. More technical discussions can be found in P3 of [Luo et al., 2022]. We will clarify it in the revised version. T. Marinov and J. Zimmert. The Pareto Frontier of Model Selection for General Contextual Bandits. NeurIPS 2021. C.-Y. Wei and H. Luo. Non-stationary Reinforcement Learning without Prior Knowledge: An Optimal Black-box Approach. COLT 2021. H. Luo, M. Zhang, P. Zhao, and Z.-H. Zhou. Corralling a Larger Band of Bandits: A Case Study on Switching Regret for Linear Bandits. COLT 2022. --- Q3: issue on switching number $S_T$ A3: We agree with the comment and acknowledge that involving the data-dependent quantity $S_T$ is not fully satisfactory. But we believe our contributions are still interesting enough to the community for the reasons below. - Our result is more attractive than (Fei et al., 2020) under regimes such as a stationary environment where the best base-learner rarely changes, as well as a piecewise-stationary environment where the environment changes only a limited number of times (then $S_T$ aligns with the change frequency). - Importantly, we improve the dependence on $K$ and $P_T$, matching the lower bound established in our paper. As noted in Remark 3, the dependence on $S_T$ presents a significant technical challenge, primarily because the meta regret is now a weighted regret across all states, where the weight for each state is determined by **arbitrary** comparators $\pi_1^c,...,\pi_K^c$. This is in contrast with the existing literature and presents a unique challenge.
We leave this for future research and will emphasize it more in the revised version. --- We hope the above responses sufficiently address your concerns. We're happy to provide further clarifications to additional questions during the reviewer-author discussion period. Thanks! --- Rebuttal Comment 1.1: Comment: Thanks for your comprehensive response. I agree with the authors' statement that non-stationary stochastic MDPs and non-stationary adversarial MDPs are fundamentally different. However, it seems that [Zhong et al. 2021] also consider non-stationary adversarial MDPs (see Table 1 and Theorem 4.3 in [Zhong et al. 2021]). Given this context, where your work isn't the first to tackle this problem and doesn't strictly surpass prior outcomes, I am inclined to maintain my original score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for highlighting this additional contribution made by [Zhong et al. 2021] that we missed. [Zhong et al. 2021] indeed explored non-stationary adversarial MDPs, extending the algorithm of [Fei et al. 2020] to accommodate non-stationary transition kernels with linear function approximation. Given the fact that the restart mechanism used in [Fei et al. 2020] can be slightly modified to address non-stationary transitions and that the algorithm therein can be naturally extended beyond the tabular case, the extension from [Fei et al. 2020] to [Zhong et al. 2021] is more or less straightforward, *without touching the essential difficulty of handling non-stationarity in adversarial MDPs*. Indeed, when examining the tabular case with the fixed transition studied by [Fei et al. 2020], the algorithm and regret bound in [Zhong et al. 2021] are the **same** as those in [Fei et al. 2020]. In contrast, we aim to **improve** the regret bound of [Fei et al. 2020] directly. The restart mechanism can only achieve a dynamic regret of $O(T^{2/3} P_T^{1/3})$ in both [Fei et al. 2020] and [Zhong et al.
2021], which is not optimal, as demonstrated by the lower bound established in our paper. Thus, we aim to design an **optimal** algorithm to address adversarial rewards, which is very challenging. To achieve this goal, we design a meta-base two-layer structure rather than the restart mechanism to handle the adversarial rewards. Although our result does not strictly surpass prior results, **ours is the first result to achieve a minimax-optimal dynamic regret of $O(\sqrt{T P_T})$ in certain regimes** (for example, when $S_T$ is small). In comparison, the regret bound achieved via **the restart mechanism remains sub-optimal across all regimes**. Therefore, our paper makes the first non-trivial step towards obtaining optimal dynamic regret in the non-stationary adversarial MDP setting. We believe the methodology and techniques in this work will inspire subsequent research in this area to obtain the optimal regret bound for all regimes. We hope the reviewer can re-evaluate the contribution of this paper based on the above facts. Thanks! --- Reply to Comment 1.1.2: Title: Thanks for the review! Have we properly addressed your concerns? Comment: Dear Reviewer, We sincerely appreciate your constructive feedback and are especially grateful for bringing the paper by [Zhong et al., 2021] to our attention. We will update the paper to cite [Zhong et al., 2021] and incorporate the above discussions in the next version. Given that the author-reviewer discussion period is soon coming to an end, please let us know if our response has properly addressed the concerns and potential misunderstandings. We will be happy to provide clarification if you have any further questions. Thanks! Best, Authors
Summary: The paper "Dynamic Regret of Adversarial Linear Mixture MDPs" studies reinforcement learning in episodic non-homogeneous MDPs with adversarial rewards in the full-information feedback setting. The motivation for this work is that in many applications, reward functions may be picked adversarially and change over time. Moreover, the state and action spaces may be huge. Hence, this paper considers adversarial rewards and linear function approximation. Previous work falls short on at least one of these requirements. The authors show, by deriving a lower bound, that their algorithm enjoys a "dynamic regret" which is close to "optimal"; it can also recover the static setting and improves the adversarial-reward regret for tabular MDPs. Strengths: A novel algorithm is proposed that achieves good dynamic regret under a non-stationary adversarial regime, without knowledge of the non-stationarity measure, which poses many challenges for the analysis. The analysis in the paper successfully overcomes these and obtains an upper bound and a matching lower bound. The static case can be exactly recovered from these results, and in the tabular case, the bound strictly improves the existing one. Weaknesses: Though out of scope for this work, I would like to know the authors' perspective on the following (since they also mentioned these in the paper). 1. This work considers the full-information setting; the bandit setting is not considered and is much more complex. 2. The dependence on S is bad in the large-state-space regime, and on H when the non-stationarity measure is large. These could not be avoided in the analysis. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please refer to the section above. I would appreciate it if these were discussed further by the authors. Another question is in Remark 2. How do we retrieve the static case from Theorem 1? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the insightful and constructive comments from the reviewer. In the following, we respond to each of the questions raised. --- Q1: “This work considers the full-information setting; the bandit setting is not considered and is much more complex.” A1: It would indeed be very interesting to consider the bandit setting, but as far as we know, it presents significant challenges. Dynamic regret with adversarial rewards and bandit feedback is not well understood, even in simpler online learning settings (like multi-armed bandits). The high-level challenges here include how to effectively deploy the meta-base structure given the very limited bandit feedback, and how to ensure a sufficiently small meta regret so as not to ruin the overall bound. We believe that substantial novel ideas are required to develop non-trivial guarantees for the bandit setting. --- Q2: "The dependence on $S$ is bad in the large-state-space regime, and on $H$ when the non-stationarity measure is large" A2: Thank you for your question. We address your concerns as follows. - The dependence on $S$ is indeed not ideal, particularly in the large-state-space regime. We have addressed this issue in the supplementary-material version, where we present an alternative, refined analysis of the algorithm. This analysis achieves a dynamic regret bound that is independent of $S$, but at the price of introducing another data-dependent quantity $S_T$, which represents the switching number of the best base-learner at each round and essentially captures the degree of environmental non-stationarity. - The dependence on $H$ in our result aligns with the most recent research, such as the works of Fei et al. [2] and Zhao et al. [58]. It is important to note that a gap persists in the dependence on $H$ even when the transition kernel is known, as indicated by Theorem 1 and Theorem 2 of Zhao et al. [58].
Closing this gap is an important direction for future research. In summary, while our dynamic regret guarantees are already optimal in some regimes, the limitations pointed out by the reviewer do exist. We leave these as future work and will try to improve the results further. --- Q3: "In Remark 2, how do we retrieve the static case from Theorem 1?" A3: For the static regret, the comparators are all the same, that is, $\pi_1^c = \ldots = \pi_K^c = \pi^*$, i.e., $P_T=0$. Setting $\gamma = 0$, the dynamic regret in Theorem 1 becomes $\text{D-Regret}(K) \leq \frac{\eta KH^3}{2} + \frac{H \log A}{\eta}$. By setting the step size as $\eta = \sqrt{\frac{\log A}{KH^2}}$, we obtain the $O(\sqrt{H^4 K \log A})$ static regret. Thanks for your question; we will add more explanation in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for answering my questions. Consider including the comparable works in adversarial settings, as the other reviewers pointed out. I have no further questions, and am retaining my original score.
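For completeness, the step-size choice in A3 above balances the two terms of the bound; a short derivation of the tuning (our reconstruction, not text from the paper):

```latex
\min_{\eta>0}\left\{\frac{\eta K H^{3}}{2}+\frac{H\log A}{\eta}\right\}
  \;=\; 2\sqrt{\frac{K H^{3}}{2}\cdot H\log A}
  \;=\; \sqrt{2\,K H^{4}\log A}
  \;=\; O\!\left(\sqrt{H^{4}K\log A}\right),
```

attained at $\eta^{\star}=\sqrt{2\log A/(K H^{2})}$; the step size $\eta=\sqrt{\log A/(K H^{2})}$ quoted in A3 matches $\eta^{\star}$ up to a constant factor and yields the same order.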
Summary: This study analyzes the problem of online learning in episodic inhomogeneous MDPs with _full-information_ reward functions (changing over episode $k$ and step $h$) and an unknown transition kernel (unchanged over episodes). The MDPs are linear mixture MDPs and the performance measure is _dynamic_ regret (against _any sequence of policies_ with a non-stationarity measure $P_T$ which controls how varied the sequence is). The proposed algorithm improves on existing upper bounds while relaxing the requirement of prior knowledge of $P_T$. A lower bound is also provided which suggests optimality of the proposed algorithm in a few problem parameters. The presentation is excellent and the proposed algorithm seems interesting. I recommend its acceptance. Strengths: 1. The presentation is excellent and technically substantive. I am not an expert on the exact topic but feel comfortable following most of the discussion. 2. The proposed algorithm is interesting and uses techniques from other (recent) works (some from adjacent fields). Weaknesses: 1. From the point of view of someone who is not intimately familiar with the exact topic, the problem setting might seem a bit artificial/too restrictive, e.g., full information of reward functions. But it is clear from the cited works that many features of the setup are up-to-date with what the community is actively studying. The authors also acknowledged some of these limitations. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Can you provide some intuition of what the switching number of the best base-learner, $S_T$, measures? I found it challenging to grasp due to the presence of the expectation taken over the (counterfactual) state distribution generated by $\pi^c_k$. Are there some regimes for this parameter that are interesting (and provide intuitive connections to existing studies)? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your insightful comments! Below we will address your questions. --- Q1: “Can you provide some intuition of what the switching number of the best base-learner, $S_T$, measures? Are there some regimes for this parameter that are interesting (and provide intuitive connections to existing studies)?” A1: Thanks for the question. The quantity $S_T$ denotes the switching number of the best base-learner across rounds, which essentially reflects the degree of environmental non-stationarity. Consider the following two examples. 1. In a stationary environment (i.e., the reward function remains unchanged), $S_T$ could be relatively small as the best base-learner would seldom change. 2. In a piecewise-stationary environment, $S_T$ would align with the frequency of the environmental changes. In this regard, $S_T$ can be considered an additional measure of the level of non-stationarity. Admittedly, $S_T$ is a data-dependent quantity, and one may prefer to obtain bounds that rely solely on problem-dependent quantities such as $P_T$. As noted in Remark 3, this would introduce technical challenges that are difficult to address. We leave this issue for future research and will emphasize this point further in the revised version. --- Q2: "I found it challenging to grasp due to the presence of the expectation taken over (counterfactual) state distribution generated by $\pi_k^c$" A2: Sorry for the confusion. We define $S_T$ by taking the expectation over the state distribution generated by $\pi_k^c$ to derive a more refined bound. The expectation can be conveniently replaced by taking the maximum over all states. --- Thank you for raising these questions. We will revise our paper to highlight the points discussed above and further improve the presentation in the next version.
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper studies RL in the adversarial episodic inhomogeneous MDP setting with unknown transition kernels. It should be noted that the paper considers full-information feedback & linear mixture MDPs (both are very strong assumptions). The authors propose an algorithm drawing inspiration from policy optimization and prediction with expert advice problems (hence the full-information feedback setting). An upper bound on the dynamic regret is provided. The authors also offer an accompanying lower bound. Strengths: The paper is generally well-written and seems technically sound. An extensive literature review is provided and the authors offer comparison and discussion with closely related works. The regret bound improves upon those in existing works, and is near-optimal. Weaknesses: Among the key technical innovations listed in the introduction compared to existing works (Fei et al [2], Cai et al [34]), I feel that the extension from tabular MDPs to linear mixture MDPs is not technically challenging, given that proximal policy optimization can be naturally extended beyond the tabular setting. In that case, the real contribution lies in the improvement of the regret bounds. Could the authors highlight what are the key innovations (either in the algorithms or in the analysis) that lead to this improvement? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the authors highlight what are the key innovations (either in the algorithms or in the analysis) that lead to this improvement? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss some limitations & future works in the conclusion. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions. We provide our response to each question below. ---- Q1: "Could the authors highlight what are the key innovations (either in the algorithms or in the analysis) that lead to this improvement?" A1: The key innovations over the prior works (especially the works of Fei et al. [2] and Cai et al. [34]) for non-stationary MDPs lie in the different ways to handle the environmental non-stationarity. Cai et al. [34] focused solely on the static regret of adversarial linear mixture MDPs, where a single, fixed policy served as the comparator. Their algorithm and analysis do not support dynamic regret minimization, which requires competing with a sequence of time-varying comparators. Fei et al. [2] studied the same problem setup as ours, i.e., the dynamic regret of adversarial linear mixture MDPs. Their algorithm follows a **restarting strategy** to handle the non-stationarity. Specifically, the algorithm periodically restarts itself to discard previous information. However, this restart period requires prior knowledge of the environmental non-stationarity $P_T$ as an algorithmic input, which is unfavorable. In contrast, our approach tackles the environmental non-stationarity via the **two-layer meta-base framework**. We first design a policy optimization algorithm which is capable of tracking time-varying comparator policies (this is achieved in a non-trivial manner; see Lemma 1 in the paper), though the optimal step-size tuning still requires prior knowledge of $P_T$. To remove this unpleasant dependence, we introduce a meta-base two-layer structure that concurrently maintains multiple base-learners. Each base-learner is associated with a candidate step size, ensuring that the optimal step size is well approximated when considering all base-learners together. 
Subsequently, we employ a meta-algorithm to track the optimal base-learner and thus can achieve a favorable guarantee without knowing $P_T$ ahead of time. While this framework is relatively standard in modern online learning, several new technical challenges need to be solved when it is applied to online MDPs. For example, the standard dynamic regret analysis relies on a telescoping argument from online convex optimization, which does not work in online MDPs because the regret here is defined via expectations over changing policies. We address this issue by leveraging a fixed-share mechanism and presenting a novel multiplicative stability lemma, as detailed in Lines 203-208. Furthermore, the standard implementation of the two-layer structure requires us to update and evaluate multiple base-learners simultaneously. However, we can only deploy a single combined policy in the environment. We handle this challenge with a new analysis. More technical highlights can be found at the end of page 2 in our paper. In summary, a proper deployment of the meta-base two-layer structure is the key innovation of our proposed method to achieve the improvement. We appreciate your feedback and will revise the paper to more clearly emphasize these discussions in the next version. Thanks! --- Rebuttal Comment 1.1: Title: Thank you Comment: I would like to thank the authors for their response. I have read through and agree with the issues pointed out by Reviewer Xpcd, especially the omitted related work. Please incorporate your response into the final version of this paper. In general, I think the paper is in good shape. But I should admit that although I am familiar with statistical RL, I am **not** familiar with the literature on adversarial MDPs. Hence, I cannot comment on the novelty and contribution of the paper on top of the existing work. --- Reply to Comment 1.1.1: Comment: Thanks for your response. 
We will certainly incorporate the discussion on related works during the rebuttal phase into the final version. To summarize, our paper **makes the first non-trivial step towards obtaining optimal dynamic regret in the non-stationary adversarial MDP setting** (our result is indeed optimal in certain regimes). To achieve this goal, we design a meta-base two-layer structure rather than the restart mechanism to handle the adversarial rewards. We believe the methodology and techniques in this work will inspire subsequent research in this area to obtain the optimal regret bound for all regimes. We'd also like to delve into further discussions to address any remaining concerns that the reviewer may have. Thanks!
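The fixed-share mechanism invoked in the rebuttal above is a standard device from prediction with expert advice; the sketch below illustrates the generic update (the function name, uniform-mixing form, and parameter values are our illustrative choices, not the paper's actual algorithm):

```python
import numpy as np

def fixed_share_step(weights, losses, eta, gamma):
    """One round of a fixed-share update over N base-learners.

    Step 1: exponential-weights update on the observed losses.
    Step 2: mix a gamma-fraction of uniform mass back in, so every
    base-learner retains weight >= gamma / N and the meta-learner can
    re-discover a base-learner that becomes best after a switch.
    """
    w = weights * np.exp(-eta * losses)   # multiplicative update
    w = w / w.sum()                       # renormalise
    n = len(w)
    return (1.0 - gamma) * w + gamma / n  # fixed-share mixing

# Toy run: base-learner 0 is best for 50 rounds, then base-learner 2.
w = np.ones(3) / 3
for t in range(100):
    best = 0 if t < 50 else 2
    losses = np.ones(3)
    losses[best] = 0.0
    w = fixed_share_step(w, losses, eta=0.5, gamma=0.01)
```

After the switch, weight flows back to base-learner 2 within a few rounds precisely because of the $\gamma/N$ floor; with $\gamma = 0$ (plain exponential weights) the recovery would be far slower, which is why the mixing step matters for tracking time-varying comparators.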
null
null
null
null
null
null
Gradient-Free Kernel Stein Discrepancy
Accept (poster)
Summary: The paper provides a method to estimate the kernel Stein discrepancy without gradients. Strengths: The paper provides a method to estimate the kernel Stein discrepancy without gradients. Weaknesses: The paper is not clearly written for someone who is not familiar with the field. After reading, I'm still confused about whether someone else has published a similar method before. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: How limited is the class of distributions that allow gradient-free estimation of the kernel Stein discrepancy? And the paper probably cherry-picked good experiments; how often do the mentioned failure modes occur? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The paper is not clearly written for someone who is not familiar with the field. After reading, I'm still confused about whether someone else has published a similar method before. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking a look at our manuscript. Please see our global response to all Reviewers on the topic of how our manuscript has been presented. > After reading, I'm still confused about whether someone else has published a similar method before. Thank you for the opportunity to clarify this point. A gradient-free Stein operator had been described in the context of Stein variational gradient descent (Han and Liu, 2018), but a Stein _discrepancy_ based on this operator had not been proposed or analysed. Our work contributes the first such gradient-free Stein discrepancy, which is based on a reproducing kernel Hilbert space Stein set. Reviewer DoGh has summarised the situation well, explaining that "whilst the gradient-free Stein operator has been previously discussed in the community (Han and Liu, 2018), its theoretical properties have not received sufficient attention. This paper established conditions under which the GF-KSD can detect and control convergence of a sequence of empirical distributions to a target probability measure (Theorem 1 and 2). To my knowledge, this contribution fills an important gap in the existing literature and significantly enhances our understanding of the topic". > How limited is the class of distributions that allow gradient free estimation of the kernel Stein discrepancy? For concreteness, we have interpreted this question as asking "for which $p$ does gradient-free kernel Stein discrepancy (KSD) offer convergence control?", since convergence control is the principal theoretical justification for using gradient-free KSD (GF-KSD) as an objective to be minimised. Please let us know in the discussion if this was not what was being asked. 
If we choose $q = p$ we obtain the standard KSD, and in this case the class of $p$ for which KSD provides convergence control is known to include all distributions $p(x)$ on $\mathbb{R}^d$ with Lipschitz continuous $\nabla \log p(x)$ and _distantly dissipative_ tails (see Section 2.2). The latter condition is implied when $p$ is strongly log-concave outside a compact set (i.e. sub-Gaussian tails). For $q \neq p$, the distant dissipativity condition applies to $q$ rather than $p$, but then our $\inf_{x \in \mathbb{R}^d} q(x) / p(x) > 0$ condition in Theorem 2 implies that $p$ must also have sub-Gaussian tails in order for the preconditions of Theorem 2 to hold. Thus, GF-KSD applies to distributions $p$ with sub-Gaussian tails -- this is more general than standard KSD since the requirement for $p$ to be distantly dissipative is removed. We will make sure to emphasise this point in the revised manuscript. Thank you for your feedback; we hope that our clarifications, our commitment to improve the manuscript, and the remarks of Reviewer DoGh will have given you a more positive impression of our work.
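To make the discrepancy concrete for readers outside this literature: in one common formulation (our paraphrase of Han and Liu, 2018; the manuscript's exact construction may differ), the gradient-free Stein operator acts as $(\mathcal{S}_{p,q} f)(x) = \frac{q(x)}{p(x)}\bigl[\nabla\log q(x)^{\top} f(x) + \nabla\cdot f(x)\bigr]$, so evaluating it needs the ratio $q/p$ and $\nabla\log q$ but never $\nabla\log p$. A GF-KSD estimate is then a weighted V-statistic of the induced Stein kernel; the RBF base kernel, bandwidth, and weight normalisation below are our assumptions, not the paper's exact choices:

```python
import numpy as np

def gf_ksd2(x, score_q, log_w, ell=1.0):
    """V-statistic estimate of a (squared) gradient-free KSD.

    x       : (n, d) sample locations
    score_q : grad log q evaluated at x, shape (n, d)
    log_w   : log(q/p) at x (up to an additive constant), shape (n,)
    ell     : RBF bandwidth
    """
    n, d = x.shape
    diff = x[:, None, :] - x[None, :, :]        # (n, n, d)
    sq = (diff ** 2).sum(-1)                    # (n, n)
    K = np.exp(-sq / (2 * ell ** 2))            # RBF base kernel
    gx = -diff / ell ** 2 * K[..., None]        # grad_x k(x, y)
    gy = -gx                                    # grad_y k(x, y)
    trace = (d / ell ** 2 - sq / ell ** 4) * K  # sum_i d2k / dx_i dy_i
    s = score_q
    k0 = (s @ s.T) * K \
        + np.einsum('id,ijd->ij', s, gy) \
        + np.einsum('jd,ijd->ij', s, gx) \
        + trace                                 # Langevin Stein kernel
    w = np.exp(log_w - log_w.max())             # stabilised weights q/p
    return (np.outer(w, w) * k0).mean()

# Sanity check: standard-normal target with q = p (constant weights).
rng = np.random.default_rng(0)
good = rng.normal(size=(200, 2))                # samples from p
bad = good + 2.0                                # shifted away from p
v_good = gf_ksd2(good, -good, np.zeros(200))    # score of N(0, I) is -x
v_bad = gf_ksd2(bad, -bad, np.zeros(200))
```

With $q = p$ the weights are constant and this reduces to the standard Langevin KSD; in the sanity check, the shifted samples yield a larger discrepancy value than the samples drawn from the target.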
Summary: This paper explores the use of Stein discrepancies in scenarios where the score function of the target distribution is unavailable or computationally impractical to evaluate. The authors introduce a novel approach called gradient-free kernelized Stein discrepancy (GF-KSD), which leverages a Stein operator developed by Han and Liu (2018) that does not rely on the score function. The authors establish sufficient conditions for the resulting divergence to control and detect convergence. The empirical evaluation of this divergence extends its application to Stein importance sampling and Stein variational inference, surpassing the scope of the previous work by Han and Liu (2018). Strengths: **Theories**: Whilst the gradient-free Stein operator has been previously discussed in the community (Han and Liu, 2018), its theoretical properties have not received sufficient attention. This paper established conditions under which the GF-KSD can detect and control convergence of a sequence of empirical distributions to a target probability measure (Theorem 1 and 2). To my knowledge, this contribution fills an important gap in the existing literature and significantly enhances our understanding of the topic. **Discussions**: Key results on both the theoretical and empirical sides are sufficiently discussed. Limitations of the GF-KSD are thoroughly examined and supported by empirical evidence (Section 3.3). Discussions on the choice of the degree of freedom, the density, are also included (Section 3.1). **Structure**: This paper exhibits excellent writing with clear motivations throughout. J. Han and Q. Liu. Stein variational gradient descent without gradient. In Proceedings of the 35th International Conference on Machine Learning, pages 1900–1908. PMLR, 2018. Weaknesses: **Experiments**: My only major concern is over the empirical results. The problems examined in the experiments are primarily toy examples with relatively low dimensionalities. 
The highest-dimensional problem considered is an 8-dimensional inference problem for a Lotka-Volterra model. To further validate the applicability of the proposed method in more realistic scenarios, additional empirical evidence on higher-dimensional problems would be beneficial. Specifically, it would be interesting to investigate the performance of the default choice of Laplace approximation when the dimensionality of the target distribution is high. This analysis could shed light on whether the default approach continues to yield satisfactory results in non-toy scenarios. Moreover, the reported numerical instability issue in Section 4.2 warrants further investigation, particularly in high-dimensional problems. Understanding the extent to which this instability becomes prominent in higher dimensions is crucial as it may impact the practical viability of the GF-KSD approach. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: It would be helpful to elaborate on the numerical instability issue noted in Section 4.2. Does this issue only occur when using GF-KSD for Stein variational inference, or does it also occur in the experiments in Section 4.1? Is that an artefact of the specific form of the target densities chosen for this experiment? Can this be avoided by a judicious choice of $q$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Examples where the proposed approach can fail are extensively discussed in Section 3.3. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your kind comments and for your eloquent assessment of our manuscript. As the reviewer most familiar with the literature on KSD, we sincerely hope that you will champion our work for acceptance, in light of the rather mixed scores we have received. > To further validate the applicability of the proposed method in more realistic scenarios, additional empirical evidence on higher-dimensional problems would be beneficial. We completely agree; as we mentioned to Reviewer 9g9f, our manuscript is limited to establishing the theoretical foundations of Gradient Free KSD (GF-KSD), enumerating its possible failure modes, and providing positive proofs of concept. Subsequent follow-up work will be required to understand in detail the scenarios where GF-KSD can be effective and when it will encounter failure modes. We will be sure to emphasise the need for a subsequent detailed empirical investigation in the conclusion section of our revised manuscript, including on higher-dimensional problems. > It would be helpful to elaborate on the numerical instability issue noted in Section 4.2. Does this issue only occur when using GF-KSD for Stein variational inference, or does it also occur in the experiments in Section 4.1? Is that an artefact of the specific form of the target densities chosen for this experiment? Can this be avoided by a judicious choice of $q$? Thank you for the opportunity to clarify this point: The numerical issues that arose in Section 4.2 were encountered due to our simultaneous learning of both $p$, the distributional target, and $q$, the auxiliary distribution used to set up GF-KSD. At early stages along the optimisation path, where the distribution $q$ is not yet a reasonable approximation of $p$, extreme values of the ratios $q/p$ were encountered and gradient clipping was required to regularise the stochastic gradient descent. 
This problem was not present in Section 4.1, where $q$ was fixed as a Laplace approximation to $p$, such that extreme values of $q/p$ were not encountered. These results are encouraging; GF-KSD is numerically stable in the context of Stein Importance Sampling when the Laplace approximation is used. The use of GF-KSD in Stein Variational Inference requires further investigation; for example, there is at present no theoretical justification for the joint learning of $q$ along the optimisation path - we simply report that this numerical strategy was found to work well and represents "a promising avenue for further research". --- Rebuttal Comment 1.1: Comment: Thank you for your response. With that said, I still think a non-toy numerical example would significantly improve the paper -- the proposed method is advertised to be a rescue when the score function of the statistical model is impractical to evaluate; despite the fact that GF-KSD is more practical than the standard KSD in these cases, for the presented examples many alternative methods exist and have been demonstrated to work reasonably well, making it unclear whether this method is practically useful (see also the comments by Reviewer 9g9f). Also, my concern of whether the default choice of Laplacian approximation can still give good performance in non-toy, high dimensional examples is not yet answered. Due to the above, I have kept my scores, but happy to re-evaluate if the authors provided convincing numerical evidence to address these concerns before the rebuttal period ends.
Summary: This paper proposed a posterior approximation using a new Stein discrepancy, which does not require derivatives of the statistical model. For that purpose, the authors derived the new discrepancy, called gradient-free KSD, and studied its statistical and convergence behaviors theoretically. Then the authors developed algorithms for differential equations, for which stable computation of derivatives is difficult, and a new sampling algorithm that bypasses the Hessian calculations. Strengths: - A new KSD that does not require gradients, similar to the idea of importance sampling, is proposed. This leads to new algorithms for differential equations where the gradient is difficult to compute and for problems requiring Hessian calculation. - The authors studied the theoretical properties of the proposed GF-KSD by extending the existing KSD theory. - Not only theoretical analysis but also detailed numerical investigations of the choice of parameters and $q$ are carried out with actual use in mind. Weaknesses: - The writing style is such that the main paper alone is not complete, and it is assumed that the reader will read the Appendix. For example, Eq. 6 of Line 115 does not appear in the main text, and the tilted Wasserstein distance defined in Theorem 1 is introduced without any explanation of its properties in the main text. - The writing style could be improved since the discussion about existing research and the explanation of the proposed method are mixed, making the paper difficult to read. - Some parts are mathematically undefined or under-discussed: - In Definition 2, $\sup$ is undefined. - In Line 158, at the end, $\not\to$ is undefined. - I don't know how widely the tilted Wasserstein distance (TWD) in Theorem 1 is known to the general public, but there is no discussion of the properties of TWD. 
Therefore I could not understand how important Theorem 1 is, that is, how important it is when it is said that convergence of TWD leads to convergence of GF-KSD; even after reading the proof of Theorem 1, I could only understand that TWD is a convenient form of the usual Wasserstein distance, which is obtained after applying the triangle inequality. - I could not understand the importance of the proposed method because I am not sure for what problems the proposed method is effective. I agree that it may be useful for differential equation problems, but the authors only applied the method to very small models of Lotka-Volterra in the experiments. In such a setting, MCMC is the standard approach, and even if the model is high-dimensional, we can solve it efficiently by variational inference. Also, although the combination with Stein variational inference seems interesting, I wondered if GF-KSD is really flexible enough to generate samples for complex real data under the two restrictions suggested in Section 3. One restriction is that the tail of $q$ should not be far from the target distribution; the other is that it must not be high-dimensional. I think the application to differential equations seems promising, so it would be better to find a problem setting where GF-KSD is more useful than MCMC and standard variational inference. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I would appreciate it if the authors would answer the concerns described in Weaknesses. As for minor questions: - What do the dotted and solid lines in Figure 1(b) correspond to? - Is it required to adjust parameters of the Laplace distribution, KDE, and GMM for $q$ in some way? If so, what is the recommended method? - Looking at Figure 3 (b), it seems that the number of samples ($n$) must be very large ($\log n=5$, i.e., $n=150$) even for low-dimensional problems such as $d=8$ in order for there to be any difference in energy distance. Is my understanding correct? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitation of the proposed method is discussed in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking a look at our manuscript. Please see our global response to all Reviewers on the topic of how our manuscript has been presented. > In Definition 2, sup is undefined The symbol "$\sup$" is the supremum; our understanding is that we can assume familiarity with "$\sup$" for NeurIPS, but we respectfully defer to the Area Chair for guidance on this point. > In Line 158, at last, $\nrightarrow$ is undefined. The symbol "$\nrightarrow$" is the logical opposite of $\rightarrow$, i.e. "does not converge"; our understanding is again that we can assume familiarity with "$\nrightarrow$" for NeurIPS, but we again respectfully defer to the Area Chair for guidance on this point. > I don't know how widely the tilted Wasserstein distance (TWD) in Theorem 1 is known to the general public, but there is no discussion of the properties of TWD. Therefore I could not understand how important Theorem 1 is, that is, how important it is when it is said that convergence of TWD leads to convergence of GF-KSD; even after reading the proof of Theorem 1, I could only understand that TWD is a convenient form of the usual Wasserstein distance, which is obtained after applying the triangle inequality. Thank you for the opportunity to clarify this point: The TWD was introduced quite recently, in Proposition 3.3 of Huggins and Mackey 2018, so it is not widely known at present. The special case where the tilting function $g(x) = 1$ is constant corresponds to the standard 1-Wasserstein distance. Loosely speaking, for a tilting function $g(x)$ that is bounded away from $0$ on a compact subset $K \subset \mathbb{R}^d$, the topologies induced by TWD and 1-Wasserstein distances will be identical (i.e. convergence in one implies convergence in the other, and vice versa). 
This is because the tilting function $g(x)$ can be viewed as importance weights, so that instead of comparing two measures $p(x)$ and $q(x)$ directly we compare instead their importance re-weighted measures, proportional to $g(x)p(x)$ and $g(x)q(x)$, and it is clear that convergence of either implies convergence of the other under compactness. However, if $g(x)$ can approach either $0$ or $\infty$, which is what happens in our work concerning measures on $\mathbb{R}^d$, then the topologies of TWD and 1-Wasserstein distances do not coincide in general. On the other hand, we note that TWD induces a much weaker topology than, for example, divergences such as Kullback--Leibler or Hellinger, since it does not require absolute continuity of measures. Further, we emphasise that Theorem 2 (convergence control) is really the main result of our manuscript, as opposed to Theorem 1 (convergence detection). This is because convergence control justifies the design of algorithms that seek to minimise GF-KSD, guaranteeing the consistency of the approximations that are generated. Based on your useful feedback, we commit to expand our discussion of TWD in the manuscript to include the above points, which should broaden the accessibility of our manuscript. > I could not understand the importance of the proposed method because I am not sure for what problems the proposed method is effective [...] I wondered if GF-KSD is really flexible enough to generate samples for complex real data. This is an excellent question; our manuscript takes only a first step toward answering it, establishing the theoretical foundations of Gradient Free KSD (GF-KSD), enumerating its possible failure modes, and providing positive proofs of concept. Subsequent follow-up work will be required to understand in detail the scenarios where GF-KSD can be effective and when it will encounter failure modes. 
We will be sure to emphasise the need for a subsequent detailed empirical investigation in the conclusion section of our revised manuscript. > What do the dotted and solid lines in Figure 1(b) correspond to? In the caption of Figure 1 we state that "the colour and style of each curve in (b) indicates which of the sequences in (a) is being considered", but we will rephrase this to be explicit that (e.g.) the dashed blue curves in (b) correspond to the sequence of approximating distributions displayed as dashed blue curves in (a). > Is it required to adjust parameters of the Laplace distribution, KDE, and GMM for $q$ in some way? If so, what is the recommended method? Please allow us to emphasise that KDE and GMM are not part of any method that we propose; we simply used these to generate some examples of distributions $q$, for the purpose of illustration in Figure 1. We will add additional emphasis on this point in the manuscript. The Laplace approximation has no degrees of freedom to be specified. > Looking at Figure 3 (b), it seems that the number of samples ($n$) must be very large ($\log n = 5$, i.e., $n = 150$) even for low-dimensional problems such as $d=8$ in order for there to be any difference in energy distance. Is my understanding correct? That is completely correct, but we would argue that $n = 150$ is not a "very large" number of samples if we aim to accurately represent an 8-dimensional target. Of course, "very large" will be application-specific, but if we were to take a Cartesian product of a grid of size just 3 in each dimension then we would need $n = 3^8 = 6561$ samples, which is much greater than $n = 150$. Thank you once again for your constructive suggestions on how our manuscript can be improved; we hope that our commitment to resolve them will be reflected in your updated reviewer scores.
Summary: The authors study the kernel Stein discrepancy based on a Stein operator, introduced previously in [Liu 2018], which does not require access to the gradient of the target density. As their main theoretical result, they prove that, under certain assumptions on the target distribution (and the auxiliary distribution q) and the approximating sequence, the discrepancy controls weak convergence. The authors provide recommendations for the choice of the auxiliary distribution q in the construction of the discrepancy and present an experimental study which identifies certain failure modes of the discrepancy. They also discuss two applications to posterior approximation. Strengths: The idea of the authors to use the gradient-free Stein operator in order to define a new kind of KSD is interesting and worth studying. There are certainly a lot of situations where the evaluation of the gradient of the model is costly in practice. Giving the practitioners an option to avoid evaluating it when computing the KSD is certainly useful. The discussion of the properties of the new GF-KSD is very thorough and based on both theoretical results and a substantial experimental study. I particularly liked the experiments revealing failure modes of GF-KSD as those are not necessarily captured by the theoretical results. Weaknesses: The main theoretical contribution of the paper seems to be provided by Theorem 2. However, I am a bit concerned about its real applicability. The authors require that the auxiliary distribution $q$ have tails heavier than the target $p$. On the other hand, they require that $q$ is distantly dissipative, which precludes the use of heavy-tailed $q$. This all means that $p$ must not be heavy-tailed if the assumptions of Theorem 2 are to be satisfied. Moreover, the authors suggest four potential choices of $q$ (Prior, Laplace, GMM and KDE) but note that GMM and KDE are impractical in general.
At the same time, the Prior method does not satisfy the assumptions of Theorem 2 for heavy-tailed priors. The Laplace method, on the other hand, seems to satisfy the assumptions of Theorem 2 only for targets which are sub-Gaussian. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Related to what I wrote above, could the authors state clearly what class of targets $p$ their Theorem 2 applies to? Could they also try to characterise the class of targets $p$ for which one can easily construct a useful auxiliary distribution $q$ in practice (using one of the methods of section 3.1), such that $q$ and $p$ satisfy the assumptions of Theorem 2? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: I believe the real applicability of the theoretical results presented by the authors is somewhat limited, to an extent that is not clearly acknowledged in the paper. But I am looking forward to reading the authors' response as perhaps I'm missing something. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback, especially at short notice, which is greatly appreciated. > This all means that $p$ must not be heavy-tailed if the assumptions of Theorem 2 are to be satisfied. This is completely correct, but we emphasise that the standard kernel Stein discrepancy (KSD) also has this requirement. That is, the standard KSD is only guaranteed to have convergence control in settings where the target distribution $p$ is distantly dissipative (Theorem 8 of Gorham and Mackey, 2017), meaning that $p$ cannot be heavy-tailed. Despite this limitation, standard KSD has been widely used for applications such as sampling and variational inference, for which convergence control is required. This is encouraging, in the sense that there are a wide variety of important problems for which $p$ is not heavy-tailed and KSD has been successfully used. There is active research on Stein's method for heavy-tailed targets in the Probability community [for example, UB2022], but (as far as we are aware) the application of these techniques to the problem of posterior approximation has yet to be attempted. Thank you for raising this point: we will explicitly highlight the non-applicability of both KSD and GF-KSD to heavy-tailed $p$ in the revised manuscript, when discussing the preconditions of Theorem 2. > Related to what I wrote above, could the authors state clearly what class of targets $p$ their Theorem 2 applies to? Could they also try to characterise the class of targets $p$ for which one can easily construct a useful auxiliary distribution $q$ in practice (using one of the methods of section 3.1), such that $q$ and $p$ satisfy the assumptions of Theorem 2? Thank you for the opportunity to clarify this point -- we first provide a mathematical answer, and then also a practical answer which addresses the fact that an appropriate and actionable choice of $q$ is required.
Mathematical answer: If we choose $q=p$ we obtain the standard KSD, and in this case the class of $p$ for which KSD provides convergence control is known to include all distributions $p(x)$ on $\mathbb{R}^d$ with Lipschitz continuous $\nabla \log p(x)$ and _distantly dissipative_ tails (see Section 2.2). The latter condition is implied when $p$ is strongly log-concave outside a compact set (i.e. sub-Gaussian tails). For any $p$ (not necessarily distantly dissipative) with sub-Gaussian tails there exists a dominating Gaussian distribution $q$, i.e. with $\inf_{x \in \mathbb{R}^d} q(x) / p(x) > 0$, and since Gaussians are distantly dissipative Theorem 2 can be applied. Thus GF-KSD is in principle _more widely applicable_ than standard KSD. We will make sure to emphasise this point in the revised manuscript. Practical answer: Of course, if $p$ is implicitly defined then it may not be clear how $q$ should be selected. If $p$ itself is distantly dissipative then $q$ could be taken to be a tempered version of $p$ (i.e. $q(x) \propto p(x)^t$ for some $0 < t < 1$), which would automatically guarantee that $\inf_{x \in \mathbb{R}^d} q(x) / p(x) > 0$. But this "solution" raises the question of how to take the gradient of $q$, if taking the gradient of $p$ is presumed to be difficult. As a more practical strategy, we propose to take $q$ to be a Laplace approximation of $p$, in the hope that the curvature of $p$ at the mode is an accurate proxy for the tail decay of $p$. This strategy was seen to perform well in the experiments reported in Sections 3.1 and 4.1, but one can of course construct examples where this strategy will fail. In preparatory work, we also considered taking $q$ to be a Laplace approximation with an "inflated" covariance matrix: a conservative choice aimed at guaranteeing the $\inf_{x \in \mathbb{R}^d} q(x) / p(x) > 0$ condition is satisfied. This did not lead to improved performance in any of our experiments, so it was omitted from the manuscript.
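To make the Laplace strategy concrete, here is a minimal gradient-free sketch (our own illustration, not code from the paper; the function name `laplace_approximation` and the finite-difference Hessian are assumptions): the mode of $p$ is located with a derivative-free optimiser, and $q$ is taken to be $N(\text{mode}, H^{-1})$, where $H$ is the Hessian of $-\log p$ at the mode.

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approximation(log_p, x0, eps=1e-4):
    """Hypothetical sketch: build a Gaussian q = N(mode, H^{-1}) from an
    unnormalised log-density `log_p`, without using its gradient.
    H is the Hessian of -log_p at the mode, estimated by central
    finite differences with step `eps`."""
    # Derivative-free mode finding, in keeping with the gradient-free setting.
    res = minimize(lambda x: -log_p(x), x0, method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-8})
    mode = res.x
    d = mode.size
    H = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            # Central difference for the (i, j) entry of the Hessian of -log_p.
            H[i, j] = (-log_p(mode + ei + ej) + log_p(mode + ei - ej)
                       + log_p(mode - ei + ej) - log_p(mode - ei - ej)) / (4 * eps**2)
    return mode, np.linalg.inv(H)

# For a standard Gaussian target the Laplace approximation is exact.
mode, cov = laplace_approximation(lambda x: -0.5 * float(np.dot(x, x)),
                                  np.array([1.0, -1.0]))
```

For a sub-Gaussian target, the resulting $q$ could then be checked heuristically against the dominance condition $\inf_x q(x)/p(x) > 0$, e.g. by evaluating the ratio on a coarse grid.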
Please allow us to emphasise that this is an instance of the fundamental problem that if $p$ is implicitly defined via Bayes' theorem, such that the mathematical properties (tail decay, curvature, etc) of $p$ are unknown to the user, then we cannot hope to predict how well _any_ sampling algorithm will work. In specific applications where the prior and the likelihood are mathematically tractable, we may be able to _derive_ appropriate choices of $q$ using methodology similar to that used for constructing dominating measures for rejection sampling; this would be an interesting direction for further work. All of the above discussion will be incorporated into the revised manuscript. Thank you once again for your comments and questions, which we hope we have adequately addressed. To summarise: the theoretical applicability of GF-KSD is strictly greater than that of standard KSD in terms of convergence control, but neither KSD nor GF-KSD is appropriate when $p$ is heavy-tailed. The practical choice of $q$ requires some care, but a promising strategy based on the Laplace approximation was found to work well in the experiments we performed. We hope that our commitment to clarify these points in the manuscript will allow you to increase your score for our work. References: [UB2022] Upadhye NS and Barman K, 2022. A unified approach to Stein's method for stable distributions. Probability Surveys, 19, pp.533-589. --- Rebuttal Comment 1.1: Title: Reply to the authors Comment: Thank you very much for your answers. I have increased my rating of the paper.
Rebuttal 1: Rebuttal: __On the Presentation of our Manuscript__ Our manuscript is theoretical in nature and is written for readers who have some familiarity with the concept of kernel Stein discrepancy (KSD). This is a small but important subsection of the NeurIPS community, who we believe will be the readers most interested in this work (e.g. KSD is currently being used in the community for applications such as goodness-of-fit testing, sampling, gradient estimation, and variational inference). Excellent introductions to KSD already exist, such as "A Short Introduction to Kernelized Stein Discrepancy" on Qiang Liu's website, or "Measuring Sample Quality with Kernels" by Gorham and Mackey (ICML 2017); there is also now a Wikipedia page on "Stein Discrepancy". For readers with this background, we believe our manuscript strikes an appropriate balance between precision and concision. For example, Reviewer DoGh comments that "this paper exhibits excellent writing with clear motivations throughout".
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Efficient Adversarial Attacks on Online Multi-agent Reinforcement Learning
Accept (poster)
Summary: This paper proposes a series of novel attacks against MARL under different assumptions. From the theoretical perspective, it then analyzes the efficacy of the proposed attacks. Strengths: + The paper proposes a series of novel attacks against online MARL and provides a decent theoretical analysis for the efficacy of the proposed attack. This paper lays a theoretical foundation for adversarial attacks against online MARL. The proposed attacks can potentially be extended by follow-up works to launch practical attacks against MARL systems. The novelty of the problem and the technical depth are enough for a top-tier ML conference. Weaknesses: I do not identify any critical weakness in the proposed technique and the corresponding theoretical analysis. For a theoretical paper, it may be too much to ask questions about the practical perspective. However, as an attack paper, it is somewhat important to discuss the practicability of the proposed attacks. As such, I would suggest the authors add such a discussion related to the proposed technique. For example, the authors could provide some suggestions for practitioners to solve the proposed objective functions, which could be hard to optimize. In addition, what would be the computational cost for the proposed objective functions? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why the proposed assumptions that the attackers can freely alter the target agents' actions and rewards are practical and realistic? 2. What are the practical suggestions for optimizing the proposed objective functions? 3. What is the computational cost of solving the proposed attack objective functions? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper does not include a section discussing the limitations of the proposed attacks. Given that the paper is mainly a theoretical analysis of attack efficacy, it could include a discussion section (maybe in the appendix) on the potential limitations on the practical side, i.e., how realistic are the assumptions that the attackers can freely manipulate the action and reward? What is the computational cost of solving the proposed attack objective functions? In addition, the authors checked the broader-impact item of the checklist. However, I do not find a (sub)section or paragraph clearly addressing the potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Why the proposed assumptions that the attackers can freely alter the target agents' actions and rewards are practical and realistic? In some applications of RL models, action decisions and reward signals may need to be transmitted over communication links. When data packets containing the reward signals and action decisions are transmitted through these links, an attacker can implement adversarial attacks by intercepting and modifying the packets. Hence, the proposed adversarial attacks are possible and realistic. As a first step toward understanding the potential risks of different adversarial attacks on online MARL, we did not limit the attacker's abilities: the attacker can freely alter the target agents' actions and rewards. There may exist situations in which the attacker cannot freely alter arbitrary actions and rewards and can only manipulate some of them. Considering a limited attacker and finding defending algorithms is an important future direction for us to pursue. > What are the practical suggestions for optimizing the proposed objective functions? On the practical side, the proposed mixed attack strategy works for many realistic problems. The mixed attack strategy does not require any information about the underlying Markov game. It is simple yet effective. As a first step toward understanding the impact of adversarial attacks on online MARL, our analysis is focused on the tabular case. The proposed mixed attack strategy can also work in some realistic problems where the action space is continuous. In continuous-action problems, the attacker does not aim to force the agents to learn exactly the target policy but rather policies near the target policy, and the proposed mixed attack strategy still works. For example, in a continuous space, the attacker can change the reward of a non-target action to $r_{i,h} \cdot e^{-c |a_{i,h}-a^+_{i,h}|}$ instead of $0$, in order to avoid sparse rewards. 
Then the agents will still learn a policy that is close to the target policy. > What is the computational cost of solving the proposed attack objective functions? For the proposed black-box attack strategy (the approximate mixed attack), the computational cost is $O(S^2AH\tau+mKH)$. The proposed algorithm computes the $Q$-values for each visited state-action pair at every step of every episode in the exploration phase. The computation of each $Q$-value costs $O(S)$. Thus, the total computational cost in the exploration phase is $O(S^2AH\tau)$. In the attack phase, the attacker only needs to change the action and the reward for each agent, so the computational cost in the attack phase is $O(mKH)$. We will add this discussion to the revised paper. --- Rebuttal Comment 1.1: Comment: Thank the authors for the clarification. I do not have further questions. Please add the rebuttal changes to the paper. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We thank the reviewer again for the valuable feedback and the time invested. We will definitely add the rebuttal changes to the paper.
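The reward-manipulation step discussed in this thread can be sketched as follows (a hypothetical illustration, not the authors' code; the function name `poison_reward` and the exponentially decaying continuous-action variant are assumptions based on the description — the exponent decays with distance from the target action so that far-away actions receive less reward):

```python
import numpy as np

def poison_reward(reward, action, target_action, c=1.0, continuous=False):
    """Hypothetical sketch of the mixed attack's reward manipulation.
    Tabular case: non-target actions receive reward 0.
    Continuous case: the reward decays exponentially with the distance
    from the target action, avoiding a sparse reward signal."""
    if np.allclose(action, target_action):
        return reward  # target action: reward is left untouched
    if continuous:
        return reward * np.exp(-c * np.abs(np.asarray(action) - np.asarray(target_action)).sum())
    return 0.0  # tabular case: zero out non-target actions
```

Under this rule, sub-linear-regret agents are steered toward the target action because every deviation from it is strictly less rewarding.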
Summary: This paper presents a novel approach to adversarial attacks in Multi-agent Reinforcement Learning (MARL), offering significant insights into various attack settings. The authors explore different attack strategies, including reward poisoning, action poisoning, and a combination of both, across various settings: white-box, grey-box, and black-box. The paper's key contribution is the efficient adversarial attack strategy that perturbs both the action and reward of the agents. Strengths: - The paper addresses a novel and significant problem of adversarial attacks in Multi-agent Reinforcement Learning (MARL) for reward/action poisoning. - The paper is moderately easy to follow. - The authors have considered different settings such as white-box, grey-box, and black-box, which adds depth to the study. Weaknesses: - The related work section of the paper does not seem to cover the literature comprehensively. It would be beneficial for the authors to provide a more thorough review of the existing literature, especially focusing on the works that have addressed similar problems in the past, how they did it (for reward/attack poisoning). - The paper lacks numerical results. While the authors have provided some simulation results in the appendix, it would be beneficial to include more numerical results in the main body of the paper to strengthen their arguments. Furthermore, this is quite a practical problem, and it is beneficial to thoroughly assess the concepts introduced by the authors by means of numerical simulations. - Attack detection is not discussed by the authors. - The authors have analyzed the performance of the proposed attack algorithm on V-learning. It would be interesting to see how the proposed black-box strategy performs on other MARL algorithms. - It's not clear how tight the bounds proposed by the authors are. 
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See above Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Authors do not seem to discuss limitations thoroughly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The related work section of the paper does not seem to cover the literature comprehensively. It would be beneficial for the authors to provide a more thorough review of the existing literature, especially focusing on the works that have addressed similar problems in the past, how they did it (for reward/attack poisoning). Thank you very much for this comment. We will add the following discussions to the revised paper: [Ma et al., 2019] studies reward poisoning attacks against batch RL in which the attacker is able to gather and modify the collected batch data. [Rakhsha et al., 2020] proposes a white-box environment poisoning model in which the attacker could manipulate the original MDP into a poisoned MDP. [Behzadan and Munir, 2017, Zhang et al., 2020, Rangi et al., 2022] study online white-box reward poisoning attacks in which the attacker could manipulate the reward signal before the agent receives it. [Sun et al., 2021] proposes a practical black-box poisoning algorithm called VA2C-P. Their empirical results show that VA2C-P works for deep policy gradient RL agents without any prior knowledge of the environment. [Rakhsha et al., 2021] develops a black-box reward poisoning attack strategy called U2 that can provably attack any efficient RL algorithm. [Xu et al., 2021] investigates training-time attacks on RL agents, where the introduced attacker can manipulate the environment. [Wang et al., 2021] studies the backdoor attack in two-player competitive RL systems. The trigger is the action of another agent in the environment. They propose a unified method to design fast-failing agents, which fail quickly when the trigger occurs. [Liu et al., 2022] studies controllable attacks by constraining the state distribution shift caused by the adversarial policy, offering a more controllable attack scheme. [Chen et al., 2022] considers a situation in which an $\alpha$-fraction of agents are adversarial and can report arbitrary fake information. 
They design two Byzantine-robust distributed value iteration algorithms that can identify a near-optimal policy with near-optimal sample complexity. [Mohammadi et al., 2023] studies targeted poisoning attacks in a two-agent setting where an attacker implicitly poisons the effective environment of one of the agents by modifying the policy of its peer. [Ma et al., 2022] considers a game redesign problem where the designer knows the full information of the game and can redesign the reward functions. The proposed redesign methods can incentivize players to take a specific target action profile frequently with a small cumulative design cost. [Gleave et al., 2020, Guo et al., 2021] study poisoning attacks on multi-agent reinforcement learners, assuming that the attacker controls one of the learners. [Wu et al., 2022] studies reward poisoning attacks on offline multi-agent reinforcement learners. > The paper lacks numerical results. Due to the page limitation, we put the numerical results in Appendix B. In addition, in the author rebuttal pdf file, we also added more experimental results to discuss how the attack loss and cost are affected by the environment's parameters. > Attack detection is not discussed by the authors. We agree with the reviewer's comment that we did not consider the attack detection problem, which is certainly very important. In this paper, we assumed that the agents do not know the existence of the attacker. In fact, in the online setting, if the agents have no prior information about the MG, the proposed white-box and gray-box attacks are hard to detect. As we consider the Markov attack strategy in this paper, the post-attack environment under the Markov attack strategy is still a Markov game. Without a reference, the agents cannot determine whether the environment they observe is a post-attack environment or an attack-free environment. 
The proposed black-box attack may be detected, as the transition probabilities of the post-attack environment change over time. The goal of our paper is to understand and identify the impacts of different adversarial attacks. We hope our work can inspire follow-up work that can detect and mitigate such attacks so that RL models can be used in safety-critical applications. This is an important future direction for us to pursue. We will add these discussions to the revised paper. > It's not clear how tight the bounds proposed by the authors are. Currently, we are not able to give an information-theoretic lower bound, as the attack cost/loss depends on both the Markov game and the agents' learning algorithms. However, we can consider a special case of attacking V-learning to discuss the tightness of the bounds in Theorem 9. V-learning uses an adversarial bandit method to choose actions, which has a regret lower bound scaling as $\Omega(\sqrt{K})$. Consider a Markov game where the policy that maximizes the attacker's rewards is the unique {NE, CE, CCE} of the original Markov game. The attacker does not need to attack. In this case, there exists a white-box attacker whose attack cost is 0 and whose attack loss scales as $\Omega(\sqrt{K})$. The bounds in Theorem 9 scale as ${K}^{2/3}$. The proposed black-box attack's cost and loss are sub-linear but not optimal. This is because we use an exploration-then-attack method. An adaptive attack method could potentially achieve the optimal $\sqrt{K}$ dependency for cost/loss. Finding the optimal attack strategy is an interesting future direction for us to pursue. --- Rebuttal Comment 1.1: Comment: I thank the authors for their comments. However, I still have concerns regarding the problem of detection and tightness of the bounds. Furthermore, I understand that the numerical results are in the appendix, but given the nature of the topic, I believe it is better to move the results to the main body of the paper.
Summary: This paper investigates the impact of adversarial attacks on Multi-Agent Reinforcement Learning (MARL) models. The authors propose an attacker who can manipulate rewards and actions to guide the agents into a target policy or maximize cumulative rewards under a specific reward function. The paper presents an adversarial attack model where the attacker aims to force the agents to learn a target policy or maximize cumulative rewards. Loss and cost functions are used to evaluate the effectiveness of the attack, with cost representing the cumulative manipulation and loss measuring the deviation from the target policy or regret compared to the attacker's optimal policy. The attack problem is studied in three settings: white-box, gray-box, and black-box. The paper demonstrates the limitations of action poisoning-only attacks and reward poisoning-only attacks. Certain Markov Games (MGs) are identified where these strategies are inefficient. However, sufficient conditions are provided under which these attacks can efficiently target MARL algorithms. Efficient strategies for action poisoning and reward poisoning attacks are introduced, and their costs and losses are analyzed. Then a mixed attack strategy is proposed in the gray-box setting, and an approximate mixed attack strategy is introduced for the black-box setting. These strategies can force sub-linear-regret MARL agents to choose actions according to the attacker's target policy with sub-linear cost and loss. The impact of the approximate mixed attack strategy on V-learning, a decentralized MARL algorithm, is investigated. No experiments have been done. Strengths: Originality: Firstly, this paper introduces an adversarial attack model specifically tailored for Multi-Agent Reinforcement Learning (MARL) systems. While adversarial attacks in single-agent RL have been studied before, this paper focuses on the challenges and implications of attacks in the context of MARL. 
The consideration of both action poisoning and reward poisoning, as well as the introduction of a mixed attack strategy, adds originality to the research. Quality: The analysis considers different attack settings, providing a comprehensive understanding of the impact of adversarial attacks on MARL algorithms. The paper also provides conditions under which the attack strategies can be effective, contributing to the quality of the research. Clarity: The paper is easy to follow. The introduction provides an overview of the problem and motivation, while the contributions are explicitly listed, making it easy for readers to understand what the paper aims to achieve. The attack model, attack settings, and attack strategies are explained in a concise and understandable manner. Significance: With the increasing use of MARL in various applications, understanding the vulnerabilities and countermeasures against adversarial attacks is crucial for the safe and reliable deployment of these systems. By investigating the limitations of existing attack strategies and proposing new attack strategies, the paper sheds light on the security aspects of MARL and contributes to the development of more robust and trustworthy MARL algorithms. Weaknesses: Limited comparison with existing work: The paper lacks a comprehensive comparison with prior research on adversarial attacks in MARL. While it briefly mentions existing works on attacks in single-agent RL and MARL, it does not provide a detailed comparison or highlight the novelty and differentiation of its proposed attack model and strategies. A more extensive discussion and comparison with related works would strengthen the paper's contribution and contextualize its findings within the existing literature. No empirical evaluation: The paper would benefit from an empirical evaluation of the proposed attack strategies. 
While the analysis provides theoretical insights into the effectiveness of the attack strategies, it lacks practical validation through experiments on real-world or simulated MARL environments. Conducting experiments and presenting empirical results would enhance the credibility and applicability of the proposed attack strategies. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some experiments would benefit the paper. It would be great if the authors could resolve my concerns in the weaknesses section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > "Limited comparison with existing work" Thank you very much for your comment. We will add the following discussion to the revised paper. Adversarial attacks on single-agent RL have been studied in various settings [Behzadan and Munir, 2017, Huang and Zhu, 2019, Ma et al., 2019, Zhang et al., 2020b, Sun et al., 2021, Rakhsha et al., 2020, 2021, Rangi et al., 2022]. Among the existing works on attacks in single-agent RL, the most related paper is [Rangi et al., 2022], which studies the limitations of reward-only manipulation or action-only manipulation in single-agent RL and proposes an attack strategy combining reward and action manipulation. There are multiple differences between our work and [Rangi et al., 2022]. First, MARL is modeled as a Markov game, while single-agent RL is modeled as an MDP. In a Markov game, each agent's action impacts other agents' rewards. Second, the learning objectives of single-agent RL and MARL are different. Single-agent RL algorithms learn the optimal policy, while MARL algorithms learn an equilibrium. Since the attacks on one agent impact all other agents and the equilibrium is the agents' learning objective, we have to develop techniques to carefully analyze the impact of attacks and bound the attack cost. For example, we developed the value function difference decomposition in equations (22)-(27), (29)-(38), (41)-(43), and (51), which builds a connection between the value function difference and the number of times that the non-target actions are chosen in the MARL setting. [Ma et al., 2022] considers a game redesign problem where the designer knows the full information of the game and can redesign the reward functions. The proposed redesign methods can incentivize players to take a specific target action profile frequently with a small cumulative design cost. Ma's work considers the normal-form game, whereas we consider the Markov game. 
The normal-form game is a special case of the Markov game with $H = 1$. [Gleave et al., 2020, Guo et al., 2021] study poisoning attacks on multi-agent reinforcement learners, assuming that the attacker controls one of the learners. In our work, the attacker is not one of the learners but an external unit outside the original Markov game. The attacker can poison the rewards/actions of all learners at the same time and can thus fool the learners into learning a specific policy. [Wu et al., 2022] studies reward poisoning attacks on offline multi-agent reinforcement learners, where the attacker can poison the rewards of the agents. We consider online MARL. In offline MARL, the attacker can estimate the underlying Markov game from the offline datasets; in online MARL, the attacker may not have knowledge (reward/transition functions) of the Markov game. > "No empirical evaluation." Due to the page limit, we put the numerical results in Appendix B. In particular, in Section B.1, we empirically compare the performance of the action-poisoning-only attack strategy ($d$-portion attack), the reward-poisoning-only attack strategy ($\eta$-gap attack), and the mixed attack strategy. We consider two different target policies. In Case 1, any action-poisoning-only attack or reward-poisoning-only attack will fail. In Case 2, Conditions 1 and 2 hold, so the $d$-portion attack and the $\eta$-gap attack work. Furthermore, in Section B.2, we consider a synthetic environment and empirically compare the performance of the mixed attack strategy and the approximate mixed attack strategy. In addition, in the author rebuttal pdf file, we added more experimental results discussing how the attack loss and cost are affected by the parameters. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: I would like to thank the authors for their detailed responses to my questions. I have no concerns now and would like to raise my rating from 5 to 6.
Though I now see that the experiments are in the appendix, I still suggest putting some experimental results in the main text when preparing the camera-ready version. --- Reply to Comment 1.1.1: Title: Thanks! Comment: We thank the reviewer again for the thoughtful comments and valuable recommendations. We will definitely add some experimental results to the main text of the revised version.
Summary: This paper studies adversarial attacks in online multi-agent RL, focusing on reward and action poisoning attacks. It provides a set of characterization results for three different attack modalities: white-box, gray-box and black-box attacks. The authors first discuss the limitations of action-only and reward-only poisoning attacks, showing that there are instances of the problem setting where these are not efficient and successful. However, for white-box versions of these attacks, the authors provide sufficient conditions that enable feasibility. For all three attack modalities, they demonstrate that the combination of action and reward poisoning attacks can always force targeted behavior, and provide upper bounds on the expected cost and loss of the proposed attack strategies. Strengths: - The paper is clearly written and is enjoyable to read. While there are typos, these should be easy to fix. More importantly, the technical content is clearly conveyed, and the paper provides intuitions behind the main technical results. - The paper complements prior work on poisoning attacks in MARL, which considered the offline RL setting. Hence, the characterization results are novel and contribute to the line of work on poisoning attacks in RL. That said, the results appear to be similar to the ones presented in [Rangi et al., 2022]. It would be useful to have additional discussion on how these differ in terms of their technical content, proof techniques, etc. - The paper considers several variations of poisoning attacks under different attack modalities, while showcasing the existence of efficient and successful reward+action poisoning attack strategies. Such a systematic study is likely to be valuable to researchers working on poisoning attacks in RL. - The paper provides a good overview of the related work on adversarial attacks in RL. Specifically for MARL, the most important references seem to be covered.
However, there are a couple of recent references that could be included in the list (see below). I encourage the authors to do so. > - Wang et al., BACKDOORL: Backdoor Attack against Competitive Reinforcement Learning > - Mohammadi et al., Implicit Poisoning Attacks in Two-Agent Reinforcement Learning: Adversarial Policies for Training-Time Attacks > - Liu et al., Controllable Attack and Improved Adversarial Training in Multi-Agent Reinforcement Learning > - Chen et al., Byzantine-Robust Online and Offline Distributed Reinforcement Learning Weaknesses: - In terms of results, the paper does not discuss the optimality of the upper bounds in its formal results. For example, it could be interesting to discuss the tightness of the bounds in Theorem 9, and how they compare to those of the underlying learning algorithms (V-learning). - Following up on my previous remark, additional discussion regarding some of the assumptions in the problem setting would be useful to have. For example, the paper assumes that the target policy receives strictly positive rewards (page 4). Additionally, it would be good to formally specify the learners' model, and reference it in Theorems 1 and 2. - This work is primarily theoretical, but the paper could benefit from additional experiments (beyond the numerical simulations reported in the appendix), e.g., ones that would demonstrate the efficacy of the black-box approach. Namely, the bounds in (8) and (9) are inversely proportional to $R_{min}$, so it would be interesting to investigate how this dependency affects the practicality of the proposed approach. - Practical considerations are not discussed, e.g., it would be useful to have some discussion on how to enable scalability and go beyond the tabular setting.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: It would be great if the authors could additionally answer the following clarification questions: - Could you explain how these results compare to the ones from [Rangi et al., 2022] in terms of, e.g., proof techniques? - For Eq. (1) and Theorems 1 and 2, what is the assumed model of the MARL agents? I.e., what is needed for these statements to hold? - Can you explain why the analysis does not allow sparse rewards for the target policy (the assumption on page 4)? What happens if the rewards are equal to $0$? - In the section that talks about reward-only white-box attacks, why doesn't it suffice to set the reward to the lowest value whenever the target action is not taken? - The dependency on $m$ seems to be smaller in gray-box than in white-box attacks. Could you comment on the practical benefits of white-box attacks? - Could you comment on the optimality of the upper bounds (attack loss/cost)? More specifically, could you comment on the tightness of the bounds in Theorem 9, e.g., in terms of the horizon $H$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: This paper provides a theoretical analysis of poisoning attacks in multi-agent RL, so I don't believe this work will have any negative societal impact. The limitations of this work could have been discussed in greater detail, e.g., in the concluding remarks. For example, instead of summarizing the results of the paper, the concluding remarks could focus more on the tightness of the bounds in Theorem 9, or on the practical aspects of this work, e.g., how to go beyond the tabular setting.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > "Specifically for MARL, the most important references seem to be covered. However, there are a couple of recent references that could be included in the list (see below). I encourage the authors to do so." Thank you very much for bringing these very interesting papers to our attention. We will include these recent related works in the revised version. > "In terms of results, the paper does not discuss the optimality of the upper bounds in its formal results. For example, it could be interesting to discuss the tightness of the bounds in Theorem 9, and how they compare to those of the underlying learning algorithms (V-learning)." Currently, we are not able to provide an information-theoretic lower bound, as the attack cost/loss depends on both the Markov game and the agents' learning algorithms. However, we can consider the special case of attacking V-learning to discuss the tightness of the bounds in Theorem 9. V-learning uses an adversarial bandit method to choose actions, which has a regret lower bound scaling as $\Omega(\sqrt{K})$. Consider a Markov game where the target policy that maximizes the attacker's rewards is the unique {NE, CE, CCE} of the original Markov game. The attacker then does not need to attack. In this case, there exists a white-box attacker whose attack cost is $0$ and whose attack loss scales as $\Omega(\sqrt{K})$. The bounds in Theorem 9 scale as $K^{2/3}$. The proposed black-box attack's cost and loss are sub-linear but not optimal. This is because we use an exploration-then-attack method. An adaptive attack method could potentially achieve the optimal $\sqrt{K}$ dependency of the cost/loss. Finding the optimal attack strategy is an interesting future direction for us to pursue. > "Following up on my previous remark, additional discussion regarding some of the assumptions in the problem setting would be useful to have. For example, the paper assumes that the target policy receives strictly positive rewards (page 4).
Additionally, it would be good to formally specify the learners' model, and reference it in Theorems 1 and 2." The assumption that the target policy receives strictly positive rewards is essential for the success of the gray-box attack. The attack cost/loss of the gray-box attack scales as $O(m \mathcal{R}(T)/R_{min})$. If the target policy receives zero rewards and $R_{min} = 0$, the gray-box attack does not work. In Theorems 1 and 2, we do not restrict the learners' model, as the objective in Equation (1) only involves restrictions on the post-attack Markov game. The agents do not need to be efficient learners. However, the objective in Equation (1) implies that the learners are rational and can converge to one of NE, CE or CCE if any exists. Since action-poisoning-only attacks and reward-poisoning-only attacks cannot always turn the target policy into an NE, CE or CCE, a rational learner cannot always converge to the target policy. We will add these discussions to the revised paper. > "This work is primarily theoretical, but the paper could benefit from having additional experiments (in addition to the numerical simulations reported in the appendix), e.g., those that would demonstrate the efficacy of the black-box approach." Thanks for the suggestion. We added more experimental results discussing how the attack loss and cost are affected by the parameters in the author rebuttal pdf file. > "Practical considerations are not discussed, e.g., it would be useful to have some discussion on how to enable scalability and go beyond the tabular setting." The gray-box attack strategies can be directly used in large-scale environments, even in some high-dimensional continuous environments. However, in a continuous action space, the attacker does not set the reward of a non-target action to $0$ but instead to $r_{i,h} \cdot e^{-c |a_{i,h}-a^+_{i,h}|}$, in order to avoid sparse rewards. The ideas of the black-box attack strategies still work.
However, the exploration phase should resort to function approximation methods to efficiently explore and find an approximate target policy. The attack phase then stays the same. We will add these discussions to the revised paper. > "Could you explain how these results compare to the ones from [Rangi et al., 2022] in terms of, e.g., proof techniques?" We added more discussion comparing our work with the existing work. Due to the rebuttal length limit, we put the discussion into the author rebuttal to all reviewers. > "In the section that talks about reward-only white-box attacks, why doesn't it suffice to set the reward to the lowest value whenever the target action is not taken?" For a normal-form game, the method you suggest works. However, it does not always work in Markov games. An example is provided in Appendix D.2. The intuition is that reducing the immediate reward of a non-target action may not affect the agent's choice, as the long-term reward of the non-target action could still be large. > "The dependency on $m$ seems to be smaller in gray-box than in white-box attacks. Could you comment on the practical benefits of white-box attacks?" The proposed white-box attack strategies are the action-poisoning-only attacks and the reward-poisoning-only attacks, which only manipulate the reward or the action. The proposed gray-box attack strategy (the mixed attack strategy) requires the ability to manipulate the action and the reward at the same time. The mixed attack strategy also works in the white-box case. However, in situations where the attacker can only manipulate the reward or the action, the mixed attack strategy does not work but the proposed white-box attack strategies do. > "More specifically, could you comment on the tightness of the bounds in Theorem 9, e.g., in terms of horizon?" Currently, we are not able to provide a formal information-theoretic lower bound, as the attack cost/loss depends on both the Markov game and the agents' learning algorithms.
It is an important future direction for us to pursue. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Thank you for your detailed response and for answering my questions. It would be great if you could update the paper accordingly. I don't have further clarification questions.
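As an aside on the scalability answer above, the smooth reward scaling suggested for continuous action spaces can be sketched as follows. This is a minimal illustration rather than the paper's implementation: the helper name `poison_reward`, the decay rate `c`, and the negative exponent (so that the poisoned reward decays with distance from the target action, rather than grows) are all assumptions.

```python
import math

def poison_reward(reward, action, target_action, c=1.0):
    # Instead of zeroing the reward of non-target actions (which would
    # make the poisoned reward signal sparse), scale it smoothly by a
    # factor that decays with the distance to the target action a^+.
    # The negative exponent is an assumption about the intended formula.
    return reward * math.exp(-c * abs(action - target_action))

# The target action keeps its original reward; actions farther from
# the target receive exponentially smaller (but non-zero) rewards.
r_target = poison_reward(1.0, 0.5, 0.5)  # factor 1 at the target action
r_far = poison_reward(1.0, 2.0, 0.5)     # strictly smaller, still positive
```

Under this scaling the target action remains reward-maximizing while the poisoned signal stays dense, which matches the stated motivation of avoiding the hard zero used in the tabular gray-box attack.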
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback, suggestions, and time invested. Here, we answer the reviewers' common concerns. **Related works.** Due to the page limit of the main paper, we do not provide a comprehensive comparison with prior research on adversarial attacks. We will add the following discussion to the revised paper. Among the existing works on attacks in single-agent RL, the most related paper is [Rangi et al., 2022], which studies the limitations of reward-only manipulation and action-only manipulation in single-agent RL and proposes an attack strategy combining reward and action manipulation. There are multiple differences between our work and [Rangi et al., 2022]. First, MARL is modeled as a Markov game, whereas single-agent RL is modeled as an MDP. In a Markov game, each agent's action impacts the other agents' rewards. Second, the learning objectives of single-agent RL and MARL differ: single-agent RL algorithms learn an optimal policy, while MARL algorithms learn an equilibrium. Since an attack on one agent impacts all other agents, and the equilibrium is the agents' learning objective, we had to develop techniques to carefully analyze the impact of attacks and bound the attack cost. For example, we developed the value-function difference decompositions in equations (22)-(27), (29)-(38), (41)-(43), and (51), which connect the value-function difference to the number of times that non-target actions are chosen in the MARL setting. Here, we discuss the related work on adversarial attacks on single-agent RL. [Ma et al., 2019] studies reward poisoning attacks against batch RL in which the attacker is able to gather and modify the collected batch data. [Rakhsha et al., 2020] proposes a white-box environment poisoning model in which the attacker can manipulate the original MDP into a poisoned MDP.
[Behzadan and Munir, 2017, Zhang et al., 2020b, Rangi et al., 2022] study online white-box reward poisoning attacks in which the attacker can manipulate the reward signal before the agent receives it. [Sun et al., 2021] proposes a practical black-box poisoning algorithm called VA2C-P. Their empirical results show that VA2C-P works for deep policy gradient RL agents without any prior knowledge of the environment. [Rakhsha et al., 2021] develops a black-box reward poisoning attack strategy called U2, which can provably attack any efficient RL algorithm. [Xu et al., 2021] investigates training-time attacks on RL agents, where the introduced attacker can manipulate the environment. Here, we discuss the related work on adversarial attacks on MARL. [Ma et al., 2022] considers a game redesign problem where the designer knows the full information of the game and can redesign the reward functions. The proposed redesign methods can incentivize players to take a specific target action profile frequently with a small cumulative design cost. Ma's work considered normal-form games, whereas we consider Markov games. The normal-form game is a special case of the Markov game with $H = 1$. [Gleave et al., 2020, Guo et al., 2021] study poisoning attacks on multi-agent reinforcement learners, assuming that the attacker controls one of the learners. In our work, the attacker is not one of the learners but an external unit outside the original Markov game. The attacker can poison the rewards/actions of all learners at the same time and can thus fool the learners into learning a specific policy. [Wu et al., 2022] studies reward poisoning attacks on offline multi-agent reinforcement learners, where the attacker can poison the rewards of the agents. We consider online MARL. In offline MARL, the attacker can estimate the underlying Markov game from the offline datasets; in online MARL, the attacker may not have knowledge (reward/transition functions) of the Markov game.
[Wang et al., 2021] studies backdoor attacks in two-player competitive RL systems, where the trigger is the action of another agent in the environment. They propose a unified method to design fast-failing agents, which fail quickly when the trigger occurs. [Liu et al., 2022] studies controllable attacks by constraining the state-distribution shift caused by the adversarial policy, offering a more controllable attack scheme. [Chen et al., 2022] considers a situation where an $\alpha$-fraction of agents are adversarial and can report arbitrary fake information. They design two Byzantine-robust distributed value iteration algorithms that can identify a near-optimal policy with near-optimal sample complexity. [Mohammadi et al., 2023] studies targeted poisoning attacks in a two-agent setting where an attacker implicitly poisons the effective environment of one of the agents by modifying the policy of its peer. **Experimental results** Due to the page limitation, we put the numerical results in Appendix B. In particular, in Section B.1, we empirically compare the performance of the action-poisoning-only attack strategy ($d$-portion attack), the reward-poisoning-only attack strategy ($\eta$-gap attack) and the mixed attack strategy. We consider two different target policies. In Case 1, any action-poisoning-only attack or reward-poisoning-only attack will fail. In Case 2, Conditions 1 and 2 hold, so the $d$-portion attack and the $\eta$-gap attack work. Furthermore, in Section B.2, we consider a synthetic environment and empirically compare the performance of the mixed attack strategy and the approximate mixed attack strategy. In addition, in the author rebuttal pdf file, we added more experimental results discussing how the final attack loss and cost (after $10^6$ episodes) are affected by the parameter $R_{min}$.
Note that the attack loss of the mixed attack strategy (gray-box) is the number of times non-target actions are chosen, as defined in Loss1, whereas the attack loss of the approximate mixed attack strategy (black-box) is the regret of the attacker's reward, as defined in Loss2. Pdf: /pdf/8555f5a54adb8f6830a12e037831dbe164ec8046.pdf
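For concreteness, the two attack-loss notions referenced above can be sketched as follows. The function names and the flat per-episode representation are illustrative, not the paper's definitions verbatim: Loss1 counts deviations from the target action, and Loss2 accumulates the attacker's reward regret.

```python
def loss1(chosen_actions, target_actions):
    # Loss1 (mixed attack, gray-box): number of episodes in which the
    # chosen action deviates from the target action.
    return sum(1 for a, a_plus in zip(chosen_actions, target_actions)
               if a != a_plus)

def loss2(attacker_rewards, best_reward):
    # Loss2 (approximate mixed attack, black-box): regret of the
    # attacker's reward, i.e. the shortfall from the best achievable
    # per-episode reward, summed over episodes.
    return sum(best_reward - r for r in attacker_rewards)
```

A sub-linear attack thus means both the deviation count of Loss1 and the cumulative regret of Loss2 grow slower than the number of episodes.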
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Generalized Logit Adjustment: Calibrating Fine-tuned Models by Removing Label Bias in Foundation Models
Accept (poster)
Summary: This paper addresses biases in foundation models, such as CLIP, which due to the imbalanced training datasets, are skewed towards frequent semantics. The authors propose a Generalized Logit Adjustment (GLA) method, an optimization-based approach for debiasing these models. Despite inherent challenges, the GLA method demonstrates significant improvements across multiple tasks and datasets, achieving 1.5 pp accuracy gains on ImageNet and large average improvement (1.4-4.6 pp) on 11 few-shot datasets. Notably, the GLA method does not require access to the pre-training dataset, making it practical for fine-tuning scenarios. The paper offers both theoretical justification and extensive empirical evidence for the proposed method. Strengths: 1. **Solid Theoretical Analysis**: The paper provides robust theoretical justification for the Generalized Logit Adjustment (GLA) method. It formalizes the estimation of label bias as a constrained optimization problem, proving that GLA is a Bayes optimal classifier. 2. **Practical Solution**: The proposed method doesn't require access to the pre-training dataset, which makes it practical for fine-tuning scenarios. This approach is useful given privacy and copyright concerns associated with accessing the label distribution of pre-training datasets. 3. **Comprehensive Evaluation**: The paper uses a comprehensive benchmark for evaluation, considering three real-world settings and three fine-tuning paradigms. This diverse approach to evaluation adds credibility to the findings. 4. **Significant Performance Improvement**: The paper reports notable improvements across various tasks, demonstrating the efficacy of the proposed GLA method. The substantial gains across multiple datasets reinforce the value of the method. Weaknesses: 1. **Limited Validation**: The paper only presents results with a limited set of tasks and models. 
While they used the CLIP model for the zero-shot setup, the validity and effectiveness of the GLA framework with other models (like BERT, GPT, etc.) have not been validated. 2. **Estimating Pre-training Bias**: The estimation of pre-training label bias is done with downstream data. However, this could be problematic if the downstream data distribution significantly differs from the pre-training data distribution, leading to incorrect bias estimation and sub-optimal performance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The only two questions are listed in the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: While it's a promising beginning to uncover the biases in foundational models, it's crucial to recognize that the exploration should not be confined to computer vision alone. It is necessary to extend the studies to other fields as well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** The validity and effectiveness of the GLA framework with NLP models. **A1** We thank the reviewer for the comment. Applying this algorithm to other tasks and fields is also our future research direction. For now, our primary focus is on enhancing the fine-tuning performance of discriminative models. NLP zero-shot models are dominated by generative models, and applying our GLA framework to those models presents challenges. For example, language generation is a Markov process, which makes each output depend on the outputs of previous steps. Namely, it is not trivial to estimate the biasedness of a sequence with our GLA, which only computes the bias on a pre-defined and independent label space. We will discuss this in the revised version and leave it as future work. --- **Q2** Estimating pre-training bias when the downstream data distribution significantly differs from that of the pre-training data. **A2** While we acknowledge the concern raised, it is crucial to understand that our work primarily focuses on addressing the label shift problem (also known as the long-tail problem). Yes, you are right in pointing out that our GLA paradigm might not be optimally effective when the downstream data distribution diverges significantly from the pre-training data distribution, e.g., when some downstream samples $\mathbf{x}_s$ have no density in the pre-training domain ($P_p(\mathbf{x}_s) \approx 0$). We recognize this especially in the case of niche domains such as medical data, which are scarcely represented in the pre-training stage. We believe that incorporating such downstream data into the foundation model training might be the only way to resolve this extreme case, which is beyond our scope. Our methodology, nevertheless, has demonstrated substantial efficacy in the presence of label shift and can effectively mitigate the impact of pre-training label bias in a broad array of scenarios.
--- Rebuttal Comment 1.1: Comment: I thank the authors for the explanation. I have read the comments and the rebuttal, and I have decided to maintain my score of 7.
Summary: This paper studies the label bias in foundation models, like CLIP. Specifically, the authors model the estimation of the label bias as a constrained optimization problem, and propose a Generalized Logit Adjustment (GLA) method to debias foundation models. Experimental results demonstrate the effectiveness of the proposed method. Strengths: - Shows that the skewness of the pre-training distribution affects the performance of downstream tasks, i.e., the label bias of foundation models. - Proposes a GLA method to ensemble the debiased zero-shot and fine-tuned models. - Extensive experimental results demonstrate the superiority of the proposed method over conventional fine-tuning. Weaknesses: - The authors should discuss the difference with existing label bias estimation methods, e.g. [1,2]. - [3] also studied the label bias in CLIP. What's the difference? - There exist some strong assumptions for the proposed method. [1] Lipton Z, Wang Y X, Smola A. Detecting and correcting for label shift with black box predictors. In ICML, 2018: 3122-3130. [2] Garg S, Wu Y, Balakrishnan S, et al. A unified view of label shift estimation. In NeurIPS, 2020, 33: 3290-3300. [3] Wang X, Wu Z, Lian L, et al. Debiased learning from naturally imbalanced pseudo-labels. In CVPR, 2022: 14647-14657. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Proposition 1 shows that $f_{gla}$ is the best model, or a nearly optimal Bayes classifier. What, then, leads to the error between the realistic results and the ideal performance of the optimal Bayes classifier? - What is the source of the validation data for a specific problem? In all experiments, do the compared fine-tuning methods use the validation data? - Please illustrate that ''Intuitively, since the zero-shot and fine-tuned models provide diverse predictions, conditioned on the two predictions is equivalent to adding the logits in log space". Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The proposed method can effectively deal with label bias in foundation models. However, it depends on some strong assumptions. E.g., the estimation relies on the validation data. How to guarantee that a good estimation can be obtained with poor validation data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** Discussion with existing label estimation methods [1][2]. **A1** [1-2] and our GLA work under different scenarios and require different forms of input. [1-2] face a problem where the test prior is unknown but training data are given. In the context of foundation model adaptation, [1-2] require both **labeled pre-training data** and **unlabeled downstream test data** to estimate the label shift by matching the moments of confusion matrices. However, the pre-training data is often inaccessible, which makes [1-2] impractical for foundation models. In contrast, the proposed GLA targets the setting where the pre-training label prior is unknown but the test distribution is class-balanced. Our method estimates the label bias of the pre-training data by adjusting the margin to produce a Bayes optimal classifier. Technically, we only have to access **downstream source data** for debiasing. We will add the above discussion in the revised version. --- **Q2** What's the difference with [3]? **A2** Our GLA converges to the true class-probability of the pre-training data, while the one estimated by DebiasedPL [3] is asymptotically biased toward the downstream distribution. Recall that DebiasedPL [3] uses the moving average of the outputs of the zero-shot model $P_\text{zs}(y|\mathbf{x})$ (predicted probabilities of downstream data) as the class-specific margin for debiasing. The probability for class $y$ estimated by DebiasedPL [3] can be viewed as $\hat{p}(y)=\frac{1}{N} \sum_{i=1}^N P_{zs}(y|\mathbf{x}_i)$, where $\mathbf{x}_i$ is a sample from the downstream dataset $\mathcal{D}_s \sim P_s^N$. As $N \rightarrow \infty$, $\hat{p}(y) \rightarrow E_{\mathbf{x}\sim P_s}[P_{zs}(y|\mathbf{x})]$. However, the true class-probability of the pre-training distribution should be $p(y)=E_{\mathbf{x} \sim P_p}[P_\text{zs}(y|\mathbf{x})]$.
As long as $P_s \neq P_p$, i.e., the pre-training distribution differs from the downstream distribution, the class-specific margin estimated by [3] is biased. In contrast, in Proposition 2, we prove that $f_\text{zs}(\mathbf{x})-\log \mathbf{q}$ is the Bayes optimal classifier given $f_\text{zs}$, achieving the lowest error on the target distribution, and our estimate $\hat{\mathbf{q}}$ converges to the true one. Note that we do not intend to undermine the value of [3], as they focus on semi-supervised learning where the labels of the training data are not available. In contrast, our GLA is given labeled fine-tuning data and is able to estimate a more accurate label shift. We will add this discussion in the revised version. --- **Q3** There exist some strong assumptions for the proposed method. E.g., the estimation relies on the validation data. How to guarantee that a good estimation can be obtained with poor validation data? **A3** We are sorry that we did not emphasize the implementation details, which led to a misunderstanding. In all conducted experiments, **no validation set** was used for the $\pi_p$ estimation. Recall that in lines 126-127, we mentioned that we have two choices: ``we can use validation data or the balanced training data'' to estimate $\pi_p$. As training samples are more abundant than validation samples, we choose to use the training set rather than the validation set. For many-shot and few-shot learning, as the training set at hand is balanced, we directly use it for the $\pi_p$ estimation. For long-tailed learning, we re-balance the training set by up-sampling to estimate $\pi_p$. For the sake of fairness in comparison, the validation set is only used to search hyper-parameters for all baselines and our methods. --- **Q4** What leads to the error between the realistic results and the ideal performance of the optimal Bayes classifier? **A4** Our Proposition 1 analyses the asymptotic behavior of GLA.
In practice, GLA's performance hinges on the estimation of the pre-training prior $\pi_p$, a factor influenced by the pre-trained model's generalization gap. From structural risk minimization theory [1-2], the test error $\delta_j=E_{\mathbf{x} \sim \hat{P}_j}[\ell(f(\mathbf{x}),j)] - E_{\mathbf{x} \sim P_j}[\ell(f(\mathbf{x}),j)]$ for class $j$ is bounded by $C/\sqrt{n_j}$, where $n_j$ is the sample size of class $j$ and $C$ is a constant that reflects model complexity. When the sample size of the less frequent classes is small during pre-training, the resulting large generalization gaps may introduce estimation error into $\pi_p$. [1] Kaidi Cao, Colin Wei, Adrien Gaidon, Nikos Arechiga, and Tengyu Ma. Learning imbalanced datasets with label-distribution-aware margin loss. NeurIPS, 2019. [2] Shiori Sagawa, Pang Wei Koh, Tatsunori B. Hashimoto, and Percy Liang. Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization. ICLR, 2019. --- **Q5** Illustrate that ``since the zero-shot and fine-tuned models provide diverse predictions, conditioned on the two predictions is equivalent to adding the logits in log space''. **A5** The detailed explanation is included in Appendix Eq.(20-21). Denote the outputs $\mathbf{e}=f_{ft}(\mathbf{x})$ and $\mathbf{z}=f_{zs}(\mathbf{x})$; then $P_t(y|\mathbf{e},\mathbf{z})=\frac{P_s(y|\mathbf{e})}{P_s(y)}\frac{P_p(y|\mathbf{z})}{P_p(y)}C_1,$ where $C_1$ is a constant that does not depend on $y$ (see lines 323-324 for further explanation of $C_1$). We can rewrite $P_s(y)=\exp(\log P_s(y))=\exp(\pi_s(y))$ and $P_p(y)=\exp(\log P_p(y))=\exp(\pi_p(y))$, and the underlying class-probabilities satisfy $P_s(y|\mathbf{e}) \propto \exp(\mathbf{e}_y)$ and $P_p(y|\mathbf{z}) \propto \exp(\mathbf{z}_y)$. 
Substituting these in, the above equation becomes $P_t(y|\mathbf{e},\mathbf{z})=\frac{\exp(\mathbf{e}_y)\exp(\mathbf{z}_y)}{\exp(\pi_s(y))\exp(\pi_p(y))} C_1/C_2=\exp(\mathbf{e}+\mathbf{z}-\pi_s-\pi_p)_y C_1/C_2.$ Since probabilities are obtained by exponentiating logits, multiplying two probabilities is equivalent to adding their logits. We use the term ``log space'' because both the logits and $\pi=\log p$ represent logarithms of probabilities. --- Rebuttal Comment 1.1: Title: The concerns are not perfectly clarified Comment: Thanks for your detailed response; however, most of my concerns are not fully clarified. Firstly, the assumption in A1 that the test distribution is class-balanced seems impractical. The more reasonable setting is that the test prior is unknown. Secondly, DebiasedPL is general and works in more scenarios, e.g., semi-supervised learning. Though the proposed method has a theoretical Bayes optimal classifier, the strong assumption and limited improvement weaken the contribution. Thirdly, though no validation set is used to estimate $\pi_p$, the balanced training data used cannot ensure that the solution is optimal; e.g., noisy labels or an imbalanced test distribution may deteriorate the performance. I tend to decrease my score to 4. Reviewer egNQ --- Reply to Comment 1.1.1: Title: Some misunderstandings Comment: Thanks for your reply. We found some of your points to be unreasonable. Firstly, [1-2] require the test data to estimate the test prior. If our algorithm is provided with test data, we can directly use Eq.(3) for debiasing and remove our assumption on the test prior. Given only the zero-shot model and downstream training data, without an assumption on the test prior, **no algorithm** (including DebiasedPL) can guarantee optimal test performance: one can always construct a test distribution that differs significantly from the training distribution to undermine the model's performance. Therefore, your requirement is not reasonable. 
Secondly, DebiasedPL is general but not optimal in the fine-tuning setting. It is unreasonable to ask our method to work in every case, as we focus on fine-tuning, not semi-supervised learning. Thirdly, our algorithm can easily be extended to an imbalanced test set. Suppose the log class-prior of the imbalanced test set is $\pi_t$; the GLA model is then modified to $f_{gla}=softmax(f_{zs}(x)+f_{ft}(x)-\pi_p - \pi_s + \pi_t)$. As for noisy labels, we think it is unfair to ask our method to handle them, as they are clearly out of the scope of our work. We look forward to your reply if you have further concerns.
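For concreteness, the modified GLA model above can be sketched in a few lines of NumPy; the function name and array shapes below are illustrative assumptions, not our actual implementation:

```python
import numpy as np

def gla_predict(zs_logits, ft_logits, pi_p, pi_s, pi_t=None):
    """Generalized logit adjustment: combine zero-shot and fine-tuned logits.

    zs_logits, ft_logits: (N, K) logits from the zero-shot / fine-tuned models.
    pi_p, pi_s:           (K,) log class-priors of pre-training / source data.
    pi_t:                 optional (K,) log class-prior of an imbalanced test
                          set; when the test distribution is class-balanced it
                          is a constant and drops out of the argmax.
    """
    adjusted = zs_logits + ft_logits - pi_p - pi_s
    if pi_t is not None:
        adjusted = adjusted + pi_t
    # numerically stable softmax over classes
    e = np.exp(adjusted - adjusted.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)
```

Note that with uniform priors the adjustment is a constant shift, so the prediction reduces to the naive logit-sum ensemble.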
Summary: This paper proposes a distribution estimation method that estimates the distribution of pre-training data for large VLMs like CLIP. Combined with downstream data distributions, a logit adjustment is applied to the outputs of both the zero-shot model and the fine-tuned model. By using a simple ensemble of the zero-shot and fine-tuned models, improved performance is achieved on the reported benchmarks. Strengths: The proposed method introduced performance improvements over compared baselines on most of the reported benchmarks. Weaknesses: Here are some questions: 1) The claim "their predictions are complementary to those of fine-tuned models" in line 30 cannot be supported by the cited references. Neither [3] nor [14] can support the claim that fine-tuned and zero-shot models are complementary. 2) In addition, in Section 4.2, where the authors attempt to demonstrate the complementarity of fine-tuned models and zero-shot models, the reasoning provided is not persuasive. The authors argue that fine-tuned models are biased towards downstream distributions, while zero-shot models are robust to distribution shifts. However, it remains unclear why a distribution shift-robust model should be complementary to a model that has already been adapted to a shifted distribution. If the authors claim and can prove that zero-shot models can provide something that is missing in the fine-tuned model and cannot be provided by the shifted downstream dataset, it needs to be explicitly demonstrated and proven. Otherwise, this assumption lacks solidity, which consequently undermines the validity of all subsequent proofs. 3) The rationale behind the method used to estimate the pre-training data distribution is not clearly explained. The paper does not provide sufficient information on why the method is designed in this particular way, nor does it address whether there are any means to verify the effectiveness of this method in accurately estimating the pre-training data distribution. 
Even though pre-training data is often proprietary, the authors could still perform toy experiments to demonstrate the efficacy of the proposed method. Given that this is a crucial component of the proposed method, it should not be overlooked. 4) The biggest weakness of this paper is the improvements introduced by the proposed method is very limited (~1% on average across all reported benchmarks). Edit: most of my concerns are clarified. I will change my rating. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Same as above. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 1 poor Limitations: The biggest limitation is the limited performance improvements. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** The biggest weakness of this paper is that the improvements are very limited. **A1** We hold a different view. For ImageNet-1K full training with 1,000 classes, improving accuracy by $1.5\%$ over end-to-end fine-tuning is a noteworthy improvement. For ImageNet with 16 training shots, we observed a $3.5\%, 2.6\%, 2.1\%$ improvement over the previous prompt-tuning SOTA: CoOp, PLOT and ProGrad. On the Places365-LT dataset, our GLA, which only tunes prompts, can even surpass the previous end-to-end fine-tuning SOTA BALLAD by a large margin ($47.2\%$ vs. $45.7\%$). In light of these results, it is clear that our GLA demonstrates substantial gains across various datasets. --- **Q2** The claim "their predictions are complementary to those of fine-tuned models" in line 30 cannot be supported by the cited references. Neither [3] nor [14] can support the claim that fine-tuned and zero-shot models are complementary. **A2** The underlying assumption of ensembling methods is that the individual models make independent errors [3,14] (see details in Section V.4, Independent Errors, of [14]); otherwise the performance of the ensemble degenerates to that of a single model (imagine ensembling two identical models: the ensemble's performance will not improve). In the context of fine-tuning zero-shot models, this assumption is supported by the empirical studies conducted in Section 5.1 of [1], which explore a series of diversity measures and find complementarity between the predictions of zero-shot and fine-tuned models. [1] Wortsman et al. Robust fine-tuning of zero-shot models. CVPR. 2022. --- **Q3** In addition, in Section 4.2, where the authors attempt to demonstrate the complementarity of fine-tuned models and zero-shot models, the reasoning provided is not persuasive. The authors argue that fine-tuned models are biased towards downstream distributions, while zero-shot models are robust to distribution shifts. 
However, it remains unclear why a distribution shift-robust model should be complementary to a model that has already been adapted to a shifted distribution. If the authors claim and can prove that zero-shot models can provide something that is missing in the fine-tuned model and cannot be provided by the shifted downstream dataset, it needs to be explicitly demonstrated and proven. **A3** We respectfully disagree. Assumption 1, the diversity of zero-shot and fine-tuned models, is often featured in theoretical analyses of distribution shift [1-4] and empirically verified by prior work [5], so it appears to be a judicious assumption to adopt. Intuitively, zero-shot models and fine-tuned models leverage different cues to predict. For instance, zero-shot models rely on robust features for their decisions and can achieve high performance on sketch and adversarial samples, while fine-tuned models trained on real images typically fail on these samples, as they rely on spurious correlations that only hold on real images. In fact, Assumption 1 is a weaker assumption compared to those commonly used in the distribution shift community [1,2,3]. In these papers, the in-distribution and out-of-distribution features are assumed to be disjoint parts of the inputs, each generated independently based on the label. In our paper, we only require the conditional independence of the outputs. In addition, the same assumption is used to describe the relation between fine-tuned models and the zero-shot CLIP in [5]. Crucially, empirical studies detailed in Section 5.1 and Appendix Section E of [5] explore a range of diversity measures across predictions and features, revealing that zero-shot and fine-tuned models, despite a shared backbone, yield diverse predictions, which supports our claim. To sum up, we follow an assumption commonly used in theoretical analyses in the robustness/out-of-domain generalization community, which has also been empirically shown to be reasonable. 
[1] Vaishnavh Nagarajan, Anders Andreassen, and Behnam Neyshabur. Understanding the failure modes of out-of-distribution generalization. ICLR, 2021. [2] Yining Chen, Colin Wei, Ananya Kumar, and Tengyu Ma. Self-training avoids using spurious features under domain shift. NeurIPS, 2020. [3] Shiori Sagawa, Aditi Raghunathan, Pang Wei Koh, and Percy Liang. An investigation of why overparameterization exacerbates spurious correlations. ICML, 2020. [4] Ananya Kumar, Tengyu Ma, Percy Liang, and Aditi Raghunathan. Calibrated ensembles can mitigate accuracy tradeoffs under distribution shift. UAI, 2022. [5] Wortsman et al. Robust fine-tuning of zero-shot models. CVPR. 2022. --- **Q4** The rationale behind the method used to estimate the pre-training data distribution is not clearly explained. **A4** The rationale behind the method (Step 1) to estimate the pre-training label distribution has been justified in Proposition 2: the pre-trained label prior $\pi_p$ is recovered when $f_{zs}-\log \mathbf{q}$ attains the lowest error $R_t$, as we prove that such a model corresponds to the Bayes optimal classifier given a fixed $f_{zs}$. --- **Q5** Even though pre-training data is often proprietary, the authors could still perform toy experiments to demonstrate the efficacy of the proposed method. **A5** To address your concerns, we devise a toy experiment where we have the true label bias at hand. Specifically, we first train a model on an imbalanced training set, CIFAR10-LT [1], then only use the test set to estimate the label distribution. Figure 1 in the attached document illustrates a strong correlation between the estimated and real distributions, further supported by a small KL-divergence (0.00062). The toy experiment demonstrates the correctness of our proposed debiasing method. [1] Cui et al. "Class-balanced loss based on effective number of samples." CVPR 2019. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I was wrong about the performance. 
So I will change the final rating. The clarification for Q2 and Q3 needs to be added to the main text. --- Reply to Comment 1.1.1: Title: Response to Reviewer iLax Comment: We are grateful for your feedback, and we will include the clarification for Q2 and Q3 to enhance the quality of our work.
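As a supplement to A5, the toy bias-estimation experiment can be mimicked end-to-end in a small synthetic simulation. The generative model for the biased logits and all constants below are our own assumptions for the sketch, not the actual CIFAR10-LT pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 5, 4000
true_prior = np.array([0.45, 0.25, 0.15, 0.10, 0.05])  # hidden "pre-training" label bias

# Class-balanced evaluation set: clean class evidence plus log(true_prior),
# mimicking a classifier trained on imbalanced data.
y = rng.integers(0, K, size=N)
logits = rng.normal(0.0, 1.0, size=(N, K))
logits[np.arange(N), y] += 3.0
logits += np.log(true_prior)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Estimate q on the simplex via a softmax reparameterization, minimizing the
# cross-entropy of the debiased logits f(x) - log(q) on the balanced set
# (same objective as Eq. 3 in spirit; the paper handles the simplex
# constraint with a Lagrangian instead).
theta = np.zeros(K)                      # q starts uniform, as in the rebuttal
freq = np.bincount(y, minlength=K) / N   # ~1/K on a balanced set
for _ in range(2000):
    p_bar = softmax(logits - np.log(softmax(theta))).mean(axis=0)
    theta -= 0.5 * (freq - p_bar)        # gradient of the mean NLL w.r.t. theta
q_hat = softmax(theta)

kl = np.sum(true_prior * np.log(true_prior / q_hat))
```

The fixed point is reached when the debiased average prediction matches the balanced empirical label frequency; `kl` then ends up far below the KL between `true_prior` and a uniform guess.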
Summary: The logit adjustment method has been widely used in long-tailed recognition, where the source dataset is available and thus the class margins can be set in advance. However, the large-scale pre-training data of foundation models cannot be accessed. To reduce the recognition bias of foundation models on downstream tasks, this paper proposes a generalized logit adjustment (GLA) method. The main contribution of GLA is to estimate the class margins for zero-shot models; after that, the fine-tuned and debiased zero-shot models are ensembled together to achieve consistent improvement. The authors demonstrate the effectiveness of the proposed method through extensive experiments, including three tasks and three fine-tuning approaches. Strengths: 1) Estimating the class margins for foundation models is an important task, especially when the pre-training datasets are not available. This paper proposes an effective generalized logit adjustment method for zero-shot models. 2) To estimate the class margins, the authors cast it as a simple constrained optimization problem. They also provide theoretical analysis for their GLA method. The proofs seem to be ok. 3) The authors verify their method with extensive experimental results, including many-shot learning, few-shot learning and long-tailed recognition tasks. The presented results seem to be good. Weaknesses: The main weaknesses include the following three aspects. 1) The proposed method is simple, and the theoretical proofs look a little reluctant. a. Section 4.1 seems to be redundant. The Definitions and Lemmas are less important for the following analysis. b. The proof and conclusion of Proposition 1 are not convincing. c. There are some false assumptions in the discussion of Corollary 1. In case 1, the authors claim that the zero-shot model cannot provide further improvement when the fine-tuned model is given. Obviously, this is wrong, because their combination can further improve recognition accuracy. 
2) The authors did not show the algorithm steps for estimating the class margins (Eq. 3). They also did not provide the complexity and convergence of this optimization problem, or the choice of the hyper-parameter $\lambda$. 3) Unfair comparisons make the results look good. For example, in long-tail learning, the proposed GLA is an ensemble model with larger network parameters. Thus, it is not surprising that it outperforms other baselines. Please provide the model parameters and inference time to show the improvement. Other comments: 1) Typo in line 109: $w_k$ is not a model parameter. 2) In Eq.4, $v$ should be bold. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) How to initialize $\mathbf{q}$? 2) Your assumption in Lemma 2 does not always hold; the target distribution may be unbalanced. 3) Please provide more analysis for your proofs. The constant $C_2$ in Eq. 21 and the conclusion of Eq. 30 are not easy to understand. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors discuss the limitations of their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** The method is simple, and the proofs look a little reluctant. **A1** We hold a different view. **A simple method with statistical grounding is our strength rather than a weakness.** As Reviewer *wcSK* noted, ``The proposed method is both simple to understand and implement. This means that it could have a significant impact. Simple methods can easily be deployed by both practitioners and by researchers (as a strong baseline for future work).'' This is also evidenced by our experiments: despite its simplicity, our method easily beats more complicated methods such as BALLAD, PLOT and ProGrad, which shows the value of our work. Regarding the proofs, we believe there is some misunderstanding, and we elaborate below to address your concerns. --- **Q2** Section 4.1 is redundant. **A2** We respectfully disagree. Definition 1 and Lemma 1 are crucial. In Proposition 1, we first use Definition 1 to prove our GLA model is Bayes optimal. Then, we leverage Lemma 1 to prove our GLA model has the lowest risk. We use Definition 1 and Lemma 2 to prove Proposition 2. Without Section 4.1, the proof would be incomplete. --- **Q3** Explain the conclusion of Eq. 30. **A3** First, we'd like to clarify that the conclusion of Eq. 30 hinges on the preliminaries in Section 4.1. Therefore, the weakness you raised, ``Section 4.1 seems to be redundant'', stems from a misunderstanding of the dependence between Section 4.1 and the subsequent proofs. Eq. 30 is restated: $\arg\max_{y}(f_{zs}(\mathbf{x})-\pi_p)_y = \arg\max_{y} softmax(f_{zs}(\mathbf{x})-\pi_p)_y =\arg\max_{y} P_t(y|f_{zs}(\mathbf{x}))$ The first equality holds because $\text{softmax}(\cdot)$ does not change the $\arg\max$. The second equality holds because of Eq. 29. 
Recall that **Definition 3**: The Bayes optimal classifier $y^*$ for $P$ given input $\mathbf{x}$ is defined as: $y^*(\mathbf{x}) = \arg\max_{y} P(y|\mathbf{x})$. Given $f_\texttt{zs}(\mathbf{x})$ as the input, $\arg\max_{y}(f_\texttt{zs}(\mathbf{x}) - \pi_p)_y$ is the Bayes optimal classifier, as it equals $\arg\max_{y\in \mathcal{Y}}P_t(y|f_\texttt{zs}(\mathbf{x}))$. According to **Lemma 1**: The Bayes optimal classifier $y^*$ for $P$ has lower risk than any classifier $\hat{y}: \mathcal{X} \rightarrow \mathcal{Y}$, i.e., $\mathcal{R}(y^*)\leq \mathcal{R}(\hat{y})$. Therefore we have the conclusion in lines 338-339: ``any other classifier $f_h(\mathbf{x})$ has higher risk than $f_\texttt{zs}(\mathbf{x})-\pi_p$.'' --- **Q4** Explain $C_2$ in Eq. 21. **A4** Because $P_s(y|\mathbf{e})=\text{softmax}(\mathbf{e})_y$ and $P_p(y|\mathbf{z})=\text{softmax}(\mathbf{z})_y$, for some constants $C_s, C_p$ we can express $P_s(y|\mathbf{e})=\exp(\mathbf{e})_y/C_s$ and $P_p(y|\mathbf{z})=\exp(\mathbf{z})_y/C_p$. Substituting these into Eq. (20), we get Eq. (21): $=\exp(\mathbf{e}+\mathbf{z}-\pi_s-\pi_p)_yC_1/(C_s \cdot C_p)=\exp(\mathbf{e}+\mathbf{z}-\pi_s-\pi_p)_yC_1/C_2,$ where we denote $C_2=C_s \cdot C_p$ to simplify Eq. 21. --- **Q5** The target distribution may be unbalanced in Lemma 2. **A5** It is very easy to extend our Lemma 2 to an unbalanced target distribution. In Eq. 20 we incorporate $P_t(y)$ into the constant $C_1$, as we assume $P_t(y)=1/K$ is constant. For an unbalanced target distribution, we modify Eq. 20 into $P_t(y|\mathbf{e},\mathbf{z})=\frac{P_s(y|\mathbf{e})}{P_s(y)} \frac{P_p(y|\mathbf{z})}{P_p(y)}P_t(y) C_1.$ Letting $\log P_t(y)=\pi_t(y)$, we rewrite Eq. 21 as: $=\exp(\mathbf{e}+\mathbf{z}-\pi_s-\pi_p+\pi_t)_yC_1/C_2.$ Using Eq. 21-24, we arrive at our Lemma 2 under an imbalanced target distribution: $P_t(y|f_{ft}(\mathbf{x}),f_{zs}(\mathbf{x}))=softmax(f_{ft}(\mathbf{x})+f_{zs}(\mathbf{x}) -\pi_s-\pi_p+\pi_t)_y.$ --- **Q6** How to initialize $\mathbf{q}$? 
Algorithm steps in Eq. 3? **A6** We have detailed these in lines 383 and 384: $\mathbf{q}$ is initialized as a uniform distribution and optimized for 2,000 steps. --- **Q7** Complexity and convergence of the optimization. **A7** From Proposition 2, we know that $\mathbf{q}=\exp(\pi_p)$ is the minimizer of Eq. 3, so the optimization problem is guaranteed to have a solution. We use gradient descent to solve the problem, whose complexity is $O(nkm)$, where $k$ is the number of iteration steps, $n$ is the sample size and $m$ is the label size. --- **Q8** Hyper-parameter $\lambda$. **A8** We'd like to clarify that $\lambda$ is **not a hyper-parameter**: $\lambda$ is updated to maximize the Lagrangian function in Eq. 4 rather than being a pre-defined hyper-parameter. --- **Q9** In case 1, the authors claim that the zero-shot model can not provide further information when the fine-tuned model is given. Obviously, this is wrong. **A9** There seems to be a misunderstanding. In fact, **we are on the same page: we also claimed Case 1 is unlikely to happen**. Case 1 discusses the situation when $R_t(f_{gla})=R_t(f_{ft})$, i.e., our GLA model degenerates to the fine-tuned model. This situation happens when ``the zero-shot model can not provide further improvement when the fine-tuned model is given''. As you claim, this is not likely to happen, which supports our claims. --- **Q10** Ensembling is unfair. **A10** We see your point, but we respectfully disagree. First, our GLA does not involve external data or models; all baselines are given the same zero-shot models and fine-tuning data. Second, our baselines include other ensemble methods, e.g. WiSE-FT and the naive ensemble, and our GLA shows superiority. Given that we share the same model parameters and inference time as these ensemble baselines, we believe the comparison is fair. --- **Q11** $w_k$ is not a model parameter. **A11** We're sorry for the confusion. 
$w_k$ is the parameter of the classification head initialized by prompting: it is updated during fine-tuning. --- **Q12** $v$ should be bold. **A12** That is not true. We only have **one** equality constraint in Eq. 3: $1-\sum_{i\in [K]} \mathbf{q}_i$. Therefore, $v$ should be a scalar rather than bold.
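To make the algebra in A4 and A5 concrete, here is a small numerical check that multiplying the two prior-corrected posteriors and renormalizing coincides with adding the logits and subtracting the log-priors; the random logits and priors are arbitrary stand-ins:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
K = 6
e = rng.normal(size=K)                       # fine-tuned logits f_ft(x)
z = rng.normal(size=K)                       # zero-shot logits f_zs(x)
pi_s = np.log(softmax(rng.normal(size=K)))   # log source prior
pi_p = np.log(softmax(rng.normal(size=K)))   # log pre-training prior

# Left side of Lemma 2 (balanced target): multiply the prior-corrected
# posteriors and renormalize; the constants C_1 and C_2 cancel out.
lhs = softmax(e) * softmax(z) / (np.exp(pi_s) * np.exp(pi_p))
lhs /= lhs.sum()

# Right side: add logits and subtract log-priors in log space.
rhs = softmax(e + z - pi_s - pi_p)

assert np.allclose(lhs, rhs)
```

The same identity holds with an extra `+ pi_t` term inside the softmax for an imbalanced target prior.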
Rebuttal 1: Rebuttal: # Response to All Reviewers Dear Program Chair, Senior Area Chair, Area Chair, and Reviewers, First of all, we gratefully thank all the reviewers for their thoughtful comments and feedback. In this paper, we identify label bias in foundation models like CLIP and underscore its adverse effects on downstream task performance. We propose the Generalized Logit Adjustment (GLA) framework for fine-tuning foundation models, which boosts performance by effectively eliminating label bias. The contribution of this paper is four-fold: **1. Solid Theoretical Analysis:** We formalize the estimation of label bias as a constrained optimization problem and prove that our GLA model is a Bayes optimal classifier (Proposition 1). **2. Simple and Practical Solution:** Our GLA is a post-hoc approach that introduces no hyper-parameters, which makes it easy to implement. In addition, it does not necessitate access to the pre-training dataset, which makes it practical for fine-tuning scenarios. **3. Comprehensive Evaluation:** We consider three real-world settings and three fine-tuning scenarios, conducting experiments on over 20 datasets. **4. Significant Performance Improvement:** The GLA method demonstrates significant improvements, e.g., it achieves 1.5 pp accuracy gains on ImageNet and large average improvements (1.4-4.6 pp) on 11 few-shot datasets. We attach a PDF that contains additional experimental analysis of the bias estimation: we devise an experiment to demonstrate that the estimated label distribution strongly approximates the true one. As our paper received mixed ratings, i.e., three positive (755) and two negative (44), we would appreciate it if the reviewers could have a look at our responses and revision. We have tried our best to address your concerns in our responses in detail. We hope that our responses have answered your questions. Please let us know at your early convenience if you have further questions or concerns. 
Best regards, Authors of Paper \#6116 Pdf: /pdf/ecbd0e80382537268b027f7abfe84ce3444ea762.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a method for combining the predictions of zero-shot and few-shot classifiers. The method removes the need for a weighting hyper-parameter to interpolate between the predictions. Furthermore, it also removes a source of bias (due to the frequency of words in the pre-training dataset). Strengths: ## Simplicity The proposed method is both simple to understand and implement. This means that it could have a significant impact. Simple methods can easily be deployed by both practitioners and by researchers (as a strong baseline for future work). ## Presentation The presentation of the method is clear, concise, and easy to understand. ## Experimental Results The experimental results show that the proposed method improves over a selection of sensible baselines in a range of settings, with some strong improvements in several cases. Weaknesses: ## Correctness While the proposed method clearly results in improved performance in several settings, it is not clear to me that the reason for the improvement suggested by the authors is correct. That is, I am not convinced that the proposed method to estimate the label bias of the pre-training dataset is working as intended. Does $\log \mathbf{q}$, as described in Eq. 3 and Sec. 4.3, provide a good estimate of $\pi_p$? I do not believe that the paper provides theoretical or empirical evidence that this is the case and that the mechanism behind the success of the method is due to the suggested bias improvement, rather than something else. ############## ### Edit, after rebuttal and discussion. The authors have now addressed my concerns above. They have demonstrated that their approximation is accurate when the downstream and pre-training datasets are similar. They have also shown that when the downstream and pre-training datasets are different, the approximation becomes worse. Nonetheless, the proposed method still works well, and the authors have given intuition for why this might be the case. 
Given the authors proposed updates to the paper, I believe that the story of the paper will now be accurate and clear. I've raised my score from 5 to 7. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Additional experimental and theoretical analysis showing that $\log \mathbf{q} \approx \pi_p$ would significantly strengthen the paper. Concretely, if this weakness were addressed, I would happily increase my score by 1-2 points. The true value of $\pi_p$ could, for example, be estimated by using the LAION-400m dataset, which is similar in scale and content to the true pre-training dataset of CLIP and results in similar zero-shot accuracies. This was the approach taken in the "A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models" paper (https://arxiv.org/abs/2302.06235), which addresses similar biases in the context of prompt selection in zero-shot classifiers. Additionally, including the above-mentioned paper in the related work would be useful. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The theoretical and experimental analysis does not support the main claim of the paper, as discussed above under "weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1** Additional experimental and theoretical analysis showing that $\log \mathbf{q}=\pi_p$. **A1** To further address the concerns about our bias estimation, we provide further explanation of our Proposition 2 and some empirical evidence. In Proposition 2, we proved that, given the outputs of the zero-shot model $f_\texttt{zs}(\mathbf{x})$, $f_\texttt{zs}(\mathbf{x}) - \pi_p$ is the Bayes optimal classifier for the downstream distribution $P_t$, and therefore it is the minimizer of $R_t(f_h(\mathbf{x}))$. In Proposition 2, $f_h(\mathbf{x})$ is defined as an arbitrary classifier that uses $f_\texttt{zs}(\mathbf{x})$; this includes the hypothesis $f_\texttt{zs} - \log \mathbf{q}$. Therefore, $\pi_p$ is also the minimizer of Eq.(3). By solving the constrained optimization problem, we can get a good estimate of the pre-training label bias. We subsequently provide two pieces of evidence to further validate the correctness of the label bias estimation. **Evidence 1: Experiments to compare the true label bias and the estimated one.** We devise a more intuitive experiment where we have the true label bias at hand. Specifically, we first train models using ResNet32 backbones on an imbalanced training set, CIFAR10-LT [1], then only use the test set to estimate the label distribution. Figure 1 in the attached document illustrates a strong correlation between the estimated and real distributions, further supported by a small KL-divergence (0.00062). The experiments demonstrate the correctness of our proposed debiasing method. **Evidence 2: Estimated $\pi_p$ is transferable across different zero-shot models.** In Sections 5.1 and C.1, we demonstrate that when different models are pre-trained on the same pre-training dataset, the pre-trained bias estimated by one zero-shot model (e.g. CLIP-ViT-B/32) can subsequently be transferred to debias other models (e.g. CLIP-ViT-B/16 and CLIP-ViT-L/14). 
As shown in Table 9, the debiased CLIP-ViT-B/16 and CLIP-ViT-L/14 show clear performance gains over the original zero-shot models. --- **Q2** Discuss [2] in the related work. **A2** We thank the reviewer for the suggestion. [2] automates prompt engineering by prompt scoring, which also targets alleviating the word-frequency bias in pre-training data. Our approach diverges from [2] in two main aspects. First, our approach focuses on debiasing zero-shot models given fixed prompts, in contrast to [2], which optimizes the prompting process. Second, unlike [2], which necessitates access to a subset of the pre-training data, our GLA is exempt from this requirement. We will include this discussion in the related work in the revised version. ---- [1] Cui et al. "Class-balanced loss based on effective number of samples." CVPR 2019. [2] Allingham et al. A Simple Zero-shot Prompt Weighting Technique to Improve Prompt Ensembling in Text-Image Models. --- Rebuttal Comment 1.1: Comment: Thanks for your response. Unfortunately, I am still unconvinced. To be clear, I am not arguing with the point that $f_{zs}(\mathbf{x}) - \pi_p$ is the Bayes optimal classifier, nor am I arguing against $f_{zs}(\mathbf{x}) - \log\mathbf{q}$ being included in $f_h(\mathbf{x})$. What is not clear to me is that $\pi_p$ is approximated by $\log \mathbf{q}^*$. Proposition 2 simply tells us that the risks should be the same, on average. It is not apparent to me that this means that $\pi_p = \log \mathbf{q}^*$. On a more conceptual level, it would be very surprising to me that you can estimate the marginal log-probs of the pre-training distribution without access to pre-training data. Thus, I hypothesize that your method's good performance is not due to estimating $\pi_p$ well but due to some other mechanism. While Evidence 1 does somewhat support your claim, it is not convincing enough. This experiment is very different from the setting of interest. 
In particular, you are using training and test inputs from the same distribution, even if the labels have different distributions. It is unclear whether you would see the same behavior for a zero-shot classifier trained on a large and broad "internet scale" dataset. On the other hand, I don't find Evidence 2 compelling. I think these results highlight a definite strength of your method. Still, they don't necessarily suggest that $\pi_p$ is being estimated, just that the quantity $\log \mathbf{q}^*$ is transferable, which could be due to any number of other reasons. As I mentioned in my review, an experiment that I would find compelling is to estimate $\pi_p$ using the LAION dataset. Allingham et al. find that even using 20K images from LAION, the prompt selection for a zero-shot classifier can be debiased. There is something that I need to clarify, which may impact my thoughts above. How can we have a marginal distribution (i.e., $P_p(y)$) over labels under the pre-training distribution? The pre-training dataset for CLIP doesn't have any notion of the classes from the training or test distributions. Assuming we had access to the pre-training dataset, would we estimate $\log P_p(y)$ by averaging the logits of the constructed zero-shot classifier over all of the images in the training data? I look forward to your response. --- Reply to Comment 1.1.1: Title: Response to wcSK Comment: Thanks for your prompt and valuable response. A. Further explanation on $\pi_p$: We agree with your intuition. The $\log\mathbf{q}$ we estimated is not the marginal log-probability over the entire pre-training distribution but the label bias aligned with the downstream distribution. In Evidence 1, as you pointed out, while the training and test sets have different label distributions, their conditional distribution $P(x|y)$ remains invariant. In such a situation, our estimation is guaranteed to converge to the true training label bias. For CLIP models, the pre-training data are diverse. 
It is likely that some of the pre-training data fall outside the distribution of the downstream domain, leading to an inaccurate estimation for the entire pre-training distribution. However, we'd like to point out that removing the label bias of the entire pre-training distribution is not optimal for downstream tasks. For instance, suppose the pre-training dataset contains "sketch'' and "photo'' styles for "dog'' and "cat'' samples. Suppose the sample size of "dog'' and "cat'' is equal but there are many more "sketch dogs'' than "sketch cats''. In other words, although the entire distribution is class-balanced, each style domain is imbalanced, leading to biased zero-shot predictions within each domain. In such a scenario, if we want to deploy the zero-shot models in the "sketch dogs and cats'' domain, removing the balanced label bias of the entire pre-training distribution is ineffective. The optimal label bias should be estimated on the ``sketch'' distribution. B. Experiment using LAION-400M: We are currently running the estimation using LAION-400M. Once the experiments are complete, we will append the results. C. How to estimate $P(y)$ when we have access to the pre-training data: We believe that averaging the logits across all training images provides a good estimation of $\log P_p(y)$. Although CLIP does not have a notion of classes, it can be prompted with a class name to approximate $P(y|x)$. Therefore, averaging the outputs across all training images computes $E_{x \sim P_p}[P(y|x)]=P_p(y)$.
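The estimation described in part C above can be sketched in a few lines. This is a minimal illustration under our own naming (`estimate_label_prior` and `debias_logits` are hypothetical helpers, not the authors' code): averaging the zero-shot model's per-image class probabilities approximates $E_{x \sim P_p}[P(y|x)] = P_p(y)$, and subtracting the log of that prior from the logits removes the label bias, in the spirit of Proposition 2.

```python
import numpy as np

def estimate_label_prior(probs):
    """Approximate log P_p(y) from zero-shot outputs.

    probs: (N, C) array; row i holds the model's class probabilities
    P(y|x_i) for image x_i. Averaging over images approximates
    E_{x~P}[P(y|x)] = P(y); its log is the label bias to subtract.
    """
    return np.log(probs.mean(axis=0))

def debias_logits(logits, log_prior):
    """Debiased prediction f_zs(x) - log q, following Proposition 2."""
    return logits - log_prior
```

In this sketch a perfectly class-balanced `probs` matrix yields a uniform prior, so debiasing shifts all logits equally and leaves their ranking unchanged.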
NeRF-IBVS: Visual Servo Based on NeRF for Visual Localization and Navigation
Accept (poster)
Summary: This paper addresses visual localisation (i.e. the estimation of the 6-degrees-of-freedom pose of a camera) from scene coordinate regression. The problem tackled is the need for a large amount of pose and depth labels to train scene coordinate regression networks. The paper proposes a solution to reduce the label requirements by producing pseudo-groundtruth depth labels with a NeRF trained on the scene. The lower data requirement comes at the cost of a lower accuracy of the scene coordinate regression network. This is compensated by a pose refinement step based on visual servoing: the coarse pose is refined until the view rendered by the nerf on the estimated pose matches the target image. More specifically, the pose is refined until the local features in the rendered view are close to their corresponding points in the target image. The pose updates are estimated to reduce the discrepancy between these feature positions. The method proceeds as follows: - Given a set of calibrated images of the scene, train a nerf. - Generate pseudo-groundtruth depth with the nerf. - Train the scene coordinate regression with the pseudo-groundtruth depth. - Infer a coarse pose from the trained scene coordinate regression network. - Refine the coarse pose using image-based visual servoing. The method is compared against state-of-the-art localisation methods from several categories: scene coordinate regression network based (e.g. DSAC++), pose estimation network based (e.g. PoseNet, Direct-PoseNet, DFNet) and feature based (PixelLoc). Strengths: S1: The problem of reducing the data requirement to train scene coordinate regression network is an interesting problem. S2: The approach to use NeRF to produce pseudo-groundtruth 3D labels and the use of this synthetic depth inside a localisation experiment is insightful. S3: The paper cites and compares against relevant works. 
Weaknesses: W1: The writing in general is good and one gets the high-level description of the technical derivations. However, some parts are confusing, which requires the reader to re-read a paragraph multiple times or to guess the technical derivation being described. (See Limitations for detailed comments and suggested updates). Also, there are several English typos that can easily be caught with an automatic spell-grammar checker. W4: The paper claims that one advantage of the method is that it requires less labeled data (calibrated images with depth) than other methods but there are no experiments to support this claim (e.g. gradually reducing the number of training images and comparing the performance drop). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Q1: L146,147: what is the difference between $m_r$ and $n_r$? From reading, one gets that $m_r$ is a pixel. Q2: L171: does the next ibvs run start from the output pose of the previous ibvs? Or do all the ibvs start from the coarse pose from the scene coordinate regression network? Q3: L187: where is $\hat{n}^r$ used? Q4: L188: where is $d$ used? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Figure 1 would gain from having a technical description in addition to the current caption. Something along the lines of: "Left: The coarse pose estimated with the scene coordinate regression network is refined with IBVS. Right: Illustration of the pose updates / navigation in the NerF scene." Technical typos: - L4: "require groundtruth 3D labels for supervision": only regression-based methods require 3D labels (i.e. depth). Structure-based approaches do not. 
- L39: the reference [25] indeed uses nerf for data augmentation but [10,11] use nerf to refine camera poses with a render-and-compare strategy (10 with a photometric loss and 11 with a feature loss). This sentence needs to be updated accordingly. - Unsupported claim that the proposed method needs fewer training images: - L46: "which is significantly fewer than typical visual localization method". This claim is not supported. - L86-87: "Our method ..." - L103: "We utilize few posed images" - L241: "It is important to note that we use fewer ..." - The paper claims that RNR-Map [19] requires RGBD but the original paper claims that it is based on RGB vision only. It would be useful to have this clarified. - Eq1: M is undefined even though one can guess that it is the number of points along the ray - L116: $t_n$ is undefined - Eq2: Isn't the depth $D$ already defined in the camera coordinate frame? If so, then there is no need to transform the points with $T^{-1}$ i.e. it should be $P_W = D K^{-1} p$. - L146: "depth of correspondences" -> "depth of pixels" - Eq7 assumes that the set of correspondences remains the same for all IBVS, which is an assumption only introduced later in 3.2.2. Instead, the assumption should be mentioned earlier, even if only in a short sentence. The reader can then be referred to 3.2.2 for more details. - L201: "we can obtain accurate ...": how are these correspondences derived? One can guess it is with L144-147 but it would be helpful to the reader if it was explicitly mentioned. - L226: Pixloc is actually a feature-based refinement-only method so it does not fully qualify as a structure-based method. - Fig4: e(x) ... are undefined even though one can guess that these are distance errors Confusing writing that impedes the understanding of the technical details: - L7: "only a few". It might be better to either give a quantification of how much less or only write "less" images. 
- L8: "To achieve this, we first use a few posed images with coarse 3D labels provided by NeRF to train a coordinate regression network, which is used to provide the coarse pose for unseen view. " It would be easier to read if written with a different order: - pseudo ground-truth 3d labels are computed with nerf - the scene coordinate regression network is trained - a coarse pose is estimated from the regression network - the coarse pose is refined with IBVS - L53: "the correspondences with depth": the reader can guess the derivation being run, i.e. select pixels in one image and find their match in the second image using the depth and the image poses. But the reader should not have to guess the technical derivations so it is better to write it down explicitly. - Same comment for L141: the "depth correspondences" - L61: "navigation prior" is undefined. Does this refer to the "simulated" navigation run in the NeRF scene? - L125: "geometric constraints" is undefined - L127: Does "edge areas" refer to the scene's edge or the image's edge? One could guess that it is the scene's edge but it is better to specify it. - L136: - "we first initialize IBVS ...". At this stage, IBVS has not been introduced and it is not part of the common knowledge in the visual localisation community (this is more robotics common knowledge). So it would be helpful to the reader to have a high-level description of IBVS before the low-level one that is already in the paper. This will help the reader understand what "initializing the IBVS" means (e.g. "The initial pose in the IBVS optimisation is the coarse pose and the 3D points that will support the optimisation are generated with the NeRF depth."). - the term pixel velocity is not introduced and it is not a term common in the visual localisation community, which is more used to the render-and-compare notations than the visual servoing notations. - L168: this sentence is paradoxical. 
An alternative formulation could be: "The assumption is violated so accumulation errors will occur." A non-exhaustive list of typos: - L26: the other line of approach *is* [...] regresse*s* - L75: "To achieve ...". It seems this sentence should not be there. - L76: has attracted - L103: "Then the same ...": this sentence is missing a verb - Fig4: Deired -> Desired ### Post-Rebuttal #### Addressed weaknesses: W3: The evaluation on 12 scenes compares only against methods that do not require groundtruth depth labels, whereas the 7 scenes evaluation also compared against methods that do use groundtruth depth labels. - Update: The paper does evaluate methods using ground-truth depth (only DSAC is missing in the 12 scenes experiments). W2: The paper claims to contribute not only to localisation but also to navigation, but there is no comparison to previous navigation work (RNR-MAP). Also, the claim that RNR-MAP requires depth information seems opposite to the RNR-MAP's claim to be RGB-vision based only (in the related work section). - Update: the comparison to rnr-map is not applicable so this limitation is obsolete. However, the contribution to the navigation task should be clarified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and suggestions. **W1: Paper writing could be improved.** Thanks for the valuable suggestion. We have polished the whole paper to make it smoother and more concise for readers. Please refer to **Q2** in the **"General Response"** for specific modification details. **W2: The paper claims to contribute not only to localization but also navigation but there is no comparison to previous navigation work (RNR-MAP). Also, the claim that RNR-MAP requires depth information seems opposite to the RNR-MAP's claim to be RGB-vision based only.** Thanks for the valuable suggestion. 1) RNR-MAP [1] indeed uses depth maps in the construction of the map, which is described in the section “3. RNR-Map” of RNR-MAP. Also, the inputs of the navigation system use depth information, which is described in "Figure 3. Navigation System Overview". 2) Our IBVS-based navigation (6-degree-of-freedom navigation, without visual odometry information) is not configured in the same way as RNR-MAP (3-degree-of-freedom navigation, with visual odometry information). Therefore, the comparison between IBVS-based navigation and RNR-MAP is unfair. 3) Our main contribution to visual navigation is to enhance IBVS-based navigation. The main point is that our method enables navigation based on IBVS without using custom markers or a depth sensor, compared to general IBVS-based navigation methods [9,17]. Since this enhancement over general IBVS-based navigation is evident, our method does not need to be compared with other visual navigation methods to illustrate it. **W3: The evaluation on 12 scenes compares only against methods that do not require groundtruth depth labels, whereas the 7 scenes evaluation also compared against method that do use groundtruth depth labels.** Thanks for your valuable comment. Actually, our method is compared with both types of methods in 7Scenes and 12Scenes. 
Specifically, in the 7Scenes dataset, our method is compared to both methods that do not use groundtruth depth labels (DFNet [10], FeatLoc++Au [2], MS-Transformer [26], TransPoseNet [32], PoseNet [18]) and methods that use groundtruth depth labels (PixLoc [29], HACNet [20], DSAC* [8], DSAC++ [7], SA [5]). In the 12Scenes dataset, our method is compared to both methods that do not use groundtruth depth labels (FeatLoc++Au and FeatLoc++ [2], PoseNet [18]) and methods that use groundtruth depth labels (SA [5], HACNet [20]). **W4: There are no experiments to support the claim that one advantage of the method is that it requires less data than other methods.** Thanks for your valuable comment. Please refer to **Q1** in the **“General Response”** for the number of training data. Actually, we have included these tables in the section **"1.1 Amount of Training Data for Each Scene"** of the supplementary material. And we will include these tables in the paper to make it clear to readers. **Q1: L146,147: what is the difference between $m_r$ and $n_r$? From reading, one gets that $m_r$ is a pixel.** $n_r$ is defined in L147 as the image coordinate of the correspondences. Specifically, $n_r$ is the pixel coordinate $m_r$ minus the pixel coordinate of the camera's optical center. We will add the explanation of $n_r$ to the paper. **Q2: L171: does the next ibvs run starting from the output pose of the previous ibvs?** The next ibvs run starts from the output pose of the previous ibvs and we will add this detail to the paper. **Q3: L187: where is $\mathbf{\hat{n}_r}$ used?** In L188, $\mathbf{\hat{n}_r}$ is used to compute the coordinate distance $d = ||\mathbf{\hat{n}_r} - \mathbf{n_r}||_2$ **Q4: L188: where is $d$ used?** In "3.2.2 Correspondence Selection", we empirically set a threshold $\tau$ for the coordinate distance $d$. We will filter out the outliers in the correspondences when the coordinate distance $d$ is greater than the threshold $\tau$. 
We will add the symbol $d$ after the coordinate distance in L191 to make it clear for readers. **Limitations: Technical typos** Thanks for the valuable suggestion, we have carefully studied all the comments. Below we provide point-by-point responses to all the suggestions. 1) We update "while state-of-the-art regression-based methods require dense ground truth 3D labels for supervision" in L4. 2) We update "Current NeRF-based visual localization methods employ NeRF for data augmentation purposes [25], or use NeRF to refine camera poses with a render-and-compare strategy [10,11]" in L39. 3) Please refer to **W4**. 4) Please refer to **"1."** in **W2**. 5) We add "M is the number of points along the ray" for Eq1. 6) We add "$t_n$ is the near bound of the camera ray" in L116. 7) Depth $D$ is defined in the camera coordinate frame. But $D\mathbf{K}^{-1}\mathbf{p}$ only transforms points from the image coordinate frame to the camera coordinate frame. We require the camera pose $\mathbf{T}$ to transform the point from the camera coordinate frame to the world coordinate frame: $P_w=D\mathbf{T}^{-1}\mathbf{K}^{-1}\mathbf{p}$ in Eq2. 8) In L146, the depth of the correspondences is used to initialize the Jacobi matrix of IBVS. Therefore, depth $Z_c$ is the depth of the correspondences. 9) We add "The set of correspondences remains the same for all IBVS" in "3.2.1 IBVS Module" for Eq7. 10) In L136, we have rewritten the method section to make the IBVS module clearer for readers. Please refer to the beginning of **Q2** in the **"General Response"** for specific modification details. We add "In IBVS, the pixel velocity indicates the desired velocity to control the correspondences' motion towards the target coordinates, which is the image-coordinate error of the correspondences multiplied by a scaling factor". 11) For issues in L201, Fig4, L7, L8, L53, L61, L125, L127, and L168, please refer to **Q2** in the **"General Response"** for specific modification details. 
12) For the typos, we performed grammatical checks on the whole paper and corrected all grammatical errors. --- Rebuttal Comment 1.1: Title: Thank you for the informative rebuttal Comment: The rebuttal addresses all the comments and questions raised in the review, which is very much appreciated. The polishing of the writing is indeed a very important part of the paper's value, so the additional effort is very valuable. W3: Indeed, the paper evaluates methods that use ground-truth depth on 12 scenes. The review will be updated accordingly. However, the DSAC family is not reported when it is considered one of the state-of-the-art methods. This is a bit surprising, especially given that the authors do report the DSAC results on 7-scenes. W2: The previous comment on RNR-map being RGB-based is due to a reference confusion: RNR-map is described in [19] and not [1]: Vision-Only Robot Navigation in a Neural Radiance World, with [1] indeed running RGB-based-only navigation. The review will be updated accordingly. #### Additional comments: - Section 3.2.2 describes the filtering of the pixel correspondences established in 3.2.1, so a more reader-friendly title could be "Correspondence filtering" or "Correspondence Geometric Verification". #### Additional questions: - Could the contribution related to the visual navigation be clarified? The current understanding is that the IBVS-Nerf is helpful to gather pixel correspondences without the need for any external markers. These pixel correspondences will then be used in the IBVS. - "Navigation prior" is not a standard term. Could it be specified? (e.g. is it a set of poses the navigation will go through? is it the initial position the navigation will start from?) --- Reply to Comment 1.1.1: Title: Response to additional comments of Reviewer F153. Comment: We sincerely thank you for your efforts in reviewing our paper and your constructive suggestions again. We really enjoy communicating with you. **W2**: Sorry for the reference confusion. 
We update the references “Nerf-nav [1] designs a smooth and collision-proof navigation strategy based on the density provided by NeRF” in “Related Work”. **W3**: DSAC++ [7] and DSAC* [8] do not report specific median errors in the 12Scenes dataset. Furthermore, HACNet [20] has better performance than DSAC*, which is the most advanced method in the DSAC family, based on the evaluation of indoor 7Scenes (**0.03/0.9** vs 0.03/1.36) and outdoor Cambridge (**0.3/0.16** vs 0.34/20.6). Therefore, we consider that the comparison between HACNet and our method is sufficient. **Additional comments** Thanks for the valuable comment; we update the title of "Section 3.2.2" to "Correspondence filtering". **Additional questions** Thanks for the valuable questions. 1. The "navigation prior" consists of non-collinear correspondences and the navigation trajectories corresponding to them, where the navigation trajectories are the set of poses the navigation will go through. We add the details in the paper to make it clear to readers. 2. General IBVS-based navigation [9,17] requires an external marker to obtain correspondences that meet the requirement of IBVS-based navigation. Specifically, IBVS-based navigation requires correspondences that are non-collinear and remain in the camera’s field of view during navigation. Based on the navigation prior provided by NeRF-IBVS, the correspondences that meet the requirement of IBVS-based navigation can be obtained without the external marker. Also, general IBVS-based navigation requires a depth sensor to get the depth of correspondences to update the Jacobi matrix of IBVS. However, our enhanced IBVS-based navigation does not require a depth sensor. Specifically, NeRF-IBVS can back-project the correspondences to get 3D points (3D correspondences) based on the rendered depth of NeRF at the beginning of navigation. 
During navigation, the depth of the correspondences is obtained by projecting the 3D points using the current pose to update the Jacobi matrix of IBVS. In summary, our method enables IBVS-based navigation without external markers or a depth sensor, in contrast to general IBVS-based navigation methods. Please refer to **"3.3 Visual Navigation"** for the details; we will further clarify the contribution related to visual navigation in the paper.
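The back-projection and re-projection steps described in the reply above can be sketched as follows. This is an illustrative implementation under the paper's Eq. (2) convention $P_w = D\,\mathbf{T}^{-1}\mathbf{K}^{-1}\mathbf{p}$, assuming $\mathbf{T}$ is the 4x4 world-to-camera transform; the function names are ours, not the authors':

```python
import numpy as np

def backproject(pix, depth, K, T):
    """Lift pixel p = (u, v) with NeRF-rendered depth D to a world point:
    P_w = T^{-1} (D K^{-1} p), following Eq. (2) of the paper."""
    p = np.array([pix[0], pix[1], 1.0])
    p_cam = depth * (np.linalg.inv(K) @ p)           # camera frame
    p_w = np.linalg.inv(T) @ np.append(p_cam, 1.0)   # world frame
    return p_w[:3]

def depth_under_pose(P_w, T_cur):
    """Depth of a 3D correspondence under the current pose T_cur,
    used during navigation to update the IBVS Jacobi matrix
    without a depth sensor."""
    return (T_cur @ np.append(P_w, 1.0))[2]
```

Round-tripping a pixel through `backproject` and `depth_under_pose` with the same pose recovers the original depth, which is the consistency the navigation scheme relies on.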
Summary: The paper presents a novel visual localization method combined with the NeRF technique to address the issue of requiring a large number of posed images to train existing visual localization models. The proposed method has two advantages: (1) the paper trains a coordinate regression network using a few posed images with coarse 3D labels generated by NeRF. (2) The paper uses an image-based visual servo to explore 3D scenes for pose optimization. The simulation experiments show superior performance in comparison with other methods. Strengths: The proposed method, NeRF-IBVS, uses a coarse-to-fine strategy to accurately localize the camera with only a few posed images. The method is equipped with two leading edges: (1) NeRF-based posed images are used to train a coordinate regression network in the coarse stage. (2) an optimization algorithm is presented to effectively optimize the coarse pose. Furthermore, the authors also design a new pipeline to reduce the rendering frequency of NeRF to speed up the pose optimization. Weaknesses: 1. The authors claim the proposed method is able to localize the camera with fewer posed images; can the authors give a rigorous description of it? For instance, how many posed images can be used, or can the authors show the reduced ratio of used posed images compared to state-of-the-art methods? 2. Can the authors give a comparison with the DSAC method [1] on the 7-Scene dataset? It seems that the proposed method performs worse than it. [1] Brachmann, Eric and Carsten Rother. “Visual Camera Re-Localization From RGB and RGB-D Images Using DSAC.” IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (2020): 5847-5865. 3. For statistical significance, the experiments should be conducted several times and the statistical significance of the results should be determined. Furthermore, some typos need to be corrected. For example, “We” should be rewritten as “we” in Line 103. “enables” should be rewritten as “enable”. 
The writing needs to be further polished. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: See the Weaknesses part Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Due to the poor rendering quality of NeRF in large outdoor scenes compared to indoor scenes, our method is mainly applied in indoor scenes. Moreover, due to the slow rendering speed of NeRF, the proposed method cannot achieve real-time performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and suggestions. **W1: The authors claim the proposed method is able to localize the camera with fewer posed images; can the authors give a rigorous description of it? For instance, how many posed images can be used, or can the authors show the reduced ratio of used posed images compared to state-of-the-art methods?** Thanks for your valuable suggestion. Please refer to **Q1** in the **“General Response”** for the number of training data. Actually, we have included these tables in the section **"1.1 Amount of Training Data for Each Scene"** of the supplementary material. And we will include these tables in the paper to make it clear to readers. **W2: Can the authors give a comparison with the DSAC method [1] on the 7-Scene dataset? It seems that the proposed method performs worse than it. [1] Brachmann, Eric and Carsten Rother. “Visual Camera Re-Localization From RGB and RGB-D Images Using DSAC.” IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (2020): 5847-5865.** Thanks for your valuable suggestion. Actually, we have performed the comparison in the 7-Scene dataset and the results are given in Table 1 of the submitted manuscript. The mentioned method DSAC is denoted as DSAC* [8]. Although the localization performance of our method is slightly lower than DSAC*, we use less data to train the model **(2090 vs 26000)** and achieve comparable performance **(0.05m/1.55$^{\circ}$ vs 0.03m/1.36$^{\circ}$)**. Furthermore, our method can enhance IBVS-based navigation, which enables navigation based on IBVS without using custom markers or a depth sensor. We have included references to baseline methods in all tables to make it clearer for readers. **W3: For statistical significance, the experiments should be conducted several times and the statistical significance of the results should be determined. Furthermore, some typos need to be corrected. 
For example, “We” should be rewritten as “we” in Line 103. “enables” should be rewritten as “enable”. The writing needs to be further polished.** Thanks for your valuable suggestion. 1) We conduct five experiments on the 7Scenes and 12Scenes datasets and calculate the mean and variance of the results, which are shown in the following table: | | 7Scenes | 12Scenes | |--------------|-----------------------|-----------------------| | **Mean** | 0.05m/1.61$^{\circ}$ | 0.02m/0.92$^{\circ}$ | | **Variance** | 3.28$\times 10^{-6}$/0.00407 | 6.45$\times 10^{-7}$/0.00079 | As shown in the table, the mean is almost identical to the results presented in the paper and the variance is very small. Therefore, the performance of the proposed method is very stable. 2) We have corrected “We” to “we” in Line 103. We have corrected “enables” to “enable” in Figure 1 and Line 62. In addition, we further polished the whole paper and performed grammatical checks. Please refer to **Q2** in the **"General Response"** for specific modification details. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. --- Reply to Comment 1.1.1: Title: Thanks for your response! Comment: We sincerely thank you for your efforts in reviewing our paper again. We hope we have resolved all your concerns.
Summary: This paper proposes a visual localization pipeline that uses NeRF and image-based visual servoing. The main contribution is that the proposed method uses much fewer images with pose annotations and no 3D ground-truth labels (only uses pseudo ground-truth provided by NeRF to train a coordinate regression network). The method first relies on the coordinate regression network to provide an initial coarse pose, which is then refined with image-based visual servoing. Strengths: The paper presents a creative approach towards the visual localization problem. I believe the utilization of NeRF to learn a coordinate regression model and as pose initialization is novel, considering that prior works used NeRFs mostly for data augmentation. Furthermore, formulating the pose refinement process as a visual servoing problem that uses state-of-the-art image matching methods and NeRF is an interesting idea which fits the problem well. Overall, the paper proposes something new and useful to an important task. Weaknesses: I have a few concerns/comments. Not all of these are necessarily weaknesses. There is discussion over iNeRF in the related work. Authors claim that limitations of iNeRF are that it requires a very good initial pose and relies on photometric loss which is sensitive to artifacts in the rendered images. This is somewhat true for the proposed approach as well: visual servoing, by definition, requires visual overlap between the observation and the query image, and any image matching method is going to be affected by rendering artifacts. In fact, the paper acknowledges this fact and has an extra verification step (section 3.2.2) to filter out outliers from unreliable Superpoint+Superglue correspondences. In my understanding using NeRF to initialize IBVS is analogous to the structured methods (such as [27,36]) retrieving a set of candidate database images to establish initial pose(s). 
The proposed method mentions that it only renders a single image C_r, but how do you ensure that C_r has visual overlap with the query? How often does this occur? Why not sample around the coarse pose T and generate multiple images? The paper uses the phrase "velocity s of the image coordinates". Isn't that just the optical flow? In terms of pipeline complexity, the method is actually closer to the structured methods (than the regression-based). I am curious to see the runtime required for a single query compared to the baseline methods. Can you provide more details as to the number of posed images required to train the proposed method vs the baselines? What is the difficulty of the queries in the visual servoing experiments? i.e. what is the pose difference / overlap between initial and desired state? It would be interesting to divide the test set into easy/hard queries based on their pose diff / overlap in order to show the limitations of the method. The authors do show the performance of the coarse pose estimation but it is not clear how sensitive IBVS is to that coarse estimate. Minor issue: Please include references of baselines in Table 1, and use "/" instead of "," to separate numbers. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors mention a couple important limitations in the main paper. I could not find discussion on potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and suggestions. **W1: The proposed method mentions that it only renders a single image $C_r$, but how do you ensure that $C_r$ has visual overlap with the query? How often does this occur? Why not sample around the coarse pose T and generate multiple images?** Thanks for your valuable suggestion. The coordinate regression network is able to ensure that the single rendered image $C_r$ corresponding to the estimated coarse pose overlaps with the query image. To verify this argument, we perform quantitative experiments on the 12Scenes dataset. We consider that the single rendered image corresponding to the coarse pose has a good visual overlap with the query image when the position and orientation error of the coarse pose is reduced after pose optimization. We therefore count how often this happens; the specific quantitative results are shown in the following table: | 12Scenes | kitchen1 | living1 | bed | kitchen2 | living2 | luke | gates362 | gates381 | lounge | manolis | 5a | 5b | Average | |-----------|----------|---------|--------|----------|---------|--------|----------|----------|--------|---------|--------|--------|--------| | **Frequency** | 96.88% | 99.59% | 94.61% | 96.67% | 95.42% | 97.76% | 100.00% | 95.16% | 99.69% | 90.46% | 90.14% | 91.60% | 95.66% | As shown in the table, the visual overlap between rendered and query images occurs with high frequency. Since there is a high probability that the rendered image corresponding to the coarse pose has visual overlap with the query image, and rendering multiple images takes a lot of time, we do not sample around the coarse pose T to generate multiple images. **W2: The paper uses the phrase "velocity s of the image coordinates". Isn't that just the optical flow?** Indeed, both the velocity of the image coordinates in our paper and optical flow represent the velocity of pixels, but they are computed differently.
In IBVS, the velocity of the image coordinates is the desired velocity used to drive the correspondences towards their target coordinates; it is computed as the image-coordinate error of the correspondences multiplied by a scaling factor. Optical flow, in contrast, is primarily computed from the pixel intensity changes between consecutive frames. **W3: In terms of pipeline complexity, the method is actually closer to the structured methods (than the regression-based). I am curious to see the runtime required for a single query compared to the baseline methods.** Indeed, our method is more time-consuming due to the slow rendering speed of NeRF. For example, the runtime comparison of our method with the state-of-the-art method HACNet [20] on our device is 3.61 seconds vs 0.07 seconds. In the future, faster and more accurate variants of NeRF can be used to further improve the overall efficiency of the proposed method. **W4: Can you provide more details as to the number of posed images required to train the proposed method vs the baselines?** Please refer to **Q1** in the **“General Response”** for the number of training data. Actually, we have included these tables in the section **"1.1 Amount of Training Data for Each Scene"** of the supplementary material, and we will include these tables in the paper to make it clear to readers. **W5: It would be interesting to divide the test set into easy/hard queries based on their pose difference / overlap in order to show the limitations of the method. The authors do show the performance of the coarse pose estimation but it is not clear how sensitive IBVS is to that coarse estimate.** Thanks for your valuable suggestion. In order to illustrate the sensitivity of IBVS to the coarse estimate, we conduct quantitative experiments on the 12Scenes dataset.
Since there is no relevant literature in the field of IBVS defining easy and hard cases, we assume that a test image whose position and orientation errors of the coarse pose are greater than the median error is a hard case, and the remaining images are easy cases. We consider that the IBVS module optimizes the pose successfully when the error of the coarse pose is reduced. We count the frequency of successful optimization of the coarse pose in the easy case and hard case respectively. Specifically, the quantitative experimental results are shown in the following table: | | kitchen1 | living1 | bed | kitchen2 | living2 | luke | gates362 | gates381 | lounge | manolis | 5a | 5b | Average | |------------------|------------|-----------|------------|------------|------------|------------|-----------|------------|-----------|------------|------------|------------|------------| | **median error** | 0.23/13.41 | 0.19/9.65 | 0.38/15.26 | 0.28/14.79 | 0.34/14.32 | 0.27/11.52 | 0.11/8.31 | 0.23/13.12 | 0.39/9.78 | 0.32/15.60 | 0.27/16.24 | 0.25/15.21 | 0.27/13.10 | | **easy case** | 100.00% | 100.00% | 99.02% | 100.00% | 100.00% | 99.36% | 100.00% | 99.05% | 100.00% | 98.25% | 99.20% | 98.03% | 99.41% | | **hard case** | 93.75% | 99.19% | 90.20% | 93.33% | 90.86% | 96.15% | 100.00% | 91.27% | 99.39% | 82.71% | 81.12% | 85.22% | 91.93% | The results in the table show that IBVS succeeds almost all the time in the easy case and fails with a small probability in the hard case. Thus IBVS is robust to most of the coarse pose estimates. **Minor issue: Please include references of baselines in Table 1, and use "/" instead of "," to separate numbers.** Thanks for pointing this out. We have added references to baselines in Table 1, and now use "/" instead of "," to separate numbers in Table 1 and Table 2. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their answers to my comments.
I appreciate both new quantitative experiments they provided in their rebuttal, especially the results with respect to the robustness of IBVS to coarse pose error. Ideally, this experiment would have been carried out by manually choosing coarse poses (such that the position and orientation error can be controlled), which would allow a more systematic evaluation of the visual servoing. However, in the context of the overall system, the results do show robustness even with relatively large errors. The authors already mentioned they will include details on the amounts of training data, and I would like to re-iterate the importance of doing so in the main paper, since one of the main arguments for the proposed approach is that it uses less data. --- Reply to Comment 1.1.1: Title: Thank you for your insightful comments! Comment: Thank you again for helping us make the paper stronger; we really enjoyed communicating with you. We will include details on the amounts of training data in the main paper.
Summary: This paper tackles the task of visual localization, i.e. estimating a camera pose from a query image. Approaches in the literature require a large number of posed images and even dense 3D supervision for some of them. The authors propose to leverage Neural Radiance Fields (NeRF) to solve the problem of visual localization while requiring fewer posed images and no 3D labels. The method can be decomposed into sequential steps: (1) A NeRF model is trained from a set of posed images, (2) a coordinate regression network is trained on the posed images using depth predictions from the NeRF model to provide coarse 3D labels, (3) a coarse camera pose is predicted by the coordinate regression network from the query image, and is refined by performing Image-Based Visual Servoing. Camera pose refinement is turned into a navigation problem from the coarse to the target pose. The paper compares the localization performance of the proposed method with different baselines of the literature on 2 datasets: 7-scenes and 12-scenes. The authors show that their method outperforms reported baselines trained without 3D labels and is on par with the ones trained with dense 3D supervision while using fewer posed training images. Finally, the authors show their NeRF-IBVS framework can also be used to perform camera navigation from an initial state to a target state, both specified as images. Strengths: 1. The paper tackles an important problem: performing visual localization from fewer posed images (compared with current state-of-the-art methods) and no required 3D dense supervision. 2. NeRF models are an interesting tool to perform visual localization and it is great to have papers studying this application. 3. The paper considers a decent number of baselines from the literature in the experimental study. Ablation studies are also performed to evaluate the gain brought by the different proposed contributions. Weaknesses: 1. [Major] Paper writing could be improved.
The current version of the paper is fine when it comes to understanding the main contributions, but several reads were needed to grasp the details of the work. I suggest the authors go through the paper again and improve its coherence and writing. 2. [Major] NeRF is a differentiable function mapping and thus, as done in previous work (iNeRF [43], as mentioned by the authors in the paper), pose refinement can be performed by freezing the NeRF weights and optimizing the camera location based on a rendering loss. It would be interesting for the authors to compare using IBVS versus optimization as in iNeRF to perform the last pose refinement step, i.e. from the coarse to the target position. Optimizing the camera pose based on NeRF rendering can be considered a simpler alternative to the IBVS framework (e.g. no need to build the $L$ matrix). 3. [Minor] This paper introduces a limited set of technical novelties. However, it tackles an important problem and nicely combines previous works, proposing simple yet efficient ideas to perform the target task. Thus, I consider this not an important issue with this work, but it should still be mentioned in this review. For example, the authors mention they “design a new fast iteration strategy that reduces the rendering frequency of NeRF”. To the best of my knowledge, this is mainly about querying the NeRF model every n iterations, which might not be considered a strong contribution. The authors might want to reconsider how they introduce this part of their work. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. [Major] Paper writing could be improved. Some sentences could be made smoother (e.g. some sentences are split in half when they should not be). Writing could also be made more concise and efficient, as some parts are quite hard to follow.
I am sorry to provide only vague directions here, but I have the feeling there is not one specific part of the paper to improve, but rather smoothness and conciseness should be improved globally to allow a better understanding. Technical sections (e.g. IBVS, correspondence selection) could be presented more progressively by defining some of the core notions early on and diving into the details more smoothly. 2. [Major] An experiment involving a comparison between IBVS and iNeRF-like camera optimization in the last step, i.e. refinement from coarse to target pose, should be performed by authors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Authors mentioned two relevant limitations of their work: 1. NeRF rendering quality can be low in large outdoor scenes, which might have an impact on the visual localization performance. Experiments are only conducted in indoor scenes in this paper. 2. The low rendering speed of NeRF models prevents their solution from running in real time. Both limitations rather come from the capabilities of current NeRF models than the proposed method. Future improvements in the field of NeRFs might allow better localization in large scenes and with faster runtime. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
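The iNeRF-style refinement the reviewer describes in W2 (freeze the NeRF weights, run gradient descent on a rendering loss with respect to the camera pose) can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: the frozen "renderer" is a fixed linear map `J` rather than a real NeRF, so this only demonstrates the optimization pattern, not iNeRF itself.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((50, 6))    # frozen "renderer" Jacobian (toy stand-in for NeRF)
pose_true = rng.standard_normal(6)  # unknown camera pose (6-DoF vector)
target = J @ pose_true              # "observed" query image features

def refine_pose(pose, lr=1e-2, steps=1000):
    """Gradient descent on the photometric-style loss 0.5 * ||render(pose) - target||^2."""
    for _ in range(steps):
        residual = J @ pose - target          # rendering residual at the current pose
        pose = pose - lr * (J.T @ residual)   # gradient step; "NeRF weights" stay fixed
    return pose

pose = refine_pose(np.zeros(6))  # converges to pose_true for this toy problem
```

In a real iNeRF-like setup the residual comes from rendered pixels and the gradient flows through the full NeRF network, which is what makes each iteration expensive, so the comparison with IBVS is largely a runtime question.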
Rebuttal 1: Rebuttal: Thank you for the constructive comments and suggestions. **W1 and Q1: Paper writing could be improved.** Thanks for your valuable suggestion. We have polished the whole paper to make it smoother and more concise for readers. Please refer to **Q2** in the **"General Response"** for specific modification details. **W2 and Q2: Optimizing camera pose based on NeRF rendering can be considered as a simpler alternative to the IBVS framework. An experiment involving a comparison between IBVS and iNeRF-like camera optimization in the last step.** Thanks for your valuable suggestion. 1) Another of our contributions is to enhance IBVS-based visual navigation; if we replaced the IBVS framework with iNeRF, our method could no longer accomplish the visual navigation task. 2) Since iNeRF performs time-consuming neural rendering at each optimization iteration, it takes more time than the IBVS framework (15.65 seconds vs 3.58 seconds). Replacing IBVS with iNeRF in our method would therefore substantially increase the time cost of pose optimization. 3) Implementing the replacement of IBVS with iNeRF in our method and testing it on the 7Scenes and 12Scenes datasets is not achievable in a short period of time. We think this replacement is a good direction for future research. **W3: “fast iteration strategy” might not be considered as a strong contribution.** Thanks for your valuable suggestion. The fast iteration strategy is simple but greatly accelerates the iteration of pose optimization. It provides a good starting point for possible future work related to IBVS utilizing the scene prior information provided by NeRF. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their efforts in trying to address my concerns and the ones of other reviewers. 1. Writing was an important issue and I am happy the authors spent time polishing the paper and adding details to allow an easier understanding of their method. 2.
I still believe the comparison with iNeRF would be interesting, but I also understand this limited rebuttal time was too short for authors to conduct such a study. The provided running time comparison is interesting. 3. The additional table providing a comparison with baselines in terms of training data quantity (presented in the general response to all reviewers) is very valuable and should be added to the main paper as data efficiency is an important claim of the paper. --- Reply to Comment 1.1.1: Title: Thank you for your insightful comments! Comment: We sincerely thank you for your efforts in reviewing our paper and your constructive suggestions again. We will include the table containing the number of training data in the main paper to demonstrate the data efficiency.
Rebuttal 1: Rebuttal: # General Response: We thank all reviewers for their insightful and constructive suggestions, which help a lot in further improving our paper. **Q1: The number of training data.** We notice that several reviewers are concerned about the number of data required to train the proposed method compared to the baseline. We provide the specific number of training data as follows: | 7Scenes | Chess | Fire | Heads | Office | Pumpkin | Kitchen | Stairs | All | |----------|-------------------|--------------------|--------------------|-------------------|-------------------|-------------------|--------------------|--------------------| | **Baseline** | 4000(100\%) | 2000(100\%) | 1000(100\%) | 6000(100\%) | 4000(100\%) | 7000(100\%) | 2000(100\%) | 26000(100\%) | | **Our** | **260(6\%)** | **280(14\%)** |**240(24\%)** | **300(5\%)** | **320(8\%)** | **350(5\%)** | **340(17\%)** | **2090(8\%)** | | 12Scenes | kitchen1 | living1 | bed | kitchen2 | living2 | luke | gates362 | gates381 | lounge | manolis | 5a | 5b | All | |----------|------------|-------------|------------|------------|------------|-------------|-------------|-------------|------------|-------------|-------------|-------------|--------------| | **Baseline** | 744(100\%) | 1036(100\%) | 868(100\%) | 768(100\%) | 725(100\%) | 1370(100\%) | 3536(100\%) | 2950(100\%) | 925(100\%) | 1613(100\%) | 1001(100\%) | 1391(100\%) | 16927(100\%) | | **Our** | **185(25\%)** | **170(16\%)** | **215(25\%)** | **190(25\%)** | **180(25\%)** | **340(25\%)** | **350(10\%)** | **295(10\%)** | **230(25\%)** | **270(17\%)** | **250(25\%)** | **280(20\%)** | **2955(17\%)** | Actually, we have included these tables in the section **"1.1 Amount of Training Data for Each Scene"** of the supplementary material. And we will include these tables in the paper to make it clear to readers. **Q2: The writing issue.** We notice that several reviewers suggest that the paper writing could be further polished. 
Therefore, we have worked on both readability and language and have also involved native English speakers for language improvement. We also added many technical details to make our paper more understandable to the readers. We list the major changes as follows: We have rewritten the method section to make the IBVS module clearer for readers and make this paper self-contained. Specifically, we add a subsection to the methods section called **“Preliminaries”** to introduce the core concepts of IBVS: “The aim of Image-Based Visual Servoing (IBVS) [9] is to control the camera to move towards the desired pose based on vision information while minimizing the error of correspondences between the current image and the target image. To achieve this goal, IBVS first calculates the desired image coordinate velocity of correspondences based on the coordinate error of correspondences. Then, the Jacobian matrix is constructed from the correspondences, establishing the relationship between the image coordinate velocity and the camera velocity. Finally, the desired camera velocity that drives the camera towards the target is solved based on the Jacobian matrix and the desired image coordinate velocity. The specific details of IBVS are presented in the supplemental materials”. We add details and fix typos in the paper. We add "M is the number of points along the ray" for Eq1. We add the explicit mention in L201: "After that, the correspondence selection algorithm can obtain accurate and non-collinear correspondences and the IBVS module can obtain navigation trajectories. Therefore we can obtain accurate and non-collinear correspondences with navigation trajectories". We add “$e(x_i), e(y_i)$ is the image coordinate error of correspondences” and correct “Deired” to “Desired” in Fig4. We update "only a few images" to "fewer images" in L7.
We update "To achieve this, we first use NeRF to generate pseudo-3D labels which are used to train the scene coordinate regression network. Then a coarse pose is estimated from the regression network. Finally, we use the image-based visual servo (IBVS) to utilize 3D scenes provided by NeRF for pose optimization" in L8. We update "Then, we establish the correspondences between the rendered image and the query image and query the approximate depth of the correspondences based on the rendering depth map. Finally, the correspondences with depth are used to launch IBVS to guide pose optimization" in L53. We update "where $x, y$ and $Z_c$ denote image coordinates and depth of correspondences in the currently rendered image" in L141. We add "Navigation prior is accurate and non-collinear correspondences and navigation trajectories corresponding to the correspondences" in L61. We rewrite the "geometric constraints" to "multi-view constraints", which are basic concepts in multi-view stereo, in L125. We rewrite the "edge areas" to "edge areas in the image" in L127. We update "Due to rendering errors of NeRF, the assumption that the 3D coordinates of correspondences are accurate is violated so accumulation errors will occur" in L168. We correct “enables” to “enable” in Fig 1 and L62. We correct “We” to “we” in L103. We add "$t_n$ is the near bound of the camera ray" in L116.
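The IBVS preliminaries quoted in Q2 (feature error → desired image velocity → camera velocity via the Jacobian/interaction matrix) can be sketched with the classical point-feature control law. This is a generic textbook formulation (Chaumette-Hutchinson style), not the authors' implementation; normalized image coordinates and known correspondence depths are assumed.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Point-feature interaction matrix: maps camera velocity
    (vx, vy, vz, wx, wy, wz) to the image velocity (x_dot, y_dot)
    of a point at normalized coordinates (x, y) with depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(current, desired, depths, lam=0.5):
    """Camera velocity command v = -lam * L^+ * e, where e stacks the
    image-coordinate errors of all correspondences (error times a gain,
    as described in the quoted preliminaries)."""
    e = (current - desired).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(current, depths)])
    return -lam * np.linalg.pinv(L) @ e
```

With at least three non-collinear correspondences the stacked interaction matrix is generically full rank, which is one reason a correspondence selection step that enforces accurate, non-collinear points matters for this kind of control law.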
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces image-based visual servoing to refine the coarse poses estimated using a coordinate regression network trained on neural-rendering-based 3D labels. The paper also uses correspondences with depth between the neural-rendered image and the query image to iteratively refine the pose of the image. The paper also shows that NeRF-IBVS can be used as a navigation prior and subsequently use IBVS navigation. Strengths: 1. The paper uses only a few images for visual localization compared to other methods out there 2. The paper does both visual localization and navigation based on visual servoing. The authors claim to be the first to do that. 3. The paper shows SOTA results on multiple datasets for both visual localization and navigation. 4. The paper provides an effective navigation prior which improves IBVS-based navigation. Weaknesses: 1. The paper trains the coordinate regression network using the approximate 3D labels obtained through neural rendering, which is not ideal. A NeRF trained with a small number of posed images has a lot of rendering noise, as discovered by the authors and shown in Fig 3. 2. It is not clear how the authors decide which pixels in the NeRF-rendered images are used to train the coordinate regression net. They do mention that image boundaries are not used, but this is not a satisfactory description. Image boundaries from NeRF may well be crisp, and parts of the image near the centre might have noise. 3. The coordinate regression network name is a bit confusing. At first glance, it appears as if it is a network that regresses the pose of the camera from the 2D image. Actually, it is regressing 3D positions for each image pixel in the global coordinate frame. 4. The IBVS module is not clear. The authors suddenly introduce camera velocity and image velocity and a Jacobian without mentioning any prior information about this - which is presumably from the visual servoing literature.
This material is provided in the supplementary, but should be included in the main paper. 5. The method is not real-time due to the time taken by neural rendering during the optimization step 6. The authors have not explored the joint training of NeRF and the coordinate regression net, which might be an avenue to pursue. 7. The quality of NeRF rendering is crucial even for getting coarse poses that are subsequently used to train the coordinate regression net, and the PnP on the 3D poses and the 2D images. This will struggle with non-Lambertian and dynamic objects, etc. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. The paper could use better neural rendering techniques for latency and accuracy of depth and image rendering. 2. No details about the RANSAC algorithm used. 3. Can the SuperGlue correspondence network used be made trainable? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: 1. The paper may not be able to handle instances where there are a lot of specular objects in the scene. This can lead to a bad coarse pose, and in turn IBVS may not converge. 2. The paper doesn't compare runtimes while comparing the performance of other visual navigation methods. Some of the methods might have compromised on quality to reduce latency. 3. The authors claim IBVS iterations are launched no more than 4 times. How was this arrived at? Again, this really depends on the quality of rendering. Any evidence to support this will help. Typos: -'Deired' -> Desired in caption for Figure 4.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive comments and suggestions. **W1 and W2: The paper trains the coordinate regression network using the approximate 3D labels provided by NeRF which is not ideal. It is not clear how the authors decide which pixels in the NeRF-rendered images are used to train the coordinate regression net.** Thanks for your insightful comment. 1) The pose optimization module corrects the coarse poses provided by the coordinate regression network with PnP, so the coordinate regression network only needs to ensure that the estimated result is in the vicinity of the target; it does not need to be very accurate. Therefore, the approximate 3D labels with rendering noise obtained through neural rendering are sufficient to train the coordinate regression network. 2) We empirically found that the rendering noise of NeRF mainly appears in scene edge regions (which lack multi-view constraints), and these generally fall in the edge regions of images; the rendering noise in the center region of the images has little effect, given that the coordinate regression network only needs to estimate results near the target. Therefore, we empirically crop out information within 40 pixels of the boundary to minimize the effect of the rendering error. 3) To verify the above arguments, we perform quantitative experiments on the 12Scenes dataset. We consider that the coarse results provided by the coordinate regression network are in the vicinity of the target when the position and orientation error of the coarse pose is reduced after pose optimization.
Therefore, we count how often this happens, and the specific quantitative results are shown in the following table: | 12Scenes | kitchen1 | living1 | bed | kitchen2 | living2 | luke | gates362 | gates381 | lounge | manolis | 5a | 5b | Average | |----------|----------|---------|--------|----------|---------|--------|----------|----------|--------|---------|--------|--------|--------| | **Frequency** | 96.88% | 99.59% | 94.61% | 96.67% | 95.42% | 97.76% | 100.00% | 95.16% | 99.69% | 90.46% | 90.14% | 91.60% | 95.66% | As shown in the table, the frequency of coarse pose error reduction is greater than 90\% in every scene, with an average of 95.66\%. Therefore, the coordinate regression network consistently outputs results near the target despite only cropping the boundary information of the image. **W3: The coordinate regression network name is a bit confusing.** Thanks for your valuable suggestions. Following HACNet [20] and DSAC* [8] in the field of visual localization, we define the coordinate regression network as one that regresses dense 3D coordinates directly from 2D images, and we add this definition to the methods section. **W4: The IBVS module is not clear.** Thanks for your valuable suggestions. We have rewritten the method section to make the IBVS module clearer for readers and make this paper self-contained. Specifically, we add a subsection to the methods section called **“Preliminaries”** to introduce the core concepts of IBVS. Please refer to **Q2** in the **"General Response"** for specific modification details. **W5: The method is not real-time due to the time taken by neural rendering.** Thanks for your insightful comment. Efficiency is not the main problem addressed in the paper; we mainly take full advantage of the prior knowledge of the scene provided by NeRF to enhance visual localization and visual navigation.
For example, on the 7Scenes dataset, the proposed method obtains better localization performance than advanced real-time methods such as FeatLoc++Au [2] (ours: 0.05m/1.55$^{\circ}$ vs FeatLoc++Au: 0.14m/5.89$^{\circ}$) and uses fewer posed images for supervision (ours: 2090 vs FeatLoc++Au: 26000). Furthermore, our method enables navigation based on IBVS without using custom markers or a depth sensor, in contrast to general IBVS-based navigation methods [9,17]. In the future, we can try to use faster and more accurate variants of NeRF to improve the overall efficiency of the proposed method. **W6: The authors have not explored the joint training of NeRF and the coordinate regression net.** Thanks for your valuable suggestions. The coordinate regression network needs the approximate labels provided by NeRF for training. Therefore, it is necessary to train NeRF first and then train the coordinate regression network. If the NeRF and coordinate regression network were trained jointly, it might be necessary to focus on the NeRF training in the early stage and on the coordinate regression network training in the later stage, which might be a worthy future direction. **W7: The quality of NeRF rendering is crucial. The proposed method will struggle with non-Lambertian and dynamic objects, etc.** Indeed, the quality of NeRF rendering is crucial for our method. A fundamental solution to this problem (caused by non-Lambertian and dynamic objects, etc.) requires further improvements in NeRF. In the future, improved NeRF variants can be used to improve the performance of our method. **Q1: The paper could use better neural rendering techniques for latency and accuracy of depth and image rendering.** Theoretically, a better NeRF does lead to performance improvements, which is a good direction for future research. **Q2: No details about the RANSAC algorithm used.** Sorry for the unclear details. Actually, RANSAC is a common module in the field of visual localization.
We have explained it in detail in **“1.4 Details of Pose Optimization and Correspondence Selection”** of the supplementary material. **Q3: Can the SuperGlue correspondence network used be made trainable?** Yes, SuperGlue can be trained, but generating the training labels requires a lot of manual annotation. Therefore, we do not train SuperGlue on our datasets and only use its official weights. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. You have answered some of my questions, but I maintain that the paper could be more clearly written. For example, I think you should motivate why exactly you need a separate coordinate regression network to get depth for the new rendered image? Don't you get the depth from NeRF already? If it is because NeRF depth is noisy, you should do an ablation study using NeRF depths to optimize the visual servoing. I will maintain my previous rating. --- Reply to Comment 1.1.1: Title: Thank you for your comment! Comment: We sincerely thank you for your efforts in reviewing our paper and your constructive suggestions again. We have added many details to make our paper clearer to the readers. Please refer to Q2 in the "General Response" for the major modification details. **The concern about the coordinate regression network:** Our method adopts a coarse-to-fine paradigm to estimate the pose of the query image. In the coarse stage, the coordinate regression network with PnP is used to provide the coarse pose for the query image. Specifically, the coordinate regression network takes the query image (not the newly rendered image) as input and outputs its coarse 3D coordinates in the world coordinate frame. Finally, the coarse pose is obtained by PnP based on the coarse 3D coordinates and 2D image coordinates. --- Rebuttal 2: Comment: Dear reviewer iS7u, could you please tell the authors (and us) whether your concerns have been answered? Best, AC
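The depth "from NeRF" that the reviewer asks about in the final comment is the expected ray-termination depth computed from the density samples along each camera ray. Below is a minimal numpy sketch of the standard volume-rendering quadrature; this is generic NeRF math, not the authors' code, and `sigmas`/`ts` are assumed per-ray sample arrays.

```python
import numpy as np

def expected_depth(sigmas, ts):
    """Expected depth along one ray from NeRF density samples.

    sigmas: volume densities at the M samples along the ray
    ts:     increasing sample distances from the camera (length M)
    Standard quadrature: alpha_i = 1 - exp(-sigma_i * delta_i),
    w_i = T_i * alpha_i with transmittance T_i = prod_{j<i}(1 - alpha_j),
    and depth = sum_i w_i * t_i.
    """
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return float(np.sum(weights * ts)), weights
```

A single near-opaque sample at distance 2 yields a depth of about 2; noisy densities spread the weights over the ray and blur this estimate, which is the pseudo-label noise discussed under W1/W2.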
Geometric Transformer with Interatomic Positional Encoding
Accept (poster)
Summary: The paper proposes a Transformer-based architecture incorporating interatomic information based on a learned Atomic Cluster Expansion, integrated into the self-attention and refined residually over the network. The method obtains state-of-the-art performance on a majority of prediction tasks on two popular molecular datasets. Strengths: The paper is well written. The main novelty lies in the integration of learned ACE-based information into the model and its residual refinement. The performance improvements seem highly significant. Weaknesses: I believe the most significant weakness of the paper is the lack of a literature review, which decreases the novelty level of the suggested method. Transformer-based architectures for molecular property prediction have been a field of research for at least three years. The manuscript cites only two related papers and compares its performance to a very low-accuracy Transformer model. To name a few of the first:
Molecule attention transformer. arXiv:2002.08264, 2020
Relative molecule self-attention transformer. arXiv:2110.05841, 2021
3D-Transformer: Molecular representation with transformer in 3D space. arXiv:2110.01191, 2021
Geometry-aware transformer for molecular property prediction. arXiv:2106.15516, 2021
Geometric transformer for end-to-end molecule properties prediction. arXiv:2110.13721, 2021
Thus, the proposed embeddings and the integration of pairwise information into self-attention are not that novel. For example, the initial IPE is similar to PhysNet, and the integration of pairwise (also multi-scale) information into the transformer is similar to the references above (or other Transformer-based propositions applied to other modalities).
Also, the proposed solution may be suboptimal compared to the literature and has not been ablated enough (e.g., learned or handcrafted cut-off functions, the integration modality of the IPE into self-attention beyond the addition presented in the Appendix's ablation; please see Questions). Incorporating information from molecular mechanics/force fields into neural networks has been proven to be a powerful tool ([31,37,10,32 and many others]) by including beneficial inductive bias. Thus, and as stated in the strengths, besides the very good results, I see the main contributions of the paper in the integration of augmented ACE information and also in the residual refinement of the IPE as the "neural" novelty. Finally, the proposed model seems to have at least 20% higher capacity than the competing transformers and probably more compared to the others. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) From the ablation study, we can observe that variant 1 has the smallest impact on accuracy decrease among the two variants. To better understand the impact of ACE/IPE, it would be nice to see the ablations of Variant 1 coupled with the residual procedure proposed in the paper (it should not be worse at least). 2) The Transformer literature (including molecular) provides many possible integrations of the IPE. One should provide more ablations rather than only with the standard sum (and besides the statements in line 194-) 3) Can you provide a capacity comparison with the other methods? The (significant) improvement in accuracy may also come from a higher model capacity and better and/or longer training. Also, regarding reproducibility, some details regarding the training are missing (e.g., the number of epochs) 4) How do you explain that employing a minimal number of expansions and harmonics still leads to such substantial results? 
Also, in case such minimal hyperparameters are chosen, I believe one may simplify the cumbersome general case notation into a clear and easier-to-implement formulation. 5) The Weaknesses Section above should be addressed (e.g., novelty, comparisons). Given the very good performance of the model, addressing these points may certainly raise the paper's impact and rating. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No specific/significant limitations are presented. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response Thanks for your valuable comments. We provide the point-to-point responses in the following. ### Weakness 1 1. Sorry for the lack of a literature review toward Transformer-based architectures. The works you mentioned are pioneers in molecular modeling. We introduce them in the Introduction and Related Work, and obtain valuable insights from them to design some variants of positional encoding (see Question 5 for the novelty and comparison). 2. The key idea of the different methods is to design more effective positional encodings. According to your suggestion, we have added these parts to the manuscript and conducted more ablation studies (see Questions 1 and 2). 3. Right. Thanks for your acknowledgement. In addition, the prevailing molecular mechanics/force field methods are based on EGNNs. Here we revisit the power of Transformers with the proposed IPE, and we believe this is one of the novelties of this paper. 4. The introduction of IPE does introduce additional parameters. When we used the same settings as Transformer-M, such as the number of layers and hidden layer dimensions, we found that the parameters of the model went up a bit (about 8% higher capacity). Modern EGNN models are indeed relatively small (< 10M); however, due to the CG-product or graph operations in their implementations, the Transformer-based approach has a similar training speed to them (see Question 3 for details). ### Question 1 Thanks for your comments. This suggested experiment does provide further evidence of the validity of our proposed framework for IPE. As you suggested, we update the initialized IPE using residuals that come from the transformation of the initial IPE itself, with no further information introduced, i.e. 
$C_{\eta}^{l+1} = \operatorname{SiLU} \left(C_{\eta}^{l}W_{C}^l \right) + C_{\eta}^{l}$ The experimental results are as you expected: the residual update brings slight performance gains (Non-updated IPE **0.563**; Residual IPE without additional information **0.551**; Geoformer **0.443**), while the main performance improvement comes from the additional information $\delta C_{\eta}^l = \sum_{\tau=1}^{\eta} W_{\tilde{B}}^l \sum_{v_{\tau}'} C_{vv_{\tau}'} \tilde{A_{vv_{\tau}'}}$ introduced to the IPE module. ### Question 2 Thanks again for pointing out these works. We add 3 additional variants (besides the one in Question 1) drawn from the ideas of the four papers above: - for **Variant 1** from *Molecule attention transformer*, we modify the IPE and attention block as: $$ A=\left(\lambda_a \operatorname{SiLU}(\frac{QK^{T}}{\sqrt{d_k}})+\lambda_c\bf{C}_{\eta}\right)V $$ setting $\lambda_a=\lambda_c=0.33$. - for **Variant 2** from *Geometry-aware transformer for molecular property prediction*, we modify the IPE following its paper: $$ A=\frac{(QK^{T}\odot\bf{C}_{\eta})}{\sqrt{d_k}} \cdot V $$ - for **Variant 3** from *Geometric transformer for end-to-end molecule properties prediction*, we modify the IPE following its paper: $$ A=\operatorname{Softmax}(\frac{QK^{T}}{\sqrt{d_k}})\odot \bf{C}_{\eta} $$ Due to the resource and time limitations, we only conducted the experiments on the energy $U_0$: | | Variant 1 | Variant 2 | Variant 3 | | --- | --- | --- | --- | | U0 | 7.43 | 5.02 | 5.34 | We discovered that the variants involving the positional encoding **multiplied** with *Key* and *Query* perform better than those using summation, implicitly supporting our theorem. 
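As a purely illustrative aside (a toy sketch in plain Python, not the paper's implementation), the difference between the additive (bias-style) and multiplicative ways of injecting a pairwise encoding $C$ into self-attention, which the variants above compare, can be written out as:

```python
import math

def matmul(X, Y):
    """Naive matrix product for small nested-list matrices."""
    return [[sum(X[i][t] * Y[t][j] for t in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def softmax_rows(S):
    """Row-wise softmax with the usual max-subtraction for stability."""
    out = []
    for row in S:
        m = max(row)
        e = [math.exp(v - m) for v in row]
        z = sum(e)
        out.append([v / z for v in e])
    return out

def attention(Q, K, V, C, mode):
    """mode='add': softmax(QK^T/sqrt(d) + C) V   (encoding as attention bias)
       mode='mul': softmax((QK^T * C)/sqrt(d)) V (elementwise product, as in
       the multiplicative variants discussed above)."""
    d = len(Q[0])
    KT = [list(col) for col in zip(*K)]
    S = matmul(Q, KT)
    n = len(S)
    if mode == "mul":
        S = [[S[i][j] * C[i][j] / math.sqrt(d) for j in range(n)] for i in range(n)]
    else:
        S = [[S[i][j] / math.sqrt(d) + C[i][j] for j in range(n)] for i in range(n)]
    return matmul(softmax_rows(S), V)
```

A sanity check on the sketch: with an all-ones $C$ in multiplicative mode, or an all-zeros $C$ in additive mode, both reduce to vanilla scaled dot-product attention and yield identical outputs.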
### Question 3 We have provided the model size and overall training time in the table below, and to answer your question, we have also provided a model of the same size as the EGNNs. You can see that although the performance of the smaller model drops a bit compared to the larger model, it is still SOTA when compared to the other methods. Following the previous studies, the maximum number of epochs is set to 600. We will add the detailed settings in the appendix. | | Model Size | Overall Training Time (GPU-hours) | MAE on U0 | MAE on U | MAE on H | MAE on G | | --- | --- | --- | --- | --- | --- | --- | | SEGNN | 1.03M | 81 | 15 | 13 | 16 | 15 | | TorchMD-NET | 6.86M | 92 | 6.15 | 6.38 | 6.16 | 7.62 | | Equiformer | 3.53M | 61 | 6.59 | 6.74 | 6.63 | 7.63 | | Transformer-M | 47.4M | - | 9.37 | 9.41 | 9.39 | 9.63 | | Geoformer | 50.1M | 55 | 4.43 | 4.41 | 4.39 | 6.13 | | Geoformer-S | 6.4M | 20 | 5.20 | 5.12 | 5.19 | 6.78 | ### Question 4 In MACE [1], using $l=2$ could surpass most of the methods with higher-order geometric tensors. Since the basis function matrix in Eq. 7 can represent the $\epsilon+1$ and $\epsilon+2$ body expansions simultaneously, together with the power of Transformers, we could use a lower degree to achieve comparable results. To obtain a rigorous and universal derivation, we consider the higher degrees of spherical harmonics. [1] Batatia, Ilyes, et al. "MACE: Higher order equivariant message passing neural networks for fast and accurate force fields." *Advances in Neural Information Processing Systems* 35 (2022): 11423-11436. ### Question 5 All five studies endeavor to incorporate distance matrices for representing geometrical models, with certain investigations introducing the adjacency matrix (MAT) or chemical bond information (R-MAT) to further characterize geometries. 
Moreover, several studies examine the incorporation of attention bias; for instance, Molformer employs Adaptive PE to model molecules of varying sizes, GeoT forgoes softmax in favor of distance matrices as a scaling factor, and the application of multiplication for PE has been previously tried in the Geometric Transformer. In contrast to these pioneering efforts, our methodology is grounded in the principles of ACE, providing a theoretical foundation for the empirical effectiveness of multiplication. Furthermore, as you mentioned, we introduced IPE based on ACE beyond pairwise distances, continuously refining it and bridging the performance gap between transformer-based approaches and EGNNs. --- Rebuttal Comment 1.1: Comment: Thank you for your answers and new experiments; I think they really can improve the paper. I have a few remaining questions/comments: ### Weakness 1 - Do you mean you "will" introduce them? ### Question 2 Thank you for running the experiments. Variant 1 is similar to what you did in the ablations and has already been shown to underperform. Regarding variant 2, there are some scaling and attention missing there. Regarding variant 3, there should have been a learned parameterized mapping $\phi$ s.t. you multiply your attention by $\phi(C_{\mu})$ in order to learn the proper masking. ### Question 3 There are still some issues here. For example, PaiNN has 600K parameters and almost reaches Geoformer-S. Also, Transformer-M performance is very low and cannot really be taken into account in comparison, while the performance of the other Transformers should be added too. To summarize, I still don't know if the very good performance comes from the high capacity of your model, the ACE, or from $SiLU$/attention scaling. (I now know they are not due to the architecture residuals (question 1)). For instance, applying other Transformer models (e.g. the tested Variants 2-3 or others) with ACE may reach better performance at equal capacity. 
### Question 4 I assume I can then summarize, as in the former review, that the main novelty comes from the integration of the ACE into existing Transformer architectures. --- Reply to Comment 1.1.1: Comment: Thank you for your active response! We provide the point-to-point responses for the remaining concerns in the following. ### Weakness 1 We have introduced these works in our **Introduction**, **Related Work**, and **Ablation section** of our revised manuscript. The modified expression in our manuscript is as follows: > Several works have explored the integration of different types of positional encoding in Transformers. The MAT [1] and R-MAT [2] approaches methodologically introduce inter-atomic distances and chemical bond information as domain-specific inductive biases into the Transformer architecture. Molformer [3] employs Adaptive PE to model molecules of varying sizes; GeoT [4] forgoes softmax in favor of distance matrices as a scaling factor, and the application of multiplication for PE has been previously tried in the Geometric Transformer [5]. [1] Maziarka, Łukasz, et al. "Molecule attention transformer." *arXiv preprint arXiv:2002.08264* (2020). [2] Maziarka, Łukasz, et al. "Relative molecule self-attention transformer." *arXiv preprint arXiv:2110.05841* (2021). [3] Wu, Fang, Dragomir Radev, and Stan Z. Li. "Molformer: Motif-based transformer on 3d heterogeneous molecular graphs." *Proceedings of the AAAI Conference on Artificial Intelligence*. Vol. 37. No. 4. 2023. (Old name: 3dtransformer: Molecular representation with transformer in 3d space. arXiv:2110.01191, 2021) [4] Kwak, Bumju, et al. "Geometry-aware Transformer for molecular property prediction." *arXiv preprint arXiv:2106.15516* (2021). [5] Choukroun, Yoni, and Lior Wolf. "Geometric transformer for end-to-end molecule properties prediction." *arXiv preprint arXiv:2110.13721* (2021). ### Question 2 Indeed, we have adapted our framework in strict accordance with the methods in the original papers. 
With respect to Variant 2, GeoT intentionally replaces the $Softmax$ function with an alternative scaling method. In alignment with this, we use IPE to **replace** the aforementioned scaling function $\phi_w(D)$. With respect to Variant 3, since the IPE is residually updated and trainable, we have made a direct substitution of the original $\phi_w(D)$ with the IPE. In summary, our investigation encompasses a range of variants, comprising addition, weighted addition, no activation, and multiplication after activation, as well as the variant presented in Question 1. Our choice, the "multiplication before activation" approach, performs the best and is derived theoretically. ### Question 3 1. As per your suggestion, we have included a performance comparison with other Transformers as follows, and we have added them as baselines in our revised manuscript. These comparisons highlight the effectiveness of our proposed method in the context of related Transformer-based approaches. | | $\mu$ | $\alpha$ | HOMO | LUMO | gap | $R^2$ | ZPVE | U0 | U | H | G | Cv | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Molformer | 28 | 41 | 25 | 26 | 39 | 350 | 2.05 | 7.52 | 7.46 | 7.38 | 8.11 | 25 | | GeoT | 29.7 | 52.7 | 25.0 | 20.2 | 43.9 | 300.8 | 1.73 | 11.1 | 11.7 | 11.3 | 11.7 | 27.6 | | Geometric Transformer | 26.4 | 51 | 27.5 | 20.4 | 36.1 | 157 | 1.24 | 7.35 | 7.55 | 7.73 | 8.21 | 28.0 | | Geoformer | 10 | 40 | 18.4 | 15.4 | 33.8 | 27.5 | 1.28 | 4.43 | 4.41 | 4.39 | 6.13 | 22 | 2. Furthermore, as mentioned in Question 2, these Transformer-based approaches gained performance improvements after introducing IPE (7.35 vs 5.34; 11.1 vs 5.02). In summary, the ablation studies among all variants demonstrate the effectiveness of the proposed IPE. 3. PaiNN's small number of parameters can be attributed to its simple operations (MLP). 
Moreover, EGNNs that utilize attention blocks (see Equiformer and TorchMD-NET) tend to have a relatively large number of parameters as well. However, it is difficult to achieve further performance improvement by merely expanding PaiNN due to the over-smoothing problem. While the size of the model may contribute to the performance, we believe that the primary source of performance stems from the architectural updates and the information provided by IPE. Moreover, we think the favorable scaling between model size and performance gains is a positive attribute. This advantage is particularly crucial in the rapidly evolving field of molecular modeling, where the ability to efficiently train large-scale models can lead to significant advancements in scientific discovery and practical applications. ### Question 4 You're right. In this paper, our primary goal is to bridge the performance gap between Transformer-based methods and EGNNs, which include strong inductive biases, resulting in better performance. So, with the assistance of ACE theory, we **designed an architecture that achieves better performance** both **theoretically** and **practically** to **integrate the extra information IPE brings**. We appreciate your recognition of the performance and contributions in our paper.
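As a toy numerical sketch of the residual refinement compared under Question 1 of this exchange (our illustration with scalar encoding entries and a single shared scalar weight, not the paper's code), the update $C_{\eta}^{l+1} = \operatorname{SiLU}(C_{\eta}^{l}W_{C}^{l}) + C_{\eta}^{l}$ with an optional injected term can be written as:

```python
import math

def silu(x):
    # SiLU(x) = x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def refine_ipe(C, W, delta_C=None, layers=3):
    """Scalar-per-pair caricature of C^{l+1} = SiLU(C^l W) + C^l (+ delta_C^l).

    C: list of pairwise encoding values; W: a shared scalar weight (hypothetical,
    standing in for the per-layer matrix W_C^l); delta_C: optional extra
    information injected at each layer, as in the ablation above.
    """
    for _ in range(layers):
        C = [silu(c * W) + c for c in C]
        if delta_C is not None:
            C = [c + d for c, d in zip(C, delta_C)]
    return C
```

A degenerate check on the sketch: with `W = 0` the residual branch is inert (SiLU(0) = 0), so refinement without injected information leaves the encoding unchanged; any change then comes entirely from the `delta_C` term.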
Summary: The authors propose an Interatomic Positional Encoding (IPE), and introduce the Geoformer. IPE is motivated by Atomic Cluster Expansion (ACE), which describes the many-body contributions in transformers for molecular modeling. The authors first derive integrated cluster potentials for atom pairs i,j in a molecule, then leverage the interatomic potentials to propose the interatomic positional encoding. Further, the authors introduce the Geoformer architecture that implements IPE. Experimentally, the authors compare Geoformer with popular molecular baselines on two datasets, QM9 and Molecule3D, and achieve the best results on most of the tasks. Additionally, the authors conduct ablation studies to verify the effectiveness of the interatomic positional encoding. Overall, the paper proposes a novel approach, and the experimental results show better model performance than existing methods. Strengths: The approach inspired by Atomic Cluster Expansion is novel. Indeed, positional encodings for modelling interatomic interactions have always been one of the important factors in designing effective graph transformers for molecular data. Inspired by atomic cluster expansion, the authors propose a novel interatomic positional encoding that 'successfully' describes the interactions between atoms in a molecule, with theoretical justification. Experimentally, Geoformer surpasses existing methods by a certain margin, which demonstrates the effectiveness of their approach. Overall, the paper presents a good quality of work with clear writing. Weaknesses: First, I find this paper is not so well-motivated. TorchMD-Net [1] models interatomic interactions by distance filters, and MUformer [2] leverages it and uses additional 2D structures in the distance filters to model interatomic interactions. So, employing such information in Transformers is not completely underdeveloped. And it is unclear to me how ACE/IPE surpasses previous methods in terms of effectiveness and efficiency. 
In addition, the overall architecture suffers from numerical and training instabilities, a limitation the authors have also mentioned in section 5. Also, the authors do not compare the overall training time and model parameters in the experimental section; it is important to see the comparison of model sizes. Also, the error bars (standard deviations) are not shown in the tables. The authors do not perform sufficient experiments on proper datasets; it is also important to include experiments on MD17. [1] Thölke, Philipp, and Gianni De Fabritiis. "Torchmd-net: equivariant transformers for neural network based molecular potentials." arXiv preprint arXiv:2202.02541 (2022). [2] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation." arXiv preprint arXiv:2304.14621 (2023). Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I would like to hear from the authors about how ACE/IPE surpasses the simplest positional encodings in [1] and [2] in terms of expressive power and calculation time. Moreover, how do the authors choose the trade-off between efficiency and effectiveness? 2. The authors mention that the architecture suffers from training instabilities. I wonder if there could be other techniques for stable training other than just using small learning rates. 3. I would like to see the comparison of model size (number of parameters) and training time, as well as the standard deviation over experimental runs. 4. I would like to see the comparison of model performance on the MD17 dataset, as MD17 provides more valid molecules. I also wonder why the authors did not perform experiments on MD17 in the beginning, as QM9 and MD17 are two of the most standard datasets for molecular simulation. [1] Thölke, Philipp, and Gianni De Fabritiis. "Torchmd-net: equivariant transformers for neural network based molecular potentials." arXiv preprint arXiv:2202.02541 (2022). [2] Hua, Chenqing, et al. 
"MUDiff: Unified Diffusion for Complete Molecule Generation." arXiv preprint arXiv:2304.14621 (2023). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response Thanks for your valuable comments. We provide the point-to-point responses at length in the following. ### Weakness 1 1. The distance filter (Radial Basis Functions, RBF) in TorchMD-Net is a sub-class of the Atomic Cluster Expansion (ACE), which can be considered the 2-body expansion. We mentioned it in Section 2.2, line 88. Here we add a detailed explanation. The basis function in ACE is a combination of radial basis functions (RBF) and spherical harmonics (SHs), $\phi(r)=\sqrt{4\pi}R_{nl}(\Vert r \Vert)Y_l^m(r)$ in the original paper, with $n$ denoting the degree of the RBF, $l$ denoting the degree of the SHs, and $m$ denoting the index of the SHs. If we set $l=0$, the SH (together with the $\sqrt{4\pi}$ prefactor) reduces to the scalar $1$. Therefore, the basis functions remain RBFs, which is the form adopted by TorchMD-Net as the distance filter. 2. Compared to MUformer, which treats the 3D structure information as an attention bias in the attention block, we provide a theoretical proof in Theorem 2, explaining why we directly multiply the proposed interatomic positional encoding (IPE) with the *Key* and *Query* from the perspective of ACE. In a nutshell, we believe that employing the geometric information itself is not novel, but how to effectively utilize it is the most essential question. Concerns about instability, the comparison of model size and training time, as well as the standard deviations, are addressed in Questions 2 and 3. ### Weakness 2 Thank you for raising this concern. The main issue for us is that the MD17 experiment requires using 0.7% of the data for training because the data are generated from the same trajectory. This setup aims to test whether the machine learning force field is data-efficient. However, we think this is not suitable for Transformers, which prefer a large amount of data. Instead, we chose the prevailing QM9 and Molecule3D benchmarks, containing 130,831 and 3,899,647 molecules, respectively. 
However, as you suggested, we have conducted experiments on MD17 and report the energy and force errors. The results show that, equipped with IPE, Transformers can also be applied to problems in limited-data scenarios. **Energy MAE** | Molecule | Geoformer | NequIP | TorchMD-Net | GemNet | PaiNN | | --- | --- | --- | --- | --- | --- | | Aspirin | **0.116** | 0.131 | 0.123 | - | 0.167 | | Ethanol | **0.051** | **0.051** | 0.052 | - | 0.064 | | Malonaldehyde | **0.074** | 0.076 | 0.077 | - | 0.091 | | Naphthalene | 0.087 | 0.113 | **0.085** | - | 0.116 | | Salicylic Acid | **0.093** | 0.106 | **0.093** | - | 0.116 | | Toluene | 0.078 | 0.092 | **0.074** | - | 0.095 | | Uracil | **0.095** | 0.104 | **0.095** | - | 0.106 | **Forces MAE** | Molecule | Geoformer | NequIP | TorchMD-Net | GemNet | PaiNN | | --- | --- | --- | --- | --- | --- | | Aspirin | **0.169** | 0.184 | 0.253 | 0.217 | 0.338 | | Ethanol | **0.063** | 0.071 | 0.109 | 0.085 | 0.224 | | Malonaldehyde | **0.115** | 0.129 | 0.169 | 0.155 | 0.319 | | Naphthalene | 0.043 | **0.039** | 0.061 | 0.051 | 0.077 | | Salicylic Acid | **0.088** | 0.090 | 0.129 | 0.125 | 0.195 | | Toluene | **0.044** | 0.046 | 0.067 | 0.060 | 0.094 | | Uracil | **0.066** | 0.076 | 0.095 | 0.097 | 0.139 | ### Question 1 As discussed in Theorem 1, Remark, line 145, IPE with $v=1$ and $\eta=1$ can be interpreted as introducing more geometric information, like angles and dihedrals, with linear time complexity. In conjunction with the response to Weakness 1, we believe this explains why ACE/IPE outperforms those simple positional encodings, i.e., distance filters and the sum of Gaussian distances. ### Question 2 The training instability is a common issue in EGNNs, like PaiNN and TorchMD-Net (e.g., see https://github.com/shehzaidi/pre-training-via-denoising/issues/3#issuecomment-1324959415), as well as other Transformers. However, to ensure the **equivariance** of the basis functions, we **cannot** directly apply *LayerNorm* to them. 
Here we attempt to linearly shrink the norm of the basis over the last dimension to stabilize training. ### Question 3 We list the standard deviations from two additional repeated trials on the QM9 dataset; from the results, it can be seen that the model is not particularly sensitive to the partition of the dataset or the initial parameters. | | $\mu$ | $\alpha$ | HOMO | LUMO | gap | $R^2$ | ZPVE | U0 | U | H | G | Cv | | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | | Std | 0.38 | 1.65 | 0.51 | 0.73 | 0.76 | 0.69 | 0.05 | 0.15 | 0.14 | 0.13 | 0.13 | 0.41 | We directly used the model sizes and training times reported in Equiformer and Transformer-M, and experimented with Geoformer on an NVIDIA V100 GPU. It can be seen that, in contrast to the Transformer-based method, our method increases the number of parameters by about 8% (47.4M vs 50.1M). Compared to the EGNNs, although the number of parameters of our model is much larger than theirs, we can achieve faster training speed because some operators in EGNNs slow down training. In addition, to verify that the improvement in model performance does not only come from the larger model size, we also experimented with a model of a similar size to the EGNNs; although its performance drops compared to the larger model, it is still generally better and significantly faster than the other methods. | | Model Size | Overall Training Time (GPU-hours) | MAE on U0 | MAE on U | MAE on H | MAE on G | | --- | --- | --- | --- | --- | --- | --- | | SEGNN | 1.03M | 81 | 15 | 13 | 16 | 15 | | TorchMD-NET | 6.86M | 92 | 6.15 | 6.38 | 6.16 | 7.62 | | Equiformer | 3.53M | 61 | 6.59 | 6.74 | 6.63 | 7.63 | | Transformer-M | 47.4M | - | 9.37 | 9.41 | 9.39 | 9.63 | | Geoformer | 50.1M | 55 | 4.43 | 4.41 | 4.39 | 6.13 | | Geoformer-S | 6.4M | 20 | 5.20 | 5.12 | 5.19 | 6.78 | ### Question 4 Same as Weakness 2. 
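The $l=0$ reduction invoked in the response to Weakness 1 above can be checked numerically. The sketch below (our illustration, using a hypothetical Gaussian RBF rather than the paper's basis) relies on $Y_0^0 = 1/\sqrt{4\pi}$, so the $\sqrt{4\pi}$ prefactor cancels and only the radial distance filter survives:

```python
import math

def gaussian_rbf(dist, center, gamma=10.0):
    # A hypothetical radial basis function R_nl(|r|), used as a distance filter.
    return math.exp(-gamma * (dist - center) ** 2)

Y_0_0 = 1.0 / math.sqrt(4.0 * math.pi)  # the l = 0 spherical harmonic is constant

def ace_basis_l0(dist, center):
    # phi(r) = sqrt(4*pi) * R_nl(|r|) * Y_l^m(r), evaluated at l = m = 0
    return math.sqrt(4.0 * math.pi) * gaussian_rbf(dist, center) * Y_0_0
```

For any distance, `ace_basis_l0` coincides with the plain RBF, illustrating the claim that an RBF distance filter is the 2-body ($l=0$) special case of the ACE basis.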
--- Rebuttal Comment 1.1: Title: Response by Reviewer Comment: The authors have addressed my concerns in a respectful way. I will change my score from 4 to 5. I hope the authors can properly cite [1][2] in their revised version. [1] Thölke, Philipp, and Gianni De Fabritiis. "Torchmd-net: equivariant transformers for neural network based molecular potentials." arXiv preprint arXiv:2202.02541 (2022). [2] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation." arXiv preprint arXiv:2304.14621 (2023). --- Reply to Comment 1.1.1: Comment: We appreciate your constructive feedback and are glad to have addressed your concerns effectively. As per your suggestion, we have introduced and cited the following two papers in our revised manuscript: > Several works have incorporated interatomic interactions into molecular modeling. For instance, TorchMD-Net [1] included the radial basis functions (RBF) as a distance filter applied to the attention matrix. MUformer [2] further extended the distance filter by incorporating additional 2D structural information, resulting in improved performance and applicability to molecule 2D-3D co-generation. > [1] Thölke, Philipp, and Gianni De Fabritiis. "Torchmd-net: equivariant transformers for neural network based molecular potentials." arXiv preprint arXiv:2202.02541 (2022). [2] Hua, Chenqing, et al. "MUDiff: Unified Diffusion for Complete Molecule Generation." arXiv preprint arXiv:2304.14621 (2023). We sincerely hope that our revised manuscript will make a meaningful contribution to the research community. Thank you for your support and recognition of our efforts.
Summary: The paper introduces Geoformer, a new geometric transformer for molecular property prediction. The authors argue that while transformers have been dominant in various data modalities, their application to molecular modeling has been limited. To address this, the authors propose Interatomic Positional Encoding (IPE) based on atomic cluster expansion (ACE) theory, which captures complex interactions within molecules. They evaluate Geoformer on the QM9 dataset and the Molecule3D dataset, demonstrating its superior performance compared to the existing methods. Strengths: I am unfamiliar with this topic, but the idea of capturing complex geometric features by introducing Interatomic Positional Encoding seems novel. The motivation is well explained, and the authors also provide some mathematical proofs. The presented positional encoding may contribute to this field. The results on the QM9 dataset and the Molecule3D dataset show that Geoformer outperforms other competitors in terms of most metrics. This highlights the effectiveness of the proposed method in capturing and utilizing valuable geometric information. Weaknesses: I did not see a major weakness, but I have some minor concerns about the experiments. Although this paper handles the problem of molecular property prediction, it seems that the presented geometric transformer block could potentially serve as a plugin in other machine learning and computer vision tasks. For example, the task of point cloud analysis could also benefit from describing complex geometric features. It would be beneficial and convincing if the authors could show the effectiveness of the presented method in such a context. The presented positional encoding includes some learnable parameters, which is different from the traditional encoding strategy, and the encoding is dynamically updated. Therefore, the ablation study towards the effectiveness of the presented method is crucial. 
However, the authors put some results in the appendix instead of the main paper, which downgrades the importance. It would be better if the authors could shed more light on the ablation study in the main paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses for my concerns about the potential applications of this method. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The method could be computationally expensive as the presented positional encoding involves additional learnable parameters. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response Thanks for raising some valuable concerns. We provide the point-to-point response in the following. ### Weakness 1 After conducting a literature review on point cloud analysis, we discovered related works that incorporate geometric information into point cloud modeling [1-5]. Though few of these studies consider *equivariance* as in molecules, several works like TFN [7] are well known, and the most relevant work would be that of Qin et al. [6], who propose *Pair-wise Distance Embedding* and *Triplet-wise Angular Embedding* in Transformers as an attention bias to the *Key*. Therefore, we believe our proposed interatomic positional encoding (IPE) has the potential to generalize to the field of point cloud analysis beyond molecules. Due to the time limitation, we cannot present more results in such a context, but we will investigate its effectiveness in the future. Thanks for your valuable insights again! [1] Yu, Xumin, et al. "Pointr: Diverse point cloud completion with geometry-aware transformers." *Proceedings of the IEEE/CVF international conference on computer vision*. 2021. [2] Ma, Xu, et al. "Rethinking network design and local geometry in point cloud: A simple residual MLP framework." *arXiv preprint arXiv:2202.07123* (2022). [3] Chen, Zhi, et al. "Sc2-pcr: A second order spatial compatibility for efficient and robust point cloud registration." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2022. [4] Yu, Hao, et al. "Rotation-invariant transformer for point cloud matching." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023. [5] Hou, Ji, et al. "Mask3D: Pre-training 2D Vision Transformers by Learning Masked 3D Priors." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023. [6] Qin, Zheng, et al. "Geometric transformer for fast and robust point cloud registration." 
*Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*. 2022. [7] Thomas, Nathaniel, et al. "Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds." arXiv preprint arXiv:1802.08219 (2018). ### Weakness 2 Due to the page limit, we had to put some ablation experiments in the Appendix. However, we have highlighted the analysis of the ablation study in the manuscript as you suggested. ### Limitation 1 Indeed, our approach introduces additional computational overhead. Compared with Transformer-M using the same setting (the number of layers and hidden dimensions), our model's parameter count increases by less than 8%. However, considering the improvement in performance, we believe the extra computational overhead is acceptable. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the point-to-point response. I appreciate the effort the authors have made toward the potential applications. I believe this paper would be more solid if those comparisons could be incorporated. I would like to keep my original score based on the rebuttal. --- Reply to Comment 1.1.1: Comment: We are unfamiliar with point cloud datasets, which makes it challenging for us to finish training and report results during the rebuttal process. However, this is a highly promising direction. Regarding other points, we have conducted additional ablation experiments and emphasized them in our revised manuscript. We have also included a comparison of model sizes. Furthermore, we have conducted experiments on another molecular dataset, MD17. For further details, please refer to our **Response to Reviewer mN7Q** and **Response to Reviewer efGJ**. These results have made our paper more solid. Thank you for your comments, which have strengthened our article.
Summary: The paper introduced a Transformer-based model with interatomic positional encoding (IPE) for molecular modeling, which the authors termed "Geoformer". Geoformer incorporated a novel positional encoding derived from empirical physical knowledge (atomic cluster expansion) and was able to achieve comparable or even superior performance on multiple benchmarks. Strengths: 1. To the best of my knowledge, the proposed approach for injecting physical knowledge, i.e., the atomic cluster expansion, into the positional encoding in Transformers is novel in this field, which is dominated by EGNNs. The extensive experiments and ablation studies also demonstrated the expressiveness of the proposed architecture. 2. The idea of merging clusters further considers the interaction/potential between different clusters, while the original ACE only considers one individual cluster. 3. The proposed method has advantages over current EGNNs such as incorporating angles and dihedrals, and seems more computationally efficient because the calculation between clusters still has linear time complexity. 4. It revisits the Transformer architecture in the field of molecular modeling and outperforms most current state-of-the-art models on the QM9 and Molecule3D benchmarks. Weaknesses: 1. Some notations and figures about the merged cluster are a bit confusing and can be improved (see the following question section). 2. The authors claimed the proposed model was computationally more efficient and memory-saving. The authors can provide quantitative data for a clearer justification (see the following question section). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: ### Notations about the merged cluster. 1. The relation between Fig.1 and Eq. 7 is confusing. In Eq. 7, the authors show the matrix of $A$-basis to represent the new basis for the merged cluster, while in Fig. 1, it turns out to be the matrix of $B$-basis to describe the proposed positional encoding. 
Their relation needs to be further explained in the manuscript. 2. It is hard to follow the relation between the basis of the merged cluster and the proposed interatomic positional encoding for those readers who are not familiar with ACE. Also, only after reading several times could I realize that the matrix of $A$-basis in Theorem 1 is the additional (right) term in Eq.7. The relation between Theorem 1 and 2 should be clarified. 3. What is the difference between $A_{i(i)}$ and $A_{i}$? 4. Following Q3, what is the difference between $A_{i(i)}$ in the merged cluster and the original $A$-basis in (linear) ACE? ### Complexity 5. The authors mentioned they only utilized reduced settings, i.e., body-expansion=1 and order of spherical harmonics=1. What is the time/memory complexity for the algorithm? 6. Compared with the explicit calculation like GEM [1], and those approaches like SphereNet [2] which employ the CG-product, how much time/memory would Geoformer reduce? 7. What is the model size of Geoformer compared with other EGNNs? ### Other minor problems 8. The index of the interatomic positional encoding (C) in the left corner of Fig. 1 is wrong. 9. How is the molecular graph padded? [1] Liu, Lihang, et al. "GEM-2: Next Generation Molecular Property Prediction Network by Modeling Full-range Many-body Interactions." (2022). [2] Liu, Yi, et al. "Spherical message passing for 3d graph networks." *arXiv preprint arXiv:2102.05013* (2021). Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The authors have adequately addressed the limitation and potential negative societal impact in Section 5. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response Thanks for your comments and acknowledgment of our work. We provide the point-to-point responses in the following. ### Question 1 We appreciate your suggestion and have provided more explanation of the construction of the B-basis in Theorem 2 and the Appendix for better clarity. Indeed, the merged B-basis remains in matrix form, similar to the merged A-basis matrix. It is additionally multiplied with CG-coefficients to ensure permutation and isometry-invariance. ### Question 2 Thank you for pointing this out. We understand that readers unfamiliar with ACE or the physics of harmonics might find it challenging to follow. We have added more explanation of the connection between Theorem 1 and Theorem 2. The central point is that "the potential of the merged cluster in Theorem 1 is the physical explanation of the interatomic positional encoding in Theorem 2." In Section 3.1, we demonstrate how to incorporate IPE within the Transformer in practice. We aim to introduce the proposed IPE both theoretically and practically. ### Questions 3 and 4 $A_i$ is the original A-basis in ACE. Since we conduct the cluster merging, $A_{i(i)}$ denotes the multiplication of two $A_i$, describing the $\epsilon+1$ body expansion in manuscript line 137. Together, the matrix in Eq. 7 could represent the $\epsilon+1$ and $\epsilon+2$ body expansions simultaneously. ### Questions 5 and 6 Due to the linear ACE theory, the complexity of constructing the interatomic positional encoding is $\mathcal{O}(N)$, with $N$ denoting the number of atoms in one molecule. In contrast, explicit extraction methods such as GEM, GemNet and SphereNet have a complexity of $\mathcal{O}(N^{3})$ due to extracting dihedral angles for 4-body interactions. The main computational consumption lies in the Attention block, with a complexity of $\mathcal{O}(N^{2})$. We cannot replace this part with the prevailing linear attention at this moment because we have to update the learnable IPE. 
We will explore these possibilities in future work. Compared to higher orders of spherical harmonics, which require pre-computed CG-coefficients and products, we streamline this part with our reduced setting. Since the number of spherical harmonics is $(l+1)^2$ with degree $l$, it would reduce consumption (tensor paths) by approximately $Nl$. ### Question 7 Modern EGNN models are indeed relatively small (50.7M vs. 10M); however, due to the CG-product or graph operations in their implementations, our Transformer-based approach has a similar training speed to them (see the table below for details).

| | Model Size | Overall Training Time (GPU-hours) | MAE on U0 | MAE on U | MAE on H | MAE on G |
| --- | --- | --- | --- | --- | --- | --- |
| SEGNN | 1.03M | 81 | 15 | 13 | 16 | 15 |
| TorchMD-NET | 6.86M | 92 | 6.15 | 6.38 | 6.16 | 7.62 |
| Equiformer | 3.53M | 61 | 6.59 | 6.74 | 6.63 | 7.63 |
| Transformer-M | 47.4M | - | 9.37 | 9.41 | 9.39 | 9.63 |
| Geoformer | 50.1M | 55 | 4.43 | 4.41 | 4.39 | 6.13 |
| Geoformer-S | 6.4M | 20 | 5.20 | 5.12 | 5.19 | 6.78 |

### Minor Problem 1 Thanks for pointing this out. We have replaced $C_{\gamma}$ with $C_{\eta}$ in Fig. 1. ### Minor Problem 2 In one batch, we select the maximum number of atoms among the molecules and pad the other molecules with $0$, detaching the gradient at these padding indices in the *Embedding* layer and adding a *key_padding_mask* to compute attention correctly. --- Rebuttal Comment 1.1: Title: Comment on Authors' Rebuttal Comment: I appreciate your comprehensive response regarding my questions and concerns. Based on the rebuttal, I believe you have appropriately and adequately addressed all of my concerns on the notation, formulation, and complexity of the model. The theoretical analysis and the additional experiments regarding model sizes and computational resources further justified the efficiency of your proposed model, and I hope you will add this result to the revised manuscript to make it more concrete. 
In conclusion, I believe that, after properly addressing the above questions, this work provided a novel architecture of a Transformer-based equivariant GNN, with relatively extensive experiments and ablation studies to demonstrate the superior performance over the baselines and computational efficiency compared to normal EGNNs. In this regard, I am happy to raise my original score from 7 to 8. --- Reply to Comment 1.1.1: Comment: Thank you for recognizing our paper and raising your score. We have incorporated these changes to enhance our paper in our revised manuscript.
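As an aside on the zero-padding scheme described in the rebuttal above, the effect of a *key_padding_mask* can be sketched in a few lines of NumPy (my own minimal illustration, not the authors' implementation): padded key slots receive a score of negative infinity, so the softmax assigns them exactly zero attention weight.

```python
import numpy as np

# Minimal sketch (not the authors' code) of attention with a key padding
# mask: padded atom slots get -inf scores, so softmax gives them weight 0.
def masked_attention(q, k, v, key_padding_mask):
    """q, k, v: (n, d) arrays; key_padding_mask: (n,) bool, True = padded slot."""
    scores = q @ k.T / np.sqrt(k.shape[1])
    scores[:, key_padding_mask] = -np.inf  # mask out padded keys
    # numerically stable softmax over the key axis
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v, weights
```

In a framework such as PyTorch, the same effect is obtained by passing a `key_padding_mask` to the attention module, as the rebuttal describes.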
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a geometric Transformer called Geoformer for geometric molecular modeling. It designs Interatomic Positional Encoding (IPE), taking 3D geometric information into account to parametrize atomic environments for positional encoding in the Transformer architecture. Geoformer applies the learned IPE in the attention block of the Transformer. Geoformer is evaluated on the QM9 and Molecule3D datasets with various tasks and shows superior performance compared with Transformer and EGNN baselines. Strengths: - Geoformer effectively integrates domain knowledge from physics, specifically Atomic Cluster Expansion (ACE), into the machine learning field for molecular property prediction. The Interatomic Positional Encoding (IPE) with ACE offers a robust method for incorporating geometric priors from other scientific disciplines into the positional encoding of Transformers. - IPE reflects geometric information and interatomic relations within the Transformer architecture. Leveraging IPE, Geoformer demonstrates strong performance on 3D molecular property prediction tasks across the QM9 and Molecule3D datasets over various baselines. Weaknesses: Overall, the presentation of this paper could be polished further to improve accessibility. Considering this paper is targeted at a machine learning conference, more detailed explanations about the background (ACE) would be beneficial for potential readers from this community. If space is a concern, the appendix would be a good place for the additional information. - For example, the term "cluster" is used in Theorem 1 without a clear definition. I presume this is the concept introduced in ACE as well. - Although there are some introductory sentences, Section 3.1 only consists of two theorems without any proper explanation of why these theorems are needed and how they can be used. For example, Theorem 1 is introduced without any prior explanation of its necessity. 
It would be better to explain why Theorem 1 is needed and its main messages beforehand. - The attention block introduced through Equations 18 to 23 seems like an important contribution of this paper. However, with the given content, it is unclear what intuition leads to the specific form of learnable IPE in Eq 20, 21. - The preliminaries and related work sections need to be separated for better explanation. Currently, sections 2.1 and 2.3 read more like related work. I suggest moving these to the appendix and providing more information on ACE in the preliminaries. - It is questionable whether Theorem 2 is indeed a theorem; it seems that the IPE matrix C was simply defined for positional encoding. More explanation would be appreciated. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In line 163, is A_{i, v_{tau}} defined in the same way as equation 3? If so, the bold font should be unified. - Is C_{vv^{prime}_{tau}} the same as C_{vv^{prime}} in equation 4? - In subsection 4.5 and figure 3, is there any task-related or domain-specific ground truth showing that the learned IPE focuses more on the relevant atomic relationships? It is necessary for the claim that "IPE further enhances signals and shows significant distinction" to be persuasive. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The limitations are well addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response Thanks for your valuable comments. We provide point-to-point responses in the following: ### Weakness 1 The term **cluster** is a concept in the original Atomic Cluster Expansion (ACE) theory, which represents the local chemical environment of centered atoms. A cluster $\alpha$ contains one centered atom $i$ and $K$ neighbor atoms (elements, denoted as $j$) with $K$ bonds. We’ve added it to the Appendix. ### Weakness 2 We apologize for the confusion. Theorem 1 serves as the foundation for Theorem 2 and the architecture of Geoformer in Section 3. It describes the proposed interatomic positional encoding (IPE) from a physics perspective. Specifically, we demonstrate how to construct the IPE (Theorem 2) **in theory**, representing the potential of the merged cluster (Theorem 1). Next, we illustrate how to leverage the IPE (Theorem 2) within traditional Transformers (Section 3.1) **in practice**. In summary, we aim to demonstrate the proposed IPE both theoretically and practically. ### Weakness 3 We apologize for the oversight. The concepts behind Eq. 20 and 21 originate from AlphaFold2, which utilizes the pairwise embedding $z$ with an activation function as a learnable gate, combined with the previous residual. We recommend referring to the *row/column-wise gated self-attention* in the AlphaFold2 supplementary material. We have also added this reference to the manuscript. ### Weakness 4 Following your suggestion, we have separated the sections into **Preliminaries** and **Related Work** and provided a more detailed introduction to ACE. ### Weakness 5 In Theorem 2, we aim to prove why the interatomic positional encoding should be multiplied with the Key and Query in traditional Transformers, as opposed to other operations such as an attention bias or a distance filter. Therefore, we consider this construction and proof a theorem. Additionally, we have included the complete proof in the Appendix. ### Question 1 Thanks for pointing this out. 
We have removed the bold font $i$ in line 163. ### Question 2 Right. They both represent the Clebsch-Gordan coefficients to ensure the permutation and isometry-invariance. ### Question 3 We appreciate your comments. To substantiate the claim that “IPE further enhances signals and shows significant distinction”, in Figure 3 of the original manuscript we compare the IPE with the distance signal, which most previous work adopted, to show how IPE functions. Based on your suggestion, we are exploring some domain-specific ground truth, e.g., electron density, to further substantiate the effectiveness of the proposed IPE. More results will be added if we find something interesting. --- Rebuttal Comment 1.1: Comment: After reading the other reviews and responses, I conclude that this work is worth being introduced to a wider audience at the conference. As mentioned earlier, most of my concerns are about the presentation and clarity of the manuscript. Although it is not allowed to modify the manuscript during rebuttal, I hope the authors will address these concerns well and improve their manuscript for the wider audience. I'll adjust my score from 4 to 5. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your decision to raise the score for our paper. We have meticulously revised the manuscript in accordance with your comments, ensuring that the revised version is accessible to the wider machine learning community. Once again, we are grateful for your invaluable feedback, which has significantly contributed to enhancing the quality of our work.
Beyond Invariance: Test-Time Label-Shift Adaptation for Addressing "Spurious" Correlations
Accept (poster)
Summary: This paper proposes to perform test-time adaptation to exploit the existence of spurious correlations in a setting where $p(x | y, z)$ remains constant, but the prior $p(y, z)$ changes based on the target task. The authors evaluate this new method on various tasks, including standard image/text benchmarks and a medical imaging dataset on chest X-rays. Their empirical results show that their proposed method achieves better performance than standard ERM and the (approximation of an) invariance-based method. Strengths: The empirical results show that the method achieves better performance than standard ERM and the approximation of an invariant method that assigns a uniform prior over the spurious features on ColoredMNIST and CheXpert. Weaknesses: Their method relies on a more complex version of the label shift assumption, which must now hold on a much larger set of configurations ($|Z| \times |Y|$), which is a weakness (that the authors do indeed point out). However, it’s unclear how reasonable this assumption is to make. There are no comparisons to other test-time adaptation methods, so the additional amount of unlabeled data may be inflating the benefits in the reported results. Some small issues with the writing: missing references in the Results and Discussion paragraphs of subsection 4.2, text in figures being difficult to read. As a side note, I feel that the definition of spurious features used in this paper is a bit counterintuitive. These are useful features for a downstream task, which aren’t what I think of as the standard notion of spurious features. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In practice, is this stronger label-shift assumption violated? Some additional discussion about this would be appreciated. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are sufficiently discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Response to your points is below. “**There are no comparisons to other test-time adaptation methods.**” All the test-time adaptation methods we are aware of are focused on covariate shift, or label shift, but no methods (AFAIK) can handle shift in the correlation between latent factors and the class prior. Thus these methods are unsuitable for our problem setting. “**I feel that the definition of spurious features used in this paper is a bit counterintuitive. These are useful features for a downstream task, which aren’t what I think of as the standard notion of spurious features.**” While it is true that “spurious” suggests the features have no predictive value for the class, in most setups (eg colored MNIST), these features (e.g. background) are in fact predictive of the class, just not in a stable way. Our observation is that we can model this change in correlation, rather than be invariant to it, and thus get better performance. “**In practice, is this stronger label-shift assumption violated?**“ The fact that we get better performance on a variety of datasets is empirical demonstration of the reasonableness of our assumption. However, we do not claim that it captures all (or even most) forms of distribution shift. “**The method relies on a more complex version of the label shift assumption, which must now hold on a much larger set of configurations ($|Z| \times |Y|)$, which is a weakness (that the authors do indeed point out). However, it’s unclear how reasonable this assumption is to make.**” This assumption is actually usually weaker than the standard label shift assumption. Specifically, if we can hold more factors of variation fixed, then we can eliminate potential confounding variables that would make the generative distribution of X (p(X | Y) or p(X | Y, Z)) differ between source and target distributions. 
The tradeoff is that the discriminative model that we need to learn to use our method p(Y, Z | X) may be harder to learn than p(Y | X). However, this is something we can evaluate empirically. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thanks for the clarifications, especially regarding the weakness of the modified version of the label shift assumption. I'm happy to increase my score from a 5 to a 6, and will watch the responses from other reviewers.
Summary: This paper introduces a new method named Test-Time Label Shift Adaptation (TTLSA) to manage distribution shifts in machine learning, specifically focusing on spurious correlations. Rather than eliminating these spurious relations as in other papers, TTLSA utilizes the spurious features to adapt the model, and the authors argue that this can substantially outperform invariant predictors. The method involves augmenting the label space with a "nuisance factor", z, which encapsulates the spurious relations in the data. The TTLSA method involves two main steps: 1. Train on Source: The model is initially trained on the source data with augmented labels that include both the target labels (y) and the nuisance variables (z). This step helps to capture the spurious correlations that exist in the source data. 2. Adapt to Target: During testing, TTLSA uses an EM algorithm to adapt to the target data. This step helps the model to adapt to the shift in label distribution between the training and testing data. The authors tested TTLSA on several datasets, including the CheXpert and colored MNIST datasets. In both cases, TTLSA outperformed traditional methods in terms of AUC under shifted distributions. However, the authors acknowledge the approach's limitations: it requires the generative distribution to be preserved across domains and needs access to labeled examples of Z during training. Strengths: **Originality**: The paper introduces a novel method, Test Time Label Shift Adaptation (TTLSA), to handle the problem of distribution shifts in machine learning models. It leverages the spurious relations in the data rather than attempting to eliminate them. This unique approach sets it apart from previous methods. **Quality**: The paper is of good quality. The authors' method is well-defined, and the experiments are carefully designed, involving various benchmark datasets to demonstrate the effectiveness of the proposed method. 
**Clarity**: The paper is well-written and clear, providing a detailed explanation of the methodology, experimental design, and results. **Significance**: The paper's results on various datasets indicate that TTLSA has the potential to improve machine learning model robustness in the presence of distribution shifts and spurious relations. However, the significance of the method is limited by its assumptions, notably the need for the generative distribution to remain constant across domains. This condition is only based on one additional feature, limiting the method's scope. The results, although promising, should be taken with a degree of caution until further research is conducted to validate these assumptions in more complex scenarios. Overall, the paper holds promise but requires additional exploration and validation. Weaknesses: 1. Assumptions: The TTLSA method assumes that the generative distribution p(x|y, z) is preserved across domains. Here, the authors only address z as a one-dimensional feature. This assumption can be restrictive and may not hold true in many real-world scenarios. 2. Data Requirement: The proposed method requires access to labeled examples of the nuisance factor Z during training. This could be a limitation in scenarios where such labeled data might not be readily available or where it's challenging to identify and label the nuisance factors. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The paper presents great results using the proposed TTLSA method, but the assumption that the generative distribution p(x|y, z) is preserved across domains seems to rest heavily on the presence of one auxiliary feature. Can the authors provide additional insights into scenarios where this assumption is likely to hold? Are there particular types of data or specific domains where this assumption is more valid? 2. Related to the first question, in real-world scenarios, there can be multiple factors affecting the distribution. 
How would the TTLSA method behave in such complex situations, given its dependency on a single auxiliary feature for its assumptions? Can the authors share their thoughts on extending their methodology to accommodate multiple auxiliary features? 3. The paper mentioned future work that plans to relax the assumption of requiring labeled examples of Z during training. Could the authors provide some preliminary insights into how they plan to achieve this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: The authors have very adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Response to your points is below. “**The TTLSA method assumes that the generative distribution p(x|y, z) is preserved across domains. Here, the authors only address z as a one-dimensional feature. This assumption can be restrictive and may not hold true in many real-world scenarios.**” We 100% agree. We leave it to future work to consider multi-variate z. For low-dimensional z, one naive approach is to simply take a cartesian product of its components. “**The proposed method requires access to labeled examples of the nuisance factor Z during training**” We agree this is a weakness. However, as we show in Appendix C.5 (in supplemental), it is possible to apply our method with a very small amount of labeled data. In particular, we get good results on the worst group benchmarks even when up to 90% of the z labels are missing. The trick is to use label imputation to infer the missing z labels before applying our algorithm. We will clarify this in the camera ready. “**The assumption that the generative distribution p(x|y, z) is preserved across domains seems to rest heavily on the presence of one auxiliary feature. Can the authors provide additional insights into scenarios where this assumption is likely to hold?**“ As mentioned above, we will likely need to make z be a vector of factors (not just a single categorical variable) to make the method more general. We hope that datasets with richer meta-data may provide a source for such structured z’s. Alternatively we may be able to use techniques from the disentangled representation learning literature to avoid the need for extra annotations. However we leave this to future work. “**Can the authors share their thoughts on extending their methodology to accommodate multiple auxiliary features?**” We have indeed thought about this. One idea would be to make p(y, z(1:K) | x) be some kind of structured model, such as a (sparse) conditional random field. 
The key is to capture the dependence between each z(k) and y, conditioned on x. However we leave this to future work. “**The paper mentioned future work that plans to relax the assumption of requiring labeled examples of Z during training. Could the authors provide some preliminary insights into how they plan to achieve this?**” See comment above about our experimental results in Appendix C.5 in the supplemental. --- Rebuttal Comment 1.1: Comment: I will remain the score as 6 but thank you for the clarification
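The "Adapt to Target" EM step described in the review summary above can be sketched in a few lines (my own minimal NumPy illustration of EM-style label-prior re-estimation in the spirit of Saerens et al.'s label-shift EM, not the authors' implementation; here the "label" is the augmented m = (y, z), and the names `probs_src` and `prior_src` are hypothetical):

```python
import numpy as np

# Sketch of EM label-prior adaptation on unlabeled target data.
# probs_src: (n, M) source-classifier posteriors p_s(m | x) per example;
# prior_src: (M,) source prior p_s(m). Returns the estimated target prior
# and the adapted posteriors.
def em_adapt_prior(probs_src, prior_src, n_iters=200, tol=1e-8):
    prior_tgt = prior_src.copy()
    for _ in range(n_iters):
        # E-step: reweight the source posteriors by the current prior ratio.
        post = probs_src * (prior_tgt / prior_src)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: the new target prior is the average adapted posterior.
        new_prior = post.mean(axis=0)
        if np.abs(new_prior - prior_tgt).max() < tol:
            prior_tgt = new_prior
            break
        prior_tgt = new_prior
    return prior_tgt, post
```

The adapted posteriors over m = (y, z) can then be marginalized over z to obtain target predictions for y.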
Summary: **A brief summary:** The paper proposed "Test-Time Label-Shift Adaptation" (TTLSA), which utilizes rather than eliminates spurious correlations. TTLSA adapts to changes in marginal distribution using the Expectation Maximization algorithm. Experiments on several datasets show that TTLSA outperformed traditional invariance methods and baseline empirical risk minimization. **Main Contributions:** The authors proposed a novel test-time adaptation method (TTLSA) that capitalizes on spurious correlations instead of trying to eliminate them. Moreover, they expand on the label shift assumption, incorporating nuisance factors into the labels, train a discriminative classifier to predict the source distribution, and adapt to changes in the marginal distribution using the Expectation Maximization (EM) algorithm. Strengths: Strengths are summarized below: * The paper introduced an assumption that there exists "a hidden confounder that induces a spurious correlation between the label $Y$ and other causal factors $Z$, which together generate the features $X$," such that the label space in the DRO setting could be extended to $m=(y, z)$, hence treated like a classical learning paradigm. * Experiments on several datasets show that TTLSA achieves promising results compared with traditional invariance methods. Weaknesses: The main weaknesses are two-fold: * (1) Although the proposed method performs the label shift adaptation at test time, it still relies on a trained discriminative classifier, which is used to predict $p(y, z|x)$ on the source distribution. * (2) Training the discriminative classifier requires a large labeled dataset from the source distribution. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * (1) The introduced model assumption serves as a foundation for the proposed method. Is it possible to empirically verify or illustrate more about the reasoning behind adopting such an assumption? 
* (2) It would be better to distinguish the notation of $x_n$, which appears in both the source and target distributions (lines 145-146). * (3) In Figure 2, can the authors explain more about the U-shaped performance of TTLSA? Specifically, why might the performance of TTLSA under shift parameter (0.0) be worse than that under shift parameter (1.0)? * (4) It seems that certain references to figures are not appropriately included, i.e., line 311. * (5) It would be better if the authors could discuss the differences with two closely related works, i.e., [1, 2]. References: [1] Coping with label shift via distributionally robust optimization. [ICLR'21] --> proposes an objective to cope with label shift, and provides an adversarial algorithm to effectively optimize it. [2] Distributionally Robust Post-hoc Classifiers under Prior Shifts. [ICLR'23] --> a method for scaling the model predictions at test time for improved distributional robustness to prior shifts. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Same as mentioned weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Responses to your points are below. “**The training on the discriminative classifier requires a large labeled dataset from the source distribution…. still relies on a trained discriminative classifier, which is used to predict p(y, z|x)**” We agree this is a weakness. However, as we show in Appendix C.5 (in the supplemental), it is possible to apply our method with a very small amount of labeled data. In particular, we get good results on the worst-group benchmarks even when up to 90% of the z labels are missing. The trick is to use label imputation to infer the missing z labels before applying our algorithm. We will clarify this in the revised paper. “**The introduced model assumption serves as a foundation for the proposed method. Is it possible to empirically verify or illustrate more about the reasoning behind adopting such an assumption?**” Our method relies on an augmented version of the label shift assumption. The fact that we get better performance on a variety of datasets is an empirical demonstration of the reasonableness of our assumption. However, we do not claim that it captures all (or even most) forms of distribution shift - that depends on the richness of the latent factors z. “**In Figure 2, can the authors explain more about the U-shape performance of TTLSA? Specifically, why the performance of TTLSA under shift parameter (0.0) might be worse than that of shift parameter (1.0)?**” Performance for 0.0 is in fact the same as for 1.0, modulo experimental noise. To see why, note that if there is 100% correlation or anti-correlation between z and y, then it becomes easy to predict the class label y by simply learning to predict z and then setting y=z or y=not(z). Conversely, if there is no correlation (0.5 on the x-axis), there is no shortcut to exploit, and the model reduces to the performance of ERM predicting p(y|x). This is why there is a symmetric U-shaped curve.
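For concreteness, the EM-based prior adaptation that the review and rebuttal attribute to TTLSA, run over the joint label m=(y, z), can be sketched in its standard (Saerens-style) form. The function name and setup below are illustrative, not the paper's code: given calibrated source posteriors on unlabeled test inputs, EM re-estimates the target prior by alternately reweighting posteriors and averaging them.

```python
import numpy as np

# Illustrative sketch of EM label-shift adaptation (Saerens-style);
# `em_label_shift` is a hypothetical name, not the paper's API.
def em_label_shift(probs_src, pi_src, n_iters=100, tol=1e-8):
    """probs_src: (N, M) calibrated source posteriors p_s(m|x) on test inputs.
    pi_src: (M,) source prior p_s(m). Returns an estimate of the target prior."""
    pi_t = pi_src.copy()
    for _ in range(n_iters):
        # E-step: rescore source posteriors by the prior ratio.
        w = probs_src * (pi_t / pi_src)            # (N, M)
        post = w / w.sum(axis=1, keepdims=True)    # target posteriors p_t(m|x)
        # M-step: the new target prior is the average posterior.
        pi_new = post.mean(axis=0)
        if np.abs(pi_new - pi_t).max() < tol:
            pi_t = pi_new
            break
        pi_t = pi_new
    return pi_t
```

At test time, the final reweighted posteriors (the `post` array at convergence) would then be used for prediction under the shifted marginal.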
“**It would be better if the authors could discuss the differences with two closely related works, i.e., [1, 2].**” We will add a discussion of these papers. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the responses. After reading the authors' rebuttal and the other reviewers' comments, I would like to raise my score to (6: Weak Accept).
Summary: - This paper tackles the test-time adaptation problem to mitigate spurious correlation. By assuming the invariance of $p(x|y, z)$, the authors derived an alternative method for predicting the target distribution and proposed Test-Time Label-Shift Adaptation (TTLSA) to optimize it. TTLSA involves two steps: a training step on the source with calibration and logit adjustment, and an adaptation step using the EM algorithm. The method is evaluated on Colored MNIST, CheXpert, and four popular worst-group benchmark datasets, demonstrating superior adaptation performance (Figure 2) and comparable benchmark performance (Table 2). Minor corrections: Some references to figures are presented as double question marks (??). Strengths: - The paper has a coherent flow, making it easy to follow, and the details and evidence are well-presented. - The motivation for test-time adaptation is solid, and the experimental results in Figure 2 provide strong support. - The ERM performances in Table 2 are reliable, as they closely match the accuracies achieved in [1]. [1] Idrissi, Badr Youbi, et al. "Simple data balancing achieves competitive worst-group-accuracy." *Conference on Causal Learning and Reasoning*. PMLR, 2022. Weaknesses: - One of my concerns is the assumption of having an annotated training dataset. The field is evolving towards building algorithms without relying on group information. - Another concern is the comparison with Group DRO. Both methods assume that group information is available in the source dataset. TTLSA shows better test accuracy in this experiment, but not for worst-group accuracy. - The Group DRO performance is lower than what is presented in [2]. [2] Izmailov, Pavel, et al. "On feature learning in the presence of spurious correlations." *Advances in Neural Information Processing Systems* 35 (2022): 38516-38532.
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In [2], it is claimed that Group DRO performs well because it improves the last linear layer rather than the underlying feature representations. How does TTLSA address this aspect? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The proposed method requires group information in the training dataset, which can be costly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. Responses to your points are below. “**One of my concerns is the assumption of having an annotated training dataset.**” We agree this is a weakness. However, as we show in Appendix C.5 (in the supplemental), it is possible to apply our method with a very small amount of labeled data. In particular, we get good results on the worst-group benchmarks even when up to 90% of the z labels are missing. The trick is to use label imputation to infer the missing z labels before applying our algorithm. We will clarify this in the revised paper. “**TTLSA shows better test accuracy in this experiment [than gDRO], but not for worst-group accuracy.**” In Table 2, our worst-group performance is actually on par with gDRO for 3 out of 4 datasets, with the exception being CelebA. For example, on Waterbirds, gDRO gets 87.98 and we get 88.38. However, the main focus of our paper is on average performance. “**The Group DRO performance is lower than what is presented in Izmailov**”. We ran the gDRO code by Idrissi (with their published hyper-parameters) and reported our findings, which agree with the original version by Sagawa. Recently, Izmailov provided another implementation of gDRO, but we did not use their code, nor do we know why they reported different numbers for their baseline. “**In [2], it is claimed that Group DRO performs well because it improves the last linear layer rather than the underlying feature representations. How does TTLSA address this aspect?**” The purpose of TTLSA is to make better predictions in the presence of target label shift, not to learn better representations. In fact, both logit adjustment and TTLSA operate post hoc on the output logits, and are thus oblivious to the model internals (which is why the method applies to both neural networks and gradient-boosted trees). That being said, asking the model to predict (y, z) jointly instead of only y may force it to learn a better representation of features relevant to z.
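The post-hoc nature of logit adjustment discussed in the rebuttal can be illustrated with a minimal sketch: the model's internals are untouched, and the output logits are simply shifted by the log ratio of a target prior to the source prior. The names below are hypothetical, not the paper's API.

```python
import numpy as np

def adjust_logits(logits, pi_src, pi_tgt):
    """Post-hoc rescoring: shift (N, M) logits trained under source prior
    pi_src so that predictions reflect the target prior pi_tgt."""
    return logits + np.log(pi_tgt) - np.log(pi_src)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Because the adjustment acts only on logits, the same rescoring applies whether the logits come from a neural network or a gradient-boosted tree, matching the rebuttal's point about being oblivious to model internals.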
--- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for the detailed response. I will retain my original score.
Rebuttal 1: Rebuttal: We respond to each reviewer separately below. However, a common concern was our dependence on a training set which has y and z labels. We agree this is a weakness. However, as we show in Appendix C.5 (in the supplemental material), it is possible to apply our method with a very small amount of labeled data. In particular, we get good results on the worst-group benchmarks even when up to 90% of the z labels are missing. The trick is to use label imputation to infer the missing z labels before applying our algorithm. We will clarify this in the revised paper. (Note that the theoretical work in Lin '22 shows that some amount of environment labeling is required, absent further assumptions.) Y. Lin, S. Zhu, L. Tan, and P. Cui, “ZIN: When and How to Learn Invariance Without Environment Partition?,” in Advances in Neural Information Processing Systems, May 2022 [Online]. Available: https://openreview.net/forum?id=pUPFRSxfACD.
NeurIPS_2023_submissions_huggingface
2023
Multi-body SE(3) Equivariance for Unsupervised Rigid Segmentation and Motion Estimation
Accept (poster)
Summary: The paper describes a method to simultaneously segment rigid parts and estimate part-level transformations for multi-body point cloud inputs. Given the input point cloud, equivariant features are first extracted and aggregated into invariant features for part segmentation. To estimate the transformations between two point clouds, an equivariant motion estimation head is applied to pairs of point clouds to obtain SE(3) motions. The entire model is trained in an unsupervised fashion using the consensus between 3D scene flow, motion segmentation, and transformations. Experiments show that the model is generalizable in two aspects: open-set motion and category-agnostic part segmentation. The authors test the model on the SAPIEN, OGC, and KITTI datasets, and show superior performance in comparison to past baselines, with much better accuracy and far fewer FLOPs, even surpassing the supervised method. Strengths: 1. The identification of the two generalization problems, i.e. invariant segmentation and equivariant motion estimation, is insightful. The use of equivariant backbones and their invariant counterparts is well-suited for such a problem. 2. The use of initial scene flow to bootstrap (or cold-start) the learning process and avoid local minima is novel. By utilizing the consistency between the output attributes, which is inherent in the multi-body registration task, the method can be learned in an unsupervised manner. 3. Experiments are extensive and the results are compelling. The method is tested on four different datasets ranging from articulated objects to outdoor driving scenes, demonstrating its wide applicability. Meanwhile, the fact that the algorithm is able to achieve much higher accuracy at much lower cost is impressive and sets a strong waypoint for future work. Weaknesses: 1. The unsupervised training strategy lacks clarity. What is the loss function? Is it eq.(8) plus eq.(7) in the supplementary with some balancing factor?
This should be clarified in the main paper. In Line 213, the authors mention the weighted-Kabsch algorithm, what is the 'weight' here? 2. The generalization of the segmentation head is questionable -- this is my most important doubt about the paper. Although the generalization to category-agnostic part segmentations is demonstrated through experiments, the reason why an equivariant network backbone could achieve that is not explained. What adds to the confusion is that the method is trained in an unsupervised manner, which means that no canonical segmentation labels are defined. Given a novel multi-body shape that is not seen from the training set (e.g. a cabinet with four drawers), how could the model give a unique index to each of the drawers in a consistent fashion (i.e. the drawer index stays unchanged across multiple articulation states)? What is the maximum number of parts supported in the algorithm? Is it predefined or optimized/given for each test sample? The doubt also links to the experiment section -- In Fig.4 (a,b), the gap between training and validation is not shown, and it's unclear whether Fig.4 (c) is based on ground truth or predicted segmentation. 3. The experiments are not explained very clearly. In Sec4.2, why 'point-level invariance of segmentation' helps 'open-set motion' generalization, and why 'part-level equivariant of motion' helps 'category-agnostic part' generalization? Shouldn't they be swapped? It is preferable to move Sec4.4 ahead of Sec4.3. 4. More visualizations and challenging cases should be shown. There are only 3 visualizations provided in the paper+supplementary, on SAPIEN datasets. These visualizations only contain segmentation labels with two parts. To make the results more convincing, the author should provide more visualizations on all the datasets they've tested on, with transformations shown, and on more challenging cases, e.g., large scenes with >4 rigid parts. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Does the current training scheme leverage the cycle consistency among multiple inputs? If not how could this be possibly incorporated into the algorithm? 2. How does the method generalize to different point densities/distributions? What is the maximum number of points could the method process? 3. Some typos: - Line 144, 'There are mainly comprised of...' - Line 237, 'even given the global since...' Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are discussed in the last paragraph of the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your recognition of our use of equivariant backbones & their invariant counterparts, our solution to the cold-start problem, and our extensive and compelling experiments. We would like to address each of your concerns in detail below. **Q1:** The unsupervised training strategy lacks clarity. What is the loss function? Is it eq.(8) plus eq.(7) in the supplementary with some balancing factor? This should be clarified in the main paper. In Line 213, the authors mention the weighted-Kabsch algorithm, what is the 'weight' here? **A1:** Thank you for bringing the clarity issues to our attention. We would like to address them as follows: (1) We apologize for the confusion caused by a missing term $\hat M_k^s$ in Equation 8; the correct unsupervised segmentation loss should be $l_{seg} = \frac{1}{NS} \sum^N_i \sum^S_s ||\beta_{kl}^s \hat{M}\_k^s ({p^\prime}^i_l - \hat{\mathbf{T}}^s_{kl} \circ p^i_k)||$. By minimizing $l_{seg}$, the predicted part-level mask $\hat M_k^s$ can be optimized in a differentiable fashion. (2) We mainly utilize three tricks to balance the optimization process between the segmentation head (Eq. 8) and the motion head (Supp-Eq. 7): a) smoothly updating the scene flow (Lines 194 - 200) mitigates sharp fluctuations in motion estimation; b) the consensus $\beta^{s(i)}_{kl}$ (Lines 205 - 210) alleviates the influence of predictive errors from either head; c) the two heads are updated alternately per batch to prevent their gradients from interfering with each other, and their initial learning rates are set as segmentation : motion = 2.0e-4 : 2.0e-5. (3) The weight in the Kabsch algorithm refers to the weight assigned to the deviation of each point. Following [32, 61], we use the predicted mask of the rigid segmentation as the point-wise weight.
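For reference, the weighted-Kabsch step discussed in the rebuttal can be sketched in its generic form, with a soft segmentation mask supplying the per-point weights; this is a standard implementation under that assumption, not necessarily the paper's exact code.

```python
import numpy as np

def weighted_kabsch(P, Q, w):
    """Least-squares rigid transform (R, t) aligning (N, 3) points P to Q,
    with per-point weights w (e.g. a soft part mask, as in the rebuttal)."""
    w = w / w.sum()
    mu_p = (w[:, None] * P).sum(axis=0)            # weighted centroids
    mu_q = (w[:, None] * Q).sum(axis=0)
    H = (P - mu_p).T @ (w[:, None] * (Q - mu_q))   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t
```

Down-weighting points with low mask confidence lets the estimated motion ignore points that likely belong to other parts, which is why the mask makes a natural weight.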
To address these concerns, we will take the following actions: (1) Rectifying Equation 8 and providing a description of the training strategy after Line 211. (2) Adding additional elaboration in Sections 2.1 and 2.2 within the supplementary material. (3) Introducing the pseudo-code for the weighted-Kabsch algorithm [37, 27] to the supplementary material and releasing our open-source implementation upon the publication of our work. **Q2:** Although the generalization to category-agnostic part segmentations is demonstrated through experiments, the reason why an equivariant network backbone could achieve that is not explained. What adds to the confusion is that the method is trained in an unsupervised manner, which means that no canonical segmentation labels are defined. Given a novel multi-body shape that is not seen in the training set (e.g. a cabinet with four drawers), how could the model give a unique index to each of the drawers in a consistent fashion (i.e. the drawer index stays unchanged across multiple articulation states)? What is the maximum number of parts supported in the algorithm? Is it predefined or optimized/given for each test sample? In Fig.4 (a, b), the gap between training and validation is not shown, and it's unclear whether Fig.4 (c) is based on ground truth or predicted segmentation. **A2:** (1) Segmentation Consistency: The segmentation head can take a single frame as its input, and the output part-level labels indicate the partition inside a frame. The segmentation indexes are not necessarily consistent between two frames. Following the common practice in [32, 61], we utilize the Hungarian-matching algorithm, based on the IoU score between two masks, to determine corresponding parts. After the Hungarian matching, the same rigid part across different frames is assigned a consistent index, and the correlated features $C_{kl}$ can be obtained for part-level equivariance in the motion head.
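A minimal sketch of the IoU-based Hungarian matching described above, under the assumption that the two frames' points are in correspondence (e.g. via scene flow); the helper name and setup are illustrative, not the paper's code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_parts(mask_a, mask_b):
    """Match part indices between two frames by maximizing pairwise IoU.
    mask_a, mask_b: (N,) integer part labels over corresponding points.
    Returns a dict mapping frame-a part indices to frame-b part indices."""
    parts_a, parts_b = np.unique(mask_a), np.unique(mask_b)
    iou = np.zeros((len(parts_a), len(parts_b)))
    for i, a in enumerate(parts_a):
        for j, b in enumerate(parts_b):
            inter = np.sum((mask_a == a) & (mask_b == b))
            union = np.sum((mask_a == a) | (mask_b == b))
            iou[i, j] = inter / union if union else 0.0
    rows, cols = linear_sum_assignment(-iou)  # negate to maximize total IoU
    return {int(parts_a[i]): int(parts_b[j]) for i, j in zip(rows, cols)}
```

The Hungarian step guarantees a one-to-one assignment, so each rigid part keeps a single index across frames before the motion head is applied.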
We will include this description in Section 3.3 in the main body. (2) Maximum Number of Parts: This upper bound on the part number is predefined as a sufficiently large number. Following the settings of OGC, the maximum is set as 8, 8, 8 and 15 on SAPIEN, OGC-DR, OGC-DRSV and KITTI-SF, respectively. (3) Explanation of Fig.4: a) The gap between training and validation is as follows:

| Model | Prec@50$\uparrow$ (Training) | Recall@50$\uparrow$ (Training) | Prec@50$\uparrow$ (Validation) | Recall@50$\uparrow$ (Validation) |
|--------|------|------|------|------|
| KPConv | 71.9 | 55.5 | 48.6 | 61.9 |
| EPN | 73.7 | 58.1 | 56.2 | 65.7 |
| Ours | 74.9 | 61.8 | 61.7 | 72.0 |

The performance gap between training and validation for our model is significantly smaller than that of other models, providing further evidence of its generalizability. b) For a fair comparison, the evaluation of our motion head excludes the inference from part segmentation, and both experiments in Fig.4 (c) are based on ground-truth segmentation. **Our response to your Q3-Q7 will be uploaded once the function of interactive comment is available.** --- Rebuttal Comment 1.1: Title: Response to Reviewer XLXX (Q3-Q7) Comment: **Q3:** In Sec4.2, why 'point-level invariance of segmentation' helps 'open-set motion' generalization, and why 'part-level equivariant of motion' helps 'category-agnostic part' generalization? Shouldn't they be swapped? It is preferable to move Sec4.4 ahead of Sec4.3. **A3:** Thank you for helping us improve the paper writing. We will revise the paper to clarify the experiments and their explanations with the following details: (1) Point-level invariance of our segmentation head makes the model robust to unseen pose changes (features remain relatively consistent despite pose changes), which improves the generalizability to open-set motion (unknown pose variations).
Part-level equivariance of the motion head helps generalize to category-agnostic parts because it allows generalization based on part-level motion, even if the category is unknown. Nevertheless, by improving the segmentation, the part-level equivariant motion head naturally obtains better supervision from the accurate mask, and vice versa. (2) Your suggestion of moving Sec4.4 ahead of Sec4.3 indeed creates a smoother logical flow from the intact model (Sec 4.4) to the ablations on separate modules (Sec 4.3). We will make this amendment to our paper accordingly. **Q4:** To make the results more convincing, the author should provide more visualizations on all the datasets they've tested on, with transformations shown, and on more challenging cases. **A4:** We provide additional visualizations (attached as a PDF in the global response) across four datasets: OGC-DR, OGC-DRSV, SAPIEN, and KITTI-SF, with multiple rigid parts. We also show some failure cases in challenging scenes on SAPIEN, as circled in green in the final two rows of the SAPIEN visualization. Please refer to the main response for more details on the visualization. **Q5:** Does the current training scheme leverage the cycle consistency among multiple inputs? If not how could this be possibly incorporated into the algorithm? **A5:** Yes, we attempt to make use of cycle consistency in two ways: (1) For a pair of frames $(P_k, P_l)$, the scene flow is updated and obtained in both the forward ($\hat\delta_{kl}$) and backward ($\hat\delta_{lk}$) directions. (2) The motion head's estimate for the original input pair $(P_k, P_l)$ is rectified by averaging it with the inverse of the predicted transformation for $(P_l, P_k)$. For readability, we simplified the description of cycle consistency; we will make sure to release the open-source implementation with these training details. **Q6:** How does the method generalize to different point densities/distributions?
What is the maximum number of points could the method process? **A6:** (1) We follow standard protocols on corresponding datasets and evaluate our model under various point numbers. Specifically, the point number of one frame is set to 512, 2048, 2048 and 8192 on SAPIEN, OGC-DR, OGC-DRSV and KITTI-SF, respectively. (2) As for the generalizability to different point densities, we include additional experiments on KITTI-Det & SemanticKITTI, of which the points are sparser than those on KITTI-SF. Please refer to **A3** in our response to Reviewer ZttT for more details. **Q7:** Some typos. **A7:** Thank you for bringing these typos to our attention. We will make sure to correct the typos in Sec 3.3 & 4.2 in the revised manuscript.
Summary: This paper studies the problem of rigid part segmentation and motion segmentation. The proposed network first uses a global SE(3)-equivariant backbone to extract point features; then a part segmentation head predicts invariant zero-order segmentation and a motion head predicts first-order transformations. The network can be trained on pairs of point clouds before and after deformation with noisy scene flow. The multi-body equivariance is approximate because of 1.) discretization in EPN, 2.) the global backbone can't handle deformation. Strengths: - The paper exploits a strong geometric inductive bias — equivariance — in the segmentation problem, which should be highlighted and emphasized more to the community. - The system is tested across different settings from articulated objects to outdoor scenes and the experiments are convincing. Weaknesses: - The main weakness of this paper is that the “equivariance” is not exact: because there are pooling and information aggregation in the point cloud feature backbone, when the target is deformed (not necessarily non-rigidly, but rigidly transformed or articulated; this is a word from equivariant learning theory), the overall point cloud changes, and information can be propagated across rigid parts, so the features of each point do not transform purely following its part's transformation. This is because, without knowing the mask, the group action of SE(3)xSE(3)x…xSE(3) on the point cloud cannot be defined. - Another weakness of this paper is that the system should take in at least a pair of point clouds as input during inference (as far as the reviewer currently understands, please correct him if this is wrong). So the task is relatively easy, because by comparing two point clouds (even by brute-force enumeration and classical geometric algorithms) the segmentation of rigidly moved parts may be trivially found in most cases.
And the system seems only to focus on moving parts rather than semantic or instance parts that may not move. - The reviewer is also curious why the author chooses to use EPN as the backbone; is there a specific reason for such a design? Note that EPN is only approximately equivariant due to the discretization. In other words, will such a framework still work for other backbones like TFN or VNN? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitation is explicitly highlighted in the paper, but the reviewer hasn't found a social impact statement. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough feedback and are grateful for your recognition of our application of equivariance to the segmentation problem and of our convincing experiments. We have carefully considered each of your concerns and have provided detailed responses to each one below. **Q1:** The “equivariance” is not exact, i.e. because there are pooling and information aggregation in the point cloud feature backbone when the target is deformed (not necessarily non-rigid, but rigid transformed, articulated, this is a word from equivariant learning theory), the overall point cloud changes in such cases, but the information can be propagated across rigid parts so the features of each point do not transform purely following its parts transformation. This is because without knowing the mask, the group action of SE(3)xSE(3)x…xSE(3) on the point cloud can not be defined. **A1:** Thank you for helping us improve our work. We agree that per-point features cannot transform strictly along with the part-level transformation, resulting in an inexactly defined local "equivariance". However, this inexact local equivariance has proven to be sufficiently effective in practice in existing work [73]. Specifically, EON [73] utilizes object-level equivariance in fully-supervised object detection, where the Region Context Aggregation is not perfectly equivariant. In this paper, we explore the feasibility of part-level equivariance in unsupervised multi-body segmentation, and it performs well on various datasets. We appreciate your insight and will add a discussion of the inexactness of equivariance to Sec 5 (Limitations). **Q2:** Another weakness of this paper is that the system should take in at least a pair of point clouds as input during inference (as far as the reviewer currently understands, please correct him if this is wrong).
So the task is relatively easy because by comparing two point clouds (even by brutal-force enumerating and classical geometric algorithms) the segmentation of rigid moved parts may be trivially found in most cases. And the system seems only to focus on moving parts rather than semantic or instance parts that may not be moved. **A2:** Thank you for your valuable comment. We would like to clarify that our model is capable of taking a single frame as input **during inference** and can segment parts that may not be moved. In the training stage, our model indeed requires at least two frames since scene flow serves as a supervisory signal in the unlabeled setting. However, the segmentation head can work without the help of motions during inference. In the experiments, static cars/trucks parked at the side of the road frequently occur on KITTI-SF (Table 4 in the main body of this paper), KITTI-Det & SemanticKITTI (**A3** in our response to Reviewer ZttT), and our model still performs comparably or better than state-of-the-art methods. We will include additional notes in Figure 2 to provide further clarification on the input data. **Q3:** The reviewer is also curious why the author chooses to use EPN as the backbone, is there a specific reason for such a design? Note EPN is only approximate equivariance due to the discretization. In other words, will such a framework still work for other backbones like TFN or VNN? **A3:** (1) The choice of using EPN as the backbone in our paper was made based on its robustness (Lines 121 - 125). By regressing the residual after selecting the rotational anchor, instead of directly regressing the entire rotation, vanilla EPN exhibits robust global equivariance, which is adopted in some existing unsupervised tasks like 6D pose estimation [43]. Designed for unsupervised learning, our model also adopts EPN as the backbone. 
(2) Theoretically, our framework should still work with other backbones such as TFN or VNN, as long as its per-point equivariant feature is accessible. Experimentally, it would require further investigation to determine their robustness to the part-level uncertainty and the unsupervised setting, which would be beyond the scope of this paper. We appreciate your suggestion and will make sure such explorations are included in future work. --- Rebuttal Comment 1.1: Comment: The authors' responses address most of my concerns, and after reading other reviews I lean to keep my positive rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer S5HY Comment: Thank you for your review and for taking the time to read our responses. We are glad to hear that our responses have addressed most of your concerns and that you have decided to keep your positive rating. We really appreciate your feedback :)
Summary: This paper proposes a unified framework to simultaneously handle rigid segmentation and motion estimation. Specifically, a probabilistic SE(3)-equivariant network is presented to obtain transformation-invariant features, which is followed by two heads for rigid segmentation and motion estimation. Due to the interdependent nature of the above two tasks, the authors utilize a unified training strategy to jointly optimize the two heads. Extensive experiments on four datasets show that this method can yield good performance with low computational complexity. Strengths: -Significance and originality Unsupervised rigid segmentation and motion estimation are challenging. Taking the generalization requirement into account, the authors introduce a probabilistic part-level SE(3)-equivariant feature encoder to capture transformation-invariant features, which differs from global SE(3) equivariance. It is simple but makes sense. A unified training strategy to jointly optimize the outputs of the two heads in an online fashion is well motivated. Although many other tasks also employ joint optimization, this is the first for this task. -Clarity The paper contains enough details and definitions of the proposed contributions. -Experiment Quantitative results show a clear improvement compared to previous methods. Weaknesses: -Time cost of the proposed method and other methods. -The authors should systematically analyze the robustness to the initial scene flow. Does the performance drop a lot with poor initial scene flow results? (add some noise or use insufficiently trained scene flow results). -This paper relies on the rigidity constraint and works well on finite datasets. The SAPIEN and OGC-DR datasets are without background and well-separated. The authors can conduct some experiments on large outdoor scenes, which are more complicated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: please see the weaknesses.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: -In related works, deep learning on dynamic point clouds is too highly summarized. The authors can present more details. -The authors can visualize more segmentation results and intermediate results so that the proposed method can be better understood. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We are grateful for your recognition of the significance and originality of our training strategy based on part-level SE(3)-equivariance, clarity, and experiments. We have carefully reviewed each of your concerns, and all the additional experiments will be added to the supplementary materials accordingly. **Q1:** Time cost of the proposed methods and other methods. **A1:** We conduct experiments to compare the training and the testing time costs on the whole SAPIEN dataset, reported as follows:

| Model | Fully-supervised Training (Hours) | Unsupervised Training (Hours) | Inference (Hours) |
|-------|--------------------|-----------------------|-----------|
| MBS | 18.7 | --- | 3.2 |
| OGC | 4.9 | 21.8 | 0.6 |
| Ours | 4.5 | 17.3 | 0.8 |

Our inference time is slightly longer than that of the OGC model due to the additional processing time required by the EPN to handle the 60-angle icosahedral group. However, our model has a strong ability to capture rigid information, which allows it to converge in fewer training iterations and results in a shorter overall training time. **Q2:** Does the performance drop a lot with poor initial scene flow results? (add some noise or with insufficiently trained scene flow results). **A2:** We conduct additional experiments to analyze the influence of initial scene flow on the performance of segmentation. 
By adding various intensities of zero-centered Gaussian noise to the scene flow, the segmentation results on SAPIEN are reported as follows:

| Variance of Gaussian Noise | AP$\uparrow$ | PQ$\uparrow$ | F1$\uparrow$ | Pre$\uparrow$ | Rec$\uparrow$ | mIoU$\uparrow$ | RI$\uparrow$ |
|----------------------------|--------------|--------------|--------------|---------------|---------------|----------------|--------------|
|0 (w/o Noise)|63.8|61.3|77.3|84.2|71.3|63.7|75.4|
|0.01|62.7|60.1|76.2|82.0|71.1|63.7|75.1|
|0.025|60.3|57.4|73.0|75.6|70.6|62.6|74.8|
|0.05|53.8|32.9|46.5|36.5|64.2|56.7|68.1|
|0.1|49.2|27.1|40.3|30.2|60.5|53.4|65.2|

Our model is robust to noisy scene flow within a variance range of 0.01 to 0.025. However, as the noise intensity increases to 0.05, there is a noticeable decline in performance. **Our response to your Q3-Q5 will be uploaded once the interactive comment function is available.** --- Rebuttal Comment 1.1: Title: Response to Reviewer ZttT (Q3) Comment: **Q3:** This paper relies on rigidity constraint and works well on finite datasets. The SAPIEN and OGC-DR datasets are without background and well-separated. The authors can conduct some experiments on large outdoor scenes, which are more complicated. **A3:** Thank you for this insightful suggestion! 
Following the same evaluation protocol of OGC [61], we further evaluate the generalizability of our model from KITTI-SF to two larger outdoor datasets, i.e., KITTI-Det [1*] and SemanticKITTI [2*], and compare the performance with the results reported in OGC:

(1) KITTI-Det

| | AP$\uparrow$ | PQ$\uparrow$ | F1$\uparrow$ | Pre$\uparrow$ | Rec$\uparrow$ | mIoU$\uparrow$ | RI$\uparrow$ |
|----------------------------|--------------|--------------|--------------|---------------|---------------|----------------|--------------|
| OGC$_{sup}$ | 51.4 | 41.0 | 49.1 | 43.7 | 56.0 | 66.2 | 91.0 |
| Ours$_{sup}$ | **52.5** | **43.3** | **51.8** | **47.5** | **57.0** | **68.0** | **92.6** |
| OGC$_{unsup}$ | 40.5 | 30.9 | 37.0 | 30.8 | **46.5** | **60.6** | 86.4 |
| Ours$_{unsup}$ | **41.3** | **32.9** | **38.8** | **35.3** | 43.1 | 60.2 | **87.2** |

(2) SemanticKITTI

| Sequences | Methods | AP$\uparrow$ | PQ$\uparrow$ | F1$\uparrow$ | Pre$\uparrow$ | Rec$\uparrow$ | mIoU$\uparrow$ | RI$\uparrow$ |
|-------------------------|----------------|--------------|--------------|--------------|---------------|---------------|----------------|--------------|
| 00 - 10 | OGC$_{sup}$ | 53.8 | 41.3 | 48.1 | 40.1 | 60.0 | 68.3 | 90.0 |
| | Ours$_{sup}$ | **60.1** | **47.6** | **55.4** | **48.6** | **64.4** | **71.9** | **93.4** |
| | OGC$_{unsup}$ | 42.6 | 30.2 | 35.3 | 28.2 | 47.3 | 60.3 | 86.0 |
| | Ours$_{unsup}$ | **46.9** | **31.6** | **36.9** | **29.0** | **50.6** | **63.2** | **88.7** |
| 00 - 07 & 09 - 10 | OGC$_{sup}$ | 55.3 | 41.8 | 48.4 | 40.1 | 61.1 | 69.9 | 90.3 |
| | Ours$_{sup}$ | **60.5** | **48.1** | **55.6** | **48.8** | **64.7** | **73.2** | **93.8** |
| | OGC$_{unsup}$ | 43.6 | 30.5 | 35.5 | 28.1 | 48.2 | 62.1 | 86.3 |
| | Ours$_{unsup}$ | **47.4** | **31.7** | **36.8** | **28.7** | **51.0** | **64.8** | **89.3** |
| 08 | OGC$_{sup}$ | 49.4 | 39.2 | 46.6 | 40.0 | 55.8 | 60.3 | 88.3 |
| | Ours$_{sup}$ | **58.4** | **46.0** | **54.4** | **47.8** | **63.1** | **65.8** | **91.7** |
| | OGC$_{unsup}$ | 38.6 | 29.1 | 34.7 | 28.6 | 44.0 | 51.8 | 84.3 |
| | Ours$_{unsup}$ | **44.2** | **31.0** | **37.3** | **30.0** | **49.1** | **55.8** | **86.1** |

--- Reply to Comment 1.1.1: Title: Response to Reviewer ZttT (Q3-Q5) Comment: Unlike the stereo-based point clouds in KITTI-SF, the point clouds in the SemanticKITTI and KITTI-Det datasets are collected using LiDAR sensors, resulting in sparser data. In these **large-scale** and **low-density** outdoor settings, our model significantly outperforms the state-of-the-art approach OGC across all metrics on SemanticKITTI and on a majority of metrics on KITTI-Det. In addition to the above experiments, we plan to include more results and comparisons (e.g., training our model from scratch on these large-scale outdoor scenes) in our revised manuscript. [1*] Geiger, Andreas, Philip Lenz, and Raquel Urtasun. "Are we ready for autonomous driving? The KITTI vision benchmark suite." 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2012. [2*] Behley, Jens, et al. "SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2019. **Q4:** In related works, deep learning on dynamic point clouds is too highly summarized. The authors can present more details. **A4:** We appreciate your suggestion to provide more details in the related works section of our manuscript. We will revise Sec 2 to provide a more comprehensive overview of the existing literature on deep learning on dynamic point clouds. Thank you for bringing this to our attention. If you have any specific citations or references that you would like us to include, please let us know and we will be happy to incorporate them into our revised manuscript. 
**Q5:** The authors can visualize more results so that the proposed method can be better understood. **A5:** We have provided additional visualizations, which are attached as a PDF in the global response, across four datasets: OGC-DR, OGC-DRSV, SAPIEN, and KITTI-SF with multiple rigid parts. We have also included some failure cases in challenging scenes on the SAPIEN dataset, as indicated by the green circles in the final two rows of the SAPIEN visualization. For more details on the visualizations, please refer to the global response.
Summary: This paper proposes a part-level SE(3)-equivariant framework for multi-body rigid motion modeling. The two heads that take in the SE(3) features, segmentation and motion estimation, are carefully crafted based on their inherent invariant and equivariant characteristics. The relationship of scene flow, rigid segmentation, and multi-body transformation is then exploited to derive an unsupervised optimization strategy. It achieves state-of-the-art results on multiple datasets while significantly reducing the parameters and training computation required. Strengths: The structure of this paper is very clear, providing a comprehensive explanation of the research problem, motivation, background, principles, methods, and experiments. The text and figures have been meticulously polished. This work effectively utilizes SE(3) equivariance in rigid segmentation and multi-body motion estimation, and is outstanding in terms of experimental results and model size. Weaknesses: Rigid segmentation and multi-body motion estimation are intrinsically coupled. I wonder if this coupling might also have negative effects on the model, for instance, inaccurate segmentation leading to poorer scene flow estimation results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses section, refute views or answer questions or improve the paper. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your valuable feedback and are grateful for your recognition of our writing and the contributions presented in our paper, such as the utilization of SE(3) equivariance, experiments, and model size. We have carefully considered your main concern and have provided detailed responses below. **Q1:** Rigid segmentation and multi-body motion estimation are intrinsically coupled. I wonder if this coupling might also have negative effects on the model, for instance, inaccurate segmentation leading to poorer scene flow estimation results. **A1:** We agree that rigid segmentation and multi-body motion estimation are intrinsically coupled, and this coupling can have both positive and negative effects on the model. (1) It is natural that inaccurate segmentation would lead to poor scene flow estimation. To further explore the influence of inaccurate segmentation, we conduct additional experiments on the validation set of SAPIEN. By adding different levels of noise to the predicted segmentation, we compare the change in unsupervised motion estimation performance:

| Noise Level | EPE3D$\downarrow$ | AccS$\uparrow$ | AccR$\uparrow$ | Outl$\downarrow$ |
|-------------|-------------------|----------------|----------------|------------------|
| 5% | 4.34 | 15.0 | 32.2 | 84.1 |
| 15% | 6.79 | 11.6 | 26.1 | 87.6 |
| 25% | 9.29 | 9.28 | 21.5 | 89.9 |

It is observed that poor segmentation leads to low performance in scene flow estimation. (2) Experiments also demonstrate that poor scene flow can result in performance degradation in rigid segmentation. Please refer to **A2** in our response to Reviewer ZttT for more details. (3) To mitigate the negative effect of this coupling, this paper proposes some corrective schemes, such as the consensus $\beta^{s(i)}_{kl}$ (Lines 205 - 210) and smoothly updated scene flow (Lines 194 - 200). 
In the unsupervised setting, the benefits of coupling rigid segmentation and multi-body motion estimation outweigh the potential drawbacks. Experiments show that our model is able to leverage the complementary information provided by each task to improve its performance on both. For future work, we encourage the community to build on our model to alleviate the side effects of the coupling under the unsupervised setting. Thank you again for your insightful comment. --- Rebuttal Comment 1.1: Comment: Sorry for my late reply. The rebuttal addresses my concerns. Referring to the opinions of the other reviewers, I will retain my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer mTMF Comment: Thank you for your reply and your constructive feedback. We appreciate your time and effort in reviewing our paper. We are glad to hear that our rebuttal addresses your concerns and that you will retain your score. We hope that our paper can contribute to the advancement of the field and that you will find it useful for your future research. Thank you again for your valuable comments and suggestions.
Rebuttal 1: Rebuttal: We thank all reviewers for their valuable comments and suggestions. We would like to highlight some strengths of our paper pointed out by reviewers, including a **well-motivated** (sjc7) and **original** (Ztt7) idea, **extensive and convincing experiments with impressive results surpassing state-of-the-art methods** (86bd, Ztt7, S5HY, XLXX), and **clear and detailed explanation of the motivation, contributions, method, and experiments** (mTMF, ZttT). We provide individual feedback for every review to address specific concerns. We also provide additional visualizations in the attached pdf file. The file includes (a) challenging scenes (multiple parts) on SAPIEN, with successful and failure cases (errors as circled in green on the final two rows); (b) visualizations on **all of the other datasets** (OGC-DR, OGC-DRSV, and KITTI-SF). All of the results ("Ours") are obtained from our model in the **unsupervised setting**, while "GT" means the ground-truth segmentation. Pdf: /pdf/6b7aa6e8515b66b6745afb0a036cce29a0f22599.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: Modeling multi-body rigid movements involves rigid segmentation, which is category-agnostic, and motion estimation, which is open-set. This work proposes a part-level SE(3)-equivariant network that takes in point clouds corresponding to video frames and can effectively estimate category-agnostic open-set motion. Strengths: The work leverages the point-level feature equivariance from existing works and uses it to obtain invariant representations for the rigid segmentation, and then further uses it to estimate motion using an equivariant motion head. The approach is able to provide state-of-the-art performance with far fewer FLOPs. It makes note of the interdependence between the tasks of rigid segmentation and motion estimation and uses that interdependence for unsupervised training. Weaknesses: The paper writing could be better so that the high-level idea is more easily understood. The contributions of the work could be summarized in fewer words in the introduction section, making the work much easier to follow. SE(3) has been used multiple times in the paper, including the title, but it has not been described in the paper anywhere. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please address the issues mentioned in the weakness section Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have discussed limitations of their proposed approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. We appreciate your recognition of our improvements on feature equivariance, the high performance and low FLOPs, and our use of interdependence for unsupervised training. Please find our responses to each concern below. **Q1:** The paper writing could be better so that the high-level idea is more easily understood. The contributions of the work could be summarized in fewer words in the introduction section making the work much easier to follow. **A1:** We really appreciate your advice to improve the readability of this paper. Currently, the main summary for each contribution is highlighted in italic format. We realize that this format may be hard to read, and will reformat the section by describing the details first and having shorter bullet points towards the end of Introduction. **Q2:** SE(3) has been used multiple times in the paper including the title but it has not been described in the paper anywhere. **A2:** Thank you for bringing this to our attention and we apologize for not including a detailed description of SE(3) in our paper. SE(3) stands for the **S**pecial **E**uclidean group in three dimensions (**3**D), which is the group of rigid body transformations in three-dimensional space. We will make sure to include a definition and explanation of SE(3) in Sec 3.1.
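To make the definition in **A2** concrete (an illustrative NumPy sketch, not part of the paper's method), an element of SE(3) can be represented as a rotation matrix $R \in SO(3)$ paired with a translation $t \in \mathbb{R}^3$, acting on a point $x$ as $Rx + t$:

```python
import numpy as np

def se3_apply(R, t, points):
    """Apply a rigid-body (SE(3)) transform x -> R x + t to an (N, 3) array of points."""
    return points @ R.T + t

# Example: a 90-degree rotation about the z-axis, plus a unit translation along x.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])

pts = np.array([[1.0, 0.0, 0.0]])
print(se3_apply(R, t, pts))  # [[1. 1. 0.]]
```

SE(3)-equivariance of a per-point feature then simply means that applying such a transform to the input point cloud produces a correspondingly transformed feature, rather than an unrelated one.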
Summary: This paper presents a method that estimates the part segmentation, and the rigid motions of the parts, for an articulated object (with multiple rigid parts), given as input a sequence of point clouds of that object. The method works by first computing SE3-equivariant features for the points, and then processing with an SE3-invariant head for segmentation estimation (pooling across rotation groups), and then processing with an SE3-equivariant head for rigid motion estimation (pooling across points within estimated segments). The method also initializes a scene flow estimate from a pre-trained off-the-shelf model, and then iteratively updates this to match the estimated rigid motion. Finally, the model uses the weighted-Kabsch algorithm to estimate transformations for the parts according to the flow and segmentation, which yields optimization targets that encourage the estimates to be self-consistent. Strengths: This paper makes a well-motivated case for using a combination of equivariant and invariant features to jointly solve rigid part segmentation and motion estimation. The proposed network has fewer parameters and runs faster than the baselines, and also achieves higher accuracy, in both the supervised setting and the unsupervised setting. Jointly optimizing the motion, segmentation, and scene flow, makes sense. Weaknesses: I found this paper very difficult to follow. I am quite sure that the content in the main text is not a complete description of a reproducible method. Nearly every section defers to the supplementary material for additional information. In my reading of the guidelines, the supplementary material is for supporting the paper, not for describing key details of the method. The method in the paper should make sense on its own. 
Because my main complaint is on clarity, I'll put a lot of questions, but please interpret some of the questions as reflecting weaknesses, because these are things that should be clear in the work. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Do the two point clouds correspond? I suppose not, because then motion can be computed just from taking a difference of the positions. The problem statement in 3.2 suggests a sequence, but later (e.g., in 3.3) it seems like the sequence length might just be 2. Is the sequence length ever not 2? The w(p^i_k,g_j) used in equation 2 are described as "derived through a 1x1 convolution". What does this mean exactly? The paper talks about "fusing such invariant representations u^i_{k(1)},... from h layers" -- how is this fusing done exactly? The paper says "the segmentation head outputs a probabilistic prediction M^k of the rigid mask" -- what makes it probabilistic exactly? This idea of "probabilistic" masks is repeated many times through the paper, and at some point there are also "probabilistic SE3 features" (actually only in Figure 3). In my experience, the word "probabilistic" is usually used to indicate something non-deterministic, but in this case it seems like it might actually be deterministic. What exactly is meant by a probabilistic mask or probabilistic feature? The paper says "the motion head is supposed to *handle these uncertain category-agnostic parts*" (italics in the paper). Why are those words in italics? What is meant by "handle" here? It seems like a special usage of the word. Equation 3 looks like it might just be depicting an element-wise product. Is this correct? It would be great to write this more efficiently. The paper says "the specific category labels can be agnostic to the model". What does it mean for category labels to be agnostic to a model? 
The paper says "Based on the correlated feature Ckl, the motion head estimates rotation R^s_{kl} and translation t^s_{kl} of each rigid part s." This is a critical part of the model. How are the correlations used exactly, what is the predictor, how are the rotation and translation outputs represented, and how are the outputs supervised? Under equation 6, it says "In this manner, improved scene flow is capable of providing enhanced supervision to learn segmentation masks and motion estimates." I don't see how this follows from the equation. The equation only produces an updated scene flow, using a convex combination of the previous estimate and the rigid estimate. I see no "supervision" (or "enhanced supervision") applied to segmentation masks here. In the final paragraph of the method, it says "the motion head is optimized by the rotation component of T^s_{kl}, and estimates corresponding translation by minimizing our probabilistic part-level distance." What does it mean to optimize the motion head by rotation? What does it mean to estimate translation by minimizing "probabilistic part-level distance"? It seems like these are critical details in the procedure. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Looks OK Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback! We are grateful for your acknowledgment of the motivation, novelty, and contributions of our model in terms of its parameters, complexity, and accuracy. While we understand your concern regarding the clarity of certain sections, we would like to address each of your concerns in detail below; we believe that these corrections can indeed be made for the camera-ready version. We hope our answers, along with the support of Reviewers mTMF and ZttT that the explanation of the paper is "clear" and "detailed", can alleviate your concerns and lend more merit to the impact and originality of our method. **Q1:** Do the two point clouds correspond? I suppose not, because then motion can be computed just from taking a difference of the positions. **A1:** No, we do not assume that these point clouds correspond to one another. The correspondence between points in the two frames is unknown. **Q2:** The problem statement in 3.2 suggests a sequence, but later (e.g., in 3.3) it seems like the sequence length might just be 2. Is the sequence length ever not 2? **A2:** The length of an input sequence can exceed 2. For simplicity, we only describe how to process a pair of point clouds, but the method can be extended to a sequence longer than two frames, following common practice. In accordance with previous works [32, 61, 68], the sequence is input into the model as a pair of either any two frames (for the SAPIEN dataset) or consecutive frames (for the OGC dataset) at each iteration. We will further describe this at the start of 3.3 in the revision. **Q3:** The $w(p^i_k,g_j)$ used in equation 2 are described as "derived through a 1x1 convolution". What does this mean exactly? **A3:** This notation follows the paper of the original EPN [9]: the convolution of the $j^{th}$ icosahedral angle $g_j$ takes a point $p^i_k$ as its input and outputs the selection probability $w$. 
In other words, the value of $w$ is obtained from a 1x1 convolution, which is implemented as a point-wise mapping of the features from the last layer for a dimensional change. **Q4:** The paper talks about "fusing such invariant representations $u^i_{k(1)},...$ from $h$ layers" -- how is this fusing done exactly? **A4:** We apologize for not including a detailed description of our fusing process in the main body of the paper. Due to page limitations, our initial approach was to explain the high-level end-to-end idea of our method and to put the details in the original supplementary material. Specifically, the fusing process is illustrated in Figure 2 of the Supp. The multi-layer invariant representations are fused through an hourglass structure, where we downsample the features with Set Abstraction (SA) modules and then upsample with Feature Propagation (FP) modules [55]. Subsequently, this is followed by an attention-based decoder, as practiced in OGC [61]. We will make sure to include a brief description in Sec 3.3 of the main body to make it more readable. **Q5:** The paper says "the segmentation head outputs a probabilistic prediction $M^k$ of the rigid mask" -- what makes it probabilistic exactly? This idea of "probabilistic" masks is repeated many times through the paper, and at some point there are also "probabilistic SE(3) features" (actually only in Figure 3). In my experience, the word "probabilistic" is usually used to indicate something non-deterministic, but in this case it seems like it might actually be deterministic. What exactly is meant by a probabilistic mask or probabilistic feature? **A5:** We use the term “probabilistic” to refer to the fact that the segmentation head outputs a prediction of the rigid mask in the form of a probability distribution. Instead of a "hard" binary rigid partition, the mask represents the probability of each point “softly” belonging to different rigidities. 
Similarly, the term “probabilistic SE(3) features” refers to features that correspond to a probability distribution over SE(3) transformations, rather than a fixed rigid transformation. We apologize for any confusion and will change this term to **soft prediction** in Sec 3.3. **Q6:** The paper says "the motion head is supposed to _handle these uncertain category-agnostic parts_" (italics in the paper). Why are those words in italics? What is meant by "handle" here? It seems like a special usage of the word. **A6:** We use italics to emphasize that the motion head can deal with parts of the data that are uncertain (estimated from the segmentation head, without ground-truth partition) and do not belong to a specific category. By “handle”, we mean that the motion head is designed to process these uncertain parts in order to produce a meaningful output. **Q7:** Equation 3 looks like it might just be depicting an element-wise product. Is this correct? It would be great to write this more efficiently. **A7:** Thank you for the question. The symbol $\cdot$ means a broadcasting element-wise product over the $D$-dimensional feature channel, which is not exactly the same as a standard element-wise operation. Since $\hat{m}_k^i$ and $\theta (p_k^i,g_j)$ have different sizes, a standard element-wise operation cannot be directly applied. **Q8:** The paper says "the specific category labels can be agnostic to the model". What does it mean for category labels to be agnostic to a model? **A8:** This sentence occurs on Line 181, which means that our model does not require any category labels of the parts (e.g., 'Lense', 'Temple', 'Nose Pad' for eyeglasses), and the ground truth of part-level correspondence between frames is unknown to this model. We will change the wording to convey this idea more clearly. 
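As a side note on **A7** (a generic NumPy sketch with assumed, illustrative shapes, not the paper's exact implementation), a broadcasting element-wise product multiplies each point's scalar mask weight across all rotation anchors and feature channels, which a plain same-shape element-wise product cannot do:

```python
import numpy as np

# Assumed, illustrative shapes: N points, A rotation anchors, D feature channels.
N, A, D = 4, 2, 3
rng = np.random.default_rng(0)
mask = rng.random((N, 1, 1))  # soft per-point mask weight, playing the role of m_hat
feat = rng.random((N, A, D))  # per-point equivariant features, playing the role of theta

# The (N, 1, 1) mask broadcasts against the (N, A, D) features: each point's
# scalar weight is replicated over the A anchors and the D channels.
weighted = mask * feat
assert weighted.shape == (N, A, D)
```

This is the sense in which $\cdot$ in Equation 3 differs from a standard element-wise product: the two operands have different sizes, and the smaller one is implicitly expanded along the anchor and channel axes.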
**Our response to your Q9-Q11 will be uploaded once the interactive comment function is available.** --- Rebuttal Comment 1.1: Title: Response to Reviewer sjc7 (Q9-Q11) Comment: **Q9:** The paper says "Based on the correlated feature $C_{kl}$, the motion head estimates rotation $R^s_{kl}$ and translation $t^s_{kl}$ of each rigid part $s$." This is a critical part of the model. How are the correlations used exactly, what is the predictor, how are the rotation and translation outputs represented, and how are the outputs supervised? **A9:** Originally, due to the page limit and this part's close similarity to the approach of EPN [9], we included the details in the supplementary material (Lines 38-46). We realize that this may cause slight ambiguity and will briefly describe this method towards the end of Line 183 in the main paper. The summary is as follows. The predictor of the rotation is based on a 1×1 convolution. The anchor $g_{kl}^s$ is chosen from $\mathcal{G}$ by minimizing the registration error and then optimizing the residual $r_{kl}^{s}$. The rotation is computed as $\hat{\mathbf{R}}^s_{kl}=g_{kl}^{s}r_{kl}^{s}$. The translation is derived from the minimal weighted distance between the transformed frame of $P_k$ and the origin $P_l$, where $\hat{\mathbf{t}}^s_{kl}=\arg\min_{\mathbf{t}} d(\hat{\mathbf{R}}^s_{kl}P_k + \mathbf{t}, P_l)$ and $d$ is the chamfer loss weighted by the mask predictions. **Q10:** Under equation 6, it says "In this manner, improved scene flow is capable of providing enhanced supervision to learn segmentation masks and motion estimates." I don't see how this follows from the equation. The equation only produces an updated scene flow, using a convex combination of the previous estimate and the rigid estimate. I see no "supervision" (or "enhanced supervision") applied to segmentation masks here. **A10:** Thank you for bringing this to our attention. We apologize for any confusion caused by the sentence in question. 
Instead of only continuing from the previous text (Equation 6), this sentence mainly serves as a transition to the next two paragraphs (Lines 201-216), which subsequently describe how the improved scene flow supervises the segmentation (Lines 201-211) and motion estimation (Lines 212-216). **Q11:** In the final paragraph of the method, it says "the motion head is optimized by the rotation component of $T^s_{kl}$, and estimates corresponding translation by minimizing our probabilistic part-level distance." What does it mean to optimize the motion head by rotation? What does it mean to estimate translation by minimizing "probabilistic part-level distance"? It seems like these are critical details in the procedure. **A11:** Based on the original supplementary material (Lines 54-60), we create a summary that will be added to the revised paper (we will make our contributions in Introduction more concise to compensate for the space needed). The summary is as follows. Based on the prediction of segmentation masks, the part-level rigid transformation can be computed by the weighted-Kabsch algorithm [32, 27]. By minimizing a mean SO(3) loss (described in the original paper of EPN [9]) from the rotation component of this rigid transformation, the estimated rotation ($\hat{\mathbf{R}}^s_{kl}$ in **A9** ) is optimized in a differentiable manner. The corresponding translation is then estimated by minimizing the part-level chamfer distance (through the formula of $\hat{\mathbf{t}}^s_{kl}$ described in **A9**), which aims to find the translation that best aligns the two point clouds. We will make sure to revise our paper to ensure that the method is fully described and further understandable. We appreciate your questions and will address them in detail to clarify any weaknesses or misunderstandings in the revised manuscript. Thank you again for bringing these issues to our attention.
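For readers unfamiliar with the weighted-Kabsch step mentioned in **A11**, the following is a minimal NumPy sketch of the standard weighted algorithm, assuming known point correspondences and generic per-point weights (in the paper, soft mask values would play the role of the weights; the actual implementation follows [32, 27]):

```python
import numpy as np

def weighted_kabsch(P, Q, w):
    """Best-fit rigid transform (R, t) minimizing sum_i w_i * ||R @ P[i] + t - Q[i]||^2.

    P, Q: (N, 3) arrays of corresponding points; w: (N,) non-negative weights.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(axis=0)             # weighted centroids
    q_bar = (w[:, None] * Q).sum(axis=0)
    H = (P - p_bar).T @ (w[:, None] * (Q - q_bar))   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

# Sanity check: recover a known rotation about z and a known translation.
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
P = np.random.default_rng(0).normal(size=(10, 3))
Q = P @ R_true.T + t_true
R_est, t_est = weighted_kabsch(P, Q, np.ones(10))
```

Because the SVD is differentiable almost everywhere, gradients can flow back through the estimated transformation into the mask weights, which is what makes this step usable as a supervision signal in the unsupervised setting.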
RangePerception: Taming LiDAR Range View for Efficient and Accurate 3D Object Detection
Accept (poster)
Summary: This paper studies range-view-based LiDAR 3D object detection. It proposes Range Aware Kernel and Vision Restoration Module to handle the problems of range view representation, such as inconsistent range distribution and boundary splitting. The main contributions are as follows: - The proposed module boosts the performance of range-view-based detection and the whole framework is very efficient, further closing the gap between range-view-based methods and BEV-based methods, which is meaningful in practical use. - Authors are willing to release the code based on the popular OpenPCDet codebase. Given that previous works either do not open source (e.g., LaserNet, RCD) or are based on a non-mainstream framework (RangeDet), it is valuable and admirable if authors could contribute the source code. Strengths: - The proposed modules well addressed the problem in range view. In particular, Range-aware Kernel is a more elegant and efficient solution than the Range-conditioned Pyramid in RangeDet. - Although not very extensive, the effectiveness of every proposed module is well grounded in the experiment section. - Performance is good and reaches SoTA performance among all range-view-based methods. Compared with other BEV-based methods, its performance is also promising, especially for a single-stage detector. - Authors are willing to release the code based on the popular OpenPCDet codebase. Given that previous works either do not open source (e.g., LaserNet, RCD) or are based on a non-mainstream framework (RangeDet), it would be valuable and admirable if authors could contribute the source code. Weaknesses: - Technical novelty is a little bit limited. The issue of spatial misalignment is in fact mentioned in RangeDet, where Meta Kernel is proposed to mitigate the problem. And the Vision Restoration is also a straightforward trick, and a similar trick (Ring CNN) is adopted in PolarNet. However, the proposed solution is indeed useful and practical. 
In my opinion, this point is not a fatal weakness. - Experiments seem not extensive. This paper only contains two tables. Although these tables are informative and well validate the proposed module, these experiments are not sufficient to reach the bar of a high-standard conference. For example, Table 2 could be split into several small tables for more detailed ablation on each component. RAK is the core design, and it deserves a more detailed ablation on various hyper-parameters (e.g., different range-partition intervals, different placement layers/positions) or a comparison with other potential alternative designs, which is necessary to reveal the inner workings. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: My overall impression is positive and I believe this paper has significant practical meaning in LiDAR-based 3D object detection, especially for industrial use. However, I think the experiments are not sufficient (though they support the claims well) to meet the bar of the high-standard NeurIPS conference. I strongly encourage authors to add more ablation experiments to offer more insight into the proposed modules. If so, I will increase my rating. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive recognition of our work. We hope the discussion below could address your concerns. ## Response 1.1 Extensive Ablation Study for Range Aware Kernel (Question 1 & Weakness 2) We have conducted thorough comparison experiments during the development process of the Range Aware Kernel (RAK). Below, we present the quantitative results of these experiments, which will serve as an extensive ablation study in the next version of this paper. The results of the ablation study are presented in Table A (in global response 5.1), where we assess the effectiveness of RAK. **Comparison with Other Operators** Firstly, this study investigates the impact of RAK by removing it from our framework (E1) and replacing it with a 1$\times$1 Convolution layer of equal dimension (E2). The results show a significant decrease of more than 5 average L1 AP for both cases, highlighting the crucial role of RAK in achieving strong detection performance. Secondly, we compare the performance of RAK and Meta-Kernel by replacing RAK with Meta-Kernel in our framework (E3). Comparison reveals that the RAK setting outperforms the Meta-Kernel setting by an improvement of 2.85 average L1 AP, further validating the efficacy of RAK in our approach. **Adjusting Number of Windows.** We investigate the optimality of the Perception Window setting in RAK by varying the number of Perception Windows (E4 - E7), where the corresponding hyper-parameters are presented in Table B. Reducing the number of Perception Windows to 2 and 4 leads to decreases in detection AP (E4, E5), indicating that an insufficient number of Perception Windows hinders the remedy for Spatial Misalignment. Increasing the number of Perception Windows to 8 and 10 brings no further performance gain (E6, E7), which we hypothesize is because an increased number of Perception Windows makes subspaces $K$ sparser and generates redundant convolution parameters. 
**Adjusting Placement of Windows.** We explore the optimality of the Perception Window setting in RAK by experimenting with different placements of Perception Windows (E8, E9). As mentioned in our main paper and Table B, RAK's Perception Windows $W$ are designed as a sequence of overlapped range intervals $w_i = [r_{i1}, r_{i2}]$, where the length of each range interval $l_i = r_{i2} - r_{i1}$ gradually increases. The overlapped alignment of Perception Windows better handles the cases where objects are separated by margins of Perception Windows. For example, if a vehicle lies on the margin of $w_1 = [0, 15]$ and $w_3 = [15, 30]$, its feature can still be preserved in $w_2 = [10, 20]$. Further, the design of gradually increasing window length $l_i$ accounts for the fact that as the LiDAR beams reach farther, the distribution of LiDAR points becomes sparser; our design balances the point density in each Perception Window, thus enabling smoother feature extraction. To verify the intuitions above, experiments are conducted for non-overlapped windows (E8) and uniformly distributed windows (E9); subtle performance drops are observed in both cases, justifying the effectiveness of our design. **Integrating RAK into Range Backbone.** Placing RAK in the middle of the Range Backbone is also examined: in experiment E10, RAK is placed after the first convolution block of the Range Backbone. The quantitative result indicates that this network design leads to a minor performance degradation, which we attribute to the fact that Spatial Misalignment is more effectively resolved at the earliest stage of the feature extractor. ### Table B: Hyper-parameters of Range Aware Kernel's Ablation Study.
| | Setting | Perception Windows $W$ |
| :---: | :---: | :---: |
| E4 | 2 Perception Windows | $\{[0,40],[30, \infty)\}$ |
| E5 | 4 Perception Windows | $\{[0,30],[20,50],[30,60],[50, \infty)\}$ |
| E6 | 8 Perception Windows | $\{[0,10],[5,15],[10,20],[15,25],[25,35],[30,45],[35,60],[45, \infty)\}$ |
| E7 | 10 Perception Windows | $\{[0,10],[5,15],[10,20],[15,25],[25,35],[30,40],[35,45],[40,50],[45,55],[50, \infty)\}$ |
| E8 | Non-overlapped Windows | $\{[0,10],[10,20],[20,30],[30,45],[45,60],[60, \infty)\}$ |
| E9 | Uniformly Distributed Windows | $\{[0,20],[15,35],[30,50],[45,65],[60,80],[75, \infty)\}$ |
| | RangePerception | $\{[0,15],[10,20],[15,30],[20,40],[30,60],[45, \infty)\}$ |

## Response 1.2 Comparison with Existing Works (Weakness 1)

**RangeDet.** We are aware that the Spatial Misalignment issue is also mentioned in RangeDet, where Meta-Kernel is proposed to mitigate the problem. We hope to bring to your attention that we are the first work to explicitly visualize and explain the Spatial Misalignment issue (main paper L72-76 \& Fig.1(e)). We also implement Meta-Kernel and compare it with RAK in our ablation study A3 (main paper L289-292). The empirical comparison demonstrates a significant 2.85 average L1 AP improvement of the RAK setting over the Meta-Kernel setting, further affirming the effectiveness of RAK in our approach.

**PolarNet.** We appreciate your acknowledgment of the Ring CNN design proposed by PolarNet. We confirm that Ring CNN offers a viable solution to the Vision Corruption issue mentioned in our paper. Furthermore, we agree that our Vision Restoration Module, as proposed in our paper, is also an effective and practical approach to address the Vision Corruption issue. In the upcoming version of our paper, we will incorporate PolarNet into the related work section and provide a concise comparison between our approach and the Ring CNN design.
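The overlapped-window logic that motivates the RangePerception row of Table B can be sketched in a few lines. This is a minimal illustration of the interval lookup only (treating the intervals as closed is our assumption; the actual per-window feature extraction is more involved):

```python
import math

# RangePerception's Perception Windows from Table B, as (lo, hi) in meters
WINDOWS = [(0, 15), (10, 20), (15, 30), (20, 40), (30, 60), (45, math.inf)]

def windows_for_range(r):
    """Indices of all Perception Windows whose interval contains range r.
    Because the windows overlap, a point near one window's margin is still
    fully covered by a neighboring window."""
    return [i for i, (lo, hi) in enumerate(WINDOWS) if lo <= r <= hi]

# a vehicle at r = 15 m lies on the margin of w1 = [0,15] and w3 = [15,30],
# but its feature is still preserved inside w2 = [10,20]
hits_at_margin = windows_for_range(15)
```

Every range value lands in at least one window, and margin cases land in two or more, which is exactly the property the E8 (non-overlapped) ablation removes.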
--- Rebuttal Comment 1.1: Comment: Thanks for the additional experiments, which absolutely make a better paper and resolve most of my concerns. Before my final rating, I have a few more questions. - The performance of E3 is quite similar to RangeDet but not completely consistent. Did you implement E3 yourself or cite it from elsewhere? - From E4 to E5, the performance has a huge leap. I'd like to hear your analysis or opinions. - I notice you are willing to share the code after acceptance. Are you planning to release other baseline methods you implemented (e.g., Meta-Kernel) under the same codebase? If you do, it would be a good contribution to this field. --- Reply to Comment 1.1.1: Comment: We extend our heartfelt appreciation for your insightful questions and valuable suggestions, as they will significantly enrich the quality of our manuscript. We believe that the upcoming discussion will serve to address your questions more comprehensively. ## 1. Experiment E3 Extensive ablation experiment E3 is implemented, trained, and evaluated within our codebase. To maintain consistency and eliminate potential confounding factors such as differences in deep learning frameworks and hyper-parameters, we do not report the RangeDet paper's performance for experiment E3. In fact, all ablation experiments in Table A are implemented, trained, and evaluated within our codebase, ensuring a comprehensive and rigorous comparative analysis. ## 2. Experiments E4 \& E5 We analyze the significant performance gap between experiments E4 and E5 from the following two aspects. **Remedy for Spatial Misalignment.** Our analysis indicates that the limited number of Perception Windows adopted in E4 is insufficient to effectively address Spatial Misalignment. Referring to the hyper-parameter settings detailed in Table B, it becomes evident that E4 is unable to rectify any Spatial Misalignment issues that lie within the range interval of $[30, \infty)$.
Conversely, E5, by further subdividing this interval into two distinct subspaces, namely $\{[30, 60], [50, \infty)\}$, exhibits the capability to handle a substantial portion of Spatial Misalignments occurring within the $[30, \infty)$ range interval. In essence, the finer placement of Perception Windows in E5 empowers it to effectively resolve a larger portion of Spatial Misalignment issues within the range image. This finer granularity contributes significantly to the superior performance of E5 compared to E4. **Number of Convolution Kernels.** As depicted in our main paper's Figure 2, the transformed subspaces $K' \in \mathbb{R} ^ {{m} \times {n} \times {8l}}$ undergo non-linear feature extraction through the Range Backbone, where $l$ represents the number of Perception Windows in RAK. We denote the initial 2D Convolution Layer in the Range Backbone as $\mathcal{C}$, which produces an output feature map with a dimension of $d$, denoted as $H \in \mathbb{R} ^ {{m} \times {n} \times {d}}$. Notably, the 2D Convolution Layer $\mathcal{C}$ encompasses a total of $8l \times d$ fully connected kernels. Our ablation experiments, as detailed in Table A, utilize $l = 2$ for E4 and $l = 4$ for E5. Consequently, it is evident that the 2D Convolution Layer $\mathcal{C}$ in E5 employs double the number of kernels compared to E4, thereby yielding enhanced representation power in feature extraction. This increase in learnable kernels contributes to the notably stronger performance observed in E5. To summarize, the divergence in performance observed between experiments E4 and E5 can be attributed to E5's superior handling of Spatial Misalignment and its increased representation power through a larger count of convolution kernels. ## 3. Open-sourced Codebase We confirm that the codebase of RangePerception will be open-sourced after the acceptance of this paper. 
Our open-sourced codebase will also include our PyTorch implementation of RangeDet, encompassing both Meta-Kernel Convolution and Range Conditioned Pyramid. Meanwhile, our PyTorch implementation of single-frame FCOS-LiDAR will be made public.
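The kernel-counting argument in point 2 above can be made concrete with a quick sketch. The output width `d` below is a hypothetical placeholder, not a value taken from the paper; the point is only that doubling the number of Perception Windows `l` doubles the $8l \times d$ kernel count of the first convolution layer:

```python
def num_kernels(l, d):
    """Number of fully connected kernels in the first 2D conv layer C of the
    Range Backbone: 8*l input channels (l Perception Windows, 8 features
    each) times d output channels."""
    return 8 * l * d

d = 64                    # hypothetical output dimension of layer C
e4 = num_kernels(2, d)    # E4: 2 Perception Windows
e5 = num_kernels(4, d)    # E5: 4 Perception Windows -> twice the kernels
```

This matches the rebuttal's claim that layer $\mathcal{C}$ in E5 employs double the number of kernels of E4, contributing extra representation power on top of the finer window placement.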
Summary: The paper presents an efficient and accurate RV-based 3D object detection framework called *RangePerception*. The framework addresses two critical challenges impeding the performance of existing RV-based methods: 1) a natural domain gap between the 3D world coordinate used in output and 2D range image coordinate used in input, generating difficulty in information extraction from range images; and 2) native range images suffer from vision corruption issue, affecting the detection accuracy of objects located on the margins of the range images. To address these challenges, two novel algorithms named Range Aware Kernel (RAK) and Vision Restoration Module (VRM) are proposed. With the help of RAK and VRM, RangePerception achieves higher averaged L1/L2 AP compared to previous state-of-the-art RV-based method RangeDet on Waymo Open Dataset. For the first time as an RV-based 3D detection method, RangePerception achieves slightly superior averaged AP compared with the well-known BEV-based method CenterPoint and has an inference speed that is 1.3 times as fast as CenterPoint. The paper’s contributions include the RangePerception Framework, Range Aware Kernel and Vision Restoration Module. Strengths: **Originality**: The paper introduces two modules, Range Aware Kernel (RAK) and Vision Restoration Module (VRM), to address the challenges of existing RV-based methods. The proposed RangePerception framework is original in its approach to efficiently and accurately detect 3D objects using LiDAR range view. **Quality**: The paper presents a thorough analysis of the challenges impeding the performance of existing RV-based methods and proposes solutions to address these challenges. The paper provides ablation studies to demonstrate the effectiveness of each component. Supplementary Material is well written. Weaknesses: **Major Issues**: 1. In table 1, it seems that many related works [1-2] are missing. 
They also focus on Efficient and Accurate 3D Object Detection, and the authors should also compare with them. 2. Experiments are insufficient. It's interesting to see the performance on other datasets (e.g., KITTI and nuScenes). The nuScenes dataset uses a 32-beam LiDAR (KITTI and WOD are both 64-beam). Does the proposed RV-based method still perform well for extremely low-resolution range images (32- or even 16-beam LiDAR)? **Minor Issues**: 1. Inconsistent definition and dimension of *range image* in Algorithm 1 L.1 (range image $I \in R^{m \times n \times 8}$) and in Paper L.119-L.121 ($m \times n$ matrix). 2. Please add citations in Tab.1 first column (*Method* column). 3. Although the illustration of Redundancy Pruning in Fig. 5(c) is straightforward, it would be better to also include the real range image visualization for the Redundancy Pruner in Fig. 5 (a) or (b). 4. It would be easier to follow if the **Related Work** section were introduced ahead of the **Methodology**. [1] Liu, Zhijian, Haotian Tang, Alexander Amini, Xinyu Yang, Huizi Mao, Daniela Rus, and Song Han. "BEVFusion: Multi-Task Multi-Sensor Fusion with Unified Bird's-Eye View Representation." In ICRA, 2023. [2] Sun, Pei, Weiyue Wang, Yuning Chai, Gamaleldin Elsayed, Alex Bewley, Xiao Zhang, Cristian Sminchisescu, and Dragomir Anguelov. "RSN: Range Sparse Net for Efficient, Accurate LiDAR 3D Object Detection." In CVPR, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Range view detection could be inevitably affected by **occlusion** and **scale variation** due to the spherical projection. Does the proposed method consider this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately included in the Supplementary Material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your recognition of our work. We hope the following discussion adequately addresses your concerns. ## Response 2.1 Occlusion and Scale Variation Phenomena (Question 1) As illustrated in our main paper L202-204 \& Fig.2, RangePerception learns pixel-wise dense predictions, namely classification $C$, regression $B$, and confidence $U$. It is straightforward to observe from our main paper Fig.2 \& Fig.6(b, c) that the class prediction head of RangePerception accomplishes the same task as foreground segmentation. This segmentation-based learning principle adequately addresses the issues of occlusion and scale variation: even though some objects have limited foreground points due to far distance or strong occlusion, their geometric attributes can still be learned and predicted via the segmentation-based paradigm. ## Response 2.2 Performance on nuScenes Dataset (Major Weakness 2) The results on the nuScenes dataset are presented in this section. **Baseline Methods.** As shown in Table C, state-of-the-art BEV-based, MV-based, and RV-based detectors are selected as baseline methods. All BEV-based and MV-based baselines are trained and evaluated with OpenPCDet's official PyTorch implementation. We reimplement and examine RangeDet as described in our main paper L269-270. Since FCOS-LiDAR reports results on nuScenes without releasing the code, we reimplement FCOS-LiDAR in our codebase and achieve single-frame results similar to those reported in their paper. **Experiment Settings.** Training and evaluation are conducted with the single-frame setting on the nuScenes dataset. All models are trained for $20$ epochs on the nuScenes training set, where batch size is $32$ and frame sampling rate is $100\%$. Inference speed is examined with one NVIDIA 3090Ti GPU with batch size set to $1$. **Main Results.** The quantitative performance of RangePerception is surprisingly excellent, surpassing all baseline methods in terms of NDS and average AP.
Particularly, our method exhibits outstanding performance in detecting small objects, such as pedestrians, motors, bicycles, and traffic cones. This observation reaffirms the analysis presented in our main paper L277-280 \& L300-302, where the range-view representation better preserves visual features of small objects, while voxelization introduces quantization errors to the originally sparse foreground points. Additionally, RangePerception demonstrates superior performance in detecting construction vehicles, which we attribute to our segmentation-based learning paradigm being more sensitive to the distinctive shape of construction vehicles. The remarkable overall performance and balanced perception ability across all object categories further validate that the RangePerception framework effectively handles scale variation.

### Table C: Detection performance measured by NDS and AP on nuScenes validation set. All experiments are conducted under single-frame setting. C.V. stands for construction vehicle, and T.C. stands for traffic cones.

| Method | View | Stage | NDS | AP | Car | Truck | Bus | Trailer | C.V. | Ped | Motor | Bicycle | T.C. | Barrier | FPS |
|--------------------|------|-------|-------|-------|-------|-------|-------|---------|------|-------|-------|---------|-------|---------|-------|
| Second | B | one | 50.48 | 50.03 | 75.24 | 46.04 | 54.17 | 47.99 | 16.73 | 69.50 | 42.07 | 19.06 | 67.33 | 62.12 | 16.72 |
| PointPillar | B | one | 50.02 | 49.27 | 74.70 | 45.46 | 53.23 | 45.99 | 16.46 | 68.71 | 40.96 | 18.73 | 66.51 | 61.96 | 23.89 |
| CenterPoint | B | one | 54.95 | 53.92 | 78.19 | 47.53 | 55.92 | 49.54 | 16.96 | 77.09 | 49.99 | 22.62 | 70.98 | **65.69** | 19.75 |
| PV-RCNN | B+P | two | 54.23 | 53.12 | 80.75 | 49.09 | 57.65 | 51.26 | 17.51 | 74.84 | 47.27 | 21.54 | 66.78 | 64.47 | 1.85 |
| Part-$A^2$-anchor | B+P | two | 54.67 | 54.10 | **81.10** | **49.30** | **57.91** | **51.45** | 21.58 | 76.16 | 48.13 | 21.82 | 67.96 | 65.61 | 4.69 |
| FCOS-LiDAR | R | one | 53.41 | 53.43 | 72.95 | 42.33 | 46.95 | 43.31 | 25.56 | 74.99 | 60.35 | 34.61 | 70.29 | 62.74 | 25.23 |
| RangeDet | R | one | 54.39 | 54.17 | 73.41 | 42.83 | 48.15 | 44.33 | 26.37 | 75.46 | 60.58 | 34.58 | 71.52 | 64.50 | 19.87 |
| RangePerception | R | one | **55.65** | **55.31** | 75.68 | 43.91 | 48.71 | 44.93 | **26.51** | **78.80** | **60.61** | **35.90** | **72.93** | 65.10 | **26.17** |

## Response 2.3 Comparison with BEVFusion and RSN (Major Weakness 1)

We will cite and compare with these two methods in the next version of our paper. Meanwhile, we hope to point out that the problem settings of BEVFusion and RangePerception are naturally different: (1) BEVFusion is a multi-modal joint-LiDAR-camera detection method; (2) RangePerception, on the other hand, is a single-modal detection method based on LiDAR signal.

## Response 2.4 Range Image Definition (Minor Weakness 1)

The definition of range image $I \in \mathbb{R} ^ {{m} \times {n} \times {8}}$ and range matrix $R \in \mathbb{R} ^ {{m} \times {n}}$ will be specifically clarified in the next version of this paper.
## Response 2.5 Visualization for Redundancy Pruner (Minor Weakness 3) We would like to clarify that Redundancy Pruner operates on feature space $F^r$ (described in main paper L191) instead of range image $I$, which is the reason why we choose to visualize redundancy pruner with pseudo feature in main paper Fig.5(c). That being said, we can include a new figure which visualizes how Redundancy Pruner operates on the real feature space, if you believe that would be helpful. ## Response 2.6 Placement of Related Work (Minor Weakness 2 & 4) Citations will be added to Table 1 in the next version of this paper. Meanwhile, Related Work section will be placed ahead of the Methodology section. --- Rebuttal Comment 1.1: Title: Decision to Elevate My Rating Comment: Thank you for the additional experiments and rebuttals. The rebuttal has addressed most of my questions and concerns, especially regarding performance on low-resolution datasets (nuScenes) and Occlusion & Scale Variation Phenomena. Therefore, I am inclined to accept this paper and raise my rating. --- Reply to Comment 1.1.1: Comment: We deeply value your meticulous review and are pleased that our responses have effectively addressed your questions and concerns. Your willingness to raise your rating and accept the paper is truly appreciated. We extend our profound appreciation for your insightful questions and invaluable suggestions, as they undoubtedly contribute to the elevation of our manuscript's scholarly caliber.
Summary: This paper proposes RangePerception, an RV-based 3D detection framework that aims to address the issues of Spatial Misalignment and Vision Corruption in range-view representation. The Range Aware Kernel (RAK) disentangles the range image space into multiple sub-spaces, and overcomes the Spatial Misalignment issue by enabling independent feature extraction from each subspace. The Vision Restoration Module (VRM) builds an extended spherical space by pre-defining a restoration angle to restore visual features originally corrupted by the LiDAR’s sampling process on both sides of range image. Strengths: 1. This paper presents a novel high-performing RV-based 3D object detection framework. 2. The specially designed RAK and VRM modules effectively solve the Spatial Misalignment and Vision Corruption problems in range-view representation. 3. The paper is well-organized and clearly written especially in the introduction part. Weaknesses: 1. The paper only provides the results on the Waymo validation set, lacking the results on the Waymo test set. 2. Lack of evaluation based on different distances like RangeDet. 3. No experiments are conducted on other mainstream datasets such as nuScenes and KITTI. 4. Lack of ablation studies on restoration angle hyper-parameters. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. For RV-based 3D detection, the scale-variation and occlusion issues are also two major challenges. Is the proposed method helpful to them? 2. For Range Aware Kernel, window partitioning inevitably leads to the problem of separating one object into different sub-spaces. Can you carefully analyze this issue? 3. After disentangling into multiple sub-spaces, the range image will become extremely sparse. Why does the feature extraction network still use 2D dense convolution instead of 2D sparse convolution? Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss the limitations and there are no potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our work. We aim to address your concerns adequately in the following discussion. ## Response 3.1 Occlusion and Scale Variation Phenomena (Question 1) This topic is carefully discussed in Response 2.1 to reviewer PXtx. ## Response 3.2 Performance on nuScenes Dataset (Weakness 3) We include performance on nuScenes dataset in Response 2.2 to reviewer PXtx. ## Response 3.3 Separation of Objects (Question 2) As is introduced in our main paper L259, RAK's Perception Windows $W$ are designed as a sequence of overlapped range intervals $w_i = [r_{i1}, r_{i2}]$. The overlapped alignment of Perception Windows better handles the cases where objects are separated by margins of Perception Windows. For example, if a vehicle lies on the margin of $w_1 = [0, 15]$ and $w_3 = [15, 30]$, its feature can still be preserved in $w_2 = [10, 20]$. The benefit brought by overlapped window alignment is further validated by our extensive ablation study E8, presented in Response 1.1 to Reviewer GJQ1. ## Response 3.4 Dense 2D Convolution (Question 3) As illustrated in our main paper Fig. 2, transformed subspaces $K' \in \mathbb{R} ^ {{m} \times {n} \times {8l}}$ are processed by Range Backbone for non-linear feature extraction. We denote the first 2D Convolution Layer in Range Backbone as $\mathcal{C}$, whose output dimension is $d$. Subsequently, the output feature from $\mathcal{C}$ can be represented as $H \in \mathbb{R} ^ {{m} \times {n} \times {d}}$. Note that 2D Convolution Layer $\mathcal{C}$ consists of $8l \times d$ number of fully connected kernels. Due to the fully connected nature of $\mathcal{C}$, feature $H = \mathcal{C}(K')$ is a dense tensor, despite the sparsity of $K'$. As a result, the input and output features of each consecutive layer in Range Backbone are also dense tensors. It is therefore unfeasible to build Range Backbone with sparse 2D Convolutions. 
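The channel-mixing argument in Response 3.4 can be checked with a toy sketch: a 1×1 (per-pixel fully connected) convolution combines all $8l$ input channels at every pixel, so the output is dense even when each pixel's input is channel-sparse. All shapes below are toy values chosen for illustration, not the paper's actual dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, l, d = 4, 6, 6, 16
K = np.zeros((m, n, 8 * l))
# each pixel carries features in only ONE window's 8 channels (channel-sparse),
# mimicking the disentangled subspaces K'
for i in range(m):
    for j in range(n):
        w = rng.integers(l)
        K[i, j, 8 * w:8 * (w + 1)] = rng.normal(size=8)

W = rng.normal(size=(8 * l, d))   # 1x1 conv = per-pixel fully connected layer
H = K @ W                         # mixes all 8l channels at every pixel

channel_sparsity = (K == 0).mean()  # most input entries are zero
output_is_dense = bool((H != 0).all())  # no structural zeros remain
```

Because every output channel is a weighted combination of all $8l$ input channels, the sparsity pattern does not survive the first layer, which is why a sparse 2D convolution backbone would buy nothing here.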
## Response 3.5 Choice of Restoration Angle (Weakness 4)

We randomly sample 200 frames of range images from Waymo Open Dataset. Through visualization, we find that restoration angle $\delta = 0.086\pi$ is sufficient to resolve the Vision Corruption issues that occur in the sampled frames, without introducing unnecessary redundancy. An extensive ablation study for the Vision Restoration Module is also conducted. As presented in Table D, ablations are conducted by disabling VRM and RP (E11), reducing the restoration angle to $0.043\pi$ (E12), and increasing the restoration angle to $0.172\pi$ (E13). The comparison demonstrates that all three sizes of restoration angle $\delta$ boost the detection accuracy, while the selected hyper-parameter $\delta = 0.086\pi$ yields the best numerical results.

### Table D: Ablation study of Vision Restoration Module, measured by 3D AP/APH on WOD validation set.

| | Setting | Vehicle | Vehicle | Pedestrian | Pedestrian | Cyclist | Cyclist | Average | Average |
|:--------:|:-----------------------:|:-------------:|:---------:|:--------------:|:---------:|:--------------:|:---------:|:--------------:|:---------:|
| | | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 |
| E11 | without VRM & RP | 72.50/71.97 | 66.40/65.91 | 80.20/76.08 | 72.27/68.53 | 70.31/68.92 | 68.74/67.42 | 74.34/72.32 | 69.13/67.29 |
| E12 | $\delta = 0.043\pi$ | 73.46/73.03 | 66.45/65.93 | 80.22/76.10 | 72.28/68.53 | **70.33/68.93** | 68.74/67.43 | 74.67/72.69 | 69.15/67.30 |
| E13 | $\delta = 0.172\pi$ | 73.60/73.09 | 66.46/65.99 | **80.24/76.12** | 72.29/68.53 | 70.32/68.92 | **68.75/67.43** | 74.72/72.71 | 69.16/67.31 |
| | RangePerception | **73.62/73.11** | **66.47/66.00** | **80.24/76.12** | **72.29/68.54** | **70.33/68.93** | **68.75/67.43** | **74.73/72.72** | **69.17/67.32** |

## Response 3.6 Evaluation based on Different Distances (Weakness 2)

As shown in Table E, we conduct an extensive evaluation on the WOD validation set, measuring the detection performance in terms of L1
3D AP for different distance ranges. For comparison, we use RangeDet as the baseline method. RangePerception demonstrates superior performance compared to RangeDet across different distances for all classes. The results validate the effectiveness of RangePerception in handling diverse scenarios and addressing scale variation issues. ### Table E: Detection performance measured by L1 3D AP on WOD validation set, based on different distances. | Class | Method | Overall | 0 - 30m | 30 - 50m | 50m - $\infty$ | |-------------|------------------|:---------:|:---------:|:---------:|:---------:| | Vehicle | RangeDet | 72.85 | 87.96 | 69.03 | 48.88 | | Vehicle | RangePerception | **73.62** | **88.79** | **69.77** | **49.45** | | Pedestrian | RangeDet | 75.94 | 82.20 | 75.39 | 65.74 | | Pedestrian | RangePerception | **80.24** | **86.87** | **79.64** | **69.36** | | Cyclist | RangeDet | 65.67 | 79.33 | 55.80 | 45.00 | | Cyclist | RangePerception | **70.33** | **82.95** | **59.35** | **50.19** | ## Response 3.7 Performance on Waymo Test Set (Weakness 1) Quantitative and qualitative results on Waymo test set will be presented and discussed in the next version of this paper. --- Rebuttal Comment 1.1: Title: Good feedback Comment: The authors well address my concerns in the feedback and I thus decide to upgrade my score to Weak Accept. --- Reply to Comment 1.1.1: Comment: We are grateful to you for recognizing our efforts in addressing your concerns during the response process. Your feedback has been instrumental in enhancing the quality of our work, and we look forward to continuing to meet your expectations in the final version of our paper.
Summary: This paper proposes a new range-view-based 3D object detection algorithm, which achieves on-par performance compared to sota BEV/voxel-based algorithms. Strengths: 1. The paper starts from first principles, analyzes the possible information losses during the lidar data collection/analysis/transformation process, and comes up with corresponding strategies to enhance/improve range view perception performance. 2. The paper comes up with effective strategies for training range view detection models, achieves on-par performance with sota voxel-based methods, and runs much faster. Weaknesses: 1. The core idea of RAK is to divide the computation window based on the range. This is hardly entirely new/novel for range-based methods; range-conditioned convolution/detection has been widely studied before. The reviewer implemented very similar strategies but did not find experimental improvements in the range view detection task. 2. The VRM module does not demonstrate experimental benefits. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Modern lidar sensors usually carry multiple return signals with each laser pulse. For example, the WOD dataset has multiple returns, each return forming a unique range image. It is unclear how this framework takes multi-return into consideration. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No negative societal impact is noticed to the best of reviewer's knowledge. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your positive acknowledgment of our work. We hope that the subsequent discussion effectively addresses the concerns you have raised. ## Response 4.1 Incorporation of Multi-return Range Images (Question 1) The incorporation of multi-return LiDAR data is extensively investigated and evaluated during the development of RangePerception. Actually, the LiDAR data in Waymo Open Dataset (WOD) consists of only two returns, namely first return and second return. First-return and second-return range images of several sampled frames in WOD are visualized in Fig.1 of our global response PDF file. Through this set of visualization, it is evident that the first return of WOD LiDAR is significantly more compact and informative; in contrast, the second return is highly sparse and contains little information. To reaffirm this conclusion, we conduct a random sampling of 1000 foreground objects for each category in WOD. We then calculate the average number of first-return and second-return foreground points within these objects. The results, presented in Table F, clearly demonstrate that the second return is significantly sparser compared to the first return. Specifically, for vehicle category, the second-return points account for only 2.17\% of the total points. This observation reinforces the conclusion that the first return contains more informative data, while the second return is sparse and provides limited information. ### Table F: Statistics of sampled objects in WOD, in terms of first-return and second-return foreground points. The average numbers of first-return and second-return foreground points in each object are shown in the first and second data column. Additionally, the ratio of second-return points over all foreground points is shown in the third data column. | Class | Avg. 1st Points | Avg. 
2nd Points | 2nd Points Ratio | |:-----------:|:----------------:|:-----------------:|:------------------:| | Vehicle | 613.97 | 13.68 | 2.17\% | | Pedestrian | 95.46 | 5.60 | 5.54\% | | Cyclist | 155.79 | 3.93 | 2.46\% | Indeed, the sparsity and limited information of the second return in the LiDAR data lead previous range-view-based detectors, such as RangeDet, to exclusively utilize the first return both as input data and for target assignment. To make a fair comparison with RangeDet, RangePerception is also implemented solely based on first-return range images. It is worth pointing out that in Table 1 of our main paper, the results of RangeDet and RangePerception are achieved solely based on the first-return LiDAR input, while other baseline methods are evaluated based on both returns of LiDAR input. Notably, RangePerception achieves state-of-the-art performance without the help of second-return data. Despite the facts above, explorations are also made to exploit the information contained in the second-return range image. We adopt a straightforward approach that concatenates the second-return range image $S \in \mathbb{R} ^ {{m} \times {n} \times {8}}$ with the transformed subspaces $K' \in \mathbb{R} ^ {{m} \times {n} \times {8l}}$, generating an augmented subspace tensor $A \in \mathbb{R} ^ {{m} \times {n} \times {8(l+1)}}$. The augmented subspace tensor $A$ is then input to the Range Backbone for feature extraction. As shown in Table G, the performance gain introduced by this operation is relatively marginal, with the improvement to average L1 AP being only 0.08. Since the second-return data is initially sparse and hardly suffers from Spatial Misalignment, we do not attempt to process second-return data with Range Aware Kernel. ### Table G: Comparison study of two-return range image input, measured by 3D AP/APH on the WOD validation set. Experiments are conducted under the single-frame setting.
| Method | View | Stage | Vehicle | Vehicle | Pedestrian | Pedestrian | Cyclist | Cyclist | Average | Average | FPS | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | | | | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | | | First-return Input | R | one | 73.62/73.11 | 66.47/66.00 | 80.24/76.12 | 72.29/68.54 | 70.33/68.93 | 68.75/67.43 | 74.73/72.72 | 69.17/67.32 | **45.85** | | Two-return Input | R | one | **73.71/73.20** | **66.55/66.08** | **80.27/76.15** | **72.32/68.57** | **70.45/69.05** | **68.78/67.45** | **74.81/72.80** | **69.27/67.37** | 44.73 | We will include the analysis and experiment results above in the supplementary material of our paper. Since the incorporation of second-return range image is not the focus of this study, we leave methodology exploration for the utilization of second return to future work. ## Response 4.2 Range Aware Kernel (Weakness 1) We address your concerns regarding Range Aware Kernel in our global response 5.2. ## Response 4.3 Vision Restoration Module (Weakness 2) The experimental advantages resulting from the incorporation of the Vision Restoration Module (VRM) are elucidated through ablation study A6, as discussed in our main paper at L296-299. In accordance with the findings presented in A6, the integration of VRM leads to an enhancement in L1 AP/APH for vehicle detection, with improvements of 1.12/1.14. Notably, the observed enhancements in the cases of pedestrian and cyclist detection are comparatively limited. We attribute this observation to the fact that pedestrians and cyclists, being relatively small objects, are less susceptible to Vision Corruption.
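The channel-wise concatenation used for the two-return input in Response 4.1 above can be sketched in a few lines of NumPy; the sizes $m$, $n$, $l$ below are illustrative placeholders, not the actual dimensions used in RangePerception:

```python
import numpy as np

# Illustrative sizes only: a small "range image" with l = 3 transformed subspaces.
m, n, l = 4, 6, 3

S = np.zeros((m, n, 8))            # second-return range image, 8 channels
K_prime = np.zeros((m, n, 8 * l))  # transformed subspaces from the first return

# Augmented subspace tensor A: concatenate along the channel axis.
A = np.concatenate([K_prime, S], axis=-1)
assert A.shape == (m, n, 8 * (l + 1))
```

The augmented tensor then simply replaces $K'$ as the input to the backbone; no other change to the pipeline is implied.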
Rebuttal 1: Rebuttal: Thank you for acknowledging our work positively. We are pleased to provide the supplemental responses for part of the individual responses. ## Response 5.1 Supplemental Response for Reviewer GJQ1 ### Table A: Ablation study of Range Aware Kernel, measured by 3D AP/APH on WOD validation set. | | Setting | Vehicle | Vehicle | Pedestrian | Pedestrian | Cyclist | Cyclist | Average | Average | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | | | L1 | L2 | L1 | L2 | L1 | L2 | L1 | L2 | | E1 | without RAK | 70.46/70.01 | 63.45/62.89 | 72.86/68.88 | 64.56/60.72 | 63.63/62.43 | 61.44/60.25 | 68.98/67.11 | 63.18/61.29 | | E2 | 2D Convolution | 70.48/70.03 | 63.46/62.90 | 72.86/68.87 | 64.56/60.71 | 63.64/62.43 | 61.45/60.26 | 68.99/67.11 | 63.16/61.25 | | E3 | Meta-Kernel | 72.95/72.43 | 65.98/65.37 | 75.95/71.96 | 67.63/63.90 | 66.76/65.46 | 64.57/63.38 | 71.88/69.95 | 66.06/64.21 | | E4 | 2 Perception Windows | 71.34/70.85 | 64.31/63.75 | 73.63/69.65 | 65.37/61.48 | 65.17/64.09 | 63.10/61.95 | 70.04/68.20 | 64.26/62.39 | | E5 | 4 Perception Windows | 73.13/72.62 | 65.97/65.51 | 80.15/76.03 | 72.17/68.42 | 70.15/68.85 | 68.42/67.10 | 74.48/72.50 | 68.85/67.01 | | E6 | 8 Perception Windows | 73.59/73.08 | 66.45/65.98 | 80.25/76.13 | 72.30/68.56 | 70.31/68.92 | 68.74/67.42 | 74.72/72.71 | 69.16/67.32 | | E7 | 10 Perception Windows | 73.58/73.07 | 66.43/65.97 | **80.26/76.14** | **72.32/68.57** | 70.29/68.89 | 68.72/67.41 | 74.71/72.70 | 69.16/67.32 | | E8 | Non-overlapped Windows | 72.14/71.53 | 66.07/65.48 | 80.03/75.97 | 72.09/68.37 | 67.12/65.88 | 65.34/64.15 | 73.09/71.13 | 67.83/66.00 | | E9 | Uniformly Distributed Windows | 73.12/72.86 | 66.12/65.52 | 79.88/75.79 | 71.91/68.13 | 69.45/68.04 | 67.82/66.51 | 74.15/72.23 | 68.62/66.72 | | E10 | RAK After First Conv Block | 73.61/73.10 | 66.45/65.98 | 79.84/75.78 | 71.88/68.10 | 69.41/68.01 | 67.79/66.48 | 74.29/72.30 | 68.71/66.85 | | | RangePerception | 
**73.62/73.11** | **66.47/66.00** | 80.24/76.12 | 72.29/68.54 | **70.33/68.93** | **68.75/67.43** | **74.73/72.72** | **69.17/67.32** | ## Response 5.2 Supplemental Response for Reviewer ZtTP We are aware that previous works such as Cylinder3D and PolarNet divide computation windows based on range. However, the computation windows of previous works are designed as a set of non-overlapped intervals. Unlike prior approaches, Range Aware Kernel (RAK) incorporates a distinct innovation by adopting a set of overlapped intervals as part of its window structure design. This strategic choice ensures that objects situated at the margins of Perception Windows are not subject to information loss, thereby enhancing the fidelity of feature extraction. We believe this nuanced design, illustrated in Fig.3(a) \& L258-260 of our main paper, fundamentally differentiates RAK from previous methods. Furthermore, it's noteworthy that Cylinder3D and PolarNet are primarily developed for segmentation tasks operating on point views. RAK, on the other hand, is purposefully tailored for the range-view-based object detection task. This tailored focus underscores RAK's pioneering role in addressing Spatial Misalignment challenges within this specific context. Moreover, as illustrated in Response 1.1 to Reviewer GJQ1, thorough window structure design and hyper-parameter tuning are the key enablers of RAK's optimal functionality. Both factors significantly impact RAK's performance, as demonstrated by our extensive ablation study in main paper L286-296 \& Response 1.1. We recognize the necessity to address the concerns regarding novelty and experimental improvements thoroughly. We will carefully compare our method with Cylinder3D and PolarNet in the next version of our paper.
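To make the overlapped-interval design concrete, here is a minimal sketch; the window boundaries below are hypothetical examples for illustration, not the tuned Perception Window intervals from the paper:

```python
# Hypothetical overlapped range intervals (in meters) for perception windows.
overlapped = [(0.0, 20.0), (15.0, 40.0), (35.0, 75.0)]
# A non-overlapped counterpart with hard boundaries, for comparison.
disjoint = [(0.0, 17.5), (17.5, 37.5), (37.5, 75.0)]

def windows_for(r, intervals):
    """Indices of all windows whose range interval contains range r."""
    return [i for i, (lo, hi) in enumerate(intervals) if lo <= r < hi]

# A point at range 16 m sits near a window margin: with overlapped windows it
# is visible to two windows, so neither loses the context around it.
assert windows_for(16.0, overlapped) == [0, 1]
assert windows_for(16.0, disjoint) == [0]
```

With overlapped windows, an object near a margin contributes features to both neighboring windows instead of being truncated at a hard boundary, which is the information-loss failure mode described above.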
Our supplementary material will include an in-depth analysis of the hyper-parameter tuning process, substantiated by comprehensive experiments that showcase the improvements attainable through RAK when the optimal configuration is achieved. The discussions above will facilitate a more nuanced understanding of RAK's potential and its applicability within the broader context of range-view-based methods. In conclusion, we are committed to refining our manuscript to more accurately represent the significance and effectiveness of the Range Aware Kernel, and we are grateful for your guidance in this endeavor. Pdf: /pdf/b445a1457baacf4c4f85d5386c1953c1e08441b6.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Hypervolume Maximization: A Geometric View of Pareto Set Learning
Accept (poster)
Summary: This manuscript introduces a method for multi-objective optimization and mainly contributes to modeling the Pareto set. Compared to previous methods, which usually focus on limited and discrete Pareto solutions, the method in this manuscript is capable of modeling the full Pareto set in a continuous way. They also propose a connection between the complete Pareto set and the related hypervolume, enabling a convergence analysis of the hypervolume as a novel metric for Pareto set learning. Also, this method bridges Pareto solutions and their representation in a polar coordinate system. The authors also evaluate the proposed method extensively on a set of tasks. Strengths: 1. The presentation is clear. The visualization is intuitive and aids understanding. 2. The experiment results look convincing. I do expect some deeper tasks, though, besides these simple tasks. Weaknesses: 1. The text on the figures is too small to read. Maybe consider adding a description in the caption. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I wonder how this method generalizes in a more general multi-task learning context - for example, the accuracy of the network and the total norm of it (performance vs regularization). 2. I wonder how this method compares to Pareto exploration methods like [1], which also explores the Pareto set in a continuous way. [1] Ma, Pingchuan, Tao Du, and Wojciech Matusik. "Efficient Continuous Pareto Exploration in Multi-Task Learning." International Conference on Machine Learning. PMLR, 2020. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. We hope that our following response can address your concerns. **W1. The text on the figures is too small to read. Maybe consider adding a description in the caption.** Thank you for your guidance. In the revised edition, we have implemented a larger font size ($\geq 16$) for all labels and tick marks in all figures. **Q1. Multi-Task Learning: I wonder how this method generalizes in a more general multi-task learning context - for example, the accuracy of the network and the total norm of it (performance vs regularization).** We recognize the importance of applying our proposed framework to **large-scale multi-task learning** situations. To achieve this, one potential approach is to utilize task-specific neural networks to focus on learning the trade-off objective. Concurrently, a backbone network can be employed to learn shared parameters that benefit all tasks. This effective technique was originally introduced in prior literature [1]. Moreover, it is worth noting that even for complex problems like multi-objective neural combinatorial optimization [2] and multi-objective drug design using GFlowNet [3], the number of task-specific parameters can be managed at a lower level. Future work is to learn the **optimal network structure** that effectively balances the size of the network (as referred to in the context of **regularization**) while simultaneously maximizing accuracy. **Q2. Comparison to Pareto Exploration Method: I wonder how this method compares to Pareto exploration methods like Ma et al. [4], which also explores the Pareto set in a continuous way.** We appreciate your advice, and we intend to incorporate a discussion on Ma et al.'s method [4] in our revised paper. Their approach employs the Krylov subspace method for local exploration using first-order approximation, with the objective of obtaining approximate Pareto solutions in close proximity to a discovered Pareto solution.
The discovered Pareto solutions are pre-computed using the methodology introduced by Lin et al. in PMTL [5]. Due to the local approximation nature of their method for generating continuous solutions, the accuracy of approximating distant solutions diminishes when they are far from a discovered solution. Consequently, as depicted in Figures (2), (10), and (12) of Ma et al.'s work [4], the ability of their approach to approximate Pareto solutions decreases when the distance between the Pareto solution and the discovered solution increases. The proposed method in our work directly learns the mapping from the preference space, which corresponds to the ($m$-1)-dimensional unit sphere (where $m$ represents the number of objectives), to the complete set of Pareto solutions. As a result, the quality of the learned solution remains unaffected by the size of the neighborhood, unlike in Ma et al.'s method [4]. This characteristic ensures that our approach achieves superior overall quality compared to their approach. **Reference** [1]. Multi-Task Learning as Multi-Objective Optimization. NeurIPS 2018. [2]. Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization. ICLR 2022. [3]. Multi-Objective GFlowNets. ICML 2023. [4]. Efficient Continuous Pareto Exploration in Multi-Task Learning. ICML 2020. [5]. Pareto Multi-Task Learning. NeurIPS 2019. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the effort of answering my questions. I personally found the authors' responses very convincing (explaining the high-and-low-level multi-objective problems). Also, I agree with the authors that [Ma et al. 2020] can only capture the local approximation by the nature of the algorithm they used. I held a positive opinion previously and I still do now. I will keep my score.
Summary: This paper presents a novel perspective of hypervolume maximization in Pareto learning. The proposed method is largely interpretable based on this perspective, and could outperform many peer competitors considering efficiency and solutions. Strengths: 1. The hypervolume analysis from the polar coordinate perspective is elaborate and reasonable. 2. The proposed method achieved a generally better performance than the compared evolutionary and Pareto learning algorithms. Weaknesses: 1. The innovations are not emphasized. For example, if the benefit of the proposed method based on geometric analysis includes improved efficiency compared with other Pareto learning methods, it is suggested to be emphasized. 2. The details of the proposed method are inadequate. As the relationship between theoretical reasoning and methodology is oblique, the method is easily overlooked. 3. Comparisons and discussions may have missed some important Pareto learning techniques like hypernetworks. Contrasts or superiority of the proposed method are deficient. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Does the proposed method introduce any type of preferences between objectives? 2. Does this method incorporate multiple neural models for joint learning? 3. What is the crucial determinant of the improved efficiency regarding this method? 4. How does scalability to large-scale problems manifest in this work, particularly compared with EPO-based approaches? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations on solution quality are clearly mentioned.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the valuable comments on our work. We hope to address your concerns with the following answers. **W1. The innovations are not emphasized. For example, if the benefit of the proposed method based on geometric analysis includes improved efficiency compared with other Pareto learning methods, it is suggested to be emphasized.** - The first and main **innovation** of this work is its **error analysis**. While several Pareto learning methods exist (e.g., [1-3]), they lack theoretical analysis, and the properties of the learned Pareto model remain unclear. We establish a connection between PSL and hypervolume maximization, an important **"geometric"** quantity in multi-objective optimization (MOO), and provide convergence analysis for PSL with respect to HV maximization. This connection and the error bounds in this paper help us better understand PSL. - The second **innovation** of our proposed model is its ability to generate a Pareto solution that precisely aligns with the preference vector, enabling the localization of a Pareto solution from a **"geometric"** standpoint (as discussed in Property 2). In other words, our simple yet efficient method can produce an "exact" Pareto solution, as defined in EPO [3-4]. It's worth noting that our approach is 100 times faster than the EPO-based PSL method [3-4]. We sincerely appreciate your advice, and we will make our motivations clearer in the next version. **W2. Details of the Proposed Method: The details of the proposed method are inadequate. As the relationship between theoretical reasoning and methodology is oblique, the method is easily overlooked.** We thank the reviewer for the suggestion. We will summarize our method in an **algorithm block** in the next revision and highlight the connection between a step in the algorithm and the corresponding quantity in our analysis. **W3.
Comparisons and discussions may have missed some important Pareto learning techniques like hypernetworks. Contrasts or superiority of the proposed method are deficient.** The proposed framework is model-agnostic and orthogonal to techniques such as using a hypernetwork. For simplicity and theoretical analysis, the paper employs a fully-connected network. We believe that integrating hypernetworks into our model is straightforward by applying the optimization strategy described in Eq. (7) of the main paper. **Q1. Does the proposed method introduce any type of preferences between objectives?** In our Pareto neural model, "preferences" serve as inputs. Proposition 2 establishes the relationship between the preference vector $\lambda$ (expressed in polar coordinates) and the corresponding Pareto solution. Once the Pareto model is trained, it can produce a new (approximate) Pareto solution based on this new preference vector. **Q2. Does this method incorporate multiple neural models for joint learning?** Our current implementation utilizes a single neural network. The extension of the proposed method with joint learning is left as future work. **Q3-4. What is the crucial determinant of the improved efficiency regarding this method? How does scalability to large-scale problems embody in this work, typically compared with EPO-based approaches?** In short, our method enhances efficiency compared to the EPO-based approach by **eliminating** the need to calculate and operate on gradient vectors of **all** ($m$) objectives. Instead, we update the model **solely** based on the gradient of the argmin index in Eq. (6). It is worth noting that calculating a single gradient vector through backward propagation can be time-consuming in the default PyTorch implementation. As a result, the proposed approach becomes even more efficient than the EPO-based approach when applied to large-scale problems. 
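The single-gradient update described above can be illustrated with a minimal sketch. Everything here is a simplified stand-in: toy quadratic objectives, a Tchebycheff-style scalarization $\max_i f_i(x)/\lambda_i$ with the preference vector taken from a polar angle, and hand-written gradients, since Eq. (6) of the paper is not reproduced in this rebuttal and its exact form and argmin convention may differ:

```python
import numpy as np

# Toy bi-objective problem with closed-form gradients (illustrative only).
def f(x):
    return np.array([np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)])

def grad_f(x, i):
    return 2 * (x - 1.0) if i == 0 else 2 * (x + 1.0)

theta = np.pi / 4                               # polar angle, m = 2 objectives
lam = np.array([np.cos(theta), np.sin(theta)])  # preference on the unit sphere

x0 = np.full(3, 2.0)
x = x0.copy()
for _ in range(200):
    scores = f(x) / lam            # scalarized per-objective terms
    i_star = int(np.argmax(scores))
    # Only the achieving objective's gradient is computed and applied.
    x -= 0.05 * grad_f(x, i_star) / lam[i_star]

# The scalarized value max_i f_i(x)/lam_i decreases under these updates.
assert np.max(f(x) / lam) < np.max(f(x0) / lam)
```

Each iteration differentiates through only one of the $m$ objectives (the one attaining the scalarized max), which is the source of the efficiency gain over approaches that require gradient vectors of all objectives at every step.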
To apply the proposed method to large-scale problems, we can utilize a backbone network that provides benefits across all objectives. By employing a small subset of networks, optimization can be performed independently for task-specific objectives, as described in [5]. This strategy allows us to update only the task-specific parameters using PSL. Our next plan is to incorporate this strategy into the proposed approach and apply it to large-scale problems. We hope our response adequately addresses your concerns. We appreciate any further comments or concerns you may have regarding our paper. Reference [1]. Multi–Objective Reinforcement Learning with Continuous Pareto Frontier Approximation. AAAI 2015. [2]. Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation. JAIR 2016. [3]. Learning the Pareto Front with Hypernetworks. ICLR 2021. [4]. Gradient Descent with Controlled Ascent in Pareto Optimization. ICML 2020. [5]. Multi-Task Learning as Multi-Objective Optimization. NeurIPS 2018. [6]. Learning a Neural Pareto Manifold Extractor with Constraints. UAI 2022. --- Rebuttal Comment 1.1: Comment: Dear authors: Thank you for your detailed responses and additional experiments, which have made the paper appear more comprehensible and rigorous. Generally, I have no more inquiries regarding this paper.
Summary: This paper proposes a geometric perspective for Pareto set learning by establishing an equivalence between learning the complete Pareto set and hypervolume maximization. A theoretical analysis is provided to examine the gap between the estimated hypervolume and the true hypervolume of the Pareto set and empirical studies show its effectiveness. Strengths: 1. This paper presents the equivalence between Pareto set learning and hypervolume maximization 2. Theoretical analysis of the gap between the estimated hypervolume and the true hypervolume of the Pareto set leads to better understanding in practice. Weaknesses: The empirical studies need to involve more problems, e.g., those from the popular benchmarks DTLZ and WFG, which have been shown to represent some difficult properties of real-world problems. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How are the parameters of the experiment set? Is the complexity of the Pareto front taken into account when selecting test problems in the experiments? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: It is suggested to analyze the effectiveness of the proposed learning method on problems with irregular Pareto front. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. We hope that the following answers address your concerns. **W1. More Problems: The empirical studies need to involve more problems, e.g., those from the popular benchmarks DTLZ and WFG, which have been shown to represent some difficult properties of real-world problems.** Thank you for your suggestion. We have considered "more problems", e.g., the real-world problems mentioned in [6], which are four-objective design problems. Detailed results can be found in the **general response** and the attached **one-page PDF**. These empirical results show that our method demonstrates strong potential as a multi-objective optimization approach. The benchmark problems in DTLZ and WFG were primarily intended for evolutionary multiobjective algorithms (EMOAs). Since our method is gradient-based, these problems are still challenging for PSL (and other PSL methods) due to multiple local Pareto fronts. We acknowledge this limitation around line 346 of the main paper. In response to that, we will provide a more comprehensive comparison of the advantages and disadvantages of the proposed method in the revised version. **Q1. Experiment Setting: How are the parameters of the experiment set? Is the complexity of the Pareto front taken into account when selecting test problems in the experiments?** The experimental setup is standard. All experiments are conducted using Python, utilizing the PyMoo and PyTorch libraries. Our experiments did not rely on GPUs. The neural model employed for these experiments is a 4-layer ReLU/Sigmoid network, and the optimizer used is SGD, with a clipping norm of 2.0. The batch size was set to 256. We also provide new results on the **effect of network structures** as the 3rd experiment in the **general response**.
For your second question, in our paper, we have considered various "complexity levels" of the Pareto front, including convex examples such as ZDT1, concave examples like ZDT2, as well as real-world problems (with 3/4 objectives) with unknown Pareto front shapes, such as Four Bar Truss Design, Rocket Injector Design, and multi-objective linear quadratic regulator problems. **L1. Problems with Irregular Pareto Front: It is suggested to analyze the effectiveness of the proposed learning method on problems with irregular Pareto front.** To answer this question, we categorize irregular Pareto fronts (PFs) into two distinct cases: **disjointed** PFs and **degenerated** PFs. For a degenerated PF, for simplicity, we assume the PF is ($m$-2)-D, where $m$ is the number of objectives. This classification of irregular Pareto fronts is adopted from Hua et al. [5]. In terms of disjointed Pareto fronts, the theoretical expectation of estimated hypervolume in our paper matches the true hypervolume of the Pareto front. This fulfills the theoretical analysis for this work. The challenges related to disjointed PFs are discussed in the main paper (lines 165-168). Additionally, a detailed analysis of these challenges can be found in section B.5 and Figure 19 of the supplementary material. In the case of a degenerated ($m$-2)-D Pareto front, sampling preferences on an ($m$-2)-D subspace can reconstruct the entire front. Consequently, preference sampling on the original ($m$-1)-D sphere results in duplicated Pareto solutions, reducing the learning efficiency. As far as we know, handling an irregular Pareto front remains a challenging and unresolved topic in multi-objective optimization. The crucial aspect lies in estimating the Pareto front to enable wise sampling of preference vectors. We have identified this as a potential area for future research. We hope to address your concerns. If you have any further questions, please let us know. Reference [1]. 
Multi–Objective Reinforcement Learning with Continuous Pareto Frontier Approximation. AAAI 2015. [2]. Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation. JAIR, 2016. [3]. Pareto Set Learning for Expensive Multi-Objective Optimization. NeurIPS 2022. [4]. Diversity-Guided Multi-Objective Bayesian Optimization With Batch Evaluations. NeurIPS 2020. [5]. A Survey of Evolutionary Algorithms for Multi-Objective Optimization Problems with Irregular Pareto Fronts. IEEE/CAA Journal of Automatica Sinica, 2021. [6] An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the response, all my concerns have been well addressed.
Summary: The author presents a novel approach to multiobjective algorithms aimed at modeling the Pareto set using neural networks. The proposed approach in the manuscript allows for the direct modeling of the entire Pareto set, and it also establishes an equivalence between learning the complete Pareto set and maximizing the associated hypervolume. In this study, the results of the proposed approach on various benchmark problems and real-world problems are encouraging, which makes it a promising alternative to existing multiobjective algorithms. Strengths: - The authors establish a crucial equivalence between learning the complete Pareto set and maximizing the associated hypervolume, which facilitates the convergence analysis of hypervolume for Pareto set learning. - The author provides a clear interpretation of Pareto set learning as a hypervolume maximization problem, establishing a theoretical connection between the results of Pareto set learning and the hypervolume. - Also, this paper establishes a direct correspondence between specific preferences and the resulting Pareto solution within a polar coordinate system, enhancing the interpretability of the approach. - Further, it incorporates essential techniques in hypervolume-based Pareto set learning (PSL) for modeling the entire Pareto set, as discussed in Section 4.3. In this study, the results obtained from applying the proposed approach to various benchmark problems and real-world scenarios are highly encouraging, indicating its potential as a viable alternative to existing multiobjective algorithms. - Overall, the main strengths lie in developing a novel approach to multiobjective algorithms, which enables direct modeling of the entire Pareto set using neural networks and demonstrates promising results. 
Weaknesses: - The paper acknowledges that the proposed approach relies on gradient-based methods, which can result in finding solutions that are locally optimal rather than globally optimal when dealing with non-convex objectives. Additionally, the effectiveness of classical nonparametric techniques in practical applications is uncertain. - The computational latency of the PSL approach is recognized as a significant obstacle in effectively handling large-scale problems. This limitation prevents the method from scaling well to very large problems, thereby limiting its usefulness in practical scenarios. - The paper highlights that the finite set learned from classical methods may not accurately approximate the continuous manifold of the Pareto set, particularly when there are multiple objectives involved. As a result, the method may struggle to accurately represent the Pareto set, which could result in suboptimal solutions. The generalizability of the proposed framework to other problems is unclear. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The authors utilize a 4-layer fully connected neural network in their demonstration. Is the performance of the network influenced by its structure? Are there any anticipated performance differences when using alternative architectures? - In Table 2, what is the analysis regarding the occurrence of 0 values for range and sparsity when employing PSL-LS with ZDT2? - What are some examples of classical nonparametric techniques that can enhance the robustness of the method and address the limitations of gradient-based methods? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - One major limitation is the reliance on gradient-based methods, which can lead to locally optimal solutions for non-convex objectives. Consequently, the method may not always find the globally optimal solution. To address this issue and enhance the method's statistical guarantees, the authors propose exploring classical nonparametric techniques to boost its robustness. It would be clearer if the authors could list some examples of such techniques in the manuscript. - Another limitation is the computational latency associated with the PSL approach, which poses a significant challenge in handling large-scale problems. As a result, the method's scalability to very large problems is limited, which restricts its practical applicability. - Further, the paper highlights that the finite set obtained from classical methods may not accurately approximate the continuous manifold of the Pareto set, particularly when there are numerous objectives. Thus, the method may fail to precisely represent the Pareto set, potentially leading to suboptimal solutions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your constructive comments on our work. We hope that the following responses address your concerns. **Q1. Is the performance of the network influenced by its structure? Are there any anticipated performance differences when using alternative architectures?** The performance is **indeed** affected by the network structure. We present **numerical results** for various network structures as the 3rd experiment in the **general response**, revealing that the **depth** of the net has **a minimal impact** on the results, while the **width** does play a significant role. Decreasing the network width to 64 adversely affects performance. This observation aligns with a theoretical study in [6]. **Q2. In Table 2, what is the analysis regarding the occurrence of 0 values for range and sparsity when employing PSL-LS with ZDT2?** PSL-LS is limited to identifying **a single solution** (Figure 11(e)). Due to the concave nature of the Pareto front ($f_2 = 1 - {(f_1)}^2, 0 \leq f_1 \leq 1$) in ZDT2, the LS-based PSL method can only find an arbitrary endpoint ((1,0) or (0,1)) of the Pareto front. **Q3 / L1. What are some examples of classical nonparametric techniques that can enhance the robustness of the method and address the limitations of gradient-based methods?** The limitation of finding only a locally optimal solution is inherent in gradient-based multiobjective approaches. At the current stage, achieving global optimality, both theoretically and practically, is still challenging for PSL. 
Our rough plan for applying "nonparametric techniques" is to change the function space of $x_\beta(\cdot)$ from neural models to a certain reproducing kernel Hilbert space (RKHS) $H$ with the associated kernel $k(\cdot, \cdot): \Theta \times \Theta \mapsto R$, and turn to a regularized optimization problem $\min_{x \in H} \frac{c_m}{N} \sum_{i=1}^N [\rho(x(\theta^{(i)}), \theta^{(i)})^m] + \frac{1}{2} \lambda_r \|x\|_H^2$, where $\|x\|_H$ is the RKHS norm of the function $x(\cdot)$. The use of the RKHS may cause a loss of representation power while allowing a finer analysis and more robust optimization. In short, due to the representer theorem for RKHSs, $x(\cdot)$ (in contrast to the neural model $x_\beta(\cdot)$) is now convex w.r.t. the tunable parameters and therefore helps the convergence of gradient-based methods; the inclusion of the regularization term $\frac{1}{2} \lambda_r \|x\|_{H}^2$ is also expected to improve the convexity and robustness of the optimization problem. We thank the reviewer for the question and will incorporate the explanation above into the next revision. **L2. Another limitation is the computational latency associated with the PSL approach, which poses a significant challenge in handling large-scale problems.** We acknowledge that searching for the entire Pareto set, an ($m-1$)-dimensional continuous manifold, is indeed more challenging and time-consuming than searching for a single or a finite number of solutions. Notably, there have been successful applications of Pareto set learning (PSL) in large-scale problems such as multiobjective neural combinatorial optimization [1] and drug design [2]. To address **large-scale PSL problems**, a useful technique (proposed in [3]) is to learn a backbone network parameterized by $\beta^{(sh)}$, which benefits all objectives independently of the preference information. 
Additionally, separate networks parameterized by $\beta^{(i)}(\lambda)$ conditioned on the preference information are employed to learn the trade-off objectives, where $\lambda$ is an $m$-D preference vector. In such a way, the total number of parameters $\beta=[\beta^{(sh)}, \beta^{(1)}(\lambda), \ldots, \beta^{(m)}(\lambda)]$ can be kept at a low level. Compared to the widely used EPO-based PSL method introduced in [4-5], our proposed method has already shown superior efficiency. Table 2 and Figure 8 provide evidence that our method is 100 times faster and produces higher-quality Pareto solutions. Consequently, when dealing with large-scale problems, the proposed method outperforms EPO-based PSL in terms of efficiency. **L3. Numerous objectives.** We present new experiment results and promising findings on many-objective problems ($m \geq 4$). For detailed discussions, please refer to the **general response** and the **attached PDF**. For many-objective problems, representing the entire Pareto set with a finite number of populations becomes increasingly difficult since the solution space is very large. The proposed method, aiming to learn the full continuous Pareto set, provides many more Pareto solutions than traditional MOEAs. We believe that the idea of the proposed method offers an alternative way of solving many-objective problems. We welcome any additional comments or suggestions you may have. Reference [1] Pareto Set Learning for Neural Multi-Objective Combinatorial Optimization. ICLR, 2022. [2] Multi-Objective GFlowNets. ICML, 2023. [3] Multi-Task Learning as Multi-Objective Optimization. NeurIPS, 2018. [4] Learning the Pareto Front with Hypernetworks. ICLR, 2021. [5] Gradient Descent with Controlled Ascent in Pareto Optimization. ICML, 2020. [6] Any Deep ReLU Network is Shallow. arXiv, 2023. 
--- Rebuttal Comment 1.1: Title: rebuttal Comment: The author well addressed the questions that arose and responded to comments with further experiments to alleviate the limitations of the previous version.
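The representer-theorem reformulation sketched in the rebuttal above can be made concrete with a small numerical example. This is our own illustrative sketch, not the authors' code: we substitute a squared-loss surrogate for the $\rho(\cdot)^m$ term so that the convexity in the expansion coefficients is visible in closed form, and the RBF kernel, `gamma`, and `lam` values are arbitrary choices.

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """RBF kernel k(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit_rkhs(thetas, targets, lam=1e-3):
    """By the representer theorem, the minimizer has the form
    x(.) = sum_i alpha_i k(., theta_i). With a squared-loss surrogate the
    regularized problem is quadratic in alpha and solved in closed form:
    alpha = (K + lam * N * I)^{-1} Y."""
    N = len(thetas)
    K = rbf_kernel(thetas, thetas)
    return np.linalg.solve(K + lam * N * np.eye(N), targets)

def predict(thetas_train, alpha, thetas_new):
    return rbf_kernel(thetas_new, thetas_train) @ alpha

# Toy check: fit a smooth map from preference angles in [0, pi/2] to
# 2-D "solutions" (hypothetical targets, for illustration only).
rng = np.random.default_rng(0)
thetas = rng.uniform(0, np.pi / 2, size=(50, 1))
targets = np.c_[np.cos(thetas[:, 0]), np.sin(thetas[:, 0])]
alpha = fit_rkhs(thetas, targets, lam=1e-6)
pred = predict(thetas, alpha, thetas)
```

With the squared loss, the regularized problem reduces to a linear system in the coefficients $\alpha$, which is exactly the convexity advantage the rebuttal attributes to the RKHS formulation.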
Rebuttal 1: Rebuttal: **General response** We sincerely appreciate the diligent efforts of all the reviewers. In response to their concerns, we have conducted three additional experiments. - - - > **1. Many-Objective Problem** In our first experiment, we evaluate the performance of our proposed method on many-objective problems. We hope that this analysis can more or less address the concerns raised by **W4 for FLs5**, **Q1 for 6icQ**, **L3 for ZpxS**, and **W1 for Kyce**. | Problem (obj=4) | Indicator | Proposed PSL-HV2 (8000 sols) | NSGA2 (100 sols) | PSL-EPO | |--------------------------|-----------|------------------------------|------------------|----------------| | Car side impact design (CSID) | HV | **1.88** | 1.66 | 1.08 | | | Time | 41.46(s) | **26.36(s)** | 19.63(m) | | Conceptual marine design (CMD) | HV | **1.17** | 1.01 | 0.47* | | | Time | 41.86(s) | **26.68(s)** | 19.44(m) | (*We have finetuned the parameters and learning rate of EPO using code from [5]. It does not work very well on CMD. ) We find that the proposed method improves the efficiency of learning the Pareto set for a large number of objectives ($m$=4) compared to traditional Multi-Objective Evolutionary Algorithms (MOEAs). In a 4-objective scenario, the Pareto set/front is a continuous 3-D manifold within an $n$/$m$-D space. The Pareto set is significantly larger than the problems considered in the main paper ($m$=2,3). Approximating this extensive Pareto set using finite solutions through traditional MOEAs presents a considerable challenge. Taking NSGA2 [6] as a representative of conventional MOEAs, it takes approximately 26 seconds for NSGA2 to generate 100 solutions for these two problems. Our proposed method demonstrates remarkable scalability for 4-obj problems. With a training time of just 40 seconds, we train a model which is able to (approximately) generate the **entire** Pareto set. The hypervolume of the entire Pareto set is greatly improved compared with NSGA2. 
The trained Pareto model has the theoretical ability to generate an infinite number of Pareto solutions, approximating the true 3-D Pareto manifold. In practice, we present results using 8000 solutions (they can be generated very quickly). For detailed results, please refer to the attached PDF. We have observed that when applying our method to **many($m \geq 4$)-objective** problems, it **surpasses** the efficiency of EPO-based PSL. EPO-based PSL requires gradient calculations and operations for all objectives, leading to longer running times as reported in [4]. We are sincerely grateful to the reviewers for highlighting the importance of conducting experiments on many-objective problems. Based on our findings, we believe that our method holds great promise as a valuable tool for addressing many-objective problems. - - - > **2. Traditional MOEAs** As suggested by **W3 for FLs5**, we have compared our method with more traditional MOEAs on a three-objective Rocket Injector Design problem [1]. Specifically, we consider **MOEA/D-Tche** [2] and **SMS-EMOA** [3]. Both MOEA/D-Tche and SMS-EMOA generate only a finite number (91) of solutions, as shown in the attached file. Figure 1 clearly demonstrates that the continuous Pareto front examined in this paper accurately represents the true Pareto front, showcasing its superior ability in dealing with a 3-obj problem. The running time of these three algorithms is presented in the following table. As our method has the ability to generate an infinite number of solutions, its running time is **comparable** to that of MOEA/D-Tche (91 solutions) but significantly **outperforms** SMS-EMOA. | Problem (obj=3) | Indicator | MOEA/D-Tche | SMS-EMOA | Proposed (Training) | |------------------------|-----------|-------------|----------|---------------------| | Rocket Injector Design | Time | 11.77s | 8.35m | 33.64s | --- > **3. 
Model Structure** The last experiment studies the network structure effect raised by **W1 for FLs5** and **Q4 for ZpxS**. We found that the **depth** of a ReLU network has **minimal impact**, while the **width does affect** the performance. Reducing the width to 64 generally leads to a decrease in performance. This observation aligns with a theoretical study in [7], and we plan to investigate further and gather additional results. | | Rocket Injector Design | | | Four Bar Truss Design | | | |-----------------|--------------------------|-------|----------|-------------------------|-------|----------| | Network | HV | Range | Sparsity | HV | Range | Sparsity | | m-256-n | 40.7 | 0.7 | 2.2 | 11.9 | 1.6 | 1.5 | | m-256-256-n | 40.5 | 0.7 | 2.2 | 11.8 | 1.5 | 1.8 | | m-256-256-256-n | 40.6 | 0.7 | 2.7 | 11.9 | 1.6 | 1.5 | | m-64-64-64-n | 38.1 | 0.3 | 1.3 | 11.8 | 1.5 | 1.7 | --- Reference [1] An easy-to-use real-world multi-objective optimization problem suite. Applied Soft Computing, 2020. [2] MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition, IEEE Transactions on Evolutionary Computation, 2007. [3] SMS-EMOA: Multiobjective selection based on dominated hypervolume, European Journal of Operational Research, 2007. [4] Learning a Neural Pareto Manifold Extractor with Constraints. UAI, 2022. [5] https://github.com/dbmptr/EPOSearch. [6] A fast and elitist multiobjective genetic algorithm: NSGA-II. TEVC, 2002. [7] Any Deep ReLU Network is Shallow. arXiv, 2023. Pdf: /pdf/3fc78148271730c19ef52c9eea2f2509fbbf4d92.pdf
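The network structures listed in the table (e.g. m-256-256-n) can be read as plain fully connected ReLU nets mapping an $m$-D preference to an $n$-D solution. A minimal sketch of such a net, under our own naming and initialization assumptions (the rebuttal does not specify these details):

```python
import numpy as np

def init_mlp(dims, seed=0):
    """Build (weight, bias) pairs for a fully connected net with the given
    layer widths, e.g. dims=[m, 256, 256, n] for an 'm-256-256-n' network."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((a, b)) / np.sqrt(a), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def forward(params, x):
    """ReLU on hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

def n_params(params):
    return sum(W.size + b.size for W, b in params)

m, n = 3, 7                       # e.g. 3 objectives, 7 decision variables
net = init_mlp([m, 256, 256, n])  # the 'm-256-256-n' row of the table
out = forward(net, np.ones((5, m)))
```

Counting parameters this way makes the table's depth/width trade-off concrete: width dominates the parameter budget, which is consistent with the rebuttal's observation that width (not depth) drives performance.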
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents an algorithm to learn a continuous approximation of the Pareto frontier in multi-objective reinforcement learning (MORL). The idea of the algorithm is to train a neural network to produce solutions such that their hypervolume is maximized. The idea itself is not new, but the proposed method overcomes well-known hard limitations of MORL algorithms, most notably the use of the hypervolume to train the neural network. To overcome this limitation, the authors propose an approximation of the hypervolume based on its computation in polar coordinates. Strengths: To the best of my knowledge, this is the first paper that overcomes the hard computation of the hypervolume as loss to train neural networks to produce continuous Pareto frontier approximations. The hypervolume computation is very expensive for problems with 3 objectives or more, and is non-differentiable. The authors propose an approximation that overcomes these limitations and achieves good results in the experiments. The authors also provide proofs in the appendix and acknowledge limitations of their approach, which I highly appreciate. Weaknesses: The authors missed some fundamental related work that is highly relevant to their method. - The idea of learning a continuous Pareto frontier was first proposed --even though without a neural network-- by Pirotta et al., "Multi-objective reinforcement learning with continuous pareto frontier approximation". - Proposition 1 is equivalent to saying that the Pareto frontier achieves the highest hypervolume. This was already shown by [17], so a reference should be added. - The problem formulated in Eq. (3) is a special case of the more general version by Parisi et al., Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation (Eq. (2)). In your case, you are using the hypervolume as indicator to be maximized. - It would be nice to stress how hard the computation of the exact hypervolume is. 
In particular, Friedrich, T., Horoba, C., & Neumann, F. (2009), "Multiplicative approximations and the hypervolume indicator", showed that it is a #P-hard problem and proposed an approximation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I understand that the main contributions of the paper are theoretical, but could you run some experiments with more than 3 objectives? The MO-LQR can be easily customized to have N objectives, and it would be interesting to have a short experiment to investigate the computational and time complexity (as well as the other metrics like hypervolume and sparsity) of all algorithms (yours and baselines) on varying the number of objectives (2, 3, 4, 5). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors discuss the limitation of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback. We greatly appreciate it. In response, we have carefully considered your concerns and incorporated them into our revisions. **W1-2. Missing Highly Relevant Work: The idea of learning a continuous Pareto frontier was first proposed --even though without a neural network-- by Pirotta et al. [1]. Proposition 1 is equivalent to saying that the Pareto frontier achieves the highest hypervolume in [5]. So a reference should be added.** We would like to acknowledge that the concept of learning a continuous Pareto frontier was originally proposed by Pirotta et al. [1], and we recognize the valuable discussion of Proposition 1 by [5]. In the revised version of our paper, we have made sure to cite their contributions appropriately. **W3. Problem Formulation: The problem formulated in Eq. (3) is a special case of the more general version by Parisi et al. [2] (Eq. (2)). In your case, you are using the hypervolume as an indicator to be maximized.** Thank you for bringing Parisi et al.'s [2] work to our attention. In their work, they primarily focus on three indicators: the accuracy indicator $I_U$, the covering indicator $I_{AU}$, and the mixed indicator $I = \beta_1 I_{AU} / I_U - \beta_2$. We will make sure to cite their contributions appropriately. Due to the seminal nature of Parisi et al.'s work [2] in this field, many approaches to Pareto modeling are influenced, to some degree, by their work. Nevertheless, our proposed approach provides significant and **unique contributions** beyond Parisi et al.'s [2] work. The key differences and contributions can be summarized as follows: - Our method, a **conditioned model**, facilitates mapping preferences to Pareto solutions, enabling the **convenient generation** of user-specific solutions with a single input preference. In contrast, their method does not possess this capability. 
- Our proposed method establishes a crucial connection between Pareto set learning and hypervolume, a key indicator in multi-objective optimization (MOO). In addition, our approach includes an error analysis of the gap with the true hypervolume. **W4. Computational Complexity and Approximation of the Exact Hypervolume.** We agree that determining the hypervolume of a finite set of $n$ solutions with $m$ objectives is indeed a recognized #P-hard problem with respect to $m$, as indicated in reference [3]. We would also like to mention that the current best asymptotic runtime for $n$ solutions and $m$ objectives is $O(n \log n+n^{m/2})$, as proposed in reference [4]. On the other hand, our proposed approach significantly diverges from previous methods [3-5] by estimating the hypervolume through an expectation problem, as depicted in Eq. (5) and Eq. (6). As a result, our primary focus in this paper is on addressing the **statistical** approximation error. Fortunately, the statistical hypervolume approximation error can be effectively bounded when the empirical mean in Eq. (5) uses a large number of samples. In this paper, we offer two types of approximation bounds (Eq. (9) and (10) in the paper). **Q1. Problems with More Objectives.** To address many-objective problems ($m \geq 4$), we provide a comprehensive discussion in the 1st experiment in the **general response**. Visualization results can be found in the **attached PDF**. In short, the experiments demonstrate the **good scalability** of the proposed method when applied to many-objective problems. Given the challenge of representing a high-dimensional Pareto set with finite populations using traditional MOEAs, our proposed method, which learns the entire Pareto set, provides a valuable tool for handling many-objective problems. If you have any additional remarks or questions, please feel free to let us know. **Reference** [1] Multi–Objective Reinforcement Learning with Continuous Pareto Frontier Approximation. 
AAAI 2015. [2] Multi-objective Reinforcement Learning through Continuous Pareto Manifold Approximation. JAIR, 2016. [3] Multiplicative approximations and the hypervolume indicator. GECCO, 2009. [4] S-Metric Calculation by Considering Dominated Hypervolume as Klee's Measure Problem. Evolutionary Computation, 2009. [5] The Hypervolume Indicator Revisited: On the Design of Pareto-compliant Indicators Via Weighted Integration. EMO, 2007. --- Rebuttal Comment 1.1: Comment: Thank you for your response, all my questions have been well-addressed.
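The expectation-based hypervolume estimate discussed in the rebuttal above (Eq. (5)/(6) of the paper) can be sketched for the bi-objective minimization case using the standard polar decomposition $HV = c_m\,\mathbb{E}_\theta[\rho(\theta)^m]$. The function and variable names below are ours, the constant and $\rho$ are specialized to $m = 2$, and the test points are hypothetical:

```python
import numpy as np

def mc_hypervolume_2d(points, ref, n_samples=100_000, seed=0):
    """Monte Carlo hypervolume estimate in polar coordinates (minimization,
    m = 2). Writing gains g = ref - f, the boundary of the dominated region
    along direction theta is rho(theta) = max_i min(g_i1/cos t, g_i2/sin t),
    and HV = (pi/2) * E_theta[rho(theta)^2 / 2] for theta ~ Unif[0, pi/2]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, np.pi / 2, n_samples)
    g = ref - np.asarray(points)            # gains, shape (k, 2)
    r1 = g[:, 0:1] / np.cos(theta)          # shape (k, n_samples)
    r2 = g[:, 1:2] / np.sin(theta)
    rho = np.minimum(r1, r2).max(axis=0)    # distance to attainment boundary
    return (np.pi / 2) * np.mean(rho ** 2 / 2)

# Two points with gains (1.0, 0.5) and (0.5, 1.0) w.r.t. reference (1, 1):
# exact dominated area is 0.5 + 0.5 - 0.25 = 0.75.
est = mc_hypervolume_2d([[0.0, 0.5], [0.5, 0.0]], ref=np.array([1.0, 1.0]))
```

Sampling directions uniformly and averaging $\rho(\theta)^m$ is what turns the #P-hard exact computation into the statistical estimation problem the rebuttal describes, with error controlled by the number of samples.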
Summary: This paper presents a novel approach to multiobjective algorithms that allows for the direct modeling of the entire Pareto set. The authors present a novel approach to Pareto set learning (PSL) from a geometric perspective, distinguishing it from existing methods that treat all preferences equally, resulting in a mere partial Pareto front. The contributions of this paper are as follows: * Introducing a connection between preferences and their corresponding Pareto solutions, enabling the learning of the complete Pareto front. * Proposing a novel geometric perspective for PSL, demonstrating the equivalence of Pareto set learning to Hypervolume maximization. * Utilizing a neural network model to effectively approximate the entire Pareto set. * The experimental results validate the superiority of the proposed method over baseline approaches. Strengths: * The paper presents a novel approach to multiobjective algorithms that allows for the direct modeling of the entire Pareto set. * The authors establish an equivalence between learning the complete Pareto set and maximizing the associated hypervolume, which enables the convergence analysis of hypervolume for Pareto set learning. * The theoretical foundation of the proposed method is well established, especially the derivation of the generalization gap between estimated and true hypervolumes. * The proposed approach is evaluated on various benchmark and real-world problems, and compared to multiple state-of-the-art algorithms, and the results are promising. Weaknesses: * There is limited discussion regarding the design of the neural network structure, and the Machine Learning perspective could benefit from further elaboration. * The discussion about the construction of the training data is lacking, which is crucial to understanding the similarities and differences between the training dataset and benchmark problems. * The evaluation of the proposed model's PSL ability is solely compared with other PSL methods. 
It would be beneficial to also compare its optimization performance with other traditional methods, such as decomposition-based and hypervolume-based approaches. * The focus of the tested problems is primarily on 2 or 3 objective optimization problems. However, it remains unclear if the proposed method can be generalized to handle more objectives. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. We have carefully considered your concerns and hope that the following answers address them. **W1. There is a limited discussion regarding the design of the neural network structure, and the machine learning perspective could benefit from further elaboration.** Thank you for the suggestion. In response, we conducted additional experiments in the 3rd experiment of the **general response** to examine the impact of network structure. Our findings suggest that the depth of the neural model has **minimal** influence on the generated quality, whereas the width **does** affect the results. Specifically, using a width of 64 leads to poor solutions. This finding aligns with the theoretical results discussed in previous research [3]. **W2. The discussion about the construction of the training data is lacking, which is crucial to understanding the similarities and differences between the training dataset and benchmark problems.** As shown in Figure 2, the studied Pareto set learning (PSL) problem involves learning the mapping function $x_\beta(\cdot)$ from a preference vector (in polar coordinates) to the corresponding Pareto solution. The considered Pareto neural model is a mapping function denoted as $x_\beta(\cdot): {[0, \pi/2]}^{m-1} \mapsto R^n$, where $m$ represents the number of objectives and $n$ denotes the dimensionality of the solution space. Therefore, according to Eq. (5), the "training data" (preference vectors) are sampled from the uniform distribution, $\text{Unif}({[0, \pi/2]}^{m-1})$. This sampling method provides an unbiased estimation of the true hypervolume. **W3. The evaluation of the proposed model's PSL ability is solely compared with other PSL methods. It would be beneficial to also compare its optimization performance with other traditional methods, such as decomposition-based and hypervolume-based approaches.** Thank you for your feedback. 
We conducted additional experiments, comparing the proposed method with traditional MOEAs, including **MOEA/D-Tche** [1], **SMS-EMOA** [2], and **NSGA2** [4]. Please refer to the **general response** and the **attached PDF** for details. In summary, our proposed method aims to generate the entire Pareto front, while traditional MOEAs rely on a finite population to approximate the Pareto set. We also observed that the proposed Pareto set learning concept is particularly valuable for problems with many objectives (m ≥ 4) (see the 1st experiment in the general response), as approximating such a large Pareto manifold becomes increasingly challenging with only finitely many solutions. **W4. The ability to handle problems with more objectives.** Regarding many-objective problems (m ≥ 4), we have provided a comprehensive discussion in the 1st experiment in the **general response** and the **attached PDF**. In short, our method displays **good scalability** when applied to many-objective problems. As the Pareto set of a 4-objective problem is large (a 3-D continuous manifold), approximating this set with finite solutions by traditional MOEAs poses increasing challenges. However, using the proposed method, it is easy to train a model to approximate the **entire** continuous Pareto set, which provides more alternative solutions for the user. We hope our response adequately addresses your concerns. If you have any additional questions or concerns, please feel free to let us know. Reference [1] MOEA/D: A Multiobjective Evolutionary Algorithm Based on Decomposition. IEEE TEVC, 2007. [2] SMS-EMOA: Multiobjective selection based on dominated hypervolume. EJOR, 2007. [3] Any Deep ReLU Network is Shallow. arXiv, 2023. [4] A fast and elitist multiobjective genetic algorithm: NSGA-II. TEVC, 2002. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks to the authors for the revision. All my concerns have been addressed convincingly. 
I have updated my score accordingly.
null
null
null
null
Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation
Accept (poster)
Summary: This paper first points out the semantic gap problem of the existing language-supervised semantic segmentation method. It shows that not all visual elements are included in the corresponding language annotations. Then the paper proposes Concept Curation (CoCu), which includes Vision-driven Expansion, Text-to-Vision-Guided Ranking, and Cluster-guided Sampling strategies to solve the semantic gap problem. The experiments show the effectiveness of the CoCu. Strengths: 1. The semantic gap problem sounds very reasonable and significant for language-supervised semantic segmentation. 2. The proposed method CoCu could alleviate the semantic gap problem to some extent. 3. The experiments and ablation study are comprehensive. Weaknesses: 1. The authors should summarize the Sec. 2.3 at the beginning or the end of this section. For example, in general, Sec. 2.3 provides a method to find better class candidates for the loss in Sec. 2.1. Otherwise it confuses the readers about the ultimate goal of Sec. 2.3. 2. The overall pipeline of CoCu is a kind of dataset pre-processing method, and it is complicated. Is CoCu done online or offline? If online, how much time does it cost to find the final class candidates? Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
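For context, the image-text loss from Sec. 2.1 that the review refers to is, in GroupViT-style language-supervised training, a symmetric contrastive (InfoNCE) objective over matched image-text pairs. A minimal sketch under our own naming assumptions (not the paper's code; the temperature value is an arbitrary common choice):

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric image-text contrastive (InfoNCE) loss. Matched pairs sit on
    the diagonal of the similarity matrix; each direction is a softmax
    cross-entropy with diagonal targets."""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix

    def xent_diag(l):
        l = l - l.max(axis=1, keepdims=True)    # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    return 0.5 * (xent_diag(logits) + xent_diag(logits.T))

# Perfectly aligned pairs drive the loss toward zero...
emb = np.eye(4)
low = info_nce(emb, emb)
# ...while mismatched pairs give a large loss.
high = info_nce(emb, np.roll(emb, 1, axis=0))
```

The semantic-gap problem the paper identifies lives in the targets of this loss: when captions omit visual concepts, the "matched" text on the diagonal does not actually describe everything in the image, which is what Concept Curation sets out to fix.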
Rebuttal 1: Rebuttal: We thank you for your valuable suggestions and respond to your concerns as follows. **Q1: Writing in Sec. 3.3?** Thanks for pointing it out. As suggested, we will clarify the motivation of our method at the beginning of Section 3.3 as well as its relation with Section 3.1. **Q2: Online or Offline? Time Cost?** We implement CoCu in an offline manner so there is no additional online time cost as compared to the baseline method. For offline computation, performing Concept Curation on the CC3M [36] dataset roughly takes 1.6 hours (which is nearly 8% of pre-training time following our setting) but brings +4.9% performance gains. Please refer to the table below for more details on time cost. |**Method**|**Step**|**Computation**|**Operation**|**Time Cost (hrs)**|**Average mIoU (%)**| |:-----|:-----|:-----|:-----|:-----|:-----| | GroupViT [43] | 1 (final step) | online | pre-train | 20.0 | 8.2 | | CoCu | 1 | offline | inference | 0.3 | | | CoCu | 2 | offline | build index | 0.1 | | | CoCu | 3 | offline | curation | 1.2 | | | CoCu | 4 (final step) | online | pre-train | 20.0 | 13.1 (+4.9) | --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: Thanks for the answer. I keep my rating.
Summary: This paper proposes a novel data curation/augmentation process, named Concept Curation (CoCu), for language-supervised semantic segmentation. In the setting of language-supervised semantic segmentation (e.g., GroupViT), the network is trained with an image-text contrastive loss on large-scale image-text pairs. The authors identify several issues in the vanilla data distribution of the original contrastive learning, e.g., semantic gap and semantic bias. The proposed CoCu mitigates these issues and improves over the prior work GroupViT under a controlled setting, showing a faster convergence rate and higher accuracy. Strengths: 1. In the quantitative evaluation, the authors re-implemented GroupViT and compared CoCu and GroupViT under a controlled experimental setting, i.e., a batch size of 1024. The authors also report accuracies on additional evaluation datasets: IN50, IN300, Cityscapes, etc. CoCu outperforms GroupViT on all the datasets by a margin. Although CoCu doesn't achieve state-of-the-art results on some datasets, this doesn't diminish the effectiveness of the method. 2. The proposed method is well-motivated. The web-crawled image-text pairs are indeed quite noisy. And GroupViT is also known to be bad at segmenting background classes like grass and sky. Moreover, the visualization in Figure 3(b) shows that CoCu can focus on the background grass instead of the foreground fox. 3. In the Table 2 ablation study, when trained only on CC3M, CoCu improves over GroupViT by a large margin on Pascal VOC. It is a very interesting result and justifies that CoCu speeds up convergence. Weaknesses: 1. Insufficient qualitative comparison. One major claim of CoCu is that, compared with the vanilla contrastive loss used in GroupViT, the proposed dataset curation mitigates the semantic gap and semantic bias issues. So besides the quantitative evaluation metric mIoU, more visualizations compared with GroupViT are expected.
I would suggest the authors add more visualizations in the supplementary materials if there is no space left in the main submission. 2. In ablation Table 3, the authors studied the effects of different components of CoCu. But the average mIoU may not be an insightful metric. Since different datasets have very different category vocabularies, the authors may include a detailed ablation study table in the supplementary material and elaborate more on the results. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: 1. The boundary between the semantic gap and semantic bias is not that clear. At least from Figure 1, it's hard to tell the major differences between them. The authors could define them more rigorously and provide more examples of each. 2. How long does it take to process the dataset? It would be informative for other readers who want to reproduce or follow up on the curation process. 3. In Table 2, CoCu also showed significant improvement on ImageNet50 and ImageNet300. But for these two datasets, the vocabularies are mainly foreground; it would be very interesting to show why CoCu could improve on these datasets as well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for valuing our efforts and for your helpful suggestions. Please check our clarification below regarding your concerns. **Q1: More qualitative comparison.** Please refer to Appendix Figure 2 for additional qualitative comparisons (heatmaps). The figure shows that CoCu learns visual concepts (especially those not captured by captions) better than the baseline GroupViT (as presented in manuscript Figure 4). As suggested, we will provide more examples presenting better convergence to missing concepts in the updated Appendix. **Q2: Ablation study table.** Please find a more detailed ablation study table below for the different evaluation datasets, where "baseline", "#1", "#2", "#3" and "#4" refer to the same experiments as in manuscript Table 3. As suggested, we will include this table in the Appendix for reference.

|**Model** | **PVOC** | **PCON** | **COCO** | **IN-S-50** | **IN-S-300** | **CITY** | **ADE** | **STUF** | **Average** |
|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|
| Baseline | 15.5 | 10.4 | 6.5 | 10.2 | 2.9 | 8.1 | 4.4 | 7.7 | 8.2 |
| #1 | 20.7 | 10.4 | 8.2 | 13.9 | 4.6 | 8.3 | 4.9 | 8.0 | 9.9 |
| #2 | 22.8 | 11.4 | 8.6 | 13.5 | 5.1 | 8.3 | 5.8 | 7.0 | 10.3 |
| #3 | 29.2 | 13.0 | 10.2 | 18.8 | 6.3 | 7.5 | 5.9 | 8.3 | 12.4 |
| #4 | 30.6 | 13.9 | 10.8 | 19.3 | 7.3 | 8.2 | 6.1 | 8.5 | 13.1 |

**Q3: Difference between semantic gap and semantic bias?** We further clarify these two concepts as suggested. We refer to the semantic gap as a problem underlying the **pre-training data**. In web-crawled image-text pairs, it is prevalent that texts (i.e., captions) do not capture all the visual concepts in their paired images, yielding a semantic gap between the paired texts and images. In comparison, we refer to **semantic bias** as a problem of **pre-trained vision-language models**.
We notice that pre-trained vision-language models often retrieve salient visual concepts when taking images as inputs and attempting to retrieve texts, hence demonstrating a certain bias toward salient visual concepts. We conjecture that the semantic bias in large-scale vision-language models originates from the semantic gap in the data used to pre-train them. For the semantic bias in CLIP models, the semantic gap lies in the noisy WIT400M image-text pairs. We will further define and clarify these two concepts in the updated manuscript. **Q4: Time cost to process the dataset?** For reference, processing the CC3M [36] dataset with our method takes roughly 1.6 hours (8% of pre-training time), including inference over images and texts, building the search index, and performing concept curation on all image-text pairs. Please refer to the table below for more details on time cost.

|**Method**|**Step**|**Computation**|**Operation**|**Time Cost (hrs)**|**Average mIoU (%)**|
|:-----|:-----|:-----|:-----|:-----|:-----|
| GroupViT [43] | 1 (final step) | online | pre-train | 20.0 | 8.2 |
| CoCu | 1 | offline | inference | 0.3 | |
| CoCu | 2 | offline | build index | 0.1 | |
| CoCu | 3 | offline | curation | 1.2 | |
| CoCu | 4 (final step) | online | pre-train | 20.0 | 13.1 (+4.9) |

**Q5: Why is CoCu also effective for foreground regions?** In Figure 3 of the submitted Appendix, we provided qualitative comparisons between the baseline method GroupViT [43] and our CoCu for segmenting ImageNet-S foreground categories. We showed that **with CoCu pre-training, the segmentor is more robust to changes in the expression of the same semantics in captions**, e.g., from "dog" to "kuvasz", demonstrating that CoCu leads to more effective and generalized representation learning over foreground categories. We include a new example of segmenting foreground regions in Figure 1 of the attached **"global-response.pdf"**.
Specifically, in this figure, we segment an image captioned *helicopter fly at high altitudes* with both GroupViT [43] (first row) and CoCu (second row) pre-trained models. When we feed different text inputs (i.e., *cloud*, *helicopter* and *mountain*) to both segmentors and visualize their activations, it is clear that GroupViT [43] activates on the *helicopter* region (highlighted with a red box) for all given text inputs (i.e., *helicopter*, *cloud*, *mountain*). In such a case, the pre-trained segmentor struggles to discriminate *helicopter* from the other concepts (*cloud*, *mountain*). In comparison, CoCu successfully captures the corresponding regions when given different text inputs. This is because pre-training in the baseline GroupViT captures insufficient textual concepts in its visual representations and is thus easily confused by text inputs. In contrast, the CoCu segmentor learns significantly more comprehensive concepts in pre-training by bridging semantic gaps and is more robust to variations of semantic context, thus localizing each text input more accurately. This further showcases that **our CoCu facilitates discriminating different contexts in the same images**, leading to better foreground segmentation on ImageNet-S-50 and ImageNet-S-300. We thank you for your valuable comments and will include this analysis in the updated appendix.
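For readers, the text-to-vision-guided ranking discussed in these reviews and rebuttals can be sketched as a simple cosine-similarity ranking of candidate concepts against an image embedding. The embeddings and concept names below are invented toy values standing in for a CLIP-style encoder's outputs; this is an illustrative sketch, not the authors' actual implementation.

```python
import numpy as np

def normalize(v):
    # L2-normalize along the last axis, as CLIP-style encoders do.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy embeddings (assumed values) standing in for encoder outputs.
image_emb = normalize(np.array([0.9, 0.1, 0.2]))
concepts = ["fox", "grass", "sky"]
concept_embs = normalize(np.array([
    [0.88, 0.12, 0.18],  # "fox"  -- visually matched to the image
    [0.10, 0.95, 0.00],  # "grass"
    [0.00, 0.10, 0.99],  # "sky"
]))

# Rank candidate concepts by cosine similarity to the image; the
# top-ranked concepts would be kept as curated class candidates.
scores = concept_embs @ image_emb
ranked = [concepts[i] for i in np.argsort(-scores)]
```

With these toy vectors, the visually matched concept ends up ranked first and would be retained for pre-training, even if it never appeared in the original caption.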
Summary: This paper targets learning unsupervised semantic segmentation from image-text pairs. The authors specifically address the issue of training data quality in the previous method Group-ViT and propose an approach to enhance the captions with additional visual concepts through an automated pipeline. The paper demonstrates the effectiveness of this data filtering pipeline by evaluating it on official segmentation benchmarks. Strengths: - The semantic gap problem the authors studied is interesting and meaningful. - The writing is good and the paper is easy to understand. Weaknesses: The proposed data filtering pipeline relies on the CLIP model for collecting visually similar samples. However, the CLIP model has been trained on a much larger scale, with hundreds of millions of image-text pairs, whereas the training data used in this paper is relatively small. Does it work by distilling the CLIP model? This raises concerns about **the effectiveness of the pipeline when scaling up to larger training data sizes**. It would be helpful to explore **alternative self-supervised methods such as DINO, MAE, or Group-ViT itself to replace the CLIP model** in the pipeline and evaluate its performance. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: NaN Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your valuable suggestion. Please find our clarification regarding your concern below. **Q1: Does it work by distilling the CLIP model? Replace CLIP with GroupViT?** No, CoCu does not rely on the vast knowledge encoded in CLIP [32] but rather on our novel pipeline design. To verify this, we replace CLIP with GroupViT as our curation model to perform CoCu. The quantitative comparison with our baseline is presented in the table below. As presented, performing CoCu with GroupViT brings decent performance gains (+3.9%) over the baseline method, comparable with those of performing CoCu with CLIP (+4.9%). Hence, the performance gain is largely attributed to our CoCu design, though employing the powerful CLIP further boosts the performance.

| **Curation Model** | **Backbone** | **PVOC** | **PCON** | **COCO** | **IN-S-50** | **IN-S-300** | **CITY** | **ADE** | **STUF** | **Average** |
|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|
| - | - | 15.5 | 10.4 | 6.5 | 10.2 | 2.9 | 8.1 | 4.4 | 7.7 | 8.2 |
| GroupViT [43] (newly added) | ViT-S | 26.1 | 12.3 | 10.4 | 17.8 | 6.9 | 7.6 | 6.8 | 8.7 | 12.1 (+3.9) |
| CLIP [32] | ViT-B/16 | 30.6 | 13.9 | 10.8 | 19.3 | 7.3 | 8.2 | 6.1 | 8.5 | 13.1 (+4.9) |

**Q2: Scaling up to larger training size?** We did not perform large-scale experiments due to resource limitations. For the pre-training scale in our manuscript Table 1, it takes around 9 days with Tesla V100 GPUs and a global batch size of 1024. Increasing the pre-training scale further would lead to much longer training time. Nevertheless, Table 2 of the manuscript shows how CoCu behaves under various pre-training scales, indicating that CoCu's superior performance is quite tolerant to the pre-training scale.
**Q3: Replace CLIP with self-supervised models?** We should clarify that CoCu is not applicable to DINO or MAE, as CoCu takes texts as inputs, which is not supported by self-supervised models that handle only image inputs. We certainly agree that it is very meaningful to explore what role self-supervised models could play in bridging semantic gaps, and we will study this in future work.
Summary: Current VLMs suffer from a noticeable semantic gap between visual and textual modalities, since many visual concepts present in images are easily missed in their paired captions. This work proposes Concept Curation (CoCu), a pipeline that leverages CLIP to compensate for the missing semantics. For each image-text pair, CoCu establishes a concept archive that maintains potential visually-matched concepts using vision-driven expansion and text-to-vision-guided ranking. This approach enables the identification of relevant concepts through cluster-guided sampling, which are then fed into the pre-training process. As a result, CoCu bridges the gap between visual and textual semantics. Extensive experiments conducted on eight segmentation benchmarks demonstrate that CoCu achieves exceptional zero-shot transfer performance and enhances the language-supervised segmentation baseline by a substantial margin. These results underscore the importance of closing the semantic gap in pre-training data. The code for CoCu will be made available to the research community. Strengths: 1. Overall, the paper is well-written, and the study has a good motivation. 2. The mechanism of CoCu is intuitive and easy to follow. 3. CoCu demonstrates significant empirical results: it outperforms GroupViT (re-implemented baseline) by 4.6% mIoU on average on eight popular semantic segmentation benchmarks. 4. CoCu also yields good qualitative results, with more accurate and smoother masks than its baseline. Weaknesses: 1. Although CoCu indeed enriches visual concepts during VL pre-training, the cost seems to be heavy. During training, a CLIP model is employed to perform the key components of CoCu, such as vision-driven expansion and text-to-vision-guided ranking, which introduces a lot of additional computation. Thus, the comparison to baselines such as GroupViT might not be strictly fair.
A detailed comparison of training time or FLOPs should be presented to support the effectiveness of CoCu. 2. My bigger concern lies in how CoCu relies on the pre-trained CLIP, i.e., if a weaker CLIP model is used for CoCu, how will the performance change? If CoCu is not sensitive to CLIP's performance, you can directly use GroupViT to perform vision-driven expansion and text-to-image-guided ranking, so that no additional parameters will be introduced. As shown in Table 4, GroupViT obtains acceptable zero-shot classification results. However, if CoCu relies heavily on a strong CLIP, your contributions might also be challenged. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Table 1, are both CoCu and GroupViT trained for 30 epochs? 2. Following the concern in Weakness 1, I am also wondering what happens if you change the x-axis from #epochs to actual training time in Figure 3. Will CoCu still converge faster than GroupViT? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: As CoCu basically follows the architecture of GroupViT, it shares the same limitation: for each image, the number of semantic concepts segmented by CoCu/GroupViT is at most the number of group tokens fed to the vision encoder. Thus, given a high-resolution image with many semantic regions, these methods might fail to generate accurate segmentation masks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We appreciate the value you see in our motivation and competitive experimental results. Please find clarifications regarding your concerns below. **Q1: In Table 1, are both CoCu and GroupViT trained for 30 epochs?** Yes, in manuscript Table 1, the pre-training configurations of GroupViT [43] and CoCu remain the same to ensure a fair comparison. **Q2: Change the x-axis from #epochs to actual training time in Figure 3?** We would clarify that CoCu is implemented in an offline manner, and it does not introduce much overhead in processing. Take the experiment on the CC3M [36] dataset as an example: CoCu takes 1.6 hours in total, as detailed in the table below, and the remaining pre-training is the same as in GroupViT [43], taking around 20 hours. Hence, CoCu introduces around 8% computational overhead. We provide the time cost of each operation in the table below.

|**Method**|**Step**|**Computation**|**Operation**|**Time Cost (hrs)**|**Average mIoU (%)**|
|:-----|:-----|:-----|:-----|:-----|:-----|
| GroupViT [43] | 1 (final step) | online | pre-train | 20.0 | 8.2 |
| CoCu | 1 | offline | inference | 0.3 | |
| CoCu | 2 | offline | build index | 0.1 | |
| CoCu | 3 | offline | curation | 1.2 | |
| CoCu | 4 (final step) | online | pre-train | 20.0 | 13.1 (+4.9) |

Despite the 8% additional offline time cost, CoCu brings remarkable performance gains over the baseline GroupViT [43] (+4.9% over 8 benchmarks). This supports the significance of bridging semantic gaps in language-supervised segmentation and offers flexibility when tackling different tasks with different requirements. As suggested, we will update manuscript Figure 3 with a new loss curve that uses actual time as the x-axis. The time usage will include both offline operations and online pre-training.
**Q3: For the curation model, replace CLIP with a weaker CLIP model or GroupViT?** Technically, CoCu could leverage any vision-language model to perform curation, bridge semantic gaps and facilitate language-supervised segmentation. However, we would highlight that a strong curation model (e.g., CLIP [32]) is not a necessity for achieving decent performance gains on downstream datasets. To verify this, we follow your suggestion and replace the CLIP model used in the manuscript (CLIP ViT-B/16) with a weaker CLIP model (CLIP ViT-B/32) to perform CoCu, followed by pre-training a segmentor on CC3M [36]. The experiment shows that CoCu with CLIP ViT-B/32 brings performance gains over the baseline (8.2% average mIoU) comparable to those of CLIP ViT-B/16 (12.8% vs. 13.1% average mIoU). This demonstrates that CoCu can work with different CLIP models with consistent performance gains. To ensure fairness to the baseline method [43], we further replace CLIP ViT-B/16 with GroupViT [43] as our curation model to perform CoCu. As presented in the table below, performing CoCu with GroupViT still brings a clear improvement (+3.9%) over the baseline method. This indicates that the significant performance gain is largely attributed to our novel pipeline design, though employing the powerful CLIP further boosts performance. Please note that the retrieved results vary depending on the vision-language model, leading to fluctuating pre-training and segmentation results. Nevertheless, the enhancements across all models remain noteworthy, underscoring our pipeline's inherent capacity for generalization and improvement.
| **Curation Model** | **Backbone** | **PVOC** | **PCON** | **COCO** | **IN-S-50** | **IN-S-300** | **CITY** | **ADE** | **STUF** | **Average** | |:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----|:-----| | - | - | 15.5 | 10.4 | 6.5 | 10.2 | 2.9 | 8.1 | 4.4 | 7.7 | 8.2 | | GroupViT [43] (newly added) | ViT-S | 26.1 | 12.3 | 10.4 | 17.8 | 6.9 | 7.6 | 6.8 | 8.7 | 12.1 (+3.9) | | CLIP [32] (newly added) | ViT-B/32 | 27.4 | 14.8 | 10.6 | 16.7 | 6.3 | 10.4 | 6.3 | 9.9 | 12.8 (+4.6) | | CLIP [32] | ViT-B/16 | 30.6 | 13.9 | 10.8 | 19.3 | 7.3 | 8.2 | 6.1 | 8.5 | 13.1 (+4.9) | --- Rebuttal Comment 1.1: Title: Response to authors' rebuttal Comment: Thank you for your rebuttal and additional experiments. My primary concerns are addressed, and I would like to raise my rating to borderline acceptance.
Rebuttal 1: Rebuttal: Please check global_response.pdf for our new Figures. Pdf: /pdf/ab1c4584b82b029b287583a1ef997cbff32e3d2b.pdf
NeurIPS_2023_submissions_huggingface
2023
(Almost) Provable Error Bounds Under Distribution Shift via Disagreement Discrepancy
Accept (poster)
Summary: This paper proposes a new method for evaluating the effect of distribution shift on model accuracy. The authors first prove the proposed error estimation bound under some assumptions. They then demonstrate the effectiveness of their method by training a surrogate model that maximizes the disagreement discrepancy. The authors conduct experiments on different datasets, showing that their method never overestimates the prediction accuracy on the target domain. Strengths: 1. The authors use unlabeled data in the target domain, which is believed to help predict the accuracy on the target domain. 2. The authors introduce an interesting concept, named disagreement discrepancy, which represents the maximum difference in disagreement between the target domain and the source domain over the hypothesis set. This concept may provide some insights into the investigation of domain shift. Weaknesses: 1. Certain evaluation metrics seem unclear. The reviewer finds it confusing why the method's characteristic of never overestimating the target accuracy—consistently underestimating it—is considered advantageous, particularly when its Mean Absolute Error (MAE) significantly underperforms compared to other methods. The reviewer recommends the authors illustrate the benefits of this particular feature. 2. When compared with the baseline methods, the proposed metric falls short in terms of some crucial metrics, such as the Mean Absolute Error (MAE). 3. The authors train an additional network by maximizing the disagreement discrepancy. However, this approach may be time-consuming and unreliable due to its sensitivity to the training hyperparameters. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Most questions are listed in the weaknesses part. Minor: 1. Please explain the meaning of '(almost) Provable'. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: Shown in weakness part Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. In your review, you evaluated the soundness as "poor". We are not sure we understand the reason for this evaluation. Our understanding is that soundness is meant to evaluate the correctness of claims in a given work. Do you believe that our bound or our experiments are not valid? **If not, could you please clarify why you gave this evaluation, so that we can address whatever concerns you have?**

> **"The reviewer finds it confusing why…never overestimating the target accuracy…is considered advantageous, particularly when MAE significantly underperforms."**

Consider a scenario where a model will be making crucial decisions: self-driving cars, medical diagnoses, financial agents, etc. If we deploy this model expecting 80% accuracy and its actual accuracy is 20%, the result could be very costly. In this setting, a valid error bound is *essential*, and the accuracy of an error prediction only matters if we can trust it. This is exactly when consistent error overestimation is important. Say we need a *maximum* of 20% error for a model to be reasonably safely deployed. If our method predicts 20% error, you can be quite sure that it's OK to deploy. If our method predicts 50% error, the model *might* still be OK to deploy—but we probably don't want to take that chance! If you were to use a different method to predict error here and it predicted 10% error, you might then deploy it, only to discover that the *true* error is 50%. **But by then the damage may already be done.** We hope it is clear that there are *many* settings where overestimating error is advantageous. Please see our response to all reviewers for more details.
> **"Compared with the baseline methods, the proposed metric falls short in terms of some crucial metrics, such as the Mean Absolute Error (MAE)."**

We want to emphasize that **the primary focus of this work is on giving valid, non-vacuous error *bounds***, with accurate prediction being secondary, though still important. We believe we were as upfront as possible about this in our writeup. Indeed, **as early as the abstract we state that we do not beat the baselines purely on MAE.** Where our method *does* outperform existing methods is reliability, and it does so substantially. Based on our point above about the need for reliability, we hope you agree that this also represents a meaningful contribution. We discuss this in greater detail in our response to all reviewers. You stated that our method falls short on metric***s*** (plural). But we are not sure what other metrics you may be referring to? Our method does *much* better on coverage and has much lower MAE on the (very few) points for which it underestimates error. **Could you please clarify what other metrics you were talking about?**

> **"The authors…train an additional network by maximizing the discrepancy in disagreement. However, this approach may be time-consuming and unreliable due to its sensitivity to the training hyperparameters."**

This is not correct. Our method trains a *linear* layer on frozen features. This optimization is extremely cheap due to its simplicity and convexity, taking literally seconds. It is also very insensitive to hyperparameters because of the convexity.

> **"Please explain the meaning of '(almost) Provable'"**

"Almost" is used to highlight the fact that we need an assumption for the bound to hold provably. The error bound is guaranteed if the assumption is valid—but by definition, assumptions cannot be proven a priori, so it cannot be truly guaranteed in all settings.
Notably, this is true of *all* methods that predict or bound error (including the baselines), even if they don't state it. Our extensive experiments across numerous benchmarks show that on natural distribution shifts the assumption holds and our method gives valid error bounds. Hence, though our method works well, "almost" is used to cover extreme scenarios where the assumption may not hold true, making the error bound incorrect. --- Rebuttal Comment 1.1: Title: Thank you for your reply! Comment: Clarification on soundness 1: This poor score is mainly due to the evaluation metric and the unfair comparison. 1. The authors list several applications where safety is critical. However, the most reliable metric is the out-of-distribution (OOD) accuracy itself in these applications. As the proposed method requires access to the unlabeled OOD data, the safest evaluation is to label the data and compute the out-of-distribution accuracy. If the authors want to demonstrate the necessity of the non-vacuous property of the proposed method, they should find an application where: a. safety is essential, b. unlabeled OOD data is easy to collect, c. labeling is expensive even considering the safety requirement. 2. Unfair comparison. It is not hard to make the other baseline methods non-vacuous. In practice, the simplest way is to add a safety threshold to the prediction, which can be derived theoretically for every metric based on a concentration inequality or an expectation calculation by adding a $\sqrt{\frac{\log(1/\delta)}{n}}$ term to the bound. This method would be similar to the shift setting provided in Appendix D when ignoring the dataset size difference. As shown in that table, some baseline methods can achieve around 90% coverage while the MAE is still smaller than that of the proposed method. In that sense, the results in Table 1 are misleading, as they imply that it is hard for other methods to achieve a large coverage rate. Further discussion based on the authors' reply: 1.
Thank you for your clarification on your methods! However, the reviewer has some questions on Assumption 3.5. As the authors only fine-tune the linear layer, Assumption 3.5 may not hold very well. Therefore, the reviewer wonders how much this term contributes to the error of the proposed bound. Although the authors provide a demonstration in Appendix E, the reviewer thinks that using the corollary to demonstrate the assumption is not straightforward. Additionally, the experimental results in the appendix are not positive: only 25.7% of experiments support the assumption. The reviewer wonders if this means that for 74.3% of experiments, the assumption is violated and the proposed bound is 'vacuous' from the perspective of theory. 2. To better demonstrate the assumption, the reviewer suggests the authors plot the relation between $\hat \Delta(\hat h, y^*)$ and $\hat \Delta(\hat h, h')$. The reviewer is aware that $\hat \Delta(\hat h, y^*) > \hat \Delta(\hat h, h')$ does not directly lead to $\Delta(\hat h, y^*) > \Delta(\hat h, h')$, but such a plot can help the reviewer identify whether the assumption holds in a reasonable setting. --- Reply to Comment 1.1.1: Title: Responding to new items Comment: Thanks for replying to our rebuttal!

> "the safest evaluation is to label the data…find an application that: a. safety is essential, b. unlabeled OOD data is easy to be collected, c. labeling is expensive even considering the safety requirement."

We agree that these conditions (a, b, and c) are relevant. In fact, **we think it is hard to argue that the examples we gave in our original response, such as self-driving cars and medical diagnoses, do not already satisfy all of the above points.** **We use the case of medical diagnosis under hospital locale shift to show how a, b and c are satisfied.** (a) is satisfied as safety is essential in medical diagnosis applications; and (b) unlabeled patient data is easily available for a target hospital.
Moreover, labeling medical data can be *very expensive* and needs expert human input. It’s also common for **the true diagnosis to be unknown,** or to see a shift **at test-time, when labeling data is impossible. So (c) is satisfied as well.** Finally, suppose safety on a particular task is important enough to warrant the labeling cost. **If our method guarantees high accuracy on one shift, we can avoid the cost of labeling it and instead label a different one.** This clearly shows the value of a *bound* rather than just an estimate. If a method is good on average but sometimes fails completely, is that really a method we should rely on? > “It is not hard to make the other baseline methods non-vacuous. [It] can be derived theoretically for every metric based on concentration inequality.” **We believe this claim is incorrect.** To clarify terminology, when you write “non-vacuous”, do you mean “guaranteed”? The baselines here are error *estimates*---as they do not bound the error like our method does, the term “non-vacuous” does not apply to them. Next, when you suggest this approach, **what quantity are you claiming will concentrate? Estimates by existing methods will not concentrate around the *true* test error; they will just concentrate around their *expected prediction*.** If this expected prediction is incorrect, no amount of data will cause them to give a valid bound. Thus, we do not believe simply adding a concentration term could allow prior methods to give true error bounds. **To clarify here, could you please state this claim in a more mathematically precise way? We would be happy to discuss this further.** > “some baseline methods can achieve around 90% coverage while the MAE is still smaller than the proposed method.” There’s a key distinction here between *known, a priori bounds* and *post-hoc evaluation, reported for comparison*. 
Appendix D shows that other methods can get reasonable coverage (but not at the desired rate) *if we know exactly the correct shift/scale ahead of time.* Put another way, **these values represent test data leakage; they are only to show the "ideal" baseline (which we still often beat). It is not valid to cherry-pick the best performing item in the group.** If we try a baseline with fifty hyperparameter settings and a few of them do better, it would not be correct to say that that method is just as good—**in a *real* OOD setting, we would not know which setting to use, or whether the desired rate $\delta$ would be satisfied.** In contrast, our bound is valid every time without modifications. > “Although the authors provide a demonstration in Appendix E, the reviewer thinks that using the corollary to demonstrate the assumption is not straightforward” We are not sure we understand this statement. **Our corollary *proves* that the Assumption holds with very high probability on 25% of the datasets we evaluate on.** Could you clarify what you mean when you say this is “not straightforward”? > “The reviewer wonders if this means that for 74.3% of experiments, the assumption is violated and the proposed bound is 'vacuous' in the perspective of theory.” **We address this in detail in the paper. In the paragraph immediately after the one you are referencing (line 729), we wrote:** “Note that the fact that the bound is *not* violated for a given shift does not at all imply that the assumption is not true.” Also, **it seems like you may be using “vacuous” synonymously with “valid”. Please note that these are very different terms.** To clarify: A “valid” bound is one that is *correct*. A “vacuous” bound is one that is *trivial* (greater than the obvious bound of 1). A vacuous bound is always valid. The challenge is in giving *valid, **non**-vacuous* bounds, as our work does. 
> “the reviewer suggests the authors plot the relation between $\hat\Delta(\hat h, y^\*)$ and $\hat\Delta(\hat h, h’)$” **We *have* plotted this; it is in our response pdf. Our paper also already includes a similar plot, Fig. 4a.** In the last plot in our pdf, we plot “drop in accuracy” (i.e., $\hat\Delta(\hat h, y^\*)$) vs. “predicted drop in accuracy” (i.e., $\hat\Delta(\hat h, h’)$). **As you can see, the assumption holds consistently.**
Summary: The paper proposes a method that under certain assumptions provides upper bounds on the accuracy under distribution shift when provided with unlabelled test data from the shift's target distribution. Strengths: The assumptions necessary to obtain the bounds are clearly stated and discussed. The problem of estimating performance under distribution shift is important, and it is not clear whether meaningful bounds are possible. As such, the attempt to provide almost-provable bounds in the way it is done here could be a good contribution in this direction. The experimental evaluations are quite extensive and reasonable. Particularly the careful examinations of when the assumptions are fulfilled are important and support these assumptions. Weaknesses: The method strongly depends on the function class in which the optimal critic is searched for, and potentially also on its approximation quality within that class. This means that when one allows a perfect critic to be found, which intuitively looks like it should be allowed since such a critic definitely exists if the source and target distributions have smaller overlap than the source test error, the obtained bounds are vacuous. On the other hand, one cannot know a priori whether a function class is large enough to contain a critic that satisfies Assumption 3.3 and thus provides a trustworthy guaranteed bound. These issues are clearly stated and discussed, and it is shown that the necessary assumptions easily hold when considering linear functions on features for standard networks and benchmarks. However, it is not clear how reliably the method could be transferred to new, potentially less regular datasets and/or models. The bounds after all rely on assumptions that can only be confirmed empirically after the true labels of the target distribution are known.
Compared to other methods, the proposed $Dis^2$ method produces estimates of the accuracy on the target distribution that are worse than those of simple baselines. This means that the issues with the provability of the bounds mentioned above are important weaknesses. In l. 215, it is not clear that the logistic surrogate is still a valid approximation after it is combined with the disagreement logistic loss. The combination of the two losses should be discussed, as the sum looks straightforward but has, as far as I can see, no intrinsic reason to be natural. There is balancing done by leaving out the $1/\log|Y|$ term, which seems very ad hoc. Other combinations, e.g. a sum of squares or other powers of the individual losses, would appear just as valid. Would it make sense (for certain function classes) to do a constrained optimization of the disagreement on the target data while fixing the decisions on the source (i.e. $\epsilon_S(\hat{h}, h') = 0$)? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why is Method [1] (Agreement-on-the-line) not compared to in the evaluations? This should be possible after it is fit on some other shifted distributions. Certain assumptions on the distribution shift seem to be necessary, besides the discussed model-specific settings. For example, the bounds would not hold for a shift with similar images but switched class labels. Can such extreme cases be formalized and excluded? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Whether the necessary assumptions hold is extensively discussed and tested for standard distribution shifts. It is however not clear to what kinds of situations one can or cannot expect the method to generalize.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! We address your comments below. > **The bounds…rely on assumptions…confirmed empirically after the true labels…are known** You’re correct that we can’t confirm our assumption until the true labels are known. **It’s important to remember that this is true of *every* method, including the baselines.** The baselines also rely on assumptions—**their failures imply these assumptions are often not met.** The difference is that their assumptions are *unstated and unknown*, so it’s impossible to even *guess* whether they will work. Instead, these papers use experiments to show their methods work—most of the time. But when they fail (which is not uncommon), **there’s no way to anticipate it.** We present experiments to demonstrate that our method is consistently reliable and accurate. But we also explicitly state a simple condition for guaranteed validity **and this allows us to identify failure cases a priori** (a very useful property). Since the focus of this paper is on *reliable bounds*, this represents a substantial improvement over prior work. Please see our overall response for further discussion. > **not clear how reliably the method transfers to new, potentially less regular datasets and/or models** As it’s impossible to evaluate on all datasets/models, **this statement is true for *every* ML method.** It is standard to evaluate on a wide set of benchmarks, with the hope that observations transfer. > **the proposed method…worse than baselines…provability of the bounds mentioned are important weaknesses** We agree that absent “true” provability, average accuracy is more important. But **a *true* guaranteed bound under shift is impossible without assumptions.** As we wrote above, the baselines we compare to all rely on *unknown* assumptions which could fail at any time. Further, all prior bounds are vacuous in practice.
**The only way to be really, truly certain of a reliable error bound is to predict 100\% error.** We hope you agree that we should aspire to do better than that! This means that “reliability” is a continuum—in a given setting, we decide how *important* a valid bound is and choose the appropriate method. If “reliable bound” is the goal, **our method represents substantial progress.** Extensive experiments show $\text{DIS}^2$ consistently gives valid bounds. It does so while allowing us to identify failure cases a priori, at a small cost to overall accuracy. Furthermore, we can give relaxed bounds along the accuracy/reliability pareto frontier—**this is a very useful feature which other methods lack.** If reliability is not at all important, then it could make sense to use other methods; we tried to clearly convey this point several times throughout the paper. **But such a setting is not the focus of this work.** > **Why is Agreement-on-the-line not compared to?** We don’t compare to that method as the cost is substantially higher than just training the base model (it requires training many models for *each* distribution shift). In this work we consider methods with negligible overhead beyond training the model being evaluated. However, note that the pattern of severely underestimating test error is present in that method (e.g. the rightmost plots in Figure 1 of that paper). > **Certain assumptions on the distribution shift seem to be necessary, besides the discussed model-specific settings. For example…with similar images but switched class labels. Can such extreme cases be formalized and excluded?** We want to make sure we are understanding this point correctly. Are you saying that **(i)** this example implies assumptions on the distribution are necessary **in addition to** our existing assumption; or **(ii)** simply that one could make a more “fine-grained” assumption to exclude settings such as your example? 
If you mean **(i)**, then we want to clarify that **our bound does not need any additional conditions—Assumption 3.3 already handles this case.** Recall, Assumption 3.3 states that $\epsilon_T(\hat h, y^\* ) - \epsilon_S(\hat h, y^\*) \leq \epsilon_T(\hat h, h^\* ) - \epsilon_S(\hat h, h^\*)$ where $\hat h$ is the classifier we are evaluating, $y^\*$ is the true labeling function, and $h^\*$ is the critic that maximizes the disagreement discrepancy. In the setting you’ve described, there are two possibilities: either (a) the capacity of $\mathcal{H}$ is large enough to express a critic $h^\*$ which achieves larger discrepancy than $y^\*$, or (b) no such critic exists in $\mathcal{H}$. If (a), $\text{DIS}^2$ will predict test error close (perhaps equal) to 1. **This bound will be valid *and* very close to the true error.** If we are in setting (b), then the assumption is not met (note that (b) is exactly the negation of (a), and (a) is exactly Assumption 3.3). If you meant **(ii)**, note that our assumption *could* be made more mathematically precise as you are suggesting, but only by making additional distributional assumptions. Since our intent was to use the weakest assumption possible, we did not go this route. > **Logistic loss** Certainly, other combinations could work; since the precise loss was not our main focus, we didn’t extensively explore other options. The loss we derived works well—better than prior works. Designing alternatives could be an interesting follow-up. Leaving out the $\log Y$ term was intentional: since the features are optimized for “agreement” (the data is separable according to $\hat h$), equal weighting would give an unbalanced objective. We observed this in practice, so we rescaled. We did consider constrained optimization! Some algorithms have been designed for this in the fairness literature [1].
Unfortunately they only work for binary classification; we spent some time trying to extend it to multiclass but didn’t want to go too far down that path. [1] “Predictive Multiplicity in Classification.” Marx et al. 2020 --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the detailed responses to all points of my review! > We present experiments to demonstrate that our method is consistently reliable and accurate. From my understanding of when the provability of the bounds could be useful, these experimental results mostly confirm that the margin is large enough that underestimation is unlikely, but, being experiments, cannot really confirm the provability. > But we also explicitly state a simple condition for guaranteed validity and this allows us to identify failure cases a priori (a very useful property). From my understanding, identifying the failure cases a priori relies on confirming the condition a priori, which is not possible. Do you mean something different here, or am I confusing something? > transferability to new scenarios The reason I mentioned this point is that the method is supposed to give provable bounds. That the reliability of the provable bounds, when they are produced, cannot be assumed is, in my opinion, a drawback that differs from other "no-free-lunch" situations, and for me strongly limits the meaning of provability. > The only way to be really, truly certain of a reliable error bound is to predict 100% error. We hope you agree that we should aspire to do better than that! I agree that such a bound would not be informative. However, I am not convinced that the bounds calculated in this paper satisfy a useful notion of provable bound, since they rely on assumptions that are uncheckable at the moment the bound is computed and used.
The point why I mention this as a weakness is that **if** one doesn't use the bounds, then the method is weaker than previous ones, which is an important point to me when evaluating the paper, even if the point doesn't contradict what is written in the paper, which properly discusses it. > logistic loss My point here is that the justification from statistical learning theory for using the softmax convex surrogate makes sense if one optimizes it as the individual loss. Choosing a different surrogate for both parts still gives a convex function, but it might have a different minimum than the sum of the original losses. The weighting question is basically almost the same point and can be (and maybe is) used to actively choose a combination with a minimum close to that of the original sum. However, while a theoretically precisely justified surrogate loss would be nice and strengthen the paper, I do not see this as a major weakness since the chosen loss works well enough empirically and could be improved in future works. --- I still do not see the immediate usefulness of guarantees that cannot be given without knowing measured results that could trivially be used to calculate even stronger guarantees / exact error estimations. However, while the assumptions cannot be verified a priori, as the authors state, these assumptions, while unjudgeable in that sense, are very weak and do not incorporate concrete knowledge on e.g. distributions. Thus I can see that there might be scenarios where they can be checked or be assumed with some concrete probability. This makes it plausible that the paper could be helpful for future works or situations where one has good prior reasons to assume that the assumptions hold. For the statements and methods that the paper provides, the technical and experimental treatment is good. I'm updating my current score from 4 to 6.
--- Reply to Comment 1.1.1: Comment: **Thanks so much for taking the time to read our rebuttal and update your review!** We’re happy to further clarify the first two points: > “these experimental results mostly confirm that the margin is large enough that underestimation is unlikely, but, being experiments, cannot really confirm the provability.” You are absolutely correct that these experiments cannot prove the provability. We hope we did not give you the impression that we are arguing this point :-) Our emphasis here was on the consistency of these results: whereas other methods frequently overestimate the test accuracy, our bound is (empirically) almost always valid. In the same way that the average accuracy of prior methods is taken as evidence that they will *probably* be reasonably accurate in the future, we feel this supports the idea that our method is reasonably likely to give valid bounds—though of course not guaranteed, hence “almost”. Also note that conditioned on overestimation (i.e., if we penalize overconfidence but not underconfidence), our method does substantially outperform prior work. > “From my understanding, identifying the failure cases a priori relies on confirming the condition a priori, which is not possible. Do you mean something different here, or am I confusing something?” Sorry for the confusion. What we meant here is that because we are able to explicitly and simply state the necessary condition for success, we can often reason about whether the condition is likely to hold in a given scenario, even without knowing the labels. As an example of this, we discuss in Section 3.1 how the use of a domain-adversarial (DA) representation could be expected to violate the assumption *a priori*, precisely because the regularization term in DA methods implicitly minimizes $\Delta(h, h^\*)$. Thus, even without seeing the data we would know not to expect a valid bound in this setting (though interestingly, our method does get much better MAE here).
**Thanks again for your efforts in reviewing and staying involved during this discussion. Please let us know if you have any remaining questions.**
Summary: The paper aims to propose a way to characterize the error bounds under distribution shift with provable error guarantees. Based on several assumptions, the authors show that the target error bound can be reasonably estimated by maximizing agreement with the desired classifier on the source distribution and maximizing disagreement on the target distribution, i.e. $DIS^2$. The authors justified the validity of $DIS^2$ by comparing it to other existing methods both from theoretical and empirical perspectives, and showed that it successfully upper bounds the error under distribution shift. The authors also discussed to what extent the assumptions hold in practice with empirical findings. Strengths: The problem of justifying the validity of trained models under distribution shift is important in modern machine learning. Prior work based on data-dependent uniform convergence or on assumptions about how distributions shift is relatively vacuous in practice. The authors propose a method to characterize the error bound of the trained model on the target distribution that is theoretically sound, with assumptions that are almost empirically verifiable, making it significant progress in this area. Weaknesses: 1. Although empirical evidence shows that both Assumptions 3.3 and 3.5 are reasonable, the observations are still based on limited data. There could be cases when the assumptions fail to hold. The paper discussed a setting (domain-adversarial learning) where the proposed method $DIS^2$ may be invalid. It is still unclear to me when the assumptions do or do not hold in practice. From a theoretical perspective, is it related to the representativeness of the hypothesis class H? On the empirical side, although except for the adversarial learning case the target function is rarely the worst case, I still feel there could be a possibility. After all that is the whole point of worst-case analysis.
Hence, this point of view needs to be supported by more substantive evidence, probably based on real data. 2. The representativeness of the hypothesis class is partly based on how many features the algorithm uses in training. There is a section talking about how the number of features affects the value of the error bound (the logits). It would be interesting to discuss these empirical findings together with hypothesis class representation. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the limitations have been adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thorough review! We appreciate that you have recommended acceptance, and that you consider the presentation and contribution of this work to be of high quality. > **“There could be cases when the assumptions fail to hold”** We agree that there are cases where the assumption might not hold. The best we can do in practice is to test our method in a wide variety of settings. We emphasize that **explicitly stating our assumption is strictly superior to not giving any guarantees at all.** Prior methods also make assumptions, implicitly. So, there could just as easily be settings where *their* assumptions fail to hold, but that would be harder to anticipate since *we don’t know what those assumptions are.* > **”is it related to the representativeness of the hypothesis class H?”** The validity of the assumption depends on the *interaction* between the capacity of $\mathcal{H}$, the predictions of $\hat h$, and the true labeling function $y^\*$. This cannot be made more mathematically precise without making additional distributional assumptions. Since our intent was to use the weakest assumption possible, we did not go this route. (Very) roughly, you can think of it as assuming that $\mathcal{H}$ contains a function “somewhat close” to $y^*$ (e.g., if $y^\* \in \mathcal{H}$ then Assumption 3.3 is immediately satisfied). > **”discuss these empirical findings together with hypothesis class representation”** We are not entirely sure we understand you here. Are you suggesting that we move the empirical results on reducing the number of features (i.e., Figures C.4/C.5 in the Appendix) to this section in the main body? Or add some additional discussion there? If you could please clarify we will be happy to flesh out the writing as necessary. > **”although…the target function is rarely the worst case, I still feel there could be a possibility. After all that is the whole point of worst-case analysis”** We totally agree. 
**But all prior methods give vacuous error bounds under shift.** Fundamentally, any guarantee is going to require some assumption. Since all prior assumptions are too weak to give non-vacuous bounds, we need to explore the correct way of strengthening them to make any progress. This work represents one such attempt. You wrote that we haven’t provided real evidence that the worst case doesn’t actually happen. We believe the fact that our method consistently works *is* data-based evidence for this claim. Further, as we remark in the footnote on page 5, **the “true” worst case would be precisely 0\% test accuracy.** As far as we are aware, this doesn’t happen in reality. Our extensive experiments imply that the assumption holds in practice, which is the most anyone could hope for. If someone doesn’t believe it will hold, that’s fine—but if they want to say anything meaningful at all, they’ll need to choose some *other* assumption to make. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer TEVz Comment: Thank you very much for the detailed response. I agree that it is reasonable to make assumptions for proving better bounds and to test how realistic they are with empirical evidence. I still feel the term "somewhat close to $y^*$" is not very precise. It would be better if the authors could support this argument with specific metrics, say the $L_2$ distance between different hypotheses. In my understanding, the empirical finding of how reducing the number of features affects the error bounds is theoretically equivalent to how restricting the representativeness of the hypothesis class affects the error bounds. Say you just output the top 10 PCs: you only output hypotheses from a restricted hypothesis class (by restricting it to the subspace spanned by these 10 PCs). Hence, it becomes harder to ensure that there exists a function in $\mathcal{H}$ "close enough" to $y^*$. This is related to the hypothesis class representation analysis.
Summary: This paper provides a new error bound under distribution shift based on a notion called disagreement discrepancy. By assuming that the model class has enough expressiveness, it is theoretically proven that the error can be bounded by the worst-case disagreement discrepancy. Empirical investigation shows that such a method provides valid error bounds. Strengths: 1. The proposed method is novel and interesting. The paper does great work in demonstrating why such a measure is better than $\mathcal{H}-$ and $\mathcal{H}\Delta \mathcal{H}$-divergence. 2. This paper provides exhaustive experiments to test the performance of the proposed measure. Weaknesses: 1. The writing of this paper can be further polished. For example, the authors can briefly introduce $\mathcal{H}-$ and $\mathcal{H}\Delta \mathcal{H}$-divergence in Section 3. 2. According to the experiments, the proposed measure is not better than existing baselines. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In line 246, what does “the logits output by $\hat{h}$” mean? 2. The method bounds the disagreement discrepancy between $h$ and the true label by the worst-case disagreement discrepancy with respect to $h$. However, when the model class is highly expressive, how can we expect the worst-case bound to be tight? 3. Does the method preserve the order of error? That is, if the error of model A is larger than that of model B, can we observe that the Dis^2 of model A is also larger than that of model B? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to “weakness”.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! To address your points: > **“briefly introduce $\mathcal{H}$- and $\mathcal{H}\Delta\mathcal{H}$-divergence”** We do roughly describe them in Section 3.1, but we will add a more detailed description with the additional space. You’ve stated the writing could use improvement—since this is just one example, **could you indicate anywhere else you believe the writing could be improved?** We would very much like to present this work clearly. > **”the proposed measure is not better than existing baselines.”** We acknowledge this point and we tried to clearly convey it several times throughout the paper. **But we do not believe that this constitutes a weakness of our method**. The goal of our approach is to give valid, non-vacuous error *bounds*, with raw accuracy being a secondary objective. We discuss this in more detail in our overall response to all reviewers. > **what does “the logits output by $\hat h$” mean?** Recall that $\hat h$ is the last layer of the network, which linearly transforms the features $\phi(x)$ into logits $h^\top \phi(x)$ to then produce a vector of probabilities via softmax. We are saying that instead of optimizing the critic on the features $\phi(x)$, we can also optimize directly on the logits $h^\top \phi(x)$ (if there are $k$ classes, the logit space will be $k$-dimensional). > **”how can we expect the worst-case bound to be tight?”** This is an important point and is key to understanding our bound. In fact, assuming the validity condition is met, **our method will give a bound that is exactly as tight as possible. If it gives a loose bound, it means it would be *improper* to give a bound that is any tighter.** We use the term “improper” here to distinguish from “incorrect”. Any bound that is not an equality can of course be tightened while remaining correct. But in this setting, we have *no knowledge* of the distribution shift beyond unlabeled data.
Thus, for a given critic which implies large test error, there is no justification for the conclusion that this critic is less likely to be correct than the network itself. And if it *were* the correct function, it would imply the reported error on the test distribution. Therefore, we cannot rule out the possibility that our network has this error, and so it is “improper” to output a bound that is any tighter. **This is precisely the idea conveyed by Figure 2(c).** Recall that we don’t have labels for the test set (red triangles). Therefore, **both $y^\* = \hat h$ and $y^\* = y_3^\*$ can be considered equally plausible,** because they both perfectly match the training data. Here, $\text{DIS}^2$ would return a bound of 0.5. In one possible universe, we have $y^\* = y_3^\*$, then $\hat h$ will have test error 0.5 and our bound will be exact. In another universe our bound may be loose. **But we cannot distinguish between these universes without labels for $\mathcal{T}$**, and therefore it would be “improper” to output a bound tighter than 0.5. We hope the above discussion is clear! > **”Does the method preserve the order of error?”** Great question! The answer can be deduced from Figure 1: “preserving error order” is equivalent to points further to the right on the X-axis also being further to the top on the Y-axis. We see that $\text{DIS}^2$ does approximately preserve order—there are a few points with a different rank than would be predicted (other methods exhibit the same), but overall we see the desired pattern. However, we also observe that Figure 1 omits an additional factor, namely source accuracy. So one thing we realize would be useful to report is whether $\text{DIS}^2$ and other methods preserve order in estimating the *drop in accuracy* from source to target. We plotted the result in our pdf response and it looks quite similar to Figure 1—all methods are reasonably order-preserving, with occasional outliers. Thanks for bringing this up! 
**We hope our response above (and our general response to all reviewers) answers your questions and that you will consider improving your recommendation. Please let us know if you have any additional concerns.** --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I will keep my score.
Rebuttal 1: Rebuttal: Thanks to all reviewers for their helpful comments and suggestions! We worked very hard to present this work as clearly as possible, so we are glad to hear that **reviewers xsfT, rQbE, TEVz, and uhaU all found the writing clear and easy to follow (even “enjoyable to read”!)** A few reviewers pointed out that our method does not match existing baselines on Mean Absolute Error (MAE). We want to emphasize that **the primary focus of this work is on giving valid, non-vacuous error *bounds*** (that is the title of the paper, after all!), with accurate prediction being secondary, though still important. We believe we were as upfront as possible about this in our writeup. Indeed, **as early as the abstract we state that we do not beat the baselines purely on MAE.** Where our method does outperform existing methods is reliability and conditional MAE, *and it does so by a huge margin.* We think that in safety-critical settings where reliability is important, **a minor drop in overall accuracy is a small price to pay for substantially more reliable error bounds.** This is precisely what our method offers. Consider a scenario where a model will be making crucial decisions: self-driving cars, medical diagnoses, etc. If we deploy this model expecting 80% accuracy and its actual accuracy is 20%, the result could be very costly. In this setting, a valid error bound is *essential* and accuracy of an error prediction only matters if we can trust it. Here the baselines do *very poorly* and our method substantially improves on SOTA—to our knowledge it is the first to give non-vacuous bounds. **If you agree that there are settings where average accuracy is secondary to reliability,** then we see no reason that having slightly worse accuracy should be considered a weakness, given that our method has substantially better coverage.
The main takeaway from our experimental results is that if we *only* care about accuracy, other methods might be preferable, as we state throughout the paper. **But if we are at all worried about overestimating test accuracy then coverage becomes important—and $\text{DIS}^2$ gets *much* better coverage, and has much lower conditional MAE.** We discuss this point further in Appendix D, where we even attempt to *strengthen the baselines*—but we find that our method is still best. Our response pdf includes an updated Table 1 with standard errors and Figure 1 stratified by training method, as requested by reviewer **xsfT**. The other plot depicts estimated vs. true *drop in accuracy* (as opposed to test accuracy), as suggested by reviewer **JSAn**. Pdf: /pdf/2a08b65b9de0a2a228a2797d65eaccf6a6cf8ce8.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper describes a novel method to estimate the error of a classifier under a shift of distribution. It relies on finding a worst case "critic" classifier which maximizes disagreement on some unlabeled target domain and hence bounds the disagreement with the true labeling function. The authors apply several necessary tricks to make the bound tractable and non-vacuous: (i) they introduce a new disagreement loss which can be efficiently maximized to find the critic and (ii) they only consider linear critics. While those two aspects make it possible that computed bounds are violated, in practice they show their bound is competitive in terms of mean absolute error while providing a better coverage than other methods. Strengths: Clarity: I found the paper well written and enjoyable to read. Presenting complex ideas in simple terms is one of the contributions of this work. Related works are cited adequately; I would only suggest adding [1], which also seems relevant. Originality: While I find the insights and derivation not entirely novel---the derivation in [2] also relies on the same idea of bounding disagreement on the target domain with a worst-case critic while agreeing on the source domain---using those with the goal to provide an error bound on the target distribution is novel to me. Furthermore, the authors propose a novel disagreement loss. Significance: The proposed approach is simple to implement yet competitive; I can see it being useful to the community. References: [1] Predicting Out-of-Distribution Error with the Projection Norm (https://arxiv.org/abs/2202.05834) [2] Agree to Disagree: Diversity through Disagreement for Better Transferability (https://arxiv.org/abs/2202.04414) Weaknesses: While they are an extreme case, datasets with completely spurious correlations such as the ones in [3,4,2] might be interesting to consider. For those datasets, the best critic is actually $y^*$. 
You can then see if your approach recovers the right critic despite your added linearity constraint. As those datasets are shown in [4] to be a failure case when relying on logits, this might fail unless other features are used. References: [3] Evading the Simplicity Bias: Training a Diverse Set of Models Discovers Solutions with Superior OOD Generalization (https://arxiv.org/abs/2105.05612) [4] Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations (https://arxiv.org/abs/2204.02937) Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Limitations of the approach are mentioned adequately in various parts of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks so much for your thoughtful feedback! **We are very happy to hear that you appreciated the presentation; we took great pains to present this work as clearly as possible.** We will add the suggested reference [1] to the related work section. We remark that it would not make sense to include this work as a baseline—despite the title, our understanding is that that method does not actually predict error, but rather ranks models to enable model selection. Thanks for the additional datasets; these will certainly contribute to a better understanding of the resulting critic. Note that the best critic is a function of the *classifier accuracy*, not the dataset itself. That is, unless the classifier has 0\% train error and 100\% test error, it is not the case that the best critic $h^* = y^*$. You are correct that with the linearity constraint we may not recover the best critic, but the advantage of our approach is that *we don’t need to recover $y^\*$*, which means that *we don’t need the true $y^\*$ to be linear to give a valid bound!* We agree that the derivation is not completely new, and we are careful to properly attribute prior work: we emphasize the connection to $\mathcal{H}\Delta\mathcal{H}$-divergence at several points. On the other hand, **we do feel that the use of this idea to give distribution-free non-vacuous error bounds on modern deep networks under distribution shift**—the first such result, that we are aware of—is quite noteworthy. **Thanks again for your helpful suggestions. We hope you will advocate for this work during the reviewer discussion.**
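The worst-case-critic argument in the exchange above can be made concrete with a small numeric sketch. Everything below is illustrative, not the authors' implementation: the toy features, predictions, and the random search standing in for the paper's convex surrogate optimization are all assumptions. The idea shown is only that target error is bounded by source error plus the largest disagreement gap any linear critic achieves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for frozen features: a source domain S and a shifted target T,
# plus the hard predictions of a fixed binary classifier h on each.
Xs = rng.normal(0.0, 1.0, size=(200, 2))
Xt = rng.normal(1.5, 1.0, size=(200, 2))
h_s = (Xs[:, 0] > 0).astype(int)
h_t = (Xt[:, 0] > 0).astype(int)

def disagreement_discrepancy(w, b):
    """Gap between a linear critic's disagreement with h on T vs. on S."""
    c_s = (Xs @ w + b > 0).astype(int)
    c_t = (Xt @ w + b > 0).astype(int)
    return np.mean(c_t != h_t) - np.mean(c_s != h_s)

# Seed the search with h itself (gap exactly 0), then try random linear
# critics in place of the convex-surrogate optimization the rebuttal describes.
best = disagreement_discrepancy(np.array([1.0, 0.0]), 0.0)
for _ in range(2000):
    best = max(best, disagreement_discrepancy(rng.normal(size=2), rng.normal()))

# Bound in the spirit of the paper: err_T(h) <= err_S(h) + max discrepancy
# (concentration terms omitted; err_S taken as 0 for this toy example).
bound = 0.0 + best
```

Note the asymmetry the rebuttal stresses: the critic only needs to be *found*, not to equal the true labeling function, so restricting the search to linear critics over frozen features keeps the optimization cheap; validity rests on the paper's assumption about the hypothesis class, not on recovering $y^*$.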
Summary: The authors propose a novel approach to estimate error bounds for deep neural networks under distribution shift. The proposed bounds remedy the limitations of existing bounds that either provide vacuous bounds or underestimate the error for a big fraction of shifts while also often requiring access to test labels. In contrast, the bounds proposed by the authors only use unlabeled test data under a simple, intuitive assumption about the hypothesis class's ability to achieve small loss on the unlabeled train and test distributions. More specifically, their approach involves optimizing a disagreement discrepancy between two classifiers using a novel disagreement loss. The experimental results show that the obtained bounds are non-vacuous and comparable to existing competitive baselines. Strengths: - The presentation is clear and the paper is overall well-written. The notation is also introduced properly and makes the theory easy to follow. - The limitations of previous works are also clearly stated and provide a good motivation for this work. - One of the main promises of this approach is allowing the user to interpolate between robustness and accuracy depending on the preferred level of risk tolerance. - The approach is overall simple and leads to non-vacuous bounds without using test labels. Weaknesses: - The presentation of the results is the main drawback of this work for me! The choice of aggregating all the results over all datasets, shifts, and training methods is strange and does not allow an objective evaluation of the performance of the bounds and analysis of their drawbacks and limitations for the different scenarios. It is possible that the proposed bounds underperform severely on most of the datasets, shifts and training methods and perform exceptionally well on a subset of them. It is also concerning that even the appendix does not include the detailed evaluation for each dataset and shift pair separately. 
- Additionally, we see from Table 1 that the only scenario where the proposed bounds outperform existing baselines is if the concentration term in Theorem 3.6 is dropped at the expense of coverage. Hence, a more detailed and transparent presentation of the results is crucial to verify the claims made by the authors. - Including standard errors in Table 1 would also be crucial to verifying the validity of the results given the wide range of scenarios for which the performance was aggregated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How does the calculation of the bound scale with the size of the dataset and model architecture in practice? - In which cases would you expect your bound to be vacuous? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors identified a setting where their proposed bounds may be invalid. The above questions and feedback may help identify more limitations of this work to be explicitly discussed in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your feedback! You’ve stated that you are mostly concerned with missing experimental results, which we are happy to add. **We put preliminary results in our pdf response, which we'll expand on in the paper.** We want to briefly explain why these results were not originally included. You believe we may be misrepresenting test error *prediction*, but we emphasize that **the focus of this work is on giving valid, non-vacuous *bounds***, with accuracy being secondary. Our experimental results are meant to emphasize coverage, but also show that we get competitive MAE. > **“choice of aggregating all the results”** We agree that aggregating can obscure information. Our pdf response includes a rough version which stratifies by training method. We observe that **the pattern of accurate error estimation is retained across individual strata.** Preliminary results across datasets indicate that $\text{DIS}^2$ does better or worse when other methods do: the worst is $.2293 \pm .062$ (for ATC: $.1736 \pm .039$; difficult to draw conclusions with large std errors), and the best is $.0766 \pm .023$ (ATC: $.0758 \pm .018$). We originally aggregated the results because **stratifying presents a *less informative* evaluation of bound validity. Without aggregation, there isn’t enough statistical power to reject the null hypothesis that our bound is valid at the chosen confidence level.** The only way to demonstrate that our method *doesn’t* work is to show a statistically significant difference between the confidence level and the observed bound violation rate (Figure 4b). By stratifying, we would not have enough samples (i.e., shifts) in each group to distinguish between a valid/invalid bound. > **“it is possible to underperform on most methods and perform well on a subset”** The results in the pdf show that this is not the case. This also seems unlikely in general since MAE is bounded in $[0,1]$, so the influence of outliers is limited. 
Note that this could *not* occur for validity bounds: if our method only gave valid bounds for a small subset, the enforced level $\delta$ would be violated. We hope this distinction is clear and that our new results convince you that our method predicts error robustly. As you’ve indicated that this is your main concern and that you otherwise like the paper, **please let us know if there are additional results you think are missing and we will be sure to include them.** > **“outperform existing baselines”** We again point out that *pure accuracy* is not the focus of this work. We present a non-vacuous error bound under shift, and we are careful to never claim more. **We state as early as the abstract that our method does not do better than predictive baselines.** Where our method *does* outperform existing methods is coverage/reliability. We are unsure how this presentation could be made more transparent. As you noted, $\text{DIS}^2$ matches/outperforms baselines when dropping the concentration term. **This is a *strict improvement*---the other methods provide no coverage guarantees at all!** Dropping the concentration term puts all methods on equal footing for comparing raw accuracy. When reliability is important, we would retain the concentration term. **But now it no longer makes sense to compare $\text{DIS}^2$ to prior work purely on the basis of MAE, because our method provides guarantees and the other methods do not.** Instead we consider coverage, and our results show that $\text{DIS}^2$ gets excellent coverage, while retaining competitive accuracy. When coverage is not important at all, other methods may be preferable, as we point out in the paper. We even go so far as to *strengthen the baselines* by exploring ways to improve their coverage without reducing accuracy (Appendix D). > **including standard errors** We’ve recreated Table 1 with standard errors in our pdf. 
Since “coverage” is a binary variable and we already report $\hat p$, the only missing data was $n=90$, which is available in the Appendix. Stratified across training methods there is not much variation in MAE: the worst is $.1589\pm .022$, and the best is $.1360\pm.024$. > **Bound calculation scaling** Calculating the bound is trivial. It takes ~5-10s on one GPU for the largest datasets with $n_{\mathcal{S}} + n_{\mathcal{T}} \approx$ 62K, less than a second for the smaller ones—this could be sped up with stochastic optimization. **It does not scale with the model architecture** because we are optimizing over the frozen features of the network—the cost to extract these features is one pass over the dataset, exactly the same as the cost to evaluate accuracy. The optimization is similarly cheap because it is a convex objective optimized over linear predictors. > **“When to expect the bound to be vacuous?”** **This is an important question and crucial to understanding what makes our method so much stronger than prior bounds.** For simplicity, consider an interpolating network with 100\% train accuracy. By definition, our method will give a vacuous bound when there is a linear critic which fully agrees on train and fully disagrees on test. **This is exactly when we would *want* to output a vacuous bound.** Knowing nothing about the test distribution a priori, we cannot conclude that this critic is less likely to be correct than the network itself. And if it *were* the correct function, it would imply 100\% test error. Therefore, we cannot rule out the possibility that our network has 100\% error, and so it is “correct” to output a vacuous bound (assuming our focus is on reliability, as in this work). In other words, the bound will be vacuous precisely when it is “correct” to output a vacuous bound (conservatively). We hope this explanation is clear. **We are committed to presenting this work transparently, and we will add the metrics you’ve requested. 
We hope this will convince you to improve your recommendation. Please let us know if you have any additional concerns.** --- Rebuttal Comment 1.1: Title: Following up Comment: Hi, as the discussion period nears the end we wanted to check in one more time to see if you are willing to reconsider your score in light of our response and the new results you asked for. You wrote that the presentation of the results was your main concern---we hope that the new info you requested such as stratified experimental results and standard errors helps to address that. Were there any other results that you felt would strengthen the paper? Even if we don't have time to add them now, we could always add them to the final version. Please let us know if you have any remaining questions and we will do our best to respond to them before the end of the discussion period!
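The statistical-power argument for aggregating across shifts in the rebuttal above can be illustrated with a one-sided binomial tail computation. The shift count $n = 90$ is taken from the rebuttal; the violation counts and the stratum size below are made up for illustration.

```python
from math import comb

def binom_tail(n, k, delta):
    """P(X >= k) for X ~ Binomial(n, delta): the chance of observing at least
    k bound violations if the bound really is valid at nominal level delta."""
    return sum(comb(n, i) * delta**i * (1 - delta)**(n - i)
               for i in range(k, n + 1))

delta = 0.05
p_aggregate = binom_tail(90, 7, delta)  # all 90 shifts pooled, 7 violations
p_stratum = binom_tail(15, 2, delta)    # one hypothetical stratum of 15 shifts
```

With only 15 shifts in a stratum, even an observed 13% violation rate gives a tail probability well above 0.05, so no single stratum can refute validity at level $\delta$; this is the sense in which stratifying is a less informative evaluation of coverage than pooling.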
Django: Detecting Trojans in Object Detection Models via Gaussian Focus Calibration
Accept (poster)
Summary: The paper proposes the first object detection backdoor detection framework Django (Detecting Trojans in Object Detection Models via Gaussian Focus Calibration). It leverages a dynamic Gaussian weighting scheme that prioritizes more vulnerable victim boxes and assigns appropriate coefficients to calibrate the optimization objective during trigger inversion. The experimental results show the superiority of Django over some state-of-the-art baselines. Strengths: The authors found that the poison effect can vary significantly across bounding boxes in object detection models due to its dense prediction nature, leading to an undesired optimization objective misalignment issue for existing trigger reverse-engineering methods. The authors propose a trigger inversion-based backdoor detection framework for object detection: DJANGO (Detecting Trojans in Object Detection Models via Gaussian Focus Calibration). It features a Gaussian Focus Loss to calibrate the misaligned loss during inversion by dynamically assigning weights to individual boxes based on their vulnerability. The authors claim that equipped with a label proposal pre-processor, DJANGO is able to quickly identify malicious victim-target labels and effectively invert the injected trigger that lies in the backdoored model. The experimental results show Django outperforms some state-of-the-art baselines on four metrics: Precision, Recall, ROC-AUC, and Average Scanning Overheads for each model. Weaknesses: Can the authors explain why the authors claim that Django is the first trigger inversion-based backdoor detection framework for object detection while there have also been many popular and effective trigger inversion backdoor scanning architectures/methods such as [51, 49, 15, 31]? Do you mean that the proposed framework is the first one for object detection? How about [3] mentioned in the paper? Inspired by the Focal loss [25], the authors propose Gaussian Focus Loss. 
However, it is unclear (i) how the proposed Gaussian Focus Loss works to solve object detection backdoor detection, and (ii) how the parts in this Gaussian Focus Loss interact to dynamically capture a set of vulnerable boxes that have not been flipped yet and assign a large coefficient to encourage the transition. The setting of the training process of the proposed framework and baselines is not described clearly. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the questions in the weaknesses section. Some further questions for improvements: What is the proposed trigger inversion-based backdoor detection framework? Can the authors show a visualization of a flow or a pipeline or an algorithm of this framework including the main components? Can the authors explain the setting of the training process (e.g., the model configurations and the values of any hyper-parameters used) of the proposed method and baselines? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations of the work are not mentioned in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Clarification of Django** We are sorry for the confusion. [51, 49, 15, 31] are initially designed for detecting backdoors in image classification models. According to our evaluation (main text, lines 268-291, Table 1), existing trigger inversion techniques all have limited performance when adapted to object detection models due to the misalignment issue presented in main text Sec 3.1. In BadDet [3], the proposed Detection Cleanse is a backdoor sample detection technique, which has a completely different threat model and goal from model-level backdoor detection. Please refer to a more detailed discussion in global response A1. To the best of our knowledge, the proposed Django is the first model-level trigger inversion-based backdoor detection framework tailored for deep learning object detection models. We will further clarify in the revision. --- **[W2] Intuition of Gaussian Focus Loss** Please refer to **global response A2**. --- **[W3&Q2] Model & Baseline configuration details** In Appendix A, we provide comprehensive details regarding the datasets and model architectures employed in this study. To provide further elucidation, it is important to note that the poison rate varies within the range of 0.1% to 8% across diverse models sourced from TrojAI r10 and r13. Similarly, the trigger size exhibits a range of 1x1 to 22x22, representing a scale of 0.001% to 0.7% relative to the input dimensions. Pertaining to the hyper-parameters utilized in model training, the learning rate is stochastically assigned, spanning from 1.56e-08 to 1e-04 across different models. The number of epochs for training spans from 6 to 100, while the batch size ranges from 4 to 32. As for the model performance metrics, the average clean mAP across the models attains a value of 0.7979, while the average poison mAP stands at 0.7680. We assess TrojAI models using polygon triggers, encompassing two distinct attack types: misclassification and evasion attacks. 
The trigger's polygonal structure is characterized by varying edge counts, ranging from 3 to 8. Furthermore, each individual trigger is endowed with randomly generated color and texture attributes. --- **[Q1] Workflow of Django** We extend our gratitude to the reviewer for highlighting the presentation issue in our paper. In response, we have introduced a comprehensive overview figure (rebuttal PDF Figure 2) during the rebuttal, which provides a visual representation of our Django framework. Django operates through two distinct stages: 1. Compromised Label Proposal via Backdoor Leakage: In this initial stage, we propose a lightweight screening algorithm that swiftly identifies a small subset of victim-target label pairs. This selection is grounded in the observation that the behavior of a poisoned model on victim samples tends to shift towards the target label, even in the absence of the backdoor trigger itself, a.k.a. backdoor leakage. Further elaboration can be found in Section 3.3 (main text, lines 223-233). 2. Trigger Inversion via Gaussian Focus Loss: The second stage involves trigger inversion, where each chosen label pair undergoes a precision-oriented process using our proposed Gaussian Focus Loss. This process precisely and dynamically captures a small fraction of compromised bounding boxes, assigning them larger coefficients when calculating the inversion objective function. The norm of the inverted trigger from each candidate pair serves as a determinant of the model's benignity. Additional insights are available in Section 3.2 (main text, lines 184-222). ------------ **[Q3] Hyperparameter settings and sensitivity** In Section 4.2 (lines 326-332), we discuss and evaluate the sensitivity of hyperparameters in the pre-processing procedure ($h$ and $\omega$), and the main results are shown in the main text Figure 5b. 
Recall that the values of $h$ and $\omega$ strike the balance between efficiency and effectiveness of the pre-processing procedure. We conducted experiments by setting $h$ from 1 to 10 and $\omega$ from 0.1 to 0.8. For each combination of $h$ and $\omega$, we recorded the True Positive Rate (the ratio of selecting ground-truth label pairs) and the corresponding number of selected pairs in total for each model architecture. Figure 5b illustrates the results. We observed that different model architectures require slightly different preprocessing hyper-parameter settings to achieve the optimal trade-off between efficiency and effectiveness. However, the True Positive Rate (TPR) quickly saturates after selecting a reasonably small portion of label pairs (less than 200 out of 2602 pairs for all architectures). To provide more detailed information, we will include specific values of $h$ and $\omega$ at each changing point of Figure 5b in the revised version. We also evaluate the Django performance under different settings of IoU threshold, initialization region size and score threshold. Please find more details in Appendix D. During the rebuttal, we further evaluate the impact of two hyperparameters (initial mean $\mu$ and variance $\sigma$) in Eq. 4 on Django. We randomly sample 8 models (4 trojan and 4 benign) for each architecture that was trained on the synthesized traffic sign dataset. Besides the default values we report in the paper ($\mu$=0.1, $\sigma$=2), we set 5 more groups of initial values and report the detection performance. As shown in rebuttal PDF Table 2, Django remains effective under different initializations. ------------ --- Rebuttal Comment 1.1: Title: Many thanks to the authors for answering almost all of my questions and concerns. Comment: I have two last questions and concerns as follows: Can the authors explain how the authors obtain the best model during the training process before using the best model for the evaluation phase? 
Please let me know if I miss something here. The source code has not been released, so it is quite hard to check the correctness and consistency of the implementation regarding the proposed model’s theory as well as the reproducibility of the experiments. I am willing to increase the score when the authors solve my above-mentioned questions and concerns. --- Reply to Comment 1.1.1: Title: Replies to followup questions Comment: **[Q1] Model training and evaluation** We appreciate the reviewer for raising the subsequent questions. We have identified two potential interpretations of the term 'model' in the question and have addressed them individually: 1. If 'model' refers to the subject models (poisoned and clean object detection models) used to evaluate our Django framework, we clarify that these models were pre-trained and obtained from TrojAI rounds 10 and 13. They were trained until convergence, meeting specific criteria such as clean mAP for benign models, poison mAP, and clean mAP for poisoned models. 2. If 'model' pertains to our proposed Django framework, we emphasize that, akin to other reverse-engineering based backdoor detection methods[49, 15, 51], Django is a non-parametric method that does not necessitate a training process. When provided with a model and a small set of clean samples, Django determines whether the model is poisoned by analyzing the size of the inverted trigger. For meta-classifier based backdoor detection approaches [17, 60], which require clean and poison models for training, we present their 5-fold cross-validation outcomes in the main text's Table 2. We intend to offer further clarity on this matter. Please let us know if we have misunderstood your questions. We are open to a thorough discussion. ---- **[Q2] Code availability** We promise to release all the source code to reproduce our experimental results upon publication.
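The Gaussian weighting idea described in the replies above can be sketched numerically. This is not the paper's exact Eq. 4: the weighting function and the per-box confidences below are our own illustrative stand-ins, with only the initial values $\mu$=0.1 and $\sigma$=2 taken from the rebuttal. The intended behavior is that boxes whose target-label confidence sits near $\mu$ (vulnerable but not yet flipped) receive the largest coefficients in the inversion objective.

```python
import numpy as np

def gaussian_focus_weights(p_target, mu=0.1, sigma=2.0):
    """Per-box coefficient: a Gaussian bump centered at confidence mu.
    mu/sigma here are the rebuttal's *initial* values; in the method they
    are updated dynamically during trigger inversion."""
    return np.exp(-((p_target - mu) ** 2) / (2.0 * sigma**2))

# Hypothetical target-class confidences for four candidate victim boxes,
# from "not yet flipped" (0.08) to "already flipped" (0.98).
p = np.array([0.08, 0.15, 0.55, 0.98])
w = gaussian_focus_weights(p)

# Weighted cross-entropy toward the target label: already-flipped boxes get
# a smaller coefficient than near-threshold ones, calibrating the objective.
loss = float(-(w * np.log(np.clip(p, 1e-8, None))).sum() / w.sum())
```

The design point this illustrates is the calibration the rebuttal describes: a plain cross-entropy sum over all boxes is dominated by the many boxes that are easy or already flipped, whereas the Gaussian coefficients concentrate the gradient on the small fraction of vulnerable boxes.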
Summary: This paper investigates the problem of detecting trojans in the context of object detection. The authors first observed that the poison effect can vary significantly across bounding boxes in object detection models due to its dense prediction nature, which leads to a misalignment issue for existing trigger reverse-engineering methods. To solve this problem, they proposed the Django framework built upon the Gaussian weighting scheme to prioritize more vulnerable victim boxes. Extensive empirical evaluations are conducted on several objects, datasets, and models. Strengths: - The problem of detecting backdoors in object detection is of sufficient interest to the NeurIPS community. - The paper is well-written and easy to follow. - The empirical evaluations over three objects, 16 detection image datasets, three model architectures, and two types of attacks are convincing. Weaknesses: - Why not consider non-trigger-inversion-based methods for detection? Regarding backdoor detection problems in the case of classification problems, many methods do not rely on inverting the trigger, e.g., STRIP. I am curious about the performance of applying these methods for detection. - Do you have any theoretical comprehension of the choice of Gaussian? How about some other options? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The different weighting schemes (of the poison effect) on the bounding boxes seem to resemble the observation in CNNs that neuron magnitudes differ between clean and backdoored inputs. Do you have any comments on this? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see my comments above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Comparison with non reverse-engineering based backdoor detection methods** Please refer to **global response A1**. ------ **[W2] Theoretical comprehension of the choice of gaussian** Please refer to **global response A2**. ------ **[Q1] Correlation between weight schemes and compromised neuron magnitude** We believe our findings are consistent with the disparity in neuron magnitudes observed between clean and backdoored samples in CNN models [29]. In the context of object detection models, we suspect that the elevated activation values of target labels within compromised bounding boxes can be attributed to the pronounced magnitudes of compromised neurons within intermediate layers. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for addressing my concerns. I am happy to maintain my positive rating.
Summary: This paper proposes a novel trigger detection method that works on object detection tasks. The proposed method leverages a novel Gaussian Focus Loss to calibrate the misaligned loss during trigger inversion. The evaluations presented in the paper demonstrate that the proposed method is effective in different settings, including against adaptive attacks. Strengths: 1. This paper has revealed and discussed a generalized phenomenon in object detection model trigger inversion, i.e., the misalignment of CE Loss and ASR. From my perspective, this finding can shed light on future works on defending against backdoor attacks in object detection tasks and also poses a good question for backdoor attacks in this setting. 2. The proposed method has leveraged the finding mentioned above and solved the challenges in applying it; I think the proposed method has good soundness. 3. The evaluations are comprehensive and the proposed method is effective. In addition, the computational cost is quite acceptable. 4. To sum up, the proposed method is simple yet effective while the discovered phenomenon can provide new perspectives in this field. Weaknesses: The limitations of the proposed method are not discussed in this paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper has not discussed the limitations. A potential limitation of the proposed method may lie in the performance against different types of triggers (e.g., dynamic triggers). 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] No limitation discussed** As discussed in the main text (lines 107-113), our primary focus in this paper is on attacks that use static polygon triggers, which are more feasible in real-world scenarios. How to effectively inject more complex attacks such as WaNet [38], DFST [7], and dynamic attacks [43] into object detection models is still an open question. We leave it to future work. We plan to enhance the clarity of our writing and will also include a section that highlights the limitations of our approach.
Summary: The key idea of this paper is to propose Django, an adaptation of the trigger reverse-engineering technique used for detecting backdoored classification models to object detection models. The paper shows that existing trigger reverse-engineering techniques are ineffective in object detection and finds that loss misalignment is the primary reason for their reduced effectiveness. The paper then proposes a new loss function that leverages Focal loss, a well-studied loss function in object detection, against compromised object detection models. In the evaluation, Django is more effective than the baseline methods in backdoor detection with less computational overhead. The paper also shows Django is effective against a few adaptive attacks. Strengths: 1. The paper proposes a backdoor detection method for object detection models. 2. The paper presents why existing detection (trigger reverse-engineering) methods are ineffective. 3. Django implements a novel loss function that addresses the problems identified in 2. 4. The paper runs an evaluation against existing techniques and adaptive attacks. I like the contribution of this paper proposing a backdoor detection method for object detection models. I don't think this is a groundbreaking contribution, but it is also not a trivial contribution. I believe Django could be a good baseline for backdoor detection in object detection. Weaknesses: 1. I expect the related work could be a bit more comprehensive in backdoor attacks and detection "in object detection." 2. The evaluation mostly compares the effectiveness against trigger reverse-engineering techniques; comprehensiveness would be nice. Related work > Since the paper addresses backdooring in object detection, the related work should discuss the backdoor attacks and defenses in object detection. As backdooring has been studied for a while, there are many backdoor attack papers and proposed defenses.
It would also be nice for readers to identify the novelty of this work much more clearly. Evaluation > I believe that there are multiple approaches to backdoor defenses (for example, I can think of a simple way like STRIP). Hence, I'd like to see a comprehensive categorization of backdoor defenses (perhaps in the related work section) and a comparison of Django against those missing in the current evaluation. The focus is not to show Django outperforms all the defenses but to show where Django lies in backdoor detection and how effective/ineffective Django is compared to different approaches. Technical Quality: 3 good Clarity: 3 good Questions for Authors: No question. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper discusses the limitations in Line 333--342. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] More comprehensive related work in object detection** We thank the reviewer for the detailed suggestion regarding related work. We will add a new section introducing backdoor attacks and defenses in the context of object detection in more detail. ----- **[W2] Comparison with non-reverse-engineering-based backdoor detection methods** Please refer to **global response A1**. --- Rebuttal Comment 1.1: Title: Thank You Comment: I thank the authors for the detailed response regarding additional defenses. It addresses my concerns.
Rebuttal 1: Rebuttal: **Global Response** We extend our sincere gratitude to all the reviewers for their invaluable and astute feedback. In the Global Response section, we address the common questions raised and supplement the document with additional figures and tables to enhance clarity. In addition, we reply to the specific inquiries and suggestions of each reviewer individually. “Q”, “W” and “L” indicate the question, weakness and limitation raised by the corresponding reviewer; e.g., “Q2@sH6B” denotes the second question raised by reviewer sH6B. “A” denotes our answer. Tables and figures detailing the additional experiments can be found in the rebuttal PDF. ------------------------ **[W2@sH6B, W1@VAnQ] Clarification of our threat model and evaluation of other types of backdoor defenses** **A1**: As discussed in the threat model (main text lines 107-113), our proposed Django framework falls under the category of backdoor model detection, aligning with a line of existing works [49, 15, 51]. In this scenario, defenders are required to classify the subject model as trojaned/benign with access only to a limited set of clean samples but no poisoned samples. Model-level backdoor detection techniques are usually executed offline before model deployment. In contrast, the techniques mentioned by the reviewers, such as STRIP [13] and Detection Cleanse [3], belong to another type of backdoor defense known as backdoor sample detection. This defense approach operates under a completely different threat model and serves a distinct purpose: defenders aim to discriminate trojaned/benign input samples on-the-fly, which requires access to poisoned samples. Therefore, it may not be appropriate to compare our proposed Django with STRIP and Detection Cleanse. We will further clarify this in the revision. However, during the rebuttal, we encountered a related work, FreeEagle [1], which extends STRIP to trojaned-model detection.
In line with the paper's configuration, we replicated the setup and conducted a comparative analysis between STRIP and Django. Our evaluation involved a random sampling of 8 clean models and 8 poisoned models, all trained on the traffic sign synthetic dataset. Rebuttal PDF Table 1 presents the evaluation results. Django achieves a 0.9375 ROC-AUC while STRIP achieves 0.6250. This is because STRIP is not capable of detecting label-specific triggers [1]. Moreover, it is important to note that STRIP's superimposing operation has the potential to introduce additional objects onto the fused image. This is particularly significant due to the dense output nature of object detection models. Consequently, this operation may influence the entropy scores of bounding boxes, which could ultimately decrease its overall detectability. In addition to reverse-engineering-based backdoor model detection techniques, we also compare Django with two state-of-the-art meta-classifier-based techniques: MNTD [60] and MF [17]. As shown in main text Table 2, Django outperforms the two baselines by large margins on all three datasets. Please refer to the detailed discussion in the main text, Lines 292-313. [1] Fu, Chong, et al. "FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases." arXiv preprint arXiv:2302.14500 (2023). ------------------------ **[W2@VAnQ, W2@tRsV] Intuition and theoretical comprehension of Gaussian Focal Loss** **A2**: The selection of the Gaussian distribution stems from empirical observations made during our exploration of the underlying reasons behind the misalignment issue in evasion and misclassification attacks (main text, lines 162-173). This choice is supported by the discussions in Section 3.2 (main text, lines 199-209), where we highlight the constraints of the Focal Loss method. Specifically, we observed that the misalignment is caused by the unequal poisoning effect on individual bounding boxes.
Only boxes with moderate confidence should be focused on dynamically throughout the inversion procedure. Therefore, naive Focal Loss is insufficient, as it only focuses on boxes with low confidence, i.e., hard examples. Furthermore, we observe that our goal aligns well with the natural bell shape of the Gaussian probability density function. By adjusting the mean and variance during inversion, the proposed Gaussian weighting scheme can dynamically capture the boxes with moderate confidence scores and assign them larger coefficients to encourage the transition. Please note that any distribution characterized by a centralized peak is suitable for our intended purpose. During the rebuttal, we attempted to substitute the Gaussian distribution with the Laplace distribution, another commonly employed probability distribution. Our experimentation involved 20 models trained on the traffic sign synthetic dataset, comprising 10 clean models and 10 models poisoned with misclassification triggers. The outcomes of these experiments are presented in rebuttal PDF Table 3. It is evident that with the Laplace Focus Loss, Django maintains a high level of effectiveness, achieving an ROC-AUC of 0.9, whereas baseline methods only achieve 0.7. We hypothesize that the 0.05 performance decline, in comparison to the Gaussian Focus Loss, may be attributed to the sharper peak of the Laplace distribution compared with the Gaussian distribution. In our context, fewer bounding boxes with moderate confidence are allocated larger coefficients in different stages, thereby potentially diminishing the emphasis on compromised boxes. We intend to present a more comprehensive array of experiments in the revised version. Pdf: /pdf/029649d69406e5d22c4179c097f03cb54f549ea3.pdf
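To make the weighting scheme in A2 concrete, here is a minimal sketch of a Gaussian focus weight over per-box confidence scores; the function name and the specific parameter values are illustrative assumptions, not taken from the paper:

```python
import math

def gaussian_focus_weights(confidences, mu, sigma):
    """Weight each box by a Gaussian bump centered at confidence mu.

    Boxes with moderate confidence (near the peak at mu) get the largest
    coefficients; very easy and very hard boxes are down-weighted. Shifting
    mu during inversion re-focuses the loss on boxes in transition.
    """
    return [math.exp(-((p - mu) ** 2) / (2.0 * sigma ** 2)) for p in confidences]

# Early in inversion (peak near low confidence): the low-confidence box dominates.
w_early = gaussian_focus_weights([0.05, 0.5, 0.95], mu=0.1, sigma=0.2)
# Later (peak shifted upward): the moderate-confidence box dominates.
w_late = gaussian_focus_weights([0.05, 0.5, 0.95], mu=0.5, sigma=0.2)
```

Any peaked density (e.g., the Laplace distribution used in the authors' ablation) could be substituted for the Gaussian bump here.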
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the author proposes a new method for detecting backdoors in object detection models. The author finds that directly applying backdoor detection methods for classification models to object detection models results in a loss misalignment problem. Therefore, the author suggests assigning different weights to different bounding boxes during the trigger inversion optimization process. As such, they propose a new optimization loss function in the paper, Gaussian Focus Loss, to better recover triggers. Additionally, to reduce computational overhead, the author also proposes a pre-processing method to decrease the number of label pairs that need to be scanned. Strengths: 1. This paper summarizes and analyzes the loss misalignment phenomenon that occurs during trigger inversion in object detection models and proposes a new optimization function based on this phenomenon. Compared to directly applying existing trigger inversion methods for classification models to object detection models, the method proposed in this paper can achieve better results at a lower cost. 2. The pre-processing method proposed in this paper, based on backdoor leakage, reduces the cost of scanning the model. This method can also be applied to other backdoor detection scenarios. 3. The experimental results in this paper support the effectiveness of the proposed method. 4. The paper is well-organized and easy to follow. Weaknesses: 1. Some hyperparameters mentioned in the paper have not been carefully studied for their impact on the method's effectiveness, such as the two parameters in Eq. 4 and the h in label pair pre-processing. 2. All models tested in the paper are selected from TrojAI, but the paper does not detail the poisoning conditions of the models, such as the poisoning rate, trigger size, and trigger type. 3. No recovered trigger samples are given in the paper. 4. Code is missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
Is this method effective for composite types of triggers? (e.g., "Composite Backdoor Attack for Deep Neural Network by Mixing Existing Benign Features") 2. Is the method proposed in this paper effective for other types of backdoors? (e.g. Backdoors in the physical world "Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World") 3. The threat model requires some clean samples; are these samples selected from training samples? If not, would this impact the inversion effectiveness? 4. The paper categorizes backdoor attacks on object detection into two types: misclassification and evasion. Does the misclassification attack include an object-appearing attack, where a bounding box that does not exist in the ground truth appears? ( "BadDet: Backdoor Attacks on Object Detection" and "Clean-image Backdoor: Attacking Multi-label Models with Poisoned Labels Only") Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The author did not mention limitations in the article. Although this method can detect backdoors in object detection models, it is limited to polygon triggers, and other types of triggers have not been explored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[W1] Hyperparameter settings and sensitivity** In Section 4.2 (lines 326-332), we discuss and evaluate the sensitivity of the hyperparameters in the pre-processing procedure ($h$ and $\omega$), and the main results are shown in the main text Figure 5b. Recall that the values of $h$ and $\omega$ strike a balance between the efficiency and effectiveness of the pre-processing procedure. We conducted experiments by setting $h$ from 1 to 10 and $\omega$ from 0.1 to 0.8. For each combination of $h$ and $\omega$, we recorded the True Positive Rate (the ratio of selecting ground-truth label pairs) and the corresponding total number of selected pairs for each model architecture. Figure 5b illustrates the results. We observed that different model architectures require slightly different preprocessing hyperparameter settings to achieve the optimal trade-off between efficiency and effectiveness. However, the True Positive Rate (TPR) quickly saturates after selecting a reasonably small portion of label pairs (fewer than 200 out of 2602 pairs for all architectures). To provide more detailed information, we will include specific values of $h$ and $\omega$ at each changing point of Figure 5b in the revised version. We also evaluate Django's performance under different settings of the IoU threshold, initialization region size and score threshold. Please find more details in Appendix D. During the rebuttal, we further evaluated the impact of two hyperparameters (initial mean $\mu$ and variance $\sigma$) in Eq. 4 on Django. We randomly sampled 8 models (4 trojan and 4 benign) for each architecture trained on the synthesized traffic sign dataset. Besides the default values we report in the paper ($\mu$=0.1, $\sigma$=2), we set 5 more groups of initial values and report the detection performance. As shown in rebuttal PDF Table 2, Django remains effective under different initializations.
------------ **[W2] Model configuration details** In Appendix A, we provide comprehensive details regarding the datasets and model architectures employed in this study. To elaborate further, the poison rate varies within the range of 0.1% to 8% across the diverse models sourced from TrojAI r10 and r13. Similarly, the trigger size ranges from 1x1 to 22x22, representing a scale of 0.001% to 0.7% relative to the input dimensions. Regarding the hyperparameters used in model training, the learning rate is stochastically assigned, spanning from 1.56e-08 to 1e-04 across different models. The number of training epochs spans from 6 to 100, while the batch size ranges from 4 to 32. As for model performance metrics, the average clean mAP across the models is 0.7979, while the average poison mAP is 0.7680. We assess TrojAI models using polygon triggers, encompassing two distinct attack types: misclassification and evasion attacks. The triggers' polygonal structures have varying edge counts, ranging from 3 to 8. Furthermore, each individual trigger has randomly generated color and texture attributes. Please refer to Appendix C for the detailed settings of the baseline methods. ------------ **[W3] Recovered trigger samples** We attach the inverted triggers from TrojAI round 13 model ID-7 and ID-120 in rebuttal PDF Figure 1. Compared to the ground-truth triggers, the triggers inverted by Django show good visual similarity. ------------ **[W4] Missing code** We will release the code upon publication. ------------ **[Q1] Evaluation on composite attack** During the rebuttal phase, we conducted an evaluation of Django using 10 models trained on the traffic sign synthesis dataset. This set included 5 clean models and 5 trojan models poisoned by a composite attack. As indicated in Table 5 of the rebuttal PDF, Django achieved a 0.9000 ROC-AUC for detecting composite attacks.
It's worth noting that the composite attack does not rely on an explicit trigger. Instead, it leverages a clean object A to serve as the trigger for attacking another object B. Interestingly, Django is capable of effectively reversing this process, essentially identifying a trigger that closely mimics the pattern of object A. This ability enables Django to detect composite backdoors with a high level of accuracy. ------------ **[Q2] Evaluation on physical attack** Dangerous Cloaking uses a homemade dataset with a t-shirt as the trigger to realize a physical backdoor attack. Unfortunately, the paper did not release the code and dataset, so we could not reproduce it. We will further contact the authors or collect samples ourselves and evaluate the effectiveness of Django in the revision. ------------ **[Q3] Source of the inversion samples** All the samples we used for Django reverse-engineering are from the validation set. To evaluate the effect of different sources, we conducted experiments on 10 models trained on the COCO dataset (5 clean and 5 poisoned). For each model, we ran Django twice with samples from different sources, i.e., 10 images randomly sampled from each class in the training set and validation set separately. The detection performance is shown in rebuttal PDF Table 4. We can see that Django is not sensitive to the source of the clean images used for inversion. We will clarify this in the revision. ------------ **[Q4] Evaluation on object-appearing attack** Yes, according to our definition, the object-appearing attack is a special case of the misclassification attack with the background (empty) class as the victim class. To demonstrate the effectiveness of Django on object-appearing attacks, we evaluated Django on 10 BadDet models and 10 Clean-image backdoor models during the rebuttal. For each type of attack, we mixed 5 clean models with 5 models poisoned with object-appearing triggers. The evaluation results are shown in rebuttal PDF Table 5.
Django achieves 0.8 ROC-AUC on both Baddet and clean-image object appearing attacks. --- Rebuttal Comment 1.1: Comment: Thank the authors for the efforts made to address my concerns. I have no more questions.
DPM-Solver-v3: Improved Diffusion ODE Solver with Empirical Model Statistics
Accept (poster)
Summary: The paper proposes a new sampling algorithm, DPM-Solver-v3, to solve the diffusion ODE. Applying Rosenbrock-type exponential integrators, DPM-Solver-v3 pre-computes several coefficients (empirical model statistics) to minimize the norm of the gradient of the non-linear term, which leads to reduced discretization error. Empirical results for DPM-Solver-v3 demonstrate its effectiveness in unconditional/conditional sampling and classifier-free guidance on Stable Diffusion. Strengths: - By introducing empirical model statistics arising from Rosenbrock-type exponential integrators, DPM-Solver-v3 has a better model parametrization than the data/noise prediction ones, which leads to lower discretization error. - Experiments show that DPM-Solver-v3 outperforms DPM-Solver++ and UniPC for both conditional and unconditional sampling. DPM-Solver-v3 also incurs a smaller MSE on Stable Diffusion across NFEs. Weaknesses: - The sampling algorithm is based on the pre-computed empirical model statistics $l_{\lambda}, s_{\lambda}, b_{\lambda}$ at each time step $\lambda$, and their corresponding estimated integrals. This seems to lead to extra memory cost and prevents the model from being flexibly adapted to new use cases (e.g., a different time schedule, guided sampling, etc.) - When developing the error order and convergence of the algorithm in section 3.2, the estimation error of the EMS is not taken into account. However, the unstable $s_{\lambda}$ w.r.t. $\lambda$ in Figure 5 (especially the one estimated on ScoreSDE) indicates that the EMS estimation error is not ignorable. Is it possible to provide such analysis or guarantees given the error of the EMS? - In the low FID region (which is closer to practical usage), the benefit of DPM-Solver-v3 seems not very significant compared with UniPC in Figure 3 and Figure 4 - Applying Rosenbrock-type exponential integrators to the diffusion ODE is a valid contribution, but other components (e.g.
higher-order derivative estimation) of the method seem to be quite relevant to those proposed by existing works like the RK45 Butcher tableau or [1,2]. The real improvement of the Rosenbrock-type technique is unclear when compounded with many other components. Could the authors maybe ablate on its individual effect? - I think it would be very helpful to compare the FID score / CLIP score on Stable Diffusion, since low MSE does not necessarily imply better sample quality (the quality is affected not only by discretization error but also by the estimation error of neural networks). - Could the authors also compare DPM-Solver-v3 with some typical stochastic samplers, like Alg 2 in EDM, Gotta Go Fast [3], or more recent works on stochastic samplers? Is it possible to integrate the solver into these stochastic samplers, as these samplers seem to provide significantly better sample quality compared to their deterministic counterparts on complex datasets? [1] Fast Sampling of Diffusion Models with Exponential Integrator, Zhang et al., ICML 2023 [2] DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, Lu et al., NeurIPS 2022 [3] Gotta Go Fast When Generating Data with Score-Based Models. A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas. CoRR, abs/2105.14080, 2021. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - The authors mentioned that EMS computed on the model without guidance can work for guided sampling cases within a common range of guidance scales. I'm wondering how large the performance gap is between algorithms using EMS computed on the model with guidance and on the model without guidance. How is this affected by different magnitudes of guidance scales? - Theoretically and empirically, is the model robust to the estimation error of the EMS? - How is DPM-Solver-v3's performance affected by single-step versus multistep sampling, in comparison with the baselines?
- The estimated $s_{\lambda}$ seems to be unstable w.r.t $\lambda$ in some cases. Would using moving average or other smoothing methods help with the sampling algorithm? - I think it would be very helpful to compare the FID score / CLIP score on Stable-Diffusion since low MSE does not necessarily imply better sample quality (the quality is affected not only by discretization error but also by estimation error of neural networks). - How well is DPM-Solver-v3 compatible with current SDE samplers? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the interest in and acknowledgment of our theoretical and empirical contributions. Below are our explanations to the questions, which we hope may clarify some misunderstandings. We kindly request that you consider raising the score accordingly if you are satisfied. *W1: The method needs extra memory cost and makes the model not able to be flexibly adapted to new use cases (e.g., a different time schedule, guided sampling, etc.)* **A**: **The extra memory cost is rather small**. Please refer to *common response, Q1*; And **the EMS are flexible for your mentioned cases**. Please refer to *common response, Q2*. *W2: In the low FID region, the benefit of DPM-Solver-v3 seems not very significant compared with UniPC in Figure 3 and Figure 4.* **A**: When NFE is around 20, our improvement in sample quality is small because all fast samplers have almost converged. However, what matters is that our method has a **faster convergence speed** to good sample quality. A notable example is that, on LSUN-Bedroom, we reach an FID of 3.06 with 12 NFE, while the previous best method requires 20 NFE, which means our computation cost is approximately 60%. *W3: Could the authors ablate on the individual effect of Rosenbrock-type exponential integrators?* **A**: The other components (e.g., higher-order derivative estimation) are just what we formulate to adapt high-order solvers to our new parameterization. Therefore, the EMS estimation itself is what differentiates our approach from existing works. If we set $l_\lambda,s_\lambda,b_\lambda$ to special values, our solvers degenerate to previous ones such as DPM-Solver and DPM-Solver++ (Appendix A.2). When we compare our method to DPM-Solver++/UniPC, we are already ablating on the effect of the EMS. *Q1: The performance gap between EMS computed with guidance and without guidance?
How is this affected by different magnitudes of guidance scales?* **A**: Empirically, the EMS computed on the model without guidance (unconditional part) perform more stably than those computed on the model with guidance. See *common response, Q2* for details. *Q2: Theoretically and empirically, is the model robust to the estimation error of the EMS?* **A**: Theoretically, **the order and convergence theorems are irrelevant to the EMS estimation error**. The ODE solution Eq. (9) is correct whatever $l_\lambda,s_\lambda,b_\lambda$ are, and we only need the assumption that these coefficients are bounded (Assumption B.2 in Appendix B.1.1) to prove the local and global order; Empirically, we need to ensure a sufficient amount of datapoints for estimating the EMS (see examples in Appendix G.1), and we find that our method is robust to the estimation error of the EMS **given only 1024 datapoints**. *Q3: Single-step vs multistep sampling?* **A**: We implemented the singlestep DPM-Solver-v3 and compared it with the singlestep version of DPM-Solver++, since UniPC only has a multistep version. We report the FID on CIFAR10 (ScoreSDE). See *response pdf, Table 1* for details. The results show that DPMv3 (singlestep) is better than DPM++ (singlestep), but the multistep version is better than the singlestep one and achieves the best results. *Q4: The estimated $s_\lambda$ seems to be unstable. Would a moving average help with the sampling algorithm?* **A**: We want to clarify that the unstable $s_\lambda$ is not an issue, and **our sampler is stable**: - The instability of $s_\lambda$ on ScoreSDE is intrinsic and not due to the estimation error. As we increase the number of samples to decrease the estimation error, the fluctuation is not reduced. We attribute it to the periodicity of trigonometric functions, as stated in Sec. 4.2.
- Moreover, as we only consider the integrals of $s_\lambda$, and the sign of $s_\lambda$ does not change much, the integral of $s_\lambda$ is rather smooth, which ensures the stability of our method. Therefore, there is no need for smoothing, since the form involved is not $s_\lambda$ itself but its integral, which is already smooth. *Q5: Compare the FID score / CLIP score on Stable Diffusion, since low MSE does not necessarily imply better sample quality.* **A**: We choose the MSCOCO2014 validation set as reference, and compute the FID/CLIP score with 10k samples at cfg scale 7.5. See detailed results in *response pdf, Table 2*. The results show that our method achieves consistently better FID and similar CLIP scores. **Notably, we achieve an FID of 15.4 in 8 NFE, close to the reported FID of Stable-Diffusion v1.4**. Still, we claim that FID is not a proper metric for evaluating the convergence of latent-space diffusion models. As stated in DPM-Solver++, the FID quickly reaches 15.0~16.0 within 10 steps, even if the latent code does not converge, because of the strong image decoder. Instead, MSE in the latent space is a direct way to measure the convergence. By comparing the MSE, our sampler does converge faster to the ground-truth samples of Stable Diffusion itself. *Q6: Compare DPM-Solver-v3 with stochastic samplers? Is it possible to integrate the solver into these stochastic samplers?* **A**: - **Stochastic samplers have bad performance for NFE <= 20**. SDE solvers are good at best-quality sampling, not fast sampling. In the papers on SDE solvers, the reported NFE is usually greater than 100, and the sample quality is quite low at around 20 NFE, even using the recent advanced SDE solver (Appendix E.7 in [1]). - Yes, our EMS can also be used in SDE solvers. Note that previous SDE solvers also limit their parameterization to noise/data prediction, and we can find a better parameterization via the proposed EMS.
Therefore, it is promising to improve SDE solvers with our EMS for both fast and best-quality sampling. [1] Gonzalez, Martin, et al. "SEEDS: Exponential SDE Solvers for Fast High-Quality Sampling from Diffusion Models." Once again, thank you for your constructive feedback and for considering our paper for acceptance. We'll revise our paper according to your suggestions. --- Rebuttal Comment 1.1: Title: Looking forward to further feedback Comment: Dear Reviewer wxE7, We thank you very much again for your great effort in reviewing our manuscript and providing valuable comments for further improvement. We hope you may find our response satisfactory and increase the rating accordingly. If you have any further feedback, we would be very happy to reply. Best, Authors
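As a quick numerical illustration of the claim in Q4 above (a fluctuating coefficient can still have a smooth integral), the following sketch uses a synthetic oscillating signal as a stand-in for $s_\lambda$; the signal is an assumption for illustration only, not the paper's actual EMS:

```python
import numpy as np

# Synthetic stand-in for s(lambda): fast oscillations whose sign rarely flips.
lam = np.linspace(0.0, 10.0, 1001)
s = 1.0 + 0.9 * np.sin(40.0 * lam)

# The solver consumes the integral of s, not s itself; cumulative trapezoidal
# integration averages out the fast oscillations.
S = np.concatenate(([0.0], np.cumsum((s[1:] + s[:-1]) / 2.0 * np.diff(lam))))

# Step-to-step variation is large for s but small for its integral S.
jump_s = np.max(np.abs(np.diff(s)))
jump_S = np.max(np.abs(np.diff(S)))
```

Because s stays positive here (its sign "does not change much"), S is monotone and visibly smooth even though s oscillates rapidly, matching the authors' argument for why smoothing the raw coefficient is unnecessary.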
Summary: This work proposes a method to speed up the DPM solver by using empirical model statistics (pre-trained model) as well as employing multistep methods with a predictor-corrector framework for improving sample quality. Promising results were reported. Strengths: - This work addresses an important problem of fast sampling in diffusion models without retraining. - The proposed parametric model is simple and seems neat with optimality criteria. Weaknesses: - The improvement of the proposed method over prior works such as DPM-Solver++ seems incremental. It is unclear if the proposed empirical model statistics are good enough. While this work discusses optimality under this parametric model, it is unclear if it is truly optimal in terms of the modeling itself. This work may also discuss prior work such as [H Zheng et al., Fast Sampling of Diffusion Models via Operator Learning, ICML 2023] and its arXiv version, which was able to achieve incredible results with only a single model evaluation. - The experiment results seem to need more improvements and should ensure fairness. For example, Figures 3 and 4 indicate that the proposed method is faster than other methods in early iterations, but in order to ensure reasonable FID, one may need a certain number of NFE, and in those cases, the differences in FID look negligible. Moreover, the comparisons in Fig. 6 seem unfair - would other methods achieve similar image quality and FID scores with 15-30% more evaluations? Computation cost should be incorporated for fair comparisons. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the comments in the Weaknesses section. - Will the speed-up of 15-30% have practical value? Please discuss. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for recognizing our work's importance and theoretical neatness. However, we feel that there are some fundamental misunderstandings. We will address your questions and concerns below. We kindly request that you consider raising the score accordingly if you are satisfied. *W1: The improvement seems incremental. Discuss with [H Zheng et al. 2023].* **A**: First of all, **we respectfully disagree with the comments on incremental results**. We appreciate you informing us of the nice work of [H Zheng et al. 2023] and will add proper citations in related works, but we have to point out that [H Zheng et al. 2023] is a distillation-based method, different from what our work focuses on. We would like to make the following clarifications: 1. We achieve the **best sample quality among all the training-free sampling methods** on a fair benchmark, under various settings, against the most recent and strong baselines. 2. **Our method is a training-free sampler, while [H Zheng et al. 2023] needs heavy training of an additional generative model.** In general, although distillation-based methods can achieve good performance even in one step, they have obvious flaws (as discussed in Appendix A.1) such as onerous extra training and loss of information. Moreover, **their application scope is usually limited to unconditional cases, and it is much more costly to distill a text-to-image model**. Since our training-free approach **can be easily applied to large text-to-image models** to boost sampling with negligible extra cost, while distillation-based methods such as [H Zheng et al. 2023] incur much higher costs to do so, we argue that our work has high practical value. *W2: It is unclear if the proposed empirical model statistics is good enough. 
It is unclear if the parameterization is truly optimal in terms of modeling itself.* **A**: The proposed empirical model statistics can greatly speed up the convergence and improve the sample quality, especially within 5~10 NFEs. **It has already achieved SOTA performance among all training-free fast samplers**. Moreover, we did not claim the optimality of the modeling parameterization. We are the **first** to systematically study the parameterization of exponential integrators for diffusion models and the **first** to explain the superiority of the previous data-pred over noise-pred (Appendix A.2). Also, this parameterization is elegantly derived from Rosenbrock-type methods and elucidates the design space of previous parameterizations. We understand that a better form of parameterization may be derived, but our introduced parameterization is already novel, insightful and efficient. *W3: Experiment results do not show much improvement and are unfair.* **A**: **We respectfully disagree with the comments on unfairness and poor results**. We would like to make the following clarifications: 1. **The extra computation is negligible**. Please refer to *common response, Q1*. 2. **Our evaluation is fair**. We feel that the expression "early iterations" implies a critical misunderstanding: when users generate images with a pretrained diffusion model, they predetermine a certain NFE. Therefore, comparing FID at the same NFE, or comparing NFE for the same FID, is completely fair, and **is the protocol adopted by all previous fast diffusion sampling methods**. 3. **Our method converges much faster**. Our method converges 15-30% faster than previous samplers, thus saving 15-30% of the costs. **On a fair benchmark, under various settings, we achieve the best convergence speed among all the training-free samplers (Appendix F/H)**. A notable example is that, on LSUN-Bedroom, we reach an FID of 3.06 with 12 NFE, while the previous best method requires 20 NFE, which means our computation cost is approximately 60%. 
*Q1: Will the speed-up of 15-30% have a practical value?* **A**: Yes, the practical value is embodied in the following aspects: 1. **A 15~30% speed-up can save significant costs for online text-to-image applications**. As we have clarified, our speed-ups are applicable to large text-to-image diffusion models, which are an important part of today's AIGC community. As recent models become larger, a single NFE requires more computational resources, and a 15-30% speed-up can save a lot of companies' expenses for commercial usage. 2. **Improvement at 5~10 NFEs benefits image previewing**. Since the samples are controlled by the random seed, coarse samples with 5~10 NFEs can be used to ***preview*** thousands of samples at low cost and give guidance on choosing the random seed, which can then be used to generate fine samples with best-quality sampling strategies. This is especially useful for text-to-image generation. Since our method achieves better quality and converges better to the ground-truth sample, it can provide better guidance when used for preview. Finally, **we respectfully disagree with the comments on limited contribution**. We would like to emphasize our main contributions as follows: - Theoretically, we are the **first** to systematically study the parameterization problem of exponential integrators for diffusion sampling, the **first** to explain the superiority of previous parameterizations, and the **first** to introduce Rosenbrock-type methods into diffusion sampling. We also propose the pseudo-order solver and the half-corrector technique, which are **novel, insightful and efficient**. These ideas may inspire future works on training-free sampling of diffusion models, such as SDE-based samplers. - Empirically, we achieve the **best sample quality among all the training-free sampling methods** on a fair benchmark, under various settings, against the most recent and strong baselines. 
Our methods have **high practical value** in the AIGC community regarding the performance at small NFEs and the applicability to the text-to-image Stable Diffusion, for **generating previews** and high-quality samples of text-to-image diffusion models at low cost. --- Rebuttal Comment 1.1: Title: Looking forward to further feedback Comment: Dear Reviewer oHv2, We thank you very much again for your great efforts in reviewing our manuscript and for providing valuable comments to improve it further. We hope you find our response (as well as the other reviews) satisfactory and will increase the rating accordingly. If you have any further feedback, we would be very happy to reply. Best, Authors --- Rebuttal Comment 1.2: Comment: I would like to thank the authors for the detailed responses. I increased my scores to +1 (in contribution / in overall score). --- Reply to Comment 1.2.1: Title: Thank you Comment: We appreciate it very much that you acknowledge our contribution and have increased the score accordingly. We wonder if our response has addressed your concerns, and we are happy to give further replies. Best, Authors
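The preview-then-refine workflow described in Q1 above can be sketched with a toy deterministic sampler. The update rule, latent dimension, and seed range below are placeholders for illustration only, not the actual solver; the point is that a seed fixes the trajectory, so a cheap low-NFE pass can select seeds for an expensive high-NFE pass:

```python
import numpy as np

def sample(seed, nfe):
    """Toy stand-in for a deterministic ODE sampler: the initial noise is fixed
    by the seed, and a larger NFE refines the same trajectory (placeholder
    update rule, not the actual solver)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(4)        # initial latent, fully determined by seed
    for _ in range(nfe):              # deterministic 'denoising' steps
        x = x - 0.1 * x
    return x

# Cheap preview pass at low NFE, then regenerate only the chosen seed at high NFE
previews = {s: sample(s, 5) for s in range(8)}   # low-cost previews
best_seed = 3                                    # chosen by inspecting previews
final = sample(best_seed, 50)                    # high-quality run, same trajectory
assert np.allclose(previews[best_seed] * 0.9**45, final)
```

Because the sampler is deterministic given the seed, the high-NFE run lands on the same trajectory the preview approximated, which is the property the rebuttal relies on.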
Summary: This work proposes DPM-Solver-v3, an ODE solver for diffusion model inference. The method builds upon existing formulations involving exponential integrators, where the linear term of the ODE is canceled and only the noise predictor needs to be approximated. A new model parameterization is derived which involves an optimal set of coefficients, empirical model statistics (EMS). The authors describe methods for computing the EMS in practice, as well as a pseudo high-order method in the few-NFE regime and a half-corrector technique for when the guidance scale in guided sampling is large. Strengths: - Although the method builds upon existing methods involving exponential integrators in the diffusion literature, the model parameterization involving EMS is novel. - Experiments and ablations are thorough and clearly done. Results are compelling and the proposed method consistently outperforms existing ODE solvers across datasets/models, especially in the few-NFE regime. - Theoretical results for local/global error are clearly presented and assumptions seem reasonable. Weaknesses: - In the experiments, the authors mention that "we tune the strategies of whether to use pseudo-order predictor/corrector at each NFE on CIFAR10" (lines 254-255). This seems to be an important caveat and should be reiterated in the appendix where the detailed empirical results are presented (Table 4). - Similarly, for Stable Diffusion, s=7.5, the authors mention that the best results among no/half/full corrector are reported. This should be reiterated in the appendix in Table 7. - In general, I wonder if the presentation of the paper would be more compelling if some of the quantitative results could be moved to the main body from the appendix. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The FID degenerating from 12.76 to 15.91 for ScoreSDE on CIFAR10 (line 822, G.3) depending on whether or not the pseudo-order predictor/corrector is used seems quite substantial. 
In which cases is the pseudo-order predictor necessary? It might be compelling to produce a table similar to Table 10 (pseudo-order corrector experiments) examining the influence of the pseudo-order predictor. - How does choice of N, K for the EMS change as the timestep scheduler changes? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately address the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review, especially for reading the proofs and ablations in the Appendix. Below we provide detailed responses to the questions and clarify misunderstandings. We kindly request that you consider raising the score accordingly if you are satisfied. *W1 & W2: Some tuning strategies should be reiterated in the Appendix (Table 4/Table 7).* **A**: Thank you for carefully reading the Appendix! We fully agree that although they have been mentioned in the main text, they should be reiterated where the quantitative table results are presented. We will revise the relevant paragraphs in the final version. *W3: Some of the quantitative results could be moved to the main body from the appendix.* **A**: Thank you for the valuable suggestion. Subject to the 9-page limit, we will move some of them to the main body in the final version. *Q1: In which cases is the pseudo-order predictor necessary? It might be compelling to produce a table similar to Table 10 (pseudo-order corrector experiments) examining the influence of the pseudo-order predictor.* **A**: The pseudo-order predictor only provided improvement in our mentioned case (5 NFE on CIFAR10). It can be seen as a specialized trick at 5 NFE on CIFAR10. *Q2: How does the choice of N, K for the EMS change as the timestep scheduler changes?* **A**: The choice of $N,K$ is **disentangled** from the timestep scheduler in sampling: once we have estimated the EMS at $N$ (e.g., 1200) timesteps, they can be **flexibly adapted** to any schedule (uniform logSNR/uniform t...) in sampling, by mapping the actual timesteps during sampling to the $N$ bins. The choices of $N,K$ and the timestep scheduler depend on the pretrained model. See more discussions in *common response, Q2*. Once again, thank you for your constructive feedback and for considering our paper for acceptance. We'll revise our paper according to your suggestions. 
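The bin lookup described in the Q2 answer above — mapping arbitrary sampling timesteps to the $N$ precomputed EMS bins — can be sketched as follows. The grid, stored values, and nearest-bin rule here are illustrative assumptions; interpolation between bins would also work:

```python
import numpy as np

N = 1200                               # timesteps at which EMS were precomputed
ems_grid = np.linspace(1e-3, 1.0, N)   # hypothetical time grid of the EMS table
ems_vals = np.random.randn(N, 3)       # hypothetical EMS values (e.g. l, s, b) per bin

def ems_at(t):
    """Look up the precomputed EMS for arbitrary sampling timestep(s) t by
    mapping each t to its bin in the precomputed grid (nearest-bin lookup is
    an assumption; the table itself is never recomputed)."""
    idx = np.clip(np.searchsorted(ems_grid, t), 0, N - 1)
    return ems_vals[idx]

# Any sampling schedule reuses the same table, e.g. uniform-t with 10 steps:
uniform_t = np.linspace(1.0, 1e-3, 10)
assert ems_at(uniform_t).shape == (10, 3)
```

This illustrates why the choice of $N$ is decoupled from the sampling schedule: switching schedules only changes which bins get queried, not the precomputed table.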
--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed responses. I will retain my score of weak accept. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are happy to hear that you find our response satisfactory and remain positive on the rating! We appreciate that you are one of the very few reviewers who read our Appendix when writing the reviews. We are more than willing to discuss if you have further questions. Best, Authors
Summary: This paper proposed an improved/generalized version of DPM-Solver. It includes three more coefficients based on the semi-linear ODE solution proposed by DPM-Solver, accounting for minimizing the 'linearity' of the nonlinear component of the ODE and minimizing the first-order discretization error, respectively. Such optimal coefficients can be approximately calculated from Monte Carlo samples from the learned model. Together with a multistep formulation, the predictor-corrector framework of UniPC, and some practical techniques, the paper empirically demonstrated that the proposed sampler can outperform existing ones, especially when NFE is very small. Strengths: - The paper is well-written and easy to follow. - The paper built on the DPM-Solver formulation and generalized it with additional coefficients that are optimized based on the trained model. The added coefficients are intuitive. Theoretical analysis has been carried out to ensure local/global convergence. - Two practical techniques have been proposed and seem to be useful in general. - Extensive experiments have been performed to justify the effectiveness. Weaknesses: - I'm not convinced by optimizing $s_\tau$ with Eq. (11): - In equation 10, the first coefficient inside the big integration should be $e^{\int_{\lambda_s}^\lambda (l_\tau + s_\tau) d\tau}$ instead of $e^{\int_{\lambda_s}^\lambda l_\tau d\tau}$. Therefore, the first-order discretization error not only depends on $f_\theta^{(1)} - s_\lambda f_\theta - b_\lambda$. There could be a chance that Eq. (11) is minimized but this term $e^{\int_{\lambda_s}^\lambda (l_\tau + s_\tau) d\tau}$ is amplified. - The main results shown in this paper are not really by the first-order solver but by higher-order solvers. A justification of why minimizing first-order discretization error can help higher-order solvers is needed. - A downside of the introduced coefficients is that the integrals in Eq. (14) become intractable and have to be approximated by the trapezoidal rule. 
It could introduce extra approximation error and add computational cost. I'd like to see a detailed analysis of the extra error and computational cost. - It is understandable and acceptable that this work is heavily built on DPM-Solver, but certain sentences in this paper are duplicated from DPM-Solver. E.g., line 94 "our first key insight is...", line 105 "our second key insight is ...", line 135-136: "the proposed solvers and analysis ...". The paper should at least rephrase these sentences. - The paper could do a better job of clarifying which parts are the unique contributions of this paper and which parts have been proposed by previous works. For example, most techniques and theoretical conclusions in section 3.2 have already been developed in DPM-Solver / DPM-Solver++ / UniPC, which should be further acknowledged and clarified. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Why can EMS without guidance be directly applied to a model with guidance? Does it still hold if the guidance weight is really high? - Why, for the conditional setting, is the best setting the 2nd-order solver instead of its higher-order counterparts? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: I'd expect to see more discussion on limitations due to the intractable integrals. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive comments. Below we provide detailed responses for the questions and clarify misunderstandings. We kindly request that you consider raising the score accordingly if you are satisfied. *W1: Equation 10 is wrong, the first-order discretization error may be amplified.* **A**: **We respectfully disagree. This is a misunderstanding**. Specifically, by the definition of $g_\theta$, we have $$ g_\theta^{(1)}(x_\lambda,\lambda) = e^{-\int_{\lambda_s}^{\lambda}s_\tau d\tau}\left( f_\theta^{(1)}(x_\lambda,\lambda)-s_\lambda f_\theta(x_\lambda,\lambda) - b_\lambda \right) $$ By using the Taylor expansion $g_\theta(x_{\lambda_s},\lambda_s) = g_\theta(x_{\lambda},\lambda ) + (\lambda_s-\lambda)g_\theta^{(1)}(x_{\lambda},\lambda) + O((\lambda-\lambda_s)^2)$ stated in the paper (**expand it at each $\lambda$**), we can see that the term $e^{\int_{\lambda_s}^{\lambda}s_\tau d\tau}$ in Eq. (9) is cancelled by the term $e^{-\int_{\lambda_s}^{\lambda}s_\tau d\tau}$ in $g_\theta^{(1)}(x_\lambda,\lambda)$, and we obtain exactly Eq. (10). *W2: A justification of why minimizing first-order discretization error can help higher-order solvers is needed*. **A**: Sure. Below we provide the corresponding theoretical justification about how minimizing first-order discretization error helps high-order cases. We assume that the EMS are bounded (Assumption B.2 in Appendix B.1.1), which is empirically confirmed as in Section 4.2. Here we take the 2nd-order case as an example. By Eq. 
(52) in Appendix B.4, the 2nd-order local error is $$ \hat x_t-x_t=\alpha_tA(\lambda_s,\lambda_t)\int_{\lambda_s}^{\lambda_t}E_{\lambda_s}(\lambda)(g_\theta(x_\lambda,\lambda)-g_\theta(x_{\lambda_s},\lambda_s))d\lambda-\frac{\alpha_tA(\lambda_s,\lambda_t)}{\lambda_{i_1}-\lambda_s}(g_\theta(x_{\lambda_{i_1}},\lambda_{i_1})-g_\theta(x_{\lambda_s},\lambda_s))\int_{\lambda_s}^{\lambda_t}E_{\lambda_s}(\lambda)(\lambda-\lambda_s)d\lambda $$ which is equivalent to $$ \alpha_tA(\lambda_s,\lambda_t)\int_{\lambda_s}^{\lambda_t}e^{\int_{\lambda_s}^{\lambda}(l_{\tau}+s_{\tau})d\tau}\left(\int_{\lambda_s}^{\lambda}g^{(1)}\_\theta(x_{\tau},\tau)d\tau\right)d\lambda-\frac{\alpha_tA(\lambda_s,\lambda_t)}{\lambda_{i_1}-\lambda_s}\int_{\lambda_s}^{\lambda_t}E_{\lambda_s}(\lambda)(\lambda-\lambda_s)d\lambda\int_{\lambda_s}^{\lambda_{i_1}}g^{(1)}\_\theta(x_{\lambda},\lambda)d\lambda $$ which is controlled by $\|g_\theta^{(1)}\|$. The other terms depend only on the EMS and are bounded. For higher orders, the derivation is similar, since they all involve the form $g_{\theta,t_2}-g_{\theta,t_1}$, which is the integral of $g_\theta^{(1)}$. We will put the detailed analysis in the revised paper. *W3: A detailed analysis on the extra error and computational cost of the integrals w.r.t. the introduced coefficients.* **A**: Below we provide a detailed analysis correspondingly. - **The computational cost is negligible**: See *Common response, Q1*. - **The extra error of the trapezoidal rule is of higher order and negligible**: We assume that the EMS and their derivatives are bounded; then by the error bound formula of the trapezoidal rule, $$ |E|\leq \frac{(b-a)^3}{12n^2}\max |f''(x)| $$ we can easily conclude that the errors of $\int_{\lambda_s}^{\lambda_t}E_{\lambda_s}(\lambda)B_{\lambda_s}(\lambda)d\lambda$ and $\int_{\lambda_s}^{\lambda_t}E_{\lambda_s}(\lambda)\frac{(\lambda-\lambda_s)^k}{k!}d\lambda$ in Eq. 
(14) are $O(h_0^2h)$ and $O(h_0^2h^{k-1})$ respectively, where $h_0$ is the stepsize of the EMS discretization, and $h=\lambda_t-\lambda_s$. We'll put the detailed analysis in the revised paper. *W4: Certain sentences are duplicated from DPM-Solver.* **A**: Thank you for pointing it out. We used these expressions since they concisely emphasize our motivation, novelty and how we drew inspiration from previous works, and did not notice the duplication. We will rephrase them in the revised paper. *W5: Clarifying the unique contribution in Section 3.2.* **A**: Thank you for the valuable suggestion. We broadly acknowledge this by saying *"...highly motivated by ... in the field of diffusion models"* at the beginning of Section 3.2, and the specific technical details are different. We will further clarify in detail at each part of Section 3.2 to separate our unique contributions from the previous works in the revised version. *Q1: Why can EMS without guidance be directly applied to a model with guidance? Does it still hold if the guidance weight is really high?* **A**: It is because the unconditional model contains some common information; please refer to *common response, Q2* for details. However, it no longer holds for extremely large guidance scales (e.g., cfg scale 15 for Stable Diffusion), since in this case the condition has a large impact on the denoising process. Note that at these extremely large scales, the sampling quality is very low (compared to cfg scale 7.5) and they are rarely used in practice. Therefore, our proposed EMS without guidance is suitable enough for common applications with common guidance. *Q2: Why, for the conditional setting, is the best setting the 2nd-order solver instead of its higher-order counterparts?* **A**: The high-order solvers are more unstable compared to low-order ones, especially for large guidance scales [1]. 
However, slightly larger guidance scales are often preferred in practice to improve condition-sample alignment, as shown in Imagen and Stable Diffusion. Therefore, we and previous works (e.g., DPM-Solver++, UniPC) all find that the 3rd-order solver produces artifacts in the conditional setting, and the 2nd-order solver is better. [1] Wizadwongsa, Suttisak, et al. "Diffusion Sampling with Momentum for Mitigating Divergence Artifacts." Once again, thank you for your constructive feedback and for considering our paper for acceptance. We'll revise our paper according to your suggestions. --- Rebuttal Comment 1.1: Comment: Thank the authors for the detailed rebuttal. My questions were well addressed, provided the paper will be further revised as the authors promised in the rebuttal. I tend to retain my original score of leaning towards acceptance. --- Reply to Comment 1.1.1: Title: Thank you Comment: We are happy to hear that you find our response satisfactory and remain positive on the rating! We will definitely revise further in the final version as promised. Best, Authors
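The trapezoidal-rule error bound quoted in the W3 response above can be checked numerically; a minimal sketch, where the integrand and interval are arbitrary smooth choices for illustration:

```python
import numpy as np

def trap(f, a, b, n):
    """Composite trapezoidal rule with n uniform subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

f = np.cos                        # smooth test integrand with max|f''| = max|cos| = 1
a, b = 0.0, 1.0
exact = np.sin(b) - np.sin(a)     # true value of the integral

for n in (10, 20, 40):
    err = abs(trap(f, a, b, n) - exact)
    bound = (b - a) ** 3 / (12 * n ** 2)   # (b-a)^3 / (12 n^2) * max|f''|
    assert err <= bound           # observed error stays within the quoted bound
```

Doubling n roughly quarters the error, consistent with the $O(h_0^2)$ factor the rebuttal attributes to the EMS discretization.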
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their detailed and insightful suggestions. First we'd like to present all the paper revisions. ### Summary of paper revisions **Theories** - Theoretical justification of why minimizing first-order discretization error can help higher-order solvers (Reviewer dqUr, W2) - Theoretical analysis on the extra error of the trapezoidal rule (Reviewer dqUr, W3) **Experiments** - DPM-Solver-v3's performance with single-step versus multistep sampling (Reviewer wxE7, Q3) - FID score / CLIP score on Stable-Diffusion (Reviewer wxE7, Q5) **Writing** - Rephrase the duplicate expressions from DPM-Solver (Reviewer dqUr, W4) - Clarify the unique contributions in Section 3.2 (Reviewer dqUr, W5) - Reiterate the tuning strategies in Appendix Table 4/Table 7 (Reviewer yHtS, W1/W2) - Move some quantitative results to the main body from the appendix (Reviewer yHtS, W3) - Clarify that the fluctuation of $s_\lambda$ is not due to estimation error, and there is no need for moving average or other smoothing methods (Reviewer wxE7, W2/Q4) - Discuss why the EMS of the unconditional model can be applied to guided sampling (Reviewer dqUr/wxE7, Q1) - Discuss why 2nd-order is better for the conditional setting (Reviewer dqUr, Q2) - Discuss the practical value of the 15\~30% speed-up and the improvement at 5\~10 NFE (Reviewer oHv2, Q1) We find there are common misunderstandings about our paper, and we'd like to clarify here again. ### Response for common questions *Q1: Extra computation/memory costs of the proposed method? (from dqUr, oHv2, wxE7)* **A**: **The extra memory cost is rather small**. The extra coefficients are discretized and computed at $N$ timesteps, each with a dimension $D$ the same as the diffused data. The extra memory cost is $O(ND)$, including the precomputed terms in Appendix C.1.2, and is rather small compared to the pretrained model (e.g. only ~125M in total on Stable-Diffusion, compared to ~4G for the model itself). 
**The pre-computation time for estimating the EMS is rather short**. The EMS introduced by our method can be effectively estimated on around 1k datapoints within hours, which is rather short compared to the long training / distillation time of other methods. Moreover, the integrals of these extra coefficients are just some vector constants which can be pre-computed **within seconds**, as shown in Appendix C.1.2. The precomputing is done only once before sampling. **The extra computational overhead during sampling is negligible**. Once we obtain the estimated EMS and their integrals at discrete timesteps, they can be regarded as constants. Thus, during the subsequent sampling process, **the computational overhead is the same as previous training-free methods** (such as DPM-Solver++) with negligible difference (Appendix E). *Q2: How does EMS apply for different time schedules and guided sampling? (from dqUr, yHtS, wxE7)* **A**: The pre-computed EMS can be applied to any time schedule during sampling without re-computing the EMS. Besides, we compute the EMS on unconditional models and it can be used for a wide range of guidance scales (such as cfg=7.5 in Stable Diffusion). In short, the EMS is flexible and easy to adopt in downstream applications. - **Time schedule**: The choice of $N,K$ in the EMS is **disentangled** from the timestep scheduler in sampling. Once we have estimated the EMS at $N$ (e.g., 1200) timesteps, they can be flexibly adapted to any schedule (uniform logSNR/uniform t...) during sampling, by mapping the actual timesteps during sampling to the $N$ bins. For a different time schedule, we only need to re-precompute $E_{\lambda_s,\lambda_t}^{(k)}$ in Appendix C.1.2, and the time cost is within seconds. - **Guided sampling**. We compute the EMS on the unconditional model for all guided cases. 
Empirically, the EMS computed on the model without guidance (the unconditional part) performs more stably than those computed on the model with guidance, and can accelerate the sampling procedure over a wide range of guidance scales (including the common guidance scales used in pretrained models). We think that the unconditional model contains some common information (image priors) for all conditions, such as color, sketch, and other image patterns. Extracting them helps correct some common biases such as shallow color, low saturation level and lack of detail. In contrast, the conditional model is dependent on the condition and has large variance. - In addition, EMS computed on models without guidance cannot work for extremely large guidance scales (e.g., cfg scale 15 for Stable Diffusion), since in this case the condition has a large impact on the denoising process. Note that at these extremely large scales, the sampling quality is very low (compared to cfg scale 7.5) and they are rarely used in practice. Therefore, our proposed EMS without guidance is suitable enough for common applications with common guidance. We think the quality of our work has been improved a lot. We welcome further questions. Pdf: /pdf/82dddf1b736b933c8bb776fe8c4281d5a778f18d.pdf
NeurIPS_2023_submissions_huggingface
2,023
Inner Product-based Neural Network Similarity
Accept (poster)
Summary: This paper studies neural network similarity, which is an important problem in many areas such as federated learning and continual learning. In this paper, the authors develop a new method to reduce NN representational similarity to filter subspace distance. Moreover, they present the effectiveness and efficiency of their algorithms in theory and practice. Strengths: 1. The problem studied in this paper is fundamental. 2. The algorithm mentioned in this paper is simple and effective. 3. The experimental results show that the method developed in this paper is effective. 4. The authors present the effectiveness of the algorithm in theory and practice. Weaknesses: 1. The solution mentioned in this paper can be used only for CNNs. 2. The experimental section can be improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is it possible to extend this algorithm to other models? I am interested in whether this method can be extended to transformer-based models. 2. As for the experimental section, is it possible to add experimental results over CIFAR-10, SVHN, and ImageNet? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: If the authors can solve the issues mentioned in the Questions, I am willing to improve the rate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive comments. **Q1:** Is it possible to extend this algorithm to other models, such as transformer-based models? **A:** We can extend our method to other models, including transformer-based models. The proposed method relies on the decomposition of weight matrices, sharing components with a substantial number of parameters among models while finetuning the remaining parameter-efficient elements. This approach can thus be effectively applied to, for example, linear layers, the major components of transformer-based models (Feedforward, Query, Key, and Value matrices). Specifically, by utilizing our proposed method in Section 2.2, we can decompose the weight of the linear layer $W \in \mathbb{R}^{c_{out} \times c_{in}}$ into two components: $W = \text{reshape}(\alpha \times D)$, where the atom coefficients are represented by $ \alpha \in \mathbb{R}^{c'\_{out} \times c'\_{in} \times m} $, and the atoms by $D \in \mathbb{R}^{m \times k \times k}$, with $c_{out} = c'\_{out} \times k$ and $c_{in} = c'\_{in} \times k$. For instance, assuming $k=4, m=9$, the weight matrix $W \in \mathbb{R}^{256 \times 64}$ is decomposed into $\alpha \in \mathbb{R}^{64\times 16\times 9}$ and $D\in \mathbb{R}^{9\times 4\times 4}$. Moreover, we finetune $D$ while fixing $\alpha$ when finetuning transformer models. Following the experimental setup in Section 3.4, we apply our method to continual learning with transformer-based models. The corresponding results are presented in the table below. | | CIFAR100 | MFLOPs | Time (s) | GPU Memory (MB) | |---------|:------:|---------|:------:|---------| | ViT (base) | 75.17 $\pm$ 0.21 | - | - | - | | +CCA | 77.28 $\pm$ 0.09 | 4.13 $\times 10^7$ | 46.84 | 1181 | | +CKA | 76.67 $\pm$ 0.13| 1.81 $\times 10^5$ | 35.08 | 1209 | | +Ours | **78.16 $\pm$ 0.05** | **0.015** | **0.35** | **0** | Our method also exhibits a strong correlation with CCA/CKA in transformer-based models. 
The correlation between our method and CCA/CKA is shown in the table below, and in Figure 1 (a)(b) in the attached PDF of the general response. Our experiment is built on top of the code in [1]. | | Correlation | |---------|:------:| | CCA | 0.9443 | | CKA | 0.9079 | Moreover, by adopting the experimental setting outlined in Section 3.3, our method enables the measurement of task similarity using transformer-based models. In this specific experiment, we employed 100 models, and the CIFAR-100 dataset was divided into 20 subtasks, with each subtask containing 5 classes. Each subtask was shared by 5 models. As demonstrated in Figure 1 (c) in the attached PDF of the general response, every group of 5 models that share the same task shows a notably high similarity among themselves. The results indicate that our method can effectively measure the similarities of transformer-based models. On the other hand, since the above decomposition equation retains the same form as the convolution considered in the paper, our theoretical results can also be extended to transformer models with linear layers. However, a rigorous application of our method to transformer-based models would require further investigation, which we leave for future work. **Q2:** Is it possible to add experimental results apart from CIFAR, SVHN, and ImageNet? **A:** We evaluate the performance of our method on three different datasets: CelebA[2], Oxford Flower[3], and Food-101[4]. The experimental setup remains consistent with the one described in Section 3.1 of the paper. As shown in the table below, our method consistently shows high correlations with CCA and CKA on the three datasets. | | CelebA [2] | Flower [3] | Food [4] | |---------|:------:|--------|:----:| | CCA | 0.9014 | 0.9155 | 0.9901 | | CKA | 0.8766 | 0.8831 | 0.9266 | Reference: 1. https://github.com/kentaroy47/vision-transformers-cifar10 2. Liu, Ziwei and Luo, Ping and Wang, Xiaogang and Tang, Xiaoou. "Deep Learning Face Attributes in the Wild." 
Proceedings of International Conference on Computer Vision (ICCV), 2015. 3. Nilsback, M-E. and Zisserman, A. "Automated flower classification over a large number of classes." Proceedings of the Indian Conference on Computer Vision, Graphics and Image Processing, 2008. 4. Bossard, Lukas and Guillaumin, Matthieu and Van Gool, Luc. "Food-101 -- Mining Discriminative Components with Random Forests." European Conference on Computer Vision, 2014.
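The linear-layer decomposition in the Q1 response above (with the example shapes $k=4$, $m=9$, $W \in \mathbb{R}^{256 \times 64}$) can be shape-checked with a small numpy sketch. The einsum/reshape layout below is an illustrative assumption, not the authors' actual implementation:

```python
import numpy as np

# Hypothetical shapes from the rebuttal: k = 4, m = 9, W in R^{256 x 64}.
k, m = 4, 9
c_out, c_in = 256, 64
co, ci = c_out // k, c_in // k              # c'_out = 64, c'_in = 16

alpha = np.random.randn(co, ci, m)          # atom coefficients (kept fixed)
D = np.random.randn(m, k, k)                # filter atoms (finetuned)

# Contract over the m atoms, then reshape back to the full weight matrix.
# The exact transpose/reshape layout is an assumption for illustration.
W = np.einsum('oim,mpq->opiq', alpha, D).reshape(c_out, c_in)

assert W.shape == (c_out, c_in)
# Only the atoms are finetuned: 144 parameters instead of the 16384 in W.
assert D.size == m * k * k == 144
assert W.size == 16384
```

This also illustrates the parameter-efficiency claim: finetuning $D$ alone touches 144 parameters, versus 16384 for the full weight matrix.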
Summary: In their paper, the authors introduce a novel approach to significantly decrease the computational cost of representational similarity analysis in CNNs by transitioning it to filter subspace distance evaluation. Their proposed filter subspace-based similarity is both theoretically and empirically demonstrated to display a robust linear correlation with prevalent probing-based metrics. Strengths: 1. The approach offers significant improvements in efficiency and robustness. 2. The paper is very clear, especially the methodology part is clearly written and easy to understand. 3. The proposed method is simple, making it easy to implement. Weaknesses: 1. Could you elaborate on how Assumption 2.6 is applicable to complex real-world datasets? It appears to be a strong assumption. For instance, consider a simple matrix $$\begin{pmatrix}0 & 0\\\ 1 & 0.9\end{pmatrix}$$, the assumption does not hold true. 2. No obvious weakness in my mind now. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. On line 115, is m a hyperparameter? If yes, is the model performance sensitive to the selection of m? 2. In Figure 2b, it appears that the points located in the center are less correlated (or more distant from the diagonal blue line) than the points positioned at the ends. Could you explain why this might be? 3. With regard to Proposition 2.1, would the inequality become significantly looser if many extreme values exist in X? 4. On line 148, the proposed method suggests averaging layer-wise similarities for network-wise similarity. Is it possible that some layers might be more significant than others? For instance, should the final few layers carry more importance? 5. The paper [1] outlines certain limitations of CCA/CKA in Section 3, for example, randomly permuting the order of pixels (either at the input or in the latents) is an orthogonal transform, and thus does not affect the CKA. 
However, this destroys the spatial structure of the input, and intuitively should affect the “representation quality.” Does this apply to the proposed method? [1] Bansal, Yamini, Preetum Nakkiran, and Boaz Barak. "Revisiting model stitching to compare neural representations." Advances in neural information processing systems 34 (2021): 225-236. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors do not discuss the limitations of their work. Based on the information presented, it does not appear that this work has any negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive comments. **W1:** Could you elaborate on how Assumption 2.6 is applicable to complex real-world datasets? Consider a simple matrix $\begin{pmatrix} 0 & 0 \\\\ 1 & 0.9 \end{pmatrix}$, the assumption does not hold true. **A:** Our assumption states that different channels in the features are weakly correlated. Specifically, each diagonal component in $Z_u^T Z_u \in \mathbb{R}^{c\times c}$ represents the variance within a channel, and each off-diagonal element indicates the correlation between two channels. Therefore, the 2x2 matrix shown by the reviewer may not be valid in practice, as the diagonal elements are unlikely to be 0, and there is little chance that two channels are completely correlated. In complex datasets, it may be hard to have features that satisfy the assumption directly. As shown in the Appendix, lines 628-631, we further propose a regularization term to reduce channel-wise correlation, which improves the correlation with CCA in Figure 8(b) and matches our theoretical findings. **Q1:** On line 115, is m a hyperparameter? If yes, is the model performance sensitive to the selection of m? **A:** Yes, $m$ is a hyperparameter. In our paper, we choose $m=9$. The similarity of models remains robust across different choices of $m$. In this experiment, we follow the setting in Section 3.1. Specifically, we train 10 models separately on 10 sub-tasks of the CIFAR-100 dataset. In the table below, we present the correlation between CCA and the filter subspace similarity with different choices of $m$; the standard deviation of the correlations is only 0.0027. | m | 3 | 6 | 9 | 12 | |---------|:------:|---------|:------:|---------| | correlation w/ CCA | 0.9313 | 0.9289 | 0.9327 | 0.9366 | **Q2:** In Figure 2b, it appears that the points located in the center are less correlated (or more distant from the diagonal blue line) than the points positioned at the ends. Could you explain why this might be?
**A:** We hypothesize that when networks are more intrinsically dissimilar, the distance between atoms $D_u, D_v$ becomes larger, i.e., $\text{cos}(D_u, D_v)$ becomes smaller. In this case, the linear correlation between our method and CCA would be slightly affected, as stated in line 195. Yet, we still show a reasonably strong correlation with CCA in both high-similarity and low-similarity cases, as shown in the following table, where the similarity is in the range [0.4, 0.5]. | | Correlation | |---------|:------:| | CCA | 0.9231 | | CKA | 0.9178 | **Q3:** With regard to Proposition 2.1, would the inequality become significantly looser if many extreme values exist in X? **A:** As we only consider real image datasets, the values in $X$ are normalized and bounded. Thus, it is unlikely to contain extreme values. **Q4:** On line 148, the proposed method suggests averaging layer-wise similarities for network-wise similarity. Is it possible that some layers might be more significant than others? For instance, should the final few layers carry more importance? **A:** Since different layers can carry different information, we agree that there could be a more effective weighting scheme to aggregate layer-wise similarities. Since it is likely non-trivial to decide weights for different layers, we leave it for future study. **Q5:** The paper [1] outlines certain limitations of CCA/CKA in Section 3, for example, randomly permuting the order of pixels (either at the input or in the latents) is an orthogonal transform, and thus does not affect the CKA. However, this destroys the spatial structure of the input, and intuitively should affect the “representation quality.” Does this apply to the proposed method? **A:** Our method is immune to those "attacks" on CCA/CKA as we don't rely on probing data to compute feature similarity. Our method focuses on the intrinsic similarity of models via the inner product of decomposed model parameters.
It is agnostic to distortion on probing data. We show similar experiments as [1] in Figure 3(b), where CCA and CKA are severely affected by the choice of the probing data and our atom-based similarity remains robust. --- Rebuttal Comment 1.1: Comment: The authors addressed most of my concerns. I would keep the score as "Weak Accept" but increase the confidence to 3. --- Reply to Comment 1.1.1: Comment: Thanks for the positive feedback and acknowledgment. We sincerely appreciate your time and efforts.
Summary: This paper presents a new approach for measuring the similarity between neural network models based on a new efficient metric defined layer-wise on filter atoms. In particular, the metric is defined as the cosine similarity between these filter atoms, which can be implemented as the normalised inner product of the filter atom vectors. It is shown that 1) the probing-based similarity CCA is upper-bounded by the proposed metric up to a scaling factor, where both CCA and the factor are data-dependent, and 2) under the assumption that the features in a layer have low correlation, there is an approximately linear relation between CCA and the proposed metric. Experimental results show that the proposed approach can be effectively used to study the learning dynamics of deep neural networks. The method has been further applied to Personalized Federated Learning and Continual Learning and improves the results over CCA and CKA similarity metrics. Strengths: - An original theoretically-grounded approach for measuring layer-wise similarity between two neural network models - Evaluation on different applications and good results Weaknesses: - Comparison only with CCA and CKA-based similarity metrics - Assumption 2.6 is quite strong. Although your empirical results seem to corroborate this assumption, in practice I believe there can be strong correlations between different features within a layer. - Only AlexNet and VGG are used (apart from the experiment showing the correlation between Grassmann similarity and the proposed filter subspace similarity). It would have been interesting if the proposed method also performed well for other models, e.g. ResNet models. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - In the experiments, the overall similarity between two neural networks is computed as the average over the layer-wise similarities. However, intuitively not all layers are equally important for a given task.
Would it make sense to use different weights for different layers (i.e. a weighted average)? Would that change the overall results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your supportive comments. **W1:** Comparison only with CCA and CKA-based similarity metrics **A:** We would like to emphasize that CCA and CKA are two representative feature-based network similarity measures adopted in many works [1,2,3]. Showing a strong linear correlation with CCA and CKA is significant for demonstrating the effectiveness of our method. Reference: 1. Raghu, Maithra, et al. "Do vision transformers see like convolutional neural networks?." Advances in Neural Information Processing Systems, (2021). 2. Neyshabur, Behnam, Hanie Sedghi, and Chiyuan Zhang. "What is being transferred in transfer learning?." Advances in neural information processing systems, (2020). 3. Yang, Xingyi, et al. "Deep model reassembly." Advances in neural information processing systems, (2022). **W2:** Assumption 2.6. is quite strong. Although your empirical results seem to corroborate this assumption, in practice I believe there can be strong correlations between different features within a layer. **A:** As shown in the Appendix, lines 628-631, we further design a regularization loss to reduce the correlation between features, which improves the correlation between our method and CCA in Figure 8(b) and matches our theoretical findings. **W3:** Only AlexNet and VGG are used (apart from the experiment showing the correlation between Grassmann similarity and the proposed filter subspace similarity). It would have been interesting, if the proposed method also performs well for other models, e.g. ResNet models. **A:** As shown in Figure 2(b) and Figure 2 (Table), we show strong correlations between our method and CCA/CKA with ResNet-18. Besides, we also show the effectiveness of our method with ViT in our general response **GR1**. Following the experimental setup in Section 3.4, we apply our method to continual learning with transformer-based models. The corresponding results are presented in the table below.
| | CIFAR100 | MFLOPs | Time (s) | GPU Memory (MB) | |---------|:------:|---------|:------:|---------| | ViT (base) | 75.17 $\pm$ 0.21 | - | - | - | | +CCA | 77.28 $\pm$ 0.09 | 4.13 $\times 10^7$ | 46.84 | 1181 | | +CKA | 76.67 $\pm$ 0.13| 1.81 $\times 10^5$ | 35.08 | 1209 | | +Ours | **78.16 $\pm$ 0.05** | **0.015** | **0.35** | **0** | **Q1:** In the experiments, the overall similarity between two neural networks is computed as the average over the layer-wise similarities. However, intuitively not all layers are equally important for a given task. Would it make sense to use different weights for different layers (i.e. a weighted average)? Would that change the overall results? **A:** We agree that there would be a more effective weighted scheme to aggregate layer-wise similarities. Since it is likely non-trivial to decide weights for different layers, we leave it for future study.
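The correlation numbers reported throughout these responses are plausibly Pearson correlations computed over vectors of pairwise model similarities (one entry per model pair, under each metric). A minimal sketch of how such a number could be computed; the function name is illustrative, not the authors' code:

```python
import numpy as np

def metric_correlation(sim_ours, sim_probe):
    """Pearson correlation between two vectors of pairwise model similarities,
    e.g. filter-subspace similarity vs. CCA over the same model pairs."""
    return float(np.corrcoef(sim_ours, sim_probe)[0, 1])

# A perfectly linear relation between the two metrics yields correlation 1.
ours = np.array([0.41, 0.55, 0.72, 0.90])
cca = 0.5 * ours + 0.1
assert np.isclose(metric_correlation(ours, cca), 1.0)
```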
Summary: This paper proposes an approach for evaluating neural network similarity by decomposing convolution layers into linear combinations of atom filters. The basic idea derives from [33], and the paper extends the approach, originally oriented toward continual learning (CL), to federated learning (FL). The paper provides both empirical and theoretical evidence and shows the time-efficiency of the proposed method. However, the application scenarios are relatively limited. Strengths: S1. The paper extends [33], which is originally oriented toward continual learning (CL), to federated learning (FL), and compares with CCA and CKA. S2. In limited cases, the proposed method is more efficient than feature-based similarity evaluation (e.g., CCA [40] and CKA [19]) S3. In limited cases, the proposed method can achieve more accurate similarity evaluation and can provide theoretical justifications. Weaknesses: W1. The proposed method has strong restrictions in limited cases. W1.1 The proposed method is restricted to only convolutional layers, and it further needs different neural networks to use the same atom coefficients. That means the proposed method is hard to use in the similarity evaluation of common neural networks. W1.2 The proposed method is restricted to cases where the model architecture must be identical and the model parameters must be similar. For example, in Fig. 2 (Correlation between filter subspace similarity and other approaches), all the three methods achieve a high estimated similarity up to 0.9. What about the cases of lower similarity? The author should present more results for the lower estimated similarity. Thus, it’s unclear whether the proposed method is as flexible as the baselines CCA and CKA, which can be used to compare neural networks with different depths or trained on different datasets. Can the proposed method only be used in FL and CL, where the models as well as the parameters are inherently similar?
W1.3 The theoretical evidence in this paper is limited to single-layer convolution, in particular, without nonlinearity. Proposition 2.4 (line 152) is restricted to orthogonal matrices and the case of $k^2 = m$. W2 In FL/CL, the proposed method is time-efficient, but that time is not a severe bottleneck in FL/CL. Furthermore, by nature, the proposed method vectorizes the model parameters and computes the similarities of vectors, which are widely used in multi-task learning. W3. The proposed method has already been proposed in [33]. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. In the experiment, on which layers are CKA and CCA computed? How does the proposed method merge the results of different layers? By concatenation or summation? This paper only discusses the case of single-layer convolution. Q2. What’s the essential difference between the proposed method and directly computing the cosine similarity and inner product of parameters? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments. We address all your concerns in the following. **W1.1** The proposed method is restricted to only convolutional layers, and it needs different NNs using the same atom coefficients. **A:** Although our method focuses on the convolutional filter subspace, it can also be easily extended to other types of layers, e.g., linear layers. As explained in our general response **GR1**, our approach can be effectively extended to **transformer-based models** in continual learning experiments while still maintaining a strong correlation with CCA/CKA. We also would like to state that fixing a large number of parameters of a pretrained model while finetuning a small portion is a common paradigm in the field of modeling visual knowledge, e.g., images. The shared parameters $A_i$ can be viewed as the fixed part of a model pretrained on a significant amount of data, and $B_i$ is a small set of parameters tuned to fit downstream tasks. In other words, we can potentially obtain pretrained atom coefficients on large datasets and finetune multiple models on downstream tasks for better performance. Therefore, we argue that, although a new filter subspace view of NNs is adopted in the paper, sharing atom coefficients across models does not deviate from the above common paradigm, and poses no major limitation. **W1.2** The author should present more results for the lower estimated similarity. Can the proposed method only be used in FL and CL, where the models as well as the parameters are inherently similar? **A:** Our method also applies to lower-similarity cases. We conduct an additional experiment to show the correlation between our method and CCA/CKA in low-similarity cases. In our experiment, we train neural networks on two distinct datasets, Oxford Flower and Food-101. For each dataset, we train 10 models and compute the similarity between models trained on different datasets.
The results show a low similarity range of [0.4, 0.5]. In this case, our similarity measure still exhibits a consistently high correlation with CCA/CKA, as shown in Figure 2 in the attached PDF of our general response. | | Correlation | |---------|:------:| | CCA | 0.9231 | | CKA | 0.9178 | When considering the network architecture, it is important to highlight the increasing popularity of the pretraining-finetuning paradigm today, as discussed above, where the network structure remains consistent. In our framework, we have pretrained atom coefficients and fine-tuned the filter atoms with various architectures, demonstrating the versatility of our method, which can be applied not only to FL and CL but also to this commonly used setting. **W1.3** The theoretical evidence in this paper is limited to single-layer convolution without nonlinearity. Proposition 2.4 is restricted to orthogonal matrices and the case of $k^2 = m$. **A:** In Proposition 2.4, $D_u, D_v$ are not required to be square matrices. As shown in line 502 in the Appendix, by saying 'orthogonal matrices' we actually assume the atom matrices satisfy $D_u^T D_u=I$, i.e. their columns are orthonormal. We will revise it to 'Assume $D_u, D_v$ satisfy $D_u^T D_u=D_v^T D_v=I$' in Proposition 2.4 for clarification. Note that while orthonormality makes the equality in Proposition 2.4 hold, a strong correlation can still be observed without the orthonormality requirement, as shown in Figure 2 Table. We can extend our analysis to a non-linear layer by approximating the non-linearity. Consider the non-linear layer $Z=\sigma(\alpha X D)$; we can approximate it with two linear terms as $Z=\alpha X D + \alpha X' D=\alpha \tilde{X} D$, where $X'=\alpha^{+}(\sigma(\alpha X D) - \alpha X D) D^+$, and $\alpha^{+}, D^+$ denote the pseudoinverses of $\alpha, D$. For the derivations of Theorems 1 and 2, we can simply replace $X$ with $\tilde{X}$ and the rest remains the same. In this way, our main theoretical results still hold.
Finally, we simplify our analysis with a single-layer setting, yet it can be easily extended to multi-layer settings. **W2** In FL/CL, the proposed method is time-efficient, but... **A:** It is infeasible to directly assess network similarities by vectorizing their weights and computing the inner product, since a permutation matrix is required for alignment [1]. Our method leverages the decomposition structure and can directly evaluate network similarities with filter atom inner-products. Our method is not restricted to FL/CL tasks. We only adopt FL/CL to demonstrate our method in multi-model scenarios. In a large multi-model system with frequent needs to compare similarities, e.g., for deciding whether to train a new model or reuse an old one given a new task, the proposed method can dramatically reduce the latency and costs of model comparisons. **W3** The proposed method has already been proposed in [33]. **A:** [33] proposes to use the Grassmann distance of filter atoms to assess network similarity as a heuristic approach with **no theoretical justification**. In this work, we simplify the atom-based similarity to inner-product computation and provide both theoretical and empirical results to demonstrate its equivalence with the popular feature-based similarities CCA/CKA. **Q1.** How does the proposed method consider merging the result of different layers? **A:** As we explained in lines 147-148, we simply average layer-wise similarities for the network-wise similarity. **Q2.** What’s the essential difference between the proposed method and directly computing the cosine similarity and inner product of parameters? **A:** As explained in our general response **GR2**, there is a substantially low correlation between CCA/CKA and the weight inner-product, suggesting that directly using the inner product of parameters is unlikely to effectively measure model similarity. Reference: 1. Ainsworth, Samuel, et al.
"Git Re-Basin: Merging Models modulo Permutation Symmetries." ICLR. 2022. --- Rebuttal Comment 1.1: Comment: The reviewer thanks the authors for the clarification. As far as my concerns go, the theoretical contributions are limited by nature; however, I improve the rating of this paper from 4 to 5 because of the extension to FL/CL. --- Reply to Comment 1.1.1: Comment: We sincerely thank reviewer 8het for the time and effort invested in evaluating our work.
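The linearization step in the W1.3 response above can be sanity-checked numerically: in the special case where $\alpha$ and $D$ are square and invertible, the pseudoinverses are exact inverses and $Z = \alpha(X + X')D$ holds exactly. A sketch with hypothetical random matrices, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
alpha = rng.standard_normal((n, n))   # square, almost surely invertible
D = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))

relu = lambda A: np.maximum(A, 0.0)
Z = relu(alpha @ X @ D)               # non-linear layer Z = sigma(alpha X D)

# X' = alpha^+ (sigma(alpha X D) - alpha X D) D^+
X_prime = np.linalg.pinv(alpha) @ (Z - alpha @ X @ D) @ np.linalg.pinv(D)

# In the invertible case the two linear terms reproduce Z exactly.
assert np.allclose(alpha @ (X + X_prime) @ D, Z)
```

For non-square $\alpha$ and $D$ the reconstruction is only a projection onto the relevant column/row spaces, so the identity becomes an approximation, consistent with the rebuttal's phrasing.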
Rebuttal 1: Rebuttal: ### General Responses *We provide figures for the additional required experimental results in the attached PDF.* **GR1: Extend our algorithm to transformer-based models.** Although our method focuses on the convolutional filter subspace due to the highly compact size of the resulting filter subspace elements (atoms), it can also be easily extended to other types of layers, e.g., linear layers. In essence, our findings rely on: 1) weights $W_i$ are decomposed into two components, $W_i=A_i\times B_i$; 2) in a group of models, one component $B_i$ is fine-tuned while the other component $A_i$ remains fixed. For example, one can apply our method to transformer-based models with decomposed weight matrices in linear layers (feedforward, query, key, and value matrices). Specifically, by utilizing our proposed method in Section 2.2, we can decompose the weight of a linear layer $W \in \mathbb{R}^{c_{out} \times c_{in}}$ into two components: $W = \text{reshape}(\alpha \times D)$, where the atom coefficients are represented by $\alpha \in \mathbb{R}^{c'\_{out} \times c'\_{in} \times m}$, the atoms by $D \in \mathbb{R}^{m \times k \times k}$, and $c_{out} = c'\_{out} \times k$ and $c_{in} = c'\_{in} \times k$. For instance, assuming $k=4$ and $m=9$, a weight matrix $W \in \mathbb{R}^{256 \times 64}$ is decomposed into $\alpha \in \mathbb{R}^{64\times 16\times 9}$ and $D\in \mathbb{R}^{9\times 4\times 4}$. Moreover, we finetune $D$ while fixing $\alpha$ when finetuning transformer models. Following the experimental setup in Section 3.4, we apply our method to continual learning with transformer-based models. As shown in the table below, our approach can be effectively extended to transformer-based models in continual learning experiments while still maintaining a strong correlation with CCA/CKA.
| | CIFAR100 | MFLOPs | Time (s) | GPU Memory (MB) | |---------|:------:|---------|:------:|---------| | ViT (base) | 75.17 $\pm$ 0.21 | - | - | - | | +CCA | 77.28 $\pm$ 0.09 | 4.13 $\times 10^7$ | 46.84 | 1181 | | +CKA | 76.67 $\pm$ 0.13| 1.81 $\times 10^5$ | 35.08 | 1209 | | +Ours | **78.16 $\pm$ 0.05** | **0.015** | **0.35** | **0** | | | Correlation | |---------|:------:| | CCA | 0.9443 | | CKA | 0.9079 | Moreover, by adopting the experimental setting outlined in Section 3.3, our method enables the measurement of task similarity using transformer-based models. In this specific experiment, we employed 100 models, and the CIFAR-100 dataset was divided into 20 subtasks, with each subtask containing 5 classes. Each subtask was shared by 5 models. As demonstrated in Figure 1 (c) in the attached PDF, every group of 5 models that share the same task shows a notably high similarity among themselves. The results indicate that our method can effectively measure the similarities of transformer-based models. On the other hand, since the above decomposition equation remains in the same form as the convolution considered in the paper, our theoretical results can also be extended to transformer models with linear layers. However, a rigorous application of our method to transformer-based models would require further study, which we leave for future work. **GR2: The fundamental difference between our method and directly computing the cosine similarity or inner product of weight parameters.** It is infeasible to directly assess network similarities by computing the cosine similarity or inner product of parameters, since a permutation matrix is required for alignment [1]. Our method leverages the decomposition structure and can directly evaluate network similarities with filter atom inner-products. Here we present an ablation study to show the correlation between CCA/CKA and both the weight inner-product and our atom inner-product. We adopt the same settings in Section 3.1.
The table presented below demonstrates that there is a substantially low correlation between CCA/CKA and the weight inner-product, suggesting that directly using the inner product of parameters is unlikely to effectively measure model similarity. The results are also presented in Figure 3 in the attached PDF. | | Inner-product of weights | Inner-product of atoms | |---------|:------:|:------:| | Correlation w/ CCA | 0.0803 | **0.9327** | | Correlation w/ CKA | 0.2924 | **0.9550** | Reference: 1. Ainsworth, Samuel, et al. "Git Re-Basin: Merging Models modulo Permutation Symmetries." International Conference on Learning Representations. 2022. Pdf: /pdf/ea9dc0df77e804d964f5f71b4455bf0138d78c95.pdf
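The atom-based similarity contrasted above with the raw weight inner product amounts to a normalized inner product (cosine similarity) of the much smaller atom tensors, averaged over layers. A minimal sketch; the function names are illustrative, not the authors' API:

```python
import numpy as np

def atom_similarity(D_u, D_v):
    """Cosine similarity (normalized inner product) of two layers' filter atoms."""
    u, v = D_u.ravel(), D_v.ravel()
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def network_similarity(atoms_u, atoms_v):
    """Average the layer-wise atom similarities to get a network-wise score."""
    return float(np.mean([atom_similarity(a, b) for a, b in zip(atoms_u, atoms_v)]))

rng = np.random.default_rng(1)
atoms = [rng.standard_normal((9, 4, 4)) for _ in range(3)]  # 3 layers, m=9, k=4
# A model compared with itself has similarity 1 in every layer.
assert np.isclose(network_similarity(atoms, atoms), 1.0)
```

Since the atoms for each layer hold only $m \times k \times k$ values, this comparison touches orders of magnitude fewer numbers than probing-based CCA/CKA, which is consistent with the FLOPs/time gap in the table above.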
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Reinforcement-Enhanced Autoregressive Feature Transformation: Gradient-steered Search in Continuous Space for Postfix Expressions
Accept (spotlight)
Summary: Feature transformation is an effective way to improve downstream tasks. This paper aims to improve the search efficiency of the optimal feature space while ensuring the stability and robustness of the transformation. The authors formulate the discrete Automated Feature Transformation (AFT) problem as a continuous optimization task and propose a reinforcement-enhanced autoregressive feature transformation framework (MOAT). MOAT implements four steps to advance efficiency and robustness. Extensive experiments and case studies are performed to demonstrate the effectiveness and robustness of the proposed method. Strengths: This paper provides a clear and detailed illustration of the proposed framework, especially the modules in the framework. The authors conduct extensive experiments to demonstrate the superiority of the framework, including ablation studies for each module, to demonstrate the efficiency and robustness of the framework. Weaknesses: Missing reference to experimental results on page 8, Data collection and augmentation section. Several methods have already used reinforcement learning to improve search efficiency (e.g., in data augmentation, NAS), and the framework proposed in this paper lacks novelty to some extent. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: (1) MOAT uses a multi-stage process to obtain convincing performance. Is it possible to learn an optimal feature space end-to-end using reinforcement learning? (2) Can the optimal feature space searching problem be directly solved by representation learning in continuous space, rather than by combining multiple operations in discrete space? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: This paper pays more attention to framework design and lacks some theoretical perspectives. In addition, it is desirable to turn the framework proposed in this paper into an end-to-end training process. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer GfrD, Thank you for your detailed feedback and observations on our work. We highly appreciate the recognition of the strengths of our paper, and we acknowledge the points raised in the weaknesses and questions sections. We will address each of them comprehensively: **1. Regarding the End-to-end Model Architecture:** Response: The end-to-end reinforcement learning process for optimal feature space learning has been explored in works such as GRFG, NFS, and other reinforced methodologies. However, our extensive research and experimentation showed that while such an approach promises to streamline training, it poses notable challenges, particularly regarding training stability and convergence. To illustrate, consider GRFG, which, despite its architecture mirroring the RL-based data collector, demands over 3000 epochs to converge, often settling for sub-optimal results. In stark contrast, our MOAT framework swiftly learns a continuous search space from merely the initial 512 search records of the RL-based data collector, and generates more stable, effective, and efficient feature transformation sequences. **2. Regarding the Merit of Representation Learning:** Response: Representation learning is a cornerstone of modern machine learning, offering a transformative approach to feature extraction and data interpretation. At its core, representation learning provides an avenue to automatically discover the representations needed for data analysis tasks, bypassing manual feature engineering, which has historically been time-consuming and domain-specific. In our study, we search for another way to extract feature information from the dataset, which is the core of data-centric AI, bringing automation into feature engineering. Our approach readily applies to small datasets, where the amount of data might be insufficient to train a deep neural network for representation learning.
In contrast, our approach can generate meaningful features by conducting mathematical feature-feature crossing. The generated features are also fully traceable: the mathematical transformations that produced them can be summarized as domain knowledge. **3. Regarding Novelty and Contribution:** Response: Our computational insights and technical contributions go beyond a combination of GRFG and DIFER. The following implications and findings are beneficial for future research: (1) We highlight a postfix expression embedding and generation perspective for feature transformation. (2) We demonstrate that reinforcement intelligence can serve as a self-optimizing training data collector that explores high-quality feature transformations, evaluates downstream ML task accuracy, and automates training data collection. (3) We find that integrating a transformation sequence reconstruction loss with a downstream task accuracy estimation loss better measures the effectiveness of an embedding space and strengthens the denoising capability of the gradient-based search for the optimal transformation embedding. (4) We encode a feature transformation operation sequence as a single postfix expression so that the generative model can capture feature-feature interactions and automatically identify the dimensionality of the new feature space and the complexity of the transformation operation sequence. (5) Our framework automatically determines the dimension of the generated feature space by examining the special token <EOS>, in contrast to DIFER's requirement of a manual setting. These insights are shown to improve the reliability and performance of automated feature engineering. **4. Regarding the Missing Reference on Page 8:** Response: We apologize for this oversight. We will fix it in the final version.
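As a minimal, purely illustrative sketch of the gradient-steered search of the optimal transformation embedding described in this rebuttal: the toy quadratic `evaluator`, its hand-written gradient, and all names below are our assumptions standing in for the learned evaluator network, not the paper's actual model.

```python
import numpy as np

# Hypothetical sketch: gradient-steered search in a learned embedding space.
# The "evaluator" stands in for a differentiable performance predictor; we
# ascend its gradient starting from a seed embedding of a collected sequence.

TARGET = np.array([1.0, -2.0, 0.5])  # assumed optimum of the toy evaluator

def evaluator(z):
    """Toy differentiable performance predictor over embeddings (peaks at TARGET)."""
    return -np.sum((z - TARGET) ** 2)

def evaluator_grad(z):
    """Gradient of the toy evaluator with respect to the embedding."""
    return -2.0 * (z - TARGET)

def gradient_steered_search(seed, lr=0.1, steps=200):
    """Move a seed embedding along the evaluator's gradient (gradient ascent)."""
    z = seed.astype(float).copy()
    for _ in range(steps):
        z += lr * evaluator_grad(z)
    return z

seed = np.zeros(3)                    # e.g. the embedding of a collected sequence
best = gradient_steered_search(seed)  # converges toward the evaluator's optimum
```

In the actual framework, the evaluator is a learned network estimating downstream task accuracy, and a decoder reconstructs the transformation operation sequence from the optimized embedding.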
Summary: This paper introduces a succinct yet effective framework for automatic feature transformation. The authors map decision sequences collected from reinforcement learning into a high-dimensional vector space. Optimization is carried out along the gradient direction provided by an evaluator, and a sequence reconstruction component is employed to rebuild the decision sequence of the feature transformation. Overall, the paper's experiments are comprehensive, and the discussions provide excellent validation of various features of the framework. The comparisons with existing models also aptly place this work among the latest contributions in the field. Strengths: 1. The design of the entire framework is clear and comprehensible, and based on the authors' descriptions and experimental discussions, the framework also shows potential for reuse in other fields. 2. The experimental design is detailed and particularly addresses my concerns about the time-consuming nature of using reinforcement learning for data collection. 3. Similar to the second strength described above, a significant issue with using reinforcement learning in feature transformation tasks is that it might require an excessive number of search steps to reach an optimum. However, this framework can use a fixed number of search steps to construct the search space and still achieve respectable downstream task performance, significantly mitigating the problem of uncertain duration in reinforcement-learning-based feature transformation. Weaknesses: 1. There are a few typos in this paper; e.g., in the legend of Figure 4, the model name is wrong. 2. The authors propose an encoder-decoder model based on LSTM for sequence modeling. However, there is no discussion of why this specific method was chosen. It would be beneficial if the authors could explain the rationale behind this choice. Could other sequence modeling methods, such as Transformers, have been considered or used instead? 3.
While the design of the model and the ample experiments mutually validate the feasibility of this framework, more in-depth insights could help establish this work as foundational for broader fields. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The framework's description of the data collection part of the optimization objective lacks clarity. The authors should consider adding more detail in the appendix to better elucidate this aspect. 2. Is there any way to make the framework more generalized, e.g., to extend this method to graph-like or picture-like data? 3. Would it be feasible to design a more generalized operation set that views these mathematical operations as analogous to sub-networks? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer yQs6, We sincerely appreciate the reviewer's time and effort in assessing our paper. We are pleased that the design and potential of the framework were recognized as strengths. We address the highlighted weaknesses and answer the posed questions below to improve our paper's quality and clarity. **1. Regarding Typos and Minor Revisions:** Response: We apologize for the oversight and will promptly correct the typos and the error in the legend of Figure 4 in our next revision. **2. Regarding the Choice of LSTM:** Response: Our framework is a generic feature learning framework, and the sequential model's purpose is to preserve feature learning knowledge in a continuous space, providing a solid foundation for subsequent steps. LSTM, as a vanilla model structure, effectively demonstrates the framework's generalization ability. However, if desired, our framework can also accommodate other sequential models, e.g., Transformers. **3. Regarding the Optimization Objective of the Data Collection Component:** Response: We realize that the data collection part may require more clarity. We will provide a more detailed description in the main paper and, if space-constrained, in the appendix. **4. Regarding the Framework's Generalization to Graph-like and Picture-like Data:** Response: This is an interesting point. Currently, our method is designed with specific data in mind, i.e., tabular data. However, extending it to handle graph-like or picture-like data is possible, and we are in the initial stages of exploring these extensions. **5. Regarding a Generalized Operation Set:** Response: Designing a set of operations that treats mathematical operations as analogous to sub-networks is a compelling idea. While our current framework does not incorporate this, we see its merit and will consider it for future iterations of the framework.
We thank the reviewer once again for their constructive comments and will make the necessary revisions to enhance the paper's quality and impact.
Summary: The authors propose a feature transformation method that reformulates the problem as a continuous space optimization task and utilizes a reinforcement-enhanced autoregressive framework for gradient-steered search. The method involves four steps: (1) reinforcement-enhanced data preparation, (2) feature transformation operation sequence embedding, (3) gradient-steered optimal embedding search, and (4) transformation operation sequence reconstruction. The proposed method is evaluated through extensive experiments and case studies, demonstrating its effectiveness and robustness. Strengths: 1. This paper proposes a novel automated feature transformation framework that has not been explored before, and it achieves significant improvement in performance compared to previous methods. 2. This paper provides clear and detailed explanations of the proposed method's four-step process, making it easy for readers to follow and replicate the method. 3. This paper conducts extensive experiments and case studies to illustrate the effectiveness of the framework from different perspectives. 4. The authors released the related code and data, which can help other researchers reproduce the experiments. Weaknesses: 1. The authors use LSTM as the backbone of their framework. Is there a reason for choosing LSTM? How about other alternatives, such as Transformers? 2. There are some typos in this paper. For instance, in Figure 4 (c) and Figure 4 (d), the name of the model variant should be MOAT^-a instead of GBFG^-a. The authors should fix these typos for consistency. 3. The experimental results show that the RL-based data collector is important, but the description of this part is limited. Can the authors provide more explanation of it? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
It is unclear why the performance of DIFER is not good on OpenML_616 and OpenML_637. Can you provide some explanation for this? 2. After reviewing the appendix, I found that MOAT requires more training time but less inference time than DIFER. Can the authors explain the reason for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer miDK, We want to thank you for acknowledging this paper's novelty, effectiveness, extensive experiments, and reproducibility. **1. Regarding the Inferior Performance of DIFER on the OpenML Datasets:** Response: OpenML_616 and OpenML_637 are two artificially constructed datasets, both of which contain added noise columns. In practice, MOAT obtains high-quality samples through a reinforcement-learning-based feature transformation sequence generator and uses these samples as seeds to produce better feature transformation sequences. As the reinforcement learning agent converges, these artificially added noise columns can be easily removed. In contrast, DIFER selects better features from a set of randomly generated sequences for its search. This can retain noisy sequences, hurting downstream task performance. **2. Regarding the Difference in Training and Inference Time between MOAT and DIFER:** Response: This observation stems from the two frameworks' strategies for searching for optimal feature transformation sequences. DIFER models and searches **each column** as an independent transformation sequence. This means the sequence length it needs to model is much shorter than MOAT's. Still, its search requires a generation pass for each column according to a given sequence-length hyperparameter, resulting in more search operations. MOAT models and searches the transformation operation sequence for the entire dataset, where the model determines the lengths of the different columns, so the time consumption for sequence modeling is higher but the time consumption for the generation process is lower. **3. Regarding the Description of the RL-based Data Collector:** Response: We realized that we had omitted too much in the section describing the optimization objectives of the RL-based data collector, which might lead to misunderstandings. In the final version, we will detail the overall optimization objectives of reinforcement learning. **4.
Some minor revisions:** Response: In the final version, we will address the typo and other minor errors in Figure 4.
Summary: Distinct from existing work, this study collects feature transformation operation sequences through a well-researched reinforcement-learning-based procedure. It then obtains hidden representations of this limited number of sequences in a self-supervised manner and finally optimizes the continuous vector representation, guided by gradients, to generate superior sequences. Specifically, to model the sequences, the authors propose an LSTM-based encoder-decoder-evaluator architecture trained with just a few collected samples. A detailed runtime analysis was conducted in this work to demonstrate its advantages over reinforcement-learning-based (GRFG) and random-generation-based (DIFER) approaches. The experiments, coupled with the authors' analytical discussions, effectively substantiate the viewpoints stated in the paper. Strengths: 1) This paper is well-written and easy to understand, with an appropriate level of detail in describing the methods. It considers the primary issue associated with this framework, time consumption, in its experimental design, effectively addressing the readers' concerns. 2) The paper's model architecture is thoughtfully designed, and the authors articulate their motivations for each component with great clarity. The streamlined nature of the entire framework demonstrates the depth of the authors' understanding and thoughtful consideration of this work. 3) The authors provide thorough experiments and analysis for this work, covering aspects of interest in this field such as performance, robustness, runtime, scalability, memory usage, traceability, and efficacy tests for each component. The visualization of the hidden space well illustrates the reasons for the model's effectiveness in conducting gradient search.
Weaknesses: 1) In Section 2.1 Important Definitions and Section 3.2 Reinforcement Training Data Preparation, the authors are inconsistent in naming the agents within the cascading agent structure of the data collection part. For instance, an agent is named 'feature agent1' in the definition section but referred to as 'head feature agent' in the methodology part. The authors should keep these definition names consistent to reduce confusion for readers. 2) The description of the optimization objective for reinforcement learning in the data collection section is not detailed enough. The authors should provide a more comprehensive description of this critical component. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1) Why does DIFER require more time in the inference phase? The authors should provide a detailed explanation of this. 2) In the time complexity analysis, the authors chose four datasets - Wine Red, Wine White, OpenML_618, and OpenML_589. However, they have not adequately described the rationale behind selecting these particular datasets. 3) There are some avoidable writing errors, such as on page 13 of the appendix, where the authors incorrectly referred to Table 3 as Figure 3. The authors should conduct a thorough check of the entire paper to prevent inconsistencies in naming. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer MJDJ, Thank you for acknowledging that our paper has a solid research background, is well-written and easy to understand, and provides extensive experiments to support the claimed research insights and technical contributions. Regarding the issues you mentioned: **1. Regarding the Typos, Name Inconsistency, and Figure Caption Errors:** Response: We will revise the related description in Section 2.2 Problem Statement to avoid the inconsistency in name definitions. We will check thoroughly before the final version to eliminate remaining typos and errors. **2. Regarding the Details of the Optimization Objective for the Reinforcement Learning Component:** Response: We tried to balance the description of the model architecture with as light a description of the data collection component as possible, but based on your suggestion, we realized that we had omitted too much in the section describing the optimization objectives, which might lead to misunderstandings. In the final version, we will detail the overall optimization objectives of reinforcement learning. **3. Regarding the Time Complexity Analysis of DIFER:** Response: DIFER implements a completely different sequence generation strategy. After the model converges, DIFER generates each column in a fixed number of steps and then uses a filter to remove invalid generated sequences. In contrast, MOAT directly generates the entire sequence, where the model determines the lengths of the different columns, and the model tries to generate valid sequences through beam search. These differences give MOAT better inference time than DIFER when generating datasets with the same number of columns. **4. Regarding the Time Complexity Analysis of MOAT:** Response: We neglected to explain in the main text why these datasets were selected. According to Table 1, Wine Red and Wine White have the same number of features but different numbers of samples.
Comparing the time efficiency of MOAT on these two datasets reflects the model's scalability on datasets with different numbers of instances. Conversely, OpenML_618 and OpenML_589 have the same number of instances but different numbers of features, and comparing the time efficiency on these datasets reflects the model's scalability on datasets with varying feature counts. These two experiments combined can effectively validate the model's scalability. We realize that the lack of this description may affect the reader's understanding, and we will detail the reasons for these dataset selections in the final version.
NeurIPS_2023_submissions_huggingface
2023