Kernelized Reinforcement Learning with Order Optimal Regret Bounds
Accept (poster)
Summary: This work studies RL with kernel function approximation, specifically where it is assumed that the transition dynamics and reward function live in some RKHS. While there has been some work on RL with kernel function approximation, existing work provides bounds which are suboptimal. This work seeks to tighten these bounds, and ultimately obtains a bound scaling as $O(\sqrt{\Gamma(T) T})$ where $\Gamma(T)$ is the maximum information gain, matching the lower bound in certain settings of the bandit case. Their proposed algorithm is a variant of optimistic LSVI. Strengths: 1. The setting of kernel RL has not been studied as thoroughly as some areas of RL, and optimal bounds do not exist. This work takes a step in obtaining optimal regret, tightening the best-known existing bounds, and obtaining optimal regret in certain special cases (some bandit instances). However, I believe the stated result is incorrect—see below. Weaknesses: 1. I do not believe the stated result is correct. In the setting of linear bandits/linear MDPs/linear mixture MDPs, the information gain is bounded as $\Gamma_{k,\lambda}(T) \le O(d \log T)$, which would translate to a regret guarantee scaling as $O(\sqrt{d T})$. However, there are well-known lower bounds for these settings which scale as $\Omega(d \sqrt{T})$ (see e.g. [1] and [2]). Thus, the bound stated in this paper is better than the lower bound by a factor of $\sqrt{d}$, which is impossible. The following are less significant issues, but are also important to address: 2. There is a vast body of literature on RL beyond tabular and linear function approximation which is not referenced or discussed. See references [3-6] below for a start, and the works cited therein. This literature should be cited and discussed. 3. To make the results concrete, it would be helpful to instantiate Theorem 2 in the setting of tabular and linear MDPs. 4. The result is only provably optimal in the setting of bandits with Matern kernels. 
However, this paper considers deterministic rewards, so it doesn’t actually capture the bandit setting and as such does not handle the setting for the stated lower bound. It’s difficult, then, to make the claim that this result is optimal in any setting. I would suggest modifying the setting to allow for noisy rewards, or showing that there is a reduction from the current setting to the bandit setting (by encoding reward randomness in the transitions). 5. In addition, it would greatly strengthen the paper to show a lower bound for kernelized RL. 6. It was not clear to me what the $\eta$ parameter corresponded to or how it was defined. This should be clarified. 7. A reference or proof should be given for Lemma 1. [1] Zhou, Dongruo, Quanquan Gu, and Csaba Szepesvari. "Nearly minimax optimal reinforcement learning for linear mixture markov decision processes." Conference on Learning Theory. PMLR, 2021. [2] Lattimore, Tor, and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020. [3] Du, Simon, et al. "Bilinear classes: A structural framework for provable generalization in rl." International Conference on Machine Learning. PMLR, 2021. [4] Jin, Chi, Qinghua Liu, and Sobhan Miryoosefi. "Bellman eluder dimension: New rich classes of rl problems, and sample-efficient algorithms." Advances in neural information processing systems 34 (2021): 13406-13418. [5] Foster, Dylan J., et al. "The statistical complexity of interactive decision making." arXiv preprint arXiv:2112.13487 (2021). [6] Zhong, Han, et al. "A posterior sampling framework for interactive decision making." arXiv preprint arXiv:2211.01962 (2022). Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: Please provide further clarification on the instantiation of the results to linear bandits/MDPs (see comment above). Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 1 poor Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper and providing helpful comments. We have addressed your questions below and hope that it will positively affect your evaluation of the paper. We would also like to highlight that we firmly disagree that our results contradict the lower bounds for the linear case, as clarified below. We welcome further discussions if required. 1. Our results do not contradict the lower bounds for the linear case, as mentioned in the review. Several NeurIPS, AISTATS and ICML published works on kernel bandits, including [20,21,40], report $\mathcal{O}(\sqrt{\Gamma(T) T})$ regret bounds with $\Gamma(T)$ as the maximum information gain. The reasoning in the review would challenge all such results, which is not correct. The $\mathcal{O}(\sqrt{\Gamma(T) T})$ regret bounds match the lower bounds given in [19] for kernel-based bandits with the Matérn kernel. The reasoning in the review seems to suggest that $\mathcal{O}(\Gamma(T)\sqrt{T})$ should be the best achievable regret bound in kernel bandits/RL. This, however, is undesirable as it may be trivial (superlinear). Also see [18] for a technical discussion on the challenges in designing algorithms obtaining $\mathcal{O}(\sqrt{\Gamma(T) T})$ regret bounds. This comment from the reviewer further emphasizes the significance of our results and our contribution to the literature. The alleged contradiction between the $\mathcal{O}(\sqrt{\Gamma(T) T})$ upper bounds and the $\Omega(d\sqrt{T})$ lower bound in the linear case arises from an oversight. The upper bound $\mathcal{O}(\sqrt{\Gamma(T) T})$ is more explicitly $\mathcal{O}(\sqrt{d\Gamma(T) T})$, with a $\sqrt{d}$ factor hidden in the $\mathcal{O}$ notation. When applied to the linear case, we recover $\mathcal{O}(d\sqrt{T})$ regret as expected. In kernel settings, $d$ and $\Gamma(T)$ represent different notions of dimension: $d$ is the dimension of the input space, while $\Gamma(T)$, roughly speaking, represents the effective dimension of the kernel. 
Although they coincide in the linear case, in general kernel-based settings, $\Gamma(T)$, which grows with $T$, is the dominating term. Hence, it is customary to omit the $d$ terms in the expression of regret bounds in the kernel-based bandit and RL literature, e.g., [10, 16, 18, 20, 21, 40]. See a similar discussion in the next-to-last paragraph of [18]. We will add the $d$ term in our final expression of the regret bound and explain this point in the paper, also commenting on the linear case. We hope this clarification addresses the concern. We welcome further discussion and suggest referring to [18, 20] for a detailed technical exploration of this intricate issue. 2. We acknowledge the extensive RL literature. We have cited crucial related works, including [10] and references therein, as well as relevant literature on bandits. We value the reviewer's recommendations and will incorporate the suggested references, discussing their relevance and contributions. 3. In the tabular setting, the regret bound scales with the cardinality of the state-action space. Thus, when the cardinality of the state-action space is very large, we obtain tighter regret bounds. Our regret bounds remain the same for a finite state-action space. For the linear model, please see the response to question 1. 4. Thanks for mentioning this point. Even with deterministic rewards, the observations of the target function are noisy due to the random transition to the next step. While we agree that noisy rewards make the problem more general and aid in comparisons with noisy bandits, they will not effectively change the results in general. Hence, deterministic rewards are often assumed for simplicity, as seen in [10], without loss of generality. Reference [10] also compares the results with the noisy bandit setting. We value your feedback and will incorporate this discussion in the final version. 5. 
Having a general lower bound that incorporates the episode length $H$ appears to be a significantly challenging problem, which is beyond the scope of this paper. 6. The introduction of the parameter $\eta$ is a technical formality for the correctness of the results, as also given in [10]. In some works, $\eta$ is simply assumed to be zero by assuming the Mercer eigenfeatures are uniformly bounded. This, however, may not hold in general, as Mercer eigenfeatures depend on the shape of the domain. We introduce $\eta$ such that $\sigma_m^{\eta}\phi_m$ are uniformly bounded. This uniform boundedness is used in the bounds on the maximum information gain and the function class covering number. The uniform boundedness of $\sigma_m^{\eta}\phi_m$ holds for regular domains such as hyperspheres and hypercubes with typical kernels such as Matérn [10]. 7. This result is also used in [10]. Following your comment, we will add a proof (e.g., the proof of Lemma 3 in Yeh et al., AISTATS'23). S. Y. Yeh, F. Chang, C. Yueh, P. Wu, A. Bernacchia, S. Vakili, "Sample Complexity of Kernel-Based Q-Learning", AISTATS 2023. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: I would like to thank the authors for their detailed response. I had a few follow-up questions: 1. Could the authors point me to the specific line in the proof where the factor of $d$ is hidden? 4. I agree that considering stochastic rewards should not change the results, and that it is often possible to embed a bandit in a 1-step RL problem with deterministic reward by encoding the bandit reward noise into the stochastic transition. As such, I agree with the authors that it's not unreasonable to compare with the bandit lower bound even when considering deterministic rewards. However, I do think this should be mentioned explicitly in the discussion around equation (22) (or the embedding of bandits into 1-step RL problems with deterministic rewards made explicit). 
--- Reply to Comment 1.1.1: Title: Response to follow up questions Comment: We greatly value the reviewer's engagement and helpful comments. In response to the follow-up questions: 1. Regarding the input dimension $d$: The constants in front of both the information gain and the logarithm of the covering number scale with $d$. Consequently, the confidence interval width multiplier $\beta_h^t(\delta, \epsilon)$, as given in Equation (19), also increases with $d$. This leads to the constants in the regret bound increasing with $d$. The growth of the constants in both the information gain and the logarithm of the covering number is attributed to the constants in the kernel's spectrum increasing with $d$. Specifically, within the proofs of Lemmas 2 and 3, these constants scale with $d$ (in Equations (39), (47) and the subsequent one, and (52) and the one following). In the proof of Theorem 2, this scaling impacts the constant in Equation (59) and the implied constant, in $\Theta$ notation, in Equation (60). That carries over to the constant in Equation (65), which appears in the final regret bounds. We wish to highlight that hiding constants dependent on $d$ is not a peculiarity of our presentation. Such a practice is commonplace in all kernel bandit and RL papers we are familiar with, as evident in well-cited papers such as [10, 14, 16, 20, 21, 22, 31, 40, 44, 45]. In this line of work, the primary focus has been on the regret growth with $T$, striving for sublinear and, ideally, order-optimal regret bounds in $T$, which has overshadowed attention to $d$. Prompted by the reviewer's comment, we will include a more detailed explanation and clearer discussion of this point in the final version. 2. We completely agree with this comment. We will elaborate on this point in the final version of the paper. Specifically, we will clarify that adding observation noise to rewards does not affect the order of the regret bounds presented. 
Therefore, we can compare our results to the noisy bandits, as in [10], without any loss of generality. We appreciate the reviewer's constructive feedback. We are confident that incorporating these discussions will enrich our exposition, further improving the quality of our paper.
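As a side note, the point debated in this thread (for the linear kernel, $\Gamma(T) = O(d\log T)$, so the $\sqrt{d}$ factor hidden in $\mathcal{O}(\sqrt{\Gamma(T) T})$ is what recovers $\mathcal{O}(d\sqrt{T})$ in the linear case) can be checked numerically. The following minimal sketch is illustrative only; the random unit-norm inputs and $\lambda = 1$ are our assumptions, not the paper's:

```python
import numpy as np

# Information gain for a linear kernel k(x, x') = <x, x'>:
# Gamma(T) = 1/2 * logdet(I_T + X X^T / lam) = 1/2 * logdet(I_d + X^T X / lam),
# using the Weinstein-Aronszajn identity to work with a d x d matrix instead of T x T.
rng = np.random.default_rng(0)
d, lam = 5, 1.0

gains = []
for T in (100, 1000, 10000):
    X = rng.standard_normal((T, d))
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs
    _, logdet = np.linalg.slogdet(np.eye(d) + X.T @ X / lam)
    gains.append(0.5 * logdet)

# Gamma(T) grows like (d/2) * log T: logarithmic in T, linear in d.
print([round(g, 2) for g in gains])
```

Because $\Gamma(T)$ here is logarithmic in $T$ but linear in $d$, a bound written as $\mathcal{O}(\sqrt{\Gamma(T) T})$ with a $d$-dependent constant is consistent with the $\Omega(d\sqrt{T})$ linear-case lower bound, which is the authors' point above.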
Summary: This paper proposes a reinforcement learning algorithm called $\pi$-KRVI that achieves order-optimal regret. The algorithm performs a local kernelized optimistic least-squares value iteration update - specifically, it partitions the state-action space so that each cell contains a small number of observations, and the Q value within a cell is updated to a Gaussian-process-based upper confidence bound using observations in that cell only. This is motivated by KOVI [10] and $\pi$-GP-UCB [14]. A sublinear upper bound on the regret is given - this seems to be the first sublinear bound in a general setting, and it is claimed to be order optimal in the number of episodes $T$. Strengths: The sublinear regret seems to be the first one established in a general setting, and this seems to be order-optimal. The paper is generally clear, but a few things could be improved as mentioned in Weaknesses. The idea seems to be sound, but I didn't read the proofs. Weaknesses: * The paper only informally refers to [19] when stating the bound is order optimal. This is not immediately clear due to differences in notation, and some ambiguity in the paper's discussion. A more precise and detailed discussion would be helpful. * It is not clear what ${\cal S}$ and ${\cal A}$ are, but it seems both are cubes? If yes, is this a necessary assumption? * The reward functions and transition distributions are assumed to have a norm $\le 1$. How will the regret change if the upper bound is larger than 1? * Lemma 1: $V$ is not used anywhere. Is it supposed to be $V_{h+1}$? If yes, an arbitrary $V_{h+1}$ can potentially have a large norm, and Eq. (11) may not hold? A proof of the lemma seems to be missing. * Line 233: why is the target value constructed using a new random state $s_{h+1}'$, rather than the random state $s^{t}_{h+1}$ from the current episode? * Line 248: Does the running time refer to the total running time of Algorithm 1? 
If yes, can you explain why the upper bound is independent of the size of the partitions? Minor comments * Kernel ridge regression is generally described as a method that doesn't provide any uncertainty estimate, while the paper describes it as a method that provides uncertainty estimates. In particular, [38] is cited as the reference, but it doesn't seem to provide such an account of kernel ridge regression. What's described as kernel ridge regression is Gaussian process regression. * A brief explanation on the motivation behind maximum information gain would be helpful. * Putting the pseudocode of the algorithm in the paper would be helpful. Alternatively, provide a more complete description of the algorithm. * Algorithm 1, line 10: $x_{h}^{t}$ should be $s_{h}^{t}$? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would appreciate a discussion on the weaknesses before minor comments. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: A discussion on the limitation of Assumption 1 would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. We appreciate your feedback and we are glad that you find the paper generally clear. We respond to all your comments in the weaknesses section (in order) and hope that it positively affects your evaluation of the paper. - In the kernel-based bandit setting, which corresponds to the special case of $H=1$ and $|\mathcal{S}|=1$, [19] established a lower bound of $\Omega(T^{\frac{\nu+d}{2\nu+d}})$. This lower bound implies that the regret bound in Theorem 2 cannot be improved in terms of $T$, in general. We however agree with the reviewer that a lower bound under the RL setting, taking into account the general case and incorporating $H$, would be useful. That however seems itself a challenging problem and beyond the scope of this paper. We will add a more detailed and precise discussion following your comment. - In our construction of the domain partitioning, we assume that $\mathcal{S}\times \mathcal{A}$ is a hypercube, implying $\mathcal{S}$ and $\mathcal{A}$ are hypercubes. This is for simplicity and correctness of a formal derivation. This assumption is not necessary for the overall approach. The domain partitioning technique can be applied to other compact subsets of $\mathbb{R}^d$ by considering the smallest cube containing the domain. This allows us to apply the partitioning technique and obtain valid results for RL problems with non-cube-shaped state and action spaces. - Scaling the norms of the reward functions and transition distributions by a constant $C$ would scale the $H+1$ term in the confidence interval (see Theorem 1) by $C$. This term represents the RKHS norm of the target function in kernel regression. However, this additive term does not affect the order of the regret bounds, as it is overshadowed by the dominant terms (information gain and log covering number), which grow with $T$. 
This is similar to the case of kernel bandits, where the regret order in $T$ remains unchanged as long as a bound on the RKHS norm of the target function is known. Only if the RKHS norm of the function scales with $T$ would further investigation be needed to understand the impact on the regret bounds. - Thanks for mentioning this. There is a typo and $V_{h+1}$ should be $V$. We express the lemma for a general $V:\mathcal{S}\rightarrow [0,H]$. This result is also used in [10]. We will add a proof (e.g., see Lemma 3 in Yeh et al., AISTATS'23 for a proof). S. Y. Yeh, F. Chang, C. Yueh, P. Wu, A. Bernacchia, S. Vakili, "Sample Complexity of Kernel-Based Q-Learning", AISTATS 2023. - We believe the notation is consistent and correct. The episode index $t$ is specified in the notation $Z_h^t$ defined in Line 230. - Yes, it is the total running time of Algorithm 1. At each episode $t$ and each step $h$, the computation of the kernel ridge regression statistics in each hypercube has a cost of $O(N_c^3+|A_c|N_c^2)$, where $|A_c|$ is the number of actions in the hypercube and $N_c$ is the number of previous observations in the hypercube. Summing up over all hypercubes, we bound the computational complexity by $O(t^3+|\mathcal{A}|t^2)$, where $|\mathcal{A}|$ is the total number of actions. This bound is obtained using the simple arithmetic fact that the cube of the sum of natural numbers is larger than the sum of their cubes. Summing up over steps and episodes, we arrive at the overall runtime complexity of $O(HT^4+H|\mathcal{A}|T^3)$. This calculation is similar to [14], and analogous to [14], we expect an improved runtime for $\pi$-KRVI in practice due to the inequalities used in this calculation. We will add further details and clarification on this calculation following your comment. 
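The "cube of the sum dominates the sum of cubes" step in the runtime argument above can be verified with a short sketch; the number of cells and the partition sizes below are hypothetical, chosen only to illustrate the inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
t = 500  # total observations collected so far (illustrative)
# A hypothetical partition of the t observations into 20 hypercubes:
sizes = rng.multinomial(t, np.ones(20) / 20)

# Sum of per-cell O(N_c^3) kernel ridge regression costs vs. one global O(t^3) fit.
per_cell = sum(int(n) ** 3 for n in sizes)
global_cost = t ** 3
assert per_cell <= global_cost  # sum of cubes <= cube of the sum
print(per_cell, global_cost)
```

This is why the partitioned update is never asymptotically more expensive than a single global kernel ridge regression over all observations, matching the $O(t^3 + |\mathcal{A}|t^2)$ per-step bound quoted above.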
*Minor comments* - Kernel ridge regression and Gaussian process (GP) regression lead to essentially the same calculation for the prediction and uncertainty estimate based on data, but from different Bayesian and frequentist perspectives, respectively. The equivalence between these two methods and an interpretation of the uncertainty estimate within kernel ridge regression framework are discussed in detail in Chapter 3 of "Gaussian Processes and Kernel Methods: A Review on Connections and Equivalences", M Kanagawa, P Hennig, D Sejdinovic, and B K Sriperumbudur. We find the terminology of kernel ridge regression better suited for our case, similar to [10], as the use of GP terminology (when a *surrogate* GP is employed for modeling the target function) may incorrectly imply that the target function is a sample from a GP. We will further clarify this point and cite Kanagawa et al. as a reference. - Roughly speaking, information gain represents the effective dimension of the kernel. That is to say, while typical kernels have an infinite-dimensional feature space, only a limited number of features have a significant effect on the regression with a finite dataset. This represents the effective dimension that is the same as information gain up to a log factor. This quantity appears in the confidence intervals for kernel-based models as shown in the results on vector valued self-normalized martingale inequalities and their extension to Hilbert spaces [17,34,42]. We will add this explanation to the final version of the paper. - The pseudocode of $\pi$-KRVI is provided in Appendix A in the submitted version of the paper. Due to space limitations, we have placed it in the appendix. However, in the final version of the paper, with the availability of an extra page, we will include the pseudocode in the main body of the paper. - Thanks for pointing out the typo. We will correct it.
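The equivalence mentioned above, namely that kernel ridge regression and GP regression share the same prediction and uncertainty computations, $\mu(x) = k(x)^\top (K+\lambda I)^{-1}y$ and $\sigma^2(x) = k(x,x) - k(x)^\top (K+\lambda I)^{-1}k(x)$, can be sketched as follows. The RBF kernel, lengthscale, and synthetic data are illustrative choices, not taken from the paper:

```python
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential kernel; an illustrative choice of kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * ls ** 2))

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (30, 1))                      # synthetic training inputs
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(30)
Xq = rng.uniform(-1, 1, (5, 1))                      # query points
lam = 0.01                                           # ridge parameter = GP noise variance

Kinv = np.linalg.solve(rbf(X, X) + lam * np.eye(30), np.eye(30))
kq = rbf(Xq, X)
mu = kq @ Kinv @ y                                   # KRR prediction = GP posterior mean
var = rbf(Xq, Xq).diagonal() - np.einsum('ij,jk,ik->i', kq, Kinv, kq)  # shared uncertainty estimate

print(mu.round(3), var.round(4))
```

The same two arrays are obtained whether one derives them as a GP posterior (Bayesian view) or as kernel ridge regression with a worst-case-in-RKHS uncertainty interpretation (frequentist view), which is the equivalence discussed in Kanagawa et al.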
Summary: This paper presents an optimism-based online learning algorithm for RL with large state-action spaces (including continuous spaces). It proposes a (Gaussian) kernel-based function approximation + optimism (building on UCBVI) algorithm. It assumes that the reward and transition density functions are representable in the unit ball of an RKHS (with a Gaussian kernel), which is quite reasonable. It also introduces a domain-partitioning technique to make the kernel ridge regression part scalable. The regret bound obtained for the algorithm is shown to be an improvement over SOTA [10]. Specifically, the regret scales as H^2 and sublinearly in T. Strengths: Online RL for continuous state and action spaces is a really challenging problem, and until recently it was unresolved. This is the best such result I have seen. It makes a very smart (and now seemingly natural) use of kernel-based function approximation. Weaknesses: The authors have not presented any numerical results. So it leaves one wondering whether it is all nice theory, and whether there is some hope of making the algorithms practical. Note: The title has a typo in "Kernelized". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Could you present some numerical studies so we can understand the strength of your algorithm? It could even be large tabular but difficult problems such as DeepSea and Montezuma's Revenge. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Kernel-based method may limit scalability. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. We appreciate your feedback and are glad that you mention the significance and novelty of our results. While we agree that numerical experiments are important for evaluating the practicality of RL algorithms, our main contribution in this paper is theoretical in nature, similar to existing work, e.g., [9-10]. There seems to be a natural progression of model complexity in RL, from tabular to linear, to kernel-based, and to deep-learning-based. While tabular and linear problems are adequately studied, the state of the art deals with kernel-based RL. Our work makes a significant contribution by providing an order-optimal regret bound in the number of episodes for a broad class of common kernels. This contribution is particularly important considering that existing results fail to demonstrate even sublinear regret bounds for this class of kernels. We appreciate your recognition of the significance of our results. --- Rebuttal Comment 1.1: Comment: I am not convinced that a numerical investigation here would not be useful. With kernel methods, scalability is a key concern. I feel my original score was generous, and I will keep it.
Summary: This paper theoretically studies the performance of a reinforcement learning algorithm under the assumption that the Q function is a member of an RKHS with a known kernel. The authors provide a cumulative regret bound on the least-squares value iteration algorithm and specialise their results to kernels with polynomial decay of eigenvalues. They improve the regret bounds of prior works by a refined analysis of confidence sets in this non-parametric setting. Strengths: The paper is clearly written in the context of its rather technical nature. The work combines the newest understanding of adaptive confidence sets and their analysis for the case of Matérn kernels with linear MDPs, by providing a kernelized variant thereof. I am not familiar with the other literature utilising the novel variants of the analysis in the context of RL. It seems there is a sub-community interested in this issue given the COLT open problem; however, as somebody not in this community, I have to say it seems certainly a matter of taste rather than importance. Weaknesses: The prior work [10] indeed does not provide optimal bounds, but arguably its algorithm seems to be more practical than a rather time-varying discretization of the domain in order to facilitate the construction of the order-optimal confidence sets. To be slightly harsh, I wonder if the proper academic solution would not have been informing the authors of [10] of this new technique rather than writing a wholly new paper which utilises the trick with the domain splitting, which comes from other prior works, e.g. [14]. At the expense of creating more elaborate confidence sets one can indeed improve the bounds, but whether this improves the performance remains unanswered in this paper, as no comparison is provided. The proposed contribution is solely of theoretical nature. It is common in online learning to use the doubling trick and mention it in passing in case improved results are desired. 
I wonder if this is not a similar discretization trick, and whether this deserves an independent publication at NeurIPS. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The Mercer decomposition diagonalizes the infinite-dimensional operator either under a specific distribution or on a bounded domain (essentially the uniform distribution); could you be more specific about what you mean by the Mercer decomposition in Section 2.2? - Why is there H^2 in the regret bounds? What is the nature of this? One "H" would be more intuitive. - How are the information gain and covering numbers related? - Does one also need the info-gain to derive the bound? Clearly there are bounds with only the info-gain; can there be bounds with only the covering number? I wonder if the works by van de Geer on high-dimensional statistics are related to this. - Covering numbers for Sobolev spaces are well-studied objects in functional analysis, especially due to the seminal paper of Fields medalist Smale. I suggest looking into the references in the publications of this author. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None identified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing our paper. We value your feedback and are pleased that you found the paper clearly written. We address your comments below and hope that it will positively impact your evaluation of the paper. We are open to further discussion if required. In terms of contribution, we would like to mention that we make substantial contributions beyond the related work. For example, we provide flexibility in setting the parameters of the confidence interval (in Theorem 1), which ultimately contributes to the improved regret bounds. We also derive bounds on the maximum information gain (Lemma 2) and the function class covering number (Lemma 3), taking into consideration the size of the state-action domain. Lemma 2 provides a stronger bound than the one used in [14], which also contributes to the improved regret bounds. Thus, in applying the partitioning technique of [14] to the kernel-based RL problem, we make significant improvements to the technique and provide a finer analysis that also significantly improves the results of [14], as stated in lines 91-93 in the introduction. To further clarify this point, we will report the regret bound for kernel bandits in [14, Theorem 3], which is of order $\tilde{O}(T^{(d(2d+3)+2\nu)/(d(2d+4)+4\nu)})$, and has a polynomial gap from the $\mathcal{O}(T^{(\nu+d)/(2\nu+d)})$ bound proven in our work for the more involved RL setting. Given these significant improvements, we believe our results constitute a significant contribution to the literature regarding achievable regret bounds in kernel-based RL. Furthermore, our work reports the first order-optimal regret bounds in $T$ for a broad class of common kernels, which is a novel result that may be of wide interest in the community and become a main reference for the achievable regret bounds. Responses to all questions (in order): - A formal presentation of Mercer's theorem with the details is provided in Appendix B. 
For example, we can consider the Lebesgue measure as stated in Appendix B. We note that our analysis only relies on the expressions of kernel and RKHS elements in terms of eigenvalues and eigenfeatures given in equations (7) and (8). The choice of measure in Mercer theorem only affects the eigenvalues and eigenfeatures, and not the expressions in equations (7) and (8) or the rest of our proof. For regular domains such as hyperspheres and hypercubes, with typical kernels, Mercer decompositions are well studied and can be used within our results. Following your comment, we will make this further clear in the paper. - One $H$ scaling seems to be due to the RKHS norm of state-action value functions, the target functions in kernel ridge regression. Lemma 1 shows that this norm scales with $H$. Our results on regret bound align with those in the SOTA [10]. With our results, the optimality of scaling with $H$ cannot be determined. We will make this point further clear in the paper. - Information gain and covering number both represent the complexity of the function class that we are considering. Roughly speaking, information gain represents the effective dimension of the kernel. That is to say, while typical kernels have an infinite-dimensional feature space, only a limited number of features have a significant effect on the regression with a finite dataset. This represents the effective dimension that is the same as information gain up to a log factor. The covering number, on the other hand, is determined by the RKHS topology, gauging the count of RKHS elements required to approximate the entire set within a given sup norm error margin. While information gain and covering number represent distinct complexities of the problem, they are both characterized using the kernel spectrum. Thus, it is expected that their value is related. Please see Lemmas 2 and 3 for bounds on their value. 
In our domain partitioning algorithm, we set $\epsilon$ proportional to $1/\sqrt{N}$, where $N$ is the number of samples in the corresponding subdomain, while [10] sets $\epsilon$ proportional to $1/T$. In both cases, the covering number turns out to be the dominating term in the confidence interval width factor. - Here we provide a brief explanation of where these quantities appear in the analysis. Please let us know if further details are needed. One important component of the analysis is the confidence intervals for the kernel-based models. The information gain appears due to the adaptivity of samples, and the covering number appears due to the variation of the target function. In an offline setting, $1-\delta$ confidence intervals of the form $|f(x)-\mu_t(x)| \le \beta(\delta)\sigma_t(x)$ are established where $\beta(\delta) = R+\sqrt{\log(\frac{1}{\delta})}$ up to multiplicative constants, which are not related to the information gain and covering number. In the online setting, with adaptively collected samples (bandit or RL, for example), the information gain appears in the confidence interval: $\beta(\delta) = R+\sqrt{\Gamma + \log(\frac{1}{\delta})}$. Please see the work on vector-valued self-normalized martingale inequalities and their extension to Hilbert spaces [17,34,42]. In addition, in the RL setting, the target function itself is not predetermined and varies due to the Markovian dynamics. The confidence interval thus needs to hold uniformly for all functions within a certain class. This is where the log covering number appears in the confidence interval: $\beta(\delta) = R+\sqrt{\log N + \Gamma + \log(\frac{1}{\delta})}$. It is thus expected for both terms to appear in the regret bound of standard LSVI [10]. In our approach with the domain partitioning technique, we ensure both terms stay at most logarithmic in $T$, and instead we bound the number of partitions, which leads to the improved regret bounds. - Thank you. We will look into this for future research on the topic. 
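To make the confidence-interval quantities above concrete, here is a minimal numerical sketch of kernel ridge regression with a band of width $\beta(\delta)\sigma_t(x)$. This is illustrative only: the squared-exponential kernel, the lengthscale, and the placeholder values plugged into $\beta$ (for $R$, $\Gamma$, and the log covering number) are assumptions, not the paper's actual algorithmic choices.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # squared-exponential kernel k(a, b) on 1-D inputs
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def posterior(X, y, Xq, lam=1.0, ls=0.5):
    # kernel ridge regression posterior mean mu_t and std sigma_t at queries Xq
    K = rbf(X, X, ls) + lam * np.eye(len(X))
    Kinv = np.linalg.inv(K)
    kq = rbf(Xq, X, ls)
    mu = kq @ Kinv @ y
    var = rbf(Xq, Xq, ls).diagonal() - np.einsum('ij,jk,ik->i', kq, Kinv, kq)
    return mu, np.sqrt(np.maximum(var, 0.0))

def beta(delta, R=1.0, gamma=0.0, log_cov=0.0):
    # width factor beta(delta) = R + sqrt(log N + Gamma + log(1/delta)),
    # up to multiplicative constants (offline case: gamma = log_cov = 0)
    return R + np.sqrt(log_cov + gamma + np.log(1.0 / delta))

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 20)
y = np.sin(4 * X) + 0.1 * rng.standard_normal(20)
Xq = np.linspace(0, 1, 5)
mu, sigma = posterior(X, y, Xq)
lo, hi = mu - beta(0.05) * sigma, mu + beta(0.05) * sigma  # 1 - delta band
```

Note how a larger information gain `gamma` (the online, adaptively-sampled case) widens the band compared to the offline case, matching the discussion above.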
--- Rebuttal Comment 1.1: Title: response Comment: Thank you for your responses. I liked the clarifications. - Are you the first work to discuss the covering approach for order-optimal confidence sets? - [10] sets covering using 1/N instead of 1/\sqrt{N}. What would their bounds be if they used your discretization with 1/\sqrt{N}? However, there is still a fundamental disagreement in terms of contribution. I feel like you use a trick from paper A [and arguably polish the analysis a bit] and apply it in the setting of paper B, where they use it as a tool, to get a *theoretically* better algorithm at the expense of being more difficult to implement. It is not the same method as in [10] since the confidence construction is more elaborate. If A and B were completely different fields, I would be fine with this setup, but this is essentially the same field. I do not find it surprising that one can use these techniques in the RL context compared to bandits. These improved confidence sets can be applied and improve the bounds anywhere they need. I am not saying your work does not deserve an audience, but the following description still accurately characterizes my general opinion of this paper: "Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation." I do not share the viewpoint of the other reviewer who claims some lower bounds are contradicted. I think the results are believable but I did not check the maths. They are especially believable since somebody already did a near-identical analysis before in a different context. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your participation in the discussion. Regarding the questions: - The use of a covering set to establish confidence intervals applicable to all members of a function class is a requirement imposed by the MDP framework. 
The target function $r_h+[P_h V_{h+1}^{t}]$ is not fixed and predetermined; it depends on $V_{h+1}^{t}$ due to the temporal dependence in the MDP setting. A covering set is also employed in the most related work [10]. However, in establishing our confidence intervals, we allow flexibility in choosing the parameter of the covering set, which ultimately contributes to improved regret bounds with an appropriate parameter selection. - The proof in [10] does not provide flexibility in choosing the parameter of the covering set. So we cannot directly observe the answer to your question by setting $\epsilon^*$ (in their notation) in their Theorem 4.2 proportional to $1/\sqrt{t}$, where $t$ is the episode number. However, we can see that the regret bound would at least scale with $O(\Gamma(T)\sqrt{T})$. The reason for this is that the confidence interval width would at least scale with $\sqrt{\Gamma(T)}$, and following the rest of the proof of [10] (which is independent of $\epsilon^*$) leads to a regret bound scaling at least with $O(\Gamma(T)\sqrt{T})$. We agree with the reviewer that our improved and order-optimal regret bounds come at the price of a more sophisticated algorithm. However, we still believe that our results are significant and of interest to the wider research community for several reasons. There is broad interest in RL and its analysis. We provide the first order-optimal regret bounds in the kernel-based RL setting for a broad class of common kernels. The SOTA fails to show even sublinear regret bounds. Our results are not achieved by simply applying a technique from [14] to [10], although we agree that our work is highly inspired by [10] and [14], and we have acknowledged this throughout the paper. Specifically, even in the much simpler problem of kernel bandits, [14] obtained sub-optimal regret bounds of $\tilde{O}(T^{(d(2d+3)+2\nu)/(d(2d+4)+4\nu)})$, while we obtain optimal regret bounds of $\tilde{O}(T^{(\nu+d)/(2\nu+d)})$. 
This is achieved on the merit of our Lemma 2, which provides a tighter bound on the information gain than the one used in [14], and subsequent improvements to the algorithm and its analysis. Also, we would like to address the comment in the review that states "*These improved confidence sets can be applied and improve the bounds anywhere they need*." This statement is not entirely accurate. We would like to emphasize that the algorithm and the domain partitioning are closely intertwined. Domain partitioning alone cannot be used to achieve tighter confidence intervals in general. Instead, a careful and elaborate algorithm that leverages domain partitioning is required to improve the regret bounds. This observation is also highlighted in the recent work of [Lattimore, COLT'23] on kernel-based confidence intervals, where it is stated that "... any analysis of linear contextual bandits aimed at proving a similar result [order optimal regret bounds] cannot completely decouple the concentration analysis and the algorithm. The same is true for kernelised bandits where the dimension-dependence arising from loose confidence bounds is especially pernicious and can be the difference between sublinear and linear regret". Tor Lattimore, "A Lower Bound for Linear and Kernel Regression with Adaptive Covariates", COLT 2023. We hope that these further clarifications improve the reviewer's evaluation of the paper.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Learning Probabilistic Symmetrization for Architecture Agnostic Equivariance
Accept (spotlight)
Summary: The paper proposes a probabilistic version of the canonicalization method to construct equivariant architectures from generic non-equivariant backbones. This approach is inspired by the symmetrization solution, which involves averaging a non-equivariant function to obtain an equivariant one, but replaces the uniform averaging distribution with an input-conditional one to achieve a lower sample complexity. Strengths: The paper is clearly written and well motivated. The proposed idea overcomes some important limitations of both the symmetrisation and the canonicalisation methods. I liked the idea of using a general-purpose transformer architecture, which makes the proposed method very flexible and easy to apply in more generic contexts. The surprising performance of pre-trained vision models on non-vision tasks is also a very interesting result and I wonder if it can tell us something more general about these architectures. Despite the generality and flexibility of the approach, the experiments show it can also achieve competitive performance. Weaknesses: With respect to the canonicalization method from [26], the proposed approach essentially averages over $N>1$ samples and, therefore, is $N$ times more expensive than canonicalization. Since this work is partially motivated by computational efficiency, I think this aspect has not been sufficiently discussed in the manuscript. Moreover, I imagine the number of samples chosen can play an important role in 1) the model's final performance, 2) the training dynamics, 3) the variance of the learned distributions and 4) the training time. I think this point also requires some additional discussion. I would encourage the authors to include a short experiment to study the effect of the sample size on (at least some of) these aspects. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Why is it necessary to assume the group is compact? 
line 96-97: "2) learn to produce low-variance samples $g \sim p_\omega(g|x)$ that can offer more stable gradients for the base function $f_\theta$ during training." I think this claim requires some additional comments to justify it. Why should the network learn to produce low-variance samples? On a related note, do you train the models with some low-variance regularization? What prevents the model from always outputting a uniform distribution? lines 114-117: this seems more of a problem of parameterizing the group G itself, not its representations. The representation $\rho(g)$ can be computed in a differentiable way from $g$ when $G$ is a Lie group at least. Eq. 8: it is not really clear how you output a permutation matrix. You use the notation $P_g \approx \hat{P}_g$ but you don't explain what it means. The matrix $\hat{P}_g$ is a real-valued matrix and doesn't belong to $\{0, 1\}^{n \times n}$. Do you apply rounding on it? If so, isn't it possible to just use the matrix $\hat{P}_g$? Is there any benefit in using the boolean $P_g$ instead? You compare your probabilistic method with the deterministic canonicalization method. It is not clear to me if the benefit is coming from the fact that the backbone is averaged over $N>1$ samples or if the stochasticity of $p_\omega$ plays a role. Could you try a version of your model which outputs $N$ samples but deterministically (e.g. sample the noise variables only once before training and keep them fixed)? While this is already mentioned by the authors among the limitations as a future work, it would be interesting to understand if this kind of architecture enjoys the same data-efficiency property of other equivariant networks. Do the authors have any insights about the effect of the training set size on the ability of the model to learn the symmetrising distribution? The paper is missing a related works section. Moreover, I think an important part of the literature about equivariant networks using group convolutions is not cited. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Some limitations are discussed in the supplementary material. I encourage the authors to move the main points mentioned there in the main paper. See also my comments under Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. The proposed approach essentially averages over N > 1 samples and, therefore, is N times more expensive than canonicalization. A1. The O(N) cost with N samples is a genuine weakness, and we will add clarification in the main text. Nevertheless, as the sampling is completely parallelizable and analogous to using a larger batch size, we believe this can be overcome to some degree by leveraging parallelism [2] developed for scaling batch size. Q2. I would encourage the authors to include a short experiment to study the effect of the sample size on training. A2. We agree that controlling the number of samples used at training time would offer interesting results. We are planning to add a small empirical analysis on how the sample size affects the optimization dynamics and will post the result during the discussion period. Q3. Why is it necessary to assume the group is compact? A3. The compactness of the group G is necessary for the proof of Theorem 2. Apart from that, the group does not need to be compact. We can extend Theorem 2 to non-compact groups if we make additional assumptions, such as the support of p(g|x) always being compact. Q4. Why should the network learn to produce low-variance samples? Do you train the models with some low-variance regularization? What prevents the model from always outputting a uniform distribution? A4. At some point, certain group elements g ~ p(g|x) can be favored more than others in terms of task loss; this is likely due to the specific configuration of the parameters of the base function f coming from random initialization or pre-training. In either case, it will signal the distribution p(g|x) to favor generating these group elements to minimize the task loss, which eventually leads to learning to produce those samples consistently. In all experiments in the paper, we directly minimize the task loss and do not use low-variance regularization. 
In our early exploration, we experimented with two variance regularization techniques: a simple equivariance loss [3] and entropy estimation on the distribution p(g|x) [4]. However, the equivariance loss often resulted in collapsed trivial predictions, and the entropy estimator did not bring a performance gain. We conjecture that the task loss provides a strong signal that nudges the distribution p(g|x) to favor specific group elements g ~ p(g|x), and this leads to a reduced variance of group elements without collapsing to Unif(G). Q6. Lines 114-117: this seems more like a problem of parameterizing the group G itself, not its representations. A6. Indeed, the distribution p(g|x) does not fundamentally need to output a representation, i.e., it may produce alternative specifications of g (e.g., a Lie algebra element) from which the representation is computed (e.g., with the exponential map). In our work, we produce the group representation as it provides a convenient medium for sampling and backpropagation through p(g|x) while guaranteeing the G equivariance of p(g|x). Alternatives, e.g., based on the Lie algebra [5], are possible. We will revise the main text to clarify this. Q7. Eq. 8: it is not really clear how you output a permutation matrix. A7. We apologize for the confusion. Given nodewise scalars Z ∈ R^n from the GNN, we first perform argsort to directly obtain the hard permutation matrix P_g [6]. Then, we compute the soft permutation matrix hat{P}_g following Eq. (8) [6]. hat{P}_g does not affect how the hard permutation P_g is computed, but it affects training as it is used for the straight-through gradient. Q8. Isn't it possible to just use the matrix hat{P}_g? Is there any benefit in using the boolean P_g instead? A8. The first reason we use P_g is that hat{P}_g is not a valid representation, so symmetrization using it would not be G equivariant. The second reason is that using the hard permutation is often critical for convergence. 
We conjecture this is due to the smoothing effect of soft permutations; in early training, hat{P}_g is close to a dense matrix, and applying it to the input would smooth the information of the nodes. Such smoothing is known to underlie the oversmoothing of GNNs [8]. We added an experimental result of using the soft permutation for training; please see Table 4 below. Q9. Could you try a version of your model that outputs N samples but deterministically? A9. We ran a variant of our approach on EXP-classify (Section 3.1), where the noise is sampled once before training and kept fixed. The results are in Table 4. The model quickly reaches 82% test accuracy, but then severely overfits the training data. This implies that the effect of averaging is insufficient to explain our approach's performance, and the stochasticity of p(g|x) plays an important role. Table 4. Ablation-extended results for EXP-classify. | method | EXP-classify ↑ | |:---:|:---:| | MLP-GA [7] | 50% | | MLP-FA [7] | 100% | | MLP-Canonical. | 50% | | MLP-PS (Ours) | 100% | | MLP-PS (Ours; soft permutation) | 50% | | MLP-PS (Ours; fixed noise) | 82% | Q11. The paper is missing a related works section. I encourage the authors to move the limitations section into the main paper. A11. We will revise the main text accordingly. 
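The argsort-based hard permutation and its soft relaxation described in A7/A8 can be sketched as follows. This is a minimal numpy illustration of the idea (a SoftSort-style relaxation [6]); the temperature `tau` and the exact relaxation form are assumptions, not the paper's actual Eq. (8).

```python
import numpy as np

def hard_permutation(z):
    # argsort of nodewise scores z directly yields a boolean permutation P_g
    n = len(z)
    P = np.zeros((n, n))
    P[np.arange(n), np.argsort(-z)] = 1.0
    return P

def soft_permutation(z, tau=0.1):
    # SoftSort-style relaxation hat{P}_g: each row i is a softmax over
    # -|z_sorted_i - z_j| / tau; the matrix is dense and differentiable in z
    z_sorted = np.sort(z)[::-1]
    logits = -np.abs(z_sorted[:, None] - z[None, :]) / tau
    logits -= logits.max(axis=1, keepdims=True)
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

z = np.array([0.3, 2.1, -0.7, 1.2])
P, P_hat = hard_permutation(z), soft_permutation(z)
# straight-through training: the forward pass uses the boolean P, while
# gradients flow through P_hat (in autodiff: P_hat + stop_gradient(P - P_hat))
```

The boolean `P` is a valid group representation (rows and columns each sum to one), while the dense `P_hat` is only used to route gradients, matching the motivation given in A8.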
[1] Kaba et al., Equivariance with learned canonicalization functions (2022) [2] You et al., Scaling SGD batch size to 32k for ImageNet training (2017) [3] Murphy et al., Janossy pooling: Learning deep permutation invariant functions for variable-sized inputs (2019) [4] Seo et al., State entropy maximization with random encoders for efficient exploration (2021) [5] Finzi et al., Generalizing convolutional neural networks for equivariance to Lie groups on arbitrary continuous data (2020) [6] Prillo et al., SoftSort: A continuous relaxation for the argsort operator (2020) [7] Puny et al., Frame averaging for invariant and equivariant network design (2021) [8] Cai et al., A note on over-smoothing for graph neural networks (2020) --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed answer and for quickly producing the suggested experiments. I maintain my (positive) score and recommend the authors to include these answers in the final version of the manuscript. --- Reply to Comment 1.1.1: Title: Further Response Comment: Dear Reviewer SdiQ, Thank you again for the positive and constructive comments. We will indeed include the response in the final revision of the paper. Following common requests, we have conducted an in-depth empirical analysis of the convergence and sample complexity of the algorithm using the EXP-classify dataset (Sn invariance; Section 3.1). The results can be found in the latest common response: https://openreview.net/forum?id=phnN1eu5AX&noteId=ci75ubQbgK. We think the findings might be of your interest, in particular the analysis on how the sample size for estimation affects training (**Q4**). A notable finding is that a smaller sample size for training regularizes the model towards low-variance estimation, which is **beneficial** in terms of inference-time sample complexity. In addition, we think we have missed one of the questions in the initial rebuttal. We apologize for this, and would like to provide our response below. > Q. 
While this is already mentioned by the authors among the limitations as a future work, it would be interesting to understand if this kind of architecture enjoys the same data-efficiency property of other equivariant networks. Do the authors have any insights about the effect of the training set size on the ability of the model to learn the symmetrising distribution? A. For a given task, assuming underfitting is not a problem, we conjecture our method (and symmetrization approaches in general) would not be able to enjoy the same level of data efficiency as equivariant architectures when trained from random initialization. This is because the hypothesis space of the symmetrized function would in general be larger (it is G equivariant universal; Theorem 2) than that of equivariant architectures. Based on the traditional notion of model flexibility and overfitting, symmetrized models would require more data to reach a generalizable solution. However, if we consider knowledge transfer from other domains through a pre-trained base function, these transferred parameters (knowledge) would impose a strong prior on the hypothesis space; in this case, transferred knowledge could possibly improve the data efficiency of symmetrized models. The symmetrizing distribution p(g|x) here would serve as an aligner between the pretrained knowledge and the target task. While we currently do not have a clear conjecture on the data efficiency of learning p(g|x) as an aligner, we plan to investigate it in future work. Sincerely, Authors of submission 12717
Summary: The paper presents a method to learn to symmetrize a neural network using data. The method considers a learnable probability distribution over the group and uses group averaging to enforce (relaxed) equivariance w.r.t. the learned distribution. Strengths: Over recent years there has been a growing interest in the community to learn relaxed symmetries (invariance and equivariance) from data. The paper addresses this important problem and provides a practical method that scales to interesting model classes and data problems (e.g. the use of Transformers, particle dynamics, and graphs is noteworthy). Weaknesses: * Missed prior work There have been several approaches that aim to learn symmetrizing distributions (invariance or equivariance). Placing a probability distribution on the group over which a function is averaged and learning this distribution from data is not new. For instance, see approaches listed in survey Sec. 6 of [2]. * Proposed method already exists The paper misses this literature and it seems that the method has been published before (Equivariant Augerino, Sec. 3.1 of [1]). An argument could be made that although the Augerino paper describes the method for general groups, it only considers very simple affine groups in practice. This paper does consider more interesting group structures and domains (e.g. graphs). However, if the same method has been published before and is applied to new domains/data, this should at least be credited and discussed. This would make it more clear what the contributions of the work are. * Objective function A well-known difficulty with learning symmetry distribution is not necessarily describing a probability distribution on the group but rather the objective to learn the distribution over the group. Although the method claims to be probabilistic, it is not clear what objective is used to train the model (or whether inference is being performed). 
If the regular maximum likelihood loss is used, it seems there is no encouragement in the objective that prevents the collapse of the symmetrization distribution into a delta peak at the identity. In such a case, it would result in not learning a symmetrization that generalises well. Prior works often consider more sophisticated losses, such as regularization [1], lower bounds or model selection, to prevent this. The paper seems to skip over this issue. [1] Benton, Gregory, et al. "Learning invariances in neural networks from training data." Advances in neural information processing systems 33 (2020): 17605-17616. [2] Rath, Matthias, and Alexandru Paul Condurache. "Boosting deep neural networks with geometrical prior knowledge: A survey." arXiv preprint arXiv:2006.16867 (2020). Technical Quality: 3 good Clarity: 3 good Questions for Authors: a) How does the method differ from equivariant Augerino (Sec. 3.1 of [1])? b) What objective is used, or how is inference over the probability distribution being performed? c) If a maximum likelihood objective is used, what mechanism prevents the symmetrization from collapsing into the identity? Did the authors notice such a collapse in experiments, or is this mitigated somehow? [1] Benton, Gregory, et al. "Learning invariances in neural networks from training data." Advances in neural information processing systems 33 (2020): 17605-17616. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - The paper seems to miss prior works that consider learning similar parameterizations over group structures. - The proposed method seems to have been proposed already (equivariant Augerino, Sec. 
3.1 of [1]) - It is not clear what objective is being used to train the model and what prevents collapse of the symmetrization distribution. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. The proposed method already exists. A1. Since our approach parameterizes a distribution p(g|x) on a group G for symmetrization ϕ(x) = E_g[g⋅f(g⁻¹⋅x)] and learns it from data, one may find some similarity to Augerino [1] and other approaches in [2] such as [3-7] that learn distributions over augmentations (e.g., p(g)) for a similar symmetrization. However, we would like to clarify that our method is distinguished from all these approaches [1-7], including Augerino [1], as our method is the only one that guarantees G equivariance of the symmetrized ϕ(x) while being able to learn a useful distribution p(g|x) from data to maximize performance. Let us elaborate below. First, it has to be noted that the objectives of the approaches are different. Our approach aims to obtain an exactly G equivariant symmetrization ϕ(x) given the known symmetry group G of the data (e.g., G=S_n for graphs), while [1-7] aim to discover the underlying (approximate) symmetry constraint from data. Because of this, the symmetrizing distribution on G has to be designed differently: in our case, we parameterize the distribution p(g|x) itself to be G equivariant (see Theorem 1), while for [1-7], the distribution p(g) is parameterized for the completely different purpose of covering a range of different symmetry constraints and their approximations (e.g., a set of 2D affine transformations [1]). Importantly, this allows our approach to learn a non-trivial and useful distribution p(g|x) per data point x while keeping the symmetrized ϕ(x) exactly G equivariant. This is a key advantage distinguishing our method from [1-7] when the group G is given, since [1-7] do not guarantee G equivariance, nor can they learn a useful p(g|x) in the case that they learn to be G equivariant. For example, Augerino [1] puts an unconditional distribution p(g) on the group G; in order to achieve exact G equivariance, p(g) has to learn a right-invariant distribution, i.e., p(gh) = p(g) for all h ∈ G (see Sec. 3.1 of [1]). 
Since such a p(g) is a Haar distribution, i.e., the uniform distribution Unif(G), it can be seen that Augerino has to reduce to basic group averaging [8] in order to achieve exact G equivariance. Please note that we have already extensively discussed the advantages of our approach compared to group averaging in the main text; for example, see Lines 42-45 and 90-97. The same argument also applies to [3-6]: see Sec. 3.1 of [3], Appendix A of [4], Sec. 3 of [5], and Sec. 3.4 of [6]. LILA [7] is closer to our approach as it defines an input-conditional augmentation distribution p(x’|x) (see Eq. (1) of [7]), but still does not guarantee G equivariance of ϕ(x) as the distribution p(x’|x) is unconstrained. We will revise the main text and add the above citations and discussion to make this more clear. Q2. It is not clear what objective is being used to train the model and what prevents the collapse of the symmetrization distribution. A2. In symmetrization approaches that aim to discover symmetry or its extent from data [1-7], encouragement in the objective, e.g., through regularization [1] or model selection [7], has to be used to prevent the collapse of the symmetry to the trivial group (i.e., p(g|x) = \delta(g=id) for all x). At a high level, this is because the symmetrized ϕ(x) = E_g[g⋅f(g⁻¹⋅x)] is allowed to search over the space of symmetries, which includes the trivial group; if one uses the maximum likelihood objective, the model would likely favor the trivial symmetry because it is the least constrained and would fit the training data most easily. However, in our case, the goal is not to search over the space of symmetry groups but to build an exactly G equivariant function for a given symmetry group G (e.g., G=S_n for graphs). 
For this, as in Theorem 1, we constrain the symmetrizing distribution p(g|x) to be G equivariant, which leads to a strong provable guarantee that the symmetrized function ϕ(x) = E_g[g⋅f(g⁻¹⋅x)] is always G equivariant regardless of how the (G equivariant) distribution p(g|x) is trained. In other words, the symmetrized model ϕ(x) cannot collapse to the trivial symmetry, as it is enforced to be equivariant for the given symmetry group G. This allows us to use the regular maximum likelihood objective for training without the need to address symmetry collapse. As a more bottom-up interpretation, we can show that if a symmetrizing distribution p(g|x) is G equivariant (as in Theorem 1), it cannot technically collapse to a delta peak at the identity, p(g|x) = \delta(g=id) for all x. To prove this, assume that a G equivariant p(g|x) has collapsed. Recall G equivariance, p(g|x) = p(g’g|g’⋅x) for all x and g’ ∈ G (see Eq. (5)); transforming the input has to transform the distribution accordingly. This yields a contradiction, as transforming an input x to g’⋅x has to transform the distribution p(g|x) with g’ as well, and as a result, p(g|x’) for x’ = g’⋅x is no longer a delta peak at the identity. Q3. How is inference performed? A3. Since we use the regular maximum likelihood objective, we only need to perform sampling g ~ p(g|x) for the MC estimation of ϕ(x) to obtain the loss; inference or density estimation on the symmetrizing distribution p(g|x) is not necessary. [1] Benton, Gregory, et al. "Learning invariances in neural networks from training data." (2020) [2] Rath, Matthias, and Alexandru Paul Condurache. "Boosting deep neural networks with geometrical prior knowledge: A survey." (2020). [3] Rommel, Cédric, et al. “Deep invariant networks with differentiable augmentation layers.” (2022). [4] Wilk, Mark van der, et al. “Learning invariances using the marginal likelihood.” (2018). [5] Wilk, Mark van der, et al. “Learning invariant weights in neural networks.” (2022). 
[6] Romero, David W., et al. “Learning partial equivariances from data.” (2021). [7] Immer, Alexander, et al. “Invariance learning in deep neural networks with differentiable Laplace approximations.” (2022). [8] Yarotsky, Dmitry. ”Universal approximations of invariant maps by neural networks.” (2018). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their detailed reply and explanations. I have updated my rating.
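The exact equivariance guarantee discussed in A2 can be checked numerically for the simplest symmetrizing distribution, the uniform one (plain group averaging [8]). Below is a minimal sketch, with illustrative assumptions: G is taken to be the cyclic group acting on vectors by circular shift, and f is an arbitrary, deliberately non-equivariant map.

```python
import numpy as np

n = 5
rng = np.random.default_rng(0)
W = rng.standard_normal((n, n))

def f(x):
    # arbitrary base function; by itself it is NOT shift-equivariant
    return np.tanh(W @ x)

def act(g, x):
    # cyclic group Z_n acting on vectors by circular shift
    return np.roll(x, g)

def symmetrize(x):
    # group averaging phi(x) = (1/|G|) sum_g  g . f(g^{-1} . x)
    return sum(act(g, f(act(-g, x))) for g in range(n)) / n

x = rng.standard_normal(n)
# phi is exactly G equivariant: phi(g . x) == g . phi(x) for every shift g,
# even though the base function f is not equivariant at all
for g in range(n):
    assert np.allclose(symmetrize(act(g, x)), act(g, symmetrize(x)))
```

The same identity holds for any G equivariant p(g|x) in place of the uniform weights, which is exactly the content of Theorem 1 as described in the rebuttal.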
Summary: The paper suggests a probabilistic approach to symmetrization, where an input-conditional distribution is used to replace the intractable Haar measure of infinite groups. In turn, the paper identifies the conditions on the conditional distribution under which the symmetrization yields an equivariant function. The method is evaluated on several benchmarks involving permutation and rotation symmetries. Strengths: Overall the paper is well written and easy to follow. The formulation of the method seems to be adequate. The proposed idea is simple, and a natural extension of previous methods. Weaknesses: Relation to previous works The paper in the introduction states that existing approaches focus on either manually deriving smaller subsets [1], or implementing a relaxed version of equivariance [2]. It is not clear what manually means in this context. In fact, [1] identifies a similar condition to eq (5) when the function p(g|x) is a delta function. I noticed section 2.4 relates to these previous works more thoroughly. More importantly, I feel [2] and the suggested approach are very similar, as [2] does not only suggest an approximate equivariance but rather suggests a learnable frame function (group subset), satisfying the equivariance constraint (similar to eq (5)). Thus, I feel this work should convince us of the necessity of the probabilistic model for learnable frames. To this end, I would have expected an ablation study testing “apples-to-apples” architectures where the only difference is the addition of the input noise term. It would also be beneficial to separate the test of averaging over S_n from O(3) in the experiment in table (2) and use symmetrization only for the O(3) symmetries with S_n equivariant networks. Missing discussion In fact, the network prediction in eq (4) is only approximated. Thus I assume the loss terms used for training only estimate the true loss (gradients). How stable are these estimates? 
How well do they approximate the true loss? Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would appreciate any response from the authors regarding the weakness stated above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Could not find a discussion on limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. The introduction states that [1] focuses on manually deriving smaller subsets, but it is not clear. A1. We intended to describe that one needs to manually solve for a G equivariant set-valued frame, and it cannot be discovered from data. In contrast, the G equivariant distribution p(g|x) requires less hand design and can be learned entirely from data. Q2. [2] and the suggested approach are very similar. A2. Theoretically, we can show that there always exist certain input data for which canonicalization fails to guarantee exact G equivariance, while PS guarantees G equivariance for all input data (as in Theorem 1). To see this, let us recall the definition of a G equivariant canonicalizer C: x ↦ ρ(g) [2]. In the paper, it is stated that a canonicalizer C is G equivariant if C(g⋅x) = ρ(g)C(x) for all g ∈ G and x ∈ X. Now consider an input x which has a non-trivial stabilizer subgroup G_x = {h ∈ G | h⋅x = x}, i.e., has inner symmetries. It can be seen that a G equivariant canonicalizer is ill-defined for these inputs. Specifically, let g1 = gh1 and g2 = gh2 for some g ∈ G and any h1, h2 ∈ G_x where h1 ≠ h2. Then we have C(g1⋅x) = C(gh1⋅x) = C(g⋅x) = C(gh2⋅x) = C(g2⋅x), implying that ρ(g1)C(x) = ρ(g2)C(x). Since g1 ≠ g2, this contradicts the group axioms, and therefore a G equivariant canonicalizer C cannot exist for inputs x with non-trivial G_x. For a more detailed discussion of this problem, please see Appendix A of [1]. To handle all possible inputs, canonicalization [1] adopts relaxed equivariance: a canonicalizer C satisfies relaxed equivariance if C(g⋅x) = ρ(gh)C(x) up to an arbitrary action from the stabilizer h ∈ G_x. As a result, the symmetrization ϕ(x) = g⋅f(g⁻¹⋅x) performed using a relaxed canonicalizer C only guarantees relaxed equivariance ϕ(g⋅x) = gh⋅ϕ(x) up to an arbitrary action from the stabilizer h ∈ G_x (proof is given in [1]).
Intuitively, this means canonicalization does not guarantee G equivariant processing for input data x with inner symmetries G_x. To visually demonstrate this, we performed a minimal experiment using several graphs x with non-trivial stabilizers G_x (inner symmetries [2]). The results are in Figures 1-3 of the common response. We fixed a randomly initialized MLP f: R^{n×n} → R^n and symmetrized it using our approach and canonicalization. When symmetrized, the MLP is expected to provide a scalar embedding of each node, which we color-code for visualization. For each graph, we illustrate three panels: the leftmost one illustrates the color-coding of the inner symmetry of nodes (automorphic nodes), the middle one illustrates node embeddings from MLP-PS, and the rightmost one illustrates node embeddings from MLP-Canonical. If a method is G equivariant, it is expected to give identical embeddings for automorphic nodes, because an equivariant model cannot distinguish between the identities of individual automorphic nodes in principle [3]. As in Figures 1-3, in the presence of inner symmetry (left panels), the MLP with PS (middle panels) is able to perform G equivariant processing and produces almost identical embeddings for automorphic nodes. However, the same MLP with canonicalization fails and produces unstructured embeddings (right panels). The result illustrates a potential advantage of PS over canonicalization when learning data with inner symmetries, which is found in applications such as molecular graph processing [4]. Q3. I expected an ablation study where the only difference is the addition of the input noise. A3. All our main experiments in Sections 3.1-3.3 are conducted in the way the reviewer suggested. Please see Lines 214-216 in the main text; all canonicalization models are constructed by deleting the noise term from the probabilistic model, using identical architectures. Q4.
It would be beneficial to apply symmetrization only to the O(3) symmetries with S_n equivariant networks. A4. We additionally conducted an experiment on E(3) symmetrization of an S_n equivariant GNN for the n-body task (Section 3.2). The results are in Table 2 of the common response; our approach obtains state-of-the-art performance and significantly improves over all other symmetrization approaches. Q5. The loss terms used for training only estimate the true loss (gradients). How stable are these estimates? How well do they approximate the true loss? A5. While Eq. (4) gives a G equivariant symmetrized function ϕ(x), we cannot directly observe ϕ(x), but can obtain samples of the unbiased estimator g⋅f(g⁻¹⋅x). Given that, it is natural to ask what objective we are actually optimizing when we use estimates of ϕ(x). Fortunately, a theoretical framework for training S_n symmetrized models has been established [3-4], which we generalize to general groups. The main message is that minimizing a convex loss function based on the sampled g⋅f(g⁻¹⋅x) is equivalent to minimizing an upper bound on the true objective that involves ϕ(x). Given a training set D = {(x1, y1), …, (xn, yn)} for a G equivariant task, our true objective would be minimizing the empirical loss L(D; θ, ω) = Σ_i l(y_i, ϕ(x_i)) where l is a convex loss. However, in practice, ϕ(x) cannot be observed, and we observe g⋅f(g⁻¹⋅x). While g⋅f(g⁻¹⋅x) serves as an unbiased estimator of ϕ(x), the estimation is no longer unbiased when g⋅f(g⁻¹⋅x) is used to compute the loss: from Jensen's inequality, we have E_g[l(y, g⋅f(g⁻¹⋅x))] ≥ l(y, E_g[g⋅f(g⁻¹⋅x)]). That is, minimizing the sampling-based loss is minimizing an upper-bound surrogate of the true objective [3]. Q6. Could not find a discussion on limitations. A6. We will move Appendix A.4 to the main text for visibility. [1] O. Puny et al., Frame averaging for invariant and equivariant network design (2021) [2] S.
Kaba et al., Equivariance with learned canonicalization functions (2022) [3] R. L. Murphy et al., Relational pooling for graph representations [4] R. L. Murphy et al., Janossy pooling: Learning deep permutation invariant functions for variable-sized inputs --- Rebuttal Comment 1.1: Title: Further Response Comment: Dear Reviewer wgQS, Thank you again for the insightful and constructive comments. Following common requests, we have conducted an in-depth empirical analysis of the convergence and sample complexity of the algorithm using the EXP-classify dataset (Sn invariance; Section 3.1). The results can be found in the latest common response: https://openreview.net/forum?id=phnN1eu5AX&noteId=ci75ubQbgK. We think the findings might be of your interest, in particular the analysis on how the sample size for estimation affects training (**Q4**). A particularly interesting finding is that smaller sample size for training regularizes the model towards low-variance estimation. This supplements our previous response on sampling-based loss, and provides some understanding of the side effect induced by optimizing the sampling-based upper-bound to the true objective. Sincerely, Authors of submission 12717
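The Jensen-inequality relation in A5 above can be checked numerically. Below is a minimal sketch with a generic convex loss (squared error); the array `Z` is an illustrative stand-in for sampled outputs g⋅f(g⁻¹⋅x), not the paper's actual model outputs:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(size=5)           # target for a G equivariant task
Z = rng.normal(size=(100, 5))    # stand-ins for sampled outputs g . f(g^-1 . x)

def sq_loss(y, z):
    # A convex loss l(y, z); any convex l gives the same Jensen inequality.
    return float(np.mean((y - z) ** 2))

sampled_loss = np.mean([sq_loss(y, z) for z in Z])   # E_g[ l(y, g.f(g^-1.x)) ]
true_loss = sq_loss(y, Z.mean(axis=0))               # l(y, phi(x)), phi = E_g[...]

# Jensen: the sampling-based loss upper-bounds the true objective...
assert sampled_loss >= true_loss
# ...and for squared error the gap is exactly the mean per-coordinate variance.
assert np.isclose(sampled_loss - true_loss, Z.var(axis=0).mean())
```

The second assertion makes the size of the Jensen gap concrete for squared error: the gap equals the variance of the sampled estimates, which is consistent with the authors' later observation that training with fewer samples regularizes the model toward low-variance estimation.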
Summary: This paper proposes a new symmetrization method for achieving equivariance with any base function. It absorbs previous methods with the same objective as special cases, including group averaging, frame averaging, and canonicalization. The method uses a learnable, equivariant map to generate the group representation necessary for achieving equivariance from an invariant, external noise variable, followed by necessary post-processing to ensure the validity of the group representation. The authors empirically demonstrate that the learned stochasticity in symmetrization leads to improved model performance over the previous methods, and showcase the benefits of general pre-training for equivariant models. Strengths: * The proposed method is general, providing a unified perspective on previous methods, and may lay the foundation for further theoretical exploration of the "optimal" conditional group distribution. * The presentation is clear and easy to read. * The finding of the benefit of non-symmetric pretraining for equivariant models seems interesting and new. Weaknesses: * My primary concern is that the superiority of the proposed method compared to the canonicalization method is not clear enough. - The advantages of the former over the latter seem to be more evident only in the first experiment, as the canonicalization method appears to have difficulties during optimization (although it is unknown whether this issue can be mitigated through some warm-up techniques). In the other two experiments, the performance of both methods is quite similar. - On the other hand, compared to the canonicalization method, the proposed method incurs more computational cost in practice, including the additional parameters and hyperparameters in the learnable module, as well as the sampling cost (the canonicalization method only requires a single "sample", whereas here we need 10-20 samples).
+ This paper lacks thorough empirical and theoretical analyses for the two claimed superiorities over previous methods. Specifically: - "Learn to collaborate with the base function to maximize task performance": No sensitivity analysis is provided to demonstrate how this method adapts better to different base functions. - "Learn to produce low-variance samples that can offer more stable gradients for the base function": No analysis of gradient magnitudes is provided to indicate its stability. * The limitations are not discussed, including the extra computational cost. * The post-processing seems interesting but also laborious, as it requires a different design for each (compact) group. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * What's the computational cost of this method, including training and testing, compared to the canonicalization method? * Analyzing the sampling efficiency for estimating expectations would be beneficial, especially in comparison to the uniform distribution, as it is a major claim of the effectiveness of this method. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors have not discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. The superiority compared to canonicalization is not clear. A1. Theoretically, we can show that there always exist certain input data for which canonicalization fails to guarantee exact G equivariance, while PS guarantees G equivariance for all input data (as in Theorem 1). To see this, let us recall the definition of a G equivariant canonicalizer C: x ↦ ρ(g) [1]. A canonicalizer C is G equivariant if C(g⋅x) = ρ(g)C(x) for all g ∈ G and x ∈ X. Consider an input x which has a non-trivial stabilizer subgroup G_x = {h ∈ G | h⋅x = x}, i.e., has inner symmetries. It can be seen that a G equivariant canonicalizer is ill-defined for these inputs. Specifically, let g1 = gh1 and g2 = gh2 for some g ∈ G and any h1, h2 ∈ G_x where h1 ≠ h2. Then we have C(g1⋅x) = C(gh1⋅x) = C(g⋅x) = C(gh2⋅x) = C(g2⋅x), implying that ρ(g1)C(x) = ρ(g2)C(x). Since g1 ≠ g2, this contradicts the group axioms, and therefore a G equivariant canonicalizer C cannot exist for inputs x with non-trivial G_x. For a more detailed discussion, please see [1]. To handle all inputs, canonicalization [1] adopts relaxed equivariance: a canonicalizer C satisfies relaxed equivariance if C(g⋅x) = ρ(gh)C(x) up to an arbitrary action from the stabilizer h ∈ G_x. As a consequence, the symmetrization ϕ(x) = g⋅f(g⁻¹⋅x) performed using a relaxed canonicalizer C only guarantees relaxed equivariance ϕ(g⋅x) = gh⋅ϕ(x) up to an arbitrary action from the stabilizer h ∈ G_x (proof is in [1]). Intuitively, this means canonicalization does not guarantee G equivariance for input data x with inner symmetries G_x. To visually demonstrate this, we performed a minimal experiment using graphs x with non-trivial stabilizers G_x (inner symmetries [2]). The results are in Figures 1-3 of the common response. We fixed a randomly initialized MLP f: R^{n×n} → R^n and symmetrized it using our approach and canonicalization. When symmetrized, the MLP is expected to provide a scalar embedding of each node, which we color-code for visualization.
For each graph, we illustrate three panels: the leftmost one illustrates the color-coding of the inner symmetry of nodes (automorphic nodes), the middle one illustrates node embeddings from MLP-PS, and the rightmost one illustrates node embeddings from MLP-Canonical. If a method is G equivariant, it is expected to give identical embeddings for automorphic nodes, as an equivariant model cannot distinguish between the identities of individual automorphic nodes in principle [3]. As in Figures 1-3, in the presence of inner symmetry (left panels), the MLP with PS (middle panels) achieves G equivariance and produces almost identical embeddings for automorphic nodes. However, the same MLP with canonicalization fails and produces unstructured embeddings (right panels). The result illustrates a clear advantage of PS over canonicalization when learning data with inner symmetries, which is often found in applications such as molecular graphs [4]. Q2. The advantages over canonicalization are evident only in the first experiment. A2. Please note that the empirical advantage is evident not only in the first experiment (Section 3.1) but also in the second (n-body; Section 3.2), where Transformer-PS achieves 0.00417 MSE, significantly improving over canonicalization with 0.00779 MSE. To further demonstrate the advantage over canonicalization, we added two experiments: graph isomorphism (Section 3.1) with S_n symmetrization of a GIN base model with node identifiers (GIN-ID) [7], and n-body (Section 3.2) with E(3) symmetrization of a GNN. In both, we achieved state-of-the-art performance; the results are in Tables 2 and 3 of the common response. Our approach consistently improves over canonicalization as well as other symmetrization methods. Q3. Compared to canonicalization, the method incurs more cost, including parameters and hyperparameters in the learnable module, as well as sampling. A3.
The O(N) cost with N samples is a genuine weakness, and we will add a clarification in the limitations (Appendix A.4). As the sampling is parallelizable, we believe this can be overcome to some degree by leveraging parallelism [5] developed for scaling batch size. For the parameters and hyperparameters, we think it adds a negligible overhead to canonicalization. This is because canonicalization also uses an equivariant module C: x ↦ ρ(g), so our approach does not add parameters in principle. For the hyperparameters, our approach only adds the noise-scale hyperparameter η, and we find the simple choice η = 1 works robustly in all our experiments. Q6. No thorough analyses for the claimed superiorities. No analysis of sensitivity and gradients. Analyzing sampling efficiency would be beneficial. A6. In Lines 90-97, we are making a specific comparison against group averaging rather than claiming superiority in general. We will revise the main text for clarification. We are planning a controlled experiment to understand the training dynamics, including the behavior of p(g|x), the stability of gradients, and the sample complexity in comparison to group averaging. We will post the results during the discussion period. Q7. The post-processing seems laborious. A7. The post-processing indeed has to be implemented for each group. However, it is not an issue specific to our approach, as it is an issue for canonicalization as well [1]. Also, we think designing it can often be more straightforward compared to alternatives, e.g., a frame, which is a G equivariant function that has to produce a set of group elements.
[1] Kaba et al., Equivariance with learned canonicalization functions (2022) [2] Thiede et al., Autobahn: Automorphism-based graph neural nets (2022) [3] Srinivasan et al., On the equivalence between positional node embeddings and structural graph representations (2019) [4] McKay et al., Surge: a fast open-source chemical graph generator (2022) [5] You et al., Scaling SGD batch size to 32k for ImageNet training (2017) [6] Puny et al., Frame averaging for invariant and equivariant network design (2021) --- Rebuttal Comment 1.1: Title: Further Response Comment: Dear Reviewer Ry8Q, Thank you again for the insightful and constructive comments. Following common requests, we have conducted an in-depth empirical analysis of the convergence and sample complexity of the algorithm using the EXP-classify dataset (S_n invariance; Section 3.1). The results can be found in the latest common response: https://openreview.net/forum?id=phnN1eu5AX&noteId=ci75ubQbgK. We think the findings might be of your interest, in particular the analysis of the stability of gradients (**Q2**) and sample complexity in comparison to the uniform distribution (**Q1 and Q3**). Sincerely, Authors of submission 12717
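The stabilizer behavior that the rebuttals above describe (automorphic nodes must receive identical embeddings under exact G equivariant processing) can be reproduced in miniature with exact group averaging over a tiny group, which PS approximates by sampling. A minimal numpy sketch; the path graph, random MLP, and brute-force average over all of S_3 are illustrative stand-ins for the actual MLP-PS experiment, not the authors' code:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Path graph 0-1-2: swapping nodes 0 and 2 is an automorphism (h . A = A).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

# A fixed, non-equivariant base model f: R^{n x n} -> R^n (random one-layer MLP).
W1 = rng.normal(size=(16, n * n))
W2 = rng.normal(size=(n, 16))

def f(adj):
    return W2 @ np.tanh(W1 @ adj.reshape(-1))

def perm_matrix(p):
    # P moves entry i to position p[i]; g acts on A by P A P^T and on v by P v.
    P = np.zeros((n, n))
    P[list(p), np.arange(n)] = 1.0
    return P

# Exact group averaging over S_3: phi(A) = mean_g  g . f(g^{-1} . A)
phi = np.mean(
    [perm_matrix(p) @ f(perm_matrix(p).T @ A @ perm_matrix(p))
     for p in itertools.permutations(range(n))],
    axis=0,
)

assert abs(phi[0] - phi[2]) < 1e-9   # automorphic nodes: identical embeddings
assert abs(phi[0] - phi[1]) > 1e-6   # distinct structural roles stay distinct
```

Because the average runs over the full group, ϕ inherits invariance to the stabilizer of A, so nodes 0 and 2 are provably indistinguishable; a single deterministic canonical permutation has no such guarantee on inputs with non-trivial stabilizers, which is the failure mode the rebuttal's Figures 1-3 visualize.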
Rebuttal 1: Rebuttal: The common response PDF contains the following: - Table 1, including additional results on S_n invariant graph isomorphism learning with an S_n symmetrized GIN-ID base function. - Table 2, including additional results on the S_n × E(3) equivariant n-body problem with an E(3) symmetrized GNN base function. - Figures 1 to 3, including visualizations of scalar node embeddings of graphs produced by randomly initialized MLP-PS (Ours) and MLP-Canonical under the presence of inner symmetries. We have added the responses to each reviewer individually. Pdf: /pdf/1365fc2b3f9b2e15a59cf5693cc459c69cdd7659.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work presents a generic approach to symmetrize a wide range of base models. Instead of relying on uniform average sampling, the approach introduces a trainable transformation to model the group equivariant distribution. The framework theoretically encompasses existing approaches such as group averaging, frame averaging, and canonicalization. Furthermore, the authors provide empirical evidence demonstrating competitive performance. Strengths: 1. The proposed method demonstrates the ability to symmetrize architectures in a group-agnostic manner for general purposes, supported by sound theory. 2. The theoretical analysis presented in this work showcases the ability of the proposed approach to encompass interesting literature that assigns distributions on the compact group $G$. 3. The paper is written in a reader-friendly manner, making it easy to follow. Weaknesses: 1. A thorough discussion of the empirical and theoretical advantages and disadvantages of related literature, specifically group averaging, frame averaging, and canonicalization, would greatly enhance our understanding, given the shared problem setup with the proposed method. 2. I have reservations about the assertion made in Line 3 that >we use an arbitrary base model (such as an MLP or a transformer)... While the argument of this work suggests that the base model $f_{\theta}$ can be arbitrary, in the experiments, only MLP and transformer architectures were explored and evaluated. 3. Typically [1,2], showcasing improved performance on image classification problems is one of the common applications used to demonstrate the effectiveness of a newly proposed equivariant network. It is encouraged to include such experiments in order to comprehensively validate and demonstrate the efficacy of the proposed equivariant network. [1] S. Basu, P. Sattigeri, K. N. Ramamurthy, V. Chenthamarakshan, K. R. Varshney, L. R. Varshney, and P. Das.
Equi-tuning: Group equivariant fine-tuning of pretrained models [2] S. Kaba, A. K. Mondal, Y. Zhang, Y. Bengio, and S. Ravanbakhsh. Equivariance with learned canonicalization functions Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is it feasible to explore the convergence of the probabilistic equivariant distribution $p_{\omega}$ and assess its dissimilarity to a uniform distribution? This analysis would provide insights into the nature of the mechanism. 2. What is the sensitivity of the proposed method to the architecture scales of the MLP and transformer? Can the efficacy of the proposed method be maintained when the base models have a large number of parameters? 3. Is there a general framework or guideline for designing the $G$ equivariant neural network $q_{\omega}$? How significantly does the expressivity of $q_{\omega}$ impact the performance? 4. Why are some experimental comparisons with [1-3] missing? For instance, in ``3.1 Graph Isomorphism Learning with MLP'', what limits the comparison with [3]? [1] S. Basu, P. Sattigeri, K. N. Ramamurthy, V. Chenthamarakshan, K. R. Varshney, L. R. Varshney, and P. Das. Equi-tuning: Group equivariant fine-tuning of pretrained models [2] S. Kaba, A. K. Mondal, Y. Zhang, Y. Bengio, and S. Ravanbakhsh. Equivariance with learned canonicalization functions [3] O. Puny, M. Atzmon, E. J. Smith, I. Misra, A. Grover, H. Ben-Hamu, and Y. Lipman. Frame averaging for invariant and equivariant network design Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed limitations and potential societal impact associated with this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1. A thorough discussion of advantages and disadvantages of symmetrization approaches would enhance our understanding. A1. We will add a related work section with a thorough comparison; a tentative discussion is in Table 3. Please note that we performed theoretical analysis in Section 2.4 and Appendix A.2.5, and also included discussions of performance in Sections 3.1-3.3.

Table 3. Overview of symmetrization approaches.

| | GA | FA | Canonicalization | PS (Ours) |
|---|---|---|---|---|
| Symmetrization | g ~ Unif(G) | g ~ Unif(F(x)) | g = C(x) | g ~ p(g\|x) |
| \|Sample space\| | \|G\| | \|G_x\| ≤ \|F(x)\| ≤ \|G\| | 1 | \|G_x\| ≤ \|supp p(g\|x)\| ≤ \|G\| |
| Advantages | Simple | Efficient if F(x) is small | Efficient; C(x) can be learned | p(g\|x) can be learned; empirically works well |
| Disadvantages | The group G has to be compact; training challenges; cost scales linearly with sample size | F(x) requires manual derivation and cannot be learned; cost scales linearly with sample size if F(x) is large | No exact equivariance for x with \|G_x\| > 1; training challenges compared to PS | The group G has to be compact; cost scales linearly with sample size |

Q2. While it is suggested that the base model f can be arbitrary, only MLP and transformer were evaluated. A2. In principle, we can indeed use an arbitrary base model f, but we focused on MLPs and transformers as they have universality guarantees as in Theorem 2 and are widely used as general-purpose architectures. We will revise Line 3 to "we use a non-equivariant, general-purpose backbone..." to clarify our contributions. To demonstrate that a wider range of base models can be used, we added two experiments: graph isomorphism (Section 3.1) with S_n symmetrization of a GIN base model with node identifiers (GIN-ID) [7], and particle dynamics (Section 3.2) with E(3) symmetrization of a GNN base model. The base architectures (GIN-ID and GNN) are directly adopted from [3].
In both, we achieved state-of-the-art performance; the results are in Tables 1 and 2 of the common response. The results indicate that our approach works for a wide range of base models. Q3. In [1,2], showing improved performance on image classification is a common application. A3. While rotated image datasets are useful to highlight equivariance, we think the groups involved are overly simple for comparing symmetrization methods comprehensively. This is because these groups are very small, e.g., C4 contains four elements representing 90-degree rotations. In such cases, even full group averaging is computable [1]. As our method is designed to overcome the sample complexity of group averaging, we believe it is more suitable to demonstrate our approach on more challenging groups, such as combinatorial groups, infinite groups, and their products, as presented in our experiments (Section 3). Please note that [3] was also not demonstrated on rotated images, likely for similar reasons. Thus, instead of image classification, we additionally conducted an experiment on E(3) symmetrization of a GNN, which is another widely used setup [2, 3]. We have obtained a state-of-the-art result, which is in Table 2 of the common response. Q4. Convergence of p(g|x) and dissimilarity to the uniform distribution? A4. We have provided an analysis of the convergence of p(g|x) for S_n in Figure 1 of the main text, with a comparison to the uniform distribution (random permutations). In Figure 1, it can be seen that the distribution learns to produce consistent samples g ~ p(g|x) early in training, diverging from the uniform distribution. Later in training, the variance of the samples g ~ p(g|x) slightly increases, but the validation loss keeps improving, indicating that the model leverages stochasticity to improve performance. Q5. Sensitivity to the scales? A5.
Our approach is robust to a wide range of scales: our smallest base architecture involves 107,523 parameters (Section 3.2), and the largest involves 86M parameters (ViT-Base in Sections 3.3-3.4). Q6. A guideline for designing the G equivariant q? How significantly does its expressivity impact performance? A6. We found that having global interactions across the input in q is helpful. For example, for S_n, the network q is expected to output a permutation by assigning a score to each node. It is expected to perform a kind of comparison over the graph, where global interaction could be useful; we implemented it via a virtual node and batch normalization. Other than this, the performance was robust across setups for a reasonable q. In all of our experiments on S_n, we use a 3-layer GIN, and in all experiments on S_n × E(3) or E(3), we use 2-layer Vector Neurons, and found no issues. Q7. Some experimental comparisons with [1-3] missing? A7. We would like to clarify that our approach is compared to [1-3] in all experiments in Sections 3.1-3.3, except in one case where [3] was unavailable. Note that Equi-Tuning [1] is group averaging [5], as clarified in [1]. In this regard, in all experiments in Sections 3.1-3.3, we made comparisons with GA [1, 6], canonicalization [2], and FA [3]. One exception is in Table 2, where we could not evaluate FA since a frame for the full group S_n × E(3) has not been identified in the current literature. To further provide experimental comparisons to [1-3], we added experiments as mentioned in A1. The results can be found in Tables 2 and 3 of the common response, and they are consistent with our main findings, as PS outperforms [1-3]. [1] S. Basu et al., Equi-tuning: Group equivariant fine-tuning of pretrained models [2] S. Kaba et al., Equivariance with learned canonicalization functions [3] O. Puny et al., Frame averaging for invariant and equivariant network design [4] S.
Basu et al., Equivariant few-shot learning from pretrained models [5] D. Yarotsky. Universal approximations of invariant maps by neural networks [6] B. Sturmfels. Algorithms in invariant theory [7] R. L. Murphy et al., Relational pooling for graph representations --- Rebuttal Comment 1.1: Title: Official Review by Reviewer SgC2 Comment: The authors have addressed all my questions and concerns well. Thus, I increased my evaluation. --- Reply to Comment 1.1.1: Title: Further Response Comment: Dear Reviewer SgC2, Thank you again for the positive and constructive comments. Following common requests, we have conducted an in-depth empirical analysis of the convergence and sample complexity of the algorithm using the EXP-classify dataset (Sn invariance; Section 3.1). The results can be found in the latest common response: https://openreview.net/forum?id=phnN1eu5AX&noteId=ci75ubQbgK. We think the findings might be of your interest, as it includes results on convergence of p(g|x) in comparison to the uniform distribution Unif(G). Sincerely, Authors of submission 12717
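One plausible concrete realization of the score-based q for S_n described in the rebuttal's A6 is to discretize per-node scores into a permutation via argsort, with a noise scale η controlling the stochasticity of the sample g ~ p(g|x). This is an assumed sketch for illustration only; the function names and the exact postprocessing are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def scores_to_perm(scores):
    # Discretize per-node scores into a permutation matrix via argsort.
    n = len(scores)
    order = np.argsort(-scores)          # rank nodes by descending score
    P = np.zeros((n, n))
    P[np.arange(n), order] = 1.0
    return P

def sample_perm(scores, eta=1.0):
    # A stochastic sample g ~ p(g|x): perturb the scores with scaled noise
    # (eta = 0 would recover a deterministic canonicalizer-like map).
    return scores_to_perm(scores + eta * rng.normal(size=scores.shape))

P = sample_perm(np.array([0.3, -1.2, 2.0, 0.7]))
# Every sample is a valid permutation matrix (rows and columns each sum to 1).
assert (P.sum(axis=0) == 1).all() and (P.sum(axis=1) == 1).all()
```

Whatever the exact design, the postprocessing must map q's continuous output onto valid group elements, which is why it has to be implemented per group, as A7 in the earlier rebuttal notes.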
FedFed: Feature Distillation against Data Heterogeneity in Federated Learning
Accept (poster)
Summary: This paper proposes a novel approach called Federated Feature Distillation (FedFed) to mitigate the data heterogeneity problem while preserving privacy. In particular, FedFed partitions data into performance-sensitive features and performance-robust features based on the information bottleneck method. Only performance-sensitive features are shared among clients as they contain minimal private information and significantly contribute to performance. Moreover, incorporating the differential privacy (DP) mechanism can provide an additional layer of privacy protection. In summary, this work is interesting, and the authors provide some theoretical analyses to support their claims. Strengths: This study mitigates the issue of data heterogeneity in Federated Learning (FL) by utilizing a promising approach to information-sharing. Empirical evidence shows that the proposed method is effective in enhancing model performance. Weaknesses: The reviewer believes the evaluation of the privacy leakage is insufficient. The details are as follows: 1. In Section 4.4, the authors conducted a model inversion attack to infer private data using shared features. However, quantitative measurement is missing in this evaluation. Previous studies [1, 2] have utilized metrics such as peak signal-to-noise ratio (PSNR) and Frechet inception distance (FID) to assess the quality of reconstructed data. We recommend that the authors include some quantitative results to strengthen the empirical evidence. 2. Federated learning is known to leak private information when sharing model parameters [3,4]. The FedFed method, which shares both model parameters and features, creates an opportunity for attackers to exploit these two types of information for privacy attacks. However, the experiment showed that the attacks only targeted the privacy information contained in the features, which is not a comprehensive representation of potential privacy leaks. 3. 
Tables 1, 2, 3, and 4 demonstrate that the FedFed method achieves a higher Top-1 accuracy than FedAvg. However, it's worth considering whether this increased accuracy comes at the cost of higher privacy leakage. [1] The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.html [2] Knowledge-Enriched Distributional Model Inversion Attacks. https://openaccess.thecvf.com/content/ICCV2021/html/Chen_Knowledge-Enriched_Distributional_Model_Inversion_Attacks_ICCV_2021_paper.html [3] Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning. https://ieeexplore.ieee.org/abstract/document/8835245?casa_token=1iwvBNyN5q4AAAAA:BdxQzounj3eoNv0HIcdMoW7nCaM6xWJFPwZQIosqhvpiXWNaJd-q0MeW_xmiZJkZGVmZXDRE [4] Beyond Inferring Class Representatives: User-Level Privacy Leakage From Federated Learning. https://ieeexplore.ieee.org/abstract/document/8737416?casa_token=zyEVcT_x3oQAAAAA:FfpfqLHOsQJ33Br2OqnYE6fRI3EIgdNPlCSUOC74Yu6qhFjcgJvqHwFLAsPaShPaigK3sjz1 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How to conduct the model inversion attack? The reviewer has checked both the experiment section and Appendix E, but some details appear to be missing. 2. Table 14 in the Appendix summarizes the currently available information-sharing methods. According to the authors, the FedFed method provides hierarchical protections to preserve information privacy while overcoming the privacy-performance dilemma. Does this mean that the FedFed method achieves better performance and privacy protection than all the baselines in Table 14? Take Fedproto as an example. This method only exchanges prototypes (i.e., the mean of features) instead of model parameters, while FedFed shares both the model parameters and features. 
The authors correctly pointed out that FedProto lacks protection on the shared information, but the shared model parameters in FedFed also lack privacy guarantees. Further discussion and empirical results are required to demonstrate the effectiveness of the FedFed method in comparison to previous works. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors correctly pointed out that the performance of FedFed is limited by the storage capacity of clients. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer ezSw: > Q1: Quantitative measurement is missing in the evaluation of model inversion attack in Sec.4.4, e.g., PSNR and FID. **A1:** Thank you for your valuable suggestion. Accordingly, we employ PSNR and FID, as used in [R1][R2], to provide quantitative results for the samples reconstructed by the model inversion attack. We will add these results to our revision. - As shown in Table 1, if the central server acts as the attacker, it does not have access to the original data, resulting in poorer attack performance. - Inspired by your comments, we conducted another experiment where one of the clients acts as the attacker. The client has access to its local data. Under this scenario, the attacker's performance is better than when the server acts as the attacker, while the attacker is still unable to recover data from the shared features. Table 1. The PSNR and FID comparison under two kinds of attack methods. | | PSNR| FID| |---|---|---| | Server acts as a malicious attacker | 15.56 |404.16| | One of the clients acts as a malicious attacker | 18.31 |378.21| > Q2: How to conduct the model inversion attack? **A2:** We assume that the server is a malicious attacker, i.e., the server can access the globally shared data and the global model to conduct a model inversion attack. The results are reported in Sec. 4.4 (the PSNR is 15.56). Following the valuable comments, we conduct another model inversion attack introduced in [R1]. Specifically, the attacker (one client in the FL system) uses a public dataset (the local data of the malicious client) and auxiliary data (the shared features of FedFed) to train a GAN to attack target models, namely, a GAN used to reconstruct other clients' private data. In our experiments, the attacker can hardly recover others' private data: the PSNR of the recovered results is 18.31.
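For reference, the PSNR values in the table above can be computed as follows (a minimal sketch assuming 8-bit images; the exact evaluation pipeline used in the experiments may differ):

```python
import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means a closer reconstruction."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

A PSNR below roughly 20 dB, as reported in Table 1, indicates the reconstructions are far from the originals.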
> Q3: The experiment showed that the attacks only targeted the privacy information contained in the features, which is not a comprehensive representation of potential privacy leaks. **A3:** We agree that sharing model parameters may risk privacy. In this regard, sharing more information could create an opportunity for attackers to exploit these two types of information for privacy attacks. - Consequently, as mentioned in the comments, we conducted privacy attacks to show that attackers can hardly extract information from the shared features. - To date, our work has explored more attacks than previous works [R3-R4], which employ an information-sharing strategy to tackle data heterogeneity in FL. We fully agree with your comment that a comprehensive analysis of privacy attacks is promising and important. - However, such an analysis seems to be out of scope for a 9-page conference paper, since our work aims at a new approach to tackle data heterogeneity; we prioritize mitigating the dilemma introduced by the information-sharing strategy. - Moreover, we look forward to more work investigating privacy attacks in FL, particularly for methods with an information-sharing strategy. In this regard, we sincerely hope that FedFed could serve as a baseline for future privacy/attack research, and thereby further contribute to the deep learning community. > Q4: Tables 1-4 demonstrate that the FedFed method achieves a higher Top-1 accuracy than FedAvg. However, it's worth considering whether this increased accuracy comes at the cost of higher privacy leakage. **A4:** Sharing clients' information, e.g., models or features, will inevitably introduce some privacy leakage. Thus, FedFed distils performance-sensitive features and shares them with DP protection to mitigate privacy leakage. We will add this discussion to our revision.
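For intuition on A4's point that shared features are protected with DP, here is a minimal sketch of the classic Gaussian-mechanism noise calibration (an assumed illustration — FedFed's exact mechanism, clipping, and budget accounting may differ). The noise scale grows linearly with the sensitivity of what is released:

```python
import math
import random

def gaussian_sigma(sensitivity, epsilon, delta):
    """Noise scale for (epsilon, delta)-DP under the classic Gaussian mechanism."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def privatize(values, sensitivity, epsilon, delta, rng=None):
    """Add calibrated Gaussian noise to a list of real-valued features."""
    rng = rng or random.Random(0)
    sigma = gaussian_sigma(sensitivity, epsilon, delta)
    return [v + rng.gauss(0.0, sigma) for v in values]
```

Because sigma is proportional to the sensitivity, releasing a lower-sensitivity quantity (e.g., only part of the data) needs less noise for the same $(\epsilon, \delta)$.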
> Q5: Does Table 14 mean that the FedFed method achieves better performance and privacy protection than all the baselines in Table 14? Further discussion and empirical results are required to demonstrate the effectiveness of the FedFed method in comparison to previous works. **A5:** We appreciate your valuable time and detailed comments. We will add further discussion in our revision. - We listed many excellent works on the information-sharing strategy in Table 14, aiming to highlight the difference between FedFed and these works. Specifically, FedFed shares information with protection, while previous works do not protect the shared information. Thus, a direct comparison would be unfair to previous works. This is why we do not report a comparison with these methods. - When talking about privacy and security, we need to consider the attacker's ability and privacy goals. Empirically, we have conducted multiple attack experiments to explore and verify the improved protection in different settings. - Besides, analyzing all existing attacks with a sufficient theoretical guarantee for shared FL models is out of scope for a 9-page conference paper. The shared features are protected with DP. Regarding attacks exploiting both shared parameters and features, we agree that studying how to perform such an attack is a promising direction for FL security. However, our work focuses on tackling data heterogeneity through an information-sharing strategy with a DP guarantee. Thanks for the valuable comments; we will investigate more attacks induced by FL with the information-sharing strategy in our future work. **Reference:**\ [R1] Zhang Y, Jia R, Pei H, et al. The secret revealer: Generative model-inversion attacks against deep neural networks. In CVPR, 2020.\ [R2] Chen S, Kahla M, Jia R, et al. Knowledge-enriched distributional model inversion attacks. In ICCV, 2021.\ [R3] Li D, Wang J. Fedmd: Heterogenous federated learning via model distillation.
arXiv preprint arXiv:1910.03581.\ [R4] Lin T, Kong L, Stich S U, et al. Ensemble distillation for robust model fusion in federated learning. In NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed answers to my comments. This addresses a few of my concerns, and as a result, I have increased my score from 4 to 5. However, the reviewer remains concerned that sharing partial features in data compromises privacy, and thus it is important to compare the privacy-performance tradeoff of the proposed method with the baselines. The reviewer understands that providing a comprehensive analysis of privacy attacks is beyond the scope of this paper. But it is recommended to highlight the additional privacy leakage as a limitation in the manuscript. --- Reply to Comment 1.1.1: Title: Response to ezSw Comment: Thanks for your further suggestions. We agree with you that it is important to compare the privacy-performance trade-off of the existing methods. We will highlight the additional privacy leakage in the updated revision. Thank you for dedicating your time to our paper and raising the score.
Summary: The paper proposes a method based on feature distillation to tackle data heterogeneity. The main contribution, as I see it, is in identifying performance-robust and performance-sensitive features and sharing the latter among clients to mitigate the impact of heterogeneity. Strengths: - The method is simple (this is very important) and can be easily used with existing FL algorithms. - DP is incorporated to protect the leakage of privacy-sensitive information when sharing performance-sensitive features. Weaknesses: - The evaluation is quite limited to vision datasets, which makes me unsure if it is going to generalize across modalities. - The number of clients (K) is also quite limited. In real-world settings, 100-1000s of clients are involved in the FL process. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The feature distillation idea is proposed in [1]. Isn't the idea the same? Please cite [1]. [1] Romero, A., Ballas, N., Kahou, S. E., Chassang, A., Gatta, C., & Bengio, Y. (2014). Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: - Do not see discussion on whether performance-sensitive features indeed leak information. - Formal definitions of performance-robust and performance-sensitive features are missing. Please also make a subsection to provide detailed descriptions of how these two types of features are identified. - The experiments are only performed using ResNet-18.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer wKGY: > Q1. The evaluation is quite limited to vision datasets, which makes me unsure if it is going to generalize across modalities. **A1:** Thanks for your insightful comments. In this work, we focus on the image modality and omit results on other modalities, e.g., text data, because applying the information bottleneck (the core of FedFed) to text data requires a special design. In response to your valuable comments, we will explore approaches to applying FedFed to text data. > Q2: The number of clients (K) is also quite limited. In real-world settings, 100-1000s of clients are involved in the FL process. **A2:** Thanks for your valuable suggestion. - We followed the experimental settings commonly used in prior work, and thus omitted results evaluated with more clients. - In response to your valuable comments, we conduct experiments using more clients, i.e., a more realistic setting. We test $K=500$ and $K=1000$ under $\alpha=0.1$ with a sampling rate of 5%; the results are listed in Tables 1 and 2. The results show that FedFed can improve the performance of the baseline method under a more realistic setting. We will add the results to our revision. Table 1. The accuracy of $\alpha=0.1, E = 1, K= 500$ over various datasets. |Method|CIFAR-10 |FMNIST|SVHN|CIFAR-100| |:-----:|:-----:|:-----:|:-----:|:-----:| |FedAvg|47.15%|85.34%|86.31%|45.31%| |FedFed(Ours)|**81.67%**|**87.63%**|**87.56%**|**57.78%**| Table 2. The accuracy of $\alpha=0.1, E = 1, K= 1000$ over various datasets. |Method|CIFAR-10 |FMNIST|SVHN|CIFAR-100| |:-----:|:-----:|:-----:|:-----:|:-----:| |FedAvg|43.15%|81.34%|82.31%|41.31%| |FedFed(Ours)|**78.36%**|**83.63%**|**84.56%**|**52.78%**| > Q3: Do not see a discussion on whether performance-sensitive features indeed leak information. **A3:** We will add the discussion in our revision. - Theoretically.
The sharing strategy will leak information, but FedFed employs differential privacy to protect the shared features. Thus, FedFed can share information within a given privacy budget, theoretically guaranteed by DP theory. - Empirically. We conduct privacy verification experiments including a model inversion attack and a membership inference attack in Sec 4.4. Besides, we perform an additional model inversion attack; the results are reported in the response to Reviewer #ezSw. > Q4: Formal definitions of performance-robust and performance-sensitive features are missing. Please also make a subsection to provide detailed descriptions of how these two types of features are identified. **A4:** Thanks for your kind suggestions. We will add the following descriptions to our revision. FedFed is built upon information bottleneck theory, leading to the definition of performance-sensitive and performance-robust features. Specifically, performance-sensitive features $X_s$ satisfy $I(Y;X|X_s)=0$, namely, $X_s$ contains all information about the label $Y$. In contrast, the performance-robust features $X_r$ contain all information about $X$ except for information about the label $Y$, i.e., $I(Y;X_r)=0, I(X_r;X)=I(X_r;X|X_s)$. That is, $X_r$ contains all the remaining information about $X$ when removing $X_s$ from $X$. Thus, we define the performance-robust features $X_r$ as the difference, i.e., $X_r = X - X_s$, and propose Eq.(5) to realize this intuition. > Q5: The experiments are only performed using ResNet-18. **A5:** Thanks for the valuable suggestions, motivating us to conduct additional experiments on two different model architectures. As reported in Tables 3-4, FedFed can improve model performance under various model architectures. Table 3. The comparison of our methods on various datasets with the VGG classifier.
|Dataset|CIFAR-10|CIFAR-100|SVHN|FMNIST|CINIC-10|FEMNIST| |:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:---:| |FedAvg|76.32% |63.14% | 85.29%| 84.87% |74.62%|76.73%| |FedFed(Ours)|**82.64%** |**67.32%** | **89.42%**| **88.34%** |**86.39%**|**82.98%**| Table 4. The comparison of our methods on various datasets with the ResNet-18 classifier. |Dataset|CIFAR-10|CIFAR-100|SVHN|FMNIST|CINIC-10|FEMNIST| |:-----:|:-----:|:-----:|:-----:|:-----:|:-----:|:---:| |FedAvg|79.35% |67.84% | 88.34%| 86.37% | 78.19%|79.28%| |FedFed(Ours)|**92.34%**|**69.64%** | **93.21%**| **92.34%** | **90.72%**|**85.74%**| > Q6: The feature distillation idea is proposed in [R1]. **A6:** Thanks for the kind suggestion. In our revision, we will add the following discussion to highlight the difference between our work and previous work [R1] introducing feature distillation. - Both [R1] and FedFed aim to distil features in the data for better generalization performance. However, the following differences make these two works distinct. - 1) [R1] distils features in the representation space, i.e., features extracted by a hidden layer of deep neural networks, while FedFed distils features in the data space. - 2) [R1] distils features to guide student models to generate outputs similar to teacher models, while FedFed distils features to share across clients to tackle data heterogeneity. **Reference**\ [R1] Romero A, Ballas N, Kahou S E, et al. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550. --- Rebuttal Comment 1.1: Comment: Thanks for the response and for running additional experiments. Another suggestion is to provide concrete yet concise examples when providing definitions for performance-robust and performance-sensitive features. Could you please expand on the following: "while FedFed distills features in the data space"? What are features in the data space, and how are they different from the hidden layer features employed in [R1]?
--- Reply to Comment 1.1.1: Title: Response to New Questions from Reviewer wKGY Comment: Thanks for your further professional suggestions! > Q1: provide concrete yet concise examples when providing definitions for performance-robust and performance-sensitive features. **A1:** Following your suggestion, we explicate the definitions of performance-sensitive features and performance-robust features from the Information Bottleneck perspective. In brief, - At first, we define what a valid partition is (Definition 1); - Then, following the rules of a valid partition, we formalize the two types of features (Definition 2). We will elaborate on the definitions and explanations in our revision in terms of the following contents. *Definition 1. [valid partition] A partition strategy partitions a variable $X$ into two parts in the same measure space such that $X=X_1 + X_2$. We say a partition strategy is valid if it holds that: i. $H(X, X_1, X_2)=H(X)$; ii. $H(X|X_1, X_2)=0$; iii. $I(X_1;X_2)=0$, where $H(\cdot)$ denotes the information entropy and $I(\cdot)$ is the mutual information.* Definition 1 captures the desirable attributes when we partition the features, which are abstracted to be a variable in general. Intuitively, - For i and ii: A valid partition should maintain all information of the original variable $X$ losslessly. That is, neither is extra information introduced nor is key information lost. - For iii: After determining the measure space, we partition $X$ as $X=X_1+X_2$. The mutual information between $X_1$ and $X_2$ is zero; in other words, there is no overlap between $X_1$ and $X_2$ (here, $X_1$ and $X_2$ are symmetric). Built upon the valid partition, we present the definitions of performance-sensitive features and performance-robust features in Definition 2. *Definition 2. Let $X=X_s + X_r$ be a valid partition strategy. We say $X_s$ is the performance-sensitive feature if $I(X; Y|X_s)=0$, where $Y$ is the label of $X$.
Accordingly, $X_r$ is the performance-robust feature.* Intuitively, performance-sensitive features contain all label information, while performance-robust features contain all information about the data except for the label information. That is, the features to be partitioned are either performance-sensitive features or performance-robust features. We appreciate again your kind and constructive suggestions; we believe our paper will be clearer and more readable thanks to your shepherding. > Q2: Could you please expand on the following: "while FedFed distils features in the data space"? What are features in the data space, and how are they different from the hidden layer features employed in [R1]? **A2:** Thanks for your helpful comments. We hope the following explanation makes the difference between FedFed and [R1] clear. - In FedFed, we distil features in the data space, where we partition raw data $\mathbf{x} \in \mathbb{R}^{d}$ into two parts, i.e., $\mathbf{x}=\mathbf{x}_s+\mathbf{x}_r, \mathbf{x}_s \in \mathbb{R}^{d}, \mathbf{x}_r \in \mathbb{R}^{d}$. Thus, the aim of FedFed is to distil $\mathbf{x}_s$ from $\mathbf{x}$. - To perform knowledge distillation (i.e., the teacher-student paradigm), [R1] guides student models to generate outputs similar to teacher models using the learned features $\mathbf{f}$. Here, the learned feature is extracted from the neural network $\phi$, i.e., $\mathbf{f}=\phi(\mathbf{x})$. Thanks again. We will elaborate on the corresponding explanation in the paper. **Reference** [R1] Romero A, Ballas N, Kahou S E, et al. Fitnets: Hints for thin deep nets. arXiv preprint arXiv:1412.6550.
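The conditions of Definition 1 above can be sanity-checked numerically on toy discrete variables; a minimal sketch using plug-in entropy estimates (illustrative only — the paper works with continuous image features, and the toy variables here are my own assumption):

```python
import math
from collections import Counter

def entropy(samples):
    """Shannon entropy in bits of an empirical distribution over hashable outcomes."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    """I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from paired samples."""
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

# Toy valid partition: X = (X1, X2) with X1, X2 independent fair bits.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
x = list(zip(x1, x2))
```

Here condition iii holds since $I(X_1;X_2)=0$, and conditions i-ii hold since $H(X, X_1, X_2) = H(X)$.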
Summary: The paper proposes Federated Feature distillation, a method that addresses the tradeoff between privacy and model performance. It involves extracting performance-robust and performance-sensitive features from local data. The latter are shared among clients after applying differential privacy for privacy preservation. The paper also includes an empirical evaluation demonstrating the effectiveness of the proposed approach. Strengths: The paper is well-written and easily comprehensible, with clear motivation and contribution. This paper theoretically shows that the proposed method achieves the same level of privacy with relatively smaller noise compared to sharing the raw data. The proposed method can be seamlessly combined with existing universal methods. Weaknesses: - Additional communication costs: As globally shared data is sent to all clients, the method requires additional communication costs. In F.7, the authors analyze that the additional communication cost is the same as sending a classifier for approximately 14 communication rounds. But this analysis doesn't consider the partial participation of federated learning. If we assume a 10% participation rate, sharing the global dataset equals the communication cost of 140 communication rounds, which is not small. - More local iterations in training: The presence of a globally shared dataset in FedFed results in each client having a significantly larger amount of local training data, K+1 times more than before. It also results in K+1 times the local iterations, which can be computationally expensive for edge devices in federated learning, especially when dealing with massively distributed data in realistic settings. Furthermore, the considerably higher number of local updates in FedFed compared to the approach without FedFed makes it hard to attribute the observed gain in empirical results to a specific factor. It is well-known that increasing local updates can expedite the convergence speed in federated learning.
Therefore, the observed gain in empirical results cannot be solely attributed to the FedFed approach, as the increased local iterations inherently provide an advantage. This factor should be taken into consideration when evaluating and comparing the performance of FedFed against other approaches. - There are lines of work that try to share client data while preserving privacy, inspired by mixup, such as [Shin et al., 2020] and [Yoon et al., 2021]. These methods are not compared as baselines in the evaluation. [Shin et al., 2020] MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, and Seong-Lyun Kim. Xor mixup: Privacy-preserving data augmentation for one-shot federated learning. In ICML, 2020. [Yoon et al., 2021] Tehrim Yoon, Sumin Shin, Sung Ju Hwang, Eunho Yang. Fedmix: Approximation of mixup under mean augmented federated learning. In ICLR, 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is the model's performance when trained solely on the constructed global dataset ($D^s$)? - Are the models ($w_k^t$) trained in the Feature Distillation phase identical to the global model ($\phi_k^t$) in the local training phase? If they are the same, do the empirical results, in both cases with and without FedFed, initialize the global model in the local training phase with identical parameters? - In section F.3, it is observed that sharing the partial feature with DP achieves higher accuracy compared to sharing the raw data with DP, which seems counterintuitive. What is the underlying insight or explanation for this phenomenon? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitation regarding storage overhead is stated, while the potential societal impact has not been addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer bAmN: > Q1. Additional communication costs: As globally shared data is sent to all clients, it requires additional communication costs. F.7 doesn't consider the partial participation of federated learning. **A1:** Thank you for bringing this potentially confusing problem to our attention. We will address it in the upcoming revision by incorporating a detailed explanation and analysis. In FedFed, the constructed dataset is shared only once, leading to a one-off extra communication cost. A detailed analysis of communication costs can be found in the **Joint Response**. > Q2: More local iterations on training: The presence of a globally shared dataset in FedFed results in each client having a significantly larger amount of local training data, K+1 times more than before. Increasing local updates can expedite the convergence speed in federated learning. Therefore, the observed gain in empirical results cannot be solely attributed to the FedFed approach, as the increased local iterations inherently provide an advantage. This factor should be taken into consideration when evaluating and comparing the performance of FedFed against other approaches. **A2:** Thanks for your constructive suggestions. We will highlight the local iterations issue in our revision. - The local iterations are not doubled in our experiments, since we merely sample the same number of samples in each round to perform local training. In our implementation, we expand the batch size (shared and local samples) rather than increase the local iterations, leading to the same number of local iterations as FedAvg without FedFed. - The perspective about more local iterations is insightful and consistent with our empirical observations. Accordingly, we will follow this direction to explore more possibilities of FedFed. Note that we also find (in only one case) that increasing local iterations does not always work (c.f. Table 4 on the main page).
We believe this observation is informative, and we are willing to share it with you. > Q3: More relevant baselines. **A3:** Following your constructive suggestion, we add more experiments to compare FedFed with other information-sharing strategies [R2][R3]. The results are listed in Table 1. The findings clearly demonstrate that FedFed surpasses advanced information-sharing methods in terms of performance. We will include these results and provide a comprehensive discussion in our revision. Table 1. Comparison with information-sharing methods under $\alpha=0.1, E=1, K=100$ over CIFAR-10. |FedAvg [R1]|Xor mixup [R2]|FedMix [R3]|FedAvg with FedFed (Ours)| |:------:|:--------:|:-------:|:-----------:| |49.72%|76.18%|78.35%|**84.06%**| > Q4. What is the model's performance when trained solely on the constructed global dataset $D^s$? **A4:** Thanks for the inspiring question! Following this interesting direction, we train models merely on the constructed global dataset and report the results in Table 2. We can see that the model's performance is poor. We conjecture that the distribution shift between the constructed data and the natural data causes the poor performance. However, we believe that training models merely on noisy data is a promising direction, and we will explore it in our future work. Table 2. The difference between training in the FedFed manner and solely on shared data. | | CIFAR-10 | |:---:|:---:| | FedFed(Ours) | 92.34% | | Solely on constructed global dataset | 27.68% | > Q5. Are the models ($\omega^t_k$) trained in the Feature Distillation phase identical to the global model ($\phi_k^t$) in the local training phase? If they are the same, do the empirical results, in both cases with and without FedFed, initialize the global model in the local training phase with identical parameters? **A5:** Thanks for your inspiring questions. - In our experiments, the model parameters $\omega^t_k$ and $\phi_k^t$ are not the same.
The $\omega^t_k$ is used to extract performance-sensitive features, while $\phi_k^t$ is the local classifier, as in other FL methods. - We agree with your point that it is promising to initialize the global model with the models trained for feature generation. Accordingly, we use the same model for the two mentioned roles and initialize the global model with local models trained for feature generation. The results are listed in Table 3. The results show that employing the same model further improves the performance and speeds up the convergence of FedFed. Thanks again for the inspiring and helpful questions! Table 3. Comparison between the initialization of $\phi_k^t$ with/without $\omega^t_k$. | | Accuracy | Round | |---|---|---| | FedFed (the initialization of $\phi_k^t$ without $\omega^t_k$) | 92.34% | 39 | | Inspired experiment (the initialization of $\phi_k^t$ with $\omega^t_k$) | 92.94% | 31 | > Q6. In section F.3, it is observed that sharing the partial feature with DP achieves higher accuracy compared to sharing the raw data with DP, which seems counterintuitive. What is the underlying insight or explanation for this phenomenon? **A6:** Thanks for pointing out the potentially confusing problem. Models trained over raw data with noise perform worse than those trained over partial data because the privacy budget for the two cases is the same: to achieve the same privacy budget, raw data must be injected with noise on a larger scale than partial data. We will add this explanation to our revision. **Reference**\ [R1] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data. In AISTATS, 2017.\ [R2] MyungJae Shin, Chihoon Hwang, Joongheon Kim, Jihong Park, Mehdi Bennis, and Seong-Lyun Kim. Xor mixup: Privacy-preserving data augmentation for one-shot federated learning. In ICML, 2020.\ [R3] Tehrim Yoon, Sumin Shin, Sung Ju Hwang, Eunho Yang.
Fedmix: Approximation of mixup under mean augmented federated learning. In ICLR, 2021. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response. Although the proposed method requires considerable extra communication cost in some practical settings, my other concerns have been addressed in the rebuttal. I will keep my original score as is. --- Reply to Comment 1.1.1: Title: Response to Reviewer bAmN Comment: We are glad that we have addressed your concerns. Thanks again for your valuable comments.
Summary: The paper introduces a federated learning framework (FedFed) to tackle data heterogeneity by utilizing an information-sharing approach. The method partitions data into performance-sensitive features and performance-robust features, based on their contribution to model performance. The performance-sensitive features are shared globally to mitigate data heterogeneity, while the performance-robust features are kept locally. The method employs DP to protect performance-sensitive features before sending them to the server. Strengths: + Improving how to handle data heterogeneity in federated learning is a timely and important problem. + The idea of tackling data heterogeneity from the Information Bottleneck perspective is interesting. + The solution appears to significantly boost the training performance of various FL algorithms and (in most cases) reduces the required number of communication rounds, while maintaining privacy. Weaknesses: - Overall the practicality of the method is unclear. - The evaluation only considers a single task and uses synthetic non-IID data. - The evaluation should include additional baselines. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: This is an interesting approach but it's unclear how practical it is. The method appears to have several overheads that are not quantified: the computation overhead to generate the performance-robust/sensitive features and to train the local classifier, the communication overhead to collect protected features from clients to construct a global dataset and send the dataset back to the clients, and the storage overhead to save the models and the private and public datasets on each client. This might make the method infeasible for edge devices with limited capabilities. The experimental results focus on the image classification task, with no experiments with other tasks. Also, the experiments rely on extreme non-IID partitions (α=0.1 and α=0.05).
It would be useful to test the method with different levels of heterogeneity (e.g., α=0.5 and α=1) and with datasets with natural non-IID partitions like Stack Overflow or Reddit. The evaluation should consider other baselines. It is good that FedFed can be applied on top of FL algorithms. But other extensions to those algorithms should be considered as baselines too: an obvious one was mentioned by the authors, where the full data + random noise is shared. This baseline will tell us how much better FedFed is than the naive solution. The authors mentioned that this will result in an accuracy loss, but without evidence to support this. The hyperparameters used in the experiments are not clear, and we don't know if they were tuned to get the best performance for each algorithm or not. The client selection hyperparameter is not mentioned. It is not clear how many clients will be sampled from the $K$ clients and what the selection method is. The method is described as an extension on top of FL algorithms, but the paper does not explain how the extension can be used with other algorithms. In fact, Algorithm 1 depicts FedFed as a new FL algorithm and not an extension. It is not clear where $\mathcal{D}^s$ is used, which is why the entire algorithm is proposed! Note that there is a typo in eq. 8. The preliminaries section should introduce the information bottleneck. Other questions that come up: 1. FedFed uses eq. 8 in the local update; but how does it extend SCAFFOLD, FedProx, and FedNova, since they modify the local update as well? 2. The reported FedNova performance in Table 1 does not seem to agree with the performance reported in their paper, where it always outperformed FedAvg; why is that? 3. Why were the local epochs E set to 1 and 5? Why are 5 local epochs not used in the experiment with 100 clients? 4. How many clients were sampled at every round? What is the value of the client selection hyperparameter?
*I acknowledge I have read the authors' rebuttal and answers to my questions.* Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: * The paper has discussed the storage limitation but didn't clarify how much extra storage overhead was required and the size of the shared global datasets. * The paper didn't address how much per-round extra communication overhead the method required; although the approach seems to reduce the number of communication rounds significantly in most cases, there were some cases (table 4) that required more communication rounds compared to the other baselines. Moreover, communicating $x_p$ and $\theta$ adds extra communication overhead. Clearly, FedFed gains are not a free lunch, and these limitations must be considered, discussed, and be part of the evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
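The review above discusses FedFed's DP protection of the shared performance-sensitive features. As a hedged illustration of what such protection could look like (a minimal Gaussian-mechanism sketch with illustrative parameters, not the paper's actual mechanism or settings):

```python
import numpy as np

def gaussian_mechanism(features, l2_sensitivity, epsilon, delta):
    """Add Gaussian noise calibrated for (epsilon, delta)-DP.

    Standard calibration (valid for epsilon < 1):
        sigma >= sqrt(2 * ln(1.25 / delta)) * l2_sensitivity / epsilon
    All parameter values below are illustrative, not taken from the paper.
    """
    sigma = np.sqrt(2.0 * np.log(1.25 / delta)) * l2_sensitivity / epsilon
    return features + np.random.normal(0.0, sigma, size=features.shape)

# A hypothetical batch of performance-sensitive features (32 images, 3x32x32),
# clipped so each example's L2 norm is at most 1 before noising.
x = np.random.randn(32, 3, 32, 32)
norms = np.linalg.norm(x.reshape(32, -1), axis=1)
x = x / np.maximum(norms, 1.0).reshape(32, 1, 1, 1)
protected = gaussian_mechanism(x, l2_sensitivity=1.0, epsilon=0.9, delta=1e-5)
```

The clipping step bounds each example's contribution so the stated sensitivity actually holds; without it, the noise scale would not give the claimed privacy guarantee.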
Rebuttal 1: Rebuttal: ## Response to Reviewer LGae: > Q1: Some overheads are not quantified **A1:** Thank you for your insightful comments. Accordingly, we add an analysis of the overheads in **Joint Response**. > Q2: Test with different levels of heterogeneity and datasets with natural non-IID partitions. **A2:** Thanks for your detailed suggestions. - Heterogeneity levels. We conduct additional experiments with $\alpha=0.5$ and $\alpha=1.0$, where $\beta$ denotes the sampling rate. The results are reported in Tables 1-4.

Table 1. Accuracy of $\alpha=0.5$ with 10 clients, $\beta=$ 50%.

|| CIFAR-10|CIFAR-100|SVHN|FMNIST|
|--|--|--|--|--|
| FedAvg | 84.56% | 68.78%| 90.21%| 88.78%|
| FedFed(Ours)| **93.37%** | **70.91%** | **93.71%** | **94.01%**|

Table 2. Accuracy of $\alpha=1.0$ with 10 clients, $\beta=$ 50%.

| | CIFAR-10| CIFAR-100| SVHN| FMNIST|
|--|--|--|--|--|
|FedAvg| 87.18%| 69.12%| 92.37%|90.98%|
|FedFed(Ours)|**94.07%**|**71.01%**|**93.98%**|**95.12%**|

Table 3. Accuracy of $\alpha=0.5$ with 100 clients, $\beta=$ 10%.

||CIFAR-10|CIFAR-100|SVHN|FMNIST|
|--|--|--|--|--|
| FedAvg | 59.78% | 54.56% | 90.31% | 91.78% |
| FedFed(Ours) | **89.19%** | **63.78%** | **92.09%** | **93.13%** |

Table 4. Accuracy of $\alpha=1.0$ with 100 clients, $\beta=$ 10%.

|| CIFAR-10|CIFAR-100|SVHN|FMNIST|
|--|--|--|--|--|
|FedAvg|63.19%|57.19%|91.01%|92.98%|
|FedFed(Ours)|**90.31%**|**66.89%**|**93.01%**|**93.81%**|

We can see that FedFed significantly improves performance. - Realistic Non-IID Partitions. We have verified the effectiveness of FedFed on the FEMNIST dataset [R4], which has more realistic non-IID partitions. The results are reported in Table 5. We can see that FedFed performs well on FEMNIST. We did not conduct experiments on the Stack Overflow and Reddit datasets as mentioned, as these datasets are typically used for next-character prediction, which differs from the classification task studied in our work.
Inspired by your valuable comments, we will explore FedFed under more complex scenarios.

Table 5. The results on FEMNIST

||FEMNIST|
|--|--|
|FedAvg|79.28%|
|FedFed(Ours)|85.74%|

We will add these results to our updated revision. > Q3: Add baseline: the full data + random noise is shared **A3:** We apologize for omitting important information. We will highlight the results (reported in Appendix F.3) on the main page. These results show that simply applying noise to the data causes performance degradation. Other comparisons with existing sharing strategies can be found in the table provided in **A1 of response to Reviewer xchj**. > Q4. The client selection hyperparameter is not mentioned and what the selection method is. **A4:** Thank you for pointing out the confusing parts. We list the hyperparameters in Table 7 in Appendix B.1 (i.e., Implementation Details). We apologize for not explaining how the hyperparameter $\rho$ is determined. In FedFed, each client retains a small portion of data that is not used for training. After several rounds of training, these reserved data are used to select the hyperparameter $\rho$. We set the sampling rate to 10% for 100 clients and 50% for 10 clients. We randomly select clients from the client list. Motivated by your comments, we conduct experiments with different sampling rates to verify the efficacy of FedFed. Results are given in Tables 1-11 in **Joint Response**. > Q5. The paper does not explain how the extension can be used with other algorithms. **A5:** Thank you for your detailed comments. Algorithm 1 presents how we obtain the globally shared dataset $D^s$. In Appendix B.2 (Pseudo-code and Explanation), we demonstrate how to implement FedFed with existing methods as a plug-in module. We highlight the modification of the original algorithms with only two lines. We will elaborate on the contents in our updated revision. > Q6.
The reported FedNova performance in Table 1 does not seem to agree with the performance reported in their paper; why is that? **A6:** In FedNova, the main results were reported using $K=16$ and $\alpha=0.1$, while the results using $K=100$ clients and $\alpha=0.05$ were omitted. Our experiments are consistent with those reported in [R1]. > Q7. Why are 5 local epochs not used in the experiment with 100 clients? **A7:** We followed the settings used in previous works [R2][R3]. In response to your valuable comments, we conducted additional experiments using 100 clients with 5 local epochs. The results reported in Table 6 demonstrate that FedFed can provide a significant improvement. We will include these results in our updated revision.

Table 6. The accuracy of $\alpha=0.1, E = 5, K= 100$ over various datasets.

|Method|CIFAR-10 |FMNIST|SVHN|CIFAR-100|
|--|--|--|--|--|
|FedAvg|60.78%|91.21%|90.31%|51.69%|
|FedFed(Ours)|**88.21%**|**92.19%**|**92.78%**|**62.77%**|

> Q8. There is a typo in Eq. 8. **A8:** We have fixed the typo in the revision. Thanks for your attentive review! > Q9. The preliminaries section should introduce the information bottleneck. **A9:** Thanks for your valuable suggestions. We will add the necessary introduction in the revised version. > Q10. How many clients were sampled at every round? **A10:** We set the sample rate to be $10\%$ for 100 clients and $50\%$ for 10 clients. More details of the experiments can be found in **Joint Response**. > Q11: Limitations must be considered, discussed, and be part of the evaluation. **A11:** Please see **Joint Response**. Following your constructive suggestion, we will highlight the overhead introduced by FedFed on the main page. **Reference**\
[R1] Li et al. Federated learning on non-iid data silos: An experimental study. In ICDE, 2022.\
[R2] Tang et al. Virtual homogeneity learning: Defending against data heterogeneity in federated learning. In ICML, 2022.\
[R3] Hsu et al.
Measuring the effects of non-identical data distribution for federated visual classification. arXiv:1909.06335.\
[R4] Caldas et al. Leaf: A benchmark for federated settings. arXiv:1812.01097. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their comments. As several concerns have been addressed (computation and communication overheads) and the results have been further corroborated, I will improve my score. --- Reply to Comment 1.1.1: Title: Response to LGae Comment: We are happy that we have addressed your concerns. Thank you for the effort you put into improving the quality of our paper. Thanks again for raising the score.
Rebuttal 1: Rebuttal: ## Response to All Reviewers: We sincerely appreciate all reviewers' great efforts in reviewing and commenting on our work. We especially appreciate the kind words: - timely and important problem (Reviewer #LGae) - new and interesting high-level idea (Reviewer #xchj & #LGae) and reasonable and simple method (Reviewer #xchj & #wKGY) - significantly boosts training performance, reduces communication rounds, and maintains privacy (Reviewer #LGae) - empirical evaluation (Reviewer #ezSw & #xchj & #LGae) and theoretical justification (Reviewer #bAmN & #wKGY) - seamless combination or easy adoption in further research (Reviewer #xchj & #LGae & #bAmN & #wKGY & #ezSw) Due to limited space, we extract similar questions and answer them jointly here. ### Joint Response > Various overheads (Reviewer #LGae & #bAmN) introduced by FedFed: Thanks for the comments. According to the empirical analyses given in Appendix F.7, we can see that the communication and computation overheads are relatively mild: #### 1. Computation Overhead - The computation overhead introduced by FedFed to generate performance-robust/sensitive features is less than 10% of the overall training computation for the local classifier. #### 2. Communication Overhead - Consider $K$ clients in the FL system. Let $m$ be the size of a single local model, and let $|D_k| = a$ be the size of the local private data. Then, as in Appendix F.7, the ratio of the size of the entire dataset to the model size is $$ \gamma = \frac{aK}{m}\approx 14.$$ - The extra communication cost for a single client is $$(m+m)\cdot T_d+a+aK,$$ where $(m+m)\cdot T_d$ denotes the cost of downloading/uploading models for $T_d$ rounds in Algorithm 1, $a$ denotes the performance-sensitive features sent by each client, and $aK$ is the data received by each client from the globally shared dataset. - In the general process of FL, the overall communication costs are $(m+m)\cdot T_r\cdot\beta$, where $\beta$ is the sampling rate of a client.
Therefore, the ratio of the extra communication overhead to the general FL process is: $$\frac{(m+m)\cdot T_d+a(K+1)}{(m+m)\cdot T_r\cdot \beta} = \frac{T_d}{T_r\cdot\beta}+\frac{a(K+1)}{(m+m)\cdot T_r\cdot\beta} = \frac{T_d}{T_r\cdot\beta}+\frac{\gamma}{2\cdot T_r \cdot\beta}+\frac{\gamma}{2\cdot K\cdot T_r \cdot \beta}$$ Here, we detail two examples in our experiments. - For $K=10, T_d=15$, $T_r=1000$, and $\beta=$50%, the extra communication costs are approximately 4.54%. - When $K=100$, $T_d=15$, $T_r=1000$, and $\beta=$10%, the extra communication costs are approximately 22.07%. #### 3. Storage Overhead FedFed offers three trade-off strategies regarding communication bandwidth and storage. - (i) One-time download: Local clients download the globally shared dataset once. Globally shared dataset costs approximately $14 \times$ storage of classifier model. - (ii) Partial download: A small portion of the globally shared dataset is selected and downloaded. This strategy incurs approximately $1.5\times$ communication cost compared to the previous strategy, while the storage required by the clients is the same as that of local private data. - (iii) Intermittent download: A small set of globally shared dataset is downloaded after every $Z$ rounds. This approach reduces the communication overhead to $\frac{1}{Z}$ of that of strategy (ii) while maintaining the storage overhead at the size of the local data. > Experiments for different sampling rates (Reviewer #LGae): Thanks for the comment. Inspired by Reviewer #LGae, we conduct experiments to investigate the impact of different sampling rates on performance and convergence speed, the results can be found in Tables 1-11. Table 1. The accuracy of $\alpha=0.1, E = 1, K= 10$ over CIFAR-10. |Sampling rate|10% |20%|30%|40%|50%| |--|--|--|--|--|--| |Accuracy|88.58%|89.34%|91.66%|91.94%|92.34% |Round|102|73|52|48|39| Table 2. The accuracy of $\alpha=0.05, E = 1, K= 10$ over CIFAR-10. 
|Sampling rate|10% |20%|30%|40%|50%|
|--|--|--|--|--|--|
|Accuracy|82.77% |84.47% | 86.82%| 88.98% | 90.02%|
|Round| 97| 74| 62| 57| 50|

Table 3. The accuracy of $\alpha=0.1, E = 1, K= 100$ over CIFAR-10.

|Sampling rate|5% |10%|20%|40%|60%|
|--|--|--|--|--|--|
|Accuracy| 83.67%|84.06%|87.98%|89.17%|89.62%|
|Round|182|163|90|70|61|

Table 4. The accuracy of $\alpha=0.1, E = 1, K= 10$ over CIFAR-100.

|Sampling rate|10% |20%|30%|40%|50%|
|--|--|--|--|--|--|
|Accuracy|63.34%|64.29% | 66.47%|67.32% |69.64%|
|Round| 405| 389 | 331| 291 | 283|

Table 5. The accuracy under $\alpha=0.05, E = 1, K= 10$ over CIFAR-100.

|Sampling rate|10% |20%|30%|40%|50%|
|--|--|--|--|--|--|
|Accuracy|63.77%|65.12% | 66.02%|66.82% |68.49%|
|Round| 302| 234 | 198| 156 | 137|

Table 6. The accuracy of $\alpha=0.1, E = 1, K= 100$ over CIFAR-100.

|Sampling rate|5% |10%|20%|40%|60%|
|--|--|--|--|--|--|
|Accuracy| 57.32%|60.58%|64.34%|66.53%|68.71%|
|Round|534|448|401|382|298|

Table 7. The accuracy of $\alpha=0.1, E = 1, K= 10$ over FMNIST.

|Sampling rate|10% |20%|30%|40%|50%|
|--|--|--|--|--|--|
|Accuracy|88.36%|90.34% | 91.78%|92.01% |92.34%|
|Round| 60| 35 | 28| 21 | 14|

Table 8. The accuracy of $\alpha=0.05, E = 1, K= 10$ over FMNIST.

|Sampling rate|10% |20%|30%|40%|50%|
|--|--|--|--|--|--|
|Accuracy|87.34%|88.78% | 90.11%|90.34% |90.69%|
|Round| 43| 32 | 30| 22 | 16|

Table 9. The accuracy of $\alpha=0.1, E = 1, K= 100$ over FMNIST.

|Sampling rate|5% |10%|20%|40%|60%|
|--|--|--|--|--|--|
|Accuracy| 90.87%|92.71%|92.88%|93.51%|93.69%|
|Round|287|243|213|157|104|

Table 10. The accuracy of $\alpha=0.1, E = 1, K= 10$ over SVHN.

|Sampling rate|10% |20%|30%|40%|50%|
|--|--|--|--|--|--|
|Accuracy|89.74%|91.23% | 92.08%|92.82% |93.21%|
|Round| 273| 200| 188| 143 | 105|

Table 11. The accuracy of $\alpha=0.1, E = 1, K= 100$ over SVHN.

|Sampling rate|5% |10%|20%|40%|60%|
|--|--|--|--|--|--|
|Accuracy| 89.37%|91.04%|92.88%|93.51%|93.69%|
|Round|803|763|701|637|541|
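The communication-overhead ratio derived in the joint response above can be checked numerically. A minimal sketch, using the reported $\gamma \approx 14$ and the two example configurations from the rebuttal:

```python
def extra_comm_ratio(T_d, T_r, beta, K, gamma=14.0):
    """Ratio of FedFed's extra communication cost to the baseline FL cost:
    T_d/(T_r*beta) + gamma/(2*T_r*beta) + gamma/(2*K*T_r*beta),
    where T_d is the number of feature-distillation rounds, T_r the number of
    FL rounds, beta the client sampling rate, K the number of clients, and
    gamma the dataset-size-to-model-size ratio."""
    return (T_d / (T_r * beta)
            + gamma / (2 * T_r * beta)
            + gamma / (2 * K * T_r * beta))

# The two configurations reported in the rebuttal:
print(round(100 * extra_comm_ratio(T_d=15, T_r=1000, beta=0.5, K=10), 2))   # -> 4.54
print(round(100 * extra_comm_ratio(T_d=15, T_r=1000, beta=0.1, K=100), 2))  # -> 22.07
```

Both values match the 4.54% and 22.07% figures stated in the joint response.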
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors propose a method (Federated Feature distillation, FedFed) to share partial features of the data to tackle data heterogeneity, while the privacy issue is not compromised too much. FedFed partitions data into performance-sensitive features and performance-robust features. The performance-sensitive features are globally shared to mitigate data heterogeneity, while the performance-robust features are kept for local training. Strengths: 1. The authors propose a new perspective on alleviating data heterogeneity in Federated Learning: sharing partial features. It is a new high-level idea for solving heterogeneity in FL. 2. To realize the idea, the authors further propose a method (FedFed). More specifically, it partitions data into performance-sensitive features and performance-robust features, which is reasonable. The performance-sensitive features are globally shared to mitigate data heterogeneity, while the performance-robust features are kept locally. 3. It outperforms the baselines. Weaknesses: 1. Insufficient baselines. I wonder how the choices of baselines were made? FedAvg, FedProx, FedNova and SCAFFOLD are early FL models. The latest among them was proposed in 2020. Why not adopt more recent FL models? For instance, FedGen (ICML 2021) [1]. Besides, I believe there are more FL models from recent years. [1] Zhu et al., Data-Free Knowledge Distillation for Heterogeneous Federated Learning. ICML 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer xchj: > Q1: Adopt recent baselines, e.g., FedGen (ICML 2021) [R6] **A1:** Following your constructive suggestion, we have compared FedFed with open-source mainstream information-sharing-based works that aim to mitigate data heterogeneity. The results are reported in Table 1. We can see that FedFed outperforms existing methods. We will add the results and discussion to our updated revision.

Table 1. Comparison with baselines under $\alpha=0.1, E=1, K=100$ over CIFAR-10.

|FedAvg [R1]|FedAvg with CCVR [R2]|FD+FAug [R3]|FedMD [R4] |FedDF [R5]|FedGen [R6]|FedProto [R7]|FedFTG [R8]|FedAvg with FedFed (Ours)|
|:------:|:--------:|:-------:|:-----------:|:--:|:--:|:--:|:-:|:-:|
|49.72%| 59.19%|63.54%|67.32%|73.41%|76.45%|77.08%|80.73%|**84.06%**|

**Reference**\
[R1] McMahan B, Moore E, Ramage D, et al. Communication-efficient learning of deep networks from decentralized data. In AISTATS, 2017.\
[R2] Luo M, Chen F, Hu D, et al. No fear of heterogeneity: Classifier calibration for federated learning with non-iid data. In NeurIPS, 2021.\
[R3] Jeong E, Oh S, Kim H, et al. Communication-efficient on-device machine learning: Federated distillation and augmentation under non-iid private data. arXiv preprint arXiv:1811.11479.\
[R4] Li D, Wang J. FedMD: Heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581.\
[R5] Lin T, Kong L, Stich S U, et al. Ensemble distillation for robust model fusion in federated learning. In NeurIPS, 2020.\
[R6] Zhu Z, Hong J, Zhou J. Data-free knowledge distillation for heterogeneous federated learning. In ICML, 2021.\
[R7] Tan Y, Long G, Liu L, et al. FedProto: Federated prototype learning across heterogeneous clients. In AAAI, 2023.\
[R8] Zhang L, Shen L, Ding L, et al. Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. In CVPR, 2022.
Modulate Your Spectrum in Self-Supervised Learning
Reject
Summary: Feature collapse is a major problem in contrastive-based self-supervised learning. To address this issue, the paper introduces the spectral transformation framework, which aims to mitigate the aforementioned problem. Generally speaking, the spectral transformation expands upon the extensively employed whitening transformation discussed in the literature and treats it as a special case. Furthermore, the paper proposes the IterNorm approach with trace loss to enhance the performance of self-supervised models. The experimental results validate the effectiveness of IterNorm with trace loss (INTL) on various widely-used SSL-evaluation benchmarks, including but not limited to linear evaluation, classification, and object detection. Strengths: 1) The paper addresses a fundamental challenge in the context of contrastive-based self-supervised learning, and proposes an elegant method with theoretical guarantees. Specifically, the proposed framework extends the widely used whitening operation discussed in prior literature, treating it as a special instance, thereby enhancing the theoretical contribution of this work. 2) The experimental results are promising. The proposed method is evaluated on several SSL-evaluation benchmarks, including ImageNet linear classification (achieving 76.6% accuracy on ResNet-50) and COCO object detection (achieving a score of 41.2 on ResNet50-C4), and significantly surpasses the previous state-of-the-art methods. 3) The presentation of the paper is good, with a clear elaboration of the motivation and contribution of the proposed method. Weaknesses: 1) The comparison between the proposed method and the baseline is unfair. Firstly, the proposed framework employs a 3-layer projection head with a substantially larger dimension (8192), which significantly increases both computation and memory costs. This discrepancy makes it hard to conduct a fair comparison between the proposed method and the baseline approach.
2) As mentioned earlier, the proposed method incurs nearly twice the computation time per epoch and peak memory per GPU compared to the naive contrastive baseline method MoCo/MoCo-v2. 3) It is worth noting that the multi-crop strategy employed in the paper is highly similar to that used in [a], including the crop-nums and crop-sizes. It is unclear why this was not cited in the paper. 4) It should be noted that the multi-crop strategy employed in [a] can also enhance the detection and classification results. Therefore, the absence of this information in the comparison makes it difficult to conduct a fair evaluation. [a] Wang, X., & Qi, G. J. (2022). Contrastive learning with stronger augmentations. IEEE transactions on pattern analysis and machine intelligence, 45(5), 5549-5560. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1) The experiments conducted in the paper are based solely on the ResNet-50 network architecture, it would be beneficial if the authors could provide results based on transformer backbones, given their widespread usage in the self-supervised learning domain. 2) As a theoretical-style paper, it is important to have a fair and direct comparison between the proposed method and the baseline approach. However, the utilization of several tricks and unfair experimental settings in the paper makes it challenging to determine the actual degree of improvement gained from the original technical contribution. Furthermore, such experimental settings make it hard to follow the paper's methodology. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
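The review above describes spectral transformation (ST) as generalizing whitening by applying a function $g(\lambda)$ to the eigenvalues of the embedding covariance, with whitening recovered at $g(\lambda)=\lambda^{-1/2}$. A minimal numpy sketch of that reading (an illustration of the framework, not the authors' implementation):

```python
import numpy as np

def spectral_transform(Z, g):
    """Modulate embeddings Z (batch, dim) through the spectrum of their
    covariance: eigendecompose cov, apply g elementwise to the eigenvalues,
    and multiply the centered embeddings by the resulting matrix."""
    Zc = Z - Z.mean(axis=0, keepdims=True)
    cov = Zc.T @ Zc / Zc.shape[0]
    lam, U = np.linalg.eigh(cov)          # eigenvalues/eigenvectors of cov
    M = U @ np.diag(g(lam)) @ U.T         # g applied on the spectrum
    return Zc @ M

rng = np.random.default_rng(0)
Z = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 8))  # correlated embedding
# Whitening is the special case g(lam) = lam^{-1/2}: output covariance = identity.
W = spectral_transform(Z, lambda lam: lam ** -0.5)
cov_w = W.T @ W / W.shape[0]
print(np.allclose(cov_w, np.eye(8), atol=1e-6))  # -> True
```

Other power functions $g(\lambda)=\lambda^{p}$ plug into the same `spectral_transform` call; only the choice of `g` changes.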
Rebuttal 1: Rebuttal: ## Response to Reviewer Fnki We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific concerns and questions below. **Concern 1:** The comparison between the proposed method and baseline is unfair. ... the baseline approach. **Response:** Thanks for your comments. The reason why we set the projection dimension to 8192 is that our initial experiments followed the settings of VICReg and Barlow Twins, both of which use a dimension of 8192 for the projection. Compared to a projection dimension of 2048, using a projection dimension of 8192 brings about a 0.14% improvement in top-1 accuracy for INTL. Therefore, we followed this setting in subsequent experiments on ImageNet. We report that using a projection dimension of 8192 requires approximately 18% additional GPU memory and 2% more time per epoch compared to using one of 2048. We think it is difficult to apply identical hyperparameters to all baselines and our method. Because each self-supervised method has its own characteristics, it may be unfair to use the same hyperparameters for all methods. For example, a small projection dimension of 512 or 1024 is suitable for W-MSE and BYOL but degrades performance for VICReg and Barlow Twins. Meanwhile, a large projection dimension of 8192 fails to train W-MSE, because W-MSE requires the batch size to be larger than the projection dimension. We note that solo-learn [11], which is designed as a framework for integrating and reproducing various self-supervised methods, also adopts different hyperparameters for different methods. **Concern 2:** As mentioned earlier, ... MoCo/MoCo-v2. **Response:** In our experiments, our INTL can also use an Exponential Moving Average (EMA), like MoCo-v2.
Our INTL with EMA requires a total of around 23.6 GB of GPU memory and 24min46s running time per epoch (Table G of the supplementary materials), which is comparable to the 20 GB of memory and 23min11s running time per epoch required by MoCo-v2, which also uses EMA. Although requiring a bit more memory and computation time, our method achieves a 2.1% improvement in top-1 accuracy on ImageNet (74.3% vs. 72.2%) compared to MoCo-v2, as shown in Table D of our supplementary materials. Meanwhile, compared to other methods that use an additional predictor (such as BYOL) or use eigen-decomposition (such as W-MSE), our proposed INTL not only reduces memory and computation time but also achieves a performance improvement. **Concern 3:** It is worth noting that the multi-crop strategy ... why this was not cited in the paper. **Response:** Thanks sincerely for your constructive suggestions. We mainly drew inspiration from SwAV to set our multi-crop strategy, but did not realize that a similar variant had been proposed in paper [a]. We will cite [a] in the revised version. **Concern 4:** It should be noted that ... makes it difficult to conduct a fair evaluation. **Response:** Note that the results of all methods on CIFAR-10/100 and ImageNet-100, shown in Table 2 of our submission, are trained without the multi-crop strategy, following the settings of solo-learn [11]. We also provide the classification results on ImageNet without the multi-crop strategy in Table D of our supplementary materials. Without multi-crop, INTL can achieve a top-1 accuracy of 74.3% on ImageNet. All these results indicate that INTL can also achieve better performance than, or be on par with, other SOTA methods without the multi-crop strategy. Note that multi-crop is a common strategy used by SSL methods (such as SwAV, DINO, W-MSE, and so on) to boost performance, so we also apply it to INTL. Following your insightful comments, we conduct additional experiments to evaluate the detection results of INTL without multi-crop.
We train INTL w/ and w/o multi-crop on ImageNet for 200 epochs and then transfer the pre-trained backbones to other tasks. The results are shown in the table below. It shows that the multi-crop strategy can slightly enhance our detection results, while our method without multi-crop still performs better than SwAV with multi-crop.

| Method | COCO detection $AP_{50}$ | $AP$ | $AP_{75}$ | COCO segmentation $AP_{50}$ | $AP$ | $AP_{75}$ |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| SwAV (w/ multi-crop) | 60.2 | 39.8 | 43.0 | 56.6 | 34.6 | 36.8 |
| INTL (w/o multi-crop) | $60.9_{±0.08}$ | $40.7_{±0.09}$ | $43.7_{±0.17}$ | $57.3_{±0.08}$ | $35.4_{±0.05}$ | $37.6_{±0.14}$ |
| INTL (w/ multi-crop) | $61.2_{±0.08}$ | $41.2_{±0.12}$ | $44.7_{±0.19}$ | $57.8_{±0.04}$ | $35.7_{±0.04}$ | $38.1_{±0.12}$ |

**Question 1:** The experiments ... on transformer backbones, given their widespread usage in the self-supervised learning domain. **Response:** Thanks for your suggestion; we conduct additional experiments using ViT backbones. INTL also performs better than other baselines. Please check Figure 1 of the rebuttal pdf for details. **Question 2:** As a theoretical-style paper, ... Furthermore, such experimental settings make it hard to follow the paper's methodology. **Response:** We sincerely thank the reviewer for the insightful suggestions. Our main theoretical claim is that INTL can avoid dimensional collapse and modulate the spectrum of the embedding towards an equal-eigenvalue distribution. This claim is empirically validated in Section 4.2. In terms of the empirical comparison on the standard SSL benchmark, we believe our experimental setup is fair; please see the responses to Concern 1 and Concern 4.
Summary: For self-supervised learning, this paper proposes a spectral transformation (ST) framework to modulate the embedding, seeking functions other than whitening that can avoid dimensional collapse. The authors propose IterNorm with trace loss and provide extensive theoretical and experimental analysis. The results show its effectiveness and superiority. Strengths: 1. This paper provides intuitive experimental verification, extending the whitening transformation to a more general spectral transformation. 2. For the proposed IterNorm with trace loss, this paper conducts rigorous theoretical analysis and practice, showing that it is effective for avoiding collapse. 3. The proposed method can achieve good results without relying on large batches, and can be comparable to supervised performance under regular settings. Weaknesses: 1. The motivation for the proposed IterNorm with trace loss is unclear. The authors empirically observe that IterNorm suffers from severe dimensional collapse, but do not state the limitations of existing SSL methods in this case. 2. For the ablation study of batch size, Table 4 shows that the performance of the proposed method increases significantly as the batch size increases, indicating that the method is also sensitive to the batch size. In addition, SimCLR, SwAV and other methods require a batch size of 4096. Table 4 should provide the corresponding results with a batch size of 4096 and the sensitivity of other methods to the batch size. 3. Among the existing SSL methods compared in the experimental part, few new methods since 2022 are covered. More state-of-the-art method comparisons need to be added. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please see the above weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have explained some of the limitations in the paper, and others can be seen in weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Response to Reviewer SEtq We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific concerns below. **Concern 1:** The motivation for the proposed IterNorm with trace loss is unclear. The authors empirically observe that IterNorm suffers from severe dimensionality collapse, but do not state the limitations of existing SSL methods in this case. **Response:** The main motivation of this paper is to address the dimensional collapse problem in SSL. We generalize the whitening transformation to spectral transformation (ST), which could be a general framework to design algorithms to avoid dimensional collapse in SSL, and we indeed design INTL (IterNorm + trace loss) as a new instance of ST that avoid dimensional collapse theoretically. Note that the motivation to use IterNorm is to avoid the numerical instability of ‘ST using power functions’ (including existing whitening-based methods), which is shown in Lines 201-210 of our submission. The motivation to use INTL is that IterNorm-only also suffers dimensional collapse unexpectedly in SSL, but adding trace loss can avoid it based on our theoretical analyses and empirical validation. **Concern 2:** For the ablation study of batch size, Table 4 shows that the performance of the proposed method increases significantly as the batch size increases, indicating that the method is also sensitive to the batch size. In addition, SimCLR, SwAV and other methods require a batch size of 4096. Table 4 should provide the corresponding results with a batch size of 4096 and the sensitivity of other methods to the batch size. **Response:** Following your comments, we conduct experiments to compare the performances of SimCLR, SwAV, and INTL pre-trained on ImageNet for 100 epochs with respect to the batch sizes. The results are shown in the table below. 
Because training with a batch size of 4096 requires approximately 512 GB of GPU memory, we are unable to train the corresponding models on our workstation with 4 A100-PCIE-40GB GPUs. The results of other methods using a batch size of 4096 are taken from the corresponding papers. One important phenomenon for identifying sensitivity to batch size is that when the batch size decreases from 128 to 64, the top-1 accuracy of SimCLR and SwAV degrades significantly, by nearly 10%, while the top-1 accuracy of our INTL degrades only slightly, by 1.7%, performing much better than the other two methods. These experimental results indicate that INTL has good robustness to batch sizes. We will include these results in the revised version.

| Method | 32 | 64 | 128 | 256 | 512 | 1024 | 4096 |
| :----- | :--: | :------: | :------: | :------: | :------: | :------: | :--: |
| SimCLR | - | 52.9 | 62.3 | 63.0 | 65.1 | 66.0 | 66.5 |
| SwAV | - | 53.6 | 63.7 | 65.9 | 66.3 | 66.3 | 66.5 |
| INTL | 64.2 | **66.4** | **68.1** | **68.7** | **69.5** | **69.7** | - |

**Concern 3:** Among the existing SSL methods compared in the experimental part, few new methods since 2022 are covered. More state-of-the-art method comparisons need to be added. **Response:** Sincerely thanks for your constructive suggestions. We conduct additional experiments to compare with state-of-the-art methods, including Zero-CL (ICLR 2022) and CW-RGP (NeurIPS 2022), following the same setups as in Table 2. The results are shown in the table below. We find that INTL also outperforms these methods. We will add the results in the revised version.
| Method | CIFAR-10 | CIFAR-100 | ImageNet-100 |
| :----------: | :---------------------------------: | :----------------------------------: | :---------------------------------: |
| | top-1   5-nn   top-5 | top-1   5-nn   top-5 | top-1   5-nn   top-5 |
| Zero-CL (ICLR 2022) | 90.81   87.51   99.77 | 70.33   59.21   92.05 | 79.26   71.18   94.98 |
| CW-RGP (NeurIPS 2022) | 92.03   89.67   99.73 | 67.78   58.24   90.65 | 76.96   68.46   93.76 |
| INTL (ours) | **92.60   90.03   99.80** | **70.88   61.90   92.13** | **81.68   73.46   95.42** |
Summary: This submission proposes a spectral transformation for redundancy-reduction-based (a.k.a. whitening-based) self-supervised learning (SSL). Spectrum-domain modulation is helpful in preventing the collapse caused by representations' rank deficiency. Since the whitening operation can be seen as a square-root transformation on the spectrum, it can be generalized to other power-function transformations. To avoid the numerical instability of matrix decomposition in computing spectra, an approximation based on Newton's iteration method, IterNorm, is proposed. While IterNorm by itself suffers from dimensional collapse, its combination with a trace loss, a term that encourages the spectrum to have an equal-eigenvalue distribution, works well to avoid collapse. Experiments on ImageNet, CIFAR-10/100, and COCO show that the proposed method outperforms existing state-of-the-art methods.

Strengths:
- The idea of spectrum-domain modulation seems intuitive for avoiding the dimensional collapse caused by rank deficiency of representations.
- State-of-the-art SSL performances using ResNet-50 were achieved.
- The especially strong performance at avoiding collapse with small batch sizes is practical for low-resource training.

Weaknesses:
- The difference between existing whitening-based methods and power-function modulation with p = 0.5: Sec 3.2 empirically shows that p == 0.5 (== 1/2) is a good value. At the same time, p4 l 170 says, "whitening is a special instance of spectral transformation, where g(·) is a power function g(λ) = λ^{− 1/2}". This confused me as to what difference this generalization makes compared with usual whitening.
- Trace loss + BarlowTwins/VICReg?: It is said that IterNorm works well only when combined with the trace loss. This naturally implies that the source of success is the trace loss itself rather than IterNorm. Is it possible to combine the trace loss with existing whitening-based methods apart from IterNorm?
Technical Quality: 3 good
Clarity: 2 fair
Questions for Authors:
- p4 l151 "a function over embedding Z during forward pass, and modulates the spectrum of embedding Z implicitly during backward pass when minimizing MSE loss": I could not follow this part; what kind of modulation is done in the backward pass? A more concrete explanation is preferable.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 2 fair
Contribution: 4 excellent
Limitations: Limitations or social impact are not discussed, but I do not have particular concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

## Response to Reviewer tQK6

We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific concerns and questions below.

**Concern 1:** The difference between existing whitening-based methods and power-function modulation with $p = 0.5$: Sec 3.2 empirically shows that $p==0.5 (== 1/2)$ is a good value. At the same time, p4 l 170 says, 'whitening is a special instance of spectral transformation, where g(·) is a power function $g(λ) = λ^{− 1/2}$'. This confused me with what difference this generalization made from usual whitening.

**Response:** In our Spectral Transformation framework, $g(\lambda)$ can be any function that maps the eigenvalue $\lambda$ to $\lambda g^2(\lambda)$. Therefore, both whitening ($g(\lambda)=\lambda^{-1/2}$) and the power function ($g(\lambda)=\lambda^{-p}, p \in \mathbb R$) are special cases of ST. Note that the mapping function $g(\lambda)$ of IterNorm with iteration number $T$ is $f_T(\frac{\lambda}{tr(\Sigma)}) / \sqrt{tr(\Sigma)}$ (Line 218), which is not a power function, but is an ST by definition; it is defined via the iterative function $f_{k+1}(x) = \frac{3}{2} f_{k}(x)-\frac{1}{2} x f_{k}^3(x), k \geq 0$, with $f_{0}(x) = 1$ (Line 216). Besides, a different $T$ for IterNorm implies a different mapping function $g(\lambda)$. Note that in Section 3.2 we want to show that there exist functions for ST other than whitening that can also avoid collapse (e.g., the power function for ST with $p = 0.45$ or $0.55$ in Figure 2). Perhaps our unclear description caused your misunderstanding, and we will further refine this section.

**Concern 2:** Trace loss + BarlowTwins/VICReg?: It is said that IterNorm works well only when combined with trace loss. This naturally implies that the source of success is trace loss itself rather than IterNorm. Is it possible to combine trace loss with existing whitening-based methods apart from IterNorm?
**Response:** Following your insightful comments, we conducted experiments on CIFAR-10/100 and ImageNet-100 to evaluate the performance of adding the trace loss to BarlowTwins/VICReg, following the same setup as in Table 2 of our paper (except that we train models on CIFAR-10/100 for 200 epochs and on ImageNet-100 for 100 epochs due to the time limit of the rebuttal period). We set the coefficient of the trace loss to 0.01, which is empirically applicable for both methods. The results are shown in the table below. We find that adding the trace loss to Barlow Twins is feasible and can slightly improve performance, but adding it to VICReg heavily reduces performance, especially on ImageNet-100. We conjecture that adding the trace loss to these two methods changes the intensity of regularization, which may either disrupt the original balance and decrease performance, or achieve a better balance and boost performance.

| Method | CIFAR-10 | CIFAR-100 | ImageNet-100 |
| :-----------------------: | :-------------------------------------: | :----------------------------------: | :---------------------------------: |
| | top-1   5-nn   top-5 | top-1   5-nn   top-5 | top-1   5-nn   top-5 |
| Barlow Twins | 80.43   **76.68**   99.05 | 51.60   42.71   80.37 | 58.34   50.21   83.46 |
| Barlow Twins + trace loss | **80.45**   76.32   **99.15** | **51.66   43.94   81.37** | **59.78   50.45   83.80** |
| | | | |
| VICReg | **83.14   79.62   99.23** | **55.96   46.71   83.37** | **66.01   57.76   89.14** |
| VICReg + trace loss | 81.67   78.74   99.06 | 54.75   46.24   82.98 | 63.54   55.18   86.34 |

**Question 1:** 'a function over embedding Z during forward pass, and modulates the spectrum of embedding Z implicitly during backward pass when minimizing MSE loss': I could not follow this part; what kind of modulation is done in the backward pass? A more concrete explanation is preferable.

**Response:** Thanks for your suggestion.
We do not derive the exact formulation of how the spectrum of the embedding is modulated in each optimization step (backward pass); rather, we address the optimization direction of the spectrum of the embedding when gradient-based methods are used, i.e., what the spectrum of the embedding is modulated towards during the course of training. For example, the whitening loss encourages the embedding to be full-rank [40] during the course of training; that is, the spectrum of the embedding is modulated towards full rank when we update the embedding using gradient-based optimization methods. In this paper, we theoretically prove that INTL promotes the equality of all eigenvalues of the covariance matrix of the embedding $\mathbf{Z}$ during the course of optimization. This equal-eigenvalue spectrum is what INTL modulates the embedding towards during the backward pass. Following your insightful comments, we will provide a more concrete explanation of the modulation in the revised version.

---

Rebuttal Comment 1.1: Comment: I appreciate the Authors' detailed response. The response well answered my concerns, including Concern 1, which was based on my misunderstanding. I recommend including full-iteration comparisons of INTL vs Barlow Twins + trace loss & VICReg + trace loss in the camera-ready. I now safely vote for acceptance and keep my initial rating.

---

Reply to Comment 1.1.1:
Title: Thank you for the response
Comment: Dear reviewer tQK6, We sincerely appreciate you taking the time to review our responses and vote for acceptance of our paper. We will carefully follow all of your comments and include them in the revised version to improve this paper. Best wishes to you. Authors of Paper 6124
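For concreteness, the spectral transformation discussed in this exchange can be sketched numerically. This is an editor's minimal NumPy illustration, not code from the paper: `spectral_transform`, the toy data, and the variable names are our own. It eigendecomposes the mini-batch covariance and rescales each eigen-direction by `g(lambda)`, so each eigenvalue `lambda` of the output covariance becomes `lambda * g(lambda)**2`; whitening and the power function are recovered as special choices of `g`.

```python
import numpy as np

def spectral_transform(Z, g):
    """Modulate the spectrum of a d x m mini-batch embedding Z.

    Eigendecompose the covariance Sigma = Z Z^T / m of the centered
    embedding and scale the i-th eigen-direction by g(lambda_i), so the
    covariance of the output has eigenvalues lambda_i * g(lambda_i)**2.
    """
    Z = Z - Z.mean(axis=1, keepdims=True)           # center per dimension
    lam, U = np.linalg.eigh(Z @ Z.T / Z.shape[1])   # spectrum of covariance
    return U @ (g(lam)[:, None] * (U.T @ Z))        # U diag(g(lam)) U^T Z

rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 256))                   # toy embedding, d=8, m=256

# Whitening is the special case g(lambda) = lambda^{-1/2}:
# the output covariance is exactly the identity.
Z_w = spectral_transform(Z, lambda lam: lam ** -0.5)

# A power function g(lambda) = lambda^{-p} with p = 0.45 is another valid ST:
# eigenvalues are mapped to lambda**(1 - 2p) = lambda**0.1, which flattens
# (but does not fully equalize) the spectrum.
Z_p = spectral_transform(Z, lambda lam: lam ** -0.45)
```

IterNorm corresponds to yet another choice of `g` (a polynomial built by Newton's iteration rather than a power function), which is the point the authors make about Line 218.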
Summary: The paper studies the dimension collapse problem in image self-supervised learning. Previously, people used image-space data augmentation, momentum, stop-gradient, and so on to address this problem. This paper proposes to address the problem via spectrum transformation (ST) in the feature space instead (aka balancing the eigenvalues of the covariance matrix while keeping the eigenvectors the same within mini-batches).

Strengths:
1. Its ST generalizes the previous whitening approach (aka making the covariance matrix the identity), and extends it to power functions (aka p \in R instead of p = 0.5).
2. For the calculation of the whitening matrix, the paper proposes a less expensive approach (IterNorm), and analyzes how the singular values change under IterNorm. But to really make things work, the paper adds a trace loss (INTL).
3. The paper shows INTL reaches SOTA performance on image classification and object detection tasks. And INTL is less sensitive to batch size.

Weaknesses:
1. The **major concern** for this paper is that it shares quite a few key ideas with a recent AAAI work: "Spectral Feature Augmentation for Graph Contrastive Learning and Beyond" (https://arxiv.org/abs/2212.01026), but the AAAI work has not been cited or discussed in this work, making the contribution of this work a bit incremental:
a. The motivations are similar: deal with feature collapse/dimension collapse by changing the feature space's spectrum.
b. Similar pipeline: the AAAI paper applies SFA after the encoder and before projection, while this work applies ST at the end of the pipeline. The AAAI work injects noise for better augmentation while this paper does not. This paper ultimately needs an additional trace loss to make ST work, which may play a similar role to the injected noise.
c. Iterative normalization to calculate \Sigma^{-0.5} is like appendix C8 algorithm 2, but with a slightly different formulation.
The AAAI paper calculates \Sigma^{-0.5} and \Sigma^{0.5} at the same time. This work only calculates \Sigma^{-0.5}, with a slightly different iteration function.
d. This work's power functions for ST are like AAAI's MaxExp(F) and Matrix Square Root, but this work's approach does not inject Gaussian noise.
e. Both works experiment with SSL on ImageNet. The AAAI work explored both graphs and images. This work focuses more on images, so it also explores CIFAR, as well as object detection tasks. But the AAAI work deals with contrastive SSL (InfoNCE) while this work uses a non-contrastive approach (normalized MSE loss).

2. It states that it explores other possible spectrum transformation approaches but only extends from whitening (p = 0.5) to power functions (p \in R) with a limited range of working p around 0.5. This does not validate its claim of generalizing ST.
3. Directly using ST (IterNorm) does not work, and it finally needs to add the trace loss (INTL). It is not shown whether the performance would still be good if only the implicit trace loss were used without explicit ST. This does not validate its need for explicit ST.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The notation in Figure 1 is not clear: are X, Z matrices? The notation of Z in Figure 1 and the equations is inconsistent. It would also be better to add a notation section since there are many similar symbols.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

## Response to Reviewer Ckne

We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific concerns and questions below.

**Concern 1:** The major concern for this paper is that it shares quite a few key ideas with a recent AAAI work.

**Response:** We thank the reviewer for pointing out this recent related work [AAAI 2023] and describing the similarities and differences between our work and [AAAI 2023]. Here, we further complement the differences based on the reviewer's points:
1. In terms of point (b) of the reviewer, we note that IterNorm (an instance of ST) requires an additional trace loss to work, but whitening (also an instance of ST) can work well without an additional trace loss;
2. In terms of point (d) of the reviewer, we note that we use the power function in Section 3.2 as an example to illustrate that there exist STs beyond whitening that can avoid dimensional collapse. Note that the $g(\lambda)$ of IterNorm with iteration number $T$ is $f_T(\frac{\lambda}{tr(\Sigma)}) / \sqrt{tr(\Sigma)}$ (Line 218, with $f_T(\cdot)$ defined in Line 216), which is not a power function, but is an ST by definition;
3. We theoretically prove that IterNorm with trace loss (INTL) can avoid collapse and modulate the spectrum of the embedding towards an equal-eigenvalue distribution.

We will cite the related work [AAAI 2023] and acknowledge its contribution to SSL using spectral analyses in our revised version.

**Concern 2:** It states that it explores other possible spectrum transformation approaches but only extends from whitening (p = 0.5) to power functions (p \in R) with a limited range of working p around 0.5. This does not validate its claim of generalizing ST.

**Response:** We highlight that the power function is just one kind of ST, and we use it in Section 3.2 as an example to illustrate that there exist STs beyond whitening that can avoid dimensional collapse.
Note that the mapping function $g(\lambda)$ of IterNorm with iteration number $T$ is $f_T(\frac{\lambda}{tr(\Sigma)}) / \sqrt{tr(\Sigma)}$ (Line 218), which is not a power function, but is an ST by definition; it is defined via the iterative function $f_{k+1}(x) = \frac{3}{2} f_{k}(x)-\frac{1}{2} x f_{k}^3(x), k \geq 0$, with $f_{0}(x) = 1$ (Line 216). Besides, a different $T$ for IterNorm implies a different mapping function.

**Concern 3:** Directly using ST (IterNorm) does not work, and it finally needs to add the trace loss (INTL). It is not shown whether the performance would still be good if only the implicit trace loss were used without explicit ST. This does not validate its need for explicit ST.

**Response:** Following your insightful comments, we conducted additional experiments on CIFAR-10 to validate the performance of using the trace loss without IterNorm. We use the same experimental setup as in Table 2 of our paper, except that the models are only pre-trained for 200 epochs on CIFAR-10. The results are shown in the table below. We observe that the trace loss without IterNorm fails to avoid dimensional collapse. Note that IterNorm ensures that the covariance matrix of the transformed output $\widehat{\mathbf{Z}}$ has eigenvalues in $(0, 1)$. The trace loss combined with IterNorm encourages all the eigenvalues of the covariance matrix of $\widehat{\mathbf{Z}}$ towards $1$ and further promotes the equality of all eigenvalues of the covariance matrix of the embedding $\mathbf{Z}$ during the course of optimization, as shown by Theorem 2. This provides a theoretical guarantee for avoiding dimensional collapse. We will add these results to validate the necessity of ST in Section 4 of our paper.
| Method | w\ IterNorm | w\o IterNorm |
| :------------- | :---------------------------------: | :-----------------------------: |
| | top-1   5-nn   top-5 | top-1   5-nn   top-5 |
| w\ trace loss | **90.75   87.58   99.71** | 16.15   12.34   58.65 |
| w\o trace loss | 36.15   31.34   73.65 | 10.00   10.00   50.00 |

**Question 1:** The notation in Figure 1: are X, Z matrices?

**Response:** Yes, $\mathbf{X}$ and $\mathbf{Z}$ are matrices. In Lines 142~144, we explain that $\mathbf{Z}$ is the corresponding mini-batch embedding, given the mini-batch input $\mathbf{X}$. We use capital letters to represent the mini-batch (a matrix) of the input ($\mathbf{X}$), the embedding ($\mathbf{Z}$), and the transformed output ($\widehat{\mathbf{Z}}$).

---

Rebuttal 2:
Title: We thank the reviewer again for the valuable feedback and are happy to address any remaining concerns
Comment: We extend our sincere gratitude to the reviewer for their valuable time and insightful feedback. We value the constructive feedback and hope that our responses have appropriately addressed all the concerns. We really appreciate the reviewer taking the time to consider our responses, and we are happy to address any remaining concerns.
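The Newton-iteration form of IterNorm referenced throughout this thread can be sketched at the matrix level. This is a hedged illustration with our own function name (`iternorm_whitening_matrix`); the paper's implementation may differ in details such as how the iterates are composed. The key property it demonstrates is that the whitening matrix is approximated without any eigendecomposition, avoiding the numerical instability of explicit spectral computations.

```python
import numpy as np

def iternorm_whitening_matrix(cov, T=7):
    """Approximate cov^{-1/2} by Newton's iteration, without eigendecomposition.

    Trace-normalize so all eigenvalues lie in (0, 1], then iterate
    P_{k+1} = (3 P_k - P_k^3 Sigma_N) / 2, the matrix analogue of the
    scalar map f_{k+1}(x) = 1.5 f_k(x) - 0.5 x f_k(x)^3 with f_0(x) = 1.
    """
    t = np.trace(cov)
    sigma_n = cov / t                 # eigenvalues in (0, 1]
    P = np.eye(cov.shape[0])
    for _ in range(T):
        P = 0.5 * (3.0 * P - P @ P @ P @ sigma_n)
    return P / np.sqrt(t)             # undo the trace normalization

# Toy usage: a 6 x 6 covariance from a random 6 x 200 batch.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 200))
cov = A @ A.T / 200
W = iternorm_whitening_matrix(cov, T=12)
# W approximates cov^{-1/2}: W @ cov @ W is close to the identity.
```

With few iterations `P` is a low-degree polynomial in the normalized covariance, which is why the induced spectral map is not a power function; with many iterations it converges to exact whitening.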
Rebuttal 1: Rebuttal:

## Global Response to Reviewers

We thank all the reviewers for their detailed and constructive comments. We briefly highlight the merits recognized by the reviewers as follows:
1. Great presentation, for example, "the paper is well-organized with a clear story and justification" (TM8b), "the presentation is great" (QdWo), and "The presentation of the paper is good, with a clear elaboration of the motivation and contribution of the proposed method" (Fnki).
2. Theoretical contribution, for example, "this paper conducts rigorous theoretical analysis" (SEtq), "the paper proposes an elegant method with theoretical guarantees" (Fnki), and "the paper prove the previous whitening function is a special case of spectrum transformation (ST)" (TM8b).
3. Empirical success, for example, "The paper shows INTL reaches SOTA performance on image classification and object detection tasks" (Ckne), "State-of-the-art SSL performances using ResNet50 were achieved" (tQK6), and "the proposed method is evaluated on several SSL-evaluation benchmarks, and significantly surpasses the previous state-of-the-art methods" (Fnki).

We respond to each reviewer in a separate local 'Rebuttal' section. We also conducted additional experiments using vision transformer (ViT) backbones, following the suggestion of Reviewer Fnki. The results are shown in Figure 1 of the 'pdf' file of the global 'Rebuttal' section.

Pdf: /pdf/988de20be0218ea9cced1a96634ac69b804f6dd6.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper addresses the dimensional collapse problem in self-supervised learning and proposes a framework to modulate the spectrum of the embedding. Besides, a new spectral transformation variant, called IterNorm with trace loss (INTL), is proposed that can avoid collapse and modulate the spectrum of the embedding towards an equal-eigenvalue distribution. Experiments on various tasks verify the effectiveness of the proposed method.

Strengths:
- The proposed method looks technically sound.
- Extensive experiments have been conducted to verify the effectiveness by comparing with existing methods in the field.
- The presentation is great.

Weaknesses: This looks like a good paper. I do not have much to complain about (possibly because I do not work in this direction). I just have two minor questions.
- How do you balance the MSE loss and the IterNorm trace loss? Are there coefficients on the two loss terms? If there are, how do you set the coefficient values? Do they stay fixed across all the datasets?
- The proposed method INTL is based on the empirical finding that when p is in the neighborhood of 0.5, e.g. 0.45 ∼ 0.55, the model can avoid collapse. Is there any intuitive explanation or theoretical analysis for this phenomenon?

Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: See the weaknesses above.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 3 good
Limitations: See the weaknesses above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

## Response to Reviewer QdWo

We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific questions below.

**Question 1:** How do you balance the MSE loss and the IterNorm trace loss? Are there coefficients on the two loss terms? If there are, how do you set the coefficient values? Do they stay fixed across all the datasets?

**Response:** Yes, there is a coefficient between the MSE loss and the IterNorm trace loss. We set the loss function of INTL as $\mathcal{L}\_{INTL}=\mathcal{L}\_{MSE}+\beta \cdot \mathcal{L}\_{trace}$. In our experiments, we observe that the coefficient $\beta$ that obtains good performance is related to the batch size. So we empirically regress $\beta$ against various batch sizes and obtain $\beta=0.01 \times (\log_{2} bs-3)$, where $bs$ denotes the batch size and $bs>8$. We keep the coefficient $\beta$ fixed in this form (i.e., $\beta$ **is determined given the batch size**) across all the datasets and architectures, so our INTL can be directly applied to other datasets and models without tuning the coefficient. These settings and descriptions can also be found in Section C 'Algorithm of INTL' of our supplementary materials.

**Question 2:** The proposed method INTL is based on the empirical finding that when $p$ is in the neighborhood of $0.5$, e.g. $0.45 ∼ 0.55$, the model can avoid collapse. Is there any intuitive explanation or theoretical analysis for this phenomenon?

**Response:** We note that a recent work [40] implied that the whitening loss can be decomposed into two asymmetric losses $\mathcal{L} = \frac{1}{m} \| \phi(\mathbf{Z}\_{1})\mathbf{Z}\_{1} - (\widehat{\mathbf{Z}}\_{2})\_{st} \|\_{F}^2 + \frac{1}{m} \| \phi(\mathbf{Z}\_{2})\mathbf{Z}\_{2} - (\widehat{\mathbf{Z}}\_{1})_{st} \|\_{F}^2$, where $\phi(\mathbf{Z})$ is the whitening matrix of $\mathbf{Z}$, and $\widehat{\mathbf{Z}}$ is the whitening output.
Each asymmetric loss can be viewed as an online network matching a whitened target $\widehat{\mathbf{Z}}$. As a general form of whitening, our ST can also apply this decomposition to the loss function. Our intuitive explanation is as follows: when $p$ is in the neighborhood of $0.5$, e.g. $0.45 ∼ 0.55$, $\widehat{\mathbf{Z}}$ has a well-conditioned spectrum in which each eigenvalue approaches 1. In this case, $\widehat{\mathbf{Z}}$ is a good target for $\phi(\mathbf{Z})\mathbf{Z}$ to match, so that the embedding $\mathbf{Z}$ can learn a good spectrum that avoids collapse. For example, if $\mathbf{Z}$ suffers dimensional collapse, $\phi(\mathbf{Z})\mathbf{Z}$ will be low-rank and cannot exactly match the full-rank $\widehat{\mathbf{Z}}$. On the contrary, when $p$ deviates from $0.5$ to a certain extent, the spectrum of the transformed output $\widehat{\mathbf{Z}}$ is not well-conditioned and is not a good target in representation. We believe more theoretical work on this part is worth exploring in the future.
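The coefficient rule given in this response can be sketched directly. The `intl_beta` formula below is taken from the response itself; the exact form of the trace loss, however, is not spelled out in this thread, so the `trace_loss` function is our guess based on the statement that IterNorm keeps the eigenvalues of the output covariance in (0, 1) and the trace loss pushes them towards 1 (minimizing `d - trace` does exactly that under those conditions). Function names are ours.

```python
import math

import numpy as np

def intl_beta(batch_size):
    """Coefficient between the MSE loss and the trace loss, per the
    response: beta = 0.01 * (log2(bs) - 3), valid for bs > 8."""
    assert batch_size > 8
    return 0.01 * (math.log2(batch_size) - 3)

def trace_loss(z_hat):
    """Guessed trace-loss form (the paper's definition may differ):
    d - tr(cov(z_hat)). Since IterNorm bounds each eigenvalue of
    cov(z_hat) in (0, 1), driving the trace toward d drives every
    eigenvalue toward 1. A fully collapsed z_hat gives the maximal
    value d."""
    d, m = z_hat.shape
    zc = z_hat - z_hat.mean(axis=1, keepdims=True)
    return d - np.trace(zc @ zc.T / m)

# Schematic total loss: L_INTL = L_MSE + intl_beta(bs) * trace_loss(z_hat).
```

For example, `intl_beta(256)` gives 0.05 and `intl_beta(64)` gives 0.03, matching the fixed batch-size-dependent schedule the authors describe.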
Summary: This paper tackles the dimension collapse problem in self-supervised learning. The authors propose spectral transformation, which can serve as an alternative to the whitening function in avoiding dimensional collapse. Further, they propose a new instance of ST, called IterNorm with trace loss (INTL), and prove that INTL can avoid collapse. Results on several SSL benchmarks show promising improvements over previous methods.

Strengths:
- The paper is well-organized with a clear story and justification.
- The paper aims at analyzing the dimension collapse problem and proves the previous whitening function is a special case of spectrum transformation (ST).
- The authors claim that the numerical instability can be solved if a spectral transformation can modulate the spectrum without explicitly calculating λ or U. Then, they propose Iterative Normalization with Trace Loss as a solution.
- The performance on ImageNet, CIFAR, and COCO shows improvements over previous methods such as Barlow Twins and VICReg.

Weaknesses:
1. Comparing methods in the Experiment section: As the proposed method is claimed to be the general form of the whitening function in SSL, I am wondering why the authors did not put the performance comparison against whitening loss [20, 40]. As [20, 40] have demonstrated performance gains over Barlow Twins, it is hard to tell how INTL is better than [20, 40].
2. Concerns about novelty. In my view, section 3.2 Spectral Transformation proves the effectiveness of whitening. The general form of ST is $g(\lambda) = \lambda^{-p}$ and the authors claim that
> it seems to perform well to avoid collapse although the transformed output is not ideally whitened when p is in the neighborhood of 0.5, e.g. 0.45 ∼ 0.55. But when p gradually deviates from 0.5, collapse occurs.

The whitening function is the case where $p=0.5$.
In section 3.3, the iterative normalization has also been explored by [40] in https://github.com/PatrickHua/FeatureDecorrelationSSL/blob/main/models/utils/iterative_normalization.py

The major contribution of this paper, in my view, is to propose a trace loss in iterative normalization to avoid dimension collapse. Yet, as the quantitative comparison to [20, 40] is missing, it is hard to evaluate the contribution.

Technical Quality: 3 good
Clarity: 4 excellent
Questions for Authors: Please refer to the weaknesses
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: I think the authors address the limitations well.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

## Response to Reviewer TM8b

We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific concerns below.

**Concern 1:** Comparing methods in the Experiment section: As the proposed method is claimed to be the general form of the whitening function in SSL, I am wondering why the authors did not put the performance comparison against whitening loss [20, 40].

**Response:** We sincerely thank you for your constructive suggestions. The reason we did not provide results for Shuffled-DBN [20] and CW-RGP [40] is that the baselines in Table 1 (Evaluation on ImageNet) of our paper are nearly all pre-trained for 800 or 1000 epochs, while the official results of Shuffled-DBN [20] and CW-RGP [40] that we can find in the corresponding papers [20, 40] are pre-trained for at most 200 epochs. We think it is unfair to directly compare [20, 40] against the other baselines in Table 1 that have long-term training, so we did not include that comparison.

Following your insightful suggestions, we also conducted additional experiments to compare against [20, 40] under the same experimental setup as ours. We first show the results on CIFAR-10/100 and ImageNet-100 in the table below, following the same setup as in Table 2 of our paper. We can see that INTL outperforms Shuffled-DBN and CW-RGP consistently over all datasets.
| Method | CIFAR-10 | CIFAR-100 | ImageNet-100 |
| :---------------- | :--------------------------------: | :----------------------------------: | :--------------------------------: |
| | top-1   5-nn   top-5 | top-1   5-nn   top-5 | top-1   5-nn   top-5 |
| Shuffled-DBN [20] | 91.17   88.95   99.62 | 66.81   57.27   90.78 | 75.27   67.21   93.12 |
| CW-RGP [40] | 92.03   89.67   99.73 | 67.78   58.24   90.65 | 76.96   68.46   93.76 |
| INTL (ours) | **92.60   90.03   99.80** | **70.88   61.90   92.13** | **81.68   73.46   95.42** |

For the experiments on ImageNet, we only compare INTL to Shuffled-DBN [20] and CW-RGP [40] under 200 epochs of pre-training, due to the time limit of the rebuttal period, following the same experimental setup as in Table 1 of INTL. The results are shown in the table below. We can see that INTL also outperforms Shuffled-DBN and CW-RGP consistently. The 800-epoch training runs for Shuffled-DBN and CW-RGP are currently in the pipeline. We will add these results in the revised version.

| Method | ImageNet |
| :---------------- | :---------------------: |
| | top-1   top-5 |
| Shuffled-DBN [20] | 65.18   85.32 |
| CW-RGP [40] | 69.72   88.92 |
| INTL (ours) | **71.10   90.61** |

**Concern 2:** Concerns about novelty. In my view, section 3.2 Spectral Transformation proves the effectiveness of whitening. The general form of ST is $g(\lambda)=\lambda^{-p}$.

**Response:** We first clarify the relationship among whitening, Spectral Transformation (ST), and the power function ($g(\lambda)=\lambda^{-p}, p \in \mathbb R$) for ST. In our Spectral Transformation framework, $g(\lambda)$ can be any function that maps the eigenvalue $\lambda$ to $\lambda g^2(\lambda)$. Therefore, both whitening ($g(\lambda)=\lambda^{-1/2}$) and the power function ($g(\lambda)=\lambda^{-p}, p \in \mathbb R$) are special cases of ST.
Note that the $g(\lambda)$ of IterNorm with iteration number $T$ is $f_T(\frac{\lambda}{tr(\Sigma)}) / \sqrt{tr(\Sigma)}$ (Line 218, with $f_T(\cdot)$ defined in Line 216), which is not a power function, but is an ST by definition. We agree that Section 3.2 proves the effectiveness of whitening ($p=0.5$). Meanwhile, we also want to show in Section 3.2 that there exist functions for ST other than whitening that can also avoid collapse (e.g., the power function for ST with $p = 0.45$ or $0.55$ in Figure 2). This result drives us to believe that other effective STs can be designed or found to avoid dimensional collapse. Perhaps our unclear description caused your misunderstanding, and we will further refine this section.

**Concern 3:** In section 3.3, the iterative normalization has also been explored by [40] in the GitHub repository.

**Response:** We thank the reviewer for pointing out that the iterative normalization (IterNorm) algorithm is included in that GitHub repository. After checking it, we believe it is probably the released codebase of the Shuffled-DBN paper [20] (rather than the CW-RGP paper [40]). However, we do not find any description of or experiment with IterNorm in the Shuffled-DBN paper [20]; the paper [20] uses the original ZCA batch whitening algorithm for Shuffled-DBN in its experiments. Note that our paper shows that IterNorm suffers severe dimensional collapse and mostly fails to train the model in SSL. That, we guess, is why the Shuffled-DBN paper [20] uses the ZCA batch whitening algorithm rather than IterNorm, even though IterNorm is in its repository. Moreover, we propose IterNorm with trace loss (INTL), a solution to the failure of IterNorm, and we theoretically prove that INTL can avoid collapse and modulate the spectrum of the embedding towards an equal-eigenvalue distribution during the course of optimization.
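The scalar mapping cited throughout this rebuttal (Lines 216 and 218 of the paper) can be sketched to check the "not a power function" claim numerically. This is our own small illustration; the function names `f` and `g` follow the rebuttal's notation but are otherwise assumptions.

```python
import math

def f(T, x):
    """Scalar Newton iteration from Line 216 of the paper:
    f_{k+1}(x) = 1.5 * f_k(x) - 0.5 * x * f_k(x)**3, with f_0(x) = 1."""
    y = 1.0
    for _ in range(T):
        y = 1.5 * y - 0.5 * x * y ** 3
    return y

def g(T, lam, trace):
    """IterNorm's spectral map from Line 218:
    g(lambda) = f_T(lambda / tr(Sigma)) / sqrt(tr(Sigma))."""
    return f(T, lam / trace) / math.sqrt(trace)

# For small T, f_T is a polynomial in x (e.g. f_1(x) = 1.5 - 0.5*x),
# so g is not of the form lambda**(-p). As T grows, f_T(x) converges
# to x**(-1/2) for x in (0, 1], i.e. IterNorm approaches exact whitening.
```

This makes the authors' point concrete: each choice of `T` induces a different polynomial spectral map, none of which is a power function, yet all are valid STs.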
null
null
null
null
Recurrent Hypernetworks are Surprisingly Strong in Meta-RL
Accept (poster)
Summary: The paper investigates the performance of a specific type of architecture, RNNs coupled with hypernetworks, in meta-reinforcement learning. Meta-RL aims to address the sample inefficiency of RL algorithms by learning to perform few-shot learning when given a distribution of related tasks for meta-training. The authors note that while specialized and complex meta-RL methods have been proposed in the literature, recent work suggests that using an off-the-shelf sequential model, such as an RNN, trained in an end-to-end manner can serve as a strong baseline. However, the supporting evidence for this claim has been limited. The paper presents an extensive empirical investigation to address this gap. While RNNs can achieve strong performance in meta-RL, the study finds that the use of hypernetworks is crucial in maximizing their potential. Interestingly, when combined with hypernetworks, the simpler recurrent baselines outperform existing specialized methods and establish themselves as state-of-the-art (SOTA) on standard meta-RL benchmarks. Strengths: S1: The paper is extremely clear S2: The authors make sure to tune each method, which is something I do not see often enough S3: Coupling RNNs and hypernetworks is an idea that is novel in the field of meta-RL, and the attempts at understanding why they might be outperforming other baselines are interesting Weaknesses: W1: While the hyperparameter tuning was fairly done in terms of computation budget, it is not clear to me that this strategy is the most relevant. I can imagine that many of the complex methods have many more hyperparameters to tune than the simple methods that you propose, and an excessively small search grid could unfairly disadvantage the former. W2: The empirical investigation is, as noted by the authors, quite limited. In that regard, more empirical evidence from different benchmarks would make the paper stronger. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1: Line 128, I could not understand why the predicted context is not directly used to condition the policy. From my understanding, the context should contain enough information to disambiguate the tasks, and the concatenation of the state and context should be Markovian. Could you provide an intuition why intermediate representations are used? Q2: The RNN-S is an interesting experiment. One additional hypothesis for why RNN+HN might work well is because it allows for some multiplicative interaction between the context and the state. What types of non-linearities were used in the network? Have you used activation units, such as Gated Linear Units (GLU) [1], which allow for multiplicative interactions between the neurons in the RNN-S? [1]: "Language Modeling with Gated Convolutional Networks", Dauphin 2016 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The limitations I can think of have been addressed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
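The Gated Linear Unit raised in Q2 can be sketched in a few lines (our own minimal numpy illustration of Dauphin et al.'s formulation, not code from either paper): one linear projection is multiplied elementwise by a sigmoid gate computed from a second projection, which is exactly the multiplicative interaction the reviewer asks about.

```python
import numpy as np

def glu(x, W, V, b, c):
    """Gated Linear Unit: (x W + b) * sigmoid(x V + c).
    The sigmoid gate lets one projection multiplicatively modulate
    the other, e.g. a context signal gating state features."""
    gate = 1.0 / (1.0 + np.exp(-(x @ V + c)))
    return (x @ W + b) * gate

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))        # batch of 2 inputs
W, V = rng.normal(size=(2, 8, 4))  # value and gate projections
b, c = np.zeros(4), np.zeros(4)
print(glu(x, W, V, b, c).shape)  # (2, 4)
```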
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate you noting the clarity and fair tuning. We address your comments below. If we have addressed these concerns, please do raise our score, and if not, please let us know what remains unclear: * **1) “While the hyperparameter tuning was fairly done in terms of computation budget, it is to me not clear this strategy is the most relevant. I can imagine that many of the complex methods have many more hyperparameters to tune than the simple methods that you propose, and an excessively small search grid could unfavorably disadvantage the former.”** That is a fair concern, but to mitigate this issue we do use the default parameters from prior work (Zintgraf et al., 2020 and Beck et al., 2022), and we perform additional tuning just for the baselines in the Appendix. * **2) “The empirical investigation is, as noted by the authors, quite limited. In that regard, more empirical evidence from different benchmarks would make the paper stronger.”** We do agree that more environments would make our claim stronger and have done so. Please see the global response above. **Questions:** * **1) “Line 128, I could not understand why the predicted context is not directly used to condition the policy.… Could you provide an intuition why intermediate representations are used?”** In Humplik et al., 2019 the authors suggest to condition on an earlier representation since the task can be inferred if necessary, but the earlier layer contains more information about the belief state. More specifically, we pass the inferred mean and variance from an information bottleneck layer, as in Zintgraf et al., 2020. This explicit representation of the task uncertainty enables more efficient exploration when the task is not yet known. For example, the uncertainty will inform the agent whether it has to explore more or not. We have now added some clarification of this point into the paper as well. * **2) “The RNN-S is an interesting experiment. 
One additional hypothesis for why RNN+HN might work well is because it allows for some multiplicative interaction between the context and the state. What types of non-linearities were used in the network? Have you used activation units, such as Gated Linear Units (GLU) [1] which allows for multiplicative interactions between the neurons in the RNN-S?”** Beck et al. 2022 did consider FiLM as an alternative and restricted type of multiplicative interactions and found the more general hypernetwork to be more capable. For this reason, we also use the hypernetwork. However, we appreciated the idea and think this could make for good future work! --- Rebuttal Comment 1.1: Comment: Following up on the benchmarks (W2), we now have new experiments that show RNN+HN to match VI+HN on ML10 (in addition to our MineCraft results). Please see the new global comment. As this seemed to be much of the prior concern in this review, if we have addressed this issue, please do consider adjusting the score accordingly.
Summary: This paper explores how to approach the problem of meta-reinforcement learning by using hypernetworks. In particular, the authors propose to employ an RL2-like scheme, where instead of producing a vector $\phi$, the recurrent model outputs a set of neural network weights. They dub the method RNN+HN. They introduce a set of task inference-based baselines with and without the usage of hypernetworks. Subsequently, they show that the simple RNN+HN approach outperforms more sophisticated baselines in gridworld environments and in Mujoco. Finally, they perform an ablation study by using RL2 with state-conditioning. Strengths: Overall, I find the paper interesting and the results quite convincing. However, the authors' claims are very strong -- stating that the proposed method is state-of-the-art in the wide field of meta RL. Although the experiments do show that the method works quite well, I do not think it is enough to support this bold claim. As such, for now, I am going with "borderline reject", but I would be inclined to increase the score if the authors either extended the empirical evaluation or toned down the claims presented in the paper. Strengths: - The paper introduces a nice, conceptually simple idea that provides substantial improvements. Parameterizing the policy using a neural network with weights generated by a hypernetwork is an elegant idea. - The empirical results are in general quite good, the RNN+HN method outperforms most of the baselines in many cases. - The paper is well written. - The appendix includes additional ablation studies and more baselines. Weaknesses: - The critical weakness of the paper is the mismatch between the claims and the empirical evaluation. Namely, the authors state multiple times that their method is state-of-the-art in the field of meta-RL, but the experiments do not support that claim sufficiently: - There are not enough external baseline methods. 
The baselines the authors use are inspired by existing methods but do not correspond exactly to what has been proposed in the literature previously (e.g. TI-naive, TI). Additionally, the authors omit existing established works such as MAML-based algorithms and PEARL. As they say, they "exclude the latter methods since estimation of a policy gradient in policy-gradient approaches requires more data than in our benchmarks". That is, in general, a reasonable assumption, but seems too strict when claiming state-of-the-art. - The authors support their claim by saying that their method outperforms previous SOTA [1, 2], but I'm not convinced that these previous works are still SOTA as of now. Some of the recently published works also show very good results [3, 4]. - Currently, the authors only consider two sets of environments: gridworlds and Mujoco continuous control. I think the empirical evaluation should be extended to include more complex environments such as Meta-World, RLBench, Atari, or DeepMind Alchemy. I think that environments with high-dimensional state spaces (e.g. images) would be interesting to test the limits of the proposed method. - There are some environments where RNN+HN falls behind (e.g. Walker, Cheetah-Vel). This is not a grave problem by itself, but it makes the problem of having relatively few benchmarks even more problematic -- how will this method scale up to other environments? - Why the RNN baseline only appears in a single environment in Figure 8 and Figure 9? I think it's an important baseline that should be included. [1] Zintgraf, Luisa, et al. "Varibad: A very good method for bayes-adaptive deep rl via meta-learning." arXiv preprint arXiv:1910.08348 (2019). \ [2] Beck, Jacob, et al. "Hypernetworks in meta-reinforcement learning." Conference on Robot Learning. PMLR, 2023. \ [3] Melo, Luckeciano C. "Transformers are meta-reinforcement learners." international conference on machine learning. PMLR, 2022. 
\ [4] Chalvidal, Mathieu, Thomas Serre, and Rufin VanRullen. "Meta-Reinforcement Learning with Self-Modifying Networks." Advances in Neural Information Processing Systems 35 (2022): 7838-7851. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: My main suggestion related to the first point in the Weaknesses section above, i.e. extending the empirical evaluation or reducing the state-of-the-art claims. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The author discuss limitations sufficiently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
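The RNN+HN scheme summarized in this review — the recurrent model outputs the weights and biases of the policy network rather than a context vector $\phi$ — can be sketched minimally as follows. This is a hypothetical numpy sketch; the layer sizes and the single linear policy head are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def rnn_step(h, x, Wh, Wx):
    """One step of a vanilla RNN over the trajectory."""
    return np.tanh(h @ Wh + x @ Wx)

def hypernetwork_policy(h, s, H, n_state, n_act):
    """Map the RNN hidden state h to the weights and bias of a linear
    policy head, then apply that generated policy to the state s."""
    params = h @ H                            # hypernetwork generates parameters
    W = params[: n_state * n_act].reshape(n_state, n_act)
    b = params[n_state * n_act:]
    return s @ W + b                          # policy logits for state s

rng = np.random.default_rng(0)
n_hid, n_state, n_act = 16, 4, 3
Wh = rng.normal(size=(n_hid, n_hid)) * 0.1
Wx = rng.normal(size=(n_state, n_hid)) * 0.1
H = rng.normal(size=(n_hid, n_state * n_act + n_act)) * 0.1

h = np.zeros(n_hid)
for s in rng.normal(size=(5, n_state)):  # unroll over 5 observed states
    h = rnn_step(h, s, Wh, Wx)
logits = hypernetwork_policy(h, s, H, n_state, n_act)
print(logits.shape)  # (3,)
```

The contrast with a plain RNN baseline is that here the context conditions the policy through generated parameters rather than through concatenation with the state.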
Rebuttal 1: Rebuttal: Thank you very much for your feedback. We especially appreciate you noting the results are quite convincing and that you are open to increasing the score if we adjust our claims or add additional experimentation. We have now done both, and would very much appreciate a corresponding score increase. Answers to your specific questions are below. Please let us know if anything remains unclear: * **1) “There are not enough external baseline methods… the authors omit existing established works such as MAML-based algorithms and PEARL”** VariBAD (Zintgraf et al., 2020) already establishes a performance improvement over PEARL and MAML on the same domains we use. For this reason, we compare to VariBAD, a contemporary VariBAD with a hypernetwork (Beck et al., 2022), and direct task-inference methods. It is fair to note that the direct methods, e.g. TI and TI Naive, are not identical to previous methods. However, these were chosen to maintain the strengths of VariBAD while also using the more direct supervision afforded by the Belief Agent (Humplik et al., 2019). We feel it is actually more fair to do so because the combination uses insights from both papers, and it makes hyperparameter tuning more fair. * **2) “The authors support their claim by saying that their method outperforms previous SOTA [1, 2], but I'm not convinced that these previous works are still SOTA”** The difficulty in establishing SOTA with hundreds of meta-RL papers (Beck et al., 2023) is well taken. We have now adjusted all of our claims throughout the paper to assert that recurrent hypernetworks are surprisingly strong instead of SOTA. The goal of this paper is to show that simpler recurrent methods (trained end-to-end) can vastly outperform contemporary alternatives (often considered SOTA) in the task-inference design space, when hypernetworks are used. 
The use of transformers (Melo et al., 2022), as suggested, is an orthogonal consideration which we do agree could further improve the latent variable passed to the hypernetwork. This could make for great future work. * **3) “Currently, the authors only consider two sets of environments: gridworlds and Mujoco continuous control… I think that environments with high-dimensional state spaces (e.g. images) would be interesting”** We do agree that more environments (including image observations) would make our claim stronger and have done so. Please see the global response above. * **4) “There are some environments where RNN+HN falls behind (e.g. Walker, Cheetah-Vel).…how will this method scale up to other environments?”** Note that on Cheetah-Vel, RNN+HN still has the greatest return at both the start and end of training. As for Walker, the only baselines (TI and TI_Naive) that outperform RNN+HN only do so marginally and are significantly outperformed by RNN+HN on the vast majority of environments. For scaling up, please see the global response above. * **5) “Why the RNN baseline only appears in a single environment in Figure 8 and Figure 9? I think it's an important baseline that should be included.”** We have since run these experiments, and the results confirm that the RNN in fact always performs the same as or worse than RNN+S and RNN+HN, as expected. Please see the global response above. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough response, the modifications introduced to the paper, and the additional experiments. Given that the SOTA claims were removed and most of my other comments were addressed, I decided to increase my score to 5: Borderline Accept. I was thinking about increasing the score further, but I decided against it for now, given the comments from the other reviewers. In particular, I find Reviewer YdBr's argument about the Meta-World results from a previous paper convincing. 
I will follow the further discussions and update my score accordingly. --- Reply to Comment 1.1.1: Title: Response to 8GEx Comment: Thank you very much for the current and ongoing score consideration. One important point on the Meta-World experiments, for clarification, is below. * The Meta-World experiment in Beck et al., 2022 is not statistically significant, which makes it a bit misleading. The fact that that singular results carries so much weight is one of the reasons that the much stronger countervailing evidence in our paper is so necessary to the conversation.
Summary: This paper investigates the performance of recurrent neural networks in meta-RL. The authors suggest that a recurrent neural network can achieve strong performance when equipped with a hypernetwork. They compared this method with numerous baselines on several meta-RL benchmarks and found that the recurrent baselines, along with a hypernetwork, could achieve SOTA performance. Strengths: This paper introduced a novel assertion that a recurrent method, when combined with a hypernetwork, can achieve SOTA performance in meta-reinforcement learning. The authors substantiated this claim with rigorous empirical experiments demonstrating superior performance. Furthermore, they conducted an analysis elucidating the reasons behind the method's impressive performance. The paper is well-written and clearly structured, which makes it easily comprehensible for the readers. Weaknesses: 1. The authors could enhance the comprehensiveness of the study by testing their algorithm on a wider range of environments. 2. The final analysis segment of the paper would benefit from further development to unequivocally ascertain why the proposed method can achieve state-of-the-art performance. 3. The paper lacks ablation studies for the different settings of hypernetwork component, leaving its individual contribution to the overall results unclear. 4. The format of the reference is wrong Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. Why in the Cheetah environment, RNN+HN is worse than RNN+S? 2. Please offer more ablation studies of the hypernetwork component. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors have discussed the limitations of this paper in the last section. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your feedback and for noting the rigorous empirical experiments, elucidating analysis, and clear structure. We address each of your points in turn below. Please do adjust the score correspondingly, or let us know if anything remains unclear. Thank you! * **1) “The authors could enhance the comprehensiveness of the study by testing their algorithm on a wider range of environments.”** We do agree that more environments would make our claim stronger and have done so. For details, please see the global response above. * **2) “The final analysis segment of the paper would benefit from further development to unequivocally ascertain why the proposed method can achieve state-of-the-art performance.”** In addition to preventing interference between different tasks and scaling trends suggested by Beck et al., 2022, we hypothesize that the low norm afforded by our models, i.e., low sensitivity to the latent variable, and re-conditioning on state are crucial for stable training. We have added information on existing motivation for the use of hypernetworks in related work and on our analysis in the introduction to make this paper more self-contained. * **3) “The paper lacks ablation studies for the different settings of hypernetwork component, leaving its individual contribution to the overall results unclear.”** In this paper we do perform ablation studies to separate the contribution of the hypernetwork from just conditioning on the state again. Additionally, we evaluate alternative initialization of the hypernetwork. However, we note that varying the size of the Hypernetwork was already ablated in (Beck et al., 2022), so we use the best size from that investigation. Thus, we feel we have properly covered the space of hypernetwork design. Most of this paper focuses on showing that a very simple recurrent hypernetwork (with few components to ablate) can outperform contemporary task-inference approaches. 
Thus, we spend the most effort ablating the design space of task-inference for comparison. * **4) “The format of the reference is wrong”** Could you elaborate on the reference format concern? As far as we are aware, according to the formatting instructions for 2023, “Citations may be author/year or numeric, as long as you maintain internal consistency. As to the format of the references themselves, any style is acceptable as long as it is used consistently.” Link: https://media.neurips.cc/Conferences/NeurIPS2023/Styles/neurips_2023.pdf **Questions:** * **1) “Why in the Cheetah environment, RNN+HN is worse than RNN+S?”** In Cheetah-Vel, RNN+HN has the same asymptotic return as RNN+S and higher initial returns at the start of training. For a brief period of time in the middle of training, RNN+S does achieve higher returns. Since this environment requires less action diversity than the rest (e.g. running at different velocities but not different directions), the curves can likely be explained as follows: In the beginning, RNN+HN learns behavior shared between the tasks more easily due to Bias-HyperInit that ignores the task, then RNN+S is able to learn the task adaptation marginally faster due to very small difference in actions between tasks, then RNN+HN catches up as it is just capable of learning small (and large) changes. * **2) “Please offer more ablation studies of the hypernetwork component.”** (Addressed above.) --- Rebuttal Comment 1.1: Comment: Following up on the range of environments (1), we now have new experiments that show RNN+HN to match VI+HN on ML10, in addition to our MineCraft results that show superior performance. Please see the new global comments. As this seemed to be much of the prior concern in this review, if we have addressed this issue, please do consider adjusting the score accordingly.
Summary: The manuscript proposes to modify a common meta-RL baseline, in which a meta-learned RNN provides the policy network with an encoding of the trajectory. The proposed recurrent hypernetwork instead lets the RNN predict the weights and biases of the policy network. This is shown to be a strong model in comparison with several variations of RNN-based and task-inference meta-RL approaches. Strengths: I think the proposed approach is novel and the idea is described in sufficient detail. The experimentation affords equal computational resources for hyperparameter optimization of all models. The method seems to perform better in 6 out of 7 considered environments. The ablation studies adequately address some questions that I thought of while reading the paper. Weaknesses: 1. The implementation of the RNN and RNN+HN models could've been described in a bit more detail. It is for example not clear what the meta parameters $\theta$ correspond to in the RNN baseline, since line 108 says that $f$ and $\pi$ use distinct parameters from $\theta$. Is this sentence wrong and $\theta$ are the RNN's parameters? I could also not find all hyperparameters of the architectures, such as the type and layer sizes of the RNNs. 2. For the strong claim that recurrent hypernetworks are SOTA in (all of) meta-RL, the presented experiments seem rather limited. What about experiments on RLBench or Meta-World? Also, there are many more categories of meta-RL methods to be considered than RNN-based and task-inference methods. The manuscript can benefit greatly from a clear justification of this strong claim. 3. I could not find an indication of how exactly the performance curves were generated, e.g. how many seeds per model, what exactly does the shaded area convey, etc. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. 
References to figures such as "See Figures 3 and 4" in line 123, I'd put in parentheses directly in the preceding sentence or say something like "See Figure X for a diagram of Y". ## Acknowledgement of rebuttal I have read the rebuttal and reviews and engaged in discussion with the authors. The authors have mostly addressed my concerns by removing the general claims of the proposed method being state-of-the-art in meta-RL, by providing some clarification on the architecture and experimental setup, and by providing additional experimentation that support some of the design decisions. I have accordingly increased the score. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors mention that they cannot guarantee that the proposed method will improve over every baseline nor over every environment. This does not feel like a careful look at model details and how they might affect adaptation to different task variations. One general criticism of many meta-RL methods is that they are evaluated on very narrow task distributions. Maybe the authors could focus on that. The recent meta-RL survey by Beck et al. [1] might give some ideas for discussion. ## References 1. Beck, Jacob, et al. "A survey of meta-reinforcement learning." arXiv preprint arXiv:2301.08028 (2023). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We especially appreciate your time, understanding that this paper is not in your field. We appreciate you noting that the ablation studies answered many questions, that the proposed method is stronger on 6/7 of the environments, and that our hyperparameter optimization affords equal computation to all baselines. Responses to your feedback are below. If we have addressed these concerns, please do raise our score, and if not, please let us know what remains unclear: * **1) “The implementation of the RNN and RNN+HN models could've been described in a bit more detail. It is for example not clear what the meta parameters $\theta$ correspond to in the RNN baseline, since line 108 says that $f$ and $\pi$ use distinct parameters from $\theta$. Is this sentence wrong and $\theta$ are the RNN's parameters? I could also not find all hyperparameters of the architectures, such as the type and layer sizes of the RNNs.”** We use a single-layer RNN of size 256, as in Beck et al., 2022. We have added this to the appendix. Additionally, we have clarified in the main body that “$f$ and $\pi$ each use distinct subsets of the meta-parameters, $\theta$.” We meant that the parameters are distinct from each other, not distinct from $\theta$. Thank you for pointing out this ambiguity. * **2) “For the strong claim that recurrent hypernetworks are SOTA in (all of) meta-RL, the presented experiments seem rather limited. What about experiments on RLBench or Meta-World? Also, there are many more categories of meta-RL methods to be considered than RNN-based and task-inference methods. The manuscript can benefit greatly from a clear justification of this strong claim.”** For environments, we do agree that more environments would make our claim stronger and have done so. For details, please see the global response above. For baselines on the remaining experiments, we chose to ablate many task-inference methods to show that simpler black-box methods could outperform them. 
We believe we have sufficiently covered this space. As for PPG methods, it is already known that such methods cannot solve the benchmarks we use, since they are meant to test rapid adaptation within 1 and 2 episodes, which is not sufficient for PPG methods (Beck et al., 2023) and also shown to be worse in (Zintgraf et al., 2020). Moreover, we were simply looking to show that simpler end-to-end methods can beat the contemporary task-inference methods, not that they all will. * **3) “I could not find an indication of how exactly the performance curves were generated, e.g. how many seeds per model”** The learning curves show meta-episode return, optimized over five learning rates and averaged over three seeds, with a 68% confidence interval using bootstrapping. This information is in section 2 of the Appendix, but we have also now added it to the main body. **Questions:** * **1) “References to figures such as "See Figures 3 and 4" in line 123, I'd put in parentheses directly in the preceding sentence or say something like "See Figure X for a diagram of Y".”** We have updated these references to match your suggestion. --- Rebuttal Comment 1.1: Comment: Thank you for your response and providing some clarification. I agree with reviewer YdBr that it is quite a big change to modify the claim, since the SOTA claim was prominently featured in many parts of the paper. Also, as brought up by reviewer YdBr and acknowledged by the authors in their response, my impression that the RNN+HN method is novel, was wrong (please correct me, if I misunderstood). I will increase the score slightly to a borderline reject, since the authors answered my questions and added a new supporting experiment. I will wait for further discussion with other reviewers to decide whether to adapt the score further. In the meantime, could the authors please indicate how hyperparameters for the new MineCraft experiment were chosen for each method? 
--- Reply to Comment 1.1.1: Title: Response to NnYJ Comment: Thank you for your feedback. We appreciate you increasing the score and mentioning that you are open to adjusting it further. A response to the concerns you raise here is below. If this does not answer your questions, please let us know and we can clarify further: * **1) "quite a big change to modify the claim"** Mostly what we just did was change *SOTA* to *surprisingly strong*. We would like to emphasize that this is NOT because we believe our method is not SOTA, but simply due to the difficulty of establishing SOTA, as noted by reviewers, when there is no agreed-upon standard benchmark, other than MuJoCo, in meta-RL (Beck et al., 2023). Moreover, we feel strongly that even if our results were weaker and only showed RNN+HN to match baselines, it would still be of significant interest to the field. Given that it has widely been assumed in meta-RL that complicated task-inference methods are necessary to be competitive, this paper is the first to present strong evidence in the other direction. Given that we evaluate on MuJoCo, and now MineCraft, we feel strongly that it is of great interest to the meta-RL field that a black box can outperform much more complicated contemporary task-inference methods, some of which consider themselves SOTA (Beck et al., 2022). * **2) "hyperparameters for the new MineCraft experiment"** The learning rate was retuned, as for all environments, over [3e-3, 1e-3, 3e-4, 1e-4, 3e-5]. Most other parameters were used from GridWorld, since it also has a discrete action space. Some parameters (e.g. the parameter in the VariBAD codebase called policy_num_steps) were taken from MuJoCo (generally Cheetah-Dir), while others were chosen reasonably but without tuning (e.g. 2 inner-episodes to form a meta-episode and an action embedding of 8). We expect to release all parameters specific to the MineCraft environment when we release our code.
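The 68% bootstrap confidence intervals used for the learning curves in these rebuttals can be reproduced with a standard percentile bootstrap over seeds. This is our own sketch, assuming per-seed return curves are stored as a seeds × timesteps array; it is not the authors' plotting code.

```python
import numpy as np

def bootstrap_ci(returns, n_boot=1000, ci=68, seed=0):
    """Percentile-bootstrap confidence interval of the mean learning curve.
    returns: (n_seeds, n_steps) array of per-seed episode returns."""
    rng = np.random.default_rng(seed)
    n = len(returns)
    # Resample seeds with replacement and recompute the mean curve each time.
    means = np.stack([
        returns[rng.integers(0, n, size=n)].mean(axis=0)
        for _ in range(n_boot)
    ])
    lo, hi = np.percentile(
        means, [(100 - ci) / 2, 100 - (100 - ci) / 2], axis=0)
    return returns.mean(axis=0), lo, hi

rng = np.random.default_rng(1)
# 3 seeds, 50 evaluation points, as in the paper's reported setup.
curves = rng.normal(loc=np.linspace(0, 1, 50), scale=0.1, size=(3, 50))
mean, lo, hi = bootstrap_ci(curves)
print(bool(np.all(lo <= hi)))  # True
```

The shaded region in each figure would then be the band between `lo` and `hi` around `mean`.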
Rebuttal 1: Rebuttal: In order to address reviewers requests, we have added additional experiments (environments and baselines), adjusted claims, and added details (in a table and text). Figures for new experiments are in an attached PDF, and specifics are below. **TLDR:** * We have removed claims to SOTA * New experiments in MineCraft corroborate our claims * New experiments on previous domains corroborate that RNN alone is a weak baseline * We have added environment details and method details and motivation in text and a table **Updated Title and Claim:** Some reviewers felt that more experiments were needed to claim SOTA. To address this feedback, we have added experiments and removed claims to SOTA. For instance, our new title is *Recurrent Hypernetworks are Surprisingly Strong in Meta-RL*, and we have clarified our contributions at the end of the introduction. Here, we emphasize that we conduct an empirical investigation showing the value of hypernetworks in maximizing the potential of recurrent networks. We clarify upfront that while the use of a hypernetwork with RNNs is not a novel idea, they have never been evaluated in meta-RL beyond a single environment, let alone shown to outperform contemporary task-inference methods (Beck et al., 2022). **Additional MineCraft Environment:** In addition to toning down our claims, we have added more experiments to demonstrate the strength of RNN+HN on more complex and challenging domains. Some reviewers have suggested Meta-World and Alchemy. Meta-World is more challenging; however, such benchmarks are generally too difficult to solve (Beck et al., 2023) and high variance, making them extremely difficult to get informative comparisons even after weeks of training (Beck et al., 2022). Similar issues exist for Alchemy (Beck et al., 2023). Instead, we add experiments with a MineCraft environment. 
Specifically, we supplement our results with experiments in the MC-LS environment from AMRL (Beck et al., 2020), since it can easily be adapted to meta-RL and was originally designed to test more challenging long-term memory from visual observations in MineCraft. In this environment, the agent must navigate through a series of 16 rooms. In each room, the agent must navigate left or right around a column, depending on whether the column is made of diamond or iron. Correct behavior receives a reward of 0.1. Finally, at the end, the agent must move right or left depending on a signal (red or green) that was shown before the first room. Correct behavior receives a reward of 4, while incorrect behavior receives a reward of -3. In our experiments, we allow the agent to adapt over two consecutive episodes, forming a single meta-episode, where each direction corresponds to a task. On MC-LS we compare RNN+HN to VI+HN, since VI+HN is an established contemporary task-inference baseline (Beck et al., 2022). We also add one more seed (four in total) and a linear learning rate decay due to high variance in the environment. In the MineCraft Figure in the PDF, we see that RNN+HN significantly outperforms VI+HN. While VI+HN learns to navigate through all rooms, it does not learn the correct long-term memory behavior. In contrast, RNN+HN is able to adapt reliably within two episodes, and one seed even learns to adapt reliably within a single episode. While further work is needed to learn the optimal policy, these experiments demonstrate that RNN+HN outperforms VI+HN, even on more challenging domains. **Additional RNN Baselines:** Some reviewers noted that the RNN baseline was not run on several MuJoCo experiments and that some results on Walker were not run as long. This was due to computational limitations. 
However, we have now added the RNN baseline to all MuJoCo experiments, and we have run all methods on the Walker experiment longer to match the duration of the other MuJoCo environments. Please see the PDF for figures. Results are consistent with previous claims: RNN is a weak baseline, RNN+S is a stronger baseline, and RNN+HN outperforms both. **Methods Table:** A table summarizing the components in all baseline methods has been added to the appendix:

| | Inference Target | Policy Conditions on State | Hypernetwork | Inference Pre-Training and Parameter Reuse |
|-|---|---|---|---|
| RNN | None | No | No | N/A |
| RNN+S | None | Yes | No | N/A |
| RNN+HN | None | Yes | Yes | N/A |
| TI Naive | Given | Yes | No | N/A |
| TI | Learned | Yes | No | No |
| TI++ | Learned | Yes | No | Yes |
| TI+HN | Learned | Yes | Yes | No |
| TI++HN | Learned | Yes | Yes | Yes |
| VI | Transitions | Yes | No | N/A |
| VI+HN | Transitions | Yes | Yes | N/A |
| BI++HN | Base Net | Yes | Yes | N/A |

**MuJoCo Details:** One reviewer noted that details on the MuJoCo environments should be included in order to make the paper more self-contained. We have added these details to the appendix. We share the details here for two environments (Cheetah-Dir and Walker) as examples. We evaluate on all four MuJoCo environments used by Zintgraf et al., 2020. All environments require legged locomotion. In Cheetah-Dir, the agent controls a robotic cheetah morphology by outputting a control torque for each of six joints on the morphology. The task is to run forward or backward with as large a velocity as possible. The agent observes the position, angle, and velocity of each body part (17 dimensions in total) and is given a positive reward for its velocity in the direction given by the task and a negative reward for control costs (specifically, 5% of the magnitude of the action vector). In Walker, the agent controls a two-legged torso morphology and outputs a control torque for each of six joints on the morphology. The observations are 17-dimensional for this environment. 
The tasks are defined as uniform samples of 65 different physics coefficients (e.g. body mass and friction) for the simulation. The agent is given a positive reward for its forward velocity, a positive reward of one per timestep, and a negative reward for control costs (specifically, 0.1% of the magnitude of the action vector). Pdf: /pdf/bb907d7079a5467a252706817920fbde03327b03.pdf
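The Cheetah-Dir reward described above can be sketched in a few lines. This is an illustrative reconstruction from the rebuttal's prose, not the benchmark's source: we assume "magnitude of the action vector" means the L2 norm, and we encode the task direction as ±1.

```python
import numpy as np

def cheetah_dir_reward(forward_velocity, action, task_direction):
    """Reward = velocity in the task direction minus a 5% control cost.

    task_direction: +1 to run forward, -1 to run backward (assumption).
    """
    control_cost = 0.05 * float(np.linalg.norm(action))
    return task_direction * forward_velocity - control_cost
```

Per the Walker description above, the analogous per-timestep reward would add a constant bonus of one and use a 0.1% control cost; both forms are reconstructions from the prose, and the exact cost term in the benchmark may differ.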
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper is an empirical investigation that tests the performance of different variants of three methods for meta-RL: RNN, which provides task information implicitly from previous trajectories on the same MDP; TI, which trains a task representation in a VAE manner; and VI, which is the VariBAD baseline [1]. On the MuJoCo environments, the paper finds that an RNN whose output feeds a hypernetwork, which in turn outputs the parameters of the policy, works best, and thus re-establishes RNN+hypernetwork [2] as the state-of-the-art method for meta-RL. **References:** [1] Luisa Zintgraf, Kyriacos Shiarlis, Maximilian Igl, Sebastian Schulze, Yarin Gal, Katja Hofmann, and Shimon Whiteson. VariBAD: A very good method for bayes-adaptive deep RL via meta-learning. In International Conference on Learning Representations (ICLR), 2020. [2] Jacob Beck, Matthew Jackson, Risto Vuorio, and Shimon Whiteson. Hypernetworks in meta-reinforcement learning. In CoRL, 2022. Strengths: 1. Many variants are tested. There are 12 variants tested in the paper, which gives a thorough analysis of the effect of using a hypernetwork for each method; in addition, the effect of "conditioning on the state twice" is also considered, which makes the conclusion that the hypernetwork is crucial more rigorous. 2. The figures in the paper do a good job of conveying the architecture of each method. Weaknesses: **1. The experiment results are not convincing enough.** a) The method proposed (line 184) in the paper, RNN+hypernetwork, was already proposed in prior work [1]. More specifically, RNN alone is equivalent to RL2 [3] as line 106 suggests, and RL2+hypernetwork was already tested in [1], where it is inferior to VariBAD [2]+hypernetwork. Thus, the key contribution of this paper is not to propose a new method, but to overturn the previous conclusion that VariBAD+hypernetwork is better. 
However, this paper only tests gridworld and MuJoCo environments, and in both environments RNN+HN is only marginally better than VI+HN (and in [1] there are also multiple methods that work similarly well, indicating that MuJoCo alone is not a strong benchmark). Most importantly, the Meta-World (ML1 and ML10) environment, which is the decisive evidence that VariBAD+hypernetwork is better than RL2+hypernetwork, is missing from this paper. With such results missing, it is hard for readers to be convinced that the conclusion is the other way around from that in [1]. b) The baselines, though with many variants, do not cover enough ground to support a claim of state-of-the-art. Reviewers of [1] have already pointed out that the state-of-the-art status of VariBAD [2], even before [1], is questionable. Also, while it is true that meta-RL methods can be briefly summarized as policy gradient, implicit task representation, and task inference, as the paper suggests (lines 61-63), in Table 2 of [4] there are many branches within each of these directions. For example, what if a transformer instead of an RNN is used [5]? Are policy gradient methods necessarily worse than the other two branches? For a state-of-the-art claim, those methods also need to be considered. **2. The delivery of the paper's idea can be improved.** a) There is no motivation stated for why a hypernetwork is used. The paper only states that "we present the key insight that the use of a hypernetwork architecture is crucial ..." (lines 36-37) and "hypernetwork has never been widely evaluated in meta-RL" (lines 81-82). However, there is no intuitive explanation of why a hypernetwork is useful for meta-RL and should be considered in the first place. While there is an explanation in [1] about preventing degeneration in multi-tasking, the paper should be self-contained and inform readers why the hypernetwork, the most important component in the finally proposed RNN+hypernetwork method, is considered for meta-RL. 
b) The method section can be modified to convey the idea more clearly. Currently, it is hard to identify which method is proposed (line 184) by the paper, and it is unclear to first-time readers why multiple methods are listed in the method section (most of which are not proposed by this paper), instead of the usual paradigm where each key component is listed in a subsection. Since the methods are not novelly proposed by this paper, they could be moved into the experiment setup section (with more emphasis that this is an empirical study rather than a conventional paper proposing a new method, e.g., by adding a contribution summary at the end of the introduction); or, a summary could be added at the front of the method section, telling readers that all the methods listed below are tested in the experiment section, with each subsection discussing one type of method tested. A table summary would also be very helpful. c) There are no details of the environment settings except for gridworld. What is the Cheetah-Dir environment? What are the definitions of state, action, and reward, and how do the MDPs in meta-learning differ? What reward can be considered expert-level performance? While these can be found in prior work, the paper should be self-contained, and the settings should be included in the appendix to help the reader form a better intuition about the environments. **3. Other minor problems:** a) Some experiment results are missing, for example, RNN in Fig. 8b, 8c, 9b, 9c and 9d; b) Fig. 10 is never referred to in the paper; a reference to Fig. 10 should be added in the "latent gradients" paragraph. c) The x-axis of the Walker environment is not aligned with the other MuJoCo environments. d) The name VariBAD should be mentioned for VI and VI+HN to more clearly show the connection of the methods tested to existing baselines. **References:** [1] J. Beck et al. Hypernetworks in meta-reinforcement learning. In CoRL, 2022. [2] L. Zintgraf et al. 
VariBAD: A very good method for bayes-adaptive deep RL via meta-learning. In ICLR, 2020. [3] Y. Duan et al. $RL^2$: Fast reinforcement learning via slow reinforcement learning. In arXiv, 2016. [4] J. Beck et al. A survey of meta-reinforcement learning. In arXiv, 2023. [5] L. Melo. Transformers are Meta-Reinforcement Learners. In ICML, 2022. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Besides those listed in the weakness section, I have one question: why is the performance of RNN+HN different between Fig. 9a and Fig. 10a? Below are my suggestions for this paper: 1. Modify the introduction and method section of the paper so that it is clearer that this is an empirical study rather than a proposal of a novel method, and add a contribution summary at the end of the introduction. Add a table that summarizes the methods tested in the paper. 2. Add environment details of the MuJoCo environments in the appendix. 3. Compare RNN+HN to more methods summarized in [1]. 4. Compare RNN+HN with VI+HN (and other baselines) on Meta-World and other more challenging environments. 5. Fix the other minor issues mentioned in the weakness section. 6. I strongly advise the authors to open source their code upon acceptance / next submission. The authors could also consider adding a license statement in the appendix. **References:** [1] J. Beck et al. A survey of meta-reinforcement learning. In arXiv, 2023. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: There is an independent limitation section in the paper, which I think generally discusses an important limitation of the work. Though, as suggested in the weakness section, I think the limitation is not currently sufficiently mitigated. 
There is no discussion of negative societal impact in the work, which I would encourage the authors to add; though the work is still far from application, automated control itself brings potential challenges, such as job loss, to society. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We appreciate you noting the many benchmarks that we compare against and the rigorous conclusion that the hypernetwork is critical. We address your concerns below. Specifically, we add experiments (by running RNN on more domains and adding an additional domain) and add details (motivation for the hypernetwork and method details), as requested. Could you please raise the score correspondingly, or let us know if any topic remains unclear? * **1) Experimental Results** * **a) Meta-World and Degree of Improvement** We do agree that more environments would make our claim stronger and have run more experiments. For this evidence, please see the global response above. Additionally, we feel strongly that our existing results show that RNN+HN is not just marginally better than HN+VI, but much better (and much simpler). For example, see the learning curves (blue vs orange) on Walker, Cheetah, and all of the gridworlds. Finally, there is no decisive evidence in the other direction from [1]: the results are not statistically significant in their table, and if you look at the appendix, the learning curves are highly unstable and difficult to draw conclusions from. This lack of evidence and baselines is the main reason for our paper. * **b) Baselines** We have now adjusted all of our claims throughout the paper to assert that recurrent hypernetworks are surprisingly strong instead of SOTA. For our baselines, we chose to ablate a large number of task-inference methods to show that simpler black-box methods could outperform them. We believe we have sufficiently covered this space. As for PPG methods, it is already known that such methods cannot solve the benchmarks we use, since the benchmarks are meant to test rapid adaptation within 1 and 2 episodes, which is not sufficient for PPG methods [1]; they are also shown to be worse in [2]. 
We do intend to explore transformers with hypernetworks in future work, but which sequence model the hypernetwork conditions on is an orthogonal problem. Moreover, we were simply looking to show that simpler end-to-end methods can surprisingly beat contemporary task-inference methods, not that they all will. * **2) Delivery** * **a) Hypernetwork Intuition** We agree that we can make the paper more self-contained by adding motivation for hypernetworks as in [1] and have now done so. In addition to preventing interference between different tasks and the scaling trends suggested by Beck et al., 2022, we hypothesize that the low norm afforded by our models, i.e., low sensitivity to the latent variable, and re-conditioning on state are crucial for stable training. We have added text to expand on the existing motivation for the use of hypernetworks in related work, and text on our analysis in the introduction, to make this paper more self-contained. * **b) Empirical Study, Contribution Summary in Introduction, Methods Table** We agree that this could be confusing and have added clarification to the introduction and methods to emphasize that this is an empirical study and RNN+HN is not a novel method, though it is the one we find gives the strongest results, and we have added a table comparing method components to the appendix. Please see the global response above. * **c) MuJoCo Details in Appendix** We have added details on the MuJoCo environments to the appendix as requested. See the global response above for details. * **3) Minor problems** * **a) Missing Results** We ran these experiments, and the results show that the RNN in fact always performs the same as or worse than RNN+S and RNN+HN, as expected. Plots can be seen in the global response above. * **b) Fig. 10 Reference** The reference has been added. * **c) X-axis of Walker** We have run the limiting Walker experiment for longer and updated the plot, with no significant changes to the results. 
Plots can be seen in the global response. * **d) Calling VariBAD by Name** We have also now added the name VariBAD, as requested. **Questions:** * **0) Performance Difference Between Figures** The difference in curves should be due to different seeds for each plot. * **1) Empirical Study, Contribution Summary in Introduction, Methods Table** We have added clarification to the introduction and methods to emphasize that this is an empirical study and not a novel method. We have likewise updated the contribution paragraph at the end of the introduction and created a table (see the global response above). * **2) MuJoCo Details in Appendix** We have added details on the MuJoCo environments to the appendix. The details can also be seen in the global response above. * **3) Methods from [1]** We do compare to the baselines in [1]. In [1], the authors only compare to VariBAD [2] and RL2 [3]. We added the many baseline variants that you mentioned precisely in order to address this limitation in [1]. Note that [1] does evaluate VariBAD with many different initializations for the hypernetwork and FiLM, but [1] shows them all to be inferior, which we have also found to be true in our experience. (For other types of methods, see 1b above.) * **4) Challenging Environments** The comparison on Meta-World was already made by [1], and the results are not statistically significant in the table. If you look at the appendix of [1], the learning curves are highly unstable and difficult to draw conclusions from. For this reason, we chose to supplement our experiments with MineCraft from AMRL (Beck et al., 2020) instead. See the global response for details. * **5) Minor Issues** (Addressed above.) * **6) Code** We plan to release code. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: Thanks for your detailed response, and I appreciate the efforts made by the authors to address my concerns. I think concerns such as delivery are well-addressed. 
Here are my follow-up responses: 1. While I agree that the supplementary MineCraft environment is stronger evidence than that shown in the original paper that RNN+HN is better than HN+VI, for the claim of "much better" in RL, I would expect at least one scenario where RNN+HN succeeds but HN+VI only reaches medium-level performance (e.g. 50-70% reward) or even completely fails. The performance differences on gridworld and MuJoCo seem to be only an issue of convergence speed or a ~10% performance difference. 2. Regarding the result for ML1/ML10 in [1]: a) I am referring to Table 2 of [1] for results (on the 7th page of the arXiv version), where on the ML10 task, the performance comparison between VariBAD+HN and RL2+HN is as follows:

| | VariBAD | RL2 |
|---|---|---|
| HFI | **$28.4\pm 6.0$** | $7.1\pm 2.4$ |
| Bias-HyperInit | **$23.9\pm 6.2$** | $14.2\pm 7.2$ |

b) In line 106, the authors write "**RNN** ... is equivalent to RL2"; in line 112, the authors write "we follow the initialization method for hypernetworks, Bias-HyperInit, suggested by Beck et al.". Thus, I assume that RNN+HN, the best method found in this paper, is equivalent to RL2+Bias-HyperInit, which has a performance of $14.2\pm 7.2$ as opposed to $23.9\pm 6.2$ with VariBAD. When you plot them on a figure (the bottom-right figure of Figure 1 in the appendix, on the 13th page of the arXiv version), even with large variance, the upper edge of RNN+HN can barely touch the lower edge of VariBAD+HN, and the reward is almost doubled with VariBAD. Thus, I think this can be called significant despite the large variance. 3. Another thing that keeps me from giving a higher score is that the change of claim is a major revision to the paper's contribution and organization. To sum up, I tend to keep my scores for now and see how the discussion goes (both ours and the discussion between the authors and the other reviewers). **References:** [1] J. Beck et al. Hypernetworks in meta-reinforcement learning. In CoRL, 2022. 
--- Reply to Comment 1.1.1: Title: Response to YdBr Comment: Thank you for the detailed response. We believe there was still some misunderstanding, which we have resolved below. Please let us know if this addresses your concerns, and if not, what else we can clarify: * 1 **"I would expect at least one scenario where RNN+HN succeeds but HN+VI only reaches medium-level performance (e.g. 50-70% reward) or even completely fails."** In terms of area under the curve, our results do show significantly better performance. In terms of the baselines failing, our new MineCraft results do actually show this. Note that the scale of the rewards can be misleading. For example, while the performance difference on MineCraft is only about 25%, RNN+HN can solve the task while HN+VI cannot. The reason the reward difference is not larger is that the (easy) dense reward for maze navigation, which is unrelated to the test of adaptation using memory, happens to represent the bulk of the reward obtained. HN+VI does truly fail the task here. Moreover, we feel strongly that even if our results were weaker and only showed RNN+HN to match baselines, it would still be of significant interest to the field. Given that it has widely been assumed in meta-RL that complicated task-inference methods are necessary to be competitive, this paper is the first to present strong evidence in the other direction. * 2 **"I am referring to the Table 2 of [1] ... I think this can be called significant despite the large variance."** In [1], the authors actually did run "two-tailed t-tests with p = 0.05 to determine significance" and put in bold results significantly different from the rest. No results on ML10 were significantly different. This contributed to our hesitancy to rely on ML10. * 3 **"major revision to the paper's contribution and organization."** Mostly what we did was change *SOTA* to *surprisingly strong*. 
We would like to emphasize that this is NOT because we believe our method is not SOTA, but simply due to the difficulty of establishing SOTA when there is no agreed-upon standard benchmark, other than MuJoCo, in meta-RL (Beck et al., 2023). Given that we evaluate on MuJoCo, and now MineCraft, we feel strongly that it is of great interest to the meta-RL field that a black box can outperform (or even match) much more complicated contemporary task-inference methods, some of which consider themselves SOTA (Beck et al., 2022).
null
null
null
null
null
null
Reducing Blackwell and Average Optimality to Discounted MDPs via the Blackwell Discount Factor
Accept (poster)
Summary: The paper introduces the Blackwell discount factor for MDPs. The authors show that when the discount factor is larger than the Blackwell discount factor $\gamma_{bw}$, all discount-optimal policies become Blackwell- and average-optimal, and they derive a general upper bound on $\gamma_{bw}$. The authors claim to provide the first reduction from average and Blackwell optimality to discounted optimality via polynomial-time algorithms. I am not very familiar with the literature on the Blackwell optimality criterion and thus might not fully understand the significance of the results. I would be happy if the authors would highlight the significance of the results in comparison to the known results for the discounted and average value criteria. Strengths: 1) The assumptions and contributions are clearly stated. 2) It seems that the presented results improve the known results in the study of the Blackwell optimality criterion, remove commonly used assumptions, and provide efficient algorithms. 3) The application of algebraic techniques to RL might be useful in other settings. Weaknesses: 1) No concrete application or motivation for the work is given. 2) It is unclear to me whether the new definition of Blackwell optimality via the Blackwell discount factor proposed in the paper is better / more usable than the discounted value criterion in standard RL applications. 3) The authors claim throughout the paper that previous literature repeatedly mentioned the discussed problem as important, but do not clearly explain why. 4) The focus of the paper is mainly on previous works (and their limitations), instead of the new approach (they get to it only on page 6 out of 9!). More elaborated proof sketches would benefit the reader. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) Could you please provide concrete motivation for the use of the Blackwell optimality criterion and/or a concrete application? 
2) Ergodicity and aperiodicity properties are standard assumptions in RL, even in simple planning tasks. It is unclear to me how you can remove them and why that is important. In most applications the assumptions are reasonable. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time reviewing the paper and for your constructive comments. We answer your questions and remarks below. *Motivation and applications of Blackwell optimality.* The motivation for Blackwell optimality mostly comes from some of the shortcomings of the two other return criteria. * The discounted return criterion enjoys nice tractability properties: it can be solved efficiently *without any structural assumptions on the MDP*. However, this requires choosing a value for the discount factor $\gamma$, which may be artificial in some applications. Additionally, the optimal policies may change with the discount factor. These issues are well-documented in the RL/MDP community [1]. * For this reason, *average optimality* is getting some attention in the RL literature [2], especially since many RL applications require extremely long horizons, e.g. in game-solving (the game may end after thousands of decisions) or in healthcare (the health condition of the patient in the far horizon remains an important concern). However, average optimality suffers from its own shortcomings: (a) the vast majority of the literature requires some structural assumptions (unichain, ergodicity, weak-communication, etc.), mostly motivated from a technical standpoint; (b) the case of adversarial/robust MDPs is still not very well understood for average optimality; (c) most importantly, average optimality discards *any* rewards obtained in a finite horizon. So it does not distinguish between a policy $\pi_{1}$ achieving rewards $0 + 0 + 0 + 0 + ... + 1 + 1 + ...$, where the $0$'s are repeated $1000$ times before switching to $1$ forever, and a policy $\pi_{2}$ achieving rewards $1 + 1 + ...$, which always obtains $1$. Clearly, $\pi_{2}$ is a better policy, but it attains the same average reward as $\pi_{1}$. * This very shortcoming of average optimality is addressed by Blackwell optimality. 
Indeed, since a Blackwell optimal policy is optimal for all discount factors close to $1$, it also takes into account the rewards obtained in finite time. Additionally, there is no need to choose a specific value of the discount factor, in contrast to discount optimality. In fact, one can show that Blackwell optimal policies are average optimal but also optimize the *bias* (i.e., the rewards obtained in the transient regime where the Markov chain converges to its stationary distribution), and higher-order terms, as explained in Section 10.2 of [3]. Still, although six decades have passed since Blackwell optimality was introduced in [4], there are still no efficient algorithms to compute a Blackwell optimal policy. As detailed in our paper, all existing algorithms are so complicated that we are not aware of any practical implementations of these ideas. We view our work as a first step toward a tractable approach to Blackwell optimality by providing an algorithm (solving for a discount-optimal policy with $\gamma > \gamma_{\sf bw}$) that is conceptually much simpler than previous methods. Our Theorem 4.7 shows that our algorithm is weakly polynomial, matching the most widely accepted class of *efficient* algorithms. We would also like to emphasize that we are the first to show the shortcomings of the existing definition and to clarify some misunderstandings (see page 5, Proposition 3.4). We will use the additional page available for the final version to clarify this and highlight the positioning of our work. [1] Y. Tang, M. Rowland, R. Munos, and M. Valko. Taylor expansion of discount factors. ICML 2021. [2] V. Dewanto, G. Dunn, A. Eshragh, M. Gallagher, and F. Roosta. Average-reward model-free reinforcement learning: a systematic review and literature mapping. [3] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, 2014. [4] D. Blackwell. Discrete dynamic programming. 
The Annals of Mathematical Statistics, pages 719–726, 1962. *Common structural assumptions such as ergodicity/weak-communication.* Our goal is to study Blackwell optimality (and consequently average optimality) in all generality. In the course of this rebuttal, we have noticed that our Example 3.5 (shortcomings of the existing definitions of Blackwell optimality) and Proposition 4.3 (the need for "coarseness" conditions to bound $\gamma_{\sf bw}$) can be extended to the case of weakly-communicating and unichain MDPs. Therefore, the issues identified in this paper still remain under some of the most common structural assumptions on the MDP instance. We refer to the PDF attached to our response common to all reviewers for the new MDP instances and to our response to Reviewer mmfD for the detailed computations. ### Conclusion We thank you for your time reviewing the paper and your detailed questions, which will lead to an improved manuscript. We are looking forward to hearing back from you, and we hope that our responses convince you of the completeness and significance of our paper and lead to an increased score. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and have no further questions.
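The rebuttal's $\pi_{1}$/$\pi_{2}$ example above lends itself to a quick numerical check. This is a hedged sketch: the truncation horizon `T` and the discount factor `0.99` are illustrative choices, not values from the paper.

```python
import itertools

def average_reward(stream, T):
    """Empirical average reward over the first T steps."""
    return sum(itertools.islice(stream, T)) / T

def discounted_return(stream, gamma, T):
    """Truncated discounted return: sum over t < T of gamma^t * r_t."""
    return sum(r * gamma**t
               for t, r in zip(itertools.count(), itertools.islice(stream, T)))

def pi1():
    # reward 0 for the first 1000 steps, then 1 forever
    return (0 if t < 1000 else 1 for t in itertools.count())

def pi2():
    # reward 1 at every step
    return itertools.repeat(1)

# Both policies have (asymptotically) the same average reward of 1 ...
print(average_reward(pi1(), 100_000), average_reward(pi2(), 100_000))  # → 0.99 1.0
# ... but any fixed discount factor clearly separates them (≈ 0.0043 vs ≈ 100).
print(discounted_return(pi1(), 0.99, 100_000),
      discounted_return(pi2(), 0.99, 100_000))
```

This mirrors the rebuttal's point: average optimality cannot distinguish $\pi_1$ from $\pi_2$, whereas discounted (and hence Blackwell) optimality prefers $\pi_2$.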
Summary: This paper studies how a Blackwell optimal policy can be obtained from a discount-optimal policy by introducing the Blackwell discount factor $\gamma_{bw}$. The authors further show that this discount factor can be upper bounded in terms of the bit-size of an MDP instance. Strengths: 1. The paper is well written and organized. The insights behind the main results are well presented. 2. The introduction of the Blackwell discount factor is interesting; it may lead to efficient algorithms in future studies. 3. An upper bound for $\gamma_{bw}$ is provided. Weaknesses: 1. Providing an upper bound on $\gamma_{bw}$ is not enough; we still do not have an efficient algorithm for finding a Blackwell optimal policy. Also, I am afraid that calculating it requires sufficient information about the MDP structure. 2. Proposition 4.3 is only for deterministic transitions with $S=2, A=2$. 3. The $\eta$ in the theorem is quite close to zero, which means the upper bound for $\gamma_{bw}$ is almost 1. 4. No simulation results are provided; it would be better to show the advantage of the proposed approach with even some simple simulations. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. I don't agree with the authors' criticism of the existing definition of Blackwell optimality. The definition looks good to me; we just don't have an efficient algorithm yet. What if the MDP is unichain or ergodic (different from your Example 3.5): does Proposition 3.4 still hold? As far as I know, the weakly-communicating assumption or the ergodicity assumption is in fact known to be necessary for learning infinite-horizon MDPs with low regret. 2. What is $m$ in your Theorem 4.4? 3. Proposition 4.3: $\eta > 0$ → $1 > \eta > 0$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: More discussion of the limitations of the approach should be included. It would also be better to compare the two discount factors under additional assumptions on the MDP. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review. We first answer the weaknesses. 1. Note that our bound on $\gamma_{\sf bw}$ leads to a *polynomial-time* algorithm; we refer to Theorem 4.7 for the complexity statement. Polynomial-time complexity is commonly understood as *tractability*. Our bound requires the knowledge of the number of states and actions, and the *maximum bit-size* of the MDP. This bit-size is simply determined by (a) an upper bound on the rewards and (b) the distance of the positive transition probabilities from 0. The term (a) can be estimated since we only need an upper bound. The term (b) can be estimated when we build the transition probabilities as the empirical frequencies of the observed transitions from historical data, i.e., as $P_{sas'} = n_{sas'}/n_{sa}$ with $n_{sas'}$ the number of observations of the transition from $s,a$ to the next state $s'$ and $n_{sa}$ the number of observations of the state-action pair $s,a$. 2. We stated Proposition 4.3 with the simplest possible MDP instance. We can add fictitious states/actions and non-deterministic transitions to this instance to show that the same conclusion holds with more states/actions/more complicated transitions. We can also extend Proposition 4.3 to weakly-communicating MDPs. For this, we add a deterministic transition from $s_2$ to $s_1$, with a reward of $0$ for $a_{1}$ and a reward of $\epsilon$ for $a_{2}$ (see Figure 2 in the PDF attached to our response common to all reviewers). This MDP is weakly-communicating: $\{s_{1},s_{2}\}$ is strongly connected under $a_{2}$. We have $v_{\gamma}^{a_{1}} = 0, v_{\gamma}^{a_{2}} = (-1+\epsilon \gamma)/(1-\gamma)$. Thus $a_2$ is Blackwell optimal for $\gamma \geq 1/\epsilon$. Choosing $\epsilon > 1$ and letting $\epsilon \rightarrow 1$, we have $\gamma_{\sf bw} \rightarrow 1$. This extends Proposition 4.3 to weakly-communicating MDPs. We thank you for raising this interesting point; we will add this new result. 3. *The $\eta$ in the theorem is close to zero.* Assuming that you are referring to Theorem 4.4, we agree that the bound on $\gamma_{\sf bw}$ is close to 1, which complicates the practical use of this result. However, we would like to clarify that $1-\gamma_{\sf bw}$ is sufficiently large to have a polynomial-sized representation. This fact is used to construct a polynomial-time algorithm in Theorem 4.7. 4. *Experimental validation.* We see our primary contribution as a better understanding of Blackwell optimality in nominal/robust MDPs. We are the first to highlight the notion of the Blackwell discount factor as a more refined tool to obtain Blackwell optimal and average optimal policies, whereas many papers before us were incorrect about when Blackwell optimal policies are optimal or not (see the discussion at lines 282-296). We are also the first to show that we can use tools from polynomial analysis to study Blackwell optimality. Crucially, our paper is the first to provide a very simple algorithm to compute Blackwell/average optimal policies (through discounted MDPs), even though this notion has been around since the 1960s (see the last paragraph before Section 3.2). We leave obtaining a more refined upper bound as future work. We conclude by noting that most of the recent works in this area do not focus directly on the implementation of the proposed algorithms [1,2,3,4,5]. Our work is a first step toward advancing the methodology and the understanding of an understudied optimality criterion (Blackwell optimality). [1] Wang, Wang, and Yang. Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP. [2] Jin and Sidford. Efficiently solving MDPs with stochastic mirror descent. ICML 2020. [4] Wang. Primal-Dual $\pi$ Learning: Sample Complexity and Sublinear Run Time for Ergodic Markov Decision Problems. 2017. [5] Chen and Wang. Stochastic primal-dual methods and sample complexity of reinforcement learning. 2016. ### Questions.
*Q1.* While the existing definition has been around for more than 60 years, there exists no public implementation of any algorithm to compute a Blackwell optimal policy; the reason for this is that algorithms for this task are all quite involved and impractical, see line 209. Therefore, while the definition of Blackwell optimality is important, it does not lead to efficient algorithms. This is why we see our introduction of the Blackwell discount factor as an important step toward a tractable approach to Blackwell optimality, which has been missing for more than six decades. *Extending Proposition 3.4 to unichain/ergodic MDPs:* We can extend Example 3.5 to a unichain MDP as follows: we add a transition from state $7$ to state $0$, with a reward of $0$. We also add three intermediate states from $0$ to $7$ for action $a_{1}$, so that it takes the same number of periods to reach state $7$ from state $0$ for the three actions $a_{1},a_{2},a_{3}$. This new MDP is unichain; see Figure 1 in the PDF attached to our response common to all reviewers. We have $v_{\gamma}^{a_{1}} = 1/(1-\gamma^5), v_{\gamma}^{a_{2}} = (r_{1}\gamma + r_{2}\gamma^2)/(1-\gamma^5), v_{\gamma}^{a_{3}} = (r_{4}\gamma + r_{5}\gamma^2)/(1-\gamma^5)$, which are the same expressions as for Example 3.5, up to the common denominator $(1-\gamma^5)^{-1}$. Therefore, we have proved Proposition 3.4 for unichain MDPs. We thank you for raising this interesting point; we will mention this new result in our final version. 2. The integer $m$ is the maximum bit-size of the MDP instance, defined at line 319. Intuitively, it is the $\log_2$ of the denominator in the fractional representation of the rewards $r_{sa}$ and the transitions $P_{sas'}$. More details on this are given in Appendix B. For instance, for Riverswim we have $m=14$ since the maximum reward is $10,000$. 3. We will clarify that $\eta \in (0,1)$, thanks. ### Conclusion. We thank you for your detailed questions, which will lead to an improved manuscript.
We hope that our responses convince you of the completeness and significance of our paper and lead to an increased score. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. I would like to increase the score. --- Reply to Comment 1.1.1: Comment: We thank you for reading our response and increasing your score.
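The empirical estimator $P_{sas'} = n_{sas'}/n_{sa}$ mentioned in the rebuttal above is easy to illustrate. A minimal sketch (the function name and the toy data are our own illustration, not from the paper):

```python
from collections import Counter

def estimate_transitions(trajectory):
    """Estimate P[s, a, s'] = n_sas' / n_sa from observed (s, a, s') triples."""
    n_sas = Counter()  # n_sas[(s, a, s')]: count of observed transitions
    n_sa = Counter()   # n_sa[(s, a)]: count of visits to the state-action pair
    for s, a, s_next in trajectory:
        n_sas[(s, a, s_next)] += 1
        n_sa[(s, a)] += 1
    # key[:2] is the (s, a) pair of the (s, a, s') key
    return {key: n / n_sa[key[:2]] for key, n in n_sas.items()}

# Toy data: from (0, 'a') we observed next state 0 three times and next state 1 once.
data = [(0, 'a', 0), (0, 'a', 0), (0, 'a', 0), (0, 'a', 1)]
P_hat = estimate_transitions(data)  # {(0, 'a', 0): 0.75, (0, 'a', 1): 0.25}
```

The smallest positive estimate then yields the term (b) of the rebuttal: the distance of the positive transition probabilities from 0.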
Summary: The paper studies how to reduce the computation of Blackwell-optimal policies to that of discount-optimal policies. The authors introduce the notion of "Blackwell discount factor", a value $\gamma_{bw} \in [0,1)$ s.t. any discount-optimal policy for $\gamma > \gamma_{bw}$ is also Blackwell optimal. They show that $\gamma_{bw}$ always exists for finite MDPs and upper bound it as a function of the size of the MDP. Finally, they extend these results to robust MDPs. Strengths: 1. The paper studies a relevant problem, that of computing Blackwell optimal policies, which hasn't received much research attention so far (eg, as compared to other popular optimality criteria) 2. The contribution is significant and, to my knowledge, novel. Being able to reduce the computation of Blackwell optimal policies to discounted MDPs is absolutely relevant given the theoretical understanding and efficient algorithms that we currently have for the latter setting 3. The paper is extremely well written. Despite the complexity of the studied topic (which is also not very well known in the RL community), the authors did a very nice job in providing the necessary basics and intuitively discussing all formal results, while providing comprehensive proof sketches. Related works are also discussed with sufficient detail. Weaknesses: 1. While it is nice to have an upper bound on $\gamma_{bw}$ that depends only on quantities known by the learner, the quantity $\eta(M)$ is exponentially small in the size of the MDP (eg in the number of states). This essentially means that, while we can solve a discounted MDP with $\gamma = 1-\eta(M)$ to compute a Blackwell optimal policy, we need to use a discount factor which is extremely close to 1 even for MDPs of moderate size. In practice, such a large value of $\gamma$ may simply make the learning process extremely inefficient. It is thus natural to wonder how conservative the proposed bound is. 
For instance, do the authors believe that there exist MDPs with a "simple structure" in which $\gamma_{bw}$ is way smaller than the stated bound? 2. There is no numerical simulation. It would have been nice to see some experiments comparing (eg in terms of run-time) existing algorithms for computing Blackwell optimal policies to a method for solving discount MDPs with the (upper bound on the) Blackwell discount factor Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Limitations have been discussed. No potential negative societal impact Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed comments. We answer your questions below. *Q1 - conservativeness of our bound on $\gamma_{\sf bw}$.* You are correct that our bound on $\gamma_{\sf bw}$ may be loose. In particular, it relies on Theorem 4.6 (separation between the different roots of a polynomial), which may not be tight for the specific polynomials related to the value functions of the MDP instance. Even though the current approach results in a weakly-polynomial-time algorithm, improving this bound is certainly an important future direction. We also note that Proposition 4.3 shows that $\gamma_{\sf bw}$ may be very close to $1$ even in some extremely simple MDP instances (two actions, one absorbing state, one non-absorbing state, deterministic transitions). Also, we can extend Proposition 3.4 (which shows the shortcoming of the existing definition of Blackwell optimality) to the case of unichain MDPs. This can be done as follows. *Extending Proposition 3.4/Example 3.5 to unichain/ergodic MDPs:* We can extend Example 3.5 to a unichain MDP as follows: we add a transition from state $7$ to state $0$, with a reward of $0$. We also add three intermediate states from $0$ to $7$ for action $a_{1}$, so that it takes the same number of periods to reach state $7$ from state $0$ for the three actions $a_{1},a_{2},a_{3}$. Note that this new MDP is unichain. We refer to Figure 1 in the PDF attached to our response common to all reviewers. Additionally, for this new MDP instance, we have $v_{\gamma}^{a_{1}} = 1/(1-\gamma^5), v_{\gamma}^{a_{2}} = (r_{1}\gamma + r_{2}\gamma^2)/(1-\gamma^5), v_{\gamma}^{a_{3}} = (r_{4}\gamma + r_{5}\gamma^2)/(1-\gamma^5)$, which are exactly the same expressions as for Example 3.5, up to the common denominator $(1-\gamma^5)^{-1}$. Therefore, we have proved that the conclusion of Proposition 3.4 holds for unichain MDPs.
We thank you for raising this interesting point; we will use the additional page available for our final version to mention this new result. We are also able to extend Proposition 4.3 to weakly-communicating MDPs. We refer to our response to Reviewer mmfD for the exact computations and to Figure 2 in the PDF attached to our response common to all reviewers. We will add these new results in our revised manuscript. *Q2. Experimental validation.* We see our primary contribution as a better theoretical understanding of Blackwell optimality in nominal MDPs and robust MDPs. We are the first to highlight the notion of the Blackwell discount factor as a more refined tool to obtain Blackwell optimal and average optimal policies, whereas many papers before us were incorrect about when Blackwell optimal policies are optimal or not (see the discussion at lines 282-296). We are also the first to show that we can use tools from polynomial analysis to study Blackwell optimality. Finally, our paper is the first to provide a very simple algorithm to compute Blackwell/average optimal policies (through discounted MDPs), even though the notion of Blackwell optimality has been around since the 1960s (see the last paragraph before Section 3.2). We leave obtaining a more refined and implementable upper bound as future work. We conclude this remark by noting that most of the recent and notable works in this area do not focus directly on the implementation of the proposed algorithms, e.g., [1,2,3,4,5]. We see our work as a first step toward advancing the methodology and the understanding of an understudied optimality criterion (Blackwell optimality). [1] Wang, Jinghan, Wang, Mengdi, and Yang, Lin F. Near Sample-Optimal Reduction-based Policy Learning for Average Reward MDP. arXiv preprint arXiv:2212.00603, 2022. [2] Jin, Yujia and Sidford, Aaron. Efficiently solving MDPs with stochastic mirror descent. ICML 2020. [4] Wang, Mengdi.
Primal-Dual $\pi$ Learning: Sample Complexity and Sublinear Run Time for Ergodic Markov Decision Problems. arXiv preprint arXiv:1710.06100, 2017. [5] Chen, Yichen and Wang, Mengdi. Stochastic primal-dual methods and sample complexity of reinforcement learning. arXiv preprint arXiv:1612.02516, 2016. ### Conclusion. We thank you for your detailed questions. We are looking forward to hearing your new thoughts, and we hope that we have answered your questions and concerns and that you will consider increasing your score. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: I thank the authors for their response. After going through all reviews and rebuttals, I still believe that the paper provides a significant contribution, though with some limitations. I will thus keep my initial view.
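The closed form $v_{\gamma}^{a_{1}} = 1/(1-\gamma^5)$ used in the rebuttal above is a geometric series: a policy that collects a reward of 1 every 5 periods, starting at period 0, earns $\sum_{k\ge 0}\gamma^{5k} = 1/(1-\gamma^5)$. A quick numeric sanity check (our own sketch, not from the paper; the truncation horizon is an arbitrary choice):

```python
def cycle_value(gamma, period=5, n_terms=2000):
    """Truncated discounted return of earning reward 1 at t = 0, period, 2*period, ..."""
    return sum(gamma ** (period * k) for k in range(n_terms))

gamma = 0.9
approx = cycle_value(gamma)              # truncated geometric series
closed_form = 1.0 / (1.0 - gamma ** 5)   # = 1 / (1 - gamma^period)
```

With 2000 terms the truncation error is on the order of $\gamma^{5 \cdot 2000}$, which is negligible for any $\gamma$ bounded away from 1.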
Summary: The paper proposes a new concept called the Blackwell discount factor $\gamma_{bw}$, which enjoys a good property: any policy that is $\gamma$-discount-optimal for $\gamma > \gamma_{bw}$ is Blackwell optimal as well. If $\gamma_{bw}$ is known, then one can reduce the problem of finding an average-optimal policy to finding discount-optimal policies, which is practically easier. The authors provide theoretical justifications for the Blackwell discount factor and a proper instance-dependent upper bound. In the end, they extend the results to the robust MDP setting, where the definitions and results continue to hold. Strengths: The paper tackles a significant shortcoming in the previous literature on Blackwell optimality: the current definition does not allow us to find a Blackwell optimal policy in the naive way, that is, to choose a large enough $\gamma$ and simply find its discount-optimal policy. The paper has a clear and strong motivation to study the problem, with a minor question I will mention in the Weaknesses section. The theoretical results are clearly delivered and significant. In particular, some impossibility results in Proposition 3.4 and Theorem 3.6 are intriguing, and they reveal something fundamental about Blackwell optimality. Weaknesses: Certain aspects of the writing and motivation could be improved: 1. Line 5: computing average-optimal policies requires only the weakly-communicating assumption (instead of unichain or ergodicity), which is a reasonable assumption, as weak communication is necessary for a unique optimal average reward. 2. Lines 43-61: There exist many value iteration algorithms for the average-reward setting. I don't fully understand the motivation to find the average-optimal policy through finding discount-optimal policies. The well-known [UCRL2](https://proceedings.neurips.cc/paper_files/paper/2008/file/e4a6222cdb5b34375400904f03d8e6a5-Paper.pdf) calls value iteration as a sub-routine. 3.
Saying that the classical definition of Blackwell optimality has shortcomings can be confusing, because the paper does not give a new definition of Blackwell optimality. Instead, it proposes a new concept named the Blackwell discount factor. 4. I am not fully convinced that the Blackwell discount factor is the only discount factor of interest. In some circumstances, we may want to find the Blackwell optimal policy that is $\gamma$-discount-optimal for as small a $\gamma$ as possible. 5. I don't see a strong connection between Blackwell optimality and robust MDPs. It is nice to see that similar results continue to hold for robust MDPs. However, the two topics seem orthogonal to me. Before reading Section 4.2, I was expecting to see results on how Blackwell optimality improves robustness, since Blackwell optimality is, in a general sense, one type of robustness w.r.t. the change of discount factors. Minor comments: 1. Line 247: the authors mentioned “in the next proposition”, while it is a Theorem that follows. 2. Theorems 4.5 and 4.6 should be stated as Lemmas. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is $\gamma_{bw} = \max_{\pi} \gamma(\pi)$? Intuitively, this makes sense: if $\gamma_{bw} < \gamma(\pi')$, then $\pi'$ is not $\gamma_{bw}$-discount-optimal, which violates the definition. If this is true, then can I understand the paper as finding the smallest $\gamma$ such that all Blackwell optimal policies are simultaneously optimal at this $\gamma$? 2. Is it NP-hard to verify the weakly-communicating assumption? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mentioned approximate Blackwell optimality and robust Blackwell discount factors for other types of uncertainty sets.
No societal impacts that need to be addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time reviewing the paper and for your constructive comments. We answer your questions and remarks below. ### Writing and motivations. *Points 1-2-3*. We will make these points more explicit in our final version. Thanks for the clarification. *Point 4*. Thanks for raising this interesting point. One key property of Blackwell optimal policies (that we did not highlight in the paper) is that *their value functions coincide for all $\gamma \in (0,1)$ and all states $s$*. This is because their value functions must coincide on an entire interval close enough to $1$, for instance for $\gamma \in (\gamma_{\sf bw},1)$. Since the value functions are rational functions (ratios of polynomials), if they are equal at an infinite number of discount factors, then they are equal on the entire interval $(0,1)$. Therefore, all Blackwell optimal policies are $\gamma$-discount optimal (or not) for the same set of discount factors. We will add this discussion in the revised version of our manuscript. *Point 5. Connection with robust MDPs.* We have worked to extend our results on Blackwell optimality to the case of robust MDPs for two main reasons. First, from a practical standpoint, this allows us to (partially) address the situation where the data of the MDP instance (rewards and/or transition probabilities) is only partially known. Second, from a methodological standpoint, there has been some recent interest in studying average optimality for the case of robust MDPs [1,2]. Since Blackwell optimality bridges the gap between discounted optimality and average optimality for *nominal* MDPs, we believe that a better understanding of Blackwell optimality for *robust* MDPs is a first step toward new insights for the case of average optimality in robust MDPs as well. [1] Wang, Yue et al. Robust average-reward Markov decision processes. AAAI 2023. [2] Wang, Yue et al. Model-Free Robust Average-Reward Reinforcement Learning. ICML 2023.
### Questions. 1. It is *not* the case that $\gamma_{\sf bw} = \max_{\pi} \gamma(\pi)$. This is discussed in the paragraph at lines 282-296 (specifically, lines 292-294): in Example 3.5, we have $\gamma_{\sf bw} = 3/4$, but there is a single Blackwell optimal policy $a_{1}$, and $\gamma(a_{1}) = 1/2$. This discrepancy is what makes introducing the notion of the Blackwell discount factor one of our important contributions. 2. Verifying that a nominal MDP is weakly communicating can be done in polynomial time; see Algorithm 4 and Theorem 3.5 in [3]. The unichain property is NP-hard to verify for nominal MDPs; see [4]. We are not aware of any algorithm for verifying the weakly-communicating property for robust MDPs. We will make this clearer in our manuscript. [3] Kallenberg, L. C. M. Classification problems in MDPs. Markov processes and controlled Markov chains, 2002, p. 151-165. [4] J. N. Tsitsiklis. NP-Hardness of checking the unichain condition in average cost MDPs. Operations Research Letters, 35(3):319-323, 2007. ### Conclusion. We hope that our responses convince you of the completeness and significance of our paper and lead to an increased score. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my question in point 4 and question 1. I believe my current rating represents the significance of the paper and I will keep my current rating.
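On the verification question above: Kallenberg's polynomial-time test for the weakly-communicating property is more involved, but the stronger *communicating* property (every state reachable from every other under some policy) reduces to strong connectivity of the digraph whose edges are the transitions with positive probability under at least one action. A stdlib sketch of that simpler check (our own illustration, not the algorithm from [3]):

```python
def is_strongly_connected(n_states, edges):
    """Check strong connectivity of the union digraph of an MDP.

    `edges` is the set of (s, s') pairs with P(s' | s, a) > 0 for at least
    one action a; the communicating property holds iff this digraph is
    strongly connected.
    """
    adj = {s: [] for s in range(n_states)}   # forward adjacency lists
    radj = {s: [] for s in range(n_states)}  # reversed adjacency lists
    for s, t in edges:
        adj[s].append(t)
        radj[t].append(s)

    def reachable_from(start, graph):
        seen, stack = {start}, [start]
        while stack:
            for v in graph[stack.pop()]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen

    # Strongly connected iff state 0 reaches every state and every state reaches 0.
    return (len(reachable_from(0, adj)) == n_states
            and len(reachable_from(0, radj)) == n_states)
```

For instance, the two-state instance from the rebuttals is communicating once both directions $s_1 \to s_2$ and $s_2 \to s_1$ are available under some action.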
Rebuttal 1: Rebuttal: Dear editorial team, We would like to thank all the reviewers for their constructive feedback. We provide our answers to each reviewer individually. We would like to note that we have extended Proposition 3.4 (shortcomings of the existing definition of Blackwell optimality) and Proposition 4.3 (need for some "coarseness" condition to bound the Blackwell discount factor $\gamma_{\sf bw}$) to the case of weakly-communicating MDPs. We present the detailed computations for each reviewer who mentioned it. We also attach here the PDF with the new weakly-communicating MDP instances for our proofs of Proposition 3.4 (Figure 1) and Proposition 4.3 (Figure 2). This strengthens our contributions by showing that the shortcomings of previous work that we identify in our paper also occur under the most common structural assumptions. We hope that our responses convince you of the completeness and significance of our paper, and we are looking forward to hearing back from the reviewers. Pdf: /pdf/7e743bf8d2b9d7992337758af9c945815ad8ac6f.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper defines the Blackwell discount factor: when the discount factor is larger than this factor, discount-optimal policies become Blackwell and average-optimal. An analytical bound on this factor is also derived. The results are also extended to the setting of sa-rectangular RMDPs. Strengths: Motivated by the original definition of Blackwell optimality, the authors propose the Blackwell discount factor, via which we can solve for the optimal policy of an average-reward MDP by solving a corresponding discounted-reward one, so that the restrictive assumptions of the average-reward setting are bypassed. This is an exciting result. The writing is clear and the contents are well organized. Weaknesses: I only have some minor comments. Please see my questions below. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Line 177: It should be "we use the notation...". Line 319: It is not clear to me what "coarseness" is and how Proposition 4.3 shows this dependence. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Please see the questions above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time reviewing the paper. *As regards coarseness and Proposition 4.3:* Our goal is to obtain an upper bound on the Blackwell discount factor $\gamma_{\sf bw}$ for a large class of MDP instances. Proposition 4.3 shows that without any condition on the rewards $\boldsymbol{r}$ and the transition probabilities $\boldsymbol{P}$, we may have $\gamma_{\sf bw}$ as close to $1$ as we want by letting the instantaneous reward approach $0$ (see the proof of Proposition 4.3). Inspecting the proof of Proposition 4.3, we note that $\gamma_{\sf bw} \rightarrow 1$ when the minimum difference between two instantaneous rewards (called $\epsilon$ in the proof) approaches $0$. Therefore, we study the class of MDP instances with bit-size bounded by $m \in \mathbb{N}$, for which there is a lower bound on the minimum difference between two instantaneous rewards, i.e., for which $\epsilon \geq \frac{1}{m}$. This is what we refer to as "coarseness". For this class of MDP instances, we can prove a uniform upper bound on $\gamma_{\sf bw}$, as shown in Theorem 4.4. We would also like to note that during this rebuttal, we have noticed that we can extend Proposition 3.4 (shortcomings of the existing definition of Blackwell optimality) and Proposition 4.3 to the case of weakly-communicating MDPs. This strengthens our contributions by showing that the shortcomings of previous work that we identify in our paper also occur under the most common structural assumptions. *Extending Proposition 3.4/Example 3.5 to unichain/ergodic MDPs:* We can extend Example 3.5 to a unichain MDP as follows: we add a transition from state $7$ to state $0$, with a reward of $0$. We also add three intermediate states from $0$ to $7$ for action $a_{1}$, so that it takes the same number of periods to reach state $7$ from state $0$ for the three actions $a_{1},a_{2},a_{3}$. Note that this new MDP is unichain. We refer to Figure 1 in the PDF attached to our response common to all reviewers.
Additionally, for this new MDP instance, we have $v_{\gamma}^{a_{1}} = 1/(1-\gamma^5), v_{\gamma}^{a_{2}} = (r_{1}\gamma + r_{2}\gamma^2)/(1-\gamma^5), v_{\gamma}^{a_{3}} = (r_{4}\gamma + r_{5}\gamma^2)/(1-\gamma^5)$, which are exactly the same expressions as for Example 3.5, up to the common denominator $(1-\gamma^5)^{-1}$. Therefore, we have proved that the same conclusion as Proposition 3.4 holds for unichain MDPs. We thank you for raising this interesting point; we will use the additional page available for our final version to mention this new result. *Extending Proposition 4.3 to weakly-communicating MDPs:* For this, we can add a deterministic transition from state $s_2$ to state $s_1$, with a reward of $0$ for action $a_{1}$ and a reward of $\epsilon$ for action $a_{2}$. We refer to Figure 2 in the PDF attached to our response common to all reviewers. First, the MDP instance is weakly-communicating since $\{s_{1},s_{2}\}$ is strongly connected under policy $a_{2}$. In this new MDP instance, we still have $v_{\gamma}^{a_{1}} = 0$ but $v_{\gamma}^{a_{2}} = (-1+\epsilon \gamma)/(1-\gamma)$. Hence $a_2$ is Blackwell optimal when $\gamma \geq 1/\epsilon$. By choosing $\epsilon$ larger than $1$ and letting $\epsilon \rightarrow 1$, we obtain $\gamma_{\sf bw} \rightarrow 1$. This shows that we can extend Proposition 4.3 to weakly-communicating MDPs. We thank you for raising this interesting point; we will add this new result in the final version of our paper. ### Conclusion. We hope that our responses convince you of the completeness and significance of our paper and lead to an increased score. --- Rebuttal Comment 1.1: Title: Thank you! Comment: Thank you for addressing my comments in detail. I will keep my score.
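The two-state computation above is easy to check numerically. In this sketch (our own; $\epsilon = 1.25$ is an arbitrary illustrative choice), $v_{\gamma}^{a_{2}} = (-1+\epsilon\gamma)/(1-\gamma)$ crosses $v_{\gamma}^{a_{1}} = 0$ at the predicted point $\gamma = 1/\epsilon = 0.8$:

```python
def v_a1(gamma):
    """Value of always playing a1 in the two-state instance: identically 0."""
    return 0.0

def v_a2(gamma, eps):
    """Value of always playing a2, as stated in the rebuttal."""
    return (-1.0 + eps * gamma) / (1.0 - gamma)

eps = 1.25  # illustrative choice; predicted crossing at gamma = 1/eps = 0.8
below = v_a2(0.79, eps)  # just below the crossing: a2 is worse than a1
above = v_a2(0.81, eps)  # just above the crossing: a2 is better than a1
```

As $\epsilon \downarrow 1$ the crossing point $1/\epsilon$ moves toward 1, which is exactly the mechanism forcing $\gamma_{\sf bw} \rightarrow 1$ in the argument above.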
Summary: This paper studies a new class of objectives for MDPs. Instead of aiming to find the policy which maximizes the long-term average reward, the authors propose a new approach to find a Blackwell optimal policy. A Blackwell optimal policy also maximizes the average reward, but Blackwell optimality has not been studied much in the literature due to its intractability. In this paper, the authors show that there exists a ‘Blackwell discount factor’ such that any policy which is discount optimal for this specific choice of discount factor is also Blackwell optimal and average reward optimal. Finding this Blackwell discount factor is quite challenging, but the authors provide a sufficient condition for it: they give an upper bound on the Blackwell discount factor and show that any policy which is discount optimal w.r.t. this upper bound is Blackwell optimal. However, this upper bound depends on properties of the MDP which may not be known in RL. The authors also provide an extension to robust MDPs. Strengths: The notion of Blackwell optimality is interesting and indeed not studied much by the RL community, so it is nice to see it being brought to attention. It is also nice that this provides a method for finding an average-optimal policy in a known MDP without assumptions on the MDP, and it is pleasing that there is an extension to robust MDPs. The paper is reasonably well written and organised. Weaknesses: Blackwell optimality has been considered in MDPs before, so the main contribution seems to be defining the Blackwell discount factor and showing that solving an MDP with that discount factor leads to an average-optimal policy. From the discussion of the literature, it sounds like the existence of the Blackwell discount factor was folklore, so it is nice to see it proven.
However, the Blackwell discount factor itself seems difficult to find, and the proposed upper bound also depends on quantities such as the maximum bit-size of the MDP, which I imagine is typically not known. Moreover, there was only limited discussion of the proposed bound, so I found it somewhat difficult to interpret. The bound on the discount factor also seems to scale as $1-1/(4^S)^S$, so as $S$ gets bigger this will approach 1 very quickly (indeed, when I plotted it on my laptop, for $S=4$ it was rounded to 1). Therefore, I am concerned that for most reasonably sized MDPs, we will need to find an optimal policy with a discount factor arbitrarily close to 1, which will still be challenging and may require more assumptions, so I am not completely sure of the benefit of this approach. Lastly, it is not clear how to translate this approach to an RL setting where we do not know the MDP. There was a bit of discussion about this, but not enough to understand how to actually use it in RL. There is an attempt to generalise to robust MDPs where we only know a set of transition functions, but I did not think sufficient details were provided to fully understand the usefulness of this result. I also wonder whether this work could be better suited to another venue. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can more details on calculating the bit-size of an MDP be given? In particular, it would be nice to see the bit-size of an MDP computed for an example (e.g., Riverswim or another classical instance). Can this also be translated into the bound on the discount factor? It would also be good to see a plot of the bound on the discount factor against the size of the state space, to see when it leads to bounds that are essentially 1. It may also be good to have a comparison in terms of computation time/number of iterations between this method and other methods, even if there are different assumptions.
More generally, it would be good to have the benefits of the proposed method clearly demonstrated. Can the results for robust MDPs be used in the setting where we define confidence sets using data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations (e.g. what to do when we don’t know the MDP) are discussed briefly but could be discussed in more detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
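The reviewer's numerical observation about the bound approaching 1 is easy to reproduce. A minimal sketch (assuming a literal reading of the quoted expression $1 - 1/(4^S)^S = 1 - 4^{-S^2}$; the exact constant in the paper, and the reviewer's own computation, may differ slightly), showing that the float64 value becomes indistinguishable from 1 for moderate $S$, while exact rational arithmetic confirms it stays strictly below 1:

```python
from fractions import Fraction

def bw_bound_float(S):
    # Literal float64 evaluation of the quoted bound 1 - 1/(4^S)^S.
    return 1.0 - 1.0 / (4.0 ** S) ** S

def bw_bound_exact(S):
    # Same quantity in exact rational arithmetic: 1 - 4^(-S*S).
    return Fraction(1) - Fraction(1, 4 ** (S * S))

for S in (2, 4, 8):
    f = bw_bound_float(S)
    print(S, f, f == 1.0, bw_bound_exact(S) < 1)
```

The gap $4^{-S^2}$ falls below double-precision resolution near 1 very quickly, so the floating-point bound prints as exactly 1.0 for larger state-space sizes even though the true value is strictly smaller, which matches the reviewer's concern that the required discount factor is "essentially 1" for any reasonably sized MDP.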
Rebuttal 1: Rebuttal: Thank you for your review and suggestions. We answer your remarks and questions below. *Therefore, I am concerned that for most reasonably sized MDPs, we will need to find an optimal policy with a discount factor arbitrarily close to 1 which will still be challenging and may require more assumptions, so I am not completely sure of the benefit of this approach.* We agree that the immediate practical utility of our bounds is limited. However, we believe that this result can serve as an important foundation for future theoretical and practical implications. As you observe, our upper bound on $\gamma_{\sf bw}$ is close to one, but the number $1 - \gamma_{\sf bw}$ has a number of bits that is polynomial in the size of the MDP. Theorem 4.7 then shows that the Blackwell optimal policy can be computed in polynomial time using an LP solver. Moreover, we introduce new techniques for bounding the Blackwell discount factor, which can be used in the future to further refine its upper bound. *I also wonder whether this work could be better suited to another venue.* We believe this is relevant to NeurIPS, particularly because the notion of Blackwell optimality in MDPs and robust MDPs has received increased attention in the reinforcement learning community (such as Dewanto et al. (2020) and (2021), Jin et al. (2021), Tang et al. (2021), Yang et al. (2016)). Our results help to rigorously establish a fact that has been used implicitly (without proofs) in recent work. If you still think that it is not a good fit, we would appreciate suggestions for an alternative venue. *In particular, it would be nice to see an example of the bit-size of an MDP e.g. riverswim or another classical example, be given?* Thank you, that is a great suggestion. We agree that an example would make the results more approachable.
For example, the bit-size for riverswim would be about 14, because the number with the largest bit-size is the reward 10,000 in the terminal state. We will add such an example in the revision. *It would also be good to see a plot of the bound on the discount factor with size of the state space to see when it leads to bounds that are essentially 1.* We agree that plotting the discount factor as a function of an MDP parameter would be interesting. We will add such a plot in the final version of the paper. *Can the results for robust MDPs be used in the setting where we define confidence sets using data?* That is an interesting question. Yes, the results also apply to any MDP or robust MDP regardless of how this MDP is constructed. In fact, our Theorem 4.11, which bounds the Blackwell discount factor for robust MDPs, is specifically for uncertainty sets based on $\ell_{1}$ or $\ell_{\infty}$ balls. These distances are used to construct uncertainty sets based on some data; see section 6.2.3 in [1] for healthcare applications and section 6 in [2] for methodology. [1] Grand-Clement, Julien, Chan, Carri W., Goyal, Vineet, et al. Robustness of Proactive Intensive Care Unit Transfer Policies. Operations Research, 2022. [2] Behzadian, Bahram, Russel, Reazul Hasan, Petrik, Marek, et al. Optimizing percentile criterion using robust MDPs. AISTATS 2021. --- Rebuttal Comment 1.1: Comment: Thanks for getting back to me and clarifying some of my questions. In particular, I had missed the significance of Theorem 4.7 (perhaps this can be highlighted more in a revised version) so have raised my score in light of that. However, I am still somewhat unsure of the practical utility of the bounds.
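The bit-size figure quoted in this rebuttal is easy to sanity-check. A minimal sketch (assuming the bit-size of the MDP is driven by the binary length of its largest integer entry, here the quoted terminal reward of 10,000; the paper's exact bit-size convention may count additional bits for signs or denominators):

```python
# Binary length of the largest number quoted for riverswim:
# the terminal-state reward of 10,000.
reward = 10_000
bits = reward.bit_length()  # 2^13 = 8192 <= 10,000 < 16,384 = 2^14
print(bits)  # -> 14
```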
Initialization-Dependent Sample Complexity of Linear Predictors and Neural Networks
Accept (poster)
Summary: This paper views a prototypical but interesting enough neural network architecture through the lens of Rademacher complexity, and uses Rademacher complexity to analyze the sample complexity required for correctly training the neural network. It reveals and proves an interesting result that the Rademacher complexity, and hence the sample complexity, depends on the initialization. The major theorems show the difference in the sample complexity bound at zero and non-zero initialization. ============= Rating updated to 7 and Confidence to 3 after the rebuttal. I would be glad to see it published, but the tightness given by shattering, as pointed out by the authors, relies on the "distribution-free learning setting", which might not be practical when considering real-world data (whose distribution might make some neural net architectures more learnable), and does not talk about the optimization algorithm that trains the model. The manuscript is still self-consistent so I give 7, but not higher due to the unanswered larger scope. If a real-world experiment showed the gap and difference between the two initializations, I would not hesitate to give 7 or 7+, but now I'm still hesitant between 6 and 7. Strengths: I think the mathematics is concrete and solid, the contribution is novel, and it shows an interesting and non-intuitive fact that sample complexity depends on the initialization. All the mathematical definitions are rigorously and clearly stated, and there is thorough discussion of the impact of the results. The special bound for deep neural networks is also given as a corollary. Weaknesses: I would like to see more practical experiments that verify the consistency between sample complexity and the Rademacher complexity. Rademacher complexity, especially for deep neural networks or functions with high nonlinearity, is usually not tight as a sample complexity bound.
Rademacher complexity is an upper bound, i.e., if you have more samples and can find the globally optimal model in the model family, then it's guaranteed that the model is the ground truth. But it does not say 1) if the number of samples is lower than the sample complexity bound, how does the generalization error grow, which depends on the data distribution; 2) if there are enough samples, is the neural net trained with typical methods like SGD the globally optimal solution. Those need to be verified by experiments; they need not be exactly consistent, but the gap cannot be too large. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: I did not understand the difference between the results in Sections 3.1 and 3.2; they look quite similar. I would like to see more discussion, corollaries or remarks following the theorems, since they are not described as "sample complexity", but "shatter", etc. I would be willing to increase the score if the weaknesses can be addressed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments. We will see whether we can incorporate experiments (say along the lines of Bartlett et al [2017]). However, it should be emphasized that our paper is theoretical and focuses on understanding the minimax optimal sample complexity of various predictor classes, similar to many previous papers in the statistical learning theory literature. - Difference between sections 3.1, 3.2: The main difference is that 3.1 focuses on the case of zero initialization ($W_0=0$) and 3.2 focuses on the case of non-zero initialization ($W_0\neq 0$). These settings are qualitatively different, as we show that the sample complexity can be finite in the former case, but infinite in the latter case. --- Rebuttal Comment 1.1: Title: Thanks for the comment, and follow-up Comment: I do agree with the authors' point, but my question is not about just verifying by an experiment, but asking whether the Rademacher complexity is a tight bound -- if the authors can theoretically prove it, say, by giving a made-up example and proving it achieves the bound that is exactly the same as the theory, then it would be even stronger than an experiment. Maybe I'm not familiar with the convention in this area, so I lift the score to 5 in case tightness is typically not a concern. Yes, 3.1 and 3.2 are about different initialization assumptions, but I'm a bit confused by the proposition statement. I guess one is a positive result and another is the opposite; could the authors explain more about the difference in conclusion (rather than assumption)? --- Reply to Comment 1.1.1: Comment: - Indeed the Rademacher complexity just upper bounds the sample complexity. Therefore Thm. 2 just states that the sample complexity is at most $2^{O(B^2/\epsilon^2)}$, independent of the size/dimension parameters $n$ and $d$. This is why we also prove a tight lower bound of $2^{\Omega(B^2/\epsilon^2)}$, by lower bounding the fat-shattering dimension (i.e. Thm.
1), which characterizes the sample complexity. To see an example where the sample complexity is the same as the theory, we refer to the proof of the theorem that gives a lower bound on the sample complexity in terms of the fat-shattering dimension, which is analogous to the proof for the VC dimension in a classification task and can be found for example in Anthony and Bartlett [2002]: Page 262, part 3, subsection 19.5, Thm. 19.5. See line 123 in our paper for the full reference. - In section 3.1 with zero initialization, we show that the sample complexity is $2^{\Theta(B^2/\epsilon^2)}$, independent of the size/dimension parameters $n$ and $d$. In contrast, in the case of non-zero initialization in section 3.2, we show that the sample complexity is infinite in a size-independent setting, i.e. the sample complexity depends on the size of the model $n$ and $d$, even if the initialization is very small. This is perhaps a surprising result, since in other models, like the class of scalar-valued linear predictors composed with some Lipschitz function, the initialization doesn't affect the sample complexity.
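The dichotomy described in this reply can be summarized in one display (notation as in the discussion above; this is a paraphrase of the quoted statements, not the paper's formal theorem):

```latex
\[
\underbrace{m_{W_0 = 0}(\epsilon) \;=\; 2^{\Theta(B^2/\epsilon^2)}}_{\text{finite, independent of } n,\, d}
\qquad \text{vs.} \qquad
\underbrace{\sup_{n,\, d}\; m_{W_0 \neq 0}(\epsilon) \;=\; \infty}_{\text{no size-independent bound}}
\]
```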
Summary: In this paper, the authors give sample complexity bounds for learning function classes of the form $g(x) = f(Wx)$, where $W$ is a matrix and $f$ is a Lipschitz function. The sample complexity bounds are based on the Frobenius norm of the matrix $W - W_0$, where $W_0$ is a fixed initialization matrix. The upper bounds are given in terms of the Rademacher complexity, while the lower bounds are given in terms of the fat-shattering dimension. For the case when $W_0 = 0$, the authors give tight, size-independent bounds on the sample complexity, where here size refers to the first dimension, $n$, of $W$. They use this to show some novel sample complexity bounds for neural networks. They then show that when $W_0 \neq 0$, it is impossible to achieve such size-independent bounds, and the resulting sample complexity grows with $m$. Due to this impossibility result, they are able to show the existence of a certain type of ERM problem for which SGD learns but uniform convergence fails. Finally, they restrict their setting to the case of two-layer neural networks and show some specific results that recover or improve upon results in the literature. Strengths: In general the paper is reasonably clear, and the results are solid. The authors close some open questions in related work, and provide novel results in more general settings. Weaknesses: The results in this paper are somewhat incremental in the sense that they seem to primarily just be useful for the case of a one-layer network with multiple outputs (and practically, in such a setting, the "size" parameter $n$ would probably not be so large). Could the authors make the exact dependence on $n$ clear in the resulting sample complexity? (I think it is logarithmic?)
*Lacking Motivation*: - The model class $f(Wx)$ studied in this work seems to be different from previous classes studied from a sample complexity perspective, and so the authors should further justify their setting, and why it differs from existing models. Are there settings that generalize this model? And which other settings does this model subsume besides the vector-valued case? The examples given in the paragraph at line 13 could be made even more concrete and obvious to the reader. - The authors should provide some references in the intro for where such settings are studied in other works or where they are used in practice. *Comparison with related work could be improved*: - Intro: The authors should explain the results of Vardi et al 22 and Daniely and Granot in the introduction, and specifically explain how their work differs, because it is quite similar. It seems like the major difference is that $f$ has a decomposable structure in the other works? The authors should also spell out the exact open questions they answer from Vardi et al. - The authors should include a detailed quantitative comparison to the most related works. A table (perhaps in the intro, though if it is too complicated, later on) would be helpful, describing the different settings and comparing results ($W_0 = 0$ or not, shallow or deep nets, element-wise activations, etc.). - For the result on deep neural networks on page 6 and in section 4.1, a more detailed quantitative comparison to existing works would be useful. In what regimes are the bounds given in this work better? In what regimes are other works in the literature stronger? - More discussion on the limitations of uniform convergence (UC) and works that go beyond UC would be useful. E.g. https://arxiv.org/abs/1902.04742 , https://arxiv.org/abs/2103.04554, https://arxiv.org/abs/2206.07892 and references therein. The authors should discuss other ways of bounding the sample complexity of learning neural networks, e.g. PAC-Bayes bounds etc.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Major comments are in weaknesses. Minor comments: - In line 68, could the authors explain what they mean by the Lipschitz function being a parameter? If possible, this should be explained concisely, but otherwise a pointer to where it is explained later would be helpful. - It is standard to have $n$ be the number of samples and $m$ be a width parameter, so it would be clearer if these were swapped Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Sufficient Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments, we will improve the presentation according to your suggestions. - Show dependence on $n$: We would like to emphasize that our focus is on size-independent bounds, which do not depend in any manner whatsoever on $n$. This is in line with a huge previous literature in statistical learning theory, as well as on neural networks in particular. Such results are relevant for understanding the generalization capabilities of large models, and are also crucial for understanding how different norms lead to sample complexity control, independent of additional orthogonal constraints such as model size. Understanding how the bounds are affected by joint norm control and model size control is certainly an interesting avenue for future research, but we believe a necessary prerequisite is understanding the effect of each separately. - Motivation of model class: We will add further discussion. In a nutshell, it directly models vector-valued linear predictors composed with a Lipschitz loss (e.g., for multi-task and multi-class learning), as well as generalizes large neural networks. The former is of course studied in many papers. As to the latter, an example reference is Daniely and Granot [2022]. In any case, this model helps us to understand the effect of initialization on the sample complexity (which is a challenging task compared to the case of initialization at zero), the sample complexity of neural networks, and the conditions under which we can achieve size-independent bounds. - Comparison to previous work: We will add more discussions as suggested, and also include a summary below. The open questions in Vardi et al that we answer are (1) Understanding the sample complexity with non-zero initialization (e.g., Thm. 6, see lines 336-337, 352-354), and (2) Extending their results to deeper networks (e.g., Thm. 7, see lines 368-370,378-389). 
As to Daniely and Granot, their bounds are not size-independent, so it is a different setting than ours. - Line 68: We meant that the predictor class ranges over all possible $L$-Lipschitz functions, composed with a class of linear functions (as opposed to having a single fixed Lipschitz function). See the exact definition in the statement of Theorem 2. Further discussion of related works: - Deep neural networks and Frobenius norm: We refer to lines 333-342. In more detail: A width-independent uniform convergence guarantee, depending on the product of the Frobenius norms of all layers, has been established in Neyshabur et al. [2015] for constant-depth networks, and in Golowich et al. [2018] for arbitrary-depth networks. However, these bounds are specific to element-wise, homogeneous activation functions, whereas we tackle general Lipschitz activations. Bounds based on other norms include Anthony and Bartlett [1999], Bartlett et al. [2017], Liang [2016], but are potentially more restrictive than the Frobenius norm, or do not lead to width-independence. As we mention on page 6 (lines 226-230), all previous bounds of this type we are aware of strongly depend on various norms of all layers, which can be arbitrarily larger than the spectral norm in a size-independent setting (such as the Frobenius norm and the $(1,2)$-norm), or made strong assumptions on the activation function. - Non-zero initialization: Bartlett et al. [2017] upper bound the sample complexity of neural networks with non-zero initialization, but they used a much stronger assumption than ours: They control the $(1,2)$-matrix norm, whereas the gap between these norms can be arbitrarily large, depending on the matrix size. Vardi et al [2022] also studied the initialization case with element-wise activations, but their result is size-dependent and uses a different technique than ours.
- Non-element-wise activations: Daniely and Granot [2022] do provide a fat-shattering lower bound with a general Lipschitz activation (non-element-wise, a similar setting to the first part of our paper), which implies that neural networks on $R^d$ with bounded Frobenius norm and width $n$ can shatter $n$ points with constant margin, assuming that the inputs have norm at most $\sqrt{d}$ and that $n = O(2^d)$. However, this lower bound is size-dependent, and moreover does not separate between the input norm bound and the width of the hidden layer. Therefore, their result does not contradict our upper bound (i.e. Thm. 2) which says that it's possible to achieve a size-independent upper bound on the sample complexity. --- Rebuttal Comment 1.1: Comment: I have read the response of the authors; thank you for the discussion of related work. For my question about size-dependence, I meant for the case when $W_0 \neq 0$, though I suppose there is no matching upper bound on sample complexity in that setting? Do the authors expect there to be a sample complexity bound that matches Theorem 3? --- Reply to Comment 1.1.1: Comment: In the case of $W_0 \neq 0$, we don't have a tight upper bound in terms of $n$ and $d$. We emphasize the message of Thm. 3 that it is impossible to control the sample complexity independent of the size/dimension parameters $n$ and $d$. We believe that standard covering number arguments can achieve an upper bound that is polynomial in $n$. It remains to be seen whether the actual dependence on $n$ is polynomial, logarithmic, or something else. In any case, this is an interesting question and we will add it to the open questions (i.e. to subsection 5). Thanks.
Summary: The paper generally studies the sample complexity of functions of the form $f(Wx)$, and it particularly focuses on size-independent bounds. First, it shows matching exponential lower and upper bounds for the case that the reference matrix $W_0=0$. Then, it is shown that one cannot obtain such an upper bound in the case that $W_0 \neq 0$. Using these results, they further provide: (1) sample complexity bounds for NNs and (2) an instance where uniform convergence does not hold but learning is possible with SGD. Strengths: - The paper generally studies the sample complexity of $f(Wx)$ quite extensively both in the case that the reference matrix $W_0$ is zero and non-zero. The implications are also interesting: - Neural network results: e.g., depth/width-independent result if the reference matrix is 0 - Another example that uniform convergence fails and SGD succeeds. - Both the main text and the appendix are well-written. Weaknesses: My main concern is regarding Q1 below. There are some limitations/future directions that have been discussed in the paper itself. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Main question: - Q1. In Theorem 1, the values of $d, n$ are asymptotically determined by $L^2B^2 / \epsilon^2$, which is the exact quantity that gives the shattering. In other words, shattering in this case is given by $d$ (or $n$) as well. So I am not sure if this lower bound is independent of the size? (Both the bound and the size depend on $L^2B^2 / \epsilon^2$). Other questions: - Q2. (Although this is not super important for the negative result) Do we have a picture about Theorems 3, 5 when $n$ is not exponentially large w.r.t. $d$?
Possible typos: - line 232: $W_j$ instead of $w_j$ - line 234: do not instead of don't - line 422: $x_1, \ldots, x^m$ - line 633: space between $R$ and ( Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The work is theoretical and does not have any negative societal impact. The limitations have been adequately discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your comments, we will fix the typos. - Q1: To prove a lower bound in a size-independent setting, we are free to choose the size parameters $n,d$ as we wish (since the upper bounds should hold for any $n,d$). In particular, we may choose them to depend on $L,B,\epsilon$. It would be interesting to understand the sample complexity when $n,d$ are also controlled, and we left this as an open question (see section 5). - Q2: That's an interesting question for future research. In that case we would attain some dependence on $n$ (but whether it is polynomial, logarithmic or something else remains to be seen). --- Rebuttal Comment 1.1: Comment: Thank you for your response. Could you please clarify Q1 further? Particularly, in Theorem 1, the shattering is $\exp(\Theta(d))$ or $\mathrm{poly}(n)$, so the bound depends on dimension/size? (This is related to the first weakness raised by Reviewer 9YyA.) --- Reply to Comment 1.1.1: Comment: In Thm. 1 we showed that for $d = \Theta(B^2/\epsilon^2)$, $n = \exp(\Theta(d)) = \exp(\Theta(B^2/\epsilon^2))$, our class can shatter $\exp(\Theta(B^2/\epsilon^2))$ points. It doesn't mean that our bound depends on dimension/size, since $n$ and $d$ also depend on the norm $B$. Without this dependence, the result no longer holds. You may be wondering, for example, whether we can use the same technique to choose any $n$ (say $n \gg \exp(\Theta(B^2/\epsilon^2))$) and show that our class can shatter $n$ points, which would mean that the sample complexity also depends on the dimension/size, leading to a contradiction with Theorem 2. The answer is negative. We refer you to the proof of Thm. 1, which is based on Lemma 2. We emphasize two things in Lemma 2: the first is that $n = \exp(\Theta(d))$ and the second is that $||W_s||_F^2 \leq 2d$. This establishes the dependence between the norm $B$ and $n$ and $d$. If necessary, we can also explain why Lemma 2 no longer holds if we remove the assumption that $n = \exp(O(d))$.
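The parameter regime of the reply above can be collected into a single display (quantities as quoted in the rebuttal; constants suppressed):

```latex
\[
d = \Theta\!\left(\frac{B^2}{\epsilon^2}\right), \qquad
n = \exp\bigl(\Theta(d)\bigr) = \exp\!\left(\Theta\!\left(\frac{B^2}{\epsilon^2}\right)\right), \qquad
\|W_s\|_F^2 \le 2d, \qquad
\#\{\text{shattered points}\} = \exp\!\left(\Theta\!\left(\frac{B^2}{\epsilon^2}\right)\right).
\]
```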
Summary: The authors provide various sample complexity results for linear and nonlinear networks in initialization-dependent and independent cases. Here is a summary of the results. * For the class of predictors $f(Wx)$, the fat-shattering dimension is characterized and gives a lower bound on the sample complexity, where the dependence of the bound on the distance from initialization $B$ is exponential (Theorem 1). The output dimension, however, is assumed to be exponential in $B$: $n=\exp(\Theta(L^2B^2/\epsilon^2))$. * The upper bound is driven by a Rademacher complexity analysis in Theorem 2, and is shown to be tight. * The proof of Theorem 2 relies on covering number arguments, Dudley's inequality and covering bounds from Lemmas 1 and 5. * Corollary 2 extends the result of Theorem 2 to neural networks by replacing the function $f$ with all the layers and activations after the first layer. The Lipschitz constant of the new function is equal to the product of spectral norms. * Theorem 3 shows that the fat-shattering lower bound holds as well for non-zero initialization, although $\Vert W_0\Vert$ should be as small as $2\sqrt{2}\epsilon$. * Theorem 4 adapts the classical result for convex learning in the context of learning vector-valued linear predictors. Theorem 5 adapts Theorem 3 for convex functions. The overall message is that linear predictors combined with convex functions are learnable. * The authors provide generalization bounds for single-hidden-layer networks with elementwise Lipschitz activation function and with initialization dependence in Theorem 6. This is an extension and improvement of Vardi et al 2022, and Daniely and Granot 2022. * Theorem 7 provides a generalization bound on deep neural networks with Lipschitz activation under Frobenius norm constraints. The proof is based on a Rademacher complexity analysis, and adapts Golowich et al's peeling lemma to Lipschitz activations.
To summarize, the paper contains various theoretical results, some related to vector-valued outputs, some related to initialization dependence, some related to convex functions, and some related to the Lipschitz assumption. I have some concerns about the theoretical results of the paper, their assumptions and the comparison with existing works. Below, I raise some concerns about the assumptions of Theorems 1, 3 and 5. The result of Theorem 2 seems peculiar. Theorem 7 is not properly situated with respect to similar norm-based bounds in the literature. Theorems 4 and 6 seem to build heavily on previous results. Strengths: Certain story lines of the paper are very interesting, for example the impact of vector-valued outputs on generalization. Also, adapting the peeling lemma in Theorem 7 is very elegant – although it comes with an additional cost. Weaknesses: * I wonder about the message of Theorem 1. The authors assume already that the output dimension is exponentially large, $n=\exp(\Theta(L^2B^2/\epsilon^2))$, and then show that the network can shatter $\exp(cL^2B^2/\epsilon^2)$ points. This just tells us that the fat-shattering dimension would be linearly (or polynomially maybe) dependent on the output dimension and not exponentially on $B$, as is claimed in the paper. * It is surprising that Theorem 2 does not leave any room for reducing to the scalar-valued case ($n=1$), where the dependence on $B$ is not exponential anymore. Is there an explanation for this? In this sense, the results are not optimal for $n=1$. More generally, it is surprising that the dimension of the output vector $n$ does not impact the sample complexity. * The exponential dependence arises mainly from the covering number argument in Lemma 1, where the function space is covered by binning $N_x$ and $N_y$. The exponent is basically $N_x$. It would be important to emphasize this in the main paper, explain why $B$ appears in the exponent, and whether this can be circumvented.
* I wonder if directly bounding the Rademacher complexity for $f(Wx)$ is a good choice. One might be tempted to use variants of the vector-valued contraction lemma (for example Maurer's result) to remove $f$, although at the price of dimension dependence. It seems to me that this approach would not suffer from the exponential dependence. * On a similar note, the authors should discuss the existing approaches for bounding the Rademacher complexity of vector-valued functions and situate their work with respect to those. * Corollary 2 utilizes the Lipschitz constant of the deep network to bound the generalization error. However, this has been tried already in Bartlett et al 2017 and does not have exponential dependence. Besides, considering the Lipschitz constant alone without margin has shortcomings already mentioned in Bartlett et al 2017. The proposed bound suffers from similar shortcomings apart from the exponential dependence. * The assumptions of Theorem 3 are not clearly discussed. It seems surprising that the input and output dimensions are chosen as a function of $m$ ($d=\Theta(m)$ and $n=\Theta(\exp(m))$). The norm $B$ is fixed to $\sqrt{2}$ unlike Theorem 1. Overall, it is difficult to parse the message behind Theorem 3. We can raise similar issues for Theorem 5. * The authors claim that Theorem 7 provides "the first of this type (to the best of our knowledge) that handles general Lipschitz activations under Frobenius norm constraints". Bartlett et al 2017 provide a generalization error bound for Lipschitz activations using spectral and $(p,q)$ matrix norms. Their bound has only logarithmic dependence on the width, and does not have the product of Frobenius norms, which can be much larger than the product of spectral norms. The authors should compare their result with Bartlett et al. There are other norm-based bounds in the literature, and the authors can provide a better comparison with them. * A side note is that Theorems 1, 2 and 7 are for $W_0=0$.
I wonder if the initialization dependence, which is in the title, is the core idea of the paper. The paper covers many different topics and could probably focus deeper on some of those. For example, I feel that the result about learning convex functions is expected and not very insightful. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * In line 550 of the supplementary materials, it is assumed that $\frac{B}{\epsilon}>1$. I guess this is because we have the condition on the rank $r\leq B^2/\epsilon^2$. How would the proof work if the condition does not hold, and then $r=0$? The point of working with the norms is that we hope $B$ is small after training. * Line 206, “in sharp contrast to the case of vector-valued predictors” $\to$ maybe “in sharp contrast to the case of *scalar*-valued predictors”? * Line 685 in the supplementary materials: the equality should be an inequality. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful reading of the paper and the detailed comments; we will be happy to add clarifications and discussions as suggested. First, a general comment relevant to most of the points raised: We would like to emphasize that our focus is on *size-independent* bounds, which do not depend in any manner whatsoever on the dimension or number of parameters. This is in line with a large body of previous literature in statistical learning theory, as well as on neural networks in particular. Such results are relevant for understanding the generalization capabilities of large models, and are also crucial for understanding how different norms lead to sample complexity control, independent of additional orthogonal constraints such as model size. Understanding how the bounds are affected by joint norm control and model size control is certainly an interesting avenue for future research, but we believe a necessary prerequisite is understanding the effect of each separately. We believe our results certainly pass the novelty test in that regard. For example, the fact that controlling the Frobenius norm alone leads to any kind of finite sample complexity (Theorem 2) should be surprising considering what we currently know (e.g. Maurer's vector-valued contraction result, which is vacuous without controlling the vector dimension). - Message of theorem 1: Actually, the sample complexity cannot simply depend on the output dimension. This is because theorem 2 tells us that there is also an upper bound independent of the output dimension, as long as the Frobenius norm is controlled. In any case, as written above, our focus is on bounds which do not depend on the output dimension. - Theorem 2 does not reduce to the scalar-valued case: Indeed it should not, since our focus is on size-independent bounds which do not depend on $n$. 
This is exactly analogous to the classical $\frac{B^2}{\epsilon^2}$ sample complexity bound for linear predictors, which is minimax optimal when the input dimension is unconstrained, but becomes something else if we also constrain the dimension. - Existing works on Rademacher complexity of vector-valued functions: We agree these should be further discussed and would be happy to add such a discussion. As far as we know, all results on this topic strongly depend on the vector dimension, and are not applicable to the size-independent setting that we focus on (we certainly tried :) ). - Corollary 2 and Bartlett et al. (2017): The Bartlett et al. paper uses a different and much stronger assumption than ours: They control the $(1,2)$ matrix norm, whereas we control the Frobenius norm. Besides the latter norm being arguably more natural (as we discuss in the introduction), the gap between the $(1,2)$-norm and the Frobenius norm can be arbitrarily large, depending on the matrix size (and again, we focus on size-independent bounds). In addition, we already know that the product of the spectral norms is insufficient to bound the Rademacher complexity. The main point of our corollary 2 is that *just* by adding control on the Frobenius norm of the *first* layer, we can already get some finite sample complexity. This should be surprising, as all previous norm-based bounds (that we are aware of) required stronger norm controls on *all* the layers to get a finite bound. - Assumptions in Theorem 3 and theorem 5: Again, our focus is on size-independent bounds, so in order to establish a lower bound, it is reasonable to choose $n$ and $d$ to get the tightest lower bound possible (including as a function of the sample size $m$). A similar assumption is used in the classical result that the $\frac{B^2}{\epsilon^2}$ bound for linear predictors is tight (i.e. the input dimension should be larger than the sample size). 
The message of theorem 3 is that even if the Frobenius norm $B$ is controlled (and is even a small numerical constant), we cannot obtain a finite sample complexity when $W_0\neq 0$. Also, the $\sqrt{2}$ factor in theorems 3+5 is just for convenience, and can easily be replaced by $1$ via rescaling. Anyway, we would be happy to further clarify the presentation of the results if needed. - "The overall message [of theorems 4+5] is that linear predictors combined with convex functions are learnable." Actually, the fact that linear+convex functions are learnable is classical and appears in textbooks. The main message is that we present a class that is provably learnable, in a distribution-free learning setting, *without* uniform convergence. Namely, the convergence of the learning algorithm does not depend on the size of the model, but the sample complexity does. - Theorem 7 and comparison to Bartlett et al.: Thanks, we will add more comparisons to the literature. Again, we would like to emphasize that Bartlett et al. 2017 considers an incomparable setting where the $(1,2)$ matrix norm is controlled. - The result about learning convex functions is expected: Certainly, it is just given for completeness. The more interesting result is theorem 5, which shows that uniform convergence does not hold here. - Question 1: The main regime of interest is $\frac{B}{\epsilon}>1$, i.e. that $B$ is not too small and $\epsilon$ is not too large. If $\frac{B}{\epsilon}<1$, then $B<\epsilon$. This means that the predictor class (with a norm bound of $B$) cannot attain an output value varying by more than $\epsilon$, even on a single fixed data point. - Question 2: Thanks, we will fix accordingly. - Question 3: Thanks, we will fix accordingly. Regarding your summary: - We emphasize that Thm. 3 shows that even for very small non-zero initialization, it is impossible to control the sample complexity independently of the size/dimension parameters $d,n$, by showing that the fat-shattering dimension is infinite. This is in contrast to Thms. 1 and 2, which show tight size-independent bounds.
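As a numerical aside on the classical $\frac{B^2}{\epsilon^2}$ bound for norm-bounded linear predictors invoked in the rebuttal above, the sketch below (our own illustration, not the authors' code; all names are made up) Monte Carlo estimates the empirical Rademacher complexity of the class $\{x \mapsto \langle w, x\rangle : \|w\|_2 \le B\}$, using the closed form $\sup_{\|w\|\le B} \frac{1}{m}\sum_i \sigma_i \langle w, x_i\rangle = \frac{B}{m}\big\|\sum_i \sigma_i x_i\big\|_2$.

```python
import numpy as np

def linear_rademacher(X, B, n_trials=2000, seed=0):
    """Monte Carlo estimate of E_sigma sup_{||w||<=B} (1/m) sum_i sigma_i <w, x_i>."""
    rng = np.random.default_rng(seed)
    m = X.shape[0]
    vals = []
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)
        # The sup over the B-ball has the closed form (B/m) * ||sum_i sigma_i x_i||_2.
        vals.append(B * np.linalg.norm(sigma @ X) / m)
    return float(np.mean(vals))

rng = np.random.default_rng(1)
X = rng.standard_normal((400, 50))
X /= np.linalg.norm(X, axis=1, keepdims=True)  # unit-norm inputs, ||x_i|| = 1
r100 = linear_rademacher(X[:100], B=1.0)
r400 = linear_rademacher(X[:400], B=1.0)
print(r100, r400)  # decays roughly like B / sqrt(m)
```

For unit-norm inputs the estimate decays like $B/\sqrt{m}$ independently of the input dimension, which is the size-independence the authors contrast with dimension-dependent contraction arguments.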
NeurIPS_2023_submissions_huggingface
2,023
Binarized Spectral Compressive Imaging
Accept (poster)
Summary: This paper presents a deep neural network with binarized operations for low-cost spectral compressive imaging of hyperspectral images (HSI). The network introduces a basic unit for model binarization which adaptively redistributes HSI representations before binarizing the activation and uses a scalable hyperbolic tangent function to approximate the Sign function in backpropagation. Four binarized convolution modules are designed to solve the dimensional mismatch problem during feature reshaping and propagate full-precision information throughout the network. Experiments on both synthetic and real data are conducted for performance evaluation. Strengths: 1. An effective binarized deep neural network is proposed for HSI reconstruction. 2. Four binarized convolution modules are designed to enable the propagation of full-precision information through all convolution layers. 3. Based on the ablation experiments in the article, the proposed scalable hyperbolic tangent function provides inspiration for approximating the Sign function in backpropagation. Weaknesses: 1. The proposed binarization scheme is general, without a specific design optimized for HSI restoration. Also, it lacks comparisons to other general binarized NNs and comparisons on other reconstruction tasks. 2. The proposed scheme is quite straightforward and engineering-oriented. 3. Approximation error bounds are not discussed, but this is quite important for binarized NNs. 4. Some key technical parts are not described clearly or in sufficient detail. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper argues that, as mobile devices are more and more widely used, the demands of running and storing HSI restoration models on edge devices grow significantly. Could the authors provide practical cases where mobile edge devices are used for HSI restoration? 2. In the binarized convolution layer, the formula for converting the model parameters from 32 bits to 1 bit is not clearly written. 
The average of the 32-bit absolute values multiplied by the sign function cannot be turned into a 1-bit number. 3. It is not clear how the proposed method handles the dimensional mismatch during feature reshaping, or how the binarized convolution modules solve the mismatch. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Approximation error bounds of the proposed binarization scheme are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### Response to Reviewer kC2s

Thanks for your valuable comments. Code and models will be released to the public.

`Q-1:` The proposed binarization scheme is general, without a specific design for HSI restoration. It lacks comparison to other BNNs or in other tasks.

`A-1:` In fact, our BiSR-Conv is tailored for HSI restoration based on the nature of HSI signals. In Line 132 – 138, we note that HSI signals have different density and distribution along the spectral dimension due to the constraints of specific wavelengths. To fit this HSI nature, BiSR-Conv redistributes the HSI representations before binarization. We compare our method with other BNNs in Tab. 1. Our method outperforms other BNNs by over 2.55 dB. Following your advice, we also conduct experiments on RGB image denoising ($\sigma = 25$). The BNNs are trained on DIV2K and tested on CBSD68, Kodak24, and Urban100. We also conduct experiments on medical image enhancement on Real Fundus [77]. The PSNR (dB) results are shown in the following table. Our method still outperforms other BNNs.

| Datasets | BNN | Bi-Real | IRNet | BTM | ReAcNet | BBCU | BiSRNet |
| :- | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| CBSD68 | 22.67 | 28.72 | 29.01 | 29.91 | 29.95 | 30.56 | **31.15** |
| Kodak24 | 22.58 | 29.17 | 29.54 | 30.64 | 30.65 | 31.28 | **32.06** |
| Urban100 | 22.67 | 28.18 | 28.35 | 29.05 | 29.20 | 29.96 | **30.21** |
| Real Fundus | 16.89 | 23.94 | 24.03 | 25.58 | 24.16 | 24.25 | **26.31** |

`Q-2:` The proposed scheme is straightforward and engineering-oriented.

`A-2:` In fact, all the proposed techniques have strong motivations and deep insights. (1) See Line 107 – 112: previous CNN-/Transformer-based methods suffer from a heavy computational burden or massive parameters. Meanwhile, these methods often exploit complex computations that are difficult to binarize. Hence, we redesign a simple, compact, and easy-to-deploy base model in Fig. 2 (a). 
(2) Our BiSR-Conv block is based on the nature of HSI signals. See Line 131 – 134. To handle the different density and distribution along the spectral dimension, BiSR-Conv redistributes the HSI features before binarization. (3) In Line 138 – 160, previous Sign approximation functions are inflexible and have large estimation errors. To solve this problem, we propose Tanh($\alpha x$). (4) Full-precision information is critical in a BNN. Bearing this key insight in mind, we design the bypass path in BiSR-Conv and four binarized convolution modules in Fig. 4 to make sure the full-precision information can be propagated into all layers of the BNN.

`Q-3:` Approximation error bounds are not discussed.

`A-3:` Unfortunately, existing BNN works [33 - 39] do not provide a theoretical analysis of the approximation error bound. This is because a deep neural network is a huge and complex black-box system with a large number of parameters. We compute the approximation error in the following table. Our method achieves the smallest error.

| Methods | BiConnect | BNN | Bi-Real | IRNet | ReAcNet | BBCU | BTM | BiSRNet |
| :- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| Error ( $\times 10^{-4}$ ) | 75.3 | 49.2 | 16.5 | 15.7 | 14.5 | 13.2 | 10.6 | **8.1** |

`Q-4:` Some technical parts are not clearly described.

`A-4:` Please be specific about which part. We not only describe the details of our method in the paper but also provide code and pre-trained weights to reproduce our method.

`Q-5:` Practical cases about mobile edge devices used for HSI restoration.

`A-5:` The HSI reconstruction techniques can be applied in hyperspectral (HS) cameras. For example, Motoki et al. [78] developed a video-rate HS camera using compressive sensing techniques. This HS camera can be equipped with AI-based HSI reconstruction algorithms to accelerate the imaging process. This HS camera can be used in real-world scenarios, including consumer applications such as smartphones and drones. 
`Q-6:` The formula for converting the model parameters from 32 bits to 1 bit is not clearly written.

`A-6:` First of all, the values of the binarized weights used for the 1-bit convolution (bit-count and XNOR operations in Eq. 10) are still $\pm 1$. The binarized weights are obtained by Eq. 3. The 32-bit mean absolute value does not affect the 1-bit convolution, since it lies outside the binarized weights. This scale factor is multiplied in to reduce the quantization error between the 1-bit and 32-bit weights, as analyzed in XNOR-Net [52]. Many later works [33 - 37] also adopt this multiplication.

`Q-7:` Questions about dimension mismatch.

`A-7:` In Line 189 – 201, the feature maps are directly reshaped by the binarized convolution in the normal binarized modules of Fig. 4. As mentioned above, the full-precision information is critical in a BNN. Yet, the dimension of the input end of the 1-bit convolution does not match that of the output end. For example, in the normal downsample module (Fig. 4a), the input shape $\mathbb{R}^{H\times W\times C}$ does not match the output shape $\mathbb{R}^{\frac{H}{2}\times \frac{W}{2}\times 2C}$. This mismatch impedes the identity connection from input to output, blocking the 32-bit information flow. To address this issue, we first design a bypass identity path in BiSR-Conv, and then use concatenating and splitting operations to free the feature map from being reshaped at the input and output ends of BiSR-Conv. As shown in Fig. 4 (a), the feature map keeps its shape $\mathbb{R}^{\frac{H}{2}\times \frac{W}{2}\times 2C}$ when passing through BiSR-Conv in our downsample module. By this means, the full-precision information can flow through our BiSR-Conv and the four binarized modules; see the red arrows of Fig. 2 (c) and 4.

**References**

[77] Rformer: Transformer-based generative adversarial network for real fundus image restoration on a new clinical benchmark. 
JBHI 2022. [78] Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry–Pérot filters. Nature Photonics 2023.

---

Rebuttal Comment 1.1: Title: Thanks for the response

Comment: The response from the authors has addressed my concerns well. I also read the comments from other reviewers. Now I am convinced that the work has both novelty and contributions to the fields of both compact NN design and spectral compressive imaging. Therefore, I would like to raise my score to Accept.
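To make A-6 above concrete, here is a minimal 1D sketch (our own illustration following the XNOR-Net recipe the rebuttal cites, not the authors' code; all names are hypothetical): the weights entering the 1-bit convolution are strictly $\pm 1$, while the 32-bit scale factor $\alpha = \text{mean}(|W|)$ multiplies the output and is never itself binarized.

```python
import numpy as np

def binarize_weights(W):
    alpha = np.mean(np.abs(W))          # 32-bit scale factor, kept outside the 1-bit op
    Wb = np.where(W > 0, 1.0, -1.0)     # 1-bit weights; Sign(0) := -1, the BNN convention
    return alpha, Wb

def binary_conv1d_valid(x, W):
    alpha, Wb = binarize_weights(W)
    k = len(Wb)
    # 1-bit correlation: only +/-1 multiplications (XNOR + bit-count in hardware)
    out = np.array([np.dot(Wb, x[i:i + k]) for i in range(len(x) - k + 1)])
    return alpha * out                  # scale applied to the output, after the 1-bit op

W = np.array([0.3, -0.2, 0.5])
x = np.array([1.0, -1.0, 1.0, 1.0])
print(binary_conv1d_valid(x, W))
```

The point of the rebuttal is visible here: multiplying by `alpha` never turns the weights back into 32-bit values inside the convolution; it only rescales the $\pm 1$ correlation afterwards to shrink the quantization error.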
Summary: This paper proposes a novel method, Binarized Spectral-Redistribution Network (BiSRNet), for efficient and practical HSI restoration from compressed measurements in snapshot compressive imaging (SCI) systems. This paper redesigns the base model and presents the basic unit (BiSR-Conv) for model binarization. Specifically, this convolutional layer of BiSRNet is tailored for hyperspectral image processing. Comprehensive quantitative and qualitative experiments show that the proposed BiSRNet outperforms state-of-the-art binarization methods and achieves comparable performance with full-precision algorithms. Strengths: (1) This paper redesigns a U-Net consisting of the four BCMs as shown in Figure 4 to let the full-precision information flow pass through the whole network. This point is critical since previous BNNs do not consider all the situations in feature reshaping. Then, in the basic unit, BiSR-Conv, with the insight of treating different HSIs with different densities and distributions, this paper proposes to shift HSIs before binarization to allow more binarized HSI activation. (2) Comprehensive experiments have been conducted, including synthetic and real experiments, to demonstrate the superiority of BiSRNet. The ablation study is also extensive. The performance of BiSRNet is impressive, surpassing existing SOTA BNNs by huge margins, over 2.5 dB. Weaknesses: (1) Some details are missing. For example, how did you get the 3D mask $\mathbf{M}^*$? I remember the coded aperture used in CASSI is 2D. Its shape should be $H \times W$. Why does it have an additional dimension here? And how? (2) Some critical experimental analyses are lacking. For instance, at the end of section 4.4, “Binarizing Different Parts”, why can binarizing the bottleneck reduce the most parameters? And why can binarizing the decoder achieve the largest Ops reduction? An analysis should be provided to explain this. (3) I remember Binary Connect [39] binarizes the weights of the CNN. 
But it seems that the activations of Binary Connect [39] are also binarized in Table 1. Why did you do that? Could you please explain this? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please refer to Weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: This paper proposes BiSRNet for efficient and practical HSI restoration from compressed measurement in snapshot compressive imaging (SCI) systems. But this paper has not discussed the limitations of the proposed method. I suggest the author discuss the potential and limitations of the proposed method in more detail, which will make the contributions of the proposed method more significant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### Response to Reviewer ZUfp

Thanks for your valuable comments. Code and models will be released to the public.

`Q-1:` How to derive the 3D mask? Why does it have an additional dimension?

`A-1:` The original coded aperture $\mathbf{M}^* \in \mathbb{R}^{H \times W}$ is a 2D mask. However, as analyzed in Sec. 1 (Line 9 - 33) of the supplementary, the 3D HSI cube is point-wise modulated by $\mathbf{M}^*$ in each spectral channel before dispersion. To describe the process of modulation and dispersion concisely, we derive a 3D mask $\mathbf{M} \in \mathbb{R}^{H \times (W + d(N_{\lambda}-1)) \times N_{\lambda}}$ by shifting $\mathbf{M}^*$ as $\mathbf{M} (u,v, n_{\lambda}) = \mathbf{M}^* (x, y + d(\lambda_n - \lambda_c))$. Similarly, we obtain $\tilde{\mathbf{F}} \in {\mathbb R}^{H \times (W + d(N_{\lambda}-1)) \times N_{\lambda}}$ by shifting the original HSI signal $\mathbf{F}$ as $\tilde{\mathbf{F}} (u,v, n_{\lambda}) = \mathbf{F} (x, y + d(\lambda_n - \lambda_c), n_{\lambda})$. Following this, we can simply reformulate the measurement $\mathbf{Y}$ as $\mathbf{Y} = \sum\nolimits_{n_{\lambda}=1}^{N_{\lambda}} \tilde{\mathbf{F}} (:,:, n_{\lambda}) \odot \mathbf{M} (:,:, n_{\lambda}) + \mathbf{E}$.

`Q-2:` Why can binarizing the bottleneck reduce the most parameters? And why can binarizing the decoder achieve the largest Ops reduction? An analysis should be provided to explain this.

`A-2:` Thanks for the reminder. We will add the following explanation in the revision. (1) As shown in Fig. 2 (a) of the main paper, when the feature map is downscaled in the encoder, its channel number is doubled. Thus, the feature map in the bottleneck has the largest channel number, leading to the most parameters in the bottleneck part of the network. (2) As shown in Fig. 4 (a) and (b) of the main paper, the convolution of the downsample modules in the encoder follows the downscaling operation, while the convolution of the upsample modules in the decoder follows the upscaling operation. This implies that the input spatial size of the convolution in an upsample module is larger than that of the convolution in the downsample module at the same level, which requires more computational cost. Thus, binarizing the decoder achieves the largest Ops reduction.

`Q-3:` Why binarize the activations of Binary Connect in Tab. 1?

`A-3:` The activations and weights of all compared BNN methods in Tab. 1 of the main paper are binarized for fair comparison. Thus, we also binarize the activations of Binary Connect.

`Q-4:` Limitations of our work.

`A-4:` In fact, we have discussed the limitations in Sec. 3 (Line 59 - 64) of the supplementary. The main limitation of our work is that model binarization sacrifices HSI reconstruction performance. More specifically, compared to the full-precision counterpart, our BiSRNet is 4.35 (34.11 - 29.76) dB lower in PSNR and 0.099 (0.936 - 0.837) lower in SSIM. The PSNR and SSIM are reduced by 12.8\% and 10.6\%, respectively. However, this performance drop is smaller than that of other model binarization methods. To handle this issue, we will study how to preserve as much performance as possible while reducing the memory and computational complexity in model binarization.

---

Rebuttal Comment 1.1: Title: Thank you for addressing my comments

Comment: Thank you for addressing my comments, especially on the 3D mask derivation. The paper offers many insights into binarized spectral compressive imaging. I'll keep my score of 7. Thank you.
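A toy numerical sketch of the shift-and-sum measurement model described in A-1 above (our own reading, not the authors' code; the dimensions and dispersion step `d` are illustrative): each spectral channel of the HSI cube is masked by the 2D coded aperture, shifted along the width by `d` per channel, and summed into a single 2D measurement.

```python
import numpy as np

H, W, N_lam, d = 4, 5, 3, 1
rng = np.random.default_rng(0)
F = rng.random((H, W, N_lam))                        # HSI cube
M2d = rng.integers(0, 2, size=(H, W)).astype(float)  # 2D coded aperture M*

W_shift = W + d * (N_lam - 1)
M = np.zeros((H, W_shift, N_lam))    # shifted 3D mask
F_tilde = np.zeros_like(M)           # shifted (zero-padded) HSI
for n in range(N_lam):
    s = d * n                        # dispersion shift for spectral channel n
    M[:, s:s + W, n] = M2d
    F_tilde[:, s:s + W, n] = F[:, :, n]

Y = np.sum(F_tilde * M, axis=2)      # 2D measurement (noise term E omitted)
print(Y.shape)                       # (H, W + d * (N_lam - 1))
```

The extra dimension the reviewer asked about comes purely from replicating and shifting the same 2D aperture once per spectral channel; the measurement itself stays 2D.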
Summary: This work focuses on studying the binarized spectral compressive imaging reconstruction problem. A Binarized Spectral-Redistribution Network (BiSRNet) is proposed. The authors first design a basic U-Net as the base model to begin the binarization. Then a Binarized Spectral-Redistribution Convolutional (BiSR-Conv) unit is proposed to replace the 32-bit convolutional layer of the base model to derive BiSRNet. The BiSR-Conv has two advantages over the vanilla 1-bit convolutional layer: (i) it can redistribute the spectral distributions before binarization; (ii) it has a scalable hyperbolic tangent function to approach the Sign function more closely. Strengths: The novelty and motivation are good. First of all, the research topic is new. Nobody has explored binarized spectral compressive imaging before. The authors not only contribute a good method but also conduct experiments using previous BNNs designed for other topics. We should respect this. Secondly, the redesigned base model is well motivated. The architecture shows the insight of “full-precision information flow”. The idea of redistributing HSI representations before binarization is interesting and reasonable. The Tanh(\alpha x) is also amazing. Flexibly controlling the gap with the Sign function is really cool. + The performance is good and solid. The proposed BiSRNet outperforms the SOTA BNNs widely used in image classification. Meanwhile, the computational and memory complexity of BiSRNet is much lower than that of full-precision CNNs while the performance is comparable. In the real HSI experiments, it seems that the proposed BiSCI performs better than CNN-based methods in noise suppression. + The experiments are comprehensive; not only quantitative but also qualitative comparisons show the advantages of the proposed method. The ablation study is sufficient to verify the effectiveness of the proposed techniques. + The codes and pre-trained weights are provided in the supplementary. 
The reproducibility can be checked and ensured. + The writing is good and easy to follow. The figures, tables, and mathematical notations are very clear. Weaknesses: - The definition is a little confusing. Sign(0) = 0 and Tanh(0) = 0. But in this paper, Sign(0) and Tanh(0) are defined as -1, as shown in Eq.(3) and line 152. Why? More explanation should be provided. - Although the proposed BNN achieves performance comparable with CNNs, there is a large gap compared with Transformer-based methods. I understand that binarizing a Transformer is not easy because of the self-attention mechanism. But it is a good idea to keep trying. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Will you also plan to release the code of other BNNs compared in Table 1? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: See the above comments Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### Response to Reviewer nobf

Thanks for your valuable comments. Code and models will be released to the public.

`Q-1:` Why are Sign(0) and Tanh(0) defined as -1?

`A-1:` As explained in Line 150 – 152 of the main paper, if strictly following the mathematical definition, Sign(0) $= 0 \neq \pm 1$. However, in a BNN, the weights and activations are binarized into 1 bit, i.e., only two values ($\pm 1$). Hence, Sign(0) is usually set to $\pm 1$, as in [33, 34, 35, 36, 37, 38, 52]. Following this common setting, we also define $\lim_{\alpha \rightarrow +\infty} \text{Tanh}(\alpha \cdot 0) = -1$ in the BNN.

`Q-2:` It is a good idea to keep trying on binarizing Transformers.

`A-2:` Thanks for the reminder. As you mentioned, Transformers are tough to binarize because of the computation scheme of self-attention. In our experience, preserving at least 8-bit weights and activations can considerably control the performance gap between quantized and full-precision (32-bit) Transformers. We would like to take your advice as a future research direction and give it a try.

`Q-3:` Will we plan to release the code of the other BNNs compared in Table 1?

`A-3:` Yes, of course. We will release all of them. Our goal is to establish a toolbox and baseline for further research on this topic.

---

Rebuttal Comment 1.1: Comment: After reading the rebuttal, my concerns have been well addressed. Considering the novelty and solid experiments, I tend to keep my original score of "Accept".
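A small numerical sketch of the point behind Tanh($\alpha x$) discussed in this thread (our own illustration; the grid and error metric are our choices, not the paper's): as $\alpha$ grows, Tanh($\alpha x$) approaches the BNN-convention Sign function, so the approximation error of the backward surrogate can be tightened by scaling $\alpha$.

```python
import numpy as np

def sign_bnn(x):
    # Sign(0) := -1, following the paper's BNN convention discussed above
    return np.where(x > 0, 1.0, -1.0)

x = np.linspace(-1.0, 1.0, 2001)
# Mean |Tanh(alpha * x) - Sign(x)| over the grid, for increasing alpha
errs = {a: float(np.mean(np.abs(np.tanh(a * x) - sign_bnn(x)))) for a in (1, 4, 16)}
print(errs)  # the error shrinks as alpha increases
```

This is exactly the "flexibly controlling the gap with the Sign function" property the reviewer highlights: a fixed surrogate (e.g. a clipped linear or quadratic function) has a fixed gap, whereas the scalable Tanh can drive it down by increasing $\alpha$.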
Summary: In this paper, a binarized neural network is utilized for hyperspectral image reconstruction for the first time. For model binarization, the authors propose the Binarized Spectral-Redistribution Convolution (BiSR-Conv), which adaptively redistributes the HSI representations before binarizing the activation. Since the Sign function is non-differentiable, a scalable hyperbolic tangent function is applied in backpropagation, which has a smaller approximation error. In BiSR-Conv, the additional residual connection encourages the full-precision information to propagate throughout the whole network. Based on BiSR-Conv, four binarized convolutional modules are designed to address the dimension mismatch issue during feature reshaping. Strengths: An application to a new domain; Clear and coherent writing; A feasible idea for deploying spectral compressive imaging on edge devices. Weaknesses: Experiments are insufficient. How does the ‘full-precision information’ affect the whole network? Does the ‘full-precision information’ play a dominant role in BiSR-Conv or not? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In the Sign approximation experiment, the proposed scalable hyperbolic tangent function and previous Sign approximation functions should be compared in other BNNs, to further verify the generalization of the proposed function. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

### Response to Reviewer aGHs

Thanks for your valuable comments. Code and models will be released to the public.

`Q-1:` How does the 'full-precision information' affect the whole network? Does the 'full-precision information' play a dominant role in BiSR-Conv or not?

`A-1:` In BNNs, the weights and activations are binarized into 1 bit. Thus, there is a large gap between the outputs of the binarized and full-precision convolutions, which severely degrades the HSI reconstruction performance. Hence, it is very important to allow a full-precision information flow to compensate for this quantization error. (1) To this end, we do not binarize the first (embedding) and last (mapping) convolution modules of the base model in Fig. 2 (a) of the main paper. Then the full-precision representations can be input into the binarized encoder, and full-precision derivatives can be backpropagated to the binarized decoder. (2) In addition, to allow the full-precision information to flow through all binarized convolution layers, we design the BiSR-Conv unit and the four binarized convolution modules, as shown in Fig. 4 of the main paper. More specifically, we add a bypass identity path in the BiSR-Conv unit to propagate full-precision information, as shown in Fig. 2 (c). Besides, in Sec. 3.3, our binarized convolution modules use channel-wise concatenating and splitting operations to free the intermediate feature maps at the input and output ends of BiSR-Conv from being reshaped. In this case, the full-precision information flow is not blocked as it is in the normal convolution modules, as shown in Fig. 4. (3) The experiments verifying the importance of the full-precision flow are reported in Tab. 2 (a) of the main paper. Baseline-1 adopts the vanilla 1-bit convolution and the normal convolution modules in Fig. 4. The full-precision information in the binarized parts is impeded. As a result, Baseline-1 only achieves poor results, 23.90 dB. 
When we successively apply the proposed BiSR-Conv and the four binarized modules, the full-precision information can be propagated through all layers of the BNN, leading to significant improvements of 3.90 and 1.96 dB. (4) Besides, we also conduct more experiments to verify the importance of full-precision information. When we binarize the feature embedding and mapping blocks in Fig. 2 (a), BiSRNet degrades by 2.13 and 1.06 dB, respectively. When we jointly binarize the two blocks, the full-precision information cannot be propagated forward or backward. As a result, the reconstruction performance degrades by 2.87 dB. The above results and analysis reveal the significant importance of full-precision information. This is also the key insight and motivation behind the design of our BiSRNet and the four binarized convolution modules in Fig. 4.

`Q-2:` In the Sign approximation experiment, the proposed scalable hyperbolic tangent function and previous Sign approximation functions should be compared in other BNNs, to further verify the generalization of the proposed function.

`A-2:` Thanks for your suggestion. Following your advice, we conduct experiments on Sign approximation in other BNNs. The PSNR (dB) results are reported in the following table.

| Methods | BiConnect | BNN | Bi-Real | IRNet | ReAcNet | BBCU | BTM | BiSRNet | Avg |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| Clip($x$) | 22.19 | 23.88 | 26.20 | 26.16 | 26.41 | 26.43 | 27.15 | 28.97 | 25.92 |
| Quad($x$) | 22.34 | 23.95 | 26.26 | 26.30 | 26.48 | 26.51 | 27.21 | 29.02 | 26.01 |
| Tanh($\alpha x$) | 22.93 | 24.46 | 26.94 | 26.95 | 27.23 | 27.25 | 27.97 | 29.76 | 26.69 |

Our scalable hyperbolic tangent function achieves 0.77 and 0.68 dB improvements on average over the piecewise linear and quadratic functions, showing the effectiveness of the proposed technique. 
--- Rebuttal Comment 1.1: Comment: Dear Reviewer, Please take a look at the response from authors to your comments made in your review and update your final score. Thanks, AC
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper proposes a Binarized Neural Network based approach known as BiSRNet for binarized HSI restoration. The main motivation of the paper stems from the fact that CNN- or transformer-based architectures are computationally heavy for devices with low computing power, which hence need extremely fast and lightweight networks like binary neural networks. This paper redesigns the binary conv layer (BiSR-Conv) to exploit the distributed nature of HSI representations. The authors also employ a scalable Tanh function to decrease the approximation error. Overall, this paper is well written and reasons well for the various changes to the design of BCNNs. Strengths: - Well-motivated problem and design of the solution - Good empirical evidence - Good presentation - Clear to understand Weaknesses: - The main weakness of this paper is that the redesigned conv modules may be applicable only to the HSI domain. If so, this limits the scope and impact of this work. - Lack of theory Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I am not very familiar with HSI work so I do not have many questions at this point. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: &nbsp; ### Response to Reviewer F7DC &nbsp; Thanks for your valuable comments. Code and models will be released to the public. &nbsp; `Q-1:` Are the redesigned conv modules only applicable to the HSI domain? `A-1:` No. Although this work mainly studies the binarized spectral compressive imaging reconstruction problem in the HSI domain, the proposed method can be generalized to other natural and even medical image domains. (1) First of all, in Line 131 – 138 of the main paper, we notice that HSI signals have different density and distribution along the spectral dimension due to the constraints of specific wavelengths. To adaptively fit this HSI nature, we propose to redistribute the HSI representations along the spectral dimension before binarizing the activation. When this technique is applied to other image domains, it shifts and reshapes the feature maps channel-wise to preserve more information in activation binarization. (2) Secondly, in Line 138 – 160 and Fig. 3 of the main paper, our designed scalable hyperbolic tangent function is not only applicable to the HSI domain. Instead, it is a very basic and general technique to approximate the Sign function in any BNN for any image domain. (3) Thirdly, allowing full-precision information flow is a critical insight for us to design the BiSR-Conv unit in Fig. 2 (c) and the four binarized convolution modules in Fig. 4. The identity paths propagate the full-precision information to all layers of the network. By this means, the quantization error between BNN and CNN can be narrowed down. The proposed BiSR-Conv and the four binarized convolution modules in Fig. 4 can replace the vanilla binarized convolution layer and normal downsample / upsample / fusion convolution modules (in Fig. 4) in other BNNs and can be seamlessly generalized to other image domains. We conduct experiments on RGB image denoising ($\sigma = 25$). 
The BNNs are trained on DIV2K [73] and tested on CBSD68 [74], Kodak24 [75], and Urban100 [76]. Besides, we also conduct experiments on medical image enhancement on the Real Fundus [77] dataset. The PSNR (dB) results are shown in the following table. | Datasets| BNN | Bi-Real | IRNet | BTM | ReAcNet | BBCU | BiSRNet | | :- | :-: | :-: | :-: | :-: | :-: | :-: | :-: | | CBSD68 | 22.67 | 28.72 | 29.01 | 29.91 | 29.95 | 30.56 | **31.15** | | Kodak24 | 22.58 | 29.17 | 29.54 | 30.64 | 30.65 | 31.28 | **32.06** | | Urban100 | 22.67 | 28.18 | 28.35 | 29.05 | 29.20 | 29.96 | **30.21** | | Real Fundus | 16.89 | 23.94 | 24.03 | 25.58 | 24.16 | 24.25 | **26.31** | Our method still significantly outperforms other BNNs. These results demonstrate the generality and effectiveness of our method on RGB/Medical image domains. The reason why we focus on studying binarized spectral compressive imaging reconstruction is that this problem has not been studied until now. We aim to fill this research gap. All compared methods in Tab. 1 are re-implemented by us. &nbsp; `Q-2:` Lack of theory `A-2:` This work mainly studies an application problem, i.e., binarized spectral compressive imaging reconstruction, instead of theory. The first keyword labeled on this paper is “Applications” and the primary area of this submission is “Machine Vision” instead of “Machine Learning Theory”. However, there is still some theoretical analysis and derivation in our paper. In Line 138 – 160 of the main paper, we first theoretically prove that the proposed scalable hyperbolic tangent function can arbitrarily approach the Sign function in Eq. (6) and (7). Then we theoretically compute and compare the approximation error of the previous piecewise linear and quadratic functions and our scalable hyperbolic tangent function in Line 143 – 145, Fig. 3, and Eq. (8). Finally, we theoretically analyze the differentiability and flexibility of previous and our approximation functions to show the advantages of our method. 
Besides, we also provide a detailed theoretical derivation of the CASSI mathematical model in Sec. 1 (Line 8 – 48) of the supplementary. &nbsp; **References** [73] NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study. CVPRW 2017. [74] A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. ICCV 2001. [75] Residual learning of deep convolutional neural networks for image denoising. Journal of Intelligent & Fuzzy Systems, 2019. [76] Single image super-resolution from transformed self-exemplars. CVPR 2015. [77] Rformer: Transformer-based generative adversarial network for real fundus image restoration on a new clinical benchmark. JBHI 2022. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Please take a look at the response from authors to your comments made in your review and update your final score. Thanks, AC
Learning Generalizable Agents via Saliency-guided Features Decorrelation
Accept (spotlight)
Summary: This paper introduces an original approach named Saliency-Guided Features Decorrelation (SGFD), which aims to amplify the generalization capabilities of reinforcement learning (RL) agents across various environmental variations. These variations can encompass task-irrelevant visual attributes such as backgrounds, as well as task-related factors like physical configurations. The authors propose achieving this through a decorrelation of features, executed via a resampling technique intended to minimize the Frobenius norm of the cross-covariance matrix, derived from Random Fourier features. Recognizing the inherent challenges of complete decorrelation, the authors shift focus towards effectively decorrelating the most variable features. This is facilitated by leveraging the saliencies from an environment classification model, which, under ideal circumstances, makes its decisions based on distinct features that are not common to different environments. Strengths: The paper is commendably well-written and coherent, effectively explaining complex ideas in an accessible manner. The authors demonstrate a strong theoretical grounding, with well-motivated intuitions supporting their methodology. Their saliency-guided optimization is an interesting approach, backed by a robust ablation study. SGFD successfully tackles generalization issues relating to both task-irrelevant and task-relevant features, demonstrating a broad scope of applicability. The proposed method does present noticeable enhancements in generalization performance, more so in the case of task-relevant features. Weaknesses: Despite the paper's strengths, there are some areas where it could be improved. Firstly, the methodology requires several environments with variations to train their environment classifier, introducing an element of manual supervision into the learning process, which may not be ideal in all scenarios. 
Furthermore, the full algorithm can be challenging to comprehend without first referring to Appendix A, suggesting that the main body of the text might benefit from additional clarification. It might also be beneficial to introduce the general objective - namely, the reweighting of the batch sampled from the buffer - earlier in the paper to give readers a clearer understanding of the process. Lastly, while the authors promote their method as an improvement over Soft Actor-Critic (SAC), the achieved results actually utilize the same updates for the encoder as employed by Adaptive Meta-learner of Behavioral Similarities (AMBS). This could lead to misunderstandings, as readers may infer that the saliency-guided resampling alone yields these performances. It would therefore be beneficial for clarity and fairness if the authors explicitly acknowledged this in the experimental section. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: * Could the authors elaborate on the inference procedure employed during testing? This could help elucidate the practical applicability of the methodology. * Does the encoder also benefit from the gradients produced through resampling? Understanding this aspect could contribute to a more comprehensive understanding of the process. * Given the necessity to decorrelate features that vary across environments from those that remain constant — and assuming that the variable features have a higher saliency — it appears that the $p\left(\mathbf{Z}_i\right) p\left(\mathbf{Z}_j\right)$ in equation 7 tends to assign more importance to tuples of varying features. Would it potentially be more efficient to replace it with $|p\left(\mathbf{Z}_i\right) - p\left(\mathbf{Z}_j\right)|$ to accentuate the decorrelation between the varying and consistent features? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: While the authors have reasonably addressed the methodological limitations of their approach, they assume no potential negative societal impacts stemming from their method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are particularly encouraged that Reviewer NdCs finds our method novel and effective. **Reply to the weakness** >**W1. Firstly, the methodology requires several environments with variations to train their environment classifier, introducing an element of manual supervision into the learning process, which may not be ideal in all scenarios.** We agree with the point of view. From a causal perspective, invariance needs to be summarized from changing data. It is an intuitive way to train general policies from different environments. In the experiments, we set up four training environments for each task, which puts the cost in an acceptable range. >**W2. Furthermore, the full algorithm can be challenging to comprehend without first referring to Appendix A, suggesting that the main body of the text might benefit from additional clarification. It might also be beneficial to introduce the general objective - namely, the reweighting of the batch sampled from the buffer - earlier in the paper to give readers a clearer understanding of the process.** According to the suggestion, we split the original Figure 2 into two figures in the global response PDF. We clearly illustrate the motivation for feature decorrelation in Figure 1, visualizing the difference before and after sample reweighting. In Figure 2, we explicitly indicate that the model aims to reweight a batch of samples from the replay buffer. Based on the revised Figure 2, we give a more precise technical flow description. **Figure 2: the architecture of SGFD.** Essentially, our SGFD aims to reduce correlations in image features by reweighting samples. This involves five steps: (1) We fetch a sample batch from the replay buffer, which may come from multiple environments with different backgrounds or robot configurations. (2) The image in the sample is compressed by an encoder into latent features. 
(3) Then, we augment these features using multiple Random Fourier Functions to capture nonlinear correlations. (4) Concurrently, we train a classifier and apply saliency maps to detect features that shift across environments. (5) Finally, SGFD reweights samples to eliminate the correlations between identified features and other features. >**W3. Lastly, while the authors promote their method as an improvement over SAC, the achieved results actually utilize the same updates for the encoder as employed by Adaptive Meta-learner of Behavioral Similarities (AMBS). This could lead to misunderstandings, as readers may infer that the saliency-guided resampling alone yields these performances. It would therefore be beneficial for clarity and fairness if the authors explicitly acknowledged this in the experimental section.** Thanks for the suggestion; we will explicitly describe the details of the encoder in the experimental section, not just in the appendix. **Reply to questions** >**Q1. Could the authors elaborate on the inference procedure employed during testing?** Our method consists of four neural network models: an actor model (policy), a critic model, an encoder, and a classifier model. Among these four models, the critic model is used to improve the actor's performance during the training phase. At the same time, the classifier is used to assist the sample reweighting process. During testing, we deployed the encoder and actor model to the new environment. During inference, the encoder compresses the observed image into a latent representation, and the actor model predicts actions until the environment is terminated or the task is completed. This process will not involve the calculation of the classifier and the critic model. >**Q2. Does the encoder also benefit from the gradients produced through resampling?** The role of the encoder is to compress high-dimensional images into compact representations, which are the basis of other models. 
In our experiments, the encoder is only updated along with the critic model, which is not affected by sample reweighting. The sample reweighting is only applied to the actor model to avoid confusing which part of the model brings the final result improvement. We will clarify this in the final version to make it easier for readers to understand the process. >**Q3. Would it potentially be more efficient to replace $p(\mathbf{Z}_i)p(\mathbf{Z}_j)$ with $|p(\mathbf{Z}_i)-p(\mathbf{Z}_j)|$ to accentuate the decorrelation between the varying and consistent features?** Intuitively, $|p(\mathbf{Z}_i)-p(\mathbf{Z}_j)|$ can identify pairs of varying features and invariant features with high probability. According to your suggestion, we conducted the related experiment and compared it with Equation (7). Due to the time limit, we tested several scenarios with task-irrelevant situations. | | walker-walk | cheetah-run | finger-spin | walker-run | finger-turn | |-------|-------|-------|-------|-------|-------| | $p(\mathbf{Z}_i)p(\mathbf{Z}_j)$ | $\mathbf{959.1 \pm 26.3}$ | 599.6 $\pm$ 47.2 | 965.7 $\pm$ 45.9 | $\mathbf{420.7 \pm 39.2}$ | $\mathbf{984.3 \pm 11.5}$ | | $\mathbf{abs}(p(\mathbf{Z}_i)-p(\mathbf{Z}_j))$ | 951.2 $\pm$ 27.2 | $\mathbf{605.6 \pm 46.6}$ | $\mathbf{975.5 \pm 39.8}$ | 406.3 $\pm$ 46.1 | 974.2 $\pm$ 13.5 | From the preliminary results, $|p(\mathbf{Z}_i)-p(\mathbf{Z}_j)|$ achieved comparable performance to $p(\mathbf{Z}_i)p(\mathbf{Z}_j)$. Two points may cause the fluctuation in performance: (1) Due to the data limit, sample reweighting only achieves approximately perfect decorrelation. (2) The classifier's error may affect the recognition of changed features. Overall, $|p(\mathbf{Z}_i)-p(\mathbf{Z}_j)|$ is a promising approach, and we will complement it with experiments in task-relevant situations. Supplementary experiments will be updated in the latest version. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses that clarified my questions. 
After reviewing their explanations, I maintain my opinion that this paper possesses the necessary qualities for acceptance.
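The decorrelation pipeline described in this rebuttal (embed each latent feature dimension with Random Fourier Features, then reweight the batch to shrink the Frobenius norm of the weighted cross-covariance between feature pairs) can be sketched as follows. This is a minimal numpy illustration, not the authors' implementation: the toy batch, the RFF dimensionality, and the numerical-gradient optimiser with backtracking line search are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff(x, W, b):
    # Random Fourier features of one scalar feature column x.
    return np.sqrt(2.0) * np.cos(np.outer(x, W) + b)

def decorrelation_loss(Z, weights, Ws, bs):
    # Squared Frobenius norm of the weighted cross-covariance between the
    # RFF embeddings of every pair of feature dimensions.
    w = weights / weights.sum()
    feats = [rff(Z[:, i], Ws[i], bs[i]) for i in range(Z.shape[1])]
    feats = [f - (w[:, None] * f).sum(axis=0) for f in feats]  # weighted centring
    loss = 0.0
    for i in range(len(feats)):
        for j in range(i + 1, len(feats)):
            cov = feats[i].T @ (w[:, None] * feats[j])
            loss += float((cov ** 2).sum())
    return loss

# Toy batch: feature 1 is strongly correlated with feature 0.
n, m = 64, 4
z0 = rng.normal(size=n)
Z = np.stack([z0, 0.9 * z0 + 0.1 * rng.normal(size=n)], axis=1)
Ws = [rng.normal(size=m) for _ in range(Z.shape[1])]
bs = [rng.uniform(0.0, 2.0 * np.pi, size=m) for _ in range(Z.shape[1])]

# Optimise the sample log-weights with numerical gradients and a
# backtracking line search, so only loss-reducing steps are accepted.
log_w = np.zeros(n)
loss_before = decorrelation_loss(Z, np.exp(log_w), Ws, bs)
step = 10.0
for _ in range(30):
    base = decorrelation_loss(Z, np.exp(log_w), Ws, bs)
    grad = np.array([
        (decorrelation_loss(Z, np.exp(log_w + 1e-4 * e), Ws, bs) - base) / 1e-4
        for e in np.eye(n)
    ])
    for _ in range(25):
        trial = log_w - step * grad
        if decorrelation_loss(Z, np.exp(trial), Ws, bs) < base:
            log_w = trial
            break
        step *= 0.5
loss_after = decorrelation_loss(Z, np.exp(log_w), Ws, bs)
print(f"decorrelation loss before: {loss_before:.4f}, after: {loss_after:.4f}")
```

In the paper's setting the reweighting would be restricted to the salient, environment-varying feature pairs identified by the classifier; the sketch decorrelates all pairs for simplicity.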
Summary: In visual-based Reinforcement Learning (RL), agents often struggle to generalize to environmental variations that were not observed during training, including changed task-irrelevant features and changed task-relevant features. To achieve generalization under environmental variations, the authors propose a sample reweighting method for RL tasks that encourages the agent to understand the impact of changed features on its decisions, called Saliency-Guided Features Decorrelation (SGFD). The authors demonstrated that SGFD significantly outperforms state-of-the-art methods in handling changed task-relevant features. Strengths: This paper is well organized and described. Details of the proposed method are well described with supplementary materials. The performance advantages are not huge, but the authors have detailed experiments and analysis of changed features that support their claims. Weaknesses: The proposed method uses the saliency to calculate and discriminate changed features under environmental changes. Their experimental results and analysis include only one of the task-irrelevant or task-relevant feature cases. The proposed method is dependent on the saliency of the learned policy. Additional analysis of policy dependency would be helpful to support the authors' claims. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Additional experiments and analysis of mixtures of changed task-irrelevant feature cases and task-relevant feature cases should be included. Additional analysis of policy dependency would be helpful to support the authors' claims. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors describe limitations, but more insights about limitations in real situations that include mixture of changes of task-irrelevant feature cases and task-relevant feature cases would be helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are encouraged that Reviewer SPDk finds our method novel and the idea well-grounded. **Reply to the weakness** >**W1. The proposed method uses the saliency to calculate and discriminate changed features under environmental changes. Their experimental results and analysis include only one of the task-irrelevant or task-relevant feature cases.** Firstly, we wish to clarify that variations in task-relevant or task-irrelevant features are widespread in the real world. Therefore, many works focus on one of these situations and propose novel ideas [1-4]. Second, our work improves generalization in both cases and achieves comparable performance to other algorithms in their focus scenarios. Lastly, as suggested, we evaluate the most challenging environment where both task-relevant and task-irrelevant features are changed. Specifically, in the MT-cheetah-run task, both torso length and background noise varied during testing. Similarly, in the MT-finger-turn task, the robot arm's length and background noise changed. Neither the changed background nor the robot's configuration was present in the training environments. | Tasks | SGFD | TED | AMBS | SGQN | DBC | DR | |-------|-------|-------|-------|-------|-------|-------| | Mix-cheetah-run | $\mathbf{378.6 \pm 36.2}$ | 308.6 $\pm$ 47.6 | 265.6 $\pm$ 37.4 | 161.2 $\pm$ 47.5 | 171.2 $\pm$ 36.2 | 189.4 $\pm$ 46.3 | | Mix-finger-turn | $\mathbf{899.3 \pm 29.5}$ | 726.2 $\pm$ 52.2 | 802.5 $\pm$ 53.1 | 632.9 $\pm$ 52.8 | 321.3 $\pm$ 61.6 | 605.6 $\pm$ 57.8 | From the results, the advantage of feature decorrelation is further amplified, and our method outperforms the baseline by 28% when the changes in both cases co-occur. This is because our method can handle the generalization problem of task-relevant and task-irrelevant cases, while previous work usually only focuses on one. 
In addition, feature decorrelation encourages the model to recover the true associations between the changed features and the optimal behavior and thus be able to adjust actions in unseen environments correctly. We will add this experiment to the latest version. >**W2. The proposed method is dependent on the saliency of the learned policy. Additional analysis of policy dependency would be helpful to support the authors' claims.** Based on the reviewer's comment on the weakness, we speculate that the "policy dependency" mentioned here means that the policy's generalization depends on the accuracy of the saliency map. In the paper, we performed two types of experiments directly related to the saliency map. The first class of experiments is shown in Figures 4 and 6 in the main text, which test the performance of our method without the assistance of saliency maps. As shown in Figure 4 (b), without the guidance of the saliency map, the decorrelation ability of the model is obviously limited, and this result also corresponds to the generalization in Figure 6 (b). Another type of experiment is Figure 9 in the appendix, which focuses on testing the accuracy of the classifier. From the results, the classifier achieves an accuracy of over 0.9 within 1e5 steps in each task, which demonstrates that saliency maps can accurately identify features that shift across environments. **Reply to questions** >**Q1. Additional experiments and analysis of mixture of changes of task-irrelevant feature cases and task-relevant feature cases would be included.** According to the suggestions, we evaluate our method on the mixture of changes of task-irrelevant feature cases and task-relevant feature cases. The experimental results can be found in the response to W1. >**Q2. Additional analysis of policy dependency would be helpful to support the authors claims.** Similarly, we discussed the "policy dependency" in our response to W2. 
**References** [1] Temporal disentanglement of representations for improved generalisation in reinforcement learning. ICLR 2023. [2] Look where you look! saliency-guided q-networks for generalization in visual reinforcement learning. NeurIPS 2022. [3] Learning generalizable representations for reinforcement learning via adaptive meta-learner of behavioral similarities. ICLR 2022. [4] Learning robust state abstractions for hidden-parameter block mdps. ICLR 2021. --- Rebuttal Comment 1.1: Title: Follow-up to Reviewer SPDk Comment: Dear Reviewer SPDk, We value your positive feedback and constructive suggestions for our paper and sincerely appreciate your effort in reviewing it. As the end of the discussion is approaching, we kindly request your consideration regarding the possibility of raising the score. We thank the reviewer's effort in the review of our paper. We hope we have effectively addressed all the concerns raised. Should there be any remaining concerns, we stand ready to offer additional clarifications. Thank you again for your dedicated review and invaluable insights. Kind regards, Paper1844 Authors
Summary: This paper deals with a novel sample re-weighting method designed to enhance generalization in visual reinforcement learning tasks across environments with unseen task-irrelevant and task-relevant features. SGFD is composed of two core components: RFF and a saliency-guided model. The paper presents results comparisons on standard datasets to establish the claim. This is a very important problem w.r.t. many downstream tasks of agents, where the main reason for failure is a lack of generalization in learning. Strengths: Code and supplementary material content are good. The paper is well written with an example in the intro. Appendix A and B are the crux of this paper in terms of contributions. Weaknesses: Related work needs an overhaul in terms of gaps, and should include only closely related prior art. The contribution section should be re-written and made crisp, down to two main claims. Apart from Pearson (line 218), other correlation checks could be put into the function. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Fig. 2 shows weighted samples; why those weights, and what value do they add? It is not clear why HSIC was discussed in the first place (line 142). Isn't Figure 3 too perfect w.r.t. other approaches? Why so? I would also like to see the worst results in the supplementary to see the variance. Table 2 - in terms of metric results there exist 2 clusters (walker-walk, finger-spin, finger-turn) and (cheetah-run, walker-run). Is there any analysis for this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is limited to the specific datasets it was trained on; otherwise it is adaptable. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are particularly encouraged that Reviewer jEbK finds our method novel and effective. **Reply to the weakness** >**W1. Related work needs an overhaul in terms of gaps.** Building on the work discussed in Section 2, we further complement related work that generalizes to novel environment configurations. These works are typically studied with procedurally generated environments [1][2]. Some approaches to this problem leverage techniques from supervised learning, such as regularization, curriculum strategies, hyper-parameter tuning, and using self-supervised objectives [3-5]. Recurrent Independent Mechanisms leverage modules to learn a state function, improving out-of-distribution generalization [6]. Feature-Attending Recurrent Modules showed that a modified attention mechanism led to strong generalization improvements with RL [7]. Our approach differs from these approaches in that we make no assumptions about the structure of task rewards or states, exploiting feature decorrelation to improve generalization. >**W2. Contribution section please re-write and make it crisp to two main claims.** According to the suggestion, we revised the contribution part and made it crisp to two main claims. Summary of Contributions: (1) We propose the SGFD model that utilizes a sample reweighting method to improve generalization in visual RL tasks, covering both task-relevant and task-irrelevant situations. (2) Experimental results demonstrate that SGFD generalizes well on a wide range of novel environments and significantly outperforms state-of-the-art methods in handling task-relevant variations. >**W3. Apart from Pearson, other correlation checks could be put.** We additionally introduce Spearman's rank correlation coefficient to test our method. 
Since Spearman's rank correlation coefficient cannot be directly applied in the case of weighted data, we first randomly select 128 samples based on the distribution of weighted values and calculate the standard Spearman's rank correlation coefficient based on these 128 samples. To save space, we take Figure 7 in the appendix as an example and calculate the correlation between the changed features ($s_8-s_{10}$) with other features ($s_1-s_7$). Similar to the Pearson correlation coefficient, we control the value range between 0 and 1, where 1 means complete correlation and 0 means independent of each other. |Unweighted samples|$s_1$|$s_2$|$s_3$|$s_4$|$s_5$|$s_6$|$s_7$| |-|-|-|-|-|-|-|-| |$s_8$|0.81|0.72|0.78 |0.79|0.74|0.74|0.92| |$s_9$|0.81|0.82|0.89 |0.78|0.86|0.82|0.84| |$s_{10}$|0.79|0.75|0.80|0.91|0.87|0.89|0.74| |Weighted samples|$s_1$|$s_2$|$s_3$|$s_4$|$s_5$|$s_6$|$s_7$| |-|-|-|-|-|-|-|-| |$s_8$|0.43|0.16|0.31|0.24|0.35|0.14|0.20| |$s_9$|0.28|0.27|0.31|0.18|0.28|0.24|0.39| |$s_{10}$|0.28|0.32|0.31|0.36|0.31|0.38|0.24| From the results, $s_8-s_{10}$ and $s_1-s_7$ show a strong correlation in the unweighted data, which is significantly reduced after reweighting. **Reply to questions** >**Q1. Fig. 2 weighted samples in figure, why those weights and what value it adds?** The weights in Figure 2 could eliminate the correlation of features in the image, which is beneficial to improve the generalization. For ease of understanding, we discuss the motivation for feature decorrelation in Figure 1 in the general response PDF file. Consider a classifier trained on images labeled 'cat': three with sofas and one with grass in the background. The model may falsely link sofas to cats due to frequent pairing. To mitigate this issue, we can reweight the images to decorrelate the features between backgrounds and animals. In RL, features like varied background noise can correlate with the robot's state, as in Figures 5 (main text) and 11 (Appendix). 
Such correlations may lead agents to form spurious associations, altering optimal actions with background shifts. >**Q2. It is not clear why HSIC was discussed in the first place (line 142).** The HSIC is the theoretical motivation behind our approach. Therefore, we introduce HSIC, leading to our correlation evaluation method based on the Random Fourier Function. >**Q3. Isn't Figure 3 too perfect w.r.t. other approaches? Why so?** Figure 3 does not represent that our method achieves perfect results on every task. Specifically, each polygon vertex denotes a task, and we normalize all algorithms' performances against the state-of-the-art result. Figure 3 is a visualization of Table 1 and Table 2, so the variance of each task can be seen in Table 1 and Table 2. In addition, we also record the learning curve, and the shaded part of the curve also shows the variance of the performance, as shown in Fig. 12 and Fig. 13 in the appendix. >**Q4. Table 2 - there exist 2 clusters (walker-walk, finger-spin, finger-turn) and (cheetah-run, walker-run). Is there any analysis for this?** Based on the description, the reviewer may be talking about the results in Table 1. Each task in Table 1 is derived from the DeepMind Control Suite [8], which is a widely used benchmark. In this benchmark, each task has an independent reward setting, e.g., 600 is a bad score in finger-spin, but it is close to the optimal solution in cheetah-run. Overall, our method achieves decent results on every task. Therefore, two clusters are observed because they have similar maximum reward settings. **References** [1] Leveraging procedural generation to benchmark reinforcement learning. ICML 2020. [2] MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research. NeurIPS 2021. [3] Reinforcement learning with augmented data. NeurIPS 2020. [4] Generalization in reinforcement learning with selective noise injection and information bottleneck. NeurIPS 2019. 
[5] Procedural generalization by planning with self-supervised world models. ICLR 2022.

[6] Recurrent independent mechanisms. 2019.

[7] Feature-attending recurrent modules for generalization in reinforcement learning. ICLR 2022.

[8] DeepMind Control Suite. 2018.

---

Rebuttal Comment 1.1:

Title: Intermediate comments

Comment: Thanks for the clarifications. I am sticking with my initial score while other reviewers are updating.
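As an aside for readers, the resampling-based weighted Spearman estimate used in the rebuttal above can be sketched in a few lines. This is a minimal illustration under our own assumptions: the function name and arguments are hypothetical, and we map the coefficient into [0, 1] via its absolute value, mirroring the convention stated in the rebuttal.

```python
import numpy as np

def weighted_spearman(x, y, weights, n_resample=128, seed=0):
    """Resample according to the weight distribution, then compute the
    standard Spearman rank correlation on the resampled points."""
    rng = np.random.default_rng(seed)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize weights to a distribution
    idx = rng.choice(len(w), size=n_resample, replace=True, p=w)
    xs, ys = np.asarray(x)[idx], np.asarray(y)[idx]
    rank = lambda v: np.argsort(np.argsort(v))   # ranks (ties ignored for brevity)
    rho = np.corrcoef(rank(xs), rank(ys))[0, 1]  # Pearson on ranks = Spearman
    return abs(rho)                              # 1 = fully correlated, 0 = independent
```

With uniform weights this recovers an ordinary Spearman estimate on a subsample; non-uniform weights bias the subsample toward the reweighted distribution, which is the effect measured in the tables above.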
Summary: This paper proposes SGFD, a novel approach aimed at improving generalization by distinguishing task-irrelevant from task-relevant situations. SGFD leverages two core techniques, Random Fourier Functions (RFF) and the saliency map, to estimate the complex non-linear correlations in high-dimensional images and identify the changed features to achieve decorrelation. Furthermore, it conducts experiments on two benchmarks to exhibit its improved generalization ability.

Strengths:
1. The proposed method has the capability to address problems in two distinct settings.
2. I think utilizing the saliency map to further achieve decorrelation is an interesting idea.

Weaknesses:
1. Figure 2 is confusing. This figure is meant to illustrate the core content of the paper, and its unclear presentation undermines the overall comprehensibility. For instance, it would be clearer if the authors had depicted only one type of environment. The inclusion of two types of environments creates ambiguity as to whether the training was performed in one or both environments.
2. This paper lacks a clear logical flow in the writing, particularly when it comes to the technical details.
3. The performance of the proposed SGFD method does not seem to significantly surpass that of other algorithms. The overlap within the standard deviation intervals between SGFD and other algorithms suggests that the advantages of SGFD are not as pronounced as might be expected.

Technical Quality: 2 fair
Clarity: 2 fair

Questions for Authors:
1. Can you visualize examples of training environments and test environments? I don't know the gap between these two environments and how difficult it is to generalize to the novel scenes.
2. Can you illustrate Figure 4 to me? I think you need to decorrelate the association between features and the policy, not among different features. But Figure 4 seems to decorrelate the features.
3. Why does the calculated $w^*$ only apply to the actor loss? Why not the critic?
4.
Is it possible to apply SGFD in environments which contain both task-relevant and task-irrelevant features? If you can address my questions, I would consider increasing the score.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair

Limitations: Mentioned in weaknesses and questions.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We thank the reviewer for the positive appraisal, insightful comments, and criticism of our paper.

**Reply to the weakness**

>**W1. Figure 2 is confusing.**

We split the original Figure 2 into two figures in the global response PDF: Figure 1 discusses the motivation for feature decorrelation, while Figure 2 details the technical flow.

**Figure 1: the motivation for feature decorrelation.** Consider a classifier trained on images labeled 'cat': three with sofas and one with grass in the background. The model may falsely link sofas to cats due to frequent pairing. To mitigate this issue, we can reweight the images to decorrelate the background features from the animal features. In RL, features like varied background noise can correlate with the robot's state, as in Figures 5 (main text) and 11 (Appendix). Such correlations may lead agents to form spurious associations, altering optimal actions with background shifts.

**Figure 2: the architecture of SGFD.** Essentially, SGFD aims to reduce correlations in image features by reweighting samples. This involves five steps: (1) We fetch a sample batch from the replay buffer, which may come from multiple environments with different backgrounds or robot configurations. (2) The image in each sample is compressed by an encoder into latent features. (3) Then, we augment these features using multiple Random Fourier Functions to capture nonlinear correlations. (4) Concurrently, we train a classifier and apply saliency maps to detect features that shift across environments. (5) Finally, SGFD reweights samples to eliminate the correlations between the identified features and the other features.

>**W2. This paper lacks a clear logical flow in the writing, particularly when it comes to the technical details.**

Our writing logic is as follows. In the Introduction, we first discuss generalization when task-relevant and task-irrelevant features are shifted, and point out that it can be achieved by feature decorrelation.
Then, the Preliminaries section explains how the learned encoder obtains the image features. The Method section starts by using the Random Fourier Function to assess feature correlation and elaborates on decorrelation through sample reweighting. It then delves into the challenge of decorrelating all feature pairs, introducing the saliency map to focus on shifted features. Finally, our experiments in Sections 5.2 to 5.4 assess generalization and feature decorrelation, and conduct ablation studies. This logical flow will be clearer in the updated version with the revised Figure 2.

>**W3. SGFD does not seem to significantly surpass other algorithms.**

The advantages of our SGFD method are reflected in two aspects. Firstly, it improves generalization in both task-relevant and task-irrelevant cases, while previous work usually only focuses on one. Secondly, it shows clear superiority in task-relevant cases, outperforming the state-of-the-art method by 23% on average, as shown in Table 2. This is because feature decorrelation encourages the model to form true associations with the features and thus correctly adjust actions in unseen environments.

**Reply to questions**

>**Q1. Can you visualize the training and test environments?**

We visualize the environments in the global response PDF file. As shown in Figure 3, the model is trained in 4 environments with different task-irrelevant or task-relevant features. During testing, the model is deployed in unseen environments without extra training. The differences between the training and testing settings follow the baseline algorithms.

>**Q2. Can you illustrate Figure 4? Why eliminate the correlation between features?**

**Illustration of Figure 4:** Figure 4 depicts a 10x10 heatmap visualizing the correlations between ten features ($s_1$-$s_{10}$) in a robotic-arm grasping task.
To compute the correlation of weighted data, we fetch a sample batch from the weighted distribution and then calculate the Pearson correlation coefficient. The model can access the true states ($s_1$-$s_{10}$), each with physical meaning. Notably, $s_{10}$ denotes the object's size, which varies across environments. For the RL model to generalize to objects of unobserved sizes, it is crucial to assess the correlations between $s_{10}$ and $s_1$-$s_9$. As highlighted in Figure 4, our method successfully reduces this correlation.

**Why decorrelation:** Due to the word limit, please refer to the motivation for feature decorrelation in the W1 response.

**Decorrelating associations between features and policies:** In fact, feature decorrelation leads the policy to decorrelate from some features. As in the example in the reply to W1, the policy will eliminate the dependence of decision-making on the background when the semantics of the images are independent of the background.

>**Q3. Why does the calculated $w$ only apply to the actor? Why not the critic?**

With the actor loss under the effect of $w^*$, the model can recover the true associations between the changed features and the optimal actions. In the testing phase, we only deploy the policy network to the new environment without transferring the critic; thus, $w^*$ does not need to change the critic loss.

>**Q4. Can you apply SGFD when both task-relevant and task-irrelevant features change?**

As suggested, we tested environments where both task-relevant and task-irrelevant features are changed. Specifically, in the MT-cheetah-run task, both the torso length and the background noise varied during testing. Similarly, in the MT-finger-turn task, the robot arm's length and the background noise changed.
|Tasks|SGFD|TED|AMBS|SGQN|DBC|DR|
|-|-|-|-|-|-|-|
|Mix-cheetah-run|$378.6±36.2$|308.6±47.6|265.6±37.4|161.2±47.5|171.2±36.2|189.4±46.3|
|Mix-finger-turn|$899.3±29.5$|726.2±52.2|802.5±53.1|632.9±52.8|321.3±61.6|605.6±57.8|

From the results, the advantage of feature decorrelation is further amplified when both kinds of changes co-occur, outperforming the best baseline by 28%. We will add this experiment to the latest version.

---

Rebuttal Comment 1.1:

Title: Reply to Authors

Comment: Thanks for your reply. I have several other questions. Regarding Q3, would the calculated weight be effective on any objective? Why must it only be placed on the actor? It feels to me that there is some disconnect between how to obtain $w$ and how to apply it to RL. Moreover, your answer to Q3 seems subjective; many generalization papers (e.g., SVEA, SGQN) have made modifications to the critic objective, and they are also very effective. Regarding Q4, the effect on finger-turn appears very good, but in fact, you have not shown any visualizations beyond the cheetah task, leaving the reader in the dark about how you modified your environment.

---

Reply to Comment 1.1.1:

Title: Reply to the Reviewer NDX8

Comment: We thank Reviewer NDX8 for the further response to our rebuttal. We want to further clarify some of the questions below.

**Q1. Why must $w$ only be placed on the actor? Would the calculated weight be effective on any objective?**

**Why only apply $w$ to the actor:** It is possible to apply $w$ to the critic. We apply $w$ to the actor for the following two reasons.

- The core idea of sample reweighting is to eliminate the correlation between features and thereby help the policy distinguish the impact of each image feature on decision-making. Since the actor directly controls the training of the policy, incorporating $w$ into the actor's loss is an intuitive and efficient method.
Although related works incorporate modifications of the critic loss into the algorithm, in our method the critic can only indirectly pass the benefit of $w$ to the policy, which is demonstrated in the experiments (see the next point).
- We conduct ablation experiments to test generalization under four settings: no $w$, $w$ only on the actor, $w$ only on the critic, and $w$ on both the actor and the critic. As shown in Table 1, when $w$ acts on the actor, generalization is significantly improved, while the gain realized through the critic is relatively modest. This is because the actor loss directly controls the learning of the policy, so $w$ applied to the critic alone cannot prevent the policy from establishing false connections with the background noise. In addition, we do not observe significant improvements when adding $w$ to both the actor and the critic compared to adding $w$ only to the actor. This further demonstrates that an actor trained with $w$ adequately mitigates the spurious associations caused by correlations between features.

**Table 1:** The generalization of $w$ acting on different parts of the algorithm under unseen background noise.

| | both the actor and critic | only the actor | only the critic | no $w$ |
|-------|-------|-------|-------|-------|
| walker-walk | 956.3 | $\mathbf{959.1}$ | 915.2 | 906.7 |
| cheetah-run | $\mathbf{606.5}$ | 599.6 | 521.6 | 517.7 |
| finger-spin | 958.6 | $\mathbf{965.7}$ | 905.5 | 895.5 |

**Would the calculated weight be effective on any objective?** Although the SAC algorithm is the backbone of our method, the form of sample reweighting makes it possible to transfer to other RL algorithms. Based on the experimental results, we propose applying $w$ to the loss directly associated with the policy. For RL methods without policy networks, we will further explore the application of $w$ in future work.

**Q2.
The visualizations of finger-turn.**

Due to the page limit of the global rebuttal, we only visualized the configuration of the cheetah. Following the official regulations of NeurIPS 2023, we provided an anonymous link in the AC comment to visualize the configurations of the three robots used in our experiments. After the AC's confirmation of anonymity, you can see it in the comments at the top. Specifically, the settings of the finger and the walker follow [1], which modifies the finger's length and the walker's foot. For background noise, the finger and cheetah settings are similar to those of the walker shown in the global rebuttal PDF (Figure 3 (a)). The visualization will be added to the revised version.

**Reference**

[1] Learning robust state abstractions for hidden-parameter block MDPs. ICLR 2021.

-----

We hope that our response addresses your comments, and we would appreciate it if you considered increasing your score.

Paper1844 Authors
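For intuition, the sample-reweighting idea debated in this thread can be sketched as a direct optimization of per-sample weights that shrink the weighted correlation between two features. This is a toy illustration under our own assumptions (softmax-parameterized weights, a squared-correlation objective, and a finite-difference gradient), not the paper's actual objective or optimizer.

```python
import numpy as np

def weighted_corr(theta, x, y):
    """Weighted Pearson correlation, with sample weights w = softmax(theta)."""
    w = np.exp(theta - theta.max()); w = w / w.sum()
    mx, my = w @ x, w @ y                       # weighted means
    cov = w @ ((x - mx) * (y - my))             # weighted covariance
    vx, vy = w @ ((x - mx) ** 2), w @ ((y - my) ** 2)
    return cov / np.sqrt(vx * vy + 1e-12)

def decorrelate(x, y, steps=150, lr=2.0, eps=1e-4):
    """Toy reweighting: descend on corr^2 with a numerical gradient,
    tracking the best (lowest) absolute weighted correlation seen."""
    theta = np.zeros(len(x))
    best = abs(weighted_corr(theta, x, y))
    for _ in range(steps):
        c0 = weighted_corr(theta, x, y)
        g = np.zeros_like(theta)
        for i in range(len(theta)):             # finite-difference gradient
            t = theta.copy(); t[i] += eps
            g[i] = (weighted_corr(t, x, y) ** 2 - c0 ** 2) / eps
        theta -= lr * g
        best = min(best, abs(weighted_corr(theta, x, y)))
    return theta, best
```

In practice, methods of this family replace the finite-difference loop with automatic differentiation and decorrelate many feature pairs jointly; the sketch only shows why reweighting can drive a correlation toward zero.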
Rebuttal 1:

Rebuttal: Dear Reviewers,

We are very grateful to the reviewers for their valuable suggestions, which further improved our work. We provide three visualizations of our motivation, technique, and environmental settings in the submitted 1-page PDF.

- Figure 1: The motivation for the feature decorrelation used in our method.
- Figure 2: The technical flow of our method.
- Figure 3: The visualization of the training and testing environments.

Thank you again for your careful review and helpful comments.

Kind regards,

Paper1844 Authors

Pdf: /pdf/06eebcde4a430c316b452d4b876446487e239479.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a method to weight experiences in reinforcement learning so that the state features they depict are maximally decorrelated. The assumption is that learning with decorrelated features leads to better generalization to unseen task instances because the policy does not confound the roles of different features when making decisions. The idea is novel and interesting, and the experimental evaluation includes several simulated embodied AI tasks and comparisons to other baselines.

Strengths:
- The presented method is novel. The ideas of the paper are well-grounded.
- The method seems mathematically sound. The appendix includes all relevant proofs.
- The experimental evaluation, including several baselines and different tasks, demonstrates the strengths of the algorithm.
- The problem addressed in the paper is critical for the robot learning community: improving the generalization capabilities of trained policies.

Weaknesses:
- The writing could be improved. The text focuses on describing the steps but would benefit from explaining more of the reasoning behind the design decisions.
- The assumptions on the problem structure are unclear. What type of relationship between features and actions, and features-to-features, should be present in the problem? What happens if features are not (or cannot be) decorrelated?
- It is unclear what the features are. I can't find that information either in the text or in the appendix; maybe I'm missing it. Please clearly discuss what the features are and where they come from. Include it in the main text, e.g., in the experimental setup description. Explain and discuss how they are selected/created. In these times of representation learning, especially for vision-based control, is this a strong limitation of the method?
- Fig. 2 is quite complex. What are the colors of the "enriched features" indicating? What are the "G"s indicating?
This figure could serve as a good summary of the method, but it needs to be improved for that and include a more complete caption.

Technical Quality: 3 good
Clarity: 2 fair

Questions for Authors:
- Explain the features per task and where they come from. Discuss the applicability of the method if it requires pre-specifying optimal features. How would the method work with learned latent features? Could you include an evaluation of that?
- Can you include completely distracting features to evaluate the robustness of the solution?
- Please discuss what would be necessary for this method to work directly with images as input. It feels strange to only go "the other way", from features to images (saliency), in order to check when they are co-occurring, while there is no discussion of the normal direction: from images to features.
- How does the method scale with the number of features? And with the number of samples? What is the computational cost? Can you experiment with that and provide some evaluation?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good
Presentation: 2 fair
Contribution: 3 good

Limitations:
- The limitations section is acceptable, but several significant limitations for the method to be truly impactful are not discussed (see my comments above). Please include them.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We are particularly encouraged that the reviewer finds our method novel and effective.

**Reply to the weakness**

>**W1. Explain more of the reasoning behind the design decisions.**

As suggested, we introduce an example to explain why feature decorrelation can promote generalization, as shown in Figure 1 of the global response PDF file.

**Figure 1: the motivation for feature decorrelation.** Consider a classifier trained on images labeled 'cat': three with sofas and one with grass in the background. The model may falsely link sofas to cats due to frequent pairing. To mitigate this issue, we can reweight the images to decorrelate the background features from the animal features. In RL, features like varied background noise can correlate with the robot's state, as in Figures 5 (main text) and 11 (Appendix). Such correlations may lead agents to form spurious associations, altering optimal actions with background shifts. This example will first be presented in the Method section and then naturally lead to the motivation for feature decorrelation.

>**W2. What type of relationship between features and actions, and features-to-features, should be present? What happens if features cannot be decorrelated?**

**Feature-to-action and inter-feature relationships.** We aim to recover the true association between features and actions, and for inter-feature relationships, we target their decorrelation. In W1's example, sample reweighting decorrelates the robot's state from the background, helping the model recognize that the true association between the background features and the actions is 0.

**What happens if features cannot be decorrelated?** As shown in Figure 6 (a), generalization is hurt if the features are not decorrelated. In practice, due to limited data, sample reweighting often yields approximately independent results rather than completely independent ones, as shown in Figure 4.

>**W3.
What the features are and where they come from.**

Our method takes images as inputs, extracting latent features via an encoder trained alongside the RL models. We then leverage the saliency map to identify the features that vary across environments. In the main text, latent features are first mentioned on line 124, and the structure of the encoder is described in line 478 of the appendix.

>**W4. Fig. 2 is quite complex. What are the colors of the "enriched features" indicating? What are the "G"s indicating?**

The term "enriched features" refers to latent features mapped by multiple RFFs to assess nonlinear correlation. The symbol "G", initially representing the robot's center of gravity, has been removed to prevent confusion. This is clarified in the updated Figure 2 in the global response PDF. We have replaced "enriched features" with "RFF map" for clarity, referencing Equation (4). The architecture now aligns better with the method's two subsections.

**Figure 2: the architecture of SGFD.** Essentially, SGFD aims to reduce correlations in image features by reweighting samples. This involves five steps: (1) We fetch a sample batch from the replay buffer, which may come from multiple environments with different backgrounds or robot configurations. (2) The image in each sample is compressed by an encoder into latent features. (3) Then, we augment these features using multiple RFFs to capture nonlinear correlations. (4) Concurrently, we train a classifier and apply saliency maps to detect features that shift across environments. (5) Finally, SGFD reweights samples to eliminate the correlations between the identified features and the other features.

**Reply to questions**

>**Q1. How would the method work with learned latent features? Could you include an evaluation on that?**

As mentioned in W3, our method processes images directly, with the encoder following the AMBS [1] update method alongside the RL models. We will detail the experimental setup in the main text, not just the appendix.
>**Q2. Can you include completely distracting features to evaluate the robustness of the solution?**

In Table 1, the test environments contain fully distracting features. The inputs are 3x84x84 RGB pixel values, perturbed by real photos. We visualized the training and test environments in Figure 3 of the global response PDF file.

>**Q3. What would be necessary for this method to work directly with images as input?**

Our method takes images as input and uses an encoder to transform pixels into latent features. This encoder serves both the classifier and the RL models. In the experiments, the gradients from the saliency maps are only propagated to the latent features, not to the initial image layer.

>**Q4. How does the method scale with the number of features? And with the number of samples? What is the computational cost?**

**Number of features:** As the inputs are images, the number of features is the output dimension of the encoder. When accessing the ground-truth state of the environment, we demonstrate the decorrelation ability of the model for different numbers of changed features in Figure 4 of the main text and Figure 7 of the appendix.

**Number of samples and computational cost:** The computational cost grows linearly with the batch size. Using an off-policy RL algorithm, the cost is $O(N \times B)$, with $N$ as the number of environment steps and $B$ as the sample batch size. To clarify the computational cost, we timed training with and without SGFD for batch sizes 64, 128, and 256. The total number of steps is 1 million, tested on an NVIDIA RTX 3090 GPU and an Intel Xeon Platinum 8255C CPU at 2.50 GHz.

|batch size|64|128|256|
|-|-|-|-|
|SGFD|13 h|16 h|22 h|
|no SGFD|12 h|15 h|20 h|

The results show that SGFD indeed brings extra computational cost due to the sample reweighting for decorrelation. However, compared to the base RL algorithm, SGFD brings significant generalization gains at an extra cost of less than 10%.
**References**

[1] Learning generalizable representations for reinforcement learning via adaptive meta-learner of behavioral similarities. ICLR 2022.

---

Rebuttal Comment 1.1:

Comment: Thanks for the replies; they clarify some misconceptions. I have raised my score.
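As background for readers, the RFF-based nonlinear correlation assessment mentioned in the rebuttal above can be sketched as follows. This is a minimal illustration under our own assumptions (an RBF-kernel feature map in the style of Rahimi & Recht and a mean-absolute-Pearson proxy); the paper's exact estimator may differ.

```python
import numpy as np

def rff_map(x, n_features=32, sigma=1.0, seed=0):
    """Random Fourier features approximating an RBF kernel:
    phi(x) = sqrt(2/D) * cos(x W + b), W ~ N(0, 1/sigma^2), b ~ U(0, 2*pi)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float).reshape(-1, 1)
    W = rng.normal(0.0, 1.0 / sigma, size=(1, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ W + b)

def nonlinear_corr(f1, f2, seed=0):
    """Mean absolute Pearson correlation between RFF-mapped features,
    a simple proxy for possibly non-linear dependence between f1 and f2."""
    a, b = rff_map(f1, seed=seed), rff_map(f2, seed=seed + 1)
    d = a.shape[1]
    c = np.corrcoef(a.T, b.T)[:d, d:]  # cross-correlation block
    return float(np.abs(c).mean())
```

The point of the mapping is that a pair such as `f1` and `f1**2` has near-zero plain Pearson correlation, yet the RFF proxy still registers their dependence, which is what motivates augmenting latent features before decorrelation.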
Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator
Accept (poster)
Summary: This paper presents a novel zero-shot text-to-video generation method that leverages an LLM as a director to generate per-frame prompts fed into a pre-trained text-to-image model such as Stable Diffusion. In order to retain temporal coherence, the authors propose (1) a noise distribution consisting of an interpolation of global and local noise, and (2) attending to prior frames in the self-attention layers during the generation process. In addition, they propose a dual interpolation method to generate higher-fps video.

Strengths:
- The paper is well written and easy to understand.
- The presented method is novel and interesting, and outperforms SOTA zero-shot text-to-video methods, as well as SOTA trained video generation methods, in general frame fidelity and visual semantics, while requiring no text-video training itself.
- This paper presents an interesting direction in leveraging an LLM's understanding of the world through language for video generation.

Weaknesses:
- Quantitative results / evaluation are a little weak, given the relatively small sample size (20 prompts, 4 videos each).
- Results of ablations are shown qualitatively, and could be improved through more rigorous quantitative evaluations.

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
- How is $\lambda$ chosen during generation? Is it fixed, or does it change over the video timestep?
- How does this method behave in scenes with more content (e.g., more people, more animals)? Can the LLM director + SD model distinguish specifications of motions between two humans doing different things?
- How about scenes with larger shifts in motion (e.g., a camera panning to the left)? In this case frame 0 (concatenated to the self-attention) would be less useful.
- Or more complex motion that may be difficult to describe in words (e.g., each step in a dance)?

Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 3 good
Presentation: 3 good
Contribution: 3 good

Limitations: Yes, the authors discuss limitations.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> **Benchmark**: Quantitative results/evaluation are a little weak, given the relatively small sample size.

Thanks for pointing this out. Standard benchmarks for video generation, especially zero-shot video generation, are still being built up in the community. The videos in our benchmark are diverse, including Text2Video-Zero prompts and prompts with rich semantic content (such as showcasing a flower blooming, saying hello, ice melting, etc.). Here, we enlarge our benchmark to 68 prompts with 4 videos each, given the time limitations. Note that both the quantity and diversity of videos in our benchmark exceed those of previous methods.

| Method | Training-Free | CLIP Metrics↑ | Fidelity↑ | Temporal↑ | Semantic↑ | Rank↓ |
| --- | :---: | :---: | :---: | :---: | :---: | :---: |
| VideoFusion [1] | - | 0.471 | 3.386 | 3.914 | 3.209 | 2.368 |
| LVDM [2] | - | 0.476 | 3.276 | 3.674 | 3.206 | 2.558 |
| T2V-Zero [3] | √ | 0.482 | 3.469 | 2.721 | 3.003 | 3.033 |
| ours | √ | 0.471/0.483* | 4.185 | 3.318 | 3.909 | 2.042 |

[1] Luo, Zhengxiong, et al. "VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023.

[2] He, Yingqing, et al. "Latent Video Diffusion Models for High-Fidelity Video Generation with Arbitrary Lengths." *arXiv preprint arXiv:2211.13221* (2022).

[3] Khachatryan, Levon, et al. "Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators." *arXiv preprint arXiv:2303.13439* (2023).

> **Quantitative ablations**: Results of ablations are shown qualitatively, and could be improved more through more rigorous quantitative evaluations.

Thanks for your valuable feedback. Per your advice, we conducted additional ablation studies and evaluated their performance with both human evaluation and automatic metrics.
As presented, across all dimensions, the absence of either the noise sampling module or the attention shifting module significantly degrades the performance, and the lack of both results in extremely poor results.

|Serial Prompting|Joint Noise Sampling|Shifting Self-Attention|CLIP Metrics↑|Temporal↑|Identical↑|Rank↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|√|-|-|0.4429 / 0.4661|1.62|1.95|3.66|
|√|√|-|0.4482 / 0.4712|1.83|2.30|3.04|
|√|-|√|0.4509 / 0.4736|2.52|2.71|2.19|
|√|√|√|0.4521 / 0.4773|3.67|3.46|1.11|

> **$λ$**: How is $λ$ chosen during generation? Is it fixed, or does it change over the video timestep?

The hyperparameter setting $λ=0.2$ is recommended for most cases. $λ$ is fixed and does not change over the video timestep. $\lambda$ is used in the joint noise sampling module to regulate the ratio between the independent noise and the united noise. Each frame's noise is a weighted combination of frame-wise independent noise and one unified video-level noise, where the weights of the two noises are the same across all frames.

> **Scenes with more content**: How does this method behave in scenes with more content (e.g., more people, more animals)? Can the LLM director + SD model distinguish specifications of motions between two humans doing different things?

We supplemented additional examples of scenes with more subjects in Figure 5 of the one-page PDF. As shown in the case "a dog is running and then another dog joins", our pipeline performs well on simple scenarios involving two or three objects performing similar actions. However, for complex scenes where different subjects with diverse features perform distinct actions, such as the case "A man is dancing and a woman is watching", our method does indeed face limitations. In our practice, LLMs can reliably separate and describe the detailed actions of the subjects within each sentence.
In contrast, the current version of LDMs struggles to associate each action with the corresponding subject. If a more powerful LDM emerges in the future, we believe our pipeline holds the potential to handle such scenes as well.

> **Scenes with larger shifts in motion**: How about scenes with larger shifts in motion (e.g., a camera panning to the left)? In this case frame 0 (concatenated to the self-attention) would be less useful.

Thank you for considering more general applications and scenes. Unfortunately, this challenging camera panning lies beyond the scope and capability of our research, posing a significant challenge even for models trained on massive video data. The reason is that, with camera panning, the model is expected to construct, or at least be able to imagine, a 3D scene from a single frame, which is very challenging. Also, current LDMs remain unresponsive to layout descriptions, hindering the realization of scene shifting even with detailed prompts like "subject on the left", "… in the middle", "… on the right", etc.

> **Complex motion in words**: Or more complex motion that may be difficult to describe in words (e.g., each step in a dance)?

Our pipeline is capable of generating corresponding results for some straightforward action sequences, such as raising and lowering the right hand, lifting the left foot, and similar motions. Figure 6 in the one-page PDF shows these cases. However, our method is indeed limited in cases of intricate dance movements, where pose details are required. On the one hand, LLMs often fail to provide prompts at a fine enough granularity, which is also a limitation of using natural language as the condition. For example, for the input "a man is hip hop dancing", the LLMs are likely to provide vague and high-level state descriptions like "does the 'wave' by creating a ripple effect through his body".
On the other hand, the LDMs cannot generate images that match prompts with highly detailed descriptions. How to make LDMs comprehend complex prompts, including spatial directions, colors, and textual information, remains a prominent research focus in the T2I diffusion area.

---

Rebuttal Comment 1.1:
Title: Response
Comment: Thank you for your clarifications and further evaluations. My main concern regarding evaluation was addressed, so I will raise my score to a 7.

---

Reply to Comment 1.1.1:
Title: Reply to Reviewer 6FyB: Thank you!
Comment: We thank the reviewer for the quick reply and consideration! It is heartening to receive this positive evaluation. Your detailed review and feedback are valuable to us in further shaping our work. Once again, we express our gratitude for your time and appreciation!
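As a concrete illustration of the joint noise sampling discussed in this thread: each frame's initial noise mixes one unified video noise with frame-wise independent noise under a fixed weight $λ$. The sketch below is our own minimal reconstruction; the function name and the square-root weighting (which keeps each frame's noise unit-variance) are assumptions, not the paper's exact implementation.

```python
import numpy as np

def joint_noise_sampling(num_frames, shape, lam=0.2, seed=None):
    """Sketch: mix one unified video noise with per-frame independent noise.

    `lam` regulates the ratio between the unified (shared) and independent
    components; the same weights are used for every frame, and `lam` stays
    fixed over the video timestep.  The sqrt weighting keeps each frame's
    noise unit-variance; whether `lam` weights the shared or the independent
    part is our assumption.
    """
    rng = np.random.default_rng(seed)
    shared = rng.standard_normal(shape)        # one noise for the whole video
    frames = []
    for _ in range(num_frames):
        indep = rng.standard_normal(shape)     # fresh noise for this frame
        frames.append(np.sqrt(lam) * shared + np.sqrt(1 - lam) * indep)
    return np.stack(frames)

noise = joint_noise_sampling(num_frames=6, shape=(4, 64, 64), lam=0.2, seed=0)
```

Under this weighting, any two frames' noises are correlated with coefficient $λ$, which nudges the LDM toward coherent frames while leaving per-frame diversity intact.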
Summary: This paper proposes a novel approach to achieve Text-to-Video generation through Zero-Shot learning using LLMs and a diffusion-based image generation model. By using LLMs to generate detailed and varied descriptions for each frame of the video, the proposed method captures the changes in the video frames. The generated video frames are then constrained by the self-attention module in SD and the input noise to align with the input text and maintain a certain level of coherence. This paper also proposes a zero-shot frame interpolation algorithm based on a diffusion-based image generation model. While the current results may not be very satisfactory, this approach is a low-cost, Zero-Shot learning method that does not require additional training and can directly utilize existing SD models.

Strengths:
1. The entire process is Zero-Shot learning, without the need for any additional training, making the method low-cost. Moreover, compared to existing zero-shot video generation algorithms, the proposed approach has achieved significant improvements in performance.
2. This algorithm effectively combines two pre-trained models that typically require significant computational resources for training, and is easy to follow.
3. Since there is no fine-tuning or post-pretraining of the existing powerful image generation model, the proposed method preserves the ability of the model to generate high-quality images to the maximum extent. Compared to models trained on video data, the image quality of the generated frames is better.

Weaknesses:
1. Lacking a comparison of the quality of video generation guided by different large language models. Only ChatGPT was used in the experiments.
2. The computational and time overhead during inference may increase when using LLMs.
3. The proposed frame interpolation algorithm for video frames was not compared numerically with other state-of-the-art algorithms on the test set in the article.
4. The ablation study only provides qualitative analysis and does not include quantitative comparisons.

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors:
1. What is the test set in Table 1? I did not find any relevant explanation in the paper.
2. I didn't understand Figure 5; what is the variable $τ^*$ referring to?

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good
Presentation: 3 good
Contribution: 2 fair
Limitations:
1. The proposed pipeline was not trained on video data, so compared to methods trained on videos, it is still weaker in terms of temporal consistency of the generated video frames.
2. The proposed method is largely limited by the high-order reasoning capability of LLMs for video imagination and the image generation capability of the diffusion-based image generation model.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: > **Comparison with different LLMs**: Lacking a comparison of the quality of video generation guided by different large language models. Only ChatGPT was used in the experiments.

Thanks for your valuable feedback. Per your advice, we conducted experiments using other LLMs (GPT-4 and Bard), and the results validate that our method is very robust across these LLMs. Due to limited space, we only show the case "a flower is gradually blooming" in Figure 4 in the one-page PDF, and six per-frame prompts from each LLM are given below. In practice, LLMs output prompt sequences to provide semantic guidance for frames. As long as the semantic guidance is reasonable and coherent, our proposed modules on the LDM can ensure the generation of sensible video.

```
GPT-3.5:
> A close-up view of a green stem with buds tightly closed, waiting to bloom.
> The buds slowly unfurl, revealing delicate petals in vibrant colors.
> The flowers continue to bloom, forming a beautiful cluster of various shapes and hues.
> A gentle breeze sways the blossoms, creating a mesmerizing dance of colors.
> The cluster of flowers is now fully open, displaying a stunning arrangement of shapes and textures.
> As the sunlight bathes the blooms, they glisten with dewdrops, exuding a sense of freshness and vitality.

GPT-4:
> A tight bud is suspended in the center of the frame against a background of soft pastels.
> Small, delicate cracks appear on the surface of the bud, revealing a hint of white color within.
> The bud begins to unfurl, petals delicately stretching outward, their hue a pure white.
> The bloom is halfway open, the curling petals forming an intricate pattern, sunlight shimmering off their velvety surface.
> The flower is almost fully bloomed, the vibrant petals dominating the frame, contrasting starkly with the greenery in the background.
> The flower is in full bloom, petals splayed out in a radiant display, revealing a golden heart of stamens in the very center.
Bard:
> A close-up of a white flower bud, tightly closed.
> The flower bud begins to open, revealing a few white petals.
> The flower continues to open, more petals unfolding.
> The flower is now fully open, with a delicate white bloom.
> The flower begins to wilt, the petals starting to curl.
> The flower is now completely wilted, the petals falling to the ground.
```

> **The computational and time overhead** during inference may increase when using LLMs.

Thanks for pointing out this limitation. We agree that the cost of LLMs should be taken into consideration. However, LLM inference is currently fast (consider that ChatGPT can engage in real-time conversations), which is a small overhead compared to the entire pipeline, since the LDM is relatively slow even for generating a single image (20-30s).

> **Quantitative comparison of interpolation**: The proposed frame interpolation algorithm for video frames was not compared numerically with other state-of-the-art algorithms on the test set in the article.

Per your feedback, we evaluated our frame interpolation algorithm with both human evaluation and automatic metrics. For the Image CLIP score, we average the CLIP similarity of the image features of every two adjacent frames. In the user study, we ask raters to compare the two videos from different methods. Our method surpasses another SOTA approach in all dimensions, particularly in terms of individual frame quality.

||Image CLIP Score|Temporal|Fidelity|Rank|
|:---:|:---:|:---:|:---:|:---:|
|Ours Vs AMT [1]|74.55%|74.07%|81.48%|74.07%|

[1] Li, Zhen, et al. "AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2023.

> **Quantitative ablation study**: The ablation study only provides qualitative analysis and does not include quantitative comparisons.

Thanks for your valuable feedback.
Per your advice, we conducted additional ablation studies and evaluated their performance with both human evaluation and automatic metrics. As presented across all dimensions, the absence of either the noise sampling module or the attention shifting module significantly degrades the performance, and the lack of both results in extremely poor results.

|Serial Prompting|Joint Noise Sampling|Shifting Self-attention|CLIP Metrics↑|Temporal↑|Identical↑|Rank↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|√|-|-|0.4429 / 0.4661|1.62|1.95|3.66|
|√|√|-|0.4482 / 0.4712|1.83|2.30|3.04|
|√|-|√|0.4509 / 0.4736|2.52|2.71|2.19|
|√|√|√|0.4521 / 0.4773|3.67|3.46|1.11|

> **Test set**: What is the test set in Table 1? I did not find any relevant explanation in the paper.

Thanks for pointing this out. We use prompts from the webpage of Text2Video-Zero, and we add more prompts that incorporate more complex event content. Our motivation is to validate whether our method not only achieves better results on the existing cases but also has an impressive effect on cases with rich semantic content (such as how to showcase flower blooming, saying hello, ice melting, etc.). The quantity and diversity of videos in our test set exceed those of the current methods for zero-shot video generation.

> $τ^*$: I didn't understand Figure 5, and what is the variable $τ^*$ referring to?

$τ^*$ is the hyperparameter in the interpolation module, which is the step threshold for changing the weight from focusing on the contextual frame paths to the self-denoising path. Figure 5 illustrates the influence of the selection of $τ^*$. We sincerely apologize for the oversight in Figure 5, where the images for $τ^*$ approaching 1000 and $τ^*$ approaching 1 are switched. We defined our interpolation step in Eq. 8, in which $m(t)$ varies with denoising timesteps and controls the weight of the two paths. To make the approach simpler, we set $m(t)$ as a step function empirically in the experiments.
It can be formulated as $m(t)=0.1$ for $t \geq τ^*$ and $m(t)=1$ for $t < τ^*$ (larger $t$ indicates an earlier denoising stage).

---

Rebuttal Comment 1.1:
Comment: Dear Authors, Thank you for your message and we will take your response into consideration. Best,
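For concreteness, the step function $m(t)$ described in this thread, and the Eq. 8-style dual-path blend it controls, could be sketched as follows (a minimal sketch with our own variable names, not the paper's implementation):

```python
def m(t, tau_star=400):
    """Step-function path weight: larger t is an earlier denoising stage,
    where the contextual-frame path dominates (m small); for t < tau_star
    the interpolated frame's self-denoising path takes over (m = 1)."""
    return 0.1 if t >= tau_star else 1.0

def blend_paths(latent_context, latent_self, t, tau_star=400):
    # Eq. 8-style blend: (1 - m(t)) * contextual path + m(t) * self path.
    w = m(t, tau_star)
    return (1 - w) * latent_context + w * latent_self
```

With the recommended $τ^*=400$ (for $T=1000$), early steps ($t \geq 400$) mostly follow the contextual frames, while later steps denoise the interpolated frame on its own path.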
Summary: In this work, the authors propose a principled pipeline for text-to-video generation. Specifically, three techniques are proposed: (1) an LLM-based per-frame prompt generation, so that the motion/dynamics of each frame can be better specified. (2) a joint noise sampling schedule, and a step-aware attention shift proposed to enhance temporal consistency. (3) an interpolation module to generate longer videos. Extensive experiments and results have demonstrated the effectiveness of the proposed work.

Strengths:
1. The pipeline is technically sound and generally makes sense to me. It is straightforward to use the strong prior from a pretrained LLM to give more temporal/motion/appearance information to each frame, to facilitate video generation. The design of the noise scheme, sampling, and attention shift is technically sound.
2. The exposition of this paper is very good and clear to me.

Weaknesses:
1. In my honest opinion, directly comparing this work w/ many baselines is kind of unfair, since it involves a strong prior in the LLM, and this is the key factor of improvement in video generation. I strongly recommend the authors compare w/ some similar text2video pipelines that also factorize the unified single prompt condition into per-frame motion-aware conditions. For example, first generate a sequence of optical flow maps, then take them as conditions to generate the video.
2. Since I'm not an expert in the NLP domain, I'm concerned about the ability of the LLM to generate very reasonable or temporally-coherent per-frame prompts. I'm also curious about the feature trajectory of such text features. Say, you generate the per-frame prompts p1, p2, p3, ..., pn, and use the text encoder to get text features f1, f2, ..., fn for cross-attention. Do they form a smooth trajectory in the high-dimensional text feature space? Do the authors think that a smooth feature transition among frames can guarantee a smooth video?
If so, a naive idea for improvement of this work is to add some regularizations to constrain the smoothness of text features across frames.

Technical Quality: 4 excellent
Clarity: 4 excellent
Questions for Authors: elaborated in the above sections.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent
Presentation: 4 excellent
Contribution: 3 good
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: > **Prior in LLM (part 1)**: In my honest opinion, directly comparing this work w/ many baselines is kind of unfair, since this involves strong prior in LLM, and this is the key factor of improvement in video generation.

As the reviewer pointed out, using LLMs introduces prior knowledge about how the visual content evolves throughout the frames according to the unified prompt. One of our contributions is to propose this pipeline to leverage this prior in text-to-video generation. However, language is sometimes not as exact as other conditions, such as pose. For example, it is hard to use language to precisely describe a person's pose skeleton and its position in a picture. From this aspect, we consider the knowledge from LLMs to be a kind of weak prior. Also, although the LLM brings a prior, it also poses a hard challenge: given per-frame fine-grained prompts, how to adapt LDMs to produce a coherent and contextually fitting frame sequence. Per-frame prompts lead to significant incoherence in the LDM's frame results, and language alone cannot guide LDMs on how to make adjacent frames coherent.

> **Prior in LLM (part 2)**: I strongly recommend the authors to compare w/ some similar text2video pipelines that also factorize the unified single prompt condition into per-frame motion-aware conditions. For example, first generate a sequence of optical flow maps, then take them as conditions to generate the video.

Thank you for this advice. We conducted another survey of recent works, but unfortunately, we did not find a zero-shot work capable of taking a text input and generating per-frame motion flow or per-frame optical flow. Actually, our pipeline achieves text-to-per-frame condition generation and then video generation. We humbly ask for your kind advice if we have missed some work. As an alternative, we compare our method against Tune-a-Video, which introduces a reference video as a much stronger prior than ours.
Figure 2 in the one-page PDF shows that Tune-a-Video benefits from the strong prior but is also highly limited by the strong bias from the reference video, failing to generate videos of the specified subjects while keeping similar and consistent movements. On the contrary, our method can generate diverse videos corresponding to the input text.

> **The ability of LLM (part 1)**: Since I'm not the expert in NLP domain, I'm concerned about the ability of LLM to generate very reasonable or temporally-coherent per-frame prompts.

In our practice, LLMs can indeed produce reasonable and temporally-coherent per-frame prompts, because LLMs learn common behaviors, phenomena, and events, as well as their co-occurrence relationships, sequence patterns, and semantic associations, from large-scale text data. All the latest LLMs we experiment with, such as GPT-3.5, GPT-4, Bard, etc., are capable of decomposing actions like "jumping into the water" and translating "say hello" into waving hands and smiling. Our user study also suggests that raters perceive almost all video content trajectories as reasonable. Certainly, LLMs do have their limitations: they cannot provide guidance as precise as other strong conditions, such as pose sequences, as mentioned above. However, we consider this also a benefit, preserving the flexibility and generalization to generate a variety of videos from one given prompt. Coherence of details not provided by LLMs can be constrained and improved by our other three techniques, i.e., joint noise sampling, attention shift, and dual-path interpolation.

> **The ability of LLM (part 2)**: I'm also curious about the feature trajectory of such text features. Do the authors think that a smooth feature transition among frames can guarantee a smooth video? If so, a naive idea for improvement of this work is to add some regularizations to constrain the smoothness of text features across frames.

We thank the reviewer for this suggestion!
We consider that all text features in one video only need to be relatively close (or similar) to each other. In our practice, most of the per-frame text features generated by ChatGPT remain relatively close to each other. Figure 3 in the one-page PDF shows that the average CLIP similarities (averaged over all cases in the test set) between the six text features in each video are high. We have also applied this characteristic in our interpolation module, where we perform linear interpolation on the adjacent text features to acquire in-distribution text features for the interpolated frame. Thanks to the high similarities and our proposed techniques on sampling and attention, which encourage the LDM to output smooth frame sequences, text features are not required to form a smooth trajectory.

---

Rebuttal Comment 1.1:
Title: Thanks for the detailed response
Comment: I thank the authors for the detailed response, which has addressed most of my concerns! In terms of text-flow-video papers, maybe they are some concurrent NeurIPS submissions, so it's unfair to ask the authors to compare w/ them. I apologize for asking about this in the initial review. Actually, all the NeurIPS submissions in my batch of 5 papers are about DM-based text-to-video generation, and imho, this paper's technical soundness, exposition, and contribution could be the best. So I'm raising my score to 7 and vote for the acceptance of this paper.

---

Reply to Comment 1.1.1:
Title: Reply to Reviewer mgrm: Thank you!
Comment: We thank the reviewer for the appreciation and engagement in assessing our work! We are grateful for the reviewer's understanding regarding the challenge in comparison. We highly agree with this consideration: once the concurrent works are made available, we will study, compare, and analyse accordingly. Once again, we express our gratitude for the time and prompt reply!
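The two operations discussed in this reply — measuring CLIP similarity between per-frame text features and linearly interpolating adjacent features for an in-between frame — can be sketched as follows (hypothetical names and toy vectors; real features would come from the CLIP text encoder):

```python
import numpy as np

def cosine_sim(a, b):
    """CLIP-style cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def interp_text_features(f_prev, f_next, alpha=0.5):
    """Linear interpolation of two adjacent frames' text features to obtain
    an in-distribution feature for the interpolated frame; alpha is the
    new frame's position between its neighbours (our parameterization)."""
    return (1 - alpha) * f_prev + alpha * f_next

# Toy stand-ins for two adjacent per-frame text features.
f1 = np.array([1.0, 0.2, 0.0])
f2 = np.array([0.9, 0.3, 0.1])
f_mid = interp_text_features(f1, f2)
```

When adjacent features are already highly similar, the interpolated feature stays close to both, which is why a smooth trajectory is not strictly required.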
Summary: The paper proposes a zero-shot text-to-video generation pipeline called Free-Bloom, which first uses an LLM to generate a prompt sequence describing the frames in a video, then generates frames according to the prompts. To enhance coherence, the authors propose joint noise sampling, step-aware attention shift, and dual-path interpolation. The authors compare their method with other video generators quantitatively (CLIP metrics, user study) and qualitatively.

Strengths:
1. Explores zero-shot text-to-video generation with an LLM director, which leverages the story generation ability of LLMs to generate semantically meaningful frame sequences.
2. Good per-frame quality.
3. Tries several methods to enhance the coherence of zero-shot video generation.

Weaknesses:
1. Insufficient ablation study. a. There are only two qualitative cases in the ablation study, which is not convincing enough. More cases and quantitative results (e.g. CLIP metrics) should be provided. b. Given that joint noise sampling, step-aware attention shift, and dual-path interpolation are universal techniques for zero-shot video generation, experimental results combining these techniques with past zero-shot methods (e.g. text2video-zero) are needed.
2. The coherence is not good enough, e.g. sudden changes in the color/identity can be observed. There is a fatal flaw in the technical design: the authors set m(t) to 1 and discard attention shift when t is small, which may lead to strong incoherence.
3. Insufficient clarification of experiments, e.g. prompts to the LLM, hyperparameters.

Technical Quality: 2 fair
Clarity: 2 fair
Questions for Authors:
1. Please provide more clarification of the experiments, including the prompt engineering to get frame descriptions and hyperparameters (such as $\tau$, $\tau^*$).
2. The coherence is not good, but the idea of leveraging an LLM is interesting. Is it possible to combine the LLM director with other existing pretrained text-to-video methods (e.g. make-a-video) to enhance coherence?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair
Presentation: 2 fair
Contribution: 2 fair
Limitations: see weakness and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: > **Insufficient ablation study (part 1)**: more cases and quantitative results should be provided.

Thanks for your valuable feedback. Per your advice, we conducted additional ablation studies and evaluated their performance with both human evaluation and automatic metrics. As presented in the CLIP metrics and all dimensions of the user study, the absence of either the noise sampling module or the attention shifting module significantly degrades the performance, and the lack of both results in extremely poor results.

|Serial Prompting|Joint Noise Sampling|Shifting Self-attention|CLIP Metrics↑|Temporal↑|Identical↑|Rank↓|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|√|-|-|0.4429 / 0.4661|1.62|1.95|3.66|
|√|√|-|0.4482 / 0.4712|1.83|2.30|3.04|
|√|-|√|0.4509 / 0.4736|2.52|2.71|2.19|
|√|√|√|0.4521 / 0.4773|3.67|3.46|1.11|

> **Insufficient ablation study (part 2)**: experiment results of combining these proposed techniques and past zero-shot methods (e.g. text2video-zero) are needed.

We combine our techniques with text2video-zero (T2VZ) and show the results in Figure 1 of the one-page PDF.

- **Joint Sampling w/ T2VZ:** we substitute the initial noise of T2VZ with our sampling method. Compared with T2VZ's original noise, which adds motion dynamics to the base latent code, our sampling method ensures that the quality of generated individual frames remains dependable even as the number of frames increases.
- **Attention Shift w/ T2VZ:** we re-implemented the attention control in T2VZ. The figure shows that under the scheme proposed by T2VZ, our step-aware attention shift achieves slightly better results than T2VZ's attention strategy. However, we note that our attention shift's contribution cannot be fully reflected in this experiment; it works much better for challenging cases with more complex motions and frame-wise prompts than for the "moving image" setting in T2VZ.
When all the frames are generated based on one single prompt, it is more like creating "a trembling gif", making attention to previous and current frames less critical.

- **Interpolation w/ T2VZ:** we directly apply our interpolation algorithm to the generated video of T2VZ. Our interpolation module can supplement intermediate states between two frames in the original video, thereby increasing the frame rate.

> **The coherence is not good enough (part 1).**

We have significantly improved the temporal coherence compared to previous zero-shot T2V methods, and our semantic coherence is much better than both zero-shot and trained T2V methods, according to our user study. We appreciate the reviewer for pointing out the coherence concern; coherence in video generation is essential but incredibly challenging in a zero-shot setting without any reference videos. Many of the efforts in our work revolve around coherence, and our new techniques indeed improve coherence considerably and can be a good starting point for future research to enhance coherence further.

> **Coherence (part 2)**: Sudden changes in the color/identity can be observed. There is a fatal flaw in the technical design: authors set m(t) to 1 and discard attention shift when t is small, which may lead to strong incoherence.

This is not a design flaw. $m(t)$ influences interpolated frames but is irrelevant to the contextual frames in the low-frame-rate stage. Thus, it is unrelated to identity/color coherence. Also, attention shift is not discarded in the interpolation process (Equation 8), whether $t$ is small or large. The term $\tilde{\mathbf x}_t^f$ after $(1-m(t))$ is derived from the latents of contextual frames, and those latents are obtained through attention shifting on their own paths. The term following $m(t)$ pertains to the frame denoising path, on which the attention shift operation is also performed.
Notice that smaller $t$ indicates a later denoising stage, thereby needing to focus more on the interpolated frame's denoising path. So, we set $m(t)$ as a step function empirically to simplify the pipeline and attain straightforward results. Certainly, if we define $m(t)$ as a more elegant function varying through time steps, the performance of interpolation may be further enhanced.

> **Insufficient clarification of experiments**, e.g. prompts to LLM, hyperparameters (such as $\tau$, $\tau^*$).

**Prompts to LLMs:** Please refer to Appendix Section C.1.

The settings of the **hyperparameters** $τ=400$ and $τ^*=400$ (when timestep $T=1000$) are recommended for most practical use cases. They can be additionally adjusted for specific cases.

- For cases involving more semantic and motion changes, as exemplified by "Teddy bear jumps into water" in Figure 3, it is advisable to opt for larger values of $τ$ and $τ^*$. This choice puts more emphasis on the current frame by shifting attention to the current frame at an earlier step ($τ$) and increasing the weight of the self-denoising path at an earlier step ($τ^*$).
- For cases with basically similar frames, such as "Light a match then the match goes out" in Appendix Figure 2, smaller $τ$ and $τ^*$ would be suitable.

> Is it possible to **combine LLM director with other existing pretrained text-to-video methods** (e.g. make-a-video) to enhance coherence?

We agree that leveraging pre-trained text-to-video methods can improve the coherence of generated videos, as training with an immense amount of video samples allows the models to acquire knowledge about how to maintain coherence between frames. However, it is not easy to combine the LLM director with pre-trained T2V methods in a training-free manner, because videos generated by pre-trained T2V models are likely to conflict with the per-frame prompts. Addressing the conflict is a promising direction but is currently out of this work's scope.
On the other hand, requiring additional training would go against the initial motivation of our work, considering data and cost efficiency.

---

Rebuttal Comment 1.1:
Comment: Thanks for your reply, especially the additional ablation study! I am still a little bit confused about Coherence (part 2):
1. Is attention-shifting only applied in the Video Generation Stage, but not applied in the Interpolation Empowerment Stage, according to the caption of Figure 2?
2. Assume that attention-shifting is also used in the Interpolation Empowerment Stage. Considering (a) attention-shifting is not used when t < τ (Equation 6) and (b) m(t) = 1 when t < τ∗ (Section 5, Line 251), in my understanding, neither the contextual path nor attention-shifting is used when t < min(τ∗, τ). Will this cause incoherence?

---

Reply to Comment 1.1.1:
Title: Reply to Reviewer VS1k: Thanks for your response and further consideration!
Comment: Thanks for your response and for acknowledging the additional ablation study!

> Is attention-shifting only applied in Video Generation Stage, but not applied in Interpolation Empowerment Stage, according to the caption of Figure 2?

The attention shift is applied in both stages. In Figure 2, the denoising processes of both the Video Generation Stage and the Interpolation Empowerment Stage go through the Diffusion U-Net, in which the proposed attention operation (Equation 6) is applied at every denoising step.

> Considering (a) attention-shifting is not used when $t < τ$ (Equation 6) and (b) $m(t) = 1$ when $t < τ^∗$ (Section 5, Line 251), in my understanding, neither contextual path nor attention-shifting is used when $t < \min(τ^∗, τ)$. Will this cause incoherence?

As the reviewer pointed out, when $t < \min(τ^∗, τ)$, neither the ***contextual path*** nor ***attention to contextual contents*** is used.
In our original rebuttal, the term "attention shift" refers to the overall attention operation (*not only the attention to contextual contents*) in Equation 6, and we apologize for the confusion caused by it. We consider, and experimentally showcase, that this does not inherently cause contextual incoherence, but instead enhances semantic coherence:

- Contextual coherence, such as the scene's general layout, shapes, and critical features, can be established during the early stage of the denoising process (i.e., when $t$ is large).
- When $t$ is small, paying attention to the contents within one frame conditioned on semantic information is necessary to ensure semantic coherence. In contrast, excessive attention to contextual contents would generate frames with only minor differences, turning videos into "trembling gifs".

Therefore, we filter out the contextual content in both attention and interpolation during the late stage of the denoising process (when $t$ is small). Certainly, if we define $m(t)$ as a more elegant function varying through time steps, the performance of interpolation may be further enhanced. Still, the step function does not inherently lead to contextual incoherence. For the attention strategy, the results of applying attention to contextual contents throughout the whole denoising process are shown in Section 5.2 (E). As for the interpolation module, we showcase the balance between the two paths in Section 5.2 (Dual-path interpolation).
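To summarize this discussion in code form, here is a minimal sketch of a step-aware attention shift: at early denoising steps ($t \geq τ$), keys/values of contextual frames (e.g. frame 0 / the previous frame) are concatenated to the current frame's so it can inherit layout and identity; at late steps, the frame attends only to itself. This is our own illustration of the idea, not the paper's Equation 6 verbatim.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def step_aware_attention(q, k_cur, v_cur, k_ctx, v_ctx, t, tau=400):
    """Early steps (t >= tau): attend over contextual + current tokens,
    establishing layout/shape coherence.  Late steps (t < tau): attend
    within the current frame only, so per-frame semantics dominate and
    the video does not collapse into a 'trembling gif'."""
    if t >= tau:
        k = np.concatenate([k_ctx, k_cur], axis=0)
        v = np.concatenate([v_ctx, v_cur], axis=0)
    else:
        k, v = k_cur, v_cur
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

# Toy tensors: 2 query tokens, 3 current-frame tokens, 4 contextual tokens.
q = np.ones((2, 8))
k_cur, v_cur = np.ones((3, 8)), np.full((3, 8), 2.0)
k_ctx, v_ctx = np.ones((4, 8)), np.full((4, 8), 4.0)
early = step_aware_attention(q, k_cur, v_cur, k_ctx, v_ctx, t=500)
late = step_aware_attention(q, k_cur, v_cur, k_ctx, v_ctx, t=100)
```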
Rebuttal 1:

Rebuttal: We thank all reviewers for engaging in the review process. Our code will be made public upon acceptance.

We are deeply encouraged by the positive comments from the reviewers. We appreciate the recognition and endorsement of our proposed zero-shot pipeline, such as acknowledging its qualities as interesting (VS1k and 6FyB), novel (6FyB and yWMb), effective (yWMb), and technically sound (mgrm). VS1k, yWMb, and 6FyB agree that our method generates videos with high per-frame quality (fidelity). Both mgrm and 6FyB agree that our paper is "clear", "well-written", and "easy to understand".

---

In our individual replies, we attempted to address specific questions and comments as clearly and in as much detail as possible. Moreover, we added several additional results to the one-page PDF and the individual replies to strengthen our work overall. Here, we briefly summarize these additional experiments and evaluations:

- Supplemented ablation study with (a) numerical comparison on the test set and (b) combining our techniques with Text2Video-Zero.
- Quantitative comparison of the interpolation module.
- Quantitative results on an enlarged test set.
- Qualitative results of Tune-A-Video under complex conditions.
- Visualization of CLIP similarities between all per-frame prompts on cases in our test set.
- Qualitative results of using other LLMs.
- Qualitative results of videos with multiple subjects.
- Qualitative results of videos with straightforward motions.

---

We hope that these additional results further strengthen Free-Bloom's position as the state-of-the-art generative model for zero-shot text-to-video generation and demonstrate:

- Our pipeline is novel in combining the LLM director and the LDM animator, and is the first T2V work capable of generating semantic-coherent videos (such as the whole process of flower blooming rather than a set of "moving images") in a zero-shot and training-free manner.
- Our Free-Bloom can generate flexible and versatile videos from one unified prompt across a broad range of scenes. - Our novel techniques of sampling, attention, and interpolation can greatly improve the temporal coherence of the generated videos while remaining faithful to the rich semantic content. - Our Free-Bloom is robust to different LLMs and is compatible with existing LDM-based extensions. Pdf: /pdf/208bd5835b3843fcf26c25ec83c02c0ca490bf40.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Kernel-Based Tests for Likelihood-Free Hypothesis Testing
Accept (poster)
Summary: This paper studies a problem in the area of likelihood-free testing, where one is given black-box access to samples $X\sim P_X$, $Y \sim P_Y$ of size n, as well as "real-world data" $Z \sim P_Z$ of size m. The authors propose a new model ("Mixed likelihood-free testing") where it is known that $P_Z = (1 - \nu) P_X + \nu P_Y$ for rate parameter $\nu$, and the goal is to determine whether $\nu = 0$ or $\nu \ge \delta$. They propose a kernel based test statistic for this problem, and analyze its performance theoretically, and also provide lower bounds. They also perform experiments based on Higgs Boson detection and detecting images produced by a diffusion model to support their theoretical claims. Strengths: This paper is well written, and the problem is well motivated from the point of view of practical applications (Higgs Boson detection, diffusion image detection). The paper has an extensive set of experiments on real data, and there is an interesting theory to support empirical findings. Overall, I think this is an interesting paper. Weaknesses: The upper and lower bounds in the theory are a little difficult to interpret, especially the dependence on the eigenvalues of the kernel - it would be useful if the authors could provide some examples with simple kernels to illustrate better what the bounds are saying. Some of the things done in the experiments, such as training a kernel on the test data (especially overlap between training and evaluation data), are not captured/explained by the theory. Also, while Figure 1 seems like it agrees with the theory, it includes all the samples, including ones used for training, calibration, and evaluation. On the other hand, the plot involving only the evaluation samples (Figure 8 in appendix) seems like it agrees more with the naive expectation of $\min\{n, m\} \gtrsim n_{TS}(\alpha, \epsilon, \mathcal P)$. 
Since the theory is supposed to reflect performance relative to evaluation samples for a fixed kernel, it would be useful if the authors could provide a more detailed explanation of both these plots. It would also be useful if the authors could provide a simple synthetic experiment to illustrate the theoretical bounds for the setting that it is supposed to explain (with a fixed kernel) The dependence on mis-specification parameter $R$ is not captured by the lower bounds, and Proposition 4.1 (which is about the calibration of the test statistic) assumes that $R=0$. I would be curious to know if proposition 4.1 can be made stronger. In particular, can anything be said about the empirical settings where presumably $R > 0$? One very interesting thing the authors do in experiments is to use a fraction of the samples from $P_X$ and $P_Y$ to train a kernel that maximizes the (normalized) expectation gap of the statistic. While this seems to work well, there doesn't seem to be a theoretical explanation for why this is a good idea. Of course, while it may not currently be possible to make theoretical claims about a neural net based kernel, is it possible to make a theoretical claim about some simpler model? Or make some abstract claim given black box access to an algorithm that can take samples and generate a kernel with some good properties? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) What is the relationship between this problem/likelihood-free testing more generally, and the distribution testing literature in theoretical CS? Can you include a brief comparison? 2) While Proposition 4.1 says that the p-value estimate is good as n_cal-> infty, what is the finite-sample relationship between n_cal, n_eval and the test statistic? Is there something quantitative that can be said here? What happens if $R > 0$? 3) Is it possible to say anything about the kernel training phase theoretically? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
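The mixed likelihood-free testing setup summarized in this review can be made concrete with a minimal numerical sketch. This is our illustrative reconstruction, not necessarily the paper's exact statistic: we use a Gaussian kernel and the natural witness-projection statistic $\hat T = \langle \hat\mu_Z - \hat\mu_X,\ \hat\mu_Y - \hat\mu_X \rangle_{\mathcal H}$, estimated by kernel averages, which is near zero when $\nu = 0$ and grows with $\nu$.

```python
import numpy as np

def gauss_kernel(a, b, bandwidth=1.0):
    """Gaussian kernel Gram matrix between 1-D samples a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-d**2 / (2 * bandwidth**2))

def mlfht_statistic(x, y, z, bandwidth=1.0):
    """Project the Z-sample onto the RKHS witness direction mu_Y - mu_X:
    T = <mu_Z - mu_X, mu_Y - mu_X>_H, estimated by kernel averages.
    A large value suggests a nonzero signal fraction nu."""
    kzy = gauss_kernel(z, y, bandwidth).mean()
    kzx = gauss_kernel(z, x, bandwidth).mean()
    kxy = gauss_kernel(x, y, bandwidth).mean()
    kxx = gauss_kernel(x, x, bandwidth).mean()
    return kzy - kzx - kxy + kxx

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 2000)           # n simulated background samples, P_X
y = rng.normal(3.0, 1.0, 2000)           # n simulated signal samples, P_Y
z_null = rng.normal(0.0, 1.0, 500)       # m experimental samples, nu = 0
mix = rng.random(500) < 0.5              # nu = 0.5 mixture indicator
z_alt = np.where(mix, rng.normal(3.0, 1.0, 500), rng.normal(0.0, 1.0, 500))
t0 = mlfht_statistic(x, y, z_null)       # concentrates near 0
t1 = mlfht_statistic(x, y, z_alt)        # roughly nu * MMD^2(P_X, P_Y) > 0
```

Under the null ($Z \sim P_X$) the four kernel averages cancel in expectation, while under the alternative the statistic concentrates near $\nu\,\mathrm{MMD}^2(P_X, P_Y)$, which is the separation the test thresholds against.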
Rebuttal 1: Rebuttal: We greatly appreciate the helpful comments in the review and will make sure to address them in the revision. In the following we will address some of the questions mentioned in the review. 1. $\textbf{“The upper and lower bounds in the theory are difficult to interpret.”} $ In terms of simple examples, we would like to highlight those given in Section 3.5 as well as the three examples (discrete distributions, Holder-smooth distributions, and Sobolev-smooth densities) of specific kernels provided in Appendix B, which show that our upper bounds recover minimax optimal results in past literature ([1]). Among them, the discrete case could be the simplest example to provide the reader with some intuition. In the revision, a more focused discussion will be included to help readers understand our results by adding explicit evaluations of kernel spectra. Please also see general response #1 for more. 2. $\textbf{“What is the relationship between LFHT and the distribution testing literature in TCS ?”} $ The closest work to LFHT in the TCS literature is two-sample testing with unequal sample size which is in fact equivalent to LFHT in the regime m > n for discrete distributions (this can be found in [1]). There has also been some work on LFHT in information theory [5]. We recommend the papers [1,3] as a good resource for connections to prior results. To the best of our knowledge, the mLFHT problem hasn’t been considered by the TCS community and we are not aware of related/similar work. 3. $\textbf{“What is the finite-sample relationship between $n_\mathsf{cal}$, $n_\mathsf{eval}$ and the test statistic? Is there a version of Proposition 4.1 when $R>0$? ”} $ That is an important question that we hadn’t considered before. A naive answer would be to force all the k batches to be disjoint, namely take $k=\lfloor n_\mathsf{cal}/m \rfloor$. 
In this case, the variance would be of order $\mathcal O(m/n_\mathsf{cal})$ and thus $n_\mathsf{cal} = \omega(m)$ is sufficient for the variance to decay to zero. Improving this naive bound, or alternatively studying the hardness of estimating the p-value could be an interesting direction. When $R>0$ estimation of the p-value becomes subtle. By definition the p-value of the composite hypothesis is $p:=\sup_{\nu(P)=0}(P(\hat{T}<T))$. However, here the maximizer might not be $P=P_X$, which invalidates the current algorithm. Due to the generality of our setup, designing an algorithm to consistently estimate the p-value in the misspecified setting seems like a challenging open problem. 4. $\textbf{“Is it possible to say anything about the kernel training phase theoretically?”}$ In the context of two-sample testing, Theorem 6 in [2] shows under mild assumptions, that estimating the kernel by optimizing the objective J is a consistent procedure (i.e. it identifies a maximizing kernel). The same paper (page 3, left column, bottom) includes a justification of using J as the objective that relies on the asymptotic power of the resulting test. In a similar vein we include a heuristic justification of our use of the objective J in Appendix F1. However, these results leave much to be desired. Studying the problem of kernel selection in a simplified/toy setting (even finding the right way to pose the question) might uncover interesting phenomena, although we have not spent time on this. 6. $\textbf{Figures 1,8 and the trade-off}$ Figure 1 agrees with our theory insofar as it exhibits a clear asymmetric trade-off between n and m with m leveling off at a lower value than n (although we do not claim that our theoretical results strictly imply that this should be the case for trained kernels). We view Fig. 1 as important empirical evidence and motivation for further study of this simulation-experimentation tradeoff in (m)LFHT. 
We agree that on Figure 8 the precise nature of the trade-off is visually less clear, this may be due to the high numerical precision and/or samples required to observe the trade-off. We are happy to include a synthetic experiment to verify the soundness of our theoretical trade-off in the revision. $\textbf{References}$ [1] Gerber, P. R., and Polyanskiy, Y. Likelihood-free hypothesis testing. 2022. [2] Liu, F., Xu, W., Lu, J., Zhang, G., Gretton, A., and Sutherland, D. J. Learning deep kernels for non-parametric two-sample tests. 2020. [3] Gerber, P.R., Han, Y. and Polyanskiy, Y. Minimax optimal testing by classification. 2023. [4] Bhattacharya, B. and Valiant, G. Testing closeness with unequal-sized samples. 2015. [5] Huang, D. and Meyn, S. Classification with high-dimensional sparse samples. 2012. --- Rebuttal Comment 1.1: Comment: Thank you for your comments. I will keep my rating as is.
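The disjoint-batch calibration suggested in point 3 of the rebuttal above (taking $k=\lfloor n_\mathsf{cal}/m \rfloor$ disjoint batches so the variance decays like $\mathcal O(m/n_\mathsf{cal})$) can be sketched as follows. The toy mean statistic and the sample sizes are our own illustrative choices, not the paper's:

```python
import numpy as np

def batched_p_value(statistic, null_cal, m, observed_stat):
    """Split n_cal held-out null samples into k = floor(n_cal / m) disjoint
    batches of size m, recompute the statistic on each batch, and report the
    exceedance fraction (+1 corrections give a valid finite-sample p-value)."""
    k = len(null_cal) // m
    null_stats = np.array(
        [statistic(null_cal[i * m:(i + 1) * m]) for i in range(k)]
    )
    return (1 + np.sum(null_stats >= observed_stat)) / (k + 1)

rng = np.random.default_rng(1)
x_cal = rng.normal(0.0, 1.0, 5000)        # calibration sample from the null
stat = lambda s: s.mean()                 # toy stand-in for the test statistic
p_null = batched_p_value(stat, x_cal, 100, rng.normal(0.0, 1.0, 100).mean())
p_alt = batched_p_value(stat, x_cal, 100, rng.normal(1.0, 1.0, 100).mean())
```

With $n_\mathsf{cal} = 5000$ and $m = 100$ this uses $k = 50$ disjoint batches; a shifted observed sample yields a small p-value, while a null sample typically does not.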
Summary: This paper introduces a new framework for likelihood-free hypothesis testing (LFHT), denoted as Mixed-LFHT or mLFHT. The setting of mLFHT is as follows: assume $n$ i.i.d. samples from a background distribution $P_X$, and the same number of samples from a signal distribution $P_Y$. Also, assume we are given another $m$ i.i.d. samples from a mixed distribution $P_Z=\nu P_Y + \left(1-\nu\right)P_X$, and the aim is to "test" and see if $\nu$ is greater than some positive threshold $\delta$. The paper proposes a new kernel-based test statistic for this problem and then analyzes it from both theoretical and experimental perspectives. The main idea is to somehow "map" the empirical distributions of the observed data into an RKHS (for some kernel of choice) and then measure the discrepancies of the observed data from both $P_X$ and $P_Y$. From a theoretical POV, the authors claim to achieve non-trivial sample complexity bounds in the non-asymptotic regime. Both upper and lower bounds on the error have been established, in Thms 3.2 and 3.3, respectively. From the experimental side, the authors propose a new algorithm to learn the kernel from data, and also showcase the applicability of their test statistic and algorithm on a number of real-world tasks. I am not an expert in this particular field and am therefore not much aware of other similar works. However, the paper is not well-written and seems a little rushed. Regarding the technical contribution, I have a number of questions (please see the questions section) before reaching a final conclusion. For now, my vote is borderline reject, with relatively low confidence. Strengths: - The paper investigates an important problem for the ML community in general. - The mathematical tools (especially those from functional analysis and RKHS-related areas) used for the theoretical parts are sufficiently sophisticated and might be interesting for many researchers in this field. 
- The paper attempts to propose a new setting which comes with both a) theoretical guarantees and b) applicability to real-world data. - I did not completely check the proofs; however, so far I have not found any miscalculations or mathematical flaws. Weaknesses: - IMO the paper is not well-written, and this might be the biggest weakness of this work. The introduction seems short and not very informative. Also, transitions from one part or section to another do not seem to be smooth enough. There are some cases where an equation or formula is referred to but in fact appears (for the first time) at least two pages later, which makes the paper a little confusing for a general reader. Positioning of the work w.r.t. prior papers is lacking in some ways, and some assumptions are not sufficiently justified. Overall, it seems that the authors have tried to compress too much information into a 9-page-limited manuscript, which has degraded its readability. I suggest a major reconsideration of the writing, or maybe even considering this work for a journal instead of a conference. - One of the main theoretical contributions, as far as I have understood, is the claimed "non-trivial" behavior of the sample complexity bound outlined in Theorem 3.2, where the authors show a non-symmetric behavior between $m$ and $n$. However, in L.79 the authors mention a similar relation in an existing work (Ref. [18]), which I assume was achieved for a slightly different problem. If this non-symmetric relation has already been discovered, it might affect the contribution level of this paper (please correct me if I am wrong). - In Thm. 3.3 the authors claim to achieve an error lower bound for mLFHT which matches (more or less) the error upper bound in Thm 3.2. However, my understanding is that the bound in Thm 3.3 is mainly for the particular kernel-based solution of this paper, and not for mLFHT in general. The actual error lower bound might be a lot smaller (or not). 
Again, please correct me if I am wrong. - I have some questions regarding a number of assumptions and derivations in the paper, to see how they might limit the applicability of this solution (please see the Questions section). ----------------------------------------------------------------- Minor comments: - Caption of Fig. 1: Eq. (4) appears on page 4, but the figure is on page 2. - L.82 to L.89: explanations are vague; please consider some improvements. - Section 1.2 is a key part of the technical contribution (introducing mLFHT), but there are no prior announcements or motivations for it. Please highlight it more. - L.105 to L.109: explanations seem non-informative and can be removed. - The transition from Section 1.2 to 2.1 is not smooth at all. Also, the math that comes afterwards is unnecessarily intense, and some of it can be moved to the appendix (especially the "Removing the Bias" part; I guess it won't hurt the paper if this part is moved). - Again, section 2.2 needs more prior advertisement and explanation. - L.140 to L.151: explanations seem vague. - L.154: please consider citing some reference paper for the theorem; otherwise it might look like it is your contribution. - L.179: The authors claim that this paper imposes fewer restrictions on the class of distributions for $P_X$ and $P_Y$ compared to other works. I am not completely sure about this claim (see questions); however, it might be a good idea to advertise it more. For example, the authors could bring it up in the Introduction section. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Please position your work w.r.t. Ref. [18], since they also achieved a similar non-symmetric and non-asymptotic relation between $m$ and $n$. - Please justify assumptions (i) and (iii) in L.165 and L.167, respectively. Assumption (ii) makes sense, but I assume (i) and (iii) might be limiting. Are they necessary in general (are they fundamentally required)? If yes, then please elaborate. 
If not, please explain how this will affect the applicability of this method to real-world tasks. - In Theorem 3.2, how can we deal with $R$? While reading Section 3.5, I noticed $R$ automatically becomes zero in other existing works since they assume $P_Z=P_X$ or $P_Z=P_Y$. This makes the comparison a little troublesome. Again, is the appearance of $R$ inevitable in your equations? Or is it a limitation of this work? - Related to the previous question: In L.101 it is said that $P_Z\triangleq (1-\nu)P_X+\nu P_Y$. Then, in L.163 we have $\nu=\arg \min_{\nu'} \mathrm{MMD}(P_Z,(1-\nu')P_X+\nu' P_Y)$. Does this mean $R$ is always zero, since $P_Z$ is already assumed to be a convex combination of $P_X$ and $P_Y$? This part is confusing; please clarify. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s comments on our writing as well as the ‘minor comments’; our revision will incorporate all of the helpful suggestions. The page limit is indeed challenging, but we believe we are able to make the improvements necessary for good readability. Below, we address some more specific questions brought up in the review. 1. $\textbf{ “Comparisons to Ref. [18].”}$ In short, we study a generalization of the problem studied in [18], and we obtain results that imply many of the results therein. Indeed, [18] was the first to identify the trade-off between m and n, but we show that this phenomenon applies much more generally, and we are the first to empirically demonstrate it on real datasets (beyond information-theoretically worst-case constructions) via a novel, theoretically provable, efficient algorithm. Please see also our general response #2. 2. $\textbf{ “Thm 3.3 applies only to the particular kernel-based solution and not mLFHT in general.”}$ Our lower bounds in Thm 3.3 apply to the mLFHT problem in general (they are not algorithm-specific). In other words, any algorithm for mLFHT must use a number of samples greater than the bound in Thm 3.3. Our proof (Appendix E) uses information-theoretic techniques and bounds the total variation distance between classes of constructed hard instances involving the eigenfunctions of the kernel. We will clarify this in the revision. 3. $\textbf{ “Justify the assumptions (i) and (iii) in L.165 and L.167.”}$ Assumption (i) is crucially required for our results and, we believe, is somewhat mild (the upper bound on the density). Note also that the base measure can be selected arbitrarily, giving a lot of flexibility in what is considered a “bounded” density (e.g. the base measure could become singular at the border of the manifold, etc.). As for (iii), it is less restrictive than requiring that R=0 (i.e. that P_Z is exactly a mixture of P_X and P_Y), which was done in prior work [18]. 
Therefore, (iii) is a relaxation of a commonly used assumption. Specifically, tuning $R$ controls how misspecified the problem is allowed to be. See also our general response #3 about (iii). 4. $\textbf{ Questions 3-4}$ In L101-105 we describe a not fully general version of mLFHT (to keep the introduction simple), which corresponds to the case R=0 of L162-167. Our referencing equation (mLFHT) in Theorem 3.2 indeed makes this confusing, we will correct this. The introduction of $R$ should be viewed as an extension (or relaxation, cf. previous part) not as a limitation, we think. Please also see general response #3 for more. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. My concerns w.r.t. lower-bound are alleviated. However, my other two concerns are still remaining: 1) paper (at least in parts) should be rewritten to reflect the fact that the proved non-symmetric relation between $m$ and $n$ is not entirely new, and the current paper has only shown its applicability in a more general setting. 2) The more general setting in this work has introduced at least one new parameter, denoted as $R$, which appears in almost all the bounds and is in fact a fundamental property of the problem set. However, authors have not discussed this parameter in details. Is it necessarily needed? how tight are the bounds w.r.t. $R$? Overall, assuming the authors will correct all the shortcomings mentioned during the reviews (specially the confusing parts between L.101-105 and L.163), I raise my score to borderline accept. --- Reply to Comment 1.1.1: Comment: Thank you for your effort in reviewing our paper. We will make sure to edit the paper according to your advice. Let us address your two points below. 1) We hope we clearly identified that our work is an extension of the prior art, e.g. 
quote from the second paragraph of the paper: ''Recently, Gerber and Polyanskiy (2022) proposed likelihood-free hypothesis testing (LFHT) as a simplified model and found minimax optimal tests for a range of non-parametric distribution classes, thereby identifying a fundamental simulation-experimentation trade-off between the number of simulated observations n and the size of the experimental data sample m. Here we extend Gerber and Polyanskiy (2022) to a new setting designed to model experimental setups more truthfully and derive sample complexity (upper and lower bounds) for kernel-based tests over non-parametric classes.'' However, we will do more and promote this fact in the abstract. Our general response #2 contains more details related to this question. 2) Indeed, in this work we introduced at least two more parameters: the $\delta$ in (mLFHT), which in our opinion is crucial, and the robustness parameter $R$. We acknowledge that we have not identified optimal tradeoffs for either of them; this will be a subject of future work. Furthermore, in all honesty, we did not view $R$ as very important (unlike $\delta$), since it was introduced only with the goal of demonstrating that our results are at least somewhat robust to misspecification; it is not $\textit{needed}$ in the sense that our work mostly, if not only, concerns the case $R=0$. We thank you for making us think deeper about this parameter, and hopefully we will obtain optimal results in the future. Our general response #3 contains more details related to this question.
Summary: The paper addresses a likelihood-free hypothesis testing (LFHT) scheme, where the data distribution is a mixture of two distributions. The null hypothesis is given by one of these distributions, whereas the alternative hypothesis is that the mixture coefficient is lower-bounded by some delta. The motivation for such a scheme is that it is tied to applications in particle physics, for example the Higgs boson discovery. This is a generalization of the original LFHT scheme, which is recovered if delta=1. The goal of the paper is to characterize the trade-off between n, i.e., the size of the dataset, and m, the number of samples available from the mixture data distribution at test time. Specifically, the authors derive lower and upper bounds on the minimal sample complexity. They also empirically verify this trade-off for kernel-based hypothesis testing, where the kernels are parameterized via neural networks. Strengths: An interesting result is the dependency between n and m, for which the authors derive lower and upper bounds, with a non-trivial trade-off between these quantities. It seems, though, that this was already addressed in prior work, but this is not my research area and I may be missing other contributions. Weaknesses: It seems that the applications for the proposed mixture LFHT are quite limited. In particular, the Higgs boson discovery is a very specific application, and therefore it is unclear whether this work extends to other key applications. The authors mention in Section 4 that the results from Theorem 3.2 cannot be directly applied because of the dependency between data and kernel. This is not clear to me. Isn’t the main idea of a kernel that it is learned from the data? It also seems to be the reason why their proposed generalization bounds cannot be included in Fig. 1 as a reference, which questions the usefulness of these bounds as fundamental limits of the scenario at hand. 
The statement in line 274 seems like a trivial observation, typically the validation set is always chosen much smaller than the training set. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Does not apply. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the helpful comments in the review and will make sure to address them in the revision. In the following we will address some of the questions mentioned in the review. 1.$\textbf{”It seems that the applications for the proposed mixture LFHT are quite limited.”}$ Our motivation for defining mLFHT is in fact very much practically driven: if one tries to model high-energy physics (LHC) or gravitational waves experiments (LIGO) those typically do have an alternative (non-null) hypothesis under which samples come from a mixture of noise and signal distributions. One good illustration of the interest of the physics community is [4] with over 8000 citations which studies a problem essentially equivalent to mLFHT in an asymptotic and parametric setting. Furthermore, since it is both an extension of LFHT and two-sample testing, mLFHT shares many applications arising therein (e.g. generative model comparisons, bioinformatics, etc, [1, 2]). The main objective of this work is to propose a theoretically provable efficient algorithm that is also practical in handling real-world data (which is well-modeled by our mixture framework), including the two examples shown in this paper. Specifically, what is crucial in the mLFHT setting is that despite the fact that the weight of the “signal” component in the alternative hypothesis can be very small, the number of simulation samples does not need to be large because the simulation samples can be forced to come directly from the signal component of the mixture. This prevents a certain kind of “curse of dimensionality” as we attempted to explain in the last paragraph of Section 3. We will make sure to explain the motivation and novelty of mLFHT more clearly in a revision. 2. $\textbf{“Connection of Figure 1 and Theorem 3.2.”}$ As you point out, our theoretical upper bounds rely on an a priori fixed kernel, while in our empirical experiments we train the kernel based on part of the data. 
Instead of using artificial experiments with a fixed kernel to empirically illustrate the soundness of our sample complexity bounds, we chose to use a state-of-the-art algorithm. The purpose of Fig. 1 is to show that an asymmetric simulation-experimentation tradeoff arises on real data and with complex real-world (kernel-training) algorithms beyond the theoretical minimax fixed kernel setting. We view Fig. 1 as important empirical evidence and motivation for further study of this simulation-experimentation tradeoff in (m)LFHT. 3. $\textbf{“L274 is trivial”}$ We believe that the observation is not entirely trivial since $X^{ev}, Y^{ev}$ are not used for validation in the traditional sense but rather serve as samples in estimating (evaluating) the test statistics conditioned on the kernel trained on a held-out set. $\textbf{References}$ [1] Lintusaari, J., Gutmann, M. U., Dutta, R., Kaski, S., and Corander, J. Fundamentals and recent developments in approximate Bayesian computation. 2017. [2] Bounliphone, W., Belilovsky, E., Blaschko, M. B., Antonoglou, I., and Gretton, A. A test of relative similarity for model selection in generative models. 2016 [3] Gerber, P. R. and Polyanskiy, Y. Likelihood-free hypothesis testing. 2022. [4] Cowan, G., Cranmer, K., Gross, E. and Vitells, O. Asymptotic formulae for likelihood-based tests of new physics. 2011.
Summary: This paper proposes a test statistic that can be used for likelihood-free hypothesis testing [LFHT] (for distributions $P_X, P_Y$, given a sample from $P_Z$, decide if $P_Z = P_X$ versus $P_Z = P_Y$) and mixed likelihood-free hypothesis testing [MLFHT] (for $P_Z = (1 - \nu) P_X + \nu P_Y$, decide if $ \nu = 0 $ versus $ \nu \ge \delta$). The test statistic is defined as the MMD distance between $P_Y$ and $P_X$ for an appropriately chosen kernel $K$. The paper then shows an upper and lower bound for these two testing problems. These depend on a base measure $\mu$ and a kernel $K$, both of which can be chosen, such that if $e_j$ are eigenfunctions of $\mu$, then $K(x, y) = \sum_j \lambda_j e_j(x) e_j(y) $. The bounds are then stated in terms of $\lambda$ (either $\mid\mid \lambda \mid \mid_\infty $ or $ \mid \mid \lambda \mid\mid_2$). Strengths: - The results are very general with reasonable assumptions. The authors assume that the distributions $P_x, P_y, P_z$ have bounded densities wrt the base measure $\mu$, and the sample complexity scales linearly with this bound on the density. This is in contrast to existing work, which assumes more descriptive assumptions on these distributions. - There is empirical support for the asymmetric sample complexity between the train sample $n$ and the test sample $m$. - The lower bounds are general, in that although they are minimax, they apply for any choice of $\mu, K$. - The upper bound allows for distribution mismatch, i.e., the test sample need not be a perfect mixture of $P_X, P_Y$. The bound holds as long as the true distribution is close in MMD to some mixture of $P_x, P_y$. - This approach also allows for learning a kernel from the data, although the theory bounds don't hold in this case as it violates independence between choice of the kernel and the training samples. 
Weaknesses: - The bounds are a little non-intuitive / roundabout, as they depend only on the base measure $\mu$ and the kernel $K$, both of which do not directly depend on the hypothesis class containing $P_x, P_y$. From my understanding, the only place the hypothesis class shows up is that it must have bounded density with respect to the measure $\mu$. - The novelty of the results over existing work is not explained fully. Is the main contribution of the work the bounds for mixed-distribution testing? Or are the test statistic and the analysis also entirely new? - As mentioned by the authors, there is a gap between the upper and lower bounds for the training set size $n$ and its dependence on the mixture coefficient $\delta$. However, I think this is acceptable and a good open question. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss differences between the upper and lower bounds, but apart from that, there is no discussion of limitations. There should be more discussion of the drawbacks of this approach w.r.t. existing approaches -- surely there must be some, given how general these results are in comparison to existing work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the helpful comments in the review and will make sure to address them in the revision. In the following, we address some of the questions mentioned in the review. 1. $\textbf{“The bounds are a little non-intuitive / roundabout.”}$ Please see our general response #1. We believe that imposing only a mild assumption on the hypothesis class helps our setting model real-world tasks better. On the other hand, the dependency between the trained kernel and the finite observations from the hypothesis distributions indeed lacks a satisfying theory. We consider this a major open problem worth investigating not only in (m)LFHT but also in other applications of kernel methods (e.g., two-sample testing [3]). 2. $\textbf{“The novelty of the results over existing work is not explained fully.”}$ Please see our general response #2 on our novelty over [1]. To specifically address the other points, a version of our MMD test statistic was indeed proposed in prior work (e.g., [2]). However, the theoretical analysis of the MMD statistic (especially non-asymptotically) on the (m)LFHT problem is novel to our knowledge. We will make sure the novelties are highlighted and contrasted properly in the revision. 3. $\textbf{“Discussion of limitations.”}$ Beyond the limitations in Appendix L, other limitations include our kernel optimization lacking sufficient theory (and therefore, the upper bound must rely on fixed kernels), the dependency on certain parameters being open, and several open questions when model misspecification ($R>0$) is present. We will make sure to include a more thorough discussion in the revision. $\textbf{References}$ [1] Gerber, P. R. and Polyanskiy, Y. Likelihood-free hypothesis testing. 2022. [2] Bounliphone, W., Belilovsky, E., Blaschko, M. B., Antonoglou, I., and Gretton, A. A test of relative similarity for model selection in generative models. 2015. [3] Liu, F., Xu, W., Lu, J., Zhang, G., Gretton, A., and Sutherland, D. J. 
Learning deep kernels for non-parametric two-sample tests. 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I have read the other reviews and responses. I agree with the reviewers regarding clarity of the results, and I think including the suggested edits will help improve the paper. While there do remain open questions related to this work, I think the contributions warrant an accept, and I'll keep my score as is.
Rebuttal 1: Rebuttal: We greatly appreciate the valuable suggestions given by reviewers and will revise our manuscript accordingly. In this response, we would like to address several comments that appear in multiple reviews. 1. $\textbf{“Interpretation of the bounds and its dependence on kernel, base measure, hypotheses”}$ Because our results rest on minimal assumptions for the sake of generality, our bounds can indeed be challenging to interpret. We note that the dependence on the kernel spectrum is unavoidable, as can be seen from comparing our upper and lower bounds. To build intuition, in Section 3.5 and Appendix B our upper bound (Thm 3.2) is applied with some explicit kernels (Gaussian kernels/discrete kernels) and to classical nonparametric classes such as smooth densities and bounded discrete PMFs. These applications recover the minimax optimal sample complexity shown in past work [1,2]. We hope these concrete examples help in interpreting our results within a broader context. We will take more care in the revision to emphasize these. 2. $\textbf{“Novelty over [1] (Ref. [18] in our paper)”} $ [1] is an important inspiration for our study; however, we innovate over [1] in several ways, which we try to summarize below. i) $\textbf{New problem setting.}\quad$ [1] derived the minimax tradeoff for several specific classes of distributions that are different from our own (they studied smooth densities on finite-dim spaces and discrete PMFs under TV distance, whereas we study $\underline{\text{unconstrained densities}}$ but under kernel distance). Moreover, our work extends [1] to a more general setting (mLFHT), which on one hand is more practical, and on the other hand, exhibits a different tradeoff (e.g., it mitigates the curse of dimensionality in the number of simulated samples (L236)). This setting arises in several scientific disciplines (HEP, LIGO, etc.) and is more realistic (without assuming smoothness). 
ii) $\textbf{More general results.}\quad$ Although under different settings, our upper bound results, applied with specific kernels (Appendix B), recover several of [1]’s key (minimax optimal) results for LFHT as well as several results of [2] in two-sample testing. Thus, our theoretical bounds are more general while still being optimal when specialized to prior settings. iii) $\textbf{Practical algorithm and empirical trade-off.}\quad$ Our (novel) kernel-based algorithm is efficient and able to solve real-world tasks, whereas the tests considered in [1] require discretizing space into a large (exponential in dimension) number of bins and are thus impractical (though minimax optimal). Furthermore, our experiments confirm the validity of the theoretical insight that one can trade off $m$ versus $n$ and that in general one needs more simulation samples $n$ than real samples (asymmetry of the trade-off). We will make sure that our contributions are clarified and emphasized in the revision. 3. $\textbf{“Dependence on R” and “Assumption (iii)”}$ First, we want to clarify that L.101 is informal and, in fact, the special case ($R=0$) of the mLFHT problem stated in Section 3.2. The latter can be formally written as \begin{align*} H_0: P_Z\in \lbrace P_Z : \nu(P_Z)=0 \rbrace \quad \text{versus} \quad H_1: P_Z\in\lbrace P_Z:\nu(P_Z)\geq\delta\rbrace \end{align*} where $\nu(P_Z)=\arg\min_{\nu'}\mathrm{MMD}(P_Z,(1-\nu')P_X+\nu' P_Y)$. Intuitively, (iii) says that “the test distribution $P_Z$ should not be too far away from some mixture of labeled data,” where $R$ governs how mis-specified we allow the problem to be. Thus, (iii) shouldn't be thought of as an 'assumption', but rather as a relaxation of the assumption (present in prior work on LFHT) that $P_Z$ is an exact mixture of $P_X$ and $P_Y$. We want to clarify that the major focus of our paper is on the case when $R=0$, i.e. 
when there is no model mis-specification and $P_Z$ comes from a mixture of $P_X$ and $P_Y$, as is assumed in many applications and past literature. Our theory and experiments cover this scenario perfectly by simply plugging in $R=0$ in Thm 3.2. In Thm 3.2, we included the more general case where $R$ can be positive in order to present the most general result possible. The dependence on $R$ is not a crucial part in the theory or experiments. While a direct controlled study involving $R$ would be hard (since it depends on the kernel learned from samples), we believe that further investigation of (m)LFHT involving non-trivial model mis-specification is an interesting open direction worth investigating. $\textbf{References}$ [1] Gerber, P. R. and Polyanskiy, Y. Likelihood-free hypothesis testing. CoRR, abs/2211.01126, 2022. [2] Li, T. and Yuan, M. On the optimality of gaussian kernel based nonparametric tests against smooth alternatives. arXiv preprint arXiv:1909.03302, 2019
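As an aside, the minimizer $\nu(P_Z)$ defined above is easy to compute in practice: the squared MMD between $P_Z$ and the mixture $(1-\nu)P_X+\nu P_Y$ is quadratic in $\nu$, so the minimizer has a closed form in terms of kernel mean embeddings. A minimal sketch with a plain Gaussian kernel and biased plug-in estimates (illustrative only, not our exact implementation):

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 * bandwidth^2))."""
    sq = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return np.exp(-sq / (2.0 * bandwidth**2))

def estimate_nu(x, y, z, bandwidth=1.0):
    """Minimize MMD^2(P_Z, (1 - nu) P_X + nu P_Y) over nu in [0, 1].

    Writing mu_P for the kernel mean embedding of P, the objective is
    ||mu_Z - (1 - nu) mu_X - nu mu_Y||^2, a quadratic in nu whose minimizer
    is <mu_Y - mu_X, mu_Z - mu_X> / ||mu_Y - mu_X||^2, clipped to [0, 1].
    All inner products are estimated by kernel-matrix means (biased plug-in).
    """
    kxx = gaussian_kernel(x, x, bandwidth).mean()
    kyy = gaussian_kernel(y, y, bandwidth).mean()
    kxy = gaussian_kernel(x, y, bandwidth).mean()
    kxz = gaussian_kernel(x, z, bandwidth).mean()
    kyz = gaussian_kernel(y, z, bandwidth).mean()
    num = kyz - kxz - kxy + kxx   # <mu_Y - mu_X, mu_Z - mu_X>
    den = kyy - 2.0 * kxy + kxx   # ||mu_Y - mu_X||^2
    return float(np.clip(num / den, 0.0, 1.0))
```

With well-separated $P_X, P_Y$ and a true mixture $P_Z$, this plug-in estimate recovers the mixing weight up to sampling error.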
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Enhancing Robot Program Synthesis Through Environmental Context
Accept (poster)
Summary: This paper introduces the Environmental-context Validated lAtent Program Synthesis framework (EVAPS), which builds upon the SED approach by utilizing a trial-eval-repair loop to enhance program evolution and improve generalization capabilities. EVAPS leverages partial environmental observations by initially obtaining candidate programs from existing synthesizers. By executing these candidate programs and capturing the environmental context before and after each action, EVAPS models both the environmental and syntax contexts concurrently. Through iterative modifications to address semantic conflicts across program tokens, EVAPS aims to correct erroneous program fragments that do not produce the desired output. The proposed framework was evaluated in the partially observed Vizdoom domain, involving a robot navigating a 3D world and interacting with various elements, including objects and adversaries. The experiment results demonstrate the superiority of the approach over the baselines in accurately modeling program semantic subtleties and effectively resolving potential errors. Strengths: + EVAPS synthesizes robot programs operating in partially observed environments (surrounding RGBD image inputs). + EVAPS uses a trial-eval-repair loop to iteratively improve program generalizability by rectifying potentially erroneous code segments. + EVAPS is capable of handling noisy observations. A key novelty of EVAPS, when compared to previous program synthesis approaches in the Karel domain, is its ability to handle partial environmental observations. This capability is crucial for synthesizing robot programs, as relying on global environment information is often impractical in real-world robot navigation tasks. 
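For intuition, the trial-eval-repair idea summarized above can be sketched generically as follows. All names here are hypothetical illustrations of the generic loop, not the actual EVAPS interfaces or architecture:

```python
def trial_eval_repair(synthesize, execute, repair, spec, max_rounds=10):
    """Generic trial-eval-repair loop: run a candidate program, compare the
    observed behavior against the specification, and patch the offending
    fragment until no semantic conflicts remain or the budget runs out.
    This is a schematic of the general idea only, not EVAPS itself."""
    program = synthesize(spec)
    for _ in range(max_rounds):
        observations = execute(program)   # stands in for per-action partial observations
        if observations == spec:          # no remaining semantic conflicts
            return program
        program = repair(program, observations, spec)
    return program

# Toy instantiation: "programs" are token lists, "execution" replays tokens,
# and "repair" rewrites the first token whose observed effect mismatches the spec.
def toy_repair(program, observations, spec):
    i = next(k for k, (o, s) in enumerate(zip(observations, spec)) if o != s)
    patched = list(program)
    patched[i] = spec[i]
    return patched
```

The real framework replaces the toy pieces with a neural synthesizer, program execution in the environment with partial RGBD observations, and a learned repair model over program tokens.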
EVAPS consists of two essential modules: the partial observation leveraging module, which emphasizes the environmental context of specific program tokens, and the code symbol alignment module, which focuses on modeling the implicit connections between different code segments in semantic spaces. By integrating both syntax and semantic information, these modules facilitate program repair and enhance the effectiveness of EVAPS in generating accurate and meaningful robot programs. Weaknesses: The title of the paper may appear to overstate the actual contribution made by the research. The choice of the Vizdoom environment as a benchmark for evaluating Robotic Program Synthesis in this paper raises some concerns. The evaluation of EVAPS in the navigation-focused Vizdoom environment does appear to have limitations. While efficient navigation is undoubtedly an important robotic task, it may not fully represent the diverse range of tasks in robotics, particularly those involving object manipulation in fully observable environments. It would be valuable to explore the applicability of EVAPS in synthesizing robot programs for other more realistic environments like Miniworld and AI2-Thor, which involve long-horizon vision-based tasks. Assessing the performance of EVAPS in such contexts would provide a more comprehensive evaluation of its capabilities. Given the current state of the paper, it remains uncertain whether EVAPS is adequately equipped to handle the challenges posed by complex robotic navigation tasks. To determine the true extent of EVAPS' applicability across a broader range of robotic tasks and environments, additional research and experimentation are necessary. Nevertheless, I do believe this paper contributes to the program synthesis community. 
The comparative evaluation of EVAPS against the baselines, when provided with partial observations, showcases its superior performance, thereby highlighting the significance of the observation leveraging module and the code symbol alignment module. However, there remains a lingering uncertainty regarding the practical implications of this demonstrated value in the context of robot control. Technical Quality: 3 good Clarity: 3 good Questions for Authors: EVAPS seems incremental to SED. To gain a comprehensive understanding of EVAPS' contribution, it would be beneficial to apply the framework to the Karel domain, where the baselines were evaluated. This approach would effectively demonstrate the effectiveness of the observation leveraging module and the code symbol alignment module. Is there any evidence or indication from the authors regarding the performance of EVAPS specifically in the Karel domain? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper presents an approach for robot program synthesis. However, the evaluation falls short in providing a convincing argument on how EVAPS can be effectively applied to learn robot-control policies, as the experiments are confined to a relatively simple video game environment. Further investigation and experimentation in more complex and realistic robotic scenarios would be necessary to demonstrate the practical utility and effectiveness of EVAPS in learning robot-control policies. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful comments and constructive suggestions. We summarise the issues pointed out and address them in the following: > A gap between the title and the experiment environment in which this work is evaluated Thanks for pointing this out. In this work, we explore the topic "enhancing robotic program synthesis with the execution environment" and believe that it will complement traditional program synthesis and is worth exploring. Considering navigation is one of the most common and important tasks in robotics, we evaluate our work on navigation tasks in robotics. To be accurate, we will make our paper title specific to navigation tasks in the revision. > Choice of experiment environments As the reviewer pointed out, there exist several commonly used execution environments for evaluating approaches in robotics, including Karel, Vizdoom, Miniworld and AI2-Thor. We went through the following process to choose our experiment environment. We conducted a review of robotic program synthesis works published in the recent 5 years (the most related works are listed below [1-9]) and found that they use either Karel or Vizdoom as the experiment environment. In the end, we chose Vizdoom over Karel as our experiment environment since tasks in Vizdoom involve more actions and scenarios are more complex. The differences between them are shown as follows. Karel is a 2D grid world involving 5 marker action primitives and 5 perception primitives for obstacle detection. Vizdoom is a 3D semi-realistic simulation environment involving 7 interactive action primitives and 6 perception primitives [3]. 
| Environment | Dimension | DSL space (action, perception) | State Representation | Task Type |
| ---- | ---- | ---- | ---- | ---- |
| Vizdoom | 3D | (7, 6) | 120 x 160 x 3 | Navigation |
| Karel | 2D | (5, 5) | 8 x 8 x 16 | Navigation |

In general, Vizdoom covers Karel in terms of environmental dimensions, the complexity of state representation, as well as action and perception space. Thus, we chose Vizdoom as our execution environment. > Indication of the performance of our approach in the Karel domain As discussed above, Vizdoom is essentially an upgraded version of Karel. Relevant actions and perceptions in Karel are contained in Vizdoom. Most of the scenarios and tasks in Karel can be found in Vizdoom. Since our approach outperforms the baselines in Vizdoom, it is reasonable for us to believe that consistent results can be achieved in the Karel domain. References [1] Duan, Xuguang, et al. "Watch, reason and code: Learning to represent videos using program." In Proceedings of ACM MM, 2019. [2] Dang-Nhu, Raphaël. "PLANS: Neuro-symbolic program learning from videos." In Proceedings of NeurIPS, 2020. [3] Sun, Shao-Hua, et al. "Neural program synthesis from diverse demonstration videos." In Proceedings of ICML, 2018. [4] Gupta K, Christensen P E, Chen X, et al. "Synthesize, execute and debug: Learning to repair for neural program synthesis". In Proceedings of NeurIPS, 2020. [5] Chen, Xinyun, Dawn Song, and Yuandong Tian. "Latent execution for neural program synthesis beyond domain-specific languages." In Proceedings of NeurIPS, 2021. [6] Manchin, Anthony, et al. "Program Generation from Diverse Video Demonstrations." arXiv preprint arXiv:2302.00178 (2023). [7] Chen, Xinyun, Chang Liu, and Dawn Song. "Execution-guided neural program synthesis." In Proceedings of ICLR, 2018. [8] Shin, Eui Chul, Illia Polosukhin, and Dawn Song. "Improving neural program synthesis with inferred execution traces." In Proceedings of NeurIPS, 2018. [9] Trivedi, Dweep, et al. 
"Learning to synthesize programs as interpretable and generalizable policies." In Proceedings of NeurIPS, 2021. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: I appreciate the authors' response. However, I am still not convinced that the video game experiments sufficiently demonstrate the suitability of EVAPS for synthesizing robot programs in visually realistic navigation environments such as Miniworld and AI2-Thor, which entail navigating complex and partially observable long-horizon robot tasks. It would be advisable for the authors to assess the applicability of EVAPS across a wider spectrum of navigation tasks and environments. With that being mentioned, even if the title is narrowed down to navigation tasks, I would still argue that the evaluation does not provide support for it. Regarding program synthesis for robot navigation with partial observability, it looks like at least the benchmarks from the following papers are more challenging than the set of tasks explored in this paper. [1] Cao, Yushi, et al. "GALOIS: Boosting Deep Reinforcement Learning via Generalizable Logic Synthesis." Advances in neural information processing systems (2022). I would keep my score based on the above discussion. --- Reply to Comment 1.1.1: Title: Response to Reviewer mJeF's Comment Comment: Thank you very much for your further comments. As we understand, the remaining concern is the choice of experimental environment. We agree with the reviewer that experiments should be conducted in a more realistic environment. This is the reason we chose VizDoom as our experimental environment. 1. VizDoom is a more realistic environment. 
In fact, VizDoom contains more challenging scenarios than the suggested environments: - VizDoom is more complex compared to MiniGrid [1], used in the GALOIS [2] work pointed out by the reviewer:

| Environment | Dimension | Action Primitives | State Representation |
| ---- | ---- | ---- | ---- |
| VizDoom | 3D | 7 | 120 x 160 x 3 |
| MiniGrid (used in GALOIS [2]) | 2D | 5\* | 20 x 20 |

*Note\*: Only a maximum of 5 action primitives are simultaneously available in a single scenario.* Considering all four scenarios (DoorKey, BoxKey, UnlockPickup, Multiroom) evaluated in the GALOIS [2] work, "equivalent" scenarios can be found in VizDoom. - VizDoom is more complex than MiniWorld [3], suggested by the reviewer. As described in the README.md of the official repo [4], MiniWorld can be seen as a simpler alternative to VizDoom or DMLab. For your convenience, we reproduce the relevant text here: > MiniWorld is a minimalistic 3D interior environment simulator for reinforcement learning & robotics research. It can be used to simulate environments with rooms, doors, hallways and various objects (eg: office and home environments, mazes). *MiniWorld can be seen as a simpler alternative to VizDoom or DMLab.* It is written 100% in Python and designed to be easily modified or extended by students. 2. VizDoom is the commonly adopted experimental environment in recent robotic program synthesis works [5, 6, 7, 8], according to our literature review reported in the previous response. Hopefully, this response can address your concern. References [1] Maxime Chevalier-Boisvert, Lucas Willems, and Suman Pal. "Minimalistic gridworld environment for openai gym.", 2018. [2] Cao, Yushi, et al. "GALOIS: boosting deep reinforcement learning via generalizable logic synthesis." In Proceedings of NeurIPS, 2022. [3] Chevalier-Boisvert, Maxime, et al. "Minigrid & Miniworld: Modular & Customizable Reinforcement Learning Environments for Goal-Oriented Tasks." CoRR, 2023. 
[4] Chevalier-Boisvert, Maxime, "MiniWorld: Minimalistic 3D Environment for RL & Robotics Research", GitHub repository, 2018. [5] Manchin, Anthony, et al. "Program Generation from Diverse Video Demonstrations." arXiv preprint arXiv:2302.00178, 2023. [6] Dang-Nhu, Raphaël. "PLANS: Neuro-symbolic program learning from videos." In Proceedings of NeurIPS, 2020. [7] Duan, Xuguang, et al. "Watch, reason and code: Learning to represent videos using program." In Proceedings of ACM MM, 2019. [8] Sun, Shao-Hua, et al. "Neural program synthesis from diverse demonstration videos." In Proceedings of ICML, 2018.
Summary: The paper claims that global observation in robot program synthesis is not achievable, so it proposes to use partial observation. It learns an observation embedding module and a semantic-grammatical alignment module to repair candidate programs, which can increase the accuracy and generalization of robot program synthesis. Strengths: 1. The problem this paper focuses on is important to the application of PS, as a perfectly and completely observable environment is often not available in the real world. 2. The experiment setting and analysis are clear and comprehensive. The paper analyses the effects of multiple factors of imperfect observation on the proposed method. Weaknesses: [W1]. The key weakness is the choice of baseline methods. As a program repair method, it’s better to compare with methods that have a fair setting rather than program generation methods. There are other program repair methods that utilize trajectory to enhance performance. Moreover, the baseline methods are too old (most before 2020). [1] Execution-guided neural program synthesis [2] Write, execute, assess: Program synthesis with a REPL [3] Improving neural program synthesis with inferred execution traces [W2]. Needs more explanation about the difference between global environment information and partially observable environments. [W2.1] Using pre-observations and post-observations tied to specific program segments to enrich the information contained in the embedding is a general trick. PS methods that take global environment information as input can also learn such representations to enhance accuracy and generalization. What is the key connection between the proposed method and partially observable environments? [W2.2] Needs some insightful evaluation in the results section to demonstrate the relation between the main motivation (partially observable environment) and the proposed architecture. [W3]. 
There are some typos; e.g., in line 203, citation [41] is not the baseline. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Please see weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the insightful comments and constructive suggestions. We summarise the issues pointed out and address them in the following: --- > Choice of baseline We conducted a review of robotic program synthesis works published in the recent 5 years and identified SED, proposed in 2020, as the state-of-the-art technique in robotic program synthesis. In the evaluation, we compared our approach with SED as shown in Section 4.2. The experiment results show our approach significantly outperforms SED. Regarding traditional program repair approaches, we are indeed aware of a rich body of work in the program repair area, which can be classified into three categories: - Search-based. This type of approach considers program repair as a search problem, exploring the space of all possible program candidates to identify one that satisfies the given weak specification, i.e., test cases. The representative work of this area is GenProg [1]. - Semantic-based. This type of approach extracts semantic information from the program under repair (typically represented as path constraints) and then generates patches by solving those constraints. The representative work is SemFix [2]. - Learning-based. This type of approach leverages a number of patches generated by developers to learn a model that repairs programs. An earlier and representative work is Prophet [3]. Those approaches depend on high-quality test suites to validate patch candidates, which are unavailable in the setting our approach is targeted at, i.e., Vizdoom. Besides, Karel and Vizdoom are designed for specific purposes. Complex features in programs that traditional program repair approaches rely on can be abstracted away. These pose challenges for applying traditional program repair approaches to such domains. For these reasons, SED [4], a robotic program synthesis work focused on program repair, also does not include a comparison with traditional program repair approaches. 
> More explanation on the difference between the global execution environment and the partial execution environment In the context of robotic systems, "partial environment" pertains to the immediate surroundings that the robot is capable of perceiving through its sensory devices, whereas the concept of "global environment" encompasses a comprehensive understanding of the entire scenario, including elements beyond the robot's perceptual capabilities [5, 6]. In the real world, global observations are typically unavailable and only partial observations are available. Thus, our approach is focused on partial observations for enhancing program synthesis. > The key connection between the proposed approach and partially observable environments The key connection between the proposed approach and partially observable environments is our unique design that establishes a connection between the program execution context and the partially observable environment. As shown in lines 127-133, our approach takes the execution context of statement $S$ and the partially observable environment of statement $S$ as a unit for model training. This enables the combination of program syntax and the corresponding partially observable environment to predict a token, thereby enhancing the accuracy of program synthesis. > Evaluation of the key connection between the proposed approach and the partially observable environment We conducted a study to evaluate the effectiveness of our design in the experiment. As shown in Table 2, EVAPS achieves the best performance, and its performance significantly degrades when this unique design is taken away. “EVAPS+O” and “EVAPS+S” are variants of EVAPS without the connection between the program execution context and the partially observable environment, and they both underperform EVAPS. > Typos Thank you for your feedback; we will carefully proofread the manuscript and correct the errors pointed out. References [1] Le Goues, Claire, et al. 
"GenProg: A generic method for automatic software repair." IEEE Transactions on Software Engineering, 2011. [2] Nguyen, Hoang Duong Thien, et al. "SemFix: Program repair via semantic analysis." In Proceedings of ICSE, 2013. [3] Long, Fan, and Martin Rinard. "Automatic patch generation by learning correct code." In Proceedings of POPL, 2016. [4] Gupta K, Christensen P E, Chen X, et al. "Synthesize, execute and debug: Learning to repair for neural program synthesis". In Proceedings of NeurIPS, 2020. [5] Chen, Ci, et al. "Motion planning for heterogeneous unmanned systems under partial observation from uav." In Proceedings of IROS, 2020. [6] Katsumata, Yuki, et al. "Map completion from partial observation using the global structure of multiple environmental maps." Advanced Robotics, 2022. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal Comment: 1. Thanks for the authors' rebuttal; the concern about baselines is addressed. I suggest the authors add these discussions to the paper. 2. However, I still think that the core design of the proposed method and the problem of the partially observed env lack some key and reasonable connection. I will keep my score. Thanks for the rebuttal. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thanks for your comments. We will add the discussion about the baselines and elaborate more on the key connection between the proposed method and the partially observed environment in the revision.
Summary: This paper proposes EVAPS to enhance robotic program synthesis by integrating partial environmental observations. Specifically, EVAPS utilizes both the environmental context leveraging module and the code symbol alignment module to improve its ability to rectify semantically erroneous program segments and generalize across various tasks. Comprehensive experiments on the partially observed Vizdoom benchmark demonstrate its superior performance over other baselines across various tasks. Overall, this is a well-written paper, and its extensive experiments and ablation studies verify the effectiveness of the proposed method. I would be leaning to accept this paper. Strengths: * The proposed method is technically sound and well motivated. * This paper is well written and well structured. It provides clear formulations and an overview figure to clearly explain the method. * The experimental evaluation is rather comprehensive and provides convincing results to demonstrate the effectiveness of the proposed method. Weaknesses: I did not identify any weaknesses for this paper as I am not familiar with the task of robotic program synthesis. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can you shed some light on the limitations of the proposed method? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: This paper only discusses the limitations very lightly. I would like to see more discussion on its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for acknowledging our contributions. We address your comments in the following. > Limitation Since our approach is based on self-supervised training, it relies on the quality of the data used for training. However, data quality is a common issue in this field. Moreover, we anticipate some practical difficulties during the transition from the simulation environment to the real world. It is labor-intensive to collect enough training environment data from the real world. --- Rebuttal 2: Title: Official comment by Reviewer 2nvQ Comment: Thanks for the response. I will stick to my leaning to accept recommendation. --- Rebuttal Comment 2.1: Title: Thanks Comment: Thank you for taking the time to respond.
Summary: The paper proposes the Environmental-context Validated lAtent Program Synthesis framework (EVAPS), a program synthesis model that generates executable programs for robotic programming, evaluated in the Vizdoom environment. It initially obtains candidate programs using other available synthesizers, then performs program repair by executing the candidate program and collecting the resulting partial environmental observations. It outperforms a range of prior works for Vizdoom program synthesis, and demonstrates robustness against observation noise and task complexity. Strengths: - Using the aid of partially observed environments for program synthesis and repair is novel and reasonable. - The proposed framework is sound, and the design choices for utilizing observations and using a graph structure to aggregate environmental and syntactic information flow are convincing. - Extensive experiments. Weaknesses: - The writing of the paper can be improved by reducing repetitive and inconsistent adjectives. - Do the baselines use program repair? Do any of them rely on environmental observations? - The assumption of executing candidate programs makes more sense when a privileged simulation environment is available and when real-time control is not needed. Can the authors provide further justification regarding this? How would you expect to transfer to the real world? - The authors claim that EVAPS is more robust against noise in the conclusion, but there's no comparison with the baselines on this matter in the experiment section. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The major limitation I see in this framework is the assumption of acquiring partial observations and executing program candidates in the environment. These assumptions are not valid in real-world real-time control. More justification on this would be very helpful. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive comments. We address your comments in the following. > Writing of the paper can be improved Thanks for your comments. We will proofread the manuscript carefully. > Do the baselines use program repair? Do any of them rely on environmental observations? Among the baselines, SED [1] uses execution feedback to fix the program. It first produces initial programs using the neural program synthesizer component, then utilizes a neural program debugger to iteratively repair the generated programs. SED also uses environmental observation for training the neural program debugger. However, there exist significant distinctions between SED and our approach: - SED's execution feedback relies on a global perspective, which is only available in some special cases. In contrast, our approach embraces partial observation, which is more achievable in the real world. - SED treats the program as a whole for training, while our approach pays more attention to the execution context. - Our approach establishes a connection between the program execution context and the partially observable environment, and this connection significantly enhances the performance of our approach. > How would you expect to transfer to the real world? The partial observation that our approach relies on is often available in practice, so it is theoretically feasible to apply our approach to the real world. Certainly, we anticipate some practical difficulties during the transition from the simulation environment to the real world. It is labor-intensive to collect enough training environment data from the real world. > The authors claim that EVAPS is more robust against noise in the conclusion, but there's no comparison with the baseline on this matter in the experiment section. To clarify, we are not evaluating whether EVAPS achieves better robustness than other baselines. Instead, we aim to evaluate how EVAPS performs in an environment with noise. 
The result shows that our approach demonstrates relatively good robustness. Reference [1] Gupta K, Christensen P E, Chen X, et al. "Synthesize, execute and debug: Learning to repair for neural program synthesis". In Proceedings of NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. I will keep my score. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for taking the time to respond.
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes an approach for program synthesis in robotic domains where the environment context can provide valuable insight into what the correct program should be. The approach takes candidate programs, executes them to get a trace of observations, and then passes the program and trace through a combination of local and global feature extraction with neural networks. The output is a refined program. The approach is trained and tested on randomly generated Vizdoom programs and is substantially better than the alternatives. They also have an ablation study for the contribution of the two types of feature extraction and a study of the approach's noise tolerance. Strengths: The results are good. The description of the approach is very clear and easy to understand. Some details are missing but overall it's clear. Vizdoom seems like a popular domain and their approach performs much better than the tested alternatives. The introduction does a great job of describing the intuition behind incorporating observation feedback into the program proposal. I don't have a lot to say about the paper. It is mostly a standard type of paper applying a new, well-thought-out approach to an existing domain. The approach seems sound and the results are good. The main insight is incorporating the environmental feedback to improve the program, which is similar to execution-guided synthesis. Weaknesses: The paper doesn't have any glaring weaknesses. As usual, it could be strengthened by applying the approach to another benchmark. My main concern is that the gist of the idea seems very similar to work on execution-guided synthesis. But the approach is shown to be much better than SED on the Vizdoom benchmark, so there must be something more here. # Addressing rebuttal I have read the authors' rebuttal. In particular, they satisfactorily address my main concern about similarity to other work, which I have no further concerns about. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Can SOTA approaches on Karel be applied to Vizdoom? If so, how do they compare? It seems like the intuitive reason this approach works is similar to the idea behind execution-guided synthesis work applied to Karel, so that could be a good comparison to include. Where do the program candidates come from before refinement? I don't recall the paper explaining this. This is a major point of confusion for me. I would prefer to have the description of the Vizdoom domain in the evaluation section rather than the preliminaries. A figure showing an example task would be helpful too. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your meticulous review of our paper and for acknowledging our contribution. Please see below our responses to your comments. --- > Can SOTA approaches on Karel be applied to Vizdoom? > My main concern is that the gist of the idea seems very similar to work on execution-guided synthesis. In general, approaches on the Karel domain can be applied to the Vizdoom domain. We conducted a review of robotic program synthesis works published in the recent 5 years and identified SED [1], proposed in 2020, as the SOTA approach, which we have evaluated in the paper. However, there exist significant distinctions between SED and our approach: - SED's execution feedback relies on a global perspective, which is only available in some special cases. In contrast, our approach embraces partial observation, which is more achievable in the real world. - SED treats the program as a whole for training, while our approach pays more attention to the execution context. - Our approach establishes a connection between the program execution context and the partially observable environment, and this connection significantly enhances the performance of our approach. > Where do the program candidates come from before refinement? In lines 63-64, we have mentioned that "*EVAPS initially obtains candidate programs through existing synthesizers*". Specifically, the candidate programs used in the paper are generated by the same approach used in SED. We will elaborate more in the revision. > I would prefer to have the description of the Vizdoom domain in the evaluation section rather than the preliminary. A figure showing an example task would be helpful too. Thank you for your suggestion, we will revise the manuscript following your suggestion. References [1] Gupta K, Christensen P E, Chen X, et al. "Synthesize, execute and debug: Learning to repair for neural program synthesis". In Proceedings of NeurIPS, 2020. 
--- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response and addressing my comments and concerns. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for taking the time to respond.
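To make the pipeline discussed in this thread concrete, here is a minimal, hypothetical sketch of a synthesize-execute-repair loop of the kind EVAPS and SED-style systems follow. All names (`synthesizer`, `executor`, `repairer`) are illustrative placeholders, not the actual EVAPS implementation:

```python
def synthesize_execute_repair(task_spec, synthesizer, executor, repairer,
                              max_rounds=3):
    """Illustrative candidate-then-repair loop (not the paper's code).

    1. Obtain candidate programs from an existing synthesizer.
    2. Execute each candidate, collecting (partial) environmental
       observations along its execution trace.
    3. Hand each failing program plus its trace to a repair model
       that proposes a refined program.
    """
    candidates = synthesizer(task_spec)
    for _ in range(max_rounds):
        scored = []
        for program in candidates:
            trace, success = executor(program, task_spec)
            if success:
                return program
            scored.append((program, trace))
        # Repair every failing candidate using its own execution trace.
        candidates = [repairer(p, t) for p, t in scored]
    return candidates[0] if candidates else None
```

The key design choice debated in the thread is what `executor` returns: SED assumes a global view of the environment, whereas EVAPS only assumes partial observations along the trace.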
The Best of Both Worlds in Network Population Games: Reaching Consensus and Convergence to Equilibrium
Accept (poster)
Summary: This paper examines the connection between the notions of consensus and equilibrium in a multi-agent system where multiple interacting sub-populations coexist. They argue that consensus can be seen as an intricate component of intra-population stability, whereas equilibrium can be seen as encoding inter-population stability. They show that smooth fictitious play can achieve both consensus and convergence to equilibrium in diverse multi-agent settings. Strengths: Excellent paper that brings together the concepts of consensus and equilibrium. Strong theory, interesting experiments. Weaknesses: 1) Don't forget the conclusion in the final version of the paper. 2) It is clear that in the long run, due to the strong law of large numbers, FP is such that agents within a population will form the same beliefs. Therefore, the point of view taken by the authors is to study a representative agent. What could be interesting is the transient regime where agents' beliefs have not yet converged. There, one could use the central limit theorem and look at the interplay between, on one side, consensus that has not yet been reached, and, on the other, the convergence of populations to equilibria. You could also look at it with 2 different learning rates (one for intra-, one for inter-population learning; I suggest you explore connections with https://arxiv.org/pdf/2205.02330.pdf). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: please comment on 2) above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to provide us with your valuable feedback, and also for recognizing our paper to be an “Excellent paper that brings together the concepts of consensus and equilibrium” with “Strong theory, interesting experiments.” **Comment**: "don't forget the conclusion in the final version of the paper." **Response**: We will be more than happy to add a conclusion section in our final version. **Comment**: “It is clear that in the long run, due to the strong law of large numbers, FP is such that agents within a population will form the same beliefs. Therefore, the point of view taken by the authors is to study a representative agent. I say: what could be interesting is the transient regime where agents beliefs have not yet converged. There, one could use the central limit theorem, and look at the interplay between, on the one side, consensus that has not been reached yet, and the convergence of populations to equilibria. You could also look at it with 2 different learning rates (one for intra, one for inter populations, I suggest you explore connections with this https://arxiv.org/pdf/2205.02330.pdf)” **Response**: This is indeed an excellent and deep direction for follow-up work that will probably require bringing a number of new technical ideas into the mix. Thank you very much for the suggestion! We will make sure to include it in the discussion of interesting future work, as well as add the relevant citation. Thank you again!
Summary: The authors define a network population game, which is a multipartite network game where each partite set is a population; agents in the same population do not interact with each other and only interact with agents from other populations. In this way, each population can be easily abstracted into a "super-agent", and each population has separate beliefs about different neighbor populations. The authors show that when the agents interact according to the network population model and adopt a smooth fictitious play dynamic, the populations' beliefs will gradually have lower variance and the mean belief will reach a quantal response equilibrium (QRE) in both weighted zero-sum games and exact potential network games. Strengths: 1. The intention of this paper is good: it tries to address both the consensus and the convergence of multi-agent learning systems 2. The overall flow of this paper is easy to follow, and the authors conduct both theoretical and numerical studies Weaknesses: 1. The justification for using the network population game is not sufficient; it is unclear when this specific game model fits real-world problems, especially as related to belief updates. For now, the purpose of this assumption seems to be making the convergence and consensus analysis much easier. 2. The reason for using the smooth fictitious play dynamics is also not sufficiently justified. The reason for including the \nu penalty term in the utility function should be provided (e.g., saying this is an entropy-based cost and why this is reasonable), and whether it is necessary for the convergence and consensus study should be elaborated. 3. No elaboration on how the \epsilon term and the A_ij influence the consensus and convergence. 4. Lack of discussion on the limitations. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. What is the intuition behind the perturbed payoff in Eqn (5)? Is this cost term a necessity for consensus and convergence? 2. 
Line 203 says "Agents maintain separate beliefs about different neighbor populations", where is this previously justified and why is this reasonable? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors did not discuss the limitations of this work in the paper. I'm not sure if and how the population size of each population will influence the consensus and equilibrium outcome and whether there will be any fairness issues based on this. For now, I will not flag ethics review issues since I can't identify or exclude them. It will be nice for the authors to add discussions on the limitations, at least in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to provide us with your valuable feedback. **Comment**: “The justification for using the network population game is not sufficient…easier to study.” **Response**: Network population games model scenarios which are characterized by the presence of multiple interacting populations. A good survey is [1]. This survey has been cited nearly 3000 times, indicating the wide applicability of and interest in related questions. Arguably, one of the simplest and most natural settings is the interaction between two different types of agents (e.g. male and female), which gives rise to bi-matrix population games corresponding to single-edge networks in our case. We gave a brief description of some applications in Footnote 1 in the paper; however, we will be more than happy to expand on it. Overall, you can think of nodes as types, and each agent needs to formulate beliefs about how other types of agents will act when they interact with them in the world. **Comment:** ”The reason for using the smooth fictitious play dynamics is also not sufficiently justified… elaborated.” and “What is the intuition behind the perturbed payoff…convergence?” **Response**: We chose smooth fictitious play for two reasons: (i) it is amongst the most well-known and commonly studied models in AI and game theory, and (ii) it is a belief-based learning model and naturally suits the investigation of consensus formation (consensus can be intuitively understood as the convergence of agents’ beliefs). The term $v(x_i(k))$ in Equation 5 is standard for smooth fictitious play ([1] and the references therein); it represents the perturbation on the expected payoffs and can also be understood as the control cost of implementing a strategy. A typical form is an entropy-based one, $v(x_i(k)) = -\frac{1}{\beta}\sum_{s_i \in S_i} x_{is_i}(k)\ln (x_{is_i}(k))$. 
The rationale is that by adopting this form, an agent’s choice in response to his/her beliefs becomes probabilistic, such that better responses are more likely than worse responses, but the best ones are not played with absolute certainty; this aligns with humans’ bounded rationality and error-prone decision-making [2]. We will elaborate more on this in our revision. Importantly, a specific form of $v(x_i(k))$ is NOT necessary for our study. We wrote in Line 192-193 that “all our results readily generalize to any function $v$ satisfying the above two standard assumptions.” We explained the two standard assumptions in Line 187-188. **Comment**: “No elaboration on how the \epsilon term and the A_ij influence the consensus and convergence.” **Response**: All our consensus and convergence results hold given a positive value of $\epsilon$. The payoff matrix $A_{ij}$ of the 2-player subgames has NO effect on consensus formation. We showed this formally in Theorem 1, and discussed this in Line 226 (where we wrote “Note that the above theorem makes no assumption about the 2-player subgames agents play”). Regarding the effect of the payoff matrices $A_{ij}$ on convergence to equilibrium, we defined in Equation 13 $A_{ij}$ that satisfies the weighted zero-sum property, and established the convergence in weighted zero-sum network (population) games in Theorem 2 and Theorem 4. Moreover, we defined in line 296 $A_{ij}$ that satisfies the exact potential property, and established the convergence in star-structure exact potential network (population) games in Theorem 3 and Theorem 5. In the Introduction, we summarized the effects of $A_{ij}$ on convergence to equilibrium in Line 109-116. **Comment**: “Lack of discussion on the limitations” and “The authors did not discuss the limitations … at least in the appendix.” **Response**: Thank you! 
We will make sure to expand on this more intuitively; however, our theorems discuss precise conditions on the dynamics and games under which they apply. Expanding these results even further is an interesting direction for future work. In terms of the population size, we wrote in Line 77 and Line 158 that this paper considers “a population (continuum) of agents”. This assumption allows for better analytical tractability, and is standard in studies of population games [3] and mean-field games [4]. Empirically, we observed that if the number of agents is sufficiently large (e.g., more than hundreds) for each population, our theoretical findings still hold. We will discuss the above limitations in our revision. **Comment**: “Line 203 says… why is this reasonable?” **Response**: We mentioned this in Line 174, where we wrote “Agent $k$ maintains a weight $\kappa^i_{js_j}(k)$ for each opponent strategy $s_j \in S_j$ of each population $j\in V_i$.” As agents maintain $\kappa$ for each population and form their beliefs based on $\kappa$ (Equation 3), they naturally maintain separate beliefs about different populations. We gave a real-world example for separate beliefs in Footnote 4, where we wrote “people form beliefs about the behaviours of taxi drivers vs non-professional drivers after observing the numerous driving behaviours on the road.” Thank you again for your constructive comments! We hope that you will consider revising the score if we have addressed your concerns satisfactorily. [1] J. Hofbauer, E. Hopkins. Learning in perturbed asymmetric games. Games Econ. Behav., 2005 [2] R. McKelvey, T. Palfrey. Quantal response equilibria for normal form games. Games Econ. Behav., 1995 [3] W. Sandholm. Population games and evolutionary dynamics. 2010 [4] J. Lasry and P. Lions. Mean field games. Japanese journal of mathematics, 2007 --- Rebuttal Comment 1.1: Title: More questions on the multi-partite network structure Comment: Thank you for the rebuttal. 
I'm still not fully convinced that the multi-partite graph is a natural assumption; it reads like agents in the same population do not interact with each other. Could you please elaborate on whether your results can generalize to other network structures? --- Reply to Comment 1.1.1: Comment: Thank you for your question. We will answer in two ways. First, we will describe several examples of settings where agents of the same population do not interact with each other. Second, we will describe a reduction that allows us to model such intra-population interaction using our current model. There are numerous examples of multi-population interaction where the related interaction is captured by a normal-form game that is asymmetric and requires exactly one individual of each population to play the game. The prototypical example is men-women interaction in a game like Battle of the Sexes (or variants thereof). Similarly, we can think of an ecosystem with a graph of predator/prey interactions. The fact that there is no self-interaction within a population would capture that there is no in-species cannibalism. A more human-centric example would capture battle tactics in a multi-army combat setting: combatants of the same army do not face each other. A digital analogue of this example would be an e-sports competition in a multi-player game where each of, say, 5 agents competes against the others in a winner-take-all match. Each agent is produced by a different company (i.e. DeepMind, OpenAI, etc.), and the way these agents work, e.g. in the Double Oracle PSRO [1] literature, is that they encode a distribution over different NN agents, each with different capabilities. So actually, each digital agent is best thought of as a large mixture of distinct agents. Now, we will point out how our current setting actually allows for interactions between agents of a single population. 
Take our current setting and for each node/population i create a copy i' that is connected to the same set of neighbor populations as the original node i, with exactly the same set of games, and whose initial state is identical to that of node i. Now, create a symmetric two-player game between nodes i and i'. This will allow us to capture the intra-population interaction. By the symmetry of the setting, the initial symmetry between node i and node i' will be preserved for all time t>0, and this "mirror" population node allows us to capture such intra-population interaction within our current setting. We are happy to expand upon such ideas, and in general it is known that such learning in network games can be adapted to allow for self-loops without significant changes in the underlying analysis (see e.g. [2]). [1] Lanctot et al. "A unified game-theoretic approach to multiagent reinforcement learning." Advances in Neural Information Processing Systems 30 (2017). [2] Boone et al. "From Darwin to Poincaré and von Neumann: Recurrence and cycles in evolutionary and algorithmic game theory." Web and Internet Economics: 15th International Conference, WINE 2019, New York, NY, USA, December 10–12, 2019, Proceedings 15. Springer International Publishing, 2019.
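The mirror-node reduction described in this reply can be sketched programmatically. The following is an illustrative sketch only, not code from the paper; the data layout (a set of directed edge pairs and a dict mapping each edge to its payoff matrix) and the placeholder intra-population matrix `A_intra` are assumptions for the example:

```python
def add_mirror_population(edges, games, node):
    """Duplicate `node` as a mirror node with the same neighbors and
    games, then add a symmetric two-player game between the node and
    its mirror, capturing intra-population interaction.

    `edges` is a set of (i, j) pairs; `games` maps an edge to its
    payoff matrix. Names and layout are hypothetical.
    """
    mirror = node + "'"
    new_edges = set(edges)
    new_games = dict(games)
    for (i, j) in edges:
        if i == node:                      # copy outgoing edges
            new_edges.add((mirror, j))
            new_games[(mirror, j)] = games[(i, j)]
        elif j == node:                    # copy incoming edges
            new_edges.add((i, mirror))
            new_games[(i, mirror)] = games[(i, j)]
    # Symmetric subgame between the node and its mirror; the symmetry
    # argument requires A_intra = A_intra^T (placeholder matrix here).
    A_intra = [[1, 0], [0, 1]]
    new_edges.add((node, mirror))
    new_games[(node, mirror)] = A_intra
    return new_edges, new_games
```

With identical initial states on the node and its mirror, the construction preserves their symmetry over time, which is the crux of the reduction.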
Summary: This paper combines reaching consensus and convergence to equilibrium for network population games. Consider a network whose vertices correspond to a population. Edges between vertices (or populations) represent two-player sub-games between each pair of agents in these neighboring populations. The authors specifically focus on smoothed fictitious play for these two-player sub-games while agents seek to reach consensus in their beliefs about agents' policies in neighboring populations. In that sense, the approach is analogous to (or motivated by) the anonymous random matching interpretation of fictitious play dynamics to justify the myopic nature of the agents [Fudenberg and Kreps, Learning mixed equilibria. Games and Economic Behavior, 1993]. In particular, consider (large) populations of agents in each player role. Each period, all agents are matched to play the game and are told only to play in their own match. Agents are unlikely to play their current opponent again for a long time, even unlikely to play anyone who played anyone who played her. So, if the population size is large enough compared to the discount factor, it is not worth sacrificing current payoff to influence an opponent’s future play. In these populations, agents share their belief. The consensus dynamics presented serve this purpose. Therefore, the results are expected to hold even though I have not checked the proofs in detail. Strengths: - Convergence of SFP dynamics in weighted zero-sum network games and exact potential network games with star structure. Weaknesses: - There is no motivating example for the network population game formulation. - Results for potential network games are presented only for star structure. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you provide a motivating example justifying the network population model in practice (specifically the 2-player sub-games)? 
- What is the reason to restrict the network structure to star structure only in potential network games? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have not identified any discussion about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and for your constructive comments on our paper! **Comment**: “There is no motivating example for the network population game formulation” and “Can you provide a motivating example justifying the network population model in practice (specifically the 2-player sub-games)?” **Response**: Network population games are widely studied with numerous well-known variants. A good survey is [1]. This survey has been cited nearly 3000 times, indicating the wide applicability of and interest in related questions. Arguably, one of the simplest and most natural settings is the interaction between two different types of agents (e.g. male and female), which gives rise to bi-matrix population games corresponding to single-edge networks in our case. Moreover, the survey shows a lot of interest in interactions where each type of agent has only a handful of options (e.g. Battle of the Sexes, Prisoner's Dilemma, etc.). We briefly described some applications in Footnote 1 in the paper. We sincerely appreciate your feedback, and we will gladly elaborate more on these and further examples in the introduction in our revision. **Comment**: “Results for potential network games are presented only for star structure” and “What is the reason to restrict the network structure to star structure only in potential network games?” **Response**: Our proof of the Lyapunov function in the case of potential games depends on the cancellation of several terms, which currently utilises the assumption of a star structure. We agree that it is a very interesting direction to extend our results further. Nevertheless, we want to point out that several prior works in learning in games have utilised similar assumptions such as star network structure, e.g., [2-4]. Thank you again for your positive assessment and constructive comments on our paper! [1] Gyorgy Szabo and Gabor Fath. Evolutionary games on graphs. Physics Reports, 2007. 
[2] Panageas, Ioannis, and Georgios Piliouras. "Average case performance of replicator dynamics in potential games via computing regions of attraction." Proceedings of the 2016 ACM Conference on Economics and Computation. 2016. [3] Sai Ganesh Nagarajan, et al. From chaos to order: Symmetry and conservation laws in game dynamics. In International Conference on Machine Learning, pages 7186–7196. PMLR, 2020. [4] Sela, Aner. "Fictitious play in ‘one-against-all’ multi-player games." Economic Theory 14.3 (1999): 635-651. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. My concerns have not been addressed properly. My current understanding is that the two-population subgame structure is used for mathematical tractability. I am paraphrasing my questions for clarity: - Clearly network population games are popular. I am asking for some tangible motivating examples for the 2-player subgame structure. More explicitly, can the authors provide some tangible examples motivating the reward function defined in Eq. (1) as a summation of two-population games? For example, related to the political opinion clusters from Footnote 1, what does Eq. (1) imply? - Smooth fictitious play is known to converge to equilibrium in exact potential games with finitely many players without any condition on the interconnections among them, e.g., see Section 4.2 in [Hofbauer and Sandholm, On the global convergence of stochastic fictitious play, Econometrica 2002]. If the population is acting identically to a single agent, what is the restriction preventing us from addressing network structures beyond the star? I have an additional question: Do the results generalize to the cases where agents follow the classical fictitious play? Is there a particular reason to use the smoothed version apart from mathematical tractability? --- Reply to Comment 1.1.1: Title: Reply to the Comment by Reviewer 2uya Comment: Thank you for your questions. 
**Reply to your first question**: We will describe two families of examples where the 2-player/population subgame structure emerges. The first is actually an arbitrary congestion game with linear costs. Such settings, although perhaps not obvious at first glance, are actually reducible to 2-player subgame structures. The two-agent interaction in entry $(i,j)$ captures the extra cost that each agent causes the other when the first agent chooses path $i$ and the second agent chooses path $j$. Since we have assumed that the costs increase linearly with the number of agents, we can compute the total additive effect by merely summing up the costs over all such two-agent interactions. Another example, but now with adversarial incentives, is that of tournament competition, where every agent has to compete against every other agent and wants to maximize the number of head-to-head matches they win. This is standard, for example, in chess. Now, if we want a population version of this game, imagine an international chess tournament where every node/player is actually a nation represented by a team of players, and players get matched randomly. Finally, if one wants a large-population version of the above, we can consider a similar version of the above chess competition but now between AI companies such as DeepMind, OpenAI, etc., each of which submits a single PSRO [1] type mixture of NN agents. For the case of opinion formation, imagine a cluster of nations that has to choose between two competing political philosophies/religions/coalitions, etc., where the safety of a nation depends on how many of its neighbors share the same attitudes. **Reply to your second question**: The model explored by Hofbauer and Sandholm is simpler than ours. Critically, the state space of our model includes both choice distributions $x$ as well as beliefs $\mu$. In contrast, the Hofbauer and Sandholm model has only choice distributions. 
Hence, arguments in this previous paper do not translate to ours and cannot say anything about the evolution of beliefs, which is a key aspect of our model. As we see in the proof of our convergence result, the Lyapunov function (Equation 63 in the appendix) includes "mixed" terms that combine both $x$ and $\mu$ terms. Such complexities are not needed in the Hofbauer and Sandholm model. **Reply to your third question**: Studying different learning dynamics is a very interesting direction for future work. In this paper we focus on SFP, and questions about other dynamics, although interesting, are beyond the scope of the current work. Furthermore, we believe that in our setting FP would actually not be a good choice, as in FP dynamics all agents play pure strategies, whereas our goal in this paper is to study the evolution of beliefs, which necessitates randomization at the level of the individuals. [1] Lanctot, Marc, et al. "A unified game-theoretic approach to multiagent reinforcement learning." Advances in neural information processing systems 30 (2017).
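As a quick illustration of the pairwise-cost reduction for linear congestion games described in our first reply, the following toy computation (our own illustrative sketch; the resources, choices, and cost coefficients are made up, not from the paper) checks that the direct total cost equals a per-agent solo cost plus a sum over two-agent interactions:

```python
from itertools import combinations

# Toy linear congestion game (illustrative numbers, not from the paper):
# using resource r costs c[r] * load(r) per agent on it.
c = {"A": 2.0, "B": 1.0, "C": 3.0}        # linear cost coefficients
choices = ["A", "A", "B", "C", "A", "B"]  # each agent picks one resource
load = {r: choices.count(r) for r in c}

# Direct total cost: every agent on resource r pays c[r] * load(r).
direct = sum(c[r] * load[r] for r in choices)

# Pairwise decomposition: a solo cost per agent, plus a 2-agent interaction
# term for every pair sharing a resource (each agent inflicts c[r] extra
# cost on the other agent of the pair).
solo = sum(c[r] for r in choices)
pairwise = 0.0
for i, j in combinations(range(len(choices)), 2):
    if choices[i] == choices[j]:
        pairwise += 2 * c[choices[i]]

assert abs(direct - (solo + pairwise)) < 1e-9  # 25.0 == 11.0 + 14.0
```

Because the total cost decomposes exactly into these two-agent terms, the game's reward structure fits the summation form of Eq. (1).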
Summary: This paper examines the connection between the notions of consensus and equilibrium in a multi-agent system where multiple interacting sub-populations coexist, and it aims to answer the following two central research questions in the network (population) games scenario: [1] Are there natural multi-agent learning models that can achieve the best of both worlds—reaching consensus as well as convergence to equilibrium—in diverse settings? [2] How does the consensus formation process affect equilibrium selection in multi-agent learning? The authors argue that consensus and equilibrium, the fundamental notions of these two fields, can both be understood as stability concepts in a multi-agent system where multiple interacting sub-populations co-exist. In particular, consensus can be seen as an intricate component of intra-population stability, whereas equilibrium can be seen as encoding inter-population stability. The authors show that the SFP (smooth fictitious play) algorithm can achieve consensus as well as convergence to equilibrium in a wide range of network population games (and unlike previous literature, here a coordinative reward structure is not a prerequisite for achieving consensus). They also empirically show that the consensus formation process plays a crucial role in the thorny problem of equilibrium selection in multi-agent learning (e.g., starting from the same initial mean belief, a larger variance of initial beliefs results in a more desirable equilibrium). Experiments were conducted for the scenario of Equilibrium Selection in Two-Population Stag Hunt Games. Strengths: 1. The paper is organized and presented well and clearly; I found the manuscript reader friendly. 2. This work extends the existing literature in multiple directions and presents a couple of nontrivial novel theoretical results, which is quite beneficial to the research community. 
E.g., SFP in network (population) games had not been explored prior to this work; it unifies consensus formation and learning in multi-agent games, proves consensus without assuming a coordinative reward structure, etc. 3. There are quite a few helpful/valuable elaborations/explanations comparing this work with the relevant literature and discussing the differences and advantages. Weaknesses: 1. I'd like to see some discussion of future research directions, and how this work could inspire/benefit other future research. 2. Regarding Figure 1 about the impact of the variance of initial beliefs, it seems to be a plot of just one example setting, and the empirical conclusion is not very convincing to me; I'd like to see some more clarification/elaboration/justification. For example, consider a scenario where the starting mean belief is already the optimal value of the final desired steady state/equilibrium. If the starting variance is 0 (the extreme case of small variance), then convergence to the optimal result is already achieved, which is obviously better than a larger initial variance with the same starting mean belief. This is a counterexample to the paper's empirical conclusion that a larger initial variance (given the same starting mean belief) is preferred. I would like to see either a rigorous mathematical proof of this conclusion, or very comprehensive empirical studies, before such conclusions are drawn. 3. In page 4, it was mentioned that "This paper formally shows that the probability distribution over initial conditions can eventually degenerate to a point mass, and leveraging on this, presents a novel technique for proving the convergence of learning dynamics." It would be good to see some more elaboration on how this "novel technique" could be used for proving the convergence of learning dynamics in other relevant problems. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see my questions/comments/suggestions in above section when talking about weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you! We sincerely appreciate your diligent and thoughtful comments on our paper. **Comment**: “I'd like to see some discussions about the future research directions, and how this work could inspire/benefit other future research.” **Response**: We appreciate your interest in future directions of our work! We will be more than happy to add a section at the end of the paper where we expand on them. **Comment**: “Regarding the figure 1 about impacts of variances of initial beliefs, it seems to be just plot of one example setting, and the empirical conclusion is not very convincing to me and I'd like to see some more clarification/elaboration/justification. ... before drawing the conclusions on this.” **Response**: Our main argument regarding our stag-hunt game experiments is that a larger initial variance can promote convergence to the payoff dominant equilibrium (S,S). However, this does not imply “a larger initial variance (given same starting mean belief) is preferred”. While Figure 1 presents two example belief distributions with the same initial mean belief, Figure 2 provides additional evidence covering *ALL* possible initial mean beliefs under the same game settings. In Figure 2, we demonstrated that increasing the variance of initial beliefs from 0 to 0.02, 0.05, and 0.1 expands the region of attraction of the payoff dominant equilibrium (S,S), allowing a wider range of initial mean beliefs to approach it. Based on the enlarged region of attraction, we concluded that a larger initial variance can promote convergence to the equilibrium (S,S). Importantly, this does not mean that a larger initial variance is always preferred. We will make sure to add further elaboration on our empirical results to make the above points more precise in our revision. 
**Comment**: “In page 4, … how this "novel technique" could be used for proving the convergence of learning dynamics in other relevant problems.” **Response**: By “novel technique”, we meant our approach for establishing the convergence result of smooth fictitious play in network population games. The novelty of our approach is largely characterized by two key steps: (i) we proved the variance of the belief distribution tends to zero in the limit, and (ii) we leveraged the zero-variance (i.e. consensus) property to extend the convergence result in classic network games to network population games. Our approach highlights the zero-variance/consensus property as a valuable tool for establishing the convergence of learning under population settings. For future research that studies learning dynamics under population settings, one can find inspiration in our approach by initially verifying the zero-variance/consensus property and subsequently leveraging this property to establish the convergence result. We will incorporate the above points into our revision. Thank you again for your positive feedback and constructive comments! --- Rebuttal Comment 1.1: Comment: I've read the authors' rebuttal (which provides some helpful clarifications) as well as all the reviews from other reviewers. I'd keep my rating unchanged as 6 with a weak acceptance suggestion, taking all of them into account. I think the manuscript might be acceptable for publication here, but I won't push hard if other reviewers have strong objections. --- Reply to Comment 1.1.1: Comment: Thank you for your response and your continued support.
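As a toy illustration of step (i) above (our own sketch, not the model from the paper): when heterogeneous agents all update their beliefs toward the same observed population play with step size $1/t$, the deviations between any two beliefs contract by a factor $t/(t+1)$ per round, so the belief variance tends to zero deterministically.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents = 500
beliefs = rng.uniform(0.0, 1.0, n_agents)  # heterogeneous initial beliefs
var0 = beliefs.var()

def smoothed_best_response(b, temperature=0.1):
    # Logit choice probability of action 1 in a toy 2-action coordination
    # game, given belief b that the opponent plays action 1.
    return 1.0 / (1.0 + np.exp(-(2.0 * b - 1.0) / temperature))

for t in range(1, 2001):
    x = smoothed_best_response(beliefs).mean()  # commonly observed play
    beliefs += (x - beliefs) / (t + 1)          # fictitious-play update

# Since all agents update toward the same common observation, the variance
# shrinks by (t/(t+1))**2 each round and converges to zero (consensus).
assert beliefs.var() < 1e-6 * var0
```

After 2000 rounds the variance has contracted by a factor of roughly $(1/2001)^2$; this is exactly the zero-variance/consensus property that step (ii) then leverages.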
NeurIPS_2023_submissions_huggingface
2023
Principled Weight Initialisation for Input-Convex Neural Networks
Accept (poster)
Summary: The paper discusses a principled weight initialization strategy for Input-Convex Neural Networks (ICNNs). The authors propose a new theory that generalizes signal propagation theory to include weights without zero mean, and derive a principled initialization strategy for ICNNs from this theory. They demonstrate the effectiveness of their initialization strategy through empirical experiments and apply ICNNs in a real-world drug-discovery setting. Strengths: 1. The paper proposes a new theory that generalizes signal propagation theory to include weights without zero mean, which is a significant contribution to the field. 2. The authors derive a principled initialization strategy for ICNNs from their new theory, which improves learning and generalization in ICNNs. Weaknesses: 1. The proposed approach cannot be applied to networks with skip connections, limiting its usage in real-world models. 2. The experimental section is not comprehensive and there is a lack of ablation study Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your feedback. We hope to address all of your points in detail soon! For now, we would like to ask for some further clarification on what ablation studies are missing. This would help us to better address your concerns in our detailed response. Thanks in advance! --- We thank the reviewer for their positive and encouraging feedback. We hope that we can shed light on the mentioned weaknesses. ## Skip connections Our proposed initialisation scheme was developed to enable faster training in ICNNs. Because skip-connections have a similar purpose, we would argue that our initialisation can be a replacement for skip-connections. Prior to our work, ICNNs could not be trained without skip-connections. With our principled initialisation, we make it possible to train ICNNs without skip-connections. Furthermore, our initialisation appears to enable better results than ICNNs with skip-connections, which have more parameters. There is also a strong trend to replace skip-connections in regular networks. For example, in (Zhang et al., 2022) it is shown that regular deep networks can be trained to the same performance as ResNets by carefully controlling signal propagation. Also, note that skip-connections in ICNNs usually connect layers with the input to the network and not block-wise, as in residual networks. Therefore, ICNNs used in prior work have a strong tendency towards dynamics that are similar to single-layer networks and suffer from the feature reuse problem (Zagoruyko et al., 2016). The ICNNs that we trained in this work have to develop feature hierarchies and are therefore a step forward for researchers working with ICNNs. Furthermore, we emphasise that our method can be applied to networks with skip-connections in practice. It is just the theory that is difficult to derive because of the dependency between the inputs and outputs of the residual branch. ### Additional References - Zagoruyko, S., & Komodakis, N. (2016). 
Wide Residual Networks. Proceedings of the British Machine Vision Conference 2016, 87.1-87.12. https://doi.org/10.5244/C.30.87 - Zhang, G., Botev, A., & Martens, J. (2022). Deep Learning without Shortcuts: Shaping the Kernel with Tailored Rectifiers. International Conference on Learning Representations 10. https://openreview.net/forum?id=U0k7XNTiFEq ## Ablation study First of all, we would like to point out that our main experiments are already an ablation study. We start from ICNNs with skip-connections (the most common form of ICNN at the time of writing) and establish the baseline performance. The first ablation is obtained by removing the skip-connections, which results in networks that become practically impossible to train. Finally, we add our principled initialisation to the networks without skip-connections and find that the initialisation improves not only the networks without skip-connections, but also the networks with skip-connections. Unfortunately, we were unable to deduce which ablations you would have liked to see in addition. We included learning curves for different choices of $\rho_*$ as suggested by reviewer f5Cq (figure 2c in the rebuttal PDF). We also investigated what happens if we choose to initialise the biases with $\sigma_b > 0$ (figure 2a in the rebuttal PDF). Finally, we added an experiment to see how important the effect of the bias shift is by only using eq. (8) for initialising the bias parameters. The weight parameters were initialised using an initialisation scheme for regular networks. The results of this experiment can be found in figure 2b of the rebuttal PDF. As suspected, only initialising the bias parameters already leads to good performance, but results tend to be better/more consistent when including the weights in the initialisation.
Summary: The proposed method aims to solve the initialisation problem in an Input Convex Neural Network (ICNN), where the weights are required to be non-negative. The commonly applied approach, setting the negative entries sampled from a zero-mean Gaussian to zero, alters the desired mean. By analysing the signal propagation in the neural networks, the authors are able to sample the weights with the expected mean and variance. The method is evaluated on several tasks and compared with the ICNN without the proposed initialisation method, with some promising results. Strengths: 1. The problem solved in the submission is important in the ICNN setting. 2. The derivation of the method is concrete. Weaknesses: Some details are not well explained and are sometimes confusing. 1. $\rho_*$ is defined as $\frac{1}{\sigma^2_*}Cov[s_1^-, s_2^-]$ in line 178 and the authors claim $\rho_*$ is independent of $\frac{1}{\sigma^2}$ in line 184. Then $\rho_*$ is set to $\frac{1}{2}$ heuristically. The experiments are not carefully designed to support the submission and I am not sure some experiments are sufficiently conducted. 1. In Figure 2, the loss of ICNN does not change during the training process in the MNIST setting, but in CIFAR10 and CIFAR100 there is no such phenomenon. 2. In Figure 3, drawing the conclusion that with the proposed initialisation the training dynamics are more stable is not concrete via eyeball comparison. Some quantities are needed. 3. According to my experience of training non-convex NNs on CIFAR10, the average test loss does not really show a dramatic increase after some training iterations, yet here the non-convex NN has the highest test loss and accuracy, which is unusual. The rest of the learning curves are more reasonable, with low test loss and correspondingly high accuracy. 4. It can be noticed that with skip connections the performance of ICNN improves; I think applying the proposed initialisation to ICNN with skip connections would give good support to the submission. 
Additional question, which does not affect my score. Since in all the settings in the submission, the non-convex NN has the best performance compared with the other ICNN-based methods, is there any scenario for ICNN, or why is ICNN essential in the first place? Notations: In some equations the $s_i$, $s_j$ are used interchangeably with $s_1$, $s_2$, for example, the one below line 172 and the one below line 175. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In Figure 2, Non-convex and ICNN have lower training loss than other baselines at iteration 0, which, I believe, reflects the initialisation. Can the authors explain why this happens? 2. Can we treat $\rho_*$ as a hyperparameter? If so, how does it affect training and generalisation of the ICNN models? These two questions are not discussed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See the sections above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your feedback. We hope to address all of your points in detail soon! For now, we would like to ask for some further clarification on what quantities the reviewer would like to see in the comparison. This would help us to better address your concerns in our detailed response. Thanks in advance! --- We thank the reviewer for their critical feedback and apologise for details that were confusing or unclear. We try to clarify these details better in the answers below and the final version. We hope that you will take the time to consider our rebuttal and are willing to update your score after possible further discussions. ## Weaknesses Concerning the unclarity: 1. We agree that the statement might be confusing, but it is inherent to the definition of correlation. Because the correlation is defined in terms of the variance, they cannot be independent. Our statement about eq. (11) and (12) being independent aims to make clear that once we have computed the correlation, we do not explicitly need the variance to derive our initialisation. We reformulated line 184 to avoid confusion. The choice of setting $\rho_* = \frac{1}{2}$ can indeed be considered a heuristic. Our main motivation was to get a closed-form result for $\arccos(\rho_*)$, but other values are possible as well (more details below). We will add a section in the appendix with these details in the final revision. Concerning the experiments: 1. Note that the loss for ICNN does actually go down during the first hundred update steps, but we agree that this is barely visible in the plot. The CIFAR-10 and CIFAR-100 plots for ICNNs without skip-connections or initialisation show the same behaviour: the training loss goes down initially, but stops improving at some point. We suspect that this happens because the network attempts to reduce its activation strength by pushing weights and biases down. 
In the example of the MNIST models, the biases in the first unconstrained layer soon become all negative. As a result, the ReLU activations, which are the inputs to the first constrained layer, are all zero and the network can not learn a function from the inputs. The skip-connections alleviate these issues by bypassing these dead layers. Our initialisation simply provides a better starting point where activations do not need to be reduced to improve the error early in training. We will try to include a more elaborate discussion in the final version of the paper. 2. We could not find the point where we conclude that training dynamics are more stable in the context of Figure 3. Figure 3 aims to show that our initialisation does not only affect the empirical error but also translates to generalisation performance. We agree that there is little to no difference for the MNIST experiments and we also acknowledge that in our manuscript (line 283). However, we would argue that for the CIFAR10 experiments, our method (ICNN + init) clearly leads to faster learning compared to other ICNNs. The quantitative accuracies for the validation runs can be found in Table 3 in the appendix. 3. The increase in the test loss in Figure 3 can be explained by overfitting on the training and/or validation data. Note that the optimal models were chosen based on validation accuracy, as indicated by Table 3 in the appendix. Because the loss is only a proxy for accuracy, the optimal model might indeed be in an overfitting regime in terms of test loss. By revisiting Table 3, we realised that Figure 3 does not depict the early stopping. We updated the figure for the final version (Figure 3 in the rebuttal PDF). 4. One of our contributions is to show that ICNNs do not need skip-connections if they are initialised in a principled manner. This indicates that skip-connections mainly help in making ICNNs trainable. 
We did (accidentally) run the Tox21 experiments with both skip-connections and our initialisation and did not observe any substantial improvements over using only our initialisation. However, the skip-connections do effectively modify the signal propagation in a non-trivial way. It would require additional analysis to obtain a principled approach for initialising networks with skip-connections. We also refer to a similar reply we gave to reviewer Ryn9 with more references in this context. Concerning your additional questions: - The performance of ICNNs should not be compared directly with regular networks. As indicated by reviewer kLTe, ICNNs have theoretically less capacity. ICNNs have the unique property that they are convex. This can be useful or is even necessary in various settings (e.g. energy-based models, optimal transport, level-set exploration, …) that we describe in our related work section. - The notation on lines 172 and 175 is on purpose. We aim to compute the (co)variance for arbitrary $s_i$ and/or $s_j$ on the left-hand side. Under our assumptions, the (co)variance turns out to be independent of the index $i$ or $j$. We explicitly use $s_1$ and $s_2$ on the right-hand side to emphasise this independence. ## Questions 1. The ICNNs have only non-negative weights. Due to the ReLU activation function, also activations will be positive. Computing these dot products typically leads to numerically large values. This is also why the loss will typically be large. Our initialisation counters these effects mainly by initialising the bias parameters with negative values. 2. $\rho_*$ can indeed be treated as a hyper-parameter. Because of the tendency of the correlation to grow as the network grows deeper, lower $\rho_*$ values will typically make it possible to train deeper networks. We ran ablation experiments on the choice for $\rho_*$ to obtain figure 2b in the rebuttal PDF. 
As expected, lower values for $\rho_*$ can enable deeper networks, but overall, performance is very similar. We will include these results in the appendix of the final version.
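As a side note on implementation: one generic way to draw strictly positive weights with a prescribed non-zero mean and variance is a log-normal parameterisation. This is an illustrative choice for the sketch below and not necessarily the distribution used in the paper; the function name and target moments are ours.

```python
import numpy as np

def lognormal_nonneg_init(mu_w, var_w, size, rng):
    """Sample strictly positive weights with E[w] = mu_w, Var[w] = var_w.

    For w = exp(z) with z ~ N(m, s^2):
        E[w]   = exp(m + s^2 / 2)
        Var[w] = (exp(s^2) - 1) * exp(2m + s^2)
    Solving these two equations for (m, s^2) given the targets:
    """
    s2 = np.log(1.0 + var_w / mu_w**2)
    m = np.log(mu_w) - 0.5 * s2
    return rng.lognormal(mean=m, sigma=np.sqrt(s2), size=size)

rng = np.random.default_rng(0)
w = lognormal_nonneg_init(mu_w=0.2, var_w=0.01, size=1_000_000, rng=rng)

assert np.all(w > 0)                  # non-negativity holds by construction
assert abs(w.mean() - 0.2) < 1e-3     # empirical mean matches the target
assert abs(w.var() - 0.01) < 1e-3     # empirical variance matches the target
```

Unlike clipping a zero-mean Gaussian at zero, this construction hits the target first and second moments exactly in expectation, which is what a principled non-zero-mean initialisation requires.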
Summary: This paper investigates the initialization for input-convex neural networks. They generalize the signal propagation theory by removing the assumption of a centred weight distribution. The experiments show that the proposed initialization method is effective on a set of datasets. Strengths: 1 This paper generalizes signal propagation theory by removing the assumption that weights are sampled from a centred distribution. This generalization is necessary for ICNNs. 2 The experimental results are solid and convincing. The experiment on the real-world drug discovery task is nice. Weaknesses: 1 The theoretical contribution of this paper is limited. The theoretical results are derived following the framework proposed in [1]. 2 It seems that the authors only consider the forward propagation of an initial ICNN. However, the backward propagation and the output diversity are also crucial for the initialization. [1] Samuel S Schoenholz, Justin Gilmer, Surya Ganguli, and Jascha Sohl-Dickstein. Deep information propagation. In International Conference on Learning Representations, 2017 Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have adequately addressed the limitations, and there is no potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Theoretical contribution We do not entirely understand why our generalisation of signal propagation theory and the derivation of the initialisation for ICNNs would be considered a _”limited”_ theoretical contribution. The initialisation for ICNNs is indeed _limited_ to ICNNs. However, the generalised signal propagation theory could be useful outside of the scope of ICNNs as well. For example, it could be used to study certain effects in regular networks due to the initial weights of a network not having exactly zero mean in practice. After all, our theory shows that feature correlation affects signal propagation as soon as the weights do not have zero mean. As mentioned in the introduction and section 2 of our manuscript, we build on the framework presented in the thesis of Neal (1995), which was also the basis for the work from Poole et al. (2016). The main contribution to signal propagation from Poole et al. (2016) was to include the correlation between samples in the analysis. The work from Schoenholz et al. (2017) extends the propagation in (Poole et al., 2016) by including depth scales, dropout and the backward pass. Further work extended the propagation for resnets (Yang et al., 2017) and convolutional layers (Xiao et al., 2018). This information can also be found in the related work section of our manuscript. In this context, we believe that our extension of signal propagation theory is not more _limited_ than any of these published works. Furthermore, we believe that our derivation of an initialisation for ICNNs is also not more _limited_ than e.g. the one that was recently derived for hypernetworks (Chang et al., 2020). Finally, our work does not include the correlation between samples, in contrast to (Poole et al., 2016; Schoenholz et al., 2017). We show that the variance in a layer not only depends on the variance but also on the correlation between features in the previous layer when weights have non-zero mean. 
As a result, the propagation of correlation between features interferes with the propagation of variance, which is arguably more complex than studying independent signals as in (Poole et al., 2016; Schoenholz et al., 2017). Note that although the expressions are very similar, the correlation between samples and the correlation between features are two different things. In this sense, we would argue that our work is not _limited_ to a simple derivation of (Poole et al., 2016) or (Schoenholz et al., 2017), but rather provides a different perspective into signal propagation theory. ## Backward Analysis We agree that incorporating the analysis of the backward pass to derive an initialisation is generally desirable. Therefore, we decided to include the generalised signal propagation of the backward pass in the appendix. The mean and variance propagation of the deltas in backpropagation (using assumptions similar to those of Schoenholz et al. (2017)) is given by $$\begin{align*} \mathbb{E}[\delta_j^{-}] &= M \mu_w \mathbb{E}[\phi'(s_1^{-})] \mathbb{E}[\delta_1] \\\\ \mathbb{E}[\delta_i^{-} \delta_j^{-}] &= \updelta_{ij} M \sigma_w^2 \mathbb{E}\bigl[\phi'(s_1^{-})^2\bigr] \mathbb{E}\bigl[\delta_1^2\bigr] + \mu_w^2 \mathbb{E}[\phi'(s_i^{-}) \phi'(s_j^{-})] \sum_{k,k'} \mathbb{E}[\delta_k \delta_{k'}], \end{align*}$$ where $\updelta_{ij}$ is the Kronecker delta and $\delta_i = \frac{\partial L}{\partial s_i}$. The kernel for the derivative of $\operatorname{LReLU}$ with parameter $\alpha$ is given by $$\mathbb{E}[\operatorname{LReLU}'(s_1 \mathbin{;} \alpha) \operatorname{LReLU}'(s_2 \mathbin{;} \alpha)] = (1 - \alpha)^2 \frac{1}{2 \pi} \arccos(-\rho) + \alpha.$$ This should make it possible to derive initialisations that also incorporate the backward pass, e.g. using approaches from (Glorot et al., 2010) or (Defazio et al., 2021). 
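The LReLU-derivative kernel above can be sanity-checked by Monte-Carlo sampling of standardised jointly Gaussian pre-activations (a small verification sketch; the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, rho, n = 0.1, 0.3, 2_000_000

# jointly Gaussian (s1, s2) with unit variance and correlation rho
s1 = rng.standard_normal(n)
s2 = rho * s1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

def dlrelu(s, a):
    # derivative of LReLU with negative slope a: 1 for s > 0, else a
    return np.where(s > 0, 1.0, a)

empirical = (dlrelu(s1, alpha) * dlrelu(s2, alpha)).mean()
closed_form = (1.0 - alpha)**2 / (2.0 * np.pi) * np.arccos(-rho) + alpha

assert abs(empirical - closed_form) < 2e-3
```

The agreement follows from the Gaussian orthant probability $P(s_1 > 0, s_2 > 0) = \frac{1}{2\pi}\arccos(-\rho)$, since $\operatorname{LReLU}'(s;\alpha) = \alpha + (1-\alpha)\,\mathbf{1}\{s > 0\}$.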
## Experiments We thank you for the positive assessment of our experimental section. We also think that the latent-space exploration of molecules is a particularly illustrative example of the usability of ICNNs.
Summary: NOTE: edited after author rebuttal and score has been updated. This paper is related to input convex neural networks (ICNN). It analyzes the signal propagation through such a network and based on that proposes a new initialization scheme that allows the networks to be trained efficiently. It investigates the efficacy of the scheme in various ML tasks, including image benchmarks and drug discovery tasks. Strengths: The initialization scheme is well grounded in theory (although with some stated simplifying Gaussian assumptions). The method seems to work well in practice, at least in the tasks used in the paper. It is experimentally shown that, contrary to claims in previous work, ICNNs do not require skip connections in order to be trainable to good results. Weaknesses: The ICNN model family is intuitively less powerful than normal neural networks. The authors do not discuss this, and the chosen experiments seem to be such that the performance degradation compared to normal neural networks does not happen. Could the authors discuss the limitations of the ICNN models, e.g., could ImageNet-level or GPT-style text understanding models be trained in the ICNN setting? The abstract says that the new initialization allows for more efficient latent space exploration, but as far as I can see, there is no comparison to other methods in the paper. Figure 1 shows preactivation distributions on the top, but the scale of the x axis is missing; please add that to the figure. It also seems that the activations start to concentrate around 0 as one moves deeper in the network. Is this the case, and could the authors discuss this phenomenon and how it will affect very deep networks (e.g., 20 layers)? In the experimental sections, it seems that latent space exploration is the main use case enabled by the ICNN and the new init method? 
Could the authors discuss other use cases of ICNN and for example explain why they did not repeat the experiments from previous ICNN papers and show improvements stemming from their improved initialization? Also this use case seems very interesting, could the authors discuss pros and cons related to other methods that could be used to explore the latent space of this task? Also, are there some limitations of ICNN in this task or similar tasks, e.g., related to complexity or size of the network and the data? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Minor spelling mistake 3.2 ”sufficiently good to the derive” - delete ”the”? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned in the paper, the signal propagation model assumes gaussian distributed preactivations, which is not always what happens in reality. This is well noted in the paper, and at least in the practical tasks considered in the empirical section, the derived initialization scheme still works well. Although the authors say that it would be possible to analyze skip connections, they do not perform the analysis. The reasoning that skip connections are not needed might not hold for deeper networks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Limitations and Strengths of ICNNs You are right that the ICNN family is, strictly speaking, less powerful than regular networks because they are constrained to have convex decision boundaries. Note that it is possible to construct a universal approximation theorem using theorem 1 from (Yuille and Rangarajan, 2001), which has been used in (Sankaranarayanan and Rengaswamy, 2022). Our manuscript might indeed give the (misleading) impression that ICNNs are direct competitors of regular networks. However, ICNNs are supposed to be used if the convexity provides additional benefits or is required for a method to work. We provide the comparison with regular networks to emphasise that ICNNs are not necessarily harder to train than regular networks when using a principled (e.g. our proposed) initialisation. As the reviewer points out, there is indeed no ICNN that reaches the quality of state-of-the-art regular networks at ImageNet or the text understanding quality of GPT. However, we do believe that the difficulty of training ICNNs is slowing down research in this direction and that our initialisation method might make ICNNs a more viable alternative. We have added a few sentences to the experiment section to emphasise the goal of our comparisons with non-convex networks and to reduce this possible source of confusion. ## Choice of Experiments Indeed, we show the latent space exploration for molecular generation as a particular example, in which ICNNs could be relevant. Notably, we do perform experiments from other ICNN papers; concretely, the computer vision benchmarks were also included in (Sivaprasad et al., 2021). For further examples where ICNNs are useful, we refer to our related work section. It might also be important to point out that our initialisation merely allows for more efficient training of ICNNs and is not the enabling component for these experiments. Our motivation behind these experiments is some initial results by Nesterov et al. 
(2022) who use ICNNs to “encourage” the latent space of an auto-encoder to build convex decision boundaries which allow efficient exploration of level sets. Several more methods for latent space exploration are compared in Du et al. (2022). They introduce a new method, ChemSpacE, for latent space exploration, which does not use ICNNs, and a way to evaluate these methods. However, we did not manage to reproduce their results and were therefore unable to include meaningful comparisons. Furthermore, these other methods for latent space exploration do not allow traversing level-sets, which is ultimately the main feature provided by ICNNs and the property that we chose to highlight in our final experiment. We will try to rephrase section 5.3 to resolve these sources of confusion: - the motivation for this particular experiment - the fact that ICNNs enable the level-set exploration, not our initialisation. ### Additional References Du, Y., Liu, X., Shah, N. M., Liu, S., Zhang, J., & Zhou, B. (2022). ChemSpacE: Interpretable and Interactive Chemical Space Exploration. Transactions on Machine Learning Research. ## Other Comments The scale of the x-axis in figure 1 is implicit in the width of the bins, which is constant for all histograms in a sub-plot. However, we agree that this should have been explained and an explicit scale indication is the better way to communicate this information. The activations tend to zero because any pre-activation that becomes negative is mapped to zero when using ReLU non-linearities. We found that this can be alleviated by initialising the bias parameters with non-zero variance. Due to eq. (10), this also requires a different weight variance to keep the same propagation. On the other hand, the instability of the fixed point (see appendix A.4) can lead to drift effects in the correlation, which effectively makes it hard to stabilise the propagation in very deep networks. 
Finding a setting where the correlation fixed point is actually stable would resolve this issue, but remains a problem for future work. We have updated figure 1 to provide an indication of the scale of the x-axis for the final version (figure 1 in the rebuttal PDF). We will also include a discussion about initialising the bias parameters with random samples drawn from a Gaussian distribution with variance $\frac{1}{2}$. We repeated the experiments from figure 5 in the appendix for this setting and find that the results are practically the same. Figure 2a in the rebuttal PDF shows the comparison between the initialisation with constant and random bias initialisation. These results also suggest that the randomness in the biases might enable training even deeper networks. --- Rebuttal Comment 1.1: Title: Thank you Comment: Thank you for the rebuttal. My main concerns have been addressed and this has been reflected in the edited score.
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback and suggestions. The comments and concerns of each reviewer will be addressed individually and the paper will be updated correspondingly. We did observe a common (potential) misunderstanding about our contribution in the reviews. Some reviewers claim in their summary that we _“analyse”_ the signal propagation to derive an initialisation. Although _analyse_ captures the detailed study of signal propagation, we believe it does not quite capture the value we believe we have added to the traditional theory. Therefore, we would like to emphasise that we do not just use signal propagation theory as a tool to derive an initialisation. Instead, we generalise the existing signal propagation theory to allow initial weights with non-zero means. Only then do we have the signal propagation tools to derive an initialisation for ICNNs. Possibly this was already clear to the reviewers, but we want to avoid any misunderstandings. A detailed discussion can be found in the reply to reviewer YxjJ. We look forward to further feedback during the discussion phase. Pdf: /pdf/cdc589c39fff2aff7b5159f257d661eb18725fdf.pdf
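The distinction drawn above — classical signal-propagation theory assumes zero-mean initial weights, while ICNNs constrain their hidden weights to be non-negative and therefore non-zero-mean — can be illustrated with a toy simulation. The sketch below is a hypothetical illustration, not the authors' derivation or model: it tracks the standard deviation of preactivations through a stack of ReLU layers and shows that even a small positive weight mean makes the signal explode with depth, whereas zero-mean He-style scaling keeps it stable.

```python
import numpy as np

def preactivation_stds(depth=10, width=256, w_mean=0.0, n=4096, seed=0):
    """Push Gaussian inputs through `depth` ReLU layers whose weights have
    mean `w_mean` and variance 2/width (He-style), and record the standard
    deviation of the preactivations after each layer."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((n, width))
    stds = []
    for _ in range(depth):
        w = w_mean + rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
        x = np.maximum(x, 0.0) @ w  # linear layer applied to ReLU activations
        stds.append(float(x.std()))
    return stds

stable = preactivation_stds(w_mean=0.0)    # zero-mean init: std stays moderate
blown_up = preactivation_stds(w_mean=0.1)  # positive weight mean: std explodes
```

In this toy run, `stable` hovers near 1 while `blown_up` grows by orders of magnitude within a few layers, which is one way to see why a dedicated initialisation is needed once the weight means are forced to be non-zero.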
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
RETVec: Resilient and Efficient Text Vectorizer
Accept (poster)
Summary: This paper introduces RETVec, a resilient and multilingual text vectorizer designed for neural-based text processing. RETVec combines a unique character encoding with an optional small model to embed words into a 256-dimensional vector space. The RETVec embedding model is pre-trained using pair-wise metric learning, making it robust against typos. RETVec does not require dataset pre-processing and does not have out-of-vocabulary (OOV) tokens, as it accepts all valid UTF-8 characters. The authors provide a comprehensive evaluation of RETVec, demonstrating that it is faster and less memory-intensive than other vectorizers on multi-core CPUs and GPUs. Models trained with RETVec have slightly higher accuracy, greater resilience to typos, and better resilience to adversarial attacks compared to models trained with other vectorizers. Strengths: - The paper introduces RETVec, a text vectorizer that combines a unique character encoding with an optional small model, which addresses several challenges associated with existing text vectorizers. - The paper provides a comprehensive evaluation of RETVec, demonstrating its performance in terms of speed, memory usage, and resilience to typos and adversarial attacks. Also, RETVec has the potential to significantly improve the performance of neural-based text processing, particularly in multilingual settings and in situations where typos and adversarial attacks are common. - The authors provide code for their method, and they promise to make it open-source to the public, which will further facilitate the community. Weaknesses: - Though the proposed method exhibits significant improvements on several evaluations, it does not outperform sentencepiece overall when used for training pre-trained language model (BERT), which is a main-stream approach for many NLP tasks. 
- It is vague how to effectively combine the proposed method with pre-trained language models, especially in light of the recent emergence of large language models. The paper would have been strengthened by discussing these aspects. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Major: 1. In light of the recent advances in large language models (LLMs), it would be better to discuss the potential usage of the proposed method. 2. Regarding the experiments on pre-training BERT, it seems that the proposed method will change the traditional training routine of these pre-trained models. Unfortunately, the proposed method does not exhibit significant improvements over the SentencePiece tokenizer. Besides the results in Figure 5, it would be better to illustrate the potential usage of the proposed method combined with pre-trained models (and LLMs, of course). Also, does the proposed method boost the training speed or convergence speed for BERT? Minor: 1. line 53: Glove -> GloVe 2. line 240: missing reference to GLUE benchmark Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: A more explicit discussion of the limitations of RETVec would strengthen the paper and provide a more balanced view of its potential applications and implications. For example, a discussion of its inferior results when training BERT, and of the best or recommended way to use the proposed approach with BERT (or similar)? 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and constructive feedback. Please find our responses below. > **Q1:** Though the proposed method exhibits significant improvements on several evaluations, it does not outperform sentencepiece overall when used for training pre-trained language model (BERT)... **A1:** Currently, we have not found a way to leverage RETVec to outperform other vectorizers across all tasks for pre-trained BERT models while providing increased robustness and efficiency. RETVec’s resilience comes from its learned embeddings, which are pre-trained using a deep similarity loss, which means that the vocabulary size is not limited, as it is with standard tokenizers like SentencePiece. The major downside of not having a restricted set of token IDs in the vocabulary is that RETVec embeddings cannot be converted into a softmax output, which makes the standard pre-training tasks such as masked language modeling (predicting masked token IDs) unusable as-is. As a result, we needed to use alternative pre-training tasks to pre-train language models such as BERT. We felt that it was important to report this shortcoming as it emphasizes the current limitations of RETVec. We plan to use the extra space in the final version to discuss this challenge in more detail, describe interesting directions to explore in future work for combining RETVec with pre-trained language models, as well as add additional references that highlight the potential tradeoffs between robustness and accuracy [1, 2]. [1] Zhang et al. “Theoretically Principled Trade-off between Robustness and Accuracy.” ICML, 2019. [2] Tsipras et al. “Robustness May Be at Odds with Accuracy.” ICLR, 2019. > **Q2:** It is vague how to effectively combine the proposed method with pre-trained language model... **A2:** We experimented with various approaches and reported the most successful one we found so far in the paper in Section 8. 
As discussed above, we don’t have a definitive answer yet on the best way to pre-train large language models using RETVec, especially since combining contrastively pre-trained word embeddings with LLMs is a largely unexplored area of study. We will make sure to include a discussion on this topic in the future work section, as we think the question deserves its own research given its complexity. > **Q3:** In light of the recent advances in large language models (LLMs), it would be better to discuss the potential usage of the proposed method. **A3:** We will devote the extra space in the revised paper to address this point and emphasize that future work is needed to develop a better pre-training methodology that is more compatible with RETVec, due to the unique challenges of using pre-trained word embeddings instead of token IDs to represent text. As part of this discussion, we will make sure to highlight the potential benefits of using RETVec as the vectorizer for LLMs, including better multilingual capabilities, adversarial robustness, and smaller model size. In particular, for “small” or “medium-scale” LLMs (less than or around 1 billion total parameters), the embedding layers' parameters often account for more than 20% of the total parameters [3], which could be potentially reclaimed by using RETVec. [3] Biderman et al. “Pythia: A Suite for Analyzing Large Language Models.” arXiv:2304.01373. > **Q4:** Regarding the experiments on pre-training BERT, it seems that the proposed method will change the traditional training routine of these pre-trained models... it would be better to illustrate the potential usage of the proposed method combined with pre-trained models (and LLMs, of course). **A4:** Pre-trained BERT with RETVec exhibits higher resilience to adversarial attacks and typos compared to SentencePiece (Figure 5) while remaining competitive (RETVec surpasses SentencePiece performance on 5/8 of the GLUE tasks, as shown in Table 5). 
As discussed above, we have not yet found the best pre-training methodology compatible with RETVec which will provide these benefits (efficiency and robustness) while offering equal or better performance on all tasks. Finding a more effective approach for pre-training RETVec-based models is the clear next step for our future work. For now, we reported this tradeoff and provided an in-the-wild study of the practical benefits of using RETVec with BERT when classifying adversarial text content such as spam (Section 9). We will use the extra space we have to discuss the usage and potential applications of RETVec in large pre-trained models including LLMs and our plans for future work. > **Q5:** Also, does the proposed method boost the training speed or convergence speed for BERT? **A5:** Yes, the training speed of BERT with RETVec is slightly faster than BERT with SentencePiece. The same number of training steps are needed so the convergence speed remains the same (the same number of training steps were also used to ensure a fair comparison in our benchmarks). We will make sure to highlight this fact in Section 8 and report overall pre-training and fine-tuning computational costs. > Minor: line 53: Glove -> GloVe; line 240: missing reference to GLUE benchmark Thank you for spotting those, we will correct them. > **Q6:** A more explicit discussion of the limitations of RETVec would strengthen the paper and provide a more balanced view of its potential applications and implications... **A6:** We thank the reviewer for their insightful comments, and recognize the importance of such discussion. We will make sure to devote the extra space in the final version to discuss limitations and challenges in the existing pre-training methodology for BERT, as well as potential applications and future work to adapt RETVec to large pre-trained language models. We hope that our responses helped address your questions. Please let us know if you have any further questions or feedback. 
Thank you for the review! --- Rebuttal Comment 1.1: Comment: Thanks for your response. I slightly increased my rating to reflect the authors' response.
Summary: This paper introduces RETVec, a resilient and efficient text vectorizer designed for neural-based text processing. RETVec is a multilingual tool that combines a novel character encoding with a pre-trained embedding model to create a 256-dimensional vector space. The vectorizer is significantly more resilient to typos and adversarial text attacks than other state-of-the-art vectorizers. The results show that RETVec outperforms other vectorizers in terms of accuracy and resilience to typos and adversarial attacks. Strengths: RETVec is a novel and efficient text vectorizer that is designed to be resilient to typos and adversarial text attacks. This is a significant contribution to the field of natural language processing, as text vectorization is a critical component of many NLP tasks. The paper provides a detailed description of RETVec's architecture and evaluation methodology. This makes it easier for other researchers to understand and replicate the results of the paper. The paper evaluates RETVec's performance on four different datasets with drastically different dataset sizes, number of languages, classification tasks, and text lengths. This demonstrates the versatility and effectiveness of RETVec across a wide range of NLP tasks and datasets. The results show that RETVec outperforms other vectorizers in terms of accuracy and resilience to typos and adversarial attacks. This is a significant finding, as it suggests that RETVec could be a valuable tool for real-world NLP applications where robustness to errors and attacks is critical. Weaknesses: The paper does not provide a detailed analysis of RETVec's limitations and potential failure cases. While the paper does mention some of the challenges faced during the development of RETVec, a more thorough analysis of its limitations and potential failure cases would have been helpful to better understand the scope of its applicability. The paper conducts experiments on classification tasks. 
I'm curious how it performs on text generation tasks, like machine translation, since this determines the scalability of a practical text vectorizer. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and insightful comments. Please find our response below. > **Q1:** The paper does not provide a detailed analysis of RETVec's limitations and potential failure cases. While the paper does mention some of the challenges faced during the development of RETVec, a more thorough analysis of its limitations and potential failure cases would have been helpful to better understand the scope of its applicability. The paper conducts experiments on classification tasks. I'm curious how it performs on text generation tasks, like machine translation. Since this determines the scalibility of a practical text vectorizer. **A1:** Through this review, we realized that we have been overly focused on classification tasks and adversarial resilience, which were the main goals of RETVec. We plan on fixing this in the revised version by adding an in-depth discussion on the challenges and potential approaches for applying RETVec to other tasks, including text generation tasks, other sequence-to-sequence tasks, and pre-training large language models. Generative tasks are challenging because the 256-float embedding returned by RETVec cannot be converted into a softmax output like in the case of token IDs outputted by other vectorizers such as SentencePiece. As a result, there is no straightforward way of training a generative model which predicts the next token ID in the sequence, and we have to experiment with alternative forms of pre-training such as predicting the top N words, training a decoder for the RETVec embedding model, decoding character-by-character, or using a VQ-VAE model [1]. There has been limited work on the area of combining contrastively pre-trained word embeddings with pre-trained language models on text generation tasks. 
Thus, we are still unsure which methodology is the best, and we plan on using the extra page in the final version to discuss these challenges, potential applications, our initial results, and outline the need for future work in this direction. Please let us know if you have any additional questions or feedback. Thank you for the review! [1] Van den Oord et al. “Neural Discrete Representation Learning.” NIPS 2017. arXiv:1711.00937. --- Rebuttal Comment 1.1: Comment: Thanks for your response, and it resolves my concerns! I decide to raise the score.
Summary: The paper introduces RETVec, a resilient, efficient, and multilingual text vectorizer designed for neural-based text processing. It addresses the limitations of existing approaches by combining a novel UTF-8 character encoder with a small model. RETVec does not require dataset pre-processing and accepts all valid UTF-8 characters, eliminating the need for out-of-vocabulary tokens. The embeddings are trained using pair-wise metric learning, ensuring that words with typos are embedded close to the original word. RETVec outperforms other vectorizers on text classification tasks, exhibiting higher accuracy, greater resilience to typos, and better resilience to adversarial attacks. The paper provides a TensorFlow implementation of RETVec, along with pre-trained models. Strengths: 1) Addresses the limitations of existing text vectorization approaches. 2) Combines a novel UTF-8 character encoder with a small model. 3) Does not require dataset pre-processing and eliminates the need for out-of-vocabulary tokens. 4) Trained on a word dataset with more than 157 languages. 5) Space-efficient and suitable for on-device model deployment. 6) Outperforms other vectorizers on text classification tasks, with improved accuracy and resilience to typos and adversarial attacks. 7) Provides a TensorFlow implementation and pre-trained models. Weaknesses: The paper does not provide detailed comparison results with other vectorizers on different languages and multilingual settings. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1) How does the character encoder handle rare or uncommon characters in the UTF-8 character set? 2) Are there any limitations or performance trade-offs when using RETVec with extremely long words? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: 1) The paper focuses on text classification tasks and does not explore other natural language processing tasks. 2) The paper does not provide insights into the interpretability of RETVec embeddings and their usefulness in downstream tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review and constructive feedback. Please find our responses below. > **Q1:** The paper does not provide detailed comparison results with other vectorizers on different languages and multilingual settings. **A1:** We evaluated RETVec on the Amazon Multilingual Reviews Corpus and reported the results in Figure 2, broken down per language. Figure 2 shows that RETVec outperforms the four baseline text vectorizers including SentencePiece and BPE on all 6 languages in the dataset, with a strong lead in Chinese and Japanese. We will add results on another dataset with more languages for the revised version. > **Q2:** How does the character encoder handle rare or uncommon characters in the UTF-8 character set? **A2:** RETVec’s character encoder uses 24 bits per character. The character encoder converts every valid UTF-8 character into its integer codepoint before converting it into a binary representation, which ensures that 100% of the UTF-8 character set can be uniquely represented. Additionally, for the RETVec model, we ensure that all UTF-8 characters are seen during training by including 10% random UTF-8 character strings in the training dataset and applying random character insertion and substitution augmentations. In order to visualize how RETVec handles uncommon words with rare UTF-8 characters, we will add a plot of the similarity distance between words and their typo-laden versions for both a set of common words and a set of random strings/uncommon words using the same typos, and show that they are comparable. This will demonstrate that every token, including those containing rare UTF-8 characters, is handled in a similar fashion by RETVec. > **Q3:** Are there any limitations or performance trade-offs when using RETVec with extremely long words? 
**A3:** In the ablation study, we trained RETVec models with input word lengths ranging from 12 to 32, but we did not see any performance improvements by increasing the word length above 16 characters per word (Table 6). Furthermore, increasing the input word length also increases RETVec’s model size and latency – we will add these metrics to Table 6 as well and discuss them in further detail in Section 10. > **Q4:** The paper focuses on text classification tasks and does not explore other natural language processing tasks. **A4:** RETVec is designed for adversarially resilient text classification and with on-device use-cases in mind, which is why most of our benchmarks are focused around classification performance and adversarial robustness. We realized that we should have devoted more space to discuss other use-cases such as text generation in future work and plan to use the extra space in the revised version to correct this. > **Q5:** The paper does not provide insights into the interpretability of RETVec embeddings and their usefulness in downstream tasks. **A5:** We will try to display the clusters of word embeddings using an embedding projector (https://projector.tensorflow.org/) and add it to the GitHub repository and paper appendix. This visualization will help demonstrate that syntactically similar words (e.g. a word and a typo version of a word) are clustered closer together while the embeddings of different and dissimilar words are further apart. This will hopefully provide some intuition on the RETVec embedding space and offer insights into the interpretability of RETVec embeddings. We are unsure how to provide insights into the embeddings’ usefulness in downstream tasks. Please let us know if you have any additional questions or feedback, we would be happy to incorporate any further feedback into the revision of the paper. Thank you!
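To make the encoding described in A2 concrete, here is a minimal sketch of a 24-bit codepoint-to-binary encoder (hypothetical function names, not the released implementation). It relies only on the facts stated above — every valid character maps to a unique integer codepoint, the Unicode maximum (0x10FFFF) fits in 24 bits, and the ablation uses up to 16 characters per word — so no character is out of vocabulary:

```python
def encode_char(ch: str, bits: int = 24) -> list[int]:
    """Map a character to its codepoint, then to a fixed-width
    binary vector (most significant bit first)."""
    cp = ord(ch)  # unique integer codepoint for any valid character
    return [(cp >> (bits - 1 - i)) & 1 for i in range(bits)]

def encode_word(word: str, max_len: int = 16, bits: int = 24) -> list[list[int]]:
    """Encode up to max_len characters per word; shorter words are
    padded with all-zero rows."""
    rows = [encode_char(c, bits) for c in word[:max_len]]
    rows += [[0] * bits] * (max_len - len(rows))
    return rows

# Rare characters need no special handling: an emoji encodes like any letter,
# since the largest Unicode codepoint still fits in 24 bits.
assert (0x10FFFF).bit_length() <= 24
```

This also makes the "no OOV tokens" property mechanical: the encoder is a total function on characters, so there is simply no input it can fail on.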
null
null
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
CorresNeRF: Image Correspondence Priors for Neural Radiance Fields
Accept (poster)
Summary: The submission #8302, entitled "CorresNeRF: Image Correspondence Priors for Neural Radiance Fields" proposes a novel set of losses to improve the quality of NeRF under challenging conditions. In particular, the developed strategy effectively deals with the problem of sparse images. To achieve this performance, the authors take advantage of an out-of-the-box matching strategy between pairs of images to enforce geometric constraints during the training of the implicit representation. In particular, two types of losses are proposed to improve the quality of the reconstruction, namely the reprojection loss and the depth loss. An extensive series of experiments demonstrates the relevance of these extra losses incorporated into the training. Another advantage of the proposed strategy is that it can easily be integrated into most implicit reconstruction techniques. Strengths: - The paper is well-written and straightforward - The approach is simple and can easily be integrated into most NeRF-based approaches leading to improved results - The proposed technique is very effective under sparse view constraints - The computational overhead is very limited as matching strategies are often very fast Weaknesses: - The approach is very simple; the losses in themselves are not really new but demonstrate very effective results. The contributions appear to be limited, but the quality of the results might justify an acceptance. For this reason, I would like to express a mixed opinion regarding the acceptance of this work. Note that a relatively similar loss (on 3D structure obtained via correspondences) is applied in "Structure-Aware NeRF without Posed Camera via Epipolar Constraint" but with less success than in this manuscript #8302. - CorresNeRF demonstrates good performance when few images are used, but it would be interesting to know the effect of these losses in more common scenarios. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - An ablation study with an increasing number of images could be interesting to analyze how the method scales and if the demonstrated improvement are also valid with a larger density of images. - Analysis with different matcher/keypoints would be a plus. - What is the effect of the density of matched points? For instance, if a very sparse SIFT matching was used, what would be the expected effect of that? - Exploring other types of loss would be interesting, for instance, some epipolar losses instead of the reprojection loss. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: I have very little to say about this paper as it is very clear and straightforward. I would like to kindly recommend additional experiments, as explained in the previous section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
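The reprojection loss is only named in the review; for concreteness, here is a hedged NumPy sketch of one standard way such a correspondence-based constraint can be formed (a hypothetical illustration under an assumed pinhole camera model — the paper's exact formulation may differ): back-project matched pixels from view A using the rendered depth, transform them into view B, re-project, and penalize the distance to the matched pixels in B.

```python
import numpy as np

def reprojection_loss(depth_a, uv_a, uv_b, K, T_ab):
    """depth_a: (N,) rendered depths for pixels uv_a in view A.
    uv_a, uv_b: (N, 2) matched pixel coordinates in views A and B.
    K: (3, 3) shared intrinsics; T_ab: (4, 4) rigid transform taking
    camera-A coordinates to camera-B coordinates."""
    ones = np.ones((uv_a.shape[0], 1))
    pix = np.hstack([uv_a, ones])                            # homogeneous pixels
    pts_a = (np.linalg.inv(K) @ pix.T).T * depth_a[:, None]  # back-project
    pts_b = (T_ab @ np.hstack([pts_a, ones]).T).T[:, :3]     # into view B frame
    proj = (K @ pts_b.T).T
    proj = proj[:, :2] / proj[:, 2:3]                        # perspective divide
    return float(np.mean(np.linalg.norm(proj - uv_b, axis=1)))
```

When the rendered depth is consistent with a correspondence, this term vanishes; any depth error shows up directly as a pixel-space residual, which is why such priors can be effective in the sparse-view regime the reviewer highlights.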
Rebuttal 1: Rebuttal:

### Q1: Ablation study with an increasing number of images.

We thank the reviewer for the suggestion. We tested the robustness of CorresNeRF with varying input view counts. Specifically, we doubled the number of input views from 3 to 6 and then evaluated CorresNeRF's performance on the LLFF dataset.

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| -------------------- | ----- | ----- | ------ | ---------- |
| NeRF (3 views) | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (3 views) | 19.83 | 0.70 | 0.29 | 0.91 |
| NeRF (6 views) | 20.15 | 0.69 | 0.22 | 1.08 |
| CorresNeRF (6 views) | 21.51 | 0.74 | 0.22 | 0.85 |

The table indicates that CorresNeRF consistently outshines the baseline NeRF model, regardless of whether 3 or 6 views are used. Given that CorresNeRF is a plug-and-play module that can be added to any NeRF, provided quality image correspondences are available, its addition can boost the performance of NeRF, even in dense-view configurations.

### Q2: Analysis with different matcher/keypoints. Does it work with very sparse correspondence, e.g., SIFT matching?

We thank the reviewer for the question. We assessed CorresNeRF's performance using correspondences derived from different image matching techniques, namely LoFTR [55] and DKMv3 [56]. The LLFF dataset served as our testing ground.

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| ----------------------- | ----- | ----- | ------ | ---------- |
| NeRF | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (with LoFTR) | 18.13 | 0.64 | 0.30 | 1.10 |
| CorresNeRF (with DKMv3) | 19.83 | 0.70 | 0.29 | 0.91 |

The results show that CorresNeRF can outperform the vanilla NeRF, irrespective of the image matching technique employed, as long as quality correspondences are acquired. Additionally, CorresNeRF benefits from a "free" performance boost when a superior image matching method is employed, offering avenues for further enhancing CorresNeRF's performance. 
- [55] LoFTR: Detector-Free Local Feature Matching with Transformers, CVPR 2021 - [56] DKM: Dense Kernelized Feature Matching for Geometry Estimation, CVPR 2023 To study the effect of the density of matched points, we obtained correspondences using image matching methods and subsequently sampled a subset (50%, 25%, 12.5%, 6.25%, and 3.125%) of these correspondences to train CorresNeRF. We then assessed CorresNeRF's performance on the LLFF dataset. | Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ | | ------------------------------- | ------ | ----- | ------ | ---------- | | NeRF | 16.79 | 0.56 | 0.37 | 1.66 | | CorresNeRF (with 3.125% corres) | 18.616 | 0.647 | 0.322 | 1.129 | | CorresNeRF (with 6.25% corres) | 18.934 | 0.657 | 0.299 | 1.113 | | CorresNeRF (with 12.5% corres) | 18.854 | 0.66 | 0.287 | 1.108 | | CorresNeRF (with 25% corres) | 19.068 | 0.669 | 0.269 | 1.10 | | CorresNeRF (with 50% corres) | 18.986 | 0.67 | 0.266 | 1.085 | | CorresNeRF (with 100% corres) | 19.83 | 0.70 | 0.29 | 0.91 | Notably, even with only 3.125% of the correspondences, CorresNeRF significantly outperforms the baseline NeRF model. The performance of CorresNeRF improves as the correspondence quantity increases. When 100% of the correspondences are used, CorresNeRF achieves its peak performance. Thus, as long as quality correspondences are provided in adequate numbers, CorresNeRF can surpass the regular NeRF's performance. ### Q3: Exploring other types of loss, e.g. epipolar loss. We thank the reviewer for the suggestion. The epipolar loss, as discussed in the Structure-Aware NeRF paper, bears similarities to the reprojection loss used in CorresNeRF. We will delve deeper into the intricacies of the epipolar loss in the finalized version of our CorresNeRF paper. --- Rebuttal 2: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. We want to inquire whether our response has addressed your questions and concerns.
We are more than happy to discuss with you further and provide additional materials. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. As we are approaching the deadline of the discussion period, we would like to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Thank you again for the review and comments! Best regards, Authors
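Q3 above mentions epipolar losses as an alternative to the reprojection loss. For concreteness, the algebraic epipolar constraint that such a loss would penalize can be sketched as follows; the function name and the rectified-stereo fundamental matrix are illustrative assumptions, not part of CorresNeRF:

```python
import numpy as np

def epipolar_residual(F, x1, x2):
    """Algebraic epipolar residual x2^T F x1 for a putative match.

    F: (3, 3) fundamental matrix; x1, x2: homogeneous pixel coordinates.
    An epipolar loss penalizes this residual instead of a full reprojection
    error; this is an illustrative sketch, not the CorresNeRF loss.
    """
    return float(x2 @ F @ x1)

# For a rectified stereo pair (pure horizontal translation), corresponding
# points share the same image row, and F = [t]_x with t = (1, 0, 0).
F = np.array([[0.0, 0.0, 0.0],
              [0.0, 0.0, -1.0],
              [0.0, 1.0, 0.0]])
x1 = np.array([10.0, 5.0, 1.0])
x2 = np.array([25.0, 5.0, 1.0])   # same row -> residual is exactly 0
print(epipolar_residual(F, x1, x2))  # → 0.0
```

Unlike the reprojection loss, this residual needs no depth estimate, which is why epipolar losses are sometimes preferred when triangulation is unreliable.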
Summary: The paper presents a NeRF regularization method for few-view NeRF. Strengths: 1. Using a state-of-the-art image matcher to regularize NeRF training is novel. 2. The paper is well-written and clear. Weaknesses: 1. The paper proposes employing a cutting-edge image matcher to enhance NeRF training. However, a similar idea was presented in Neuris [1], which also suggested using patch matching to optimize NeRF training. Intuitively, one could assume that a state-of-the-art image matcher would identify more precise correspondences than patch match, leading to superior results. However, considering that Neuris integrates additional monocular depths and surface normals, the effectiveness of combining these three methods remains uncertain. Therefore, the author appears to have overlooked an essential baseline, Neuris. It is recommended that the author carry out experimental work based on the Neuris setting rather than implementing Neuris in their own setup, which would make the conclusion more convincing. 2. The pixel loss and depth loss appear to aim towards the same goal. The concept of using both has been previously explored in DSAC [2], but was later discarded in DSAC++ [3], deemed unnecessary. While the author provides an ablation study to illustrate the effectiveness of the reprojection loss, its value remains questionable. This is primarily because, in multiview settings, the reprojection loss mirrors the depth loss, making its unique contribution uncertain. 3. Minor: The citation of UNISURF in Figure 5 seems to be wrong. [1] NeuRIS: Neural Reconstruction of Indoor Scenes Using Normal Priors [2] DSAC - Differentiable RANSAC for Camera Localization [3] Visual Camera Re-Localization from RGB and RGB-D Images Using DSAC Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Can the author show the proposed method is better than Neuris? Confidence: 5: You are absolutely certain about your assessment.
You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The image matcher sometimes fails when there are not enough overlap regions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1: Comparison with Neuris. We have examined the Neuris method and believe its approach of utilizing normal and monocular depth priors complements CorresNeRF. Notably, CorresNeRF serves as a plug-and-play module applicable to any NeRF, provided reasonable image correspondences can be achieved. Consequently, the image correspondence priors in CorresNeRF can integrate seamlessly with the normal and monocular depth priors in Neuris. Moreover, the Neuris paper predominantly focuses on indoor scenes, having been evaluated solely on the ScanNet dataset. It requires monocular depth and surface normal models that are specifically trained for indoor settings. Conversely, CorresNeRF's image matching methods are versatile, catering to both indoor and outdoor scenes. ### Q2: Pixel loss and depth loss comparison. We thank the reviewer for the question. We believe that the pixel reprojection loss is more closely associated with the image matching method. This loss is defined in the 2D image space and is directly tied to the 2D image correspondences. In contrast, the depth loss is defined in the 3D space and has a strong connection to the NeRF model, given that the rendered depth in NeRF is the weighted sum of sampled point depths along the camera ray. Based on our ablation study (Table 3 in the main paper), both the pixel loss and depth loss contribute significantly to the performance of CorresNeRF. ### Q3: Minor citation issue of UNISURF. We thank the reviewer for pointing this out. We have fixed the citation issue. --- Rebuttal Comment 1.1: Comment: As I mentioned, Neuris also uses patch matching as a prior. I think the authors should show that the image matcher is better than patch match. If the method cannot outperform Neuris, the paper is meaningless. It is not convincing to say normal and monocular depth priors are compatible with CorresNeRF. I will maintain my rating unless I see concrete results.
--- Reply to Comment 1.1.1: Title: Additional results and discussions on NeuRIS Comment: ### Summary We would like to express our gratitude to the reviewer for the additional comments. To address these comments, we have conducted further experiments with NeuRIS using the DTU dataset under the same sparse view setting as described in the paper. In summary: - NeuRIS performs similarly to, or even worse than, the baseline NeuS method upon which it is based. CorresNeRF significantly outperforms both NeuRIS and NeuS. - The performance of NeuRIS is highly sensitive to the quality of the normal priors. Since high-quality image correspondences are easier to obtain than normal priors, we regard CorresNeRF as a more practical solution. - We carefully conducted the experiments to ensure a fair comparison between CorresNeRF and NeuRIS. We have also visualized the normals for a more intuitive comparison and analysis. - We consulted with the author of NeuRIS and received confirmation regarding our observations about the performance of NeuRIS on the DTU dataset. ### Quantitative Results We report the Chamfer-L1 distance results on the DTU dataset in the table below. All models were trained using the same three input views as described in the main paper, and the evaluation was done using the official DTU evaluation script. The values reported are Chamfer-L1 distances, where lower values are better.
| Scene | UNISURF | VolSDF | NeuS | Neuris | Ours | | ------- | ------- | -------- | ---- | ------ | -------- | | scan24 | 7.81 | 7.00 | 6.06 | 5.55 | **2.73** | | scan37 | 7.54 | 6.95 | 7.24 | 6.84 | **4.92** | | scan40 | 6.37 | 7.47 | 7.68 | 4.74 | **3.00** | | scan55 | 8.38 | 2.90 | 5.85 | 7.10 | **2.37** | | scan63 | 8.40 | 4.58 | 8.84 | N/A | **2.52** | | scan65 | 5.08 | **2.30** | 4.65 | 5.48 | 2.71 | | scan69 | 7.42 | 3.85 | 6.30 | 7.81 | **2.05** | | scan83 | 7.92 | 9.14 | 9.62 | 11.28 | **3.14** | | scan97 | 8.73 | 3.50 | 4.82 | 5.82 | **2.27** | | scan105 | 8.89 | 6.52 | 8.19 | 7.60 | **3.61** | | scan106 | 5.89 | **1.76** | 4.99 | 7.72 | 2.08 | | scan110 | 7.68 | N/A | 5.75 | 4.91 | **2.03** | | scan114 | 3.43 | **0.81** | 2.01 | 5.29 | 1.37 | | scan118 | 6.47 | 3.93 | 6.16 | 6.89 | **1.83** | | scan122 | 8.51 | **1.45** | 4.25 | 6.72 | 2.85 | | mean | 7.23 | 4.44\* | 6.16 | 6.70\* | **2.63** | \* Averaged over the valid results only. We observe that NeuRIS performs similarly to, or even worse than, the baseline NeuS method upon which it is based. This underperformance is likely due to the inaccuracy of the pre-trained normal priors (TiltedSN and SNU) when applied to the DTU dataset. We provide further visualizations and discussions in the following sections to elucidate this issue. ### Visualizations of the Normal Priors We propose that the performance of NeuRIS is highly sensitive to the quality of the normal priors. A key reason for NeuRIS’s poor performance on the DTU dataset seems to be the inaccuracy of these pre-trained normal priors. To further validate this hypothesis, we visualize the normal priors computed by TiltedSN on the DTU dataset. As shown in Figure 1, while most of the normal predictions are reasonable, a significant number of normals point in tilted or incorrect directions. This discrepancy is likely due to TiltedSN being pre-trained on indoor scene datasets, which differ substantially from the object-level DTU dataset. 
We also provide visualizations of the normal priors on the ScanNet dataset, which are used in NeuRIS. These visualizations are shown in Figure 2. ### Replies from the Author of NeuRIS To further investigate the performance of NeuRIS on the DTU dataset, we contacted the first author of NeuRIS. The author provided us with the following reply: > "The two pre-trained normal priors (TiltedSN and SNU) are indeed not very > accurate on the DTU dataset. Whether the performance will be better or worse > (compared to the baseline Neus) may depend on other factors, such as the > number of sparse views used in the experiment." The NeuRIS author also reviewed our visualizations of the DTU normal priors and confirmed the correctness of our implementation and our observations. Furthermore, we found that the NeuRIS codebase contains partial code for loading DTU data. However, the author neither reported the evaluation results for the DTU dataset in the paper, nor provided the full configurations and documentation necessary to run experiments using the DTU dataset in their codebase. --- Rebuttal 2: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. We want to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Best regards, Authors
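The Chamfer-L1 metric reported in the table above can be sketched as follows. This is a simplified stand-in for the official DTU evaluation script, which additionally applies visibility masks and distance thresholds; the function name and toy data are illustrative:

```python
import numpy as np

def chamfer_l1(pred, gt):
    """Symmetric Chamfer-L1: mean nearest-neighbor distance in both directions.

    pred: (N, 3) predicted point cloud; gt: (M, 3) reference point cloud.
    Averages accuracy (pred -> gt) and completeness (gt -> pred).
    """
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1)  # (N, M) pairwise
    acc = d.min(axis=1).mean()    # pred -> gt (accuracy)
    comp = d.min(axis=0).mean()   # gt -> pred (completeness)
    return (acc + comp) / 2.0

# Identical clouds have distance 0; shifting one cloud increases it.
pts = np.random.default_rng(1).uniform(size=(200, 3))
print(chamfer_l1(pts, pts))                        # → 0.0
print(chamfer_l1(pts + np.array([1.0, 0.0, 0.0]), pts))
```

Lower values indicate better geometry, which is why CorresNeRF's smaller entries in the table correspond to more accurate reconstructions.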
Summary: This paper proposes CorresNeRF, a method that leverages image correspondence priors to improve NeRF training on sparse input views. The correspondence matching is computed by off-the-shelf methods. The authors argue that the introduced inexpensive image correspondence priors can be used to supervise training of arbitrary NeRFs and lead to better performance / faster convergence when taking sparse-view inputs. Further, a robust correspondence loss is designed, including reprojection loss and depth loss based on correspondence priors. Overall, the method demonstrates superior reconstruction quality against baselines like VolSDF and NeuS, etc. Strengths: The paper's main contributions are summarized as follows: - Introduction of image correspondence as a cost-effective prior to supervise the training of any NeRFs. - Design of a pipeline to obtain robust correspondences from standard methods, including automatic augmentation and filtering. - Introduction of a robust correspondence loss incorporating reprojection loss and depth loss based on correspondence priors. - Extensive experiments conducted on various baselines and datasets demonstrating the method's effectiveness across different types of neural implicit representations. The authors conduct extensive experiments on various datasets, which demonstrate the effectiveness of their method. They find significant improvements in both novel view synthesis and surface reconstruction metrics. The proposed method outperforms other state-of-the-art sparse-view reconstruction methods and works well with various types of NeRF, including those with other priors. Weaknesses: There are some limitations that should be properly discussed: - Dependence on the quality of image correspondence. The quality of the obtained image correspondence matching significantly impacts the effectiveness of the proposed approach.
Less accurate correspondences can negatively affect the supervised training of NeRF, which might lead to suboptimal reconstruction results. - Performance in non-sparse scenarios. The method is focused on the advantage of using CorresNeRF in sparse-view configurations, but it doesn't mention how this method would perform in dense-view configurations. It would be interesting to show such an ablation study to verify this. - The main comparisons show VolSDF and NeuS results as baselines. However, a more adequate baseline would be SparseNeuS (SparseNeuS: Fast Generalizable Neural Surface Reconstruction from Sparse Views. ECCV 2022). - The method shows 3 input views on DTU/LLFF datasets. What happens if an arbitrary number of input views is given? This is somewhat related to the 2nd point above. But it would be nice to have such experiments to better evaluate the robustness of the proposed pipeline. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Some questions: - Clarification on the image correspondence methods. The paper mentions that image correspondences are obtained via an off-the-shelf method. Could you provide more details on how they are acquired and how this (i.e. the selection of the method) would affect the final reconstruction quality? - Robustness to the noise/outliers in the correspondence. An automatic augmentation and outlier removal process was designed, which seems to be key to the proposed approach. It would be useful to provide more details on the robustness of the design with various levels of noise/outliers on some scenes. - What is the minimum quality of correspondence necessary for the method to outperform traditional NeRF implementations? - Impact of loss terms on results. How did the inclusion of correspondence pixel reprojection loss and correspondence depth loss affect the final results? Could you clarify how these losses contribute to the reconstruction quality? - Extreme and failure cases.
One such case I'd imagine is glossy/specular or textureless surfaces. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are discussed in the above weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1 (from weaknesses): Dependence on the quality of image correspondence. We thank the reviewer for the suggestion. We employed image matching methods to obtain correspondences and subsequently introduced Gaussian noise to these correspondences. Specifically, we added Gaussian noise with standard deviations of 1, 2, and 4 pixels to both x and y pixel coordinates of the correspondences. We then assessed the performance of CorresNeRF using the LLFF dataset. Section 3.2 describes how CorresNeRF employs an automatic outlier removal process based on camera reprojection error. Column 2 in the table below reports the relative number of correspondences remaining after this filtering process. | Method | Corres # After Auto Filter | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ | | ---------------------------- | -------------------------- | ----- | ----- | ------ | ---------- | | NeRF | 0.00% | 16.79 | 0.56 | 0.37 | 1.66 | | CorresNeRF (noise_std = 4px) | 13.96% | 18.31 | 0.61 | 0.48 | 1.04 | | CorresNeRF (noise_std = 2px) | 27.04% | 19.16 | 0.66 | 0.33 | 1.06 | | CorresNeRF (noise_std = 1px) | 48.91% | 19.31 | 0.67 | 0.28 | 1.06 | | CorresNeRF (noise_std = 0px) | 100.00% | 19.83 | 0.70 | 0.29 | 0.91 | When image correspondences are contaminated by noise, the automatic outlier removal process discards more of them, resulting in fewer correspondences for CorresNeRF to utilize. However, the correspondences that remain are deemed higher quality. Consequently, CorresNeRF maintains satisfactory performance even with noisy correspondences. ### Q2 (from weaknesses): Performance in non-sparse scenarios. We tested the robustness of CorresNeRF with varying input view counts. Specifically, we doubled the number of input views from 3 to 6 and then evaluated CorresNeRF's performance on the LLFF dataset. 
| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ | | -------------------- | ----- | ----- | ------ | ---------- | | NeRF (3 views) | 16.79 | 0.56 | 0.37 | 1.66 | | CorresNeRF (3 views) | 19.83 | 0.7 | 0.29 | 0.91 | | NeRF (6 views) | 20.15 | 0.69 | 0.22 | 1.08 | | CorresNeRF (6 views) | 21.51 | 0.74 | 0.22 | 0.85 | The table indicates that CorresNeRF consistently outshines the baseline NeRF model, regardless of whether 3 or 6 views are used. Given that CorresNeRF is a plug-and-play module that can be added to any NeRF, provided quality image correspondences are available, its addition can boost the performance of NeRF, even in dense-view configurations. ### Q3 (from weaknesses): Comparison with other baselines, such as SparseNeuS. In summary, as long as reliable image correspondences can be established, CorresNeRF can serve as a generic plug-and-play module to enhance the performance of any NeRF model, including SparseNeuS. SparseNeuS learns generalizable priors from image features, encoding coarse-to-fine geometry volumes for generic surface prediction. These methods are distinct from the image correspondence priors used in CorresNeRF. Consequently, SparseNeuS can be integrated with CorresNeRF to further enhance performance. We plan to incorporate additional experiments combining CorresNeRF with SparseNeuS in the final version of the paper. ### Q4 (from questions): Clarification on the image correspondence methods, including the selection of the method. We assessed CorresNeRF's performance using correspondences derived from different image matching techniques, namely LoFTR [55] and DKMv3 [56]. The LLFF dataset served as our testing ground. 
| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ | | ----------------------- | ----- | ----- | ------ | ---------- | | NeRF | 16.79 | 0.56 | 0.37 | 1.66 | | CorresNeRF (with LoFTR) | 18.13 | 0.64 | 0.30 | 1.10 | | CorresNeRF (with DKMv3) | 19.83 | 0.70 | 0.29 | 0.91 | The results show that CorresNeRF can outperform the vanilla NeRF, irrespective of the image matching technique employed, as long as quality correspondences are acquired. Additionally, CorresNeRF benefits from a "free" performance boost when a superior image matching method is employed, offering avenues for further enhancing CorresNeRF's performance. ### Q5 (from questions): Robustness to the noise/outliers in the correspondence, and what is the minimum quality of correspondence necessary for the method to outperform traditional NeRF implementations? Please kindly refer to the answer to Q1 above. ### Q6 (from questions): Impact of loss terms. How do pixel reprojection loss and correspondence depth loss affect the final results? We believe that the pixel reprojection loss is more closely associated with the image matching method. This loss is defined in the 2D image space and is directly tied to the 2D image correspondences. In contrast, the depth loss is defined in the 3D space and has a strong connection to the NeRF model, given that the rendered depth in NeRF is the weighted sum of sampled point depths along the camera ray. Based on our ablation study (Table 3 in the main paper), both the pixel loss and depth loss contribute significantly to the performance of CorresNeRF. ### Q7 (from questions): Extreme and failure cases As described in the limitations section of the paper, CorresNeRF is dependent on the results produced by the image matching method. If the input surface is glossy, specular, or lacks texture, the image matching method may fail to generate accurate correspondences. Moreover, such correspondences could be eliminated by the automated outlier removal process, which is based on camera reprojection errors.
In these scenarios, the enhancements offered by CorresNeRF over the baseline NeRF model might be minimal. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. As we are approaching the deadline of the discussion period, we would like to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Thank you again for the review and comments! Best regards, Authors --- Rebuttal 2: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. We want to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Best regards, Authors
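The rendered-depth argument in Q6 above (rendered depth as a weighted sum along the camera ray) can be illustrated with a minimal volume-rendering sketch. This is the generic NeRF quadrature, not the authors' implementation, and all names are illustrative:

```python
import numpy as np

def render_depth(sigmas, ts):
    """Rendered depth as the weighted sum of sample depths along a camera ray.

    sigmas: densities at N samples along the ray; ts: their depths.
    Mirrors standard NeRF volume-rendering quadrature (illustrative sketch).
    """
    deltas = np.diff(ts, append=ts[-1] + 1e10)          # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)             # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance
    weights = trans * alphas                            # rendering weights w_i
    return float(np.sum(weights * ts))                  # depth = sum_i w_i * t_i

# A ray whose density is concentrated near t = 2.0 renders a depth near 2.0;
# a correspondence depth loss would supervise this value with a depth derived
# from the matched pixel pair and the camera poses.
ts = np.linspace(0.1, 4.0, 128)
sigmas = 50.0 * np.exp(-0.5 * ((ts - 2.0) / 0.05) ** 2)
print(render_depth(sigmas, ts))
```

Because the rendered depth depends on the full weight distribution along the ray, supervising it shapes the 3D density field directly, whereas the reprojection loss acts in 2D image space.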
Summary: The paper introduces CorresNeRF, a method that leverages image correspondence priors to improve the performance of Neural Radiance Fields (NeRF) in scenarios with sparse input views. The authors propose a plug-and-play module that incorporates correspondence priors into the training process by adding loss terms on the reprojection error and depth error of the correspondence points. They develop an adaptive algorithm for augmenting and filtering the correspondence priors to enhance their quality. The proposed method is evaluated on novel view synthesis and surface reconstruction tasks using density-based and SDF-based neural implicit representations across different datasets. The proposed CorresNeRF utilizes image correspondence priors to supervise the training of NeRF models. This approach addresses the challenge of sparse input views and enhances the performance of NeRF in reconstructing 3D geometries. The authors propose an automatic augmentation and outlier removal process for improving the quality and robustness of the correspondence priors. This process enhances the dense correspondence estimation and mitigates the effects of inaccurate correspondences. The paper formulates a correspondence loss that incorporates reprojection and depth errors based on the correspondence priors. This loss effectively guides the learning of implicit functions in NeRF models and improves their performance. Strengths: The paper demonstrates several strengths across different dimensions: The paper introduces the concept of leveraging image correspondence priors to improve the performance of NeRF models in sparse-view scenarios. This novel approach addresses the challenge of reconstructing 3D geometries with limited input views and introduces the use of image correspondences as explicit supervision for learning implicit functions in NeRF. 
The combination of image correspondence priors and NeRF training is a creative and innovative approach that expands the capabilities of NeRF models. The paper addresses a significant problem in the field of 3D reconstruction and view synthesis. Sparse-view scenarios are common in real-world applications, and improving the performance of NeRF models under such conditions has practical implications. The proposed method offers a practical and effective solution by leveraging image correspondence priors, which are readily obtainable and can be computed using standard methods. The experimental results demonstrate the superiority of the proposed approach over previous methods, highlighting its potential for advancing the state-of-the-art in novel view synthesis and surface reconstruction tasks. The paper presents a well-designed methodology with clear objectives and a systematic evaluation process. The authors carefully consider the limitations of existing methods and propose solutions to overcome them. The proposed CorresNeRF method incorporates robust correspondence loss and automatic augmentation and filtering of correspondence priors, enhancing the quality and effectiveness of the training process. The experimental evaluation is thorough, encompassing various neural implicit representations and datasets, and the results demonstrate significant improvements in performance metrics. The proposed method is described in a structured manner, with detailed explanations of the augmentation and filtering process, formulation of correspondence loss, and evaluation metrics. The figures and equations further enhance the clarity of the paper, aiding in the understanding of the concepts and techniques presented. 
Weaknesses: While the paper demonstrates several strengths, there are also a few areas where it could be improved: Experimental Evaluation: The paper would benefit from a more detailed analysis of the computational efficiency and resource requirements of the CorresNeRF method. Providing insights into the computational demands and resource utilization of the approach would help readers understand the practical implications and scalability of the method. Image Correspondences: While the paper introduces image correspondences as priors, it is important to acknowledge the potential challenges in estimating accurate and robust image correspondences, especially in scenarios with occluded or noisy images. Conducting a sensitivity analysis of correspondence accuracy would provide a clearer understanding of the method's performance under different conditions and shed light on its robustness and generalization capabilities. Comparison with State-of-the-Art: The paper would benefit from a more comprehensive comparison with existing state-of-the-art methods for sparse-view reconstruction, such as MVSNeRF and GeoNeRF. Providing a thorough evaluation and comparison against these methods would help establish the superiority and novelty of the proposed CorresNeRF method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Could you provide a more detailed analysis of the computational efficiency and resource requirements of the CorresNeRF method? Specifically, it would be valuable to include information on training time, inference speed, and memory utilization to understand the practical implications and scalability of the approach. Estimating accurate and robust image correspondences can be challenging in real-world scenarios. It would be helpful to discuss the performance under inaccurate correspondence. 
It would be beneficial to provide a more comprehensive comparison with existing state-of-the-art methods for sparse-view reconstruction, such as MVSNeRF and GeoNeRF. Including a thorough evaluation and comparison against these methods would strengthen the justification for the superiority and novelty of the proposed CorresNeRF method. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitation is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Q1: Experimental Evaluation: computational efficiency of CorresNeRF. We thank the reviewer for the question. At inference time, CorresNeRF operates at exactly the same runtime as the baseline NeRF model. However, during training, CorresNeRF incurs additional runtime overheads due to the search for correspondences and the computation of the correspondence loss. We've conducted a supplementary runtime analysis experiment. Specifically, we evaluated the runtime for both the forward pass (rendering) and backward pass (gradient computation), performing these measurements over 100 iterations with a batch size of 1024 on the fern scene of the LLFF dataset. Testing was conducted on a single NVIDIA RTX 2080Ti GPU, and both average and standard deviation of the runtimes were reported. | Method | Training Forward (ms) | Training Backward (ms) | | ---------- | --------------------- | ---------------------- | | NeRF | 51.722 ± 0.312 | 70.941 ± 0.429 | | CorresNeRF | 126.896 ± 3.254 | 124.160 ± 2.371 | It's worth noting that the runtime overhead of CorresNeRF is contingent upon the ratio of pixels possessing valid correspondences. We believe the additional overhead introduced by CorresNeRF is justifiable, especially given the substantial performance enhancement over the baseline NeRF model. ### Q2: Image Correspondences: sensitivity analysis of image correspondences We thank the reviewer for the suggestion. We conducted two sets of supplementary experiments to assess the impact of 1) quality and 2) quantity of image correspondences on CorresNeRF's performance. Our results indicate that CorresNeRF maintains impressive performance even when faced with noisy correspondences or when utilizing only a small subset of image correspondences. This underscores the robustness of CorresNeRF; it demonstrates that as long as reasonable correspondences are present, CorresNeRF can amplify the performance over the baseline NeRF model. 
**Effects of correspondence quality (robustness to noise)** We employed image matching methods to obtain correspondences and subsequently introduced Gaussian noise to these correspondences. Specifically, we added Gaussian noise with standard deviations of 1, 2, and 4 pixels to both x and y pixel coordinates of the correspondences. We then assessed the performance of CorresNeRF using the LLFF dataset. Section 3.2 describes how CorresNeRF employs an automatic outlier removal process based on camera reprojection error. Column 2 in the table below reports the relative number of correspondences remaining after this filtering process. | Method | Corres # After Auto Filter | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ | | ---------------------------- | -------------------------- | ----- | ----- | ------ | ---------- | | NeRF | 0.00% | 16.79 | 0.56 | 0.37 | 1.66 | | CorresNeRF (noise_std = 4px) | 13.96% | 18.31 | 0.61 | 0.48 | 1.04 | | CorresNeRF (noise_std = 2px) | 27.04% | 19.16 | 0.66 | 0.33 | 1.06 | | CorresNeRF (noise_std = 1px) | 48.91% | 19.31 | 0.67 | 0.28 | 1.06 | | CorresNeRF (noise_std = 0px) | 100.00% | 19.83 | 0.70 | 0.29 | 0.91 | When image correspondences are contaminated by noise, the automatic outlier removal process discards more of them, resulting in fewer correspondences for CorresNeRF to utilize. However, the correspondences that remain are deemed higher quality. Consequently, CorresNeRF maintains satisfactory performance even with noisy correspondences. **Effects of correspondence quantity** We obtained correspondences using image matching methods and subsequently sampled a subset (50%, 25%, 12.5%, 6.25%, and 3.125%) of these correspondences to train CorresNeRF. We then assessed CorresNeRF's performance on the LLFF dataset. 
| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| ------------------------------- | ------ | ----- | ------ | ---------- |
| NeRF | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (with 3.125% corres) | 18.616 | 0.647 | 0.322 | 1.129 |
| CorresNeRF (with 6.25% corres) | 18.934 | 0.657 | 0.299 | 1.113 |
| CorresNeRF (with 12.5% corres) | 18.854 | 0.66 | 0.287 | 1.108 |
| CorresNeRF (with 25% corres) | 19.068 | 0.669 | 0.269 | 1.10 |
| CorresNeRF (with 50% corres) | 18.986 | 0.67 | 0.266 | 1.085 |
| CorresNeRF (with 100% corres) | 19.83 | 0.70 | 0.29 | 0.91 |

Notably, even with only 3.125% of the correspondences, CorresNeRF significantly outperforms the baseline NeRF model. The performance of CorresNeRF improves as the correspondence quantity increases, and with 100% of the correspondences CorresNeRF achieves its peak performance. Thus, as long as quality correspondences are provided in adequate numbers, CorresNeRF can surpass the regular NeRF's performance.

### Q3: Comparison with the state-of-the-art (MVSNeRF and GeoNeRF)

We thank the reviewer for the suggestion. In essence, if reasonable image correspondences can be secured, CorresNeRF can serve as a versatile plug-and-play module to enhance any NeRF model, including MVSNeRF and GeoNeRF. MVSNeRF and GeoNeRF are generalizable NeRF models suitable for sparse-view settings. MVSNeRF capitalizes on a plane-sweeping cost volume, while GeoNeRF constructs cost volumes through transformer-based feature aggregation. These methodologies are distinct from the image correspondence priors employed by CorresNeRF. Hence, combining these techniques with CorresNeRF could potentially yield further performance improvements. We plan to incorporate additional experiments, merging CorresNeRF with MVSNeRF and GeoNeRF, in the finalized version of the paper.

---

Rebuttal 2: Comment: Dear Reviewer,

We sincerely thank you for your precious time and efforts in reviewing our paper.
We want to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Best regards, Authors --- Rebuttal Comment 2.1: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. As we are approaching the deadline of the discussion period, we would like to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Thank you again for the review and comments! Best regards, Authors --- Rebuttal 3: Comment: The training speed of NeRF will be significantly slower when incorporating the proposed correspondence design, which will be a much bigger problem for the current fashion of fast scene representation (e.g., Instant-NGP). With the high correspondence computation, I believe the fast speed merit of Instant-NGP or other fast models will be lost completely. The application scenario and practical value of this method is relatively narrow. I keep my rating of borderline reject. --- Rebuttal Comment 3.1: Comment: We appreciate the reviewer's insights and the introduction of Instant-NGP and other fast models into our discussion. In response to the concerns raised: > Reviewer comment: "With the high correspondence computation, I believe the fast speed merit of Instant-NGP or other fast models will be lost completely." The above statement is not true. **In fact, the speed advantages of Instant-NGP are still retained.** For instance, if CorresNeRF takes 2.x times the training time of the standard NeRF, the "Corres-Instant-NGP" will similarly take 2.x times the training time of Instant-NGP, preserving the speed advantage of Instant-NGP. In CorresNeRF, image correspondences are pre-computed and cached. The additional runtime mainly comes from the extra forward/backward pass for corresponding pixels. 
As such, the runtime overhead should be considered on a **"relative scale"** rather than an "absolute scale". Furthermore, CorresNeRF introduces **zero inference overhead** but offers superior reconstruction quality.

> Reviewer comment: "The application scenario and practical value of this method is relatively narrow."

It's important to highlight that CorresNeRF and Instant-NGP address **orthogonal challenges**. While Instant-NGP emphasizes rapid training, the primary goal of CorresNeRF is to enhance reconstruction quality in sparse-view contexts. Given the orthogonal design considerations of these methods, a direct comparison of their training times may not be meaningful. Moreover, we anticipate that Instant-NGP's performance would be much worse than that of CorresNeRF in sparse-view conditions, such as in 3-view or 6-view scenarios.
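The "relative scale" argument can be illustrated with a small arithmetic sketch. The multiplicative-overhead model and the Instant-NGP step time are assumptions for illustration, not measurements:

```python
def with_corres_overhead(base_ms, factor=2.45):
    """Assumed multiplicative model: correspondence supervision scales a
    training step by a roughly constant factor regardless of backbone.
    The factor 2.45 approximates the NeRF forward-pass ratio from Q1."""
    return base_ms * factor

nerf_ms = 51.7   # measured NeRF forward time from Q1 (ms)
ingp_ms = 5.0    # hypothetical Instant-NGP step time (ms), illustration only

# Under this model, the backbone's relative speed advantage is preserved:
ratio_before = nerf_ms / ingp_ms
ratio_after = with_corres_overhead(nerf_ms) / with_corres_overhead(ingp_ms)
```

Multiplying both backbones' step times by the same factor leaves their speed ratio unchanged, which is the sense in which the fast-training merit is retained.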
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and questions. In this section, we provide a summary of our responses and present new experimental results. Specifically, we introduce new experiments that examine:

- The robustness of CorresNeRF with noisy correspondences
- The robustness of CorresNeRF when using fewer correspondences
- The performance of CorresNeRF with additional input views (3-view and 6-view)
- The performance of CorresNeRF when using different image matchers

## Robustness of CorresNeRF with Noisy Correspondences

We employed image matching methods to obtain correspondences and subsequently introduced Gaussian noise to these correspondences. Specifically, we added Gaussian noise with standard deviations of 1, 2, and 4 pixels to both x and y pixel coordinates of the correspondences. We then assessed the performance of CorresNeRF using the LLFF dataset. Section 3.2 describes how CorresNeRF employs an automatic outlier removal process based on camera reprojection error. Column 2 in the table below reports the relative number of correspondences remaining after this filtering process.

| Method | Corres # After Auto Filter | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| ---------------------------- | -------------------------- | ----- | ----- | ------ | ---------- |
| NeRF | 0.00% | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (noise_std = 4px) | 13.96% | 18.31 | 0.61 | 0.48 | 1.04 |
| CorresNeRF (noise_std = 2px) | 27.04% | 19.16 | 0.66 | 0.33 | 1.06 |
| CorresNeRF (noise_std = 1px) | 48.91% | 19.31 | 0.67 | 0.28 | 1.06 |
| CorresNeRF (noise_std = 0px) | 100.00% | 19.83 | 0.70 | 0.29 | 0.91 |

When image correspondences are contaminated by noise, the automatic outlier removal process discards more of them, resulting in fewer correspondences for CorresNeRF to utilize. However, the correspondences that remain are deemed higher quality. Consequently, CorresNeRF maintains satisfactory performance even with noisy correspondences.
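The noise-injection step of the protocol above can be sketched as follows. The correspondence data layout and helper name are illustrative assumptions; the outlier filter of Section 3.2 would subsequently discard pairs whose reprojection error is too large:

```python
import random

def perturb_correspondences(corres, noise_std):
    """Add i.i.d. Gaussian pixel noise to both endpoints of each
    correspondence, as in the robustness study above. `corres` is a list
    of ((x1, y1), (x2, y2)) pixel pairs; this layout is an assumption."""
    noisy = []
    for (x1, y1), (x2, y2) in corres:
        noisy.append((
            (x1 + random.gauss(0, noise_std), y1 + random.gauss(0, noise_std)),
            (x2 + random.gauss(0, noise_std), y2 + random.gauss(0, noise_std)),
        ))
    return noisy

# Two hypothetical correspondences between a view pair.
pairs = [((10.0, 20.0), (12.0, 19.0)), ((30.0, 5.0), (28.5, 6.0))]
noisy_pairs = perturb_correspondences(pairs, noise_std=2.0)
```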
## Robustness of CorresNeRF with Reduced Correspondence Quantity

We obtained correspondences using image matching methods and subsequently sampled a subset (50%, 25%, 12.5%, 6.25%, and 3.125%) of these correspondences to train CorresNeRF. We then assessed CorresNeRF's performance on the LLFF dataset.

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| ------------------------------- | ------ | ----- | ------ | ---------- |
| NeRF | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (with 3.125% corres) | 18.616 | 0.647 | 0.322 | 1.129 |
| CorresNeRF (with 6.25% corres) | 18.934 | 0.657 | 0.299 | 1.113 |
| CorresNeRF (with 12.5% corres) | 18.854 | 0.66 | 0.287 | 1.108 |
| CorresNeRF (with 25% corres) | 19.068 | 0.669 | 0.269 | 1.10 |
| CorresNeRF (with 50% corres) | 18.986 | 0.67 | 0.266 | 1.085 |
| CorresNeRF (with 100% corres) | 19.83 | 0.70 | 0.29 | 0.91 |

Notably, even with only 3.125% of the correspondences, CorresNeRF significantly outperforms the baseline NeRF model. The performance of CorresNeRF improves as the correspondence quantity increases, and with 100% of the correspondences CorresNeRF achieves its peak performance. Thus, as long as quality correspondences are provided in adequate numbers, CorresNeRF can surpass the regular NeRF's performance.

## Performance of CorresNeRF with more input views (3-view and 6-view)

We tested the robustness of CorresNeRF with varying input view counts. Specifically, we doubled the number of input views from 3 to 6 and then evaluated CorresNeRF's performance on the LLFF dataset.

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| -------------------- | ----- | ----- | ------ | ---------- |
| NeRF (3 views) | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (3 views) | 19.83 | 0.7 | 0.29 | 0.91 |
| NeRF (6 views) | 20.15 | 0.69 | 0.22 | 1.08 |
| CorresNeRF (6 views) | 21.51 | 0.74 | 0.22 | 0.85 |

The table indicates that CorresNeRF consistently outperforms the baseline NeRF model, regardless of whether 3 or 6 views are used.
Given that CorresNeRF is a plug-and-play module that can be added to any NeRF, provided quality image correspondences are available, its addition can boost the performance of NeRF, even in dense-view configurations.

## Using Different Image Matchers for CorresNeRF

We assessed CorresNeRF's performance using correspondences derived from different image matching techniques, namely LoFTR [55] and DKMv3 [56]. The LLFF dataset served as our testing ground.

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| ----------------------- | ----- | ----- | ------ | ---------- |
| NeRF | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (with LoFTR) | 18.13 | 0.64 | 0.30 | 1.10 |
| CorresNeRF (with DKMv3) | 19.83 | 0.70 | 0.29 | 0.91 |

The results show that CorresNeRF can outperform the vanilla NeRF, irrespective of the image matching technique employed, as long as quality correspondences are acquired. Additionally, CorresNeRF benefits from a "free" performance boost when a superior image matching method is employed, offering avenues for further enhancing CorresNeRF's performance.

[55] LoFTR: Detector-Free Local Feature Matching with Transformers, CVPR 2022
[56] DKM: Dense Kernelized Feature Matching for Geometry Estimation, CVPR 2023
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes an approach for sparse-view NeRF reconstruction that uses image correspondences as a prior. Under the sparse-view regime, NeRF is overparameterized and under-constrained, hence requiring a prior to optimize. This paper proposes to use image correspondences extracted across the different views; in particular, they use DKMv3. They propose two additional loss functions based on the correspondences from the prior: the first uses the reprojection error computed from the expected depth predicted by the NeRF, while the second uses a correspondence-depth loss based on finding the closest 3D points in space given a correspondence from the prior. Experiments on novel view synthesis and surface reconstruction show the improvement of the proposed approach.

Strengths: The paper proposes to use image correspondences as a prior for sparse-view NeRF reconstruction, which is intuitive and sound. Image correspondences as a prior are generalizable, and hence a pretrained model can be used, namely DKMv3. They propose two simple yet intuitive losses for their approach. Experiments show that the proposed method performs better than existing baselines.

Weaknesses: The effectiveness of the method relies on accurate prediction of the correspondences, and it is known that correspondences can be erroneous in textureless regions, under illumination changes, or with wide-baseline cameras. On real scenes, these issues might arise more often, e.g. on sparse ScanNet images as used by existing benchmarks [53, 54]. The sparse-view inputs there have wide camera baselines, as opposed to the forward-facing scenes in LLFF. It would be more convincing if the method could also perform reasonably in such settings.
Some references on sparse view NeRF:

**[53]** Dense Depth Priors for Neural Radiance Fields from Sparse Input Views, CVPR '22
**[54]** SCADE: NeRFs from Space Carving with Ambiguity-Aware Depth Estimates, CVPR '23

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:

1. What happens when other image correspondence priors (not DKMv3) are used? It would be beneficial to see how this affects results, for example with both another neural-network-based prior and a handcrafted prior, e.g. the output of COLMAP, even if it will only give correspondences at sparse pixel locations.
2. NeRF depth is used for the correspondence pixel reprojection loss, and it is known that in the sparse regime the NeRF depth can be erroneous. Did this cause an issue in the convergence of the network? Would it have helped to add in a depth loss supervision as a prior?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The authors have included limitations in the main paper of the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
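The pixel reprojection loss summarized in this review can be sketched as follows, assuming a simple pinhole camera model. The camera conventions, names, and signatures are illustrative assumptions, not CorresNeRF's actual code:

```python
def reproject(p1, depth, K, R, t):
    """Back-project pixel p1 = (u, v) at the NeRF's expected depth,
    transform into the second camera with rotation R (3x3 row lists) and
    translation t, and project with intrinsics K = (fx, fy, cx, cy)."""
    fx, fy, cx, cy = K
    # back-project to a 3D point in camera-1 coordinates
    X = [(p1[0] - cx) / fx * depth, (p1[1] - cy) / fy * depth, depth]
    # rigid transform into camera-2 coordinates
    Y = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # pinhole projection back to pixels
    return (fx * Y[0] / Y[2] + cx, fy * Y[1] / Y[2] + cy)

def reprojection_loss(p1, p2, depth, K, R, t):
    """Squared pixel distance between the reprojection of p1 and its
    matched pixel p2, the quantity the reprojection term penalizes."""
    q = reproject(p1, depth, K, R, t)
    return (q[0] - p2[0]) ** 2 + (q[1] - p2[1]) ** 2

# Sanity check: with identical cameras and matched pixels, the loss is zero.
K = (500.0, 500.0, 320.0, 240.0)
R_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
loss = reprojection_loss((100.0, 80.0), (100.0, 80.0),
                         depth=2.0, K=K, R=R_id, t=[0.0, 0.0, 0.0])
```

The correspondence-depth term described in the summary would instead compare depths at the mutually closest 3D points of the two matched rays; it is omitted here for brevity.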
Rebuttal 1: Rebuttal: ### Q1: Performance on wide-camera baselines (e.g. ScanNet) in addition to forward-facing LLFF.

We appreciate the reviewer's inquiry. In our paper, we present evaluations of CorresNeRF on the LLFF dataset with forward-facing cameras, as well as on the DTU dataset where the cameras have a spherical configuration. In "wide-camera" scenarios such as the ScanNet dataset, establishing correspondences between input images necessitates a significant overlap between images containing textured regions. Provided that reasonable correspondences can be achieved through image matching techniques, CorresNeRF remains effective. We intend to include additional experiments on the ScanNet dataset in the paper's final version.

In essence, CorresNeRF depends on image matchers to ascertain correspondences. As long as these correspondences are established, CorresNeRF can function as a generic plug-and-play module, enhancing the efficacy of any NeRF model, which bodes well for the broader community.

### Q2: What happens when other image correspondence priors are used?

We thank the reviewer for the question. Indeed, the quality and quantity of image correspondences influence CorresNeRF's performance. We've introduced a new experiment where we compare CorresNeRF's performance when using different image matching techniques, specifically LoFTR [55] and DKMv3 [56], on the LLFF dataset. Here are the results:

| Method | PSNR↑ | SSIM↑ | LPIPS↓ | Depth MAE↓ |
| ----------------------- | ----- | ----- | ------ | ---------- |
| NeRF | 16.79 | 0.56 | 0.37 | 1.66 |
| CorresNeRF (with LoFTR) | 18.13 | 0.64 | 0.30 | 1.10 |
| CorresNeRF (with DKMv3) | 19.83 | 0.70 | 0.29 | 0.91 |

It's evident that regardless of the chosen image matching method, if viable correspondences are established, CorresNeRF can enhance performance beyond the original NeRF.
Additionally, when a superior image matching method is employed, CorresNeRF naturally benefits, potentially driving its performance further.

Regarding COLMAP correspondences, we applied COLMAP to the LLFF dataset using specified cameras in 3 sparse views. Here's a summary:

| Scene | COLMAP: Num of Corres (Pixel Coverage %) | DKMv3: Num of Corres (Pixel Coverage %) |
| -------- | ---------------------------------------- | --------------------------------------- |
| fern | 362 (0.19%) | 368,798 (57%) |
| flower | 685 (0.35%) | 356,044 (73%) |
| fortress | 609 (0.31%) | 430,044 (76%) |
| horns | 512 (0.27%) | 271,705 (47%) |
| leaves | 201 (0.11%) | 198,412 (48%) |
| orchids | 229 (0.12%) | 242,620 (37%) |
| room | 345 (0.18%) | 260,308 (40%) |
| trex | 644 (0.34%) | 233,950 (35%) |

From this data, it's clear that COLMAP yields sparser correspondences compared to DKMv3. While CorresNeRF's performance might be restricted with only a few correspondences, it still outperforms the standard NeRF model. We direct the reviewer to Table 1 in the supplementary material for more detailed statistics.

- [55] LoFTR: Detector-Free Local Feature Matching with Transformers, CVPR 2022
- [56] DKM: Dense Kernelized Feature Matching for Geometry Estimation, CVPR 2023

### Q3: NeRF depth can be erroneous in the sparse regime. Did this cause an issue in the convergence of the network? Would it have helped to add in a depth loss supervision as a prior?

We thank the reviewer for the question. Indeed, NeRF's depth can be unreliable in a sparse regime without an auxiliary prior. In CorresNeRF, image correspondences are used as priors, ensuring the depth values are tethered by the corres depth loss and pixel reprojection loss. This makes the depth values more reliable than those in the standard NeRF model.
Regarding the proposition of introducing depth loss supervision as a prior, we believe that CorresNeRF's corres depth loss and pixel reprojection loss already implicitly offer such supervision. Given the camera parameters and correspondences, depth values can be triangulated, and CorresNeRF's loss terms appropriately model this relationship. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. We want to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Best regards, Authors --- Reply to Comment 1.1.1: Comment: Dear Reviewer, We sincerely thank you for your precious time and efforts in reviewing our paper. As we are approaching the deadline of the discussion period, we would like to inquire whether our response has addressed your questions and concerns. We are more than happy to discuss with you further and provide additional materials. Thank you again for the review and comments! Best regards, Authors
null
null
null
null
null
null
Parameter-efficient Tuning of Large-scale Multimodal Foundation Model
Accept (poster)
Summary: This paper proposes a novel approach to address the challenge of high learning costs when migrating large models to specific downstream tasks. The proposed method aims to reduce task complexity and improve the consistency across the different modal outputs of multimodal models. The authors employ a LoRA-like technique that fine-tunes the transformer by appending adjustment matrices to the q/k/v matrices. This process is further optimized through CP decomposition, which reduces the scale of the fine-tuning parameters. An Informative Context Enhancement mechanism then computes weights, and the mixed image-text features are adjusted according to these weights, facilitating the generation of superior fused features. To prevent the loss of textual information during the alignment of deep multimodal networks, the authors propose a Gated Query Transformation, which calculates the blending ratio of text features for enhancement. The paper claims that these techniques allow the model to outperform the current best methods on several downstream tasks, using fewer fine-tuning parameters, and even to surpass fully fine-tuned methods.

Strengths:

(1) Innovative and Effective Approach: Building upon the foundations of similar works like LoRA, this paper makes significant strides by achieving better fine-tuning performance with lower parameter usage.

(2) Proposes Novel Techniques: The paper introduces innovative methods such as Informative Context Enhancement and Gated Query Transformation to enhance the fusion of modalities. These methods appear to be highly effective in improving model performance, and could potentially be applied in a range of different contexts.

(3) High-Quality Visualizations: The paper includes aesthetically pleasing and informative figures and tables, which contribute to the clarity and overall quality of the work. The visualizations effectively aid in the understanding of complex concepts and methodologies.
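The CP-decomposed update summarized above can be sketched as follows. This is a minimal pure-Python illustration under assumed factor shapes (a shared rank-R factorization across three q/k/v slices); it is not the paper's exact parameterization:

```python
def cp_update(U, V, P, lam):
    """Reconstruct a 3-way weight update from rank-one CP factors:
        delta[m][i][j] = sum_r lam[r] * U[i][r] * V[j][r] * P[m][r]
    U: d_out x R, V: d_in x R, P: modes x R (one slice per q/k/v matrix).
    Storing only the factors costs (d_out + d_in + modes + 1) * R numbers,
    versus modes * d_out * d_in for dense per-matrix updates."""
    R = len(lam)
    return [[[sum(lam[r] * U[i][r] * V[j][r] * P[m][r] for r in range(R))
              for j in range(len(V))] for i in range(len(U))]
            for m in range(len(P))]

# Tiny rank-1 example: d_out=2, d_in=1, three mode slices (q, k, v).
delta = cp_update(U=[[2.0], [0.0]], V=[[3.0]],
                  P=[[1.0], [5.0], [0.0]], lam=[1.0])
```

Because the q/k/v slices share the U and V factors and differ only through the small mode factor P, the parameter count grows additively in the dimensions rather than multiplicatively, which is the claimed advantage over per-matrix LoRA updates.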
Weaknesses:

(1) Not Clearly Written: There appears to be some confusion in the terminology used, especially around 'soft prompts'. While the paper asserts that its fine-tuning approach can be seen as a 'soft prompt', the proposed method is more akin to parameter-efficient transfer learning. Therefore, the use of the term 'prompt' may not be accurate in this context. A clearer and more precise usage of technical terminology would benefit readers and strengthen the overall quality of the paper. (The authors now state that in the final version, they will replace the inappropriate term "prompt" with "multimodal parameter-efficient transfer learning based on mode approximations" to better reflect the essence of the method.)

(2) Lack of Details on Gated Query Transformation: The paper does not provide a clear and thorough explanation of the implementation details for the Gated Query Transformation. For future improvements, it would be beneficial to include more technical details of this novel technique, which would improve the clarity of the paper and make it easier for others in the field to replicate and build upon this work. (The authors state the final version will be more detailed.)

(3) Manual Parameter Tuning: The paper suggests that the rank hyperparameter needs to be manually adjusted, which could pose an obstacle to scalability and efficiency. More research is required on how to optimally choose this hyperparameter's value. (The authors said they used detailed grid search experiments to find a good rank.)

Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:

(1) Justification for Terminology: Could you clarify why you choose to refer to your method as a 'multimodal prompt'? Given the technique's resemblance to low-rank fine-tuning, the terminology might be misleading. Understanding the rationale behind this terminology would greatly aid in interpreting your work.
(2) Clarification on Gated Query Transformation: The paper could benefit from further details on the application of the Gated Query Transformation. Specifically, is the text information fused with the image features before entering the cross-attention, according to the gate values? This part of the methodology was not entirely clear in the paper, and further explanation would enhance the clarity of your methods and potentially support the broader understanding and application of your techniques. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: (1) Hyperparameter Selection: The need for manual selection of the rank hyperparameter is a key limitation of the proposed method. While it is acknowledged by the authors, there is no substantial discussion on how this limitation can be addressed. (2) Training Time: Another potential limitation that should be acknowledged is the impact on training time. Although the proposed method reduces the parameter count for downstream tasks, it doesn't seem to significantly decrease training time. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of the novelty and effectiveness of our approach as well as the visualization demonstration. We will continue to improve based on your feedback, and we believe that our Aurora has a very positive impact on promoting the efficient transfer of multimodal large models in the community.

>**Not Clearly Written.** Thanks for the comment on the terminology; we greatly appreciate your valuable feedback. We agree with your statement: our method is essentially a parameter-efficient transfer learning method. Our initial understanding was that the features obtained by passing the input through the learnable parameters constructed by mode approximation are prompts, which are added to the features obtained by the pre-trained network. Therefore, in the final version, we will replace the inappropriate term "prompt" with "multimodal parameter-efficient transfer learning based on mode approximations" to better reflect the essence of our method.

>**Lack of Details on Gated Query Transformation.** Thanks for your sincere comment on the detail supplement; we greatly appreciate your valuable feedback. We will add solid proof behind the argument regarding the loss of textual information, together with implementation details, in our final version for better understanding.

>**Manual Parameter Tuning.** Thanks for the comment on the hyper-parameter; we greatly appreciate your valuable feedback. Actually, we borrow the idea of [1] to automatically estimate the intrinsic dimension and then tune the model according to the estimated rank value (around 60). To further investigate the effectiveness of our low-rank decomposition, we conducted detailed grid search experiments on different rank values, shown in Figure 3 of the paper. An obvious phenomenon is that the performance improvement starts to level off after the rank increases to 64.
Furthermore, we will attempt to integrate the intrinsic dimension estimator with our method more tightly.

>**Justification for Terminology.** Our initial understanding was that the features obtained by passing the input through the learnable parameters constructed by mode approximation are added to the features obtained by the pre-trained network as prompts. As a result, our mode approximation modules in the multi-modal network are called a 'multimodal prompt'. In the final version, we will replace the inappropriate term "prompt" with "multimodal parameter-efficient transfer learning based on mode approximations" to more accurately reflect the nature of our method.

>**Clarification on Gated Query Transformation.** Thanks for your sincere comment on the detail supplement; we greatly appreciate your valuable feedback. Loss of textual information in deep multi-modal fusion branches essentially forms the basis for introducing the Gated Query Transformation. Please refer to the answer for reviewer NuU2 for details of the solid proof. The Gated Query Transformation utilizes a gated function to fuse textual information with the feature in the fusion branch as the input for the cross-attention in the next layer, avoiding textual information loss. We will add the solid proof behind the argument regarding the loss of textual information, together with implementation details, in our final version for better understanding.

[1] Chen B, Huang K, Raghupathi S, et al. Automated discovery of fundamental variables hidden in experimental data[J]. Nature Computational Science, 2022, 2(7): 433-442.

---

Rebuttal Comment 1.1: Comment: Thanks for your feedback. After a careful re-evaluation of your paper and its rebuttal, I acknowledge you addressed our initial concerns about "prompt". Your method will be a useful way to fine-tune a VLM.

---

Reply to Comment 1.1.1: Title: Response to reviewer Srms
Comment: Thank you once again for providing us with your valuable feedback on our paper.
We are grateful to learn that our responses have successfully addressed your concerns. Your efforts in reviewing our work, as well as your insightful comments and support, are sincerely appreciated. Your suggestions have been invaluable in shaping the final version of our paper. We genuinely value your contributions and will ensure that your valuable suggestions are carefully incorporated.

---

Rebuttal 2: Title: Thanks for your efforts and look forward to your reply.
Comment: We sincerely appreciate your review and the constructive suggestions you have provided once again! Through our discussions and the reviewers' responses, it appears that we have effectively addressed the major concerns raised by everyone, and received a higher score from Reviewer kJXf. This outcome has greatly benefited us, and we would like to express our gratitude to all of you for your support!

After carefully reviewing your feedback once again, we have summarized the key points and will implement these modifications in the next version:

* Rectify the use of "prompts" and replace it with "parameter-efficient transfer learning method" for accurate representation.
* Supplement more details regarding the important modules and polish up the writing.
* Supplement broad details on our training, such as parameter tuning tricks.

We firmly believe that our framework (AURORA) for parameter-efficient transfer of multimodal models plays a significant role in advancing the community, and we are committed to making our complete code and training details publicly available. Moreover, we are eager to engage in further discussions with you to enhance our understanding of the domain and further improve the quality of the paper.

We would deeply appreciate it if you could reconsider the score accordingly. We are always willing to address any of your further concerns.
Summary: This paper aims to design a lightweight prompt tuning method (i.e. Aurora) for cross-modal transfer. The main idea follows the observation by LoRA [15] that most of the features are redundant and a low-rank ∆W can be learned to adapt the features. Different from LoRA, they adopt CP decomposition [47] to decompose the learnable parameters into a series of rank-one tensors. They also propose Informative Context Enhancement and Gated Query Transformation for better modality alignment. However, the connection between the two parts is not clear. Experiments show that Aurora performs better than LoRA and is at least comparable to full fine-tuning methods.

Strengths:

1. Figure 1 highlights the difference between the proposed method and the baselines.
2. The proposed method is lightweight and effective.
3. Aurora is applicable to both image-text and video-text retrieval.

Weaknesses:

Major

1. The main idea of the paper is lightweight adaptation achieved by adopting CP decomposition [47] for the adapter weights. While it is shown to be effective and parameter-efficient, there is limited technical novelty introduced by this work. In terms of story-level novelty, it mainly follows the observation and approach in LoRA [15], and is not considered novel.
2. The improvement of Aurora over UniAdapter [32] is marginal in Table 1.
3. What are the BLIP and BLIP+LoRA zero-shot performances in Table 4?
4. The new components of this paper are Informative Context Enhancement and Gated Query Transformation. However, there is a loose connection between the main idea and these two modules; they are orthogonal to the low-rank approximation part. What would the performance be if LoRA were integrated with these two modules?
5. Which part of the method does Parameter Sharing in L248-L255 correspond to?

Minor

6. What are the x-axis and y-axis in Figure 7?
7. The main flow of the architecture is not clear. In Section 3.3, Gated Query Transformation is presented after Context Enhancement.
However, Gated Query Transformation manipulates f, but Context Enhancement depends on f. Is the former performed prior to the latter?

8. The insight for Gated Query Transformation is unclear. How about replacing t' in L191 with t?

Technical Quality: 3 good
Clarity: 3 good
Questions for Authors: The authors are suggested to address the concerns in the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: The limitation is mentioned at the end of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly.
Code Of Conduct: Yes
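The Gated Query Transformation questioned in points 7 and 8 above can be sketched in one plausible form: a sigmoid gate that blends the text feature back into the fusion-branch feature before the next cross-attention layer. The gate parameterization and convex blending here are assumptions for illustration, not the paper's exact formulation:

```python
import math

def gated_query_transform(t, f, w_t, w_f, b):
    """Hypothetical gated blending of a text feature t into the fusion
    feature f, producing the query for the next cross-attention layer.
    Gate g = sigmoid(w_t . t + w_f . f + b); output = g * t + (1 - g) * f.
    All weights and the blending form are illustrative assumptions."""
    logit = (sum(a * x for a, x in zip(w_t, t))
             + sum(a * x for a, x in zip(w_f, f)) + b)
    g = 1.0 / (1.0 + math.exp(-logit))
    return [g * ti + (1.0 - g) * fi for ti, fi in zip(t, f)], g

# With zero gate weights, the gate is 0.5 and the output is the midpoint.
q, gate = gated_query_transform(t=[1.0, 0.0], f=[0.0, 1.0],
                                w_t=[0.0, 0.0], w_f=[0.0, 0.0], b=0.0)
```

A learned gate of this kind lets the network keep re-injecting textual information into the fusion branch, which is the stated motivation for the module.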
Rebuttal 1: Rebuttal: Thank you for your recognition of the lightweight design, effectiveness, and comprehensive experiments of our method. We will continue to improve based on your feedback, and we believe that our Aurora has a very positive impact on promoting the efficient transfer of large multimodal models in the community. >**Limited technical novelty.** We believe that our paper has clear novelty and contributes to the entire community, for three reasons. **First**, our main advantage is that the mode approximation based on CP decomposition has a lighter parameter decomposition architecture compared to LoRA. **Second**, our low-rank decomposition method has better mathematical interpretability as theoretical support (please refer to Appendix F). Most importantly, **third**, Aurora is not significantly dependent on the rank, demonstrating true parameter efficiency. Specifically, in Figure 3(d) of the main text, we can observe that as the rank increases, our method Aurora does not exponentially increase the number of learnable parameters. Experimental results have further validated our idea. > **Marginal improvement in Table 1.** We would like to point out that our advantages over UniAdapter are actually clear not only in parameter efficiency but also in performance. **First**, in the cross-modal retrieval task in Table 1, we achieve better results than UniAdapter even when the fine-tuned accuracy is already near 100\%. **Second**, in the tasks of Tables 2 and 3, we achieve even greater advantages over UniAdapter, with around a 3\% improvement in performance in both video-text retrieval and VQA tasks. **Third**, our Aurora already achieves a leading advantage with rank=64. In fact, when we compare fairly with UniAdapter using rank=512, our lead is further expanded to around 5\%. 
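The parameter-efficiency argument in this rebuttal (shared CP factors make the learnable-parameter count grow only weakly with rank and depth, unlike per-layer LoRA pairs) can be made concrete with a back-of-the-envelope count. This is a hypothetical sketch: the shapes (hidden size 768, rank 64, 12 adapted layers) and the exact factor layout are illustrative assumptions, not the paper's reported configuration.

```python
def lora_params(d: int, r: int, n_layers: int) -> int:
    # LoRA learns an independent pair A in R^{d x r}, B in R^{r x d}
    # per adapted weight matrix, so the count grows with rank AND depth.
    return n_layers * (d * r + r * d)

def cp_mode_approx_params(d: int, r: int, n_layers: int) -> int:
    # A CP decomposition of the stacked weight-update tensor
    # W in R^{d x d x n_layers} shares two global factor matrices
    # U, V in R^{d x r}; each layer adds only a length-r mode entry,
    # plus the rank-r coefficient vector lambda. (Assumed layout.)
    return d * r + d * r + n_layers * r + r

d, r, n_layers = 768, 64, 12
print(lora_params(d, r, n_layers))            # grows linearly with n_layers
print(cp_mode_approx_params(d, r, n_layers))  # dominated by the shared 2*d*r term
```

Under these assumptions the per-layer cost of raising the rank is r extra scalars for CP versus 2dr for LoRA, which matches the rebuttal's claim that Aurora's parameter burden does not blow up with rank.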
> **Zero-shot performance in Table 4.** We add additional experiments on BLIP and BLIP+LoRA, where BLIP is the base pretrained version and BLIP+LoRA is finetuned with LoRA. The experimental results are shown below:

| Method | \# Parameters | MSRVTT (T2V) | | | DiDemo (T2V) | | |
| :-- | :------: | :--: | :--: | :--: | :--: | :--: | :--: |
| | | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 |
| BLIP | 223M | 41.5 | 62.0 | 70.7 | 42.1 | 59.6 | 67.3 |
| BLIP + LoRA | 10.6M | 42.7 | 62.8 | 71.4 | 43.3 | 60.3 | 68.2 |

>**Loose connection between main modules.** Mode approximation is designed for high-efficiency transfer; however, no purely parameter-efficient module avoids the modality-alignment problem on its own. Therefore, how to utilize the feature outputs of the mode approximation module to boost performance on multimodal tasks is the core motivation of Informative Context Enhancement and Gated Query Transformation. We also add experiments on **LoRA integrated with Informative Context Enhancement and Gated Query Transformation**. From the results shown below, we can draw the following conclusions: **First**, Informative Context Enhancement and Gated Query Transformation indeed boost the performance of multimodal tasks even with LoRA, which validates the effectiveness of our proposed modules. **Second**, the gain for LoRA is clearly lower than that for Aurora, which can be attributed to the better representations learned on downstream tasks leading to better modality alignment. 
| \#Tunable | MSCOCO I2T | | | MSCOCO T2I | | | FLICKR30K I2T | | | FLICKR30K T2I | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| - | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 | R@1 | R@5 | R@10 |
| 10.6M | 80.1 | 94.4 | 97.5 | 62.3 | 84.5 | 90.9 | 96.5 | 99.9 | 100.0 | 86.2 | 97.4 | 98.7 |

| **\#Tunable** | **MSRVTT** | | | | **DiDemo** | | | |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| - | R@1 | R@5 | R@10 | MdR | R@1 | R@5 | R@10 | MdR |
| 10.6M | 50.6 | 72.7 | 81.6 | 2.0 | 51.7 | 76.0 | 83.4 | 2.0 |

> **Parameter Sharing explanation.** The parameter sharing described in L248-L255 means that the trainable parameters of the textual branch, visual branch, and multimodal fusion branch in BLIP share the same U and V decomposition factors when performing mode approximation. Detailed descriptions can be found in L144-L146. >**Figure 7 explanation.** Figure 7 shows a comparison of the distribution statistics of the parameters for the pre-trained model and for our Aurora after efficient fine-tuning. The x-axis represents the parameter values, and the y-axis represents the frequency of occurrence. >**Demonstration order.** Gated Query Transformation is implemented prior to Context Enhancement, following Figure 2. We will change the order in which these two modules are presented in the final version to help readers better understand. >**Insight for Gated Query Transformation.** The loss of textual information in deep multimodal fusion branches essentially forms the basis for introducing Gated Query Transformation. A proof is also given in A2 for reviewer NuU2. Since $t'$ is learned by autograd, we report the mean values of the zero-initialized learnable transformation parameters $\gamma$ and $\beta$, which are 1.17 and 0.23 on DiDemo (1.09 and 0.36 on Flickr30K), respectively. This demonstrates that scaling textual information is beneficial for training. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for providing the detailed rebuttal. 
The additional experiments and explanations have addressed my concerns, and I would like to increase the score. --- Reply to Comment 1.1.1: Title: Response to Reviewer kJXf Comment: Thank you once again for providing us with your valuable feedback on our paper. We are grateful to learn that our responses have successfully addressed your concerns. Your efforts in reviewing our work, as well as your insightful comments and support, are sincerely appreciated. We sincerely appreciate your willingness to increase the score based on these improvements. Once again, thank you for your valuable feedback and support. --- Rebuttal 2: Title: Thanks for your efforts and look forward to your reply. Comment: We sincerely appreciate your review and the constructive suggestions you have provided once again! Through our discussions and the reviewers' responses, it appears that we have effectively addressed the major concerns raised by everyone, and we have received a higher score from you. This outcome has greatly benefited us, and we would like to express our gratitude to all of you for your support! &emsp; After carefully reviewing your feedback once again, we have summarized the key points and will implement these modifications in the next version: * Enhance the clarity of our novelty in writing and provide a comprehensive explanation of the motivations behind crucial modules. * Supplement additional ablation experiments to further validate the effectiveness of our method and its important modules. * Refine the paper's details, such as the writing flow and the interpretation of figures and tables, to reduce confusion. &emsp; We firmly believe that our framework (AURORA) for parameter-efficient transfer of multimodal models plays a significant role in advancing the community, and we are committed to making our complete code and training details publicly available. 
Moreover, we are eager to engage in further discussions with you to enhance our understanding of the domain and further improve the quality of the paper. &emsp; We are always willing to address any of your further concerns.
Summary: This paper proposes a parameter-efficient adaptation technique, Aurora, for multi-modal models. In particular, the proposed method motivates its design by suggesting that the original pre-trained weight matrices have redundancies due to their high-dimensional nature, and the downstream task often requires only a low-dimensional reparameterization. Aurora supplements the original weight matrices with a series of rank-one tensors which are learned only during the fine-tuning process. In addition, to enhance the modality alignment between the vision and text representations, Aurora utilizes an informative context enhancement module and a gated query transformation module, which fuse and explicitly relate the textual representations with the multi-modal fusion representations in the cross-attention block of BLIP. Extensive experiments over various benchmarks show the effectiveness of Aurora in comparison with fine-tuning and parameter-efficient adaptation approaches. Strengths: (1) The idea of decomposing learnable pre-trained matrices into small rank-one tensors is encouraging, as it explicitly allows adapting only the necessary number of parameters for efficient and effective adaptation. (2) The paper has performed extensive evaluations with proper ablation studies, which justify the design choices. (3) The proposed method performs favorably with a very small number of learnable parameters. Weaknesses: (1) The overall paper presentation style is very confusing, especially the main methodology section. There are no preliminaries on the baseline architecture on which the proposed solution has been built. It is very difficult for the readers to grasp the contents without knowing the main model architecture. For example, in lines 161-162, the authors mention the cross-attention module, but unfortunately no prior information about that block is provided anywhere in the manuscript. 
Also, the writing is not clear and I found it difficult to understand the manuscript. (2) How is the proposed solution considered a prompt-learning variant? If I understood correctly, the additional learnable parameters are utilized as part of the model, and the input tensor has to be multiplied with them. It is not the case that the learnable parameters are part of the inputs, which is the core definition of prompt learning. (3) The proposed multi-modal alignment module seems to be heavily designed for the BLIP multi-modal model. It is not clear if these components could be utilized in other multi-modal models. It would be good to see the generalization of the proposed approach to other recent VL models. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have discussed the limitations, and societal impacts are highlighted. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your recognition of our design idea and comprehensive experiments. We will continue to improve based on your feedback, and we believe that our Aurora has a very positive impact on promoting the efficient transfer of large multimodal models in the community. > **Confusing presentation.** Thanks for the comment on our work; we greatly appreciate your valuable feedback. We apologize for any confusion caused by the lack of preliminaries on the baseline architecture used to build our proposed solution. Due to the page limit, we put the details of the whole architecture of the pretrained model BLIP in Part D of the Appendix. Following your suggestion to introduce more preliminary knowledge of our base model, we will revise the paper to include a clearer and more detailed explanation of the network architecture and its role in our proposed solution. We will work to improve the clarity of our writing to make it more accessible to readers in the final version. > **Wrong use of prompt learning.** Thanks for the comment on our work; we greatly appreciate your valuable feedback. Actually, prompt tuning is one typical way of parameter-efficient transfer learning (PETL). Our work follows the existing PETL roadmap by decomposing the pre-trained networks into learnable parameters. These learnable parameters are multiplied with the input as "soft prompts" for the pre-trained parameters to implement PETL. Therefore, our Aurora is quite different from typical prompt learning, where the prompts are part of the inputs. In the final version, we will replace "prompt" with "multimodal parameter-efficient transfer learning based on mode approximation" to better reflect the essence of our method. > **Generalization results.** Following your suggestion to further validate the generalization ability, we extend our Aurora to a more recent state-of-the-art vision-language model, InstructBLIP [1]. 
We apply Aurora to the Q-former architecture in InstructBLIP, and the results are shown below.

| Method | OKVQA | A-OKVQA | COCO Caption |
|----------|:--------:|:-----:|:----------:|
| InstructBLIP+FTE (188M) | 54.9 | 55.9 | 68.0 |
| InstructBLIP+LoRA (11M) | 53.3 | 52.8 | 67.4 |
| InstructBLIP+UniAdapter (18M) | 53.2 | 53.5 | 67.2 |
| InstructBLIP+Aurora (0.5M) | 53.7 | 54.1 | 67.6 |

[1] Liu H, Li C, Wu Q, et al. Visual instruction tuning[J]. arXiv preprint arXiv:2304.08485, 2023. --- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for providing a rebuttal response; it has largely addressed my concerns. Yes, the final version of the manuscript should replace the "prompt learning" phrase to avoid any confusion in the research community. Based on the rebuttal response, I will keep my current score. --- Reply to Comment 1.1.1: Title: Response to Reviewer ME39 Comment: Thank you for your response. We greatly appreciate your acknowledgment that our rebuttal has effectively addressed your concerns. We have duly noted your suggestion regarding the replacement of the term "prompt learning" in the final version of the manuscript. We will ensure that this change is made to avoid any potential confusion within the research community. Once again, we would like to express our gratitude for your valuable feedback and for contributing to the improvement of our manuscript. --- Rebuttal 2: Title: Thanks for your efforts and look forward to your reply. Comment: We sincerely appreciate your review and the constructive suggestions you have provided once again! Through our discussions and the reviewers' responses, it appears that we have effectively addressed the major concerns raised by everyone, and we have received a higher score from Reviewer kJXf. This outcome has greatly benefited us, and we would like to express our gratitude to all of you for your support! 
&emsp; After carefully reviewing your feedback once again, we have summarized the key points and will implement these modifications in the next version: * Add a subsection, 'Revisiting Backbone', to introduce the base model. * Rectify the use of "prompts" and replace it with "parameter-efficient transfer learning method" for accurate representation. * Conduct experiments on two additional base models to validate the generalization of our approach. &emsp; We firmly believe that our framework (AURORA) for parameter-efficient transfer of multimodal models plays a significant role in advancing the community, and we are committed to making our complete code and training details publicly available. Moreover, we are eager to engage in further discussions with you to enhance our understanding of the domain and further improve the quality of the paper. &emsp; We would deeply appreciate it if you could reconsider the score accordingly. We are always willing to address any of your further concerns.
Summary: The paper addresses the problems of (i) transfer learning and (ii) reducing the multimodality gap in multimodal models. To address (i) it presents a technique which can be viewed as a generalization of LoRA; instead of independently representing each matrix with a low-rank factorization, all the matrices of the transformer are stacked together and represented as a low-rank tensor. To address (ii) two techniques are presented: one which aims to improve the representations by allowing information exchange between different examples in the batch (Informative Context Enhancement) and another which aims to prevent the loss of text information for deep models (Gated Query Transformation). Ablation studies are performed to justify the different design choices. Strengths: 1. Mode approximation (generalizing LoRA by stacking the matrices of all the transformer layers and using CP decomposition to represent the stack) is a nice idea and is experimentally shown to be a parameter-efficient way (beats LoRA) of adapting frozen models to new domains 2. Thorough ablation studies are performed to demonstrate the impact and justify the existence of each presented component in the final model. Weaknesses: 1. Informative Context Enhancement seems to allow information exchange between different examples in the batch. That makes it dependent on the batch size, but the impact of changing it is not evaluated. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Why are some methods present in ‘Methods with frozen backbone’ in Table 2 omitted from the same section in Table 3? (e.g. LoRA) 2. Is |B| in the ‘Informative Context Enhancement’ section referring to the number of tokens in the entire batch? Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
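The mode approximation this review describes (stacking the transformer's weight updates into a 3-way tensor and representing it by a rank-R CP decomposition) can be sketched numerically. This is an illustrative reconstruction under assumed shapes and factor names (U, V shared across layers, per-layer mode matrix S, coefficients lambda), not the paper's exact parameterization.

```python
import numpy as np

def cp_reconstruct(U: np.ndarray, V: np.ndarray, S: np.ndarray, lam: np.ndarray) -> np.ndarray:
    # Rebuild the stacked weight-update tensor from rank-one factors:
    #   DeltaW[i] = sum_r lam[r] * S[i, r] * outer(U[:, r], V[:, r])
    # U, V: (d, R) shared factor matrices; S: (n_layers, R) layer-mode
    # factors; lam: (R,) CP coefficients. Returns (n_layers, d, d).
    return np.einsum('r,ir,dr,er->ide', lam, S, U, V)

d, R, n = 6, 3, 4
rng = np.random.default_rng(0)
U, V = rng.normal(size=(d, R)), rng.normal(size=(d, R))
S, lam = rng.normal(size=(n, R)), rng.normal(size=R)
delta = cp_reconstruct(U, V, S, lam)   # one rank-R tensor covers all n layers
```

The point of the stacking is visible in the shapes: the d-by-R factors U and V are shared across every layer, so each additional layer costs only one row of S rather than a fresh low-rank pair as in LoRA.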
Rebuttal 1: Rebuttal: Thank you for your highly accurate summary of our work and recognition of our comprehensive experiments. We will continue to improve based on your feedback, and we believe that our Aurora has a very positive impact on promoting the efficient transfer of large multimodal models in the community. >**Impact of the batch size.** Thanks for the comment on our work; we greatly appreciate your valuable feedback. We evaluate different batch sizes to investigate the effectiveness of Informative Context Enhancement. Some results are shown below: > >| DataSet | Batch Size = 4 | Batch Size = 8 | Batch Size = 16 | Batch Size = 32 | >| -------------------------------- | :------------: | :------------: | :-------------: | :-------------: | >| MSCOCO (I$\rightarrow$T, R@1) | 80.4 | 80.6 | 80.7 | 80.8 | >| Flickr30K (I$\rightarrow$T, R@1) | 96.9 | 97.1 | 97.2 | 97.2 | >| DiDemo (R@1) | 53.1 | 53.1 | 53.2 | 53.4 | >| MSRVTT-QA | 44.4 | 44.7 | 44.8 | 45.0 | >**Lack of results.** Thanks for the comment on our work; we greatly appreciate your valuable feedback. We apologize for the omission of the comparison results for LoRA in Table 3. We have added them, and the complete results for Table 3 are shown below:
>
>| Method | #Tunable | test-dev | test-std | Method | #Tunable | test acc |
>| :-- | :--: | :--: | :--: | :-- | :--: | :--: |
>| *Methods with frozen backbone* | | | | | | |
>| LoRA (r=32) | 10.6M | 74.11 | 74.24 | LoRA (r=32) | 10.6M | 44.3 |
>| UniAdapter (r=512) | 18.8M | 75.44 | 75.56 | UniAdapter (r=512) | 18.8M | 44.7 |
>| Aurora (r=64) | 0.1M | 77.69 | 77.87 | Aurora (r=64) | 0.1M | 44.8 |

>**Symbol not clear.** Thanks for the comment on our work; we greatly appreciate your valuable feedback. $| \mathcal{B}|$ is the number of image-text pairs in the entire batch, and "feature" means the [cls] token of each image and text. 
In other words, each [cls] token represents the global information of the image and text. --- Rebuttal Comment 1.1: Comment: Thank you for the thorough clarification of all my questions. --- Reply to Comment 1.1.1: Title: Response to Reviewer 37bN Comment: Thank you for providing a comprehensive clarification of all my queries. Your detailed responses have been immensely helpful in enhancing my understanding of the topic. I genuinely appreciate the time and effort you have dedicated to addressing my concerns. --- Rebuttal 2: Title: Thanks for your efforts and look forward to your reply. Comment: We sincerely appreciate your review and the constructive suggestions you have provided once again! Through our discussions and the reviewers' responses, it appears that we have effectively addressed the major concerns raised by everyone, and we have received a higher score from Reviewer kJXf. This outcome has greatly benefited us, and we would like to express our gratitude to all of you for your support! &emsp; After carefully reviewing your feedback once again, we have summarized the key points and will implement these modifications in the next version: * Supplement more details regarding the ablation experiments for validation. * Provide further experimental and writing details to better elucidate AURORA. &emsp; We firmly believe that our framework (AURORA) for parameter-efficient transfer of multimodal models plays a significant role in advancing the community, and we are committed to making our complete code and training details publicly available. Moreover, we are eager to engage in further discussions with you to enhance our understanding of the domain and further improve the quality of the paper. &emsp; We would deeply appreciate it if you could reconsider the score accordingly. We are always willing to address any of your further concerns.
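One plausible reading of the Informative Context Enhancement discussed in this thread (batch-level exchange of global [cls] features) can be sketched as attention over the batch's [cls] tokens. This is a hypothetical sketch, not the paper's exact formulation: the function name, top-k selection, and additive mixing are assumptions made for illustration.

```python
import numpy as np

def informative_context_enhancement(cls_tokens: np.ndarray, query: np.ndarray, k: int = 2) -> np.ndarray:
    # cls_tokens: (|B|, d) global [cls] features of the image-text pairs
    # in the batch; query: (d,) the current sample's feature.
    # Mix the k most similar batch features into the query as context,
    # so the result depends on batch composition (hence on batch size).
    sims = cls_tokens @ query                      # (|B|,) dot-product similarity
    topk = np.argsort(sims)[-k:]                   # indices of the k nearest features
    weights = np.exp(sims[topk] - sims[topk].max())
    weights /= weights.sum()                       # softmax over the k neighbours
    context = weights @ cls_tokens[topk]           # (d,) aggregated context
    return query + context
```

Written this way, the reviewer's batch-size dependence is explicit: a larger batch offers more candidate [cls] tokens, so the aggregated context can only become more informative, consistent with the mildly increasing R@1 numbers in the ablation above.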
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes AURORA, a method which uses mode approximation to boost knowledge transfer in vision-language models and enhances alignment between the modalities in a lightweight, parameter-efficient manner. In addition to this, the paper further proposes a Context Enhancement module and a Gated Query Transformation module to boost the modality fusion in an adaptively controllable way. The proposed method is evaluated on six cross-modal tasks and two zero-shot tasks and is compared to existing PETL methods. Strengths: - The paper seems to be the first which has used mode approximation in vision-language models to efficiently achieve modality fusion. - The proposed method is novel with respect to existing prompt tuning methods. - The proposed method works by tuning a very small number of parameters, which can save time and computational resources. Weaknesses: - The motivation for using mode approximation is not very clear in the paper. For instance, there could be other methods which can address the redundancies in the attention weights. There is no detailed explanation of the theoretical basis or formal analysis of how mode approximation aids in prompt learning. Adding a brief theoretical background could provide more insight into the method's underlying principles. - The authors haven't provided any solid proof for the argument regarding the loss of textual information in multi-modality fusion branches, which essentially forms the basis for introducing Gated Query Transformation. - The authors should have compared their method with previous prompt learning methods [1, 2] on zero-shot out-of-distribution classification tasks. [1] M. U. Khattak, H. Rasheed, M. Maaz, S. Khan, and F. S. Khan, “MaPLe: Multi-modal prompt learning”. IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023 [2] K. Zhou, J. Yang, C. C. Loy, and Z. Liu, “Learning to prompt for vision-language models”. 
International Journal of Computer Vision (IJCV), 2022 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the method be applied to a broader range of vision-language tasks, and how well does it adapt to different modalities or data distributions? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Please have a look at Questions and Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your professional comments. We will continue to improve based on your feedback, and we believe that our Aurora has a very positive impact on promoting the efficient transfer of large multimodal models in the community. > **Motivation not clear.** The motivation for our mode approximation module is threefold. **First**, it can approximate the weights of a large-scale model with a small number of learnable parameters. **Second**, it is based on CP decomposition, which has good theoretical convergence properties. We agree with you that theoretical background could provide more insight into the method; therefore, we have provided detailed concept definitions in Appendix Part A. **Third**, we provide detailed theoretical analysis and derivation in Appendix Part F to validate that the proposed method achieves good convergence.\ We acknowledge that there are other methods to address redundancy in attention weights. However, Aurora employs mode approximation to generate lightweight prompts, which is an effective approach that achieves good performance with very few parameters. We believe the advantage of this approach is that it enables efficient parameter transfer with multimodal prompt tuning under extremely few parameters, reducing computational and storage costs while maintaining performance. The advantages of Aurora are further demonstrated in the experimental results. > **No solid proof.** We provide the motivation for designing the Gated Query Transformation module below, namely that the information content of the textual query decreases as the layer depth of the cross-attention increases.\ Specifically, in the cross-attention mechanism, the encoding vectors are used to calculate the attention distribution for generating the output vectors. 
And the query vectors are used to calculate the weights of the attention distribution, where the query vectors come from the textual tokens under the multimodal setting. \ Suppose we have $L$ layers in the cross-attention mechanism, with the **encoding vector (from the visual modality) for layer $l$ denoted as $e_l$** and the **query vector (from the textual modality) for layer $l$ denoted as $q_l$**. We aim to show that as the layer number $L$ increases, the textual information content of the query vector $q_L$ becomes lower and lower.\ We can use the concept of entropy to measure the information content of the query vector. We view the query vector $q_L$ as a random variable that has all possible encoding vectors $e_l$ as its possible values. To compute the entropy of the query vector $q_L$, we need the probability distribution $p(q_L)$. Let $p(q_L|e_1, \ldots, e_L)$ be the distribution of the query vector $q_L$ given all $L$ encoding vectors. By Bayes' theorem,\ $$ p(q_L|e_1, \ldots, e_L) = \frac{p(e_1, \ldots, e_L|q_L)\, p(q_L)}{p(e_1, \ldots, e_L)}. $$\ Since $p(e_1, \ldots, e_L)$ does not depend on $q_L$, we can treat it as a constant and get\ $$ p(q_L|e_1, \ldots, e_L) \propto p(e_1, \ldots, e_L|q_L)\, p(q_L). $$\ Then, we can express the entropy of the query vector $q_L$ as\ $$ H(q_L) = -\sum_{e_1, \ldots, e_L} p(e_1, \ldots, e_L) \sum_{q_L} p(q_L|e_1, \ldots, e_L) \log p(q_L|e_1, \ldots, e_L), $$\ which, after substituting the proportionality above, splits into two summation terms:\ $$ H(q_L) = -\sum_{q_L} p(q_L) \log p(q_L) \;+\; \sum_{e_1, \ldots, e_L} p(e_1, \ldots, e_L) \sum_{q_L} p(e_1, \ldots, e_L|q_L) \log p(e_1, \ldots, e_L|q_L). $$ The first summation term represents the entropy of the marginal distribution of $q_L$, which is independent of $e_1, \ldots, e_L$. 
Therefore, we can treat it as a constant and rewrite the second summation term as $ -\sum_{e_1, \ldots, e_L} p(e_1, \ldots, e_L)\, H(e_1, \ldots, e_L|q_L) $.\ Here, $H(e_1, \ldots, e_L|q_L)$ is the conditional entropy of $e_1, \ldots, e_L$ given $q_L$. It represents the uncertainty of all possible encoding vectors $e_1, \ldots, e_L$ given the query vector $q_L$. Since the conditional entropy increases with the uncertainty of the conditioning variable, we can infer that the information content of $q_L$ decreases as $L$ increases, because the uncertainty of the encoding vectors given $q_L$ increases.\ In summary, as the layer number $L$ increases in the Transformer, the information content of the query vector $q_L$ decreases, indicating that the query vector becomes less informative about the encoding vectors in the subsequent layers. >**Lack of comparison.** Our paper mainly addresses typical multi-modal tasks based on the BLIP architecture, including cross-modal retrieval and VQA on both image and video modalities. We will add full comparisons with the work you mentioned in the final version. During the rebuttal period, we would first like to add a subset of the results, shown below. | Datasets | ImageNet | | Caltech101 | | StanfordCars | | |----------|:--------:|:-----:|:----------:|:-----:|:------------:|:-----:| | Method | base | novel | base | novel | base | novel | | MaPLe | 76.66 | 70.54 | 97.74 | 94.36 | 72.94 | 74.00 | | Aurora | 76.59 | 70.75 | 98.13 | 94.52 | 73.75 | 74.28 | > **Generalization results.** Following your suggestion to further validate the generalization ability, we extend our Aurora to a more recent state-of-the-art vision-language model, InstructBLIP [1]. We apply Aurora to the Q-former architecture in InstructBLIP, and the results are shown below. 
| Method | OKVQA | A-OKVQA | COCO Caption |
|----------|:--------:|:-----:|:----------:|
| InstructBLIP+FFT (188M) | 54.9 | 55.9 | 68.0 |
| InstructBLIP+LoRA (11M) | 53.3 | 52.8 | 67.4 |
| InstructBLIP+UniAdapter (18M) | 53.0 | 53.5 | 67.2 |
| InstructBLIP+Aurora (0.5M) | 53.7 | 54.1 | 67.6 |

[1] Liu H, Li C, Wu Q, et al. Visual instruction tuning[J]. arXiv preprint arXiv:2304.08485, 2023. --- Rebuttal Comment 1.1: Comment: Thank you for providing a detailed response to my queries. 1. I am satisfied with the motivation statement you have provided. Indeed, the theoretical proof is in agreement with your motivation. 2. Regarding Gated Query Transformation, the entropy explanation seems fair and has a strong link with the motivation behind introducing the Gated Query Transformation module. 3. For the zero-shot out-of-distribution comparison, the results of AURORA seem to be close to MaPLe. However, given the scope of AURORA, the comparisons seem acceptable. 4. Given the very small number of parameters of AURORA, the generalization results are interesting. AURORA may prove to be a good low-cost generalizable model. --- Reply to Comment 1.1.1: Title: Response to NuU2 Comment: Thank you once again for providing us with your valuable feedback on our paper. We are grateful to learn that our responses have successfully addressed your concerns. Your efforts in reviewing our work, as well as your insightful comments and support, are sincerely appreciated. We are pleased to receive your positive recognition of our experimental results. In the future, we will continue to analyze and enhance the performance on more base models (e.g., MaPLe) to further amplify the advantages of our approach across a wider range of tasks. Moreover, we will make the code for our paper publicly available to facilitate research by a broader audience and foster advancements in the field. We really appreciate your efforts in reviewing our paper, as well as your insightful comments and support. 
--- Rebuttal 2: Title: Thanks for your efforts and look forward to your reply. Comment: We sincerely appreciate your review and the constructive suggestions you have provided! Through the discussions and the reviewers' responses, it appears that we have effectively addressed the major concerns raised, and we have received a higher score from Reviewer kJXf. This outcome has greatly benefited us, and we would like to express our gratitude to all of you for your support! After carefully reviewing your feedback once again, we have summarized the key points and will implement the following modifications in the next version:
* Highlight the motivation behind our method and its key modules more prominently.
* Provide additional detailed theoretical support in the appendix.
* Conduct experiments on two additional base models to validate the generalization advantages of our approach.

We firmly believe that our framework (AURORA) for parameter-efficient transfer of multimodal models plays a significant role in advancing the community, and we are committed to making our complete code and training details publicly available. Moreover, we are eager to engage in further discussions with you to deepen our understanding of the domain and further improve the quality of the paper. We would deeply appreciate it if you could reconsider the score accordingly, and we are always willing to address any further concerns.
Detecting Any Human-Object Interaction Relationship: Universal HOI Detector with Spatial Prompt Learning on Foundation Models
Accept (poster)
Summary: This paper proposes a framework for human-object interaction (HOI) detection that leverages vision-language foundation models and large language models to achieve universal and flexible recognition of complex interactions in images. The framework, named UniHOI, consists of three main components: a visual HOI detector that extracts three levels of features from images, an HO prompt-guided decoder that queries the foundation model for high-level relation representations associated with human-object pairs, and a knowledge retrieval module that uses a large language model to generate descriptive texts for interaction categories. The framework supports both supervised and zero-shot settings, and can handle any textual input for open-category interaction detection. The paper demonstrates the effectiveness and superiority of UniHOI over existing methods on two public benchmarks, HICO-DET and V-COCO, as well as in in-the-wild scenarios.
Strengths:
- The performance is quite impressive.
- Leveraging LLMs and foundation models to augment CV tasks is the future, and this work attempts to use them simultaneously.
- The authors promise to release the code to ensure reproducibility.
Weaknesses: 1. There is no ablative study of each component (perhaps only one component, i.e., the HO prompt decoder) under the closed-set setup. 2. This work uses BLIP2 with ViT-L while existing work like GEN-VLKT typically uses CLIP with ViT-B. It is evident that the former is much more powerful. Could you provide the performance on HICO-DET using HO prompt-based learning with CLIP ViT-B under the closed-set setup? The improvement may be brought by the more advanced large visual-language pre-trained model. 3. How about the inference speed? As shown in Table 8, it is **three** times longer than GEN-VLKT's. Note that existing work like GEN-VLKT does not involve the computation of large visual-language pre-trained models at the inference stage, since all of the features of objects or verbs are pre-computed.
However, in this work, the feature for prompting must be computed for each image individually. Considering the extremely large backbone (e.g., ViT-L), there would be a heavy burden at inference. 4. The core contribution of this work is actually the HO prompt-based decoder. However, there is nothing particularly novel about it, i.e., it directly uses spatial locations to obtain output features from the foundation model. 5. Knowledge retrieval is solely used in the open-world setup; is it possible for it to augment the closed-world setup? Overall, this is technically solid work, and LLM-based knowledge retrieval is interesting. But the comparison is unfair (i.e., a much more powerful visual-language pre-trained model is used), the inference time is unacceptable, and the novelty of the prompt-based decoder is limited. I will be very happy to update my score if the authors can address my concerns above. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please refer to weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: There is no discussion on limitations or failure cases. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer X6bU, First and foremost, we'd like to express our gratitude for your comprehensive review and insightful comments. We acknowledge the concerns you've raised and will address them point by point:

**Ablations**: We appreciate the importance of ablation studies to establish the individual contribution of each component. We would like to kindly draw your attention to Table 5, which presents our ablation experiments conducted under the closed-set setup, and to Table 6, which depicts results from the open-set setup.

**Model Comparison with GEN-VLKT using CLIP ViT-B**: Thank you for the keen observation regarding the comparative power of BLIP2 with ViT-L and CLIP with ViT-B. We wholeheartedly agree on the importance of a fair comparison. In line with this, we have conducted experiments using CLIP ViT-B not just in the closed-set setup, but also in the open-set setup. On the V-COCO dataset:

|Method|${AP}^{1}_{role}$|${AP}^{2}_{role}$|
|:-:|:-:|:-:|
|GEN-VLKT_s|62.41|64.46|
|${UniHOI}_s$ (w/ CLIP)|63.79|65.91|
|${UniHOI}_s$ (w/ BLIP2)|65.58|68.27|
|GEN-VLKT_m|63.28|65.58|
|${UniHOI}_m$ (w/ CLIP)|64.47|67.83|
|${UniHOI}_m$ (w/ BLIP2)|67.95|70.61|
|GEN-VLKT_l|63.58|65.93|
|${UniHOI}_l$ (w/ CLIP)|64.86|67.98|
|${UniHOI}_l$ (w/ BLIP2)|68.05|70.82|

The following table shows the results of UniHOI equipped with different foundation models on the HICO-DET dataset:

|||Default|||Known Obj.||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Method|Full|Rare|Non-rare|Full|Rare|Non-rare|
|GEN-VLKT_s|33.75|29.25|35.10|36.78|32.75|37.99|
|${UniHOI}_s$ (w/ CLIP)|35.92|34.39|36.26|38.84|37.19|40.18|
|${UniHOI}_s$ (w/ BLIP2)|42.73|42.03|42.93|43.58|45.27|43.08|
|GEN-VLKT_m|34.78|31.50|35.77|38.07|34.94|39.01|
|${UniHOI}_m$ (w/ CLIP)|36.71|35.42|36.91|39.16|39.23|40.56|
|${UniHOI}_m$ (w/ BLIP2)|43.18|42.98|43.57|44.32|45.96|43.89|
|GEN-VLKT_l|34.96|31.18|36.08|38.22|34.36|39.37|
|${UniHOI}_l$ (w/ CLIP)|36.84|35.71|37.05|39.28|39.31|40.79|
|${UniHOI}_l$ (w/ BLIP2)|43.57|43.79|44.01|44.78|46.48|44.32|

Additionally, the results under the zero-shot setting are as follows:

|Method|Type|Unseen|Seen|Full|
|:-:|:-:|:-:|:-:|:-:|
|GEN-VLKT_s|RF-UC|21.36|32.91|30.56|
|${UniHOI}_s$ (w/ CLIP)|RF-UC|23.41|33.45|31.97|
|${UniHOI}_s$ (w/ BLIP2)|RF-UC|28.68|33.16|32.27|
|GEN-VLKT_s|NF-UC|25.05|23.38|23.71|
|${UniHOI}_s$ (w/ CLIP)|NF-UC|26.89|25.57|25.96|
|${UniHOI}_s$ (w/ BLIP2)|NF-UC|28.45|32.63|31.79|
|GEN-VLKT_s|UO|10.51|28.92|25.63|
|${UniHOI}_s$ (w/ CLIP)|UO|13.24|30.27|27.52|
|${UniHOI}_s$ (w/ BLIP2)|UO|19.72|34.76|31.56|
|GEN-VLKT_s|UV|20.96|30.23|28.74|
|${UniHOI}_s$ (w/ CLIP)|UV|22.18|33.29|30.87|
|${UniHOI}_s$ (w/ BLIP2)|UV|26.05|36.78|34.68|

Our experiments demonstrate that **regardless of whether we use CLIP or BLIP2, our approach significantly outperforms the current state-of-the-art methods**. In addition, one of the reasons we chose BLIP2 is that **the training text for CLIP is too simple**, such as "an image of an apple"; **we are concerned that the knowledge in CLIP may not be sufficient for challenging HOI detection**. To explore the potential of the latest large models, the default UniHOI therefore uses BLIP2, which is pre-trained with richer text.

**Inference Speed**: Thank you for highlighting the computational aspects of UniHOI. In response:
- We've mitigated computational demands by adjusting the image resolution for the VL foundation model, striking a balance between efficiency and performance.
- For practical applications, we're exploring techniques like quantization to optimize inference speed without sacrificing results.
- Addressing the challenges posed by large models remains an ongoing effort, with potential solutions in architectural refinements or improved knowledge transfer methods.

**Novelty of HO Prompt-based Decoder**: Thank you for drawing attention to the novelty.
While the core concept may appear direct, the effectiveness lies in its integration and synergy with other components. Here's a deeper dive into the innovative aspects of our approach:
- UniHOI is the pioneering approach using prompt learning for VL foundation models in HOI detection, surpassing state-of-the-art methods in both supervised and zero-shot settings. This sets a more reliable benchmark for harmonizing specialized HOI detectors with large models.
- UniHOI efficiently extracts spatial representations of humans and objects from professional HOI detectors. This enables us to conveniently and accurately capture instance-level features from large models. Previous approaches (e.g., GEN-VLKT, HOICLIP) driven by large models predominantly aligned only with image-level features, overlooking this critical facet.
- UniHOI unleashes the potential of VL foundation models and LLMs (e.g., GPT-4), potentially inspiring future endeavors to explore more universal and intelligent HOI detectors.
- The elegance of our core concept is in its simplicity and directness, making it adaptable across various VL foundation models.

**Knowledge Retrieval in Closed-World Setup**: Thank you for the insightful suggestion. In response, we have added experiments on Knowledge Retrieval (KR) in the closed-world setup. The results are as follows:

|Method|${AP}^{1}_{role}$|${AP}^{2}_{role}$|
|:-:|:-:|:-:|
|GEN-VLKT_s|62.41|64.46|
|${UniHOI}_s$|65.58|68.27|
|${UniHOI}_s$+KR|66.12|68.73|
|GEN-VLKT_m|63.28|65.58|
|${UniHOI}_m$|67.95|70.61|
|${UniHOI}_m$+KR|68.59|71.13|
|GEN-VLKT_l|63.47|65.93|
|${UniHOI}_l$|68.05|70.82|
|${UniHOI}_l$+KR|69.39|71.24|

The results indicate that **knowledge retrieval can also promote model performance in closed-world scenarios**. This is because more detailed textual description information can promote a deeper understanding of interactions. We genuinely appreciate your time and expertise. If you have any further questions, please let us know.
We’d be very happy to do anything we can that would be helpful in the time remaining! Thanks! --- Rebuttal Comment 1.1: Title: Further Discussion with Reviewer X6bU Comment: Dear Reviewer X6bU, We sincerely appreciate the time you devoted to reviewing our manuscript and the invaluable feedback you provided. We have diligently addressed your comments and provided corresponding responses and results. We believe that these responses adequately address the concerns you raised. We would be grateful for an opportunity to discuss whether your reservations have been resolved. Should there be any aspect of our work that remains unclear, please do not hesitate to inform us. Once again, thank you for your constructive insights. Warm regards,
Summary: In view of the limited scalability and the suboptimal zero-shot performance of current HOI detection methods, the authors propose a novel method for HOI detection based on VL foundation models. With in-depth analysis and adaptation of HOI detectors, the foundation model is effectively adopted to reason about HOI relationships based on human/object tokens. Furthermore, an LLM is adopted as a knowledge base to diversify HOI descriptions, enabling open-vocabulary HOI detection. With the VL foundation model and LLM, extraordinary HOI detection performance is achieved in both the conventional and zero-shot settings.
Strengths: The proposed HO prompt-guided decoder is brilliant in addressing the feature alignment issue. Adopting GPT as a knowledge base is an interesting idea to incorporate the recent progress in LLMs with HOI detection. The performance is amazing, with significant margins over previous SOTAs, especially in the zero-shot setting. Extensive experiments are conducted, providing valuable insights on the effect of VL foundation models in HOI detection. The in-the-wild HOI detection illustration is quite impressive.
Weaknesses: The comparison between GEN-VLKT and the proposed method is not totally fair to me. It might be better to replace BLIP-2 with CLIP for a fair comparison. Fig. 2 is not very clear; it would help to annotate the encoders in the figure with their corresponding notations. The baseline of the ablation experiments is chosen as GEN-VLKT. However, a major difference between GEN-VLKT and UniHOI is the VL foundation model used, and there are also other differences, e.g., VLKT is not used in UniHOI. It might be better to change the baseline to make the ablation more reasonable.
Technical Quality: 3 good Clarity: 2 fair Questions for Authors:
- The description of the adopted Image Encoder and Instance Decoder is not clear. Is DETR adopted? Or some more advanced detector? Are they frozen during training?
- In L177, $V^f$ and $V^f$ seem to be a typo.
- The performance of UniHOI-l in Tab. 1-2 is questionable. Please check whether there are typos.
- In the ablation studies, it is still not very clear how the VL foundation model is simply added. Is it a simple removal of HO Spatial Prompting? Or replacing the input feature of HO Spatial Prompting with the learnable queries?
- Is it possible to make the HOI detector share the backbone of the VL foundation model? This could be related to the proposed insight that the HOI detector is a three-tier visual feature hierarchy. Ablation studies on this would be preferable.
- The proposed method seems to be applicable to arbitrary HOI detectors (if the answer to Q1 is yes). Is it practical?
- Results in Tab. 5 and Tab. 3 are not consistent. The result without Knowledge Retrieval is reported in Tab. 3.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not well-discussed in the paper. I would like to see more discussion on extending the use of the LLM further than a static knowledge base. Also please refer to the weakness part. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer djfM, We truly appreciate the detailed feedback. Herein, we provide a detailed response to each of your concerns:

**On Weaknesses**:

a. **Comparison with GEN-VLKT**: In response, we replaced BLIP2 with CLIP and conducted experiments on the V-COCO dataset:

|Method|${AP}^{1}_{role}$|${AP}^{2}_{role}$|
|:-:|:-:|:-:|
|GEN-VLKT_s|62.41|64.46|
|${UniHOI}_s$ (w/ CLIP)|63.79|65.91|
|${UniHOI}_s$ (w/ BLIP2)|65.58|68.27|
|GEN-VLKT_m|63.28|65.58|
|${UniHOI}_m$ (w/ CLIP)|64.47|67.83|
|${UniHOI}_m$ (w/ BLIP2)|67.95|70.61|
|GEN-VLKT_l|63.58|65.93|
|${UniHOI}_l$ (w/ CLIP)|64.86|67.98|
|${UniHOI}_l$ (w/ BLIP2)|68.05|70.82|

The following table shows the results of UniHOI equipped with different foundation models on the HICO-DET dataset:

|||Default|||Known Obj.||
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Method|Full|Rare|Non-rare|Full|Rare|Non-rare|
|GEN-VLKT_s|33.75|29.25|35.10|36.78|32.75|37.99|
|${UniHOI}_s$ (w/ CLIP)|35.92|34.39|36.26|38.84|37.19|40.18|
|${UniHOI}_s$ (w/ BLIP2)|42.73|42.03|42.93|43.58|45.27|43.08|
|GEN-VLKT_m|34.78|31.50|35.77|38.07|34.94|39.01|
|${UniHOI}_m$ (w/ CLIP)|36.71|35.42|36.91|39.16|39.23|40.56|
|${UniHOI}_m$ (w/ BLIP2)|43.18|42.98|43.57|44.32|45.96|43.89|
|GEN-VLKT_l|34.96|31.18|36.08|38.22|34.36|39.37|
|${UniHOI}_l$ (w/ CLIP)|36.84|35.71|37.05|39.28|39.31|40.79|
|${UniHOI}_l$ (w/ BLIP2)|43.57|43.79|44.01|44.78|46.48|44.32|

Additionally, the results under the zero-shot setting are as follows:

|Method|Type|Unseen|Seen|Full|
|:-:|:-:|:-:|:-:|:-:|
|GEN-VLKT_s|RF-UC|21.36|32.91|30.56|
|${UniHOI}_s$ (w/ CLIP)|RF-UC|23.41|33.45|31.97|
|${UniHOI}_s$ (w/ BLIP2)|RF-UC|28.68|33.16|32.27|
|GEN-VLKT_s|NF-UC|25.05|23.38|23.71|
|${UniHOI}_s$ (w/ CLIP)|NF-UC|26.89|25.57|25.96|
|${UniHOI}_s$ (w/ BLIP2)|NF-UC|28.45|32.63|31.79|
|GEN-VLKT_s|UO|10.51|28.92|25.63|
|${UniHOI}_s$ (w/ CLIP)|UO|13.24|30.27|27.52|
|${UniHOI}_s$ (w/ BLIP2)|UO|19.72|34.76|31.56|
|GEN-VLKT_s|UV|20.96|30.23|28.74|
|${UniHOI}_s$ (w/ CLIP)|UV|22.18|33.29|30.87|
|${UniHOI}_s$ (w/ BLIP2)|UV|26.05|36.78|34.68|

These results demonstrate that **regardless of whether we use CLIP or BLIP2, our approach significantly outperforms the current state-of-the-art methods**.

b. **Clarification on Fig. 2**: We will add annotations to all the components in Fig. 2 to ensure clarity.

c. **Baseline in Ablation Experiments**: Based on your suggestion, we adopted a GEN model that does not use CLIP as our baseline and performed ablation experiments by equipping it with BLIP2. The results are as follows:

|Method|${AP}^{1}_{role}$|${AP}^{2}_{role}$|
|:-:|:-:|:-:|
|baseline|61.58|63.59|
|+ BLIP2|62.91|64.83|
|+ HOPD|65.58|68.27|
|+ Knowledge Retrieval|66.74|69.31|

Furthermore, we also conducted ablation experiments using CLIP as the VL foundation model:

|Method|${AP}^{1}_{role}$|${AP}^{2}_{role}$|
|:-:|:-:|:-:|
|baseline|61.58|63.59|
|+ CLIP|62.15|64.28|
|+ HOPD|63.79|65.91|
|+ Knowledge Retrieval|63.98|66.34|

**On Questions**:

a. **Image Encoder & Instance Decoder**: We employed the image encoder and instance decoder from the single-stage HOI detector GEN, allowing their weights to adapt during training, while the weights of the VL foundation model remain frozen.

b. **Typos in L177**: We deeply regret the oversight. These errors will be addressed in the revised manuscript.

c. **Performance of UniHOI-l in Tab. 1-2**: In Table 1, UniHOI-l reports an mAP of 43.79 in the Rare category under the Default Setting, a "12.61" improvement over GEN-VLKT-l. However, we mistakenly wrote "2.61"; we will correct this mistake.

d. **Clarification on Ablation Studies**: We use BLIP2 to produce a representation $X^q$ of dimension [32,768]. Then, a two-layer Transformer Decoder processes a learnable query [64,256] to derive the feature $V^q$ from $X^q$. This is then combined with the output $V^i$ from the Interaction Decoder for our prediction.
However, without UniHOI's spatial feature prompting, the feature alignment between $V^q$ and $V^i$ is suboptimal.

e. **Sharing the Backbone of the VL Foundation Model**: Thanks for the thoughtful suggestion. We employed BLIP2's Image Encoder to derive image features, yielding a representation of [1024,1408]. We linearly interpolated the positional encodings in BLIP2 to handle larger images and used the Instance Decoder with learnable queries of [64,256] to identify interacting HO pairs. **Unfortunately, achieving model convergence proved challenging.** Two primary constraints were observed:
- BLIP2's training lacks emphasis on individual localization, limiting its precision in instance detection.
- BLIP2's size necessitates a smaller input image. Enlarging this to match typical detection models would produce numerous tokens, adding a substantial computational burden during LLM alignment.

f. **Applicability to Arbitrary HOI Detectors**: Our UniHOI is universally adaptable. Quality spatial information from any detector can prompt the VL model to extract advanced features, enhancing versatility across multiple HOI detectors and domains. In the future, we will try to release open-source code that is compatible with classic HOI detectors, such as UPT (Unary-Pairwise Transformer).

g. **Consistency between Tab. 5 and Tab. 3**:
- In Table 3, we solely showcased the results without the utilization of knowledge retrieval, which already achieves substantial performance enhancements.
- In this paper, we treated knowledge retrieval as an ancillary mechanism to elevate the model's performance, specifically aiding in comprehending intricate interactions.
- We commit to sharing the relevant code, weights, and results to support further exploration of knowledge-based techniques.

**On Limitations**: We plan to enhance our conclusion, addressing potential areas such as model compression, knowledge representation, parameter-efficient tuning, and multi-turn dialogues.
We genuinely appreciate your time and expertise. If you have any further questions or concerns, please let us know. Thanks! --- Rebuttal Comment 1.1: Title: Further Discussion with Reviewer djfM Comment: Dear Reviewer djfM, Thank you for the time and effort you dedicated to reviewing our manuscript. We sincerely appreciate your valuable feedback. In response to your comments, we have provided thorough explanations and updated results. We believe that these address the concerns you raised. We are eager to ensure that all of your concerns have been addressed adequately. Should there be any aspect of our work that remains unclear to you, please do not hesitate to inform us. Once again, we extend our gratitude for your constructive feedback. Best,
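As an implementation-level aside, the decoding path described in response (d) of the thread above (frozen BLIP2 features of shape [32, 768] attended by [64, 256] learnable queries through a two-layer Transformer decoder) can be sketched in PyTorch as follows. This is a shape-level sketch only; the class and variable names are hypothetical, and the real UniHOI code may differ.

```python
import torch
import torch.nn as nn

# Shape-level sketch of the HO prompt-guided decoding path described in
# response (d). All module names are hypothetical; only the tensor shapes
# ([32, 768] foundation tokens, [64, 256] learnable queries) come from
# the rebuttal.
class PromptDecoderSketch(nn.Module):
    def __init__(self, found_dim=768, n_queries=64, d_model=256):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, d_model))
        self.proj = nn.Linear(found_dim, d_model)  # align foundation dim to d_model
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)  # two layers

    def forward(self, x_q):            # x_q: [B, 32, 768] frozen BLIP2 features
        mem = self.proj(x_q)           # [B, 32, 256]
        tgt = self.queries.unsqueeze(0).expand(x_q.size(0), -1, -1)  # [B, 64, 256]
        return self.decoder(tgt, mem)  # V^q: [B, 64, 256]

v_q = PromptDecoderSketch()(torch.randn(2, 32, 768))
print(v_q.shape)  # torch.Size([2, 64, 256])
```

The resulting $V^q$ would then be fused with the interaction decoder output $V^i$, which is the step the rebuttal identifies as suboptimal without spatial prompting.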
Summary: This paper investigates the problem of human-object interaction (HOI) detection. The authors introduced UniHOI, a method for universal HOI detection in an open-world setting. They also explored the universal interaction recognition with Vision-Language (VL) foundation models and large language models (LLMs), and proposed HO prompt-based learning for high-level relation extraction aimed at VL foundation models. Experimental results show the effectiveness and significance of the proposed method. Strengths: 1. Overall, the manuscript is well-written and easy to follow. The figures are pretty and can convey the concepts clearly. 2. Pushing the problem of human-object interaction detection toward an open-world setting is of great importance. This is also a trend for most existing computer vision applications. 3. The extensive experimental results show the superiority of the proposed UniHOI method. Weaknesses: 1. In Table 4, the results from the third row come from "ConsNet [31]" but not "ATL [15]", according to the paper of "GEN-VLKT". 2. The conclusion part lacks objective reflections on the deficiencies of this study and future prospects for improvements. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please refer to the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer WF8N, We greatly appreciate your thoughtful review and the time you have taken to provide insights and feedback on our submission. We are encouraged by the positive aspects you've highlighted and grateful for the critical points you've raised. Here, we address the weaknesses mentioned and hope to provide clarifications that would enhance the clarity of our contribution. **Reference to Table 4 Results** We apologize for the oversight. You are right. The result in the third row of Table 4 should reference "ConsNet [31]" and not "ATL [15]". We have duly noted this mistake and will rectify it in the final version of our manuscript. We understand that accuracy in presenting prior works is paramount, and we thank you for pointing out this error. **Reflections on the Study's Deficiencies and Future Directions** Thank you for your constructive feedback. In light of your feedback, we will expand our conclusion section to incorporate the following points: 1. **Deficiencies**: We acknowledge that while our UniHOI method has shown promise in various experimental settings, it also faces the issue of ***substantial computational demands*** when driving large models. 2. **Future Prospects**: Building on our current findings, there are several promising directions: (1) At present, the focus of our UniHOI is mainly on ***Prompt Learning***. In the future, we will consider paradigms such as ***Adapter*** and ***Parameter-Efficient Fine-Tuning*** to further enhance the generic Visual-Language (VL) foundation models' capabilities in the specialized Human-Object Interaction (HOI) domain. (2) We will explore better ***knowledge transfer*** methods in the future to more efficiently implement the driving or collaboration of specialized HOI detectors by VL base models, thereby reducing the inference costs of large models. In summary, we sincerely thank you for your valuable comments. 
We believe that by addressing these points, our work will be significantly improved and provide a solid contribution to the community. We hope our responses provide clarity, and we remain open to further feedback. Thanks! --- Rebuttal Comment 1.1: Comment: Thanks for the detailed feedback from the authors. I look forward to seeing some PEFT methods applied to the HOI domain in the near future.
Summary: The paper addresses the human-object interaction (HOI) detection task. The authors propose a new method named UniHOI, achieved by prompting BLIP2 using human-object paired features as well as linguistic semantics generated by an LLM. The proposed UniHOI demonstrates significant performance gains on HICO-DET and V-COCO in both fully supervised and zero-shot scenarios.
Strengths: If BLIP2 were a foundation model pre-trained on a lower-level task than HOI detection (e.g., classification, object detection), I would say this paper is an excellent work in terms of both model design and performance. Actually, I think this is the first work to transfer a foundation model into the domain of HOI detection, which may open doors to further exploration of prompt learning for HOI. However, I cannot accept the choice of using BLIP2 as the foundation model for HOI detection. I will explain the reasons for this in the weaknesses below.
Weaknesses: 1. Prompt engineering aims to transfer a foundation model pretrained on **lower-level tasks** to **higher-level** tasks. At a minimum, **the task used for pre-training needs to be decoupled from the downstream task**. Otherwise, transferring a model pretrained on higher-level tasks to low-level tasks is not prompting, but fine-tuning. BLIP2 is a powerful model pretrained for VQA, image captioning, and similar tasks. However, as widely acknowledged, HOI detection is a sub-problem of these detailed scene understanding tasks. Namely, BLIP2 itself is a powerful HOI detector (I have tried using BLIP2 directly for HOI detection, and the performance is impressive). From this point of view, BLIP2 cannot be used as a foundation model for HOI detection, since it has a great capability for HOI detection by itself and is capable of even higher-level tasks. Therefore, this paper is more like a work that fine-tunes BLIP2 on HICO-DET and V-COCO, at the cost of giving up the ability to use BLIP2 for other tasks, e.g., captioning. 2.
While direct use of BLIP2 for HOI detection may fail to achieve performance as impressive as that of UniHOI on HICO-DET and V-COCO, **BLIP2 has already achieved the goal of HOI detection, i.e., detailed scene understanding**. Therefore, is it a case of putting the cart before the horse to use BLIP2 for HOI detection only? 3. In a real open-world scenario, I think BLIP2 is more capable of HOI detection compared to UniHOI. After all, the zero-shot HOI detection capability of UniHOI is mainly inherited from BLIP2. 4. The performance of UniHOI on HICO-DET and V-COCO is impressive. I think this is the first work that achieves an mAP larger than 40% on HICO-DET. However, the comparison is not so fair. As aforementioned, BLIP2 itself is a powerful HOI detector, which has been pre-trained with a large amount of **interaction-specific** data. Note that, **for a fair comparison, a foundation model should not be pre-trained using data with annotations related to the downstream task**. Otherwise, the authors need to report the results without using these extra data. For instance, suppose we first collect all data involving HOI detection from the dataset used for BLIP2 pre-training. Next, we use these data to pre-train an HOI detector listed in Table 1 (e.g., GEN-VLKT) to get GEN-VLKT-2. Finally, we fine-tune GEN-VLKT-2 on HICO-DET and V-COCO. I think it could also achieve excellent performance. This is another reason why I think that transferring a model pre-trained on a higher-level task to a lower-level task (especially when the lower-level task is a sub-task of the higher-level task) is not prompting, but fine-tuning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Actually, I am very much looking forward to work on using foundation models for HOI detection. However, I think an intuitive direction is to transfer a foundation model pretrained on a lower-level task to a higher-level task. This work, however, seems to be the opposite.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer pXLy, First and foremost, we extend our deepest gratitude for your insightful feedback and hope our clarifications address your concerns. We're eager to highlight the significance and potential of our work.

**1. Regarding BLIP2's Utilization**: We agree with your statement about prompt engineering. However, with the utmost respect, we would like to disagree with the assertion that "utilizing BLIP2 is a fine-tuning process on the HOI task." Please allow us to present several clarifications:
- **Higher-level vs. Lower-level Task Pre-training**: BLIP2's pretraining revolves around global image-level tasks such as Image-Text Contrastive Learning (ITC), Image-grounded Text Generation (ITG), and Image-Text Matching (ITM). On the other hand, HOI detection involves detailed instance-level tasks like **instance localization** and interaction reasoning. Our UniHOI framework enhances BLIP2's **detection capabilities** beyond its original scope.
- **Fine-tuning vs. Prompting**: Fine-tuning inherently modifies model parameters, as referenced in [1][2]. In our case, all parameters of BLIP2 are **frozen**. We leverage spatial features as prompts for BLIP2 without compromising its other functionalities (e.g., captioning).
- **Motivations**: UniHOI presents a **robust synergy** between a specialized HOI model and a generic VL foundation model. With the foundation model retaining its core capabilities, it significantly enhances the performance of more specialized tasks.
- **Using the Same Foundation Model (CLIP)**: As additional evidence, we introduce a **CLIP-driven UniHOI** to illustrate that our proposed method still achieves remarkable performance on VL foundation models like CLIP, which was trained on (image, text) pairs with a contrastive objective. Prior research like GEN-VLKT and HOICLIP also explored large model-driven methodologies based on CLIP.
We've furnished detailed experiments and outcomes regarding this in the subsequent fourth response. [1] Visual Prompt Tuning, ECCV 2022. [2] Visual Tuning, arXiv 2023. **2. BLIP2's ability to handle HOI tasks**: To further explore the possibility of using BLIP2 for HOI detection, we employed its Image Encoder to derive image features, yielding a representation of [1024,1408]. We linearly interpolated the positional encodings in BLIP2 to handle larger images and used the Instance Decoder with learnable Queries of [64,256] to identify interacting HO pairs. **Unfortunately, achieving model convergence proved challenging.** Two primary constraints were observed: - BLIP2's training lacks emphasis on individual localization, limiting its precision in Instance Detection. - BLIP2's size necessitates a smaller input image. Enlarging this to match typical detection models would produce numerous tokens, adding a substantial computational burden during LLM alignment. **3. Open-world Scenario**: Thank you once again for your careful consideration of our work. - Firstly, we acknowledge that models like CLIP and BLIP2 exhibit superior adaptability in open-world scenarios, which aligns with our experimental findings. We endeavor to capitalize on these models' strengths while tailoring solutions for the nuanced requirements of HOI detection. - Presently, BLIP2's **text-driven interface** hinders nuanced interaction analysis. This limitation is evident when trying to describe complex visuals, such as distinguishing individuals in identical jerseys during a sports event, using just text. - The surge in large models driving specialized tasks, as seen with our UniHOI or the innovative SAM for image segmentation, indicates a promising direction. With UniHOI, we aim to bolster advancements in high-precision HOI detection. **4. 
Comparison Fairness**: Building on our earlier discussions, BLIP2's training tasks, namely ITC, ITG, and ITM, are rooted in the "image-text" paradigm, which is hardly direct training of the kind GEN-VLKT receives from CLIP. For a fair comparison with CLIP-based approaches like GEN-VLKT, we introduced a CLIP-based UniHOI. Below, we present the results from our CLIP-driven UniHOI and GEN-VLKT on the VCOCO dataset: |Method|${AP}^{1}_{role}$|${AP}^{2}_{role}$| |:--:|:--:|:--:| |GEN-VLKT_s|62.41|64.46| |${UniHOI}_s$ (w/ CLIP)|63.79|65.91| |GEN-VLKT_m|63.28|65.58| |${UniHOI}_m$ (w/ CLIP)|64.47|67.83| |GEN-VLKT_l|63.58|65.93| |${UniHOI}_l$ (w/ CLIP)|64.86|67.98| We also reported the performance of these two methods on the HICO-DET dataset: |||Default|||Known Obj.|| |:--:|:--:|:--:|:--:|:--:|:--:|:--:| |Method|Full|Rare|Non-rare|Full|Rare|Non-rare| |GEN-VLKT_s|33.75|29.25|35.10|36.78|32.75|37.99| |${UniHOI}_s$ (w/ CLIP)|35.92|34.39|36.26|38.84|37.19|40.18| |GEN-VLKT_m|34.78|31.50|35.77|38.07|34.94|39.01| |${UniHOI}_m$ (w/ CLIP)|36.71|35.42|36.91|39.16|39.23|40.56| |GEN-VLKT_l|34.96|31.18|36.08|38.22|34.36|39.37| |${UniHOI}_l$ (w/ CLIP)|36.84|35.71|37.05|39.28|39.31|40.79| Moreover, CLIP-driven UniHOI is also significantly better than GEN-VLKT in zero-shot settings: |Method|Type|Unseen|Seen|Full| |:--:|:--:|:--:|:--:|:--:| |GEN-VLKT_s|RF-UC|21.36|32.91|30.56| |${UniHOI}_s$ (w/ CLIP)|RF-UC|23.41|33.45|31.97| |GEN-VLKT_s|NF-UC|25.05|23.38|23.71| |${UniHOI}_s$ (w/ CLIP)|NF-UC|26.89|25.57|25.96| |GEN-VLKT_s|UO|10.51|28.92|25.63| |${UniHOI}_s$ (w/ CLIP)|UO|13.24|30.27|27.52| |GEN-VLKT_s|UV|20.96|30.23|28.74| |${UniHOI}_s$ (w/ CLIP)|UV|22.18|33.29|30.87| In our additional experiments utilizing CLIP as the foundation model for a fair comparison, UniHOI consistently demonstrated impressive results, further attesting to our method's efficacy. Finally, we humbly request that you consider the innovative spirit and potential impact of our work for the broader research community and kindly reconsider our submission. 
If you have any further questions, please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining! Thanks! --- Rebuttal Comment 1.1: Title: Request for Further Discussion Comment: Dear Reviewer pXLy, I hope this message finds you well. Thank you for your thorough and insightful feedback on our submission. We have carefully addressed your comments and supplemented our work with relevant experimental results. If any ambiguity remains, we sincerely invite further inquiries. We genuinely appreciate your time and dedication to reviewing our research. Thanks. Warmest regards, --- Rebuttal 2: Title: Further Discussion with Reviewer pXLy Comment: Dear Reviewer pXLy, We sincerely appreciate the time you invested in reviewing our submission and your invaluable feedback. We have diligently addressed your comments and provided corresponding responses and results. We believe that these revisions have addressed the concerns you raised. We would be grateful for the opportunity to further discuss whether your concerns have been adequately addressed. If there are any aspects of our work that remain unclear, please do not hesitate to inform us. Once again, thank you for your guidance and insights. Warm regards,
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes a universal HOID pipeline, which utilizes decoded HO-pair features as spatial prompts for the VL foundation model, with the aim of implementing effective prompt-based learning on the base VL model and extracting HOI-related features from it. It also proposes knowledge retrieval for HOID in an open-category manner via large-scale pretrained language models. Experiments show the effectiveness of this approach in both generic and zero-shot HOID. Strengths: 1. It explores universal interaction recognition by transferring the rich knowledge inside Vision-Language foundation models and LLMs to the HOI pipeline, which broadens the research scope of HOID. 2. The experimental results are promising. In both generic and zero-shot settings, this approach reaches a new state-of-the-art and surpasses previous methods by a substantial margin. Weaknesses: 1. In line 152, ‘P_h’ and ‘P_o’ are described as ‘excellent spatial position features’ and further utilized as HO spatial prompts. However, these features are generated by learnable position embeddings and queries, which is identical to many previous transformer-based HOID approaches such as GEN-VLKT. Can you provide some evidence that these features are indeed ‘excellent’; why can they provide accurate spatial information concerning HO pairs? Or are there some unique designs I overlooked? 2. In Line 177, HOPD is designed for the alignment issue between the VL foundation model and the HOI pipeline, and the output V_f is incorporated with V_i for the final prediction of interaction. But the performance of pure V_f, i.e., only utilizing V_f for prediction, is unexamined. These results may more directly show the effectiveness of alignment between VL models and the HOI pipeline. 3. Some typos. Line 171, ‘the guidance of’ repeated twice. Line 177, ‘V_i’ is mismarked as ‘V_f’. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The questions mainly lie in the choice and interpretability of spatial prompts. 
It’s unclear why these prompts are ‘excellent spatial position features’. Or have you tried some experiments on the choice of these prompts? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer WxZv, First and foremost, we extend our deepest gratitude for your thorough review and insightful feedback. Your recognition of our method is truly appreciated. We concur with your perspective that utilizing $V_f$ exclusively for prediction would offer a more direct testament to the effectiveness of alignment between VL models and the HOI pipeline. We have diligently conducted the pertinent experiments to this effect. Herein, we provide a detailed response to each of your concerns: **1. Regarding the 'excellent spatial position features' of $P_h$ and $P_o$**: In our study, “$P_h$” and “$P_o$” are two 64*256 features used for predicting Human boxes $B^h$, Object boxes $B^o$, and categories $C^o$. Consequently, “$P_h$” and “$P_o$” are highly correlated with instance-level position and category information. This makes them exemplary spatial position features, perfectly tailored to serve as spatial location representations to prompt the VL foundation models. We aim to explore the synergy between universal (e.g. BLIP2 and CLIP) and specialized (e.g. GEN-VLKT) models. This innovative design amalgamates traditional HOI detectors, encompassing instance localization and relational reasoning, with VL foundational models. It provides a novel avenue for large model-driven HOI detection, further inspiring the development and implementation of generic algorithms. **2. Regarding the performance of pure $V_f$**: We are truly grateful for this insightful consideration. **Conducting this experiment does indeed elucidate the effectiveness of alignment between VL models and the HOI pipeline**. 
We have appended the results of the experiment on the V-COCO dataset where only $V_f$ was used for prediction: |Model|Feature|${AP}^{1}_{role}$|${AP}^{2}_{role}$| |:--:|:--:|:--:|:--:| ||Only $V_i$|62.41|64.46| |${UniHOI}_s$ (w/ BLIP2)|Only $V_f$|64.51|67.26| ||$V_i$+$V_f$|**65.58**|**68.27**| ||Only $V_i$|63.28|65.58| |${UniHOI}_m$ (w/ BLIP2)|Only $V_f$|66.18|68.95| ||$V_i$+$V_f$|**67.95**|**70.61**| ||Only $V_i$|63.58|65.93| |${UniHOI}_l$ (w/ BLIP2)|Only $V_f$|66.25|69.27| ||$V_i$+$V_f$|**68.05**|**70.82**| The experimental results on the V-COCO dataset show that even the features $V_f$ generated by the BLIP2 model alone achieve impressive results, second only to the performance of the feature combination "$V_i$+$V_f$". Additionally, in light of the concerns raised by reviewer djfM regarding the performance difference between BLIP2 and CLIP, **we have also conducted experiments using UniHOI driven by CLIP**, specifically evaluating its performance when utilizing $V_f$ and $V_i$: |Model|Feature|${AP}^{1}_{role}$|${AP}^{2}_{role}$| |:--:|:--:|:--:|:--:| ||Only $V_i$|62.41|64.46| |${UniHOI}_s$ (w/ CLIP)|Only $V_f$|62.87|64.91| ||$V_i$+$V_f$|**63.79**|**65.91**| ||Only $V_i$|63.28|65.58| |${UniHOI}_m$ (w/ CLIP)|Only $V_f$|63.74|66.25| ||$V_i$+$V_f$|**64.47**|**67.83**| ||Only $V_i$|63.58|65.93| |${UniHOI}_l$ (w/ CLIP)|Only $V_f$|63.79|66.83| ||$V_i$+$V_f$|**64.86**|**67.98**| In addition to the two close-set experiments mentioned above, **we have also conducted experiments in an open-set setting on the HICO-DET dataset**. 
The results of **UniHOI equipped with BLIP2** are as follows: |Type|Feature|Unseen|Seen|Full| |:--:|:--:|:--:|:--:|:--:| |RF-UC|Only $V_i$|21.36|32.91|30.56| |RF-UC|Only $V_f$|28.22|32.18|31.38| |RF-UC|$V_i$+$V_f$|**28.68**|**33.16**|**32.27**| |NF-UC|Only $V_i$|25.05|23.38|23.71| |NF-UC|Only $V_f$|**28.69**|32.51|31.74| |NF-UC|$V_i$+$V_f$|28.45|**32.63**|**31.79**| |UO|Only $V_i$|10.51|28.92|25.63| |UO |Only $V_f$|**20.91**|**34.88**|**31.78**| |UO |$V_i$+$V_f$|19.72|34.76|31.56| |UV|Only $V_i$|20.96|30.23|28.74| |UV |Only $V_f$|**26.81**|36.41|34.51| |UV |$V_i$+$V_f$|26.05|**36.78**|**34.68**| From the results of both our **close-set** and **open-set** experiments, we draw the following conclusions: - **The alignment between VL models and the HOI pipeline in our UniHOI is particularly effective**. Even when solely leveraging the features $V_f$ from the large models, our approach yields impressive results. - In the **close-set scenarios**, the feature combination of “$V_i$+$V_f$” emerges as a superior choice, rendering the model to function much like a specialized HOI detector. However, in the **open-set scenarios**, where there's a stronger emphasis on understanding the open world, it's predominantly the $V_f$ feature that plays a pivotal role. We highly value your feedback and pledge to incorporate these novel experimental data into our final version. This will undoubtedly enhance our paper, facilitating a more comprehensive understanding and evaluation for readers. **3. Typos**: We sincerely apologize for the oversight and are grateful for your meticulous attention to detail. Rest assured, these errors will be rectified in the revised manuscript. Furthermore, we commit to thoroughly scrutinizing the entire document to preclude any similar lapses. Thank you again for your constructive feedback. **We will incorporate the corresponding modifications and expansions into the revised paper**. 
In addition, **the corresponding code and model weights will be open-source to ensure replication**. If you have any further questions, please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining. Thanks! --- Rebuttal Comment 1.1: Title: Further Discussion with Reviewer WxZv Comment: Dear Reviewer WxZv, We sincerely appreciate the time and effort you have dedicated to reviewing our submission. We have carefully addressed your comments and provided corresponding responses and results. We believe that these responses and results adequately address your concerns. We would value an opportunity to further discuss whether your reservations have been resolved. Should there remain any aspects of our work that are unclear to you, please do not hesitate to inform us. Once again, thank you for your invaluable feedback. Best,
null
null
null
null
null
null
Leveraging Vision-Centric Multi-Modal Expertise for 3D Object Detection
Accept (poster)
Summary: This paper introduces a novel cross-modal BEV distillation approach, namely VCD, which adopts identical feature extractors for the teacher and the student model, easing the distillation difficulties during the knowledge-transferring phase. (1) It proposes a new multi-modal teacher VCD-E, with an image-based backbone only, to serve as the teacher model, and achieves similar performance to multi-modal 3D detectors. (2) It introduces a novel trajectory-based feature distillation approach to enhance the feature distillation quality on moving objects. (3) The final model achieves 63.1 NDS on the nuScenes test leaderboard, setting SOTA performance for camera-based detectors. Strengths: 1. The paper points out that a LiDAR-based detector may not be essential in camera-based 3D detectors' distillation: a dedicated image-based detector with future-frame temporal fusion and LiDAR-guided depth input is enough. Based on this, the authors overcome the domain gap between LiDAR and images easily. 2. The paper presents a novel trajectory-based distillation approach to deal with moving objects, which is simple yet effective according to ablations. Weaknesses: 1. The paper lacks some theoretical analysis; it is a practical work. 2. Though the trajectory-based distillation is effective, the reviewer does not think VCD truly captures the moving objects. A more detailed experiment may help. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The official BEVDet4D-depth is 51.9 NDS, while the authors report 54 NDS. What's the modification to the model? 2. The reviewer is interested in the static objects' performance, similar to Figure 1 (supplementary). It is not straightforward to see moving objects' improvements only, since VCD has already improved over its baseline. 3. The authors are suggested to clarify that VCD-E utilizes future frames during training. 4. The authors are suggested to use \times rather than x in Table 1, which is not formal. 
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer oCet, We sincerely appreciate your valuable feedback. We will address your comments below. **1. Lacks some theoretical analysis** Thanks for your suggestions. We have done some analysis on the motion misalignment during the process of temporal fusion in Supplementary Sec.B.1. We will delve deeper into the relevant research areas to conduct further analysis in the future. **2. An experiment to show VCD captures the moving objects** In response to the reviewer's concerns, we conducted the following experiments. We categorized object velocities into three tiers: 0-1, 1-5, and 5-. The tabulated results underscore a substantial enhancement achieved by our VCD method for objects with higher velocities, demonstrating 32.6% and 20.0% improvements in `precision-recall` (mAP) and `localization quality` (mATE), respectively. \* denotes the percentage improvement relative to the baseline. | mAP$\uparrow$ | velocity (0 - 1) | velocity (1 - 5) | velocity (5 - ) | | :---: | :---: | :---: | :---: | | Baseline | 0.289 | 0.115 | 0.146 | | VCD | 0.338 (+16.8%)\* | 0.155 (+34.9%) | 0.194 (+32.6%) | | mATE$\downarrow$ | velocity (0 - 1) | velocity (1 - 5) | velocity (5 - ) | | :---: | :---: | :---: | :---: | | Baseline | 0.729 | 0.731 | 0.885 | | VCD | 0.685 (-6.1%) | 0.680 (-7.0%) | 0.708 (-20.0%) | **3. The modification based on the official BEVDet4D-depth** We modify the resolution of the BEV feature based on the official BEVDet4D-depth from 128 to 256, which causes the gap. Notably, in the main experiments, including those involving the baseline and VCD, the 256 resolution was consistently adopted. **4. The static objects' performance** Thanks for your advice. As depicted in the table below, the performance for static objects exhibits a notable improvement of over 3.3% in mAP. We also provide Fig.2, similar to Supplementary Fig.1, in the attached PDF. We will add it in the corresponding section in the revised version. 
| mAP$\uparrow$ | Con Veh | Traffic Cone | Barrier | | :---: | :---: | :---: | :---: | | Baseline | 0.060 | 0.463 | 0.495 | | VCD | 0.099 | 0.505 | 0.512 | **5. Clarify that VCD-E utilizes future frames during training.** Thanks for your kind advice. It is stated in line 29 of the supplement. We will clarify it more clearly in the revised version. **6. Suggested to use \times rather than x in Table 1** We appreciate your suggestion and will fix them in the revised version. --- Rebuttal Comment 1.1: Title: Response to Author Comment: The rebuttal addresses most of my concerns. Therefore, I keep my rating. ps. it would be better to state VCD-E adopts future frames for training in your main context, rather than in the supplementary. --- Reply to Comment 1.1.1: Comment: Thanks for your suggestions. We will incorporate it into the main context.
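The velocity-tier breakdown in the rebuttal above (tiers 0-1, 1-5, and 5- m/s) can be reproduced with a simple binning scheme. The sketch below is purely illustrative: the `speeds` and `ap_scores` arrays are made-up toy data, not values from the VCD evaluation code, and a real nuScenes evaluation would compute full precision-recall curves per tier rather than averaging per-object scores.

```python
import numpy as np

# Hypothetical per-object data: ground-truth speeds (m/s) and a toy
# per-object detection-quality score standing in for real AP computation.
speeds = np.array([0.3, 0.8, 2.4, 4.9, 6.1, 7.5])
ap_scores = np.array([0.40, 0.35, 0.20, 0.10, 0.18, 0.21])

# Tier edges matching the rebuttal: [0, 1), [1, 5), and [5, inf).
edges = [0.0, 1.0, 5.0]
tiers = np.digitize(speeds, edges) - 1  # tier index 0, 1, or 2 per object

# Average the toy scores within each velocity tier.
per_tier_map = [ap_scores[tiers == t].mean() for t in range(3)]
```

The same `tiers` index can be used to split localization errors (for a per-tier mATE) instead of scores.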
Summary: This work presents vision-centric distillation, which utilizes multi-modalities for the expert and images for the student. It includes two distillation parts, namely Trajectory-based Distillation and Occupancy Reconstruction. Experiments prove the effectiveness of the proposed method. Strengths: 1. The proposed method tries to improve the performance via distillation from depth (Occupancy) and motion (Trajectory), which is reasonable for image-based 3D detection. 2. Experiments show significant improvement over various benchmarks. 3. The paper presentation is overall clear. Weaknesses: 1. It is interesting to see the work improve the framework from depth and motion. However, the definition of Occupancy in Figure 2 and Section 3 could be confusing. Because there is no annotation for occupancy optimization in the framework, occupancy generation cannot be guaranteed. It's more like depth supervision from expert to apprentice from my point of view. 2. Because the motion trajectory is generated from the predicted velocity, the difference between Trajectory-based Distillation and previous prediction-based distillation should be made clear. The classic prediction-based distillation also includes velocity for teacher-student optimization. 3. What's the advantage of a.2 over a.1 in Figure 1(a)? Because this work focuses on the camera-based setting, the backbone for experts (not included in inference) utilizing the image-only or modality-specific approach in Figure 1(a) seems not to make an inherent difference. The essential advantage of the point cloud is accurate depth, which is already utilized in this framework. 4. It's better to add the results of StreamPETR [37] in Table 2, which achieves better results. 5. It's unclear how the LiDAR-camera depth fusion in Figure 2 is conducted. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the Weakness section. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Broader Impact is given in the supplementary material. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer bEge, Thank you for the valuable and detailed comments. We will address your concerns below. **1. The confusion about the definition of Occupancy and how to generate the occupancy annotation.** We apologize for the confusion. The **definition** of occupancy used in the paper is the presence of objects in a given 3D space, determined from depth scores generated in multi-camera pixel space. The **occupancy annotation** is **generated** by projecting the depth information into 3D space. Then the scores from different pixels across the multi-camera views that fall within the same occupancy region are accumulated to decide whether an object is present in that region. In our view, our occupancy task `differs` from depth supervision. Occupancy is defined within a three-dimensional voxelized space, whereas depth is defined in a two-dimensional image space. We `will` revise accordingly to make it clearer in the manuscript. **2. The motion trajectory is generated from the predicted velocity. The difference between Trajectory-based Distillation and previous prediction-based distillation methods.** Note that the motion trajectory is indeed generated from the past GT instances rather than the predicted velocity (as stated in L177). This is a notable difference from previous prediction-based distillation. Specifically, we transform the historical GT instances into the current coordinate system to construct the trajectory. In this way, we can capture the moving objects to mitigate their misalignment during the long-term temporal fusion process, which is ignored by previous distillation methods. As shown in Tab.6, our method significantly outperforms previous methods by alleviating motion misalignment. **3. What's the advantage of a.2 over a.1 in Figure 1(a)** The main advantage of VCD a.2 over a.1 is the elimination of the domain gap between expert and apprentice models. 
Our objective is to create an expert model that eliminates the domain gap typically present between pure vision models and multi-modal models. Our expert model a.2 only encodes visual information while utilizing LiDAR solely for geometric data, thus minimizing the domain gap. The experiments in Tab.5 demonstrate that our expert model a.2 has a prominent advantage over the other multi-modality models a.1. **4. Add the results of StreamPETR [37] in Table 2** Thanks for pointing it out; we will revise accordingly. **5. The process of LiDAR-camera depth fusion in Fig. 2** As stated in L284, VCD-E employs the point cloud data for pixels with available Ground Truth (GT) depth information. For pixels lacking GT depth information, we rely on the predicted depth values obtained from images. We will add it to Sec.3 accordingly. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for the rebuttal from the authors. It addresses most of my concerns. So, I'd like to improve the rating to Borderline Accept.
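A minimal sketch of the two mechanisms described in this rebuttal: the LiDAR-camera depth fusion (GT depth where available, predicted depth elsewhere) and the occupancy-annotation generation (back-projecting per-pixel depths into a voxel grid and accumulating scores). All shapes, intrinsics, grid sizes, and thresholds below are hypothetical and are not taken from the VCD implementation.

```python
import numpy as np

H, W = 4, 6                             # hypothetical image size
fx = fy = 10.0                          # hypothetical pinhole focal lengths
cx, cy = W / 2, H / 2                   # hypothetical principal point

# Depth fusion: prefer LiDAR (GT) depth where a point was projected,
# fall back to the image-predicted depth elsewhere.
pred_depth = np.full((H, W), 8.0)       # toy predicted depth map
lidar_depth = np.zeros((H, W))
lidar_depth[1, 2] = 5.0                 # one pixel with a projected LiDAR point
has_gt = lidar_depth > 0
depth = np.where(has_gt, lidar_depth, pred_depth)

# Back-project every pixel to camera-frame XYZ using the fused depth.
v, u = np.mgrid[0:H, 0:W]
x = (u - cx) / fx * depth
y = (v - cy) / fy * depth
pts = np.stack([x.ravel(), y.ravel(), depth.ravel()], axis=1)

# Accumulate per-pixel scores into a coarse voxel grid, then threshold:
# a voxel is marked occupied once enough score mass lands in it.
voxel_size = 2.0
grid = np.zeros((16, 16, 16))
idx = np.floor(pts / voxel_size).astype(int) + 8   # shift to non-negative indices
scores = np.ones(len(pts))                          # uniform scores for illustration
np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), scores)
occupancy = grid >= 1.0
```

In a multi-camera setting the accumulation step would simply be repeated per view into the same `grid`, which is what makes the annotation a 3D quantity rather than per-image depth supervision.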
Summary: This paper aims to design a multi-modal expert teacher with little domain gap to distill the LSS-based 3D object detector. Different from existing work, it leverages only the LiDAR depth information to design a teacher model instead of using a cumbersome LiDAR feature extractor. The proposed method effectively transfers knowledge from the teacher to the student detector and shows significant results. Strengths: 1. Without the aid of a LiDAR feature extractor, it designs an apprentice-friendly multi-modal expert to conduct distillation. 2. VCD-E achieves comparable performance with state-of-the-art multi-modal methods, which predominantly rely on a LiDAR backbone. Weaknesses: 1. The comparison with other multi-modal methods should report the latency to highlight the superiority. 2. The performance seems to be saturated since the performance of VCD-A with ConvNeXt-B and 8 frames is not good. The baseline is BEVDepth with 2 frames. If the number of frames is set to 8, what is your improvement compared with the baseline? Technical Quality: 3 good Clarity: 3 good Questions for Authors: When using a large backbone like ConvNeXt, no evidence is provided to verify that the proposed method can boost performance. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please see the weakness section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer xwmh, Thank you for the constructive and thoughtful feedback. We will address your concerns below. **1. The latency of VCD-E.** Thanks for your suggestions. As shown in the Table below, we add the FPS in the comparison with other multi-modal methods. The FPS is measured by a single A100. | Methods | Backbone | mAP | NDS | FPS | | :---: | :---: | :---: | :---: | :---: | | FUTR3D | LiDAR & Image | 0.642 | 0.680 | 2.5 | | TransFusion | LiDAR & Image | 0.675 | 0.713 | 3.2 | | Ours | Image | 0.677 | 0.711 | 3.5 | **2. The improvement compared with the baseline using ConvNeXt-B and 8 frames.** VCD demonstrates a noteworthy enhancement of 2.6% in mAP when compared to our baseline model. This improvement is observed when utilizing the ConvNeXt-B architecture with an input of 8 frames. Our baseline is the BEVDepth with 8 frames. We want to clarify that both the VCD-A model and our baseline model were implemented using 8 frames. So the main contribution to the improvement of scores lies in the effectiveness of the VCD distillation. Our choice of implementation involves the BEVDet-Depth code, distinct from BEVDepth. It is stated in the caption of Tab.2. The tabulated data below illustrates that the replication of BEVDepth using a 2-frame configuration yields 49.1% mAP, given the unavailability of the BEVDepth `leaderboard` code. We will add the details of the experiment settings in the revised version. | Model | Backbone | # frames | mAP(%) | NDS(%) | |:----------:|:----------:|:--------:|:------:|:------:| | BEVDepth | ConvNext-B | 2 | 49.1 | 58.9 | | BEVDepth | ConvNext-B | 8 | 52.2 | 61.0 | | VCD (ours) | ConvNext-B | 8 | 54.8 | 63.1 | **3. The consistent improvement of performance based on a large backbone.** Kindly consider referring to our response provided in Q2. We would like to clarify that our VCD distillation method does indeed boost performance by 2.6% mAP with ConvNeXt, which is a significant improvement. 
Moreover, we `will` release our code to the public to show the improvement achieved through VCD.
Summary: The paper presents an innovative approach for improving camera-only 3D object detection. It introduces a vision-centric multi-modal expert and a trajectory-based distillation module to address key challenges in the field. The framework includes an apprentice-friendly multi-modal expert and a fine-grained trajectory-based distillation module to rectify motion misalignment for each object in the scene. The proposed VCD-A model achieves state-of-the-art performance on the nuScenes dataset. Strengths: 1. The paper presents an innovative approach of incorporating a vision-centric multi-modal expert that exclusively relies on camera features, eliminating the need for a LiDAR backbone. This approach simplifies the model architecture while delivering comparable performance to multi-modal methods. 2. The trajectory-based distillation module alleviates the problem of motion misalignment in long-term temporal fusion, improving the accuracy of object detection. 3. This paper provides clear figures and tables, making it easy for readers to understand and follow. Weaknesses: 1. Typesetting problem: References [23] and [24] are repeated. 2. The Fine-grained Trajectory-based Distillation Module does not really solve the problem of dynamic target detection, but only enhances the feature representation through multi-frame alignment, thereby improving performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you provide the dynamic object detection performance gain brought by the trajectory-based distillation module? 2. You can add a column about the number of frames used in the table. As far as I know, the number of frames used by different methods is inconsistent. How many frames does VCD use? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: How to improve the detection performance of dynamic objects is an urgent problem to be solved at the current stage of 3D perception. The starting point of this article is promising. I hope that the authors can conduct more in-depth research on the real improvement of dynamic object detection performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer KRDk, Thank you for the thoughtful and valuable comments. We will address your concerns below. **1. The repeated Reference[23] and Reference[24]** These references are indeed easy to confuse; however, they are two different papers. [23] Li, Yinhao, Han Bao, Zheng Ge, Jinrong Yang, Jianjian Sun, and Zeming Li. "Bevstereo: Enhancing depth estimation in multi-view 3d object detection with dynamic temporal stereo." arXiv preprint arXiv:2209.10248 (2022). [24] Li, Yinhao, Zheng Ge, Guanyi Yu, Jinrong Yang, Zengran Wang, Yukang Shi, Jianjian Sun, and Zeming Li. "Bevdepth: Acquisition of reliable depth for multi-view 3d object detection." arXiv preprint arXiv:2206.10092 (2022). **2. Does the Trajectory-based Distillation Module solve dynamic target detection?** The Fine-grained Trajectory-based Distillation Module indeed enhances the performance on dynamic objects by a large margin, which can be observed in the Table of Q3 and Supplementary Fig.1. There exists `motion misalignment` during long-term temporal fusion, which will damage the performance of dynamic object detection. In Supplementary Sec.B.1, we have analyzed the motion misalignment. By employing the Trajectory-based Distillation Module during the multi-frame alignment process, we enhance our capability to detect dynamic objects by mitigating motion misalignment. Besides, our expert model possesses a stronger ability to perceive dynamic objects. As a result, we leverage the Trajectory-based Distillation Module to facilitate knowledge transfer from the expert to the apprentice model, thereby augmenting the dynamic object perception capacity of the latter. **3. The performance gain on dynamic objects brought by the trajectory-based distillation module** The trajectory-based distillation module can enhance the performance on dynamic objects, demonstrating 4.0% and 3.2% improvements in `precision-recall` (mAP) and `localization quality` (mATE), respectively. 
To assess the performance of dynamic objects, we specifically exclude static entities from our analysis. As illustrated in the table presented below, trajectory-based distillation module (`TRM`) yields a substantial enhancement in the performance of dynamic object detection. The dynamic object detection improvement brought by VCD can also be found in Supplementary Fig. 1. | Dynamic objects | mAP $\uparrow$ | mATE $\downarrow$ | mAOE $\downarrow$ | mAVE $\downarrow$ | |:--------:|:-----:|:-----:|:-----:|:-----:| | baseline | 0.306 | 0.740 | 0.689 | 0.468 | | TRM | 0.346 | 0.708 | 0.602 | 0.404 | **4. Add a column about the number of frames used in the table** We appreciate your suggestions and will add them to the revised manuscript. For baseline and VCD experiments, we use 8 frames for temporal fusion. For more detail, please refer to Tab.1 in the attached PDF. **5. More in-depth research on the dynamic object detection** Thanks for your suggestions! The application of VCD enhances the capability to detect dynamic objects, which highlights the significant role played by knowledge transfer and motion misalignment resolution. Moreover, we will delve deeper into dynamic object detection in the future.
Rebuttal 1: Rebuttal: Dear Reviewers and AC(s): We extend our gratitude to all reviewers. Below, we address all review comments and incorporate them accordingly into the revised manuscript of our work. The attached PDF below includes a table and a figure. The table illustrates the number of frames used by different approaches, addressing the concern from `Reviewer KRDk`. The figure presents the performance evaluation of static objects using VCD, addressing the remarks from `Reviewer oCet`. Pdf: /pdf/1f23bbcb9b555821524c99612931d82ed6ef01fd.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper seeks to improve camera-only 3D object detection by distilling a multi-modal expert model. The authors start by developing a multi-modal expert with essentially the same architecture as the camera-only model - intended to reduce the domain gap. Then, they propose trajectory-based distillation as well as occupancy reconstruction to supervise the student. The resulting camera-only model achieves state-of-the-art in nuScenes, and the multi-modal expert is a simple, strong method in its own right. Strengths: - The idea that the expert should have a similar architecture to the student is intuitive. - The proposed expert model's significantly increased performance when using a stronger image backbone in Table 7 is interesting. - Extensive experiments demonstrate that having an aligned architecture between the teacher and student is helpful (BEVDepth teacher outperforms TransFusion as well, which is a nontrivial observation). - The final model achieves state-of-the-art in nuScenes. Weaknesses: - The Occupancy Reconstruction section is difficult to understand. It appears that the predicted expert depths are outprojected to XYZ. However, it is unclear how the 3D voxel grid for occupancy reconstruction is generated. Further, it's actually not clear how the outprojected XYZ is used, as G_xyz is defined in (5) and used in (6) without further mention of the outprojected XYZ. In L207, should G_xy be G_xyz? It is also unclear how this optimizes depth prediction capabilities for static and dynamic objects. I would appreciate a clear explanation of this part, as it is one of the main contributions of this work. - What model/settings were used for the Table 4 ablation? The numbers seem too drastically worse compared to Table 1. In Table 1 in the supplementary, the baseline VCD-A model in the first and second sections appears to be the same, but the results differ. 
- While Trajectory-based Distillation improves performance in Table 9, additional ablations are missing. More specifically, the “simplest” form of Trajectory-based distillation would be to simply just distill GT instance locations in the current frame, without considering past trajectory locations. This ablation appears to be missing. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - Additional details on the expert model are necessary. How many past frames were used to accumulate LiDAR to create the sparse depth map? Does using past LiDAR not cause significant depth artefacts for moving objects? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: The authors have included a limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer LE1a, Your acknowledgment of our approach is sincerely appreciated. We will address your comments below. **1. Occupancy Reconstruction** 1.1. Details of Occupancy Reconstruction The **annotations of occupancy** are generated by gathering depth scores from multi-camera pixels within each voxel's boundaries. Then, the accumulated depth scores are averaged to get a single value. This single value represents how likely the voxel is occupied. Our process includes two main steps. First, we project depth information into 3D space. Then, we gather scores from different pixels in the same occupancy region. These scores help us decide if there's an object in that region. We `will` update our paper to make this clearer in the revised version. 1.2. In L207, should G_xy be G_xyz Yes, thanks for pointing it out. It should be G_xyz. We will correct it in the revised version. 1.3. How occupancy optimizes depth prediction capabilities After the expert model generates occupancy annotations, these annotations are used to guide the apprentice model's occupancy reconstruction process using its depth scores. In this way, this module enables the enhancement of depth prediction capabilities within the apprentice model. Besides, it does not discriminate between dynamic and static objects; rather, this module provides direct supervision for scenarios involving both types of objects. **2. The settings for different tables** 2.1. The different settings between the Tab. 4 ablation and Tab. 1 The different numbers between Tab. 1 and Tab. 4 come from the different numbers of training iterations and different BEV resolutions. In Tab. 1, we reported the results with 90 epochs and a 256x256 BEV feature resolution. Tab. 4 reports ablation studies that use 20 epochs and a 128x128 BEV resolution to save computational resources and time. We will add this to the implementation details in Supplementary Sec. A.2 to make it clearer in our updated paper. 2.2. 
The difference between the first row and the second row in Supplementary Tab.1 Apologies for the confusion. In Supplementary Tab.1, the first row denotes the baseline model which does not use the distillation method. The second row shows that the VCD-A model directly conducts distillation on the full BEV feature without the Trajectory-based distillation module. We will add an explanation for Supplementary Tab.1 to make it clearer in our updated paper. **3. More experiments of distilling GT locations in the current frame.** Thanks for your kind suggestions. This is stated in Supplementary Tab.1. As shown in the table below, Exp. A is the `simplest` form of Trajectory-based distillation. As the trajectory length increases, the performance of VCD consistently improves. We will add the explanations for this table in the revised version. | Exp. | Trajectory length | mAP(%) | NDS(%) | | :---: | :---: | :---: | :---: | | A | 1 | 33.1 | 44.5 | | B | 3 | 34.6 | 45.6 | | C | 5 | 35.4 | 45.9 | **4. Additional details, the number of LiDAR frames used to create the depth map, and whether this causes depth artifacts** Thanks for your suggestions! We use the last sweep LiDAR frame together with the current LiDAR frame to create the depth map. The selection of a last-sweep LiDAR frame for depth generation results in a relatively limited time interval, mitigating the occurrence of pronounced depth artifacts. In the nuScenes dataset, the temporal gap between the last sweep frame and the current frame amounts to 0.05 seconds. We will update the details of the expert model in the revised manuscript and `will` release our code to the public. --- Rebuttal Comment 1.1: Comment: The authors have sufficiently addressed my concerns for W2 and W3. However, even when reading through the authors' explanation for the occupancy module, some parts still remain unclear. 
Most critically, "After the expert model generates occupancy annotations, these annotations are used to guide the apprentice model's occupancy reconstruction process using its depth scores" - how, exactly, are these occupancy annotations used to supervise the student model? G_xyz seems to only depend on x, y, z (which I presume are the grid coordinates like CenterPoint), px, py, pz (center of 3D object, but it's unclear whether this is GT or expert predictions), and σ_p. I am not able to follow how the occupancy annotations are used to generate G_xyz. The authors did mention "These scores help us decide if there's an object in that region," but it is not clear what "decide" refers to. I believe significant revision, perhaps as well as a diagram, is needed for this section. Further, I notice from the other reviews that VCD-E uses 4 future frames for the primary results, which slightly decreases the expert model as a contribution as competing methods do not use future frames. Further, I am concerned about the experimental methodology for Table 1 - the authors state they train for 90 epochs, while competing methods typically train for 20 or 25. Further, a BEV resolution of 256x256 is used, while 128x128 is standard. 256x256 non-trivially improves performance for smaller objects. Due to these concerns, I lower my rating to 5. --- Reply to Comment 1.1.1: Title: Further Discussions with Reviewer LE1a Comment: **1. Details of Occupancy Reconstruction.** Apologies for the confusion. The G_xyz is exclusively derived from the 3D ground truth (GT) boxes. Once the occupancy has been generated by the expert model, we utilize it as a supervisory signal only when it corresponds to the location defined by G_xyz. Any occupancy falling outside the confines of G_xyz is disregarded. Specifically, employing the 3D GT boxes, we establish the Gaussian distribution G_xyz, where 'x', 'y', and 'z' represent the grid coordinates, and 'px', 'py', and 'pz' represent the center of 3D objects. 
When an occupancy annotation generated by the expert is located within G_xyz, it is employed as supervision. **2. The future frames used in VCD-E.** We also report VCD-E results which do `not` use future frames, compared with other multi-modal methods in Tab. 7. The results showcase the comparable performance of VCD-E with other multi-modal methodologies. They also illustrate that our approach yields more significant enhancements when applied with larger backbones in comparison to BEVFusion. Besides, our objective is to create an expert model that `eliminates` the domain gap typically present between pure vision models and multi-modal models. | Methods | Backbone | mAP | NDS | |:---------:|:----------:|:-----:|:-----:| | BEVFusion | ResNet-50 | 0.598 | 0.662 | | BEVFusion | ConvNext-B | 0.597 | 0.665 | | VCD-E | ResNet-50 | 0.611 | 0.656 | | VCD-E | ConvNext-B | 0.664 | 0.693 | **3. The epochs used in training.** Apologies for the confusion. VCD indeed used `20 epochs with CBGS`, the same as previous methods. The 90 epochs mentioned above refer to the equivalent of 20 epochs with CBGS; with CBGS, a single epoch corresponds to roughly 4.5 standard epochs. CBGS is a commonly used augmentation for many methods such as BEVDepth and SOLOFusion. **4. The BEV resolution used in training.** Apologies for the confusion. `Both` the baseline and VCD approaches utilize a resolution of 256x256, enabling a fair comparison. VCD is a distillation method, and our objective is to develop a distillation method that exhibits substantial enhancements over the baseline. VCD-A indeed has a 2.6% mAP improvement over the state-of-the-art baseline, which demonstrates the effectiveness of VCD. In addition, VCD-A consistently outperforms other state-of-the-art methods which also adopt 256x256 resolution in the nuScenes test leaderboard.
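To make the Gaussian weighting described above concrete, here is a minimal numpy sketch of a 3D Gaussian mask centered at a GT box center. This is our own illustration, not the paper's implementation: the grid shape, center, sigma, and the 0.1 threshold are hypothetical, and the exact supervision rule may differ.

```python
import numpy as np

def gaussian_mask_3d(grid_shape, center, sigma):
    """Gaussian weight over a voxel grid, centered at a GT box center (px, py, pz)."""
    x, y, z = np.indices(grid_shape)
    px, py, pz = center
    d2 = (x - px) ** 2 + (y - py) ** 2 + (z - pz) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

# Occupancy annotations from the expert would be kept as supervision only
# where the Gaussian defined by the GT box is non-negligible.
g = gaussian_mask_3d((16, 16, 8), center=(8, 8, 4), sigma=2.0)
supervised_region = g > 0.1  # hypothetical cutoff for "inside G_xyz"
```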
null
null
null
null
null
null
ESSEN: Improving Evolution State Estimation for Temporal Networks using Von Neumann Entropy
Accept (poster)
Summary: The authors work on temporal graph representation learning that faces two challenges: (1) the diversity of the evolving patterns and their time-varying nature are hard to model; (2) high computational cost for structure recognition with increasing numbers of nodes and edges. The authors propose to overcome the problems by incorporating the approximate von Neumann Entropy and approximate thermodynamic temperature difference into the design of temporal graph learning modules. The effectiveness of the proposed method is validated by the link prediction task on different datasets. Update after rebuttal: The authors have addressed my concern through the rebuttal. The score remains unchanged. Strengths: 1. This paper is clearly written and easy to follow. 2. This paper brings the thermodynamic view to temporal graph learning, which may inspire later research. 3. The approximate von Neumann Entropy and approximate thermodynamic temperature difference are rigorously formulated and derived. 4. The proposed method significantly promotes the performance of link prediction, and it is computationally efficient. Weaknesses: The von Neumann entropy and the graph entropy in the thermodynamic temperature difference are defined on undirected graphs. However, the temporal network can be generically directed. How can the proposed method address this issue? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the authors provide some intuitive explanations for the quantities $\mathcal{J}(G)$ and $\mathcal{K}(G)$ defined in Eq. (6) and Eq. (7), respectively? 2. In Line 259, the authors say that for large networks, the computational complexity can be reduced from $O(|V|^2)$ to $O(N^2)$, where $|V|$ is the number of nodes, and $N$ is a predefined number to control the budget. Will it make the approximation less meaningful when $N\ll |V|$? 3. 
In Line 329-331, the authors say that pre-computing some thermodynamic quantities can considerably reduce the computational overhead. However, the network evolves over time, and thus these quantities may be updated. Can the authors explain to what extent the pre-computation can reduce the computational cost? Comments: 1. Please make the colors of the barplot more distinguishable in Figure 3. 2. It may be inappropriate to place Figure 4 after Figure 5. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your careful reading, encouraging remarks, and constructive feedback. **1) The von Neumann entropy and the graph entropy in the thermodynamic temperature difference are defined on undirected graphs. However, the temporal network can be generically directed. How can the proposed method address this issue?** It is a great idea. Temporal networks are abstract representations that are widely used for real-world dynamic systems. In fact, most research on temporal networks focuses on temporal dynamics and simplifies the networks as undirected ones, including the state-of-the-art baseline methods (TGAT, TGN, CAW, and NTW). Following these methods, we preprocess the network as undirected for a fair comparison. Our method can feasibly be extended to directed graphs. Specifically, by replacing the undirected degree matrix with an out-degree or in-degree matrix when computing the Laplacian matrix, the computation of von Neumann entropy and the thermodynamic temperature difference naturally acquires directionality. Thus, further inclusion of directionality in the study is reasonable and has excellent theoretical potential. We will investigate this deeply in the future. **2) Can the authors provide some intuitive explanations for the quantities $\mathcal{J}(G)$ and $\mathcal{K}(G)$ defined in Eq. (6) and Eq. (7), respectively?** $\mathcal{J}(G)$ and $\mathcal{K}(G)$ can be explained as the probabilities of a random walker traversing specific edges or cycles in the graph when starting a random walk on the graph. They are statistics about network structure. **3) In Line 259, the authors say that for large networks, the computational complexity can be reduced from $O(|V|^2)$ to $O(N^2)$, where $|V|$ is the number of nodes, and $N$ is a predefined number to control the budget. Will it make the approximation less meaningful when $N\ll |V|$?** The link prediction performance decreases when $N\ll |V|$. 
To quantify the extent of the impact, we provide a parameter sensitivity analysis of $N$ in the manuscript. The total node number of the MathOverflow dataset is 21688, which is also the maximum value of $|V|$ because nodes are added and deleted over time. We vary $N$ in the set \{50,100,150,200,250\}. The AUC results on MathOverflow fluctuate between 95.43% and 98.56%. The result presents a promising trade-off between computational efficiency and the loss of information due to sampling in actual applications. **4) In Line 329-331, the authors say that pre-computing some thermodynamic quantities can considerably reduce the computational overhead. However, the network evolves over time, and thus these quantities may be updated. Can the authors explain to what extent the pre-computation can reduce the computational cost?** In Line 329-331, the cost which can be reduced is the neighborhood search and the computation of von Neumann entropy in the first epoch of training. The history structure of the temporal network is fixed. If the adjacency list is sorted by time, it costs $O(\log D)$ to search the neighbors of a node at time $t$, where $D$ is the degree of the node. Given a test node pair $(u,v,t)$, neighbor search costs $O(K^l \times \log D)$ in total, where $K$ is the number of neighbors in the neighborhood aggregation process and $l$ is the number of neighborhood aggregation layers. Moreover, the computation of von Neumann entropy and thermodynamic temperature difference takes $O(N^2)$. We hash the test node pair $(u,v,t)$ as a key and its neighborhood tree and thermodynamic quantities as the value, reducing these costs to $O(1)$ after the first epoch. After training, the model stores the hash table. If pre-computing the thermodynamic quantities and saving the hash table as cache files, training will be faster when loading the hash table directly rather than building it in the first epoch. 
**5) The comment for figures.** We will follow the figure suggestion about the colors and position in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thanks. Your response has addressed my concern. **1)** Your discussion makes things clearer. Since this work is based on literature [27], I believe both the definition of von Neumann entropy and its approximation can be extended to directed graphs. **2)** Please add the explanation to the main text to make the formula more accessible to readers who are less familiar with these quantities. **3)** I have checked Figure 5(c) and understand when the approximation will be practical. **4)** Your complexity analysis and engineering efforts are appreciated.
Summary: The paper presents a new framework called ESSEN (Evolution StateS awarE Network) to measure the evolution of temporal networks using von Neumann entropy and thermodynamic temperature difference. Existing methods struggle to handle the time-varying nature of these networks, hindering their performance on complex evolving states. ESSEN utilizes an entropy-aware attention mechanism, contrastive learning, and a unique decoder called MoTE to improve recognition of network evolution states, showing effectiveness in link prediction tasks compared to state-of-the-art methods. Strengths: Clear motivation and problem statement. Innovative integration of domain knowledge from thermodynamics into GNNs. Weaknesses: Some core concepts require domain-specific knowledge to fully understand, but these concepts are not clearly defined, such as von Neumann entropy. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What is Von Neumann Entropy? Why Von Neumann Entropy? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful reading, positive comments, and constructive feedback. **1) What is Von Neumann Entropy?** In the revised manuscript, we will add more details about the definition of von Neumann entropy. Von Neumann entropy is a concept in quantum information theory. In the quantum context, von Neumann entropy quantifies the uncertainty associated with the state of a quantum system. This idea has been extended to the static graph domain. Specifically, the von Neumann entropy is computed from the density matrix for the states of the system under study. The density matrix is used to describe systems whose state is a mixture of pure quantum states $\left|\psi_i\right\rangle$, each with probability $p_i$. The density matrix is defined as $$ \rho=\sum_{i=1}^{|{V}|} p_i\left|\psi_i\right\rangle\left\langle\psi_i\right|, $$ where $|{V}|$ is the number of nodes. When defined in this way, the density matrix is Hermitian, i.e., $\rho=\rho^{\dagger}$, $\rho \geq 0$, and Tr[$\rho$] = 1, where $\dagger$ denotes the conjugate transpose. The density matrix plays an important role in the quantum measurement process and can be used to calculate the expectation values of measurable quantities. The von Neumann entropy is given by $S_{\textit{VN}}{(G)}=-\operatorname{Tr}(\rho \log \rho)$. For the graph domain, a density matrix for a graph or network can be obtained by scaling the combinatorial Laplacian matrix $\tilde{L}$ by the reciprocal of the number of nodes in the graph, i.e., $\rho=\frac{\tilde{L}}{|{V}|}$. The interpretation of the scaled normalized Laplacian as a density operator opens up the possibility of characterizing a graph using the von Neumann entropy. 
With the definition of the density matrix adopted by Severini et al., the von Neumann entropy can be computed from the normalized Laplacian spectrum as follows: $$ S_{\textit{VN}}{(G)}=-\operatorname{Tr}(\rho \log \rho)=-\sum_{i=1}^{|{V}|} \frac{\hat{\lambda}_i}{|{V}|} \log \frac{\hat{\lambda}_i}{|{V}|}, $$ where $\hat{\lambda}\_1$, $\ldots $, $\hat{\lambda}\_{|V|} $ are the eigenvalues of the normalized Laplacian matrix. This form of von Neumann entropy has been shown to be effective for network characterization; its approximate form has likewise proven effective for network characterization on static graphs. **2) Why Von Neumann Entropy?** Von Neumann entropy is effective for network characterization. As we discussed in our manuscript, von Neumann entropy is applied to describe the quantum statistics [1] and measure network irregularity [2] in a network system. It offers a novel method to study the properties of pure states and mixed quantum states [3]. Von Neumann entropy measurements play a crucial role in understanding network systems' structural and topological complexity. The ability of von Neumann entropy to capture information content aligns well with the changing nature of temporal networks. This information-theoretic perspective enhances our understanding of the network's evolution laws and capacity to transmit and store information over time. [1] Passerini, F., Severini, S.: Quantifying complexity in networks: the von Neumann entropy. International Journal of Agent Technologies and Systems (IJATS) 1(4), 58–67 (2009) [2] Passerini, F., Severini, S.: The von Neumann entropy of networks. Available at SSRN 1382662 (2008) [3] Anand, K., Bianconi, G., Severini, S.: Shannon and von Neumann entropy of random networks with heterogeneous expected degree. 
Physical Review E 83(3), 036109 (2011) **3) Some Concepts like von Neumann Entropy need to be more clearly defined.** In the revised manuscript, we will add more details about the entropy concepts and make it more straightforward.
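To make the spectral definition above concrete, here is a minimal numpy sketch of computing the von Neumann entropy from the normalized Laplacian spectrum. The 4-cycle example graph is our own illustration, not taken from the paper.

```python
import numpy as np

def normalized_laplacian(A):
    """L = I - D^{-1/2} A D^{-1/2} for a symmetric adjacency matrix A."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def von_neumann_entropy(A):
    """S_VN(G) = -sum_i (lam_i/|V|) ln(lam_i/|V|) over the normalized Laplacian spectrum."""
    n = len(A)
    lam = np.linalg.eigvalsh(normalized_laplacian(A))
    # Eigenvalues sum to |V| (no isolated nodes), so lam/n is a valid
    # probability vector; zero eigenvalues contribute 0 * ln 0 = 0.
    p = lam[lam > 1e-12] / n
    return float(-np.sum(p * np.log(p)))

# 4-cycle: normalized Laplacian eigenvalues are {0, 1, 1, 2}
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
S = von_neumann_entropy(A)  # = 1.5 * ln(2)
```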
Summary: The authors propose a novel method for performing inference tasks on dynamic network structures. Differentiating from previous literature, the proposed method capitalizes on the von Neumann entropy, which provides a set of indicators about the structural symmetries. In combination with a quadratic approximation of the von Neumann entropy, the authors utilize expressions of the thermodynamic temperature differences in order to create representations of the evolution states of the temporal network. Given the computed representations, the authors propose a decoder based on a mixture of thermodynamic experts, specifically the von Neumann entropy of the original graph, the von Neumann entropy of the virtual node graph, and the thermodynamic difference between the two networks. The experimental study showcases a very strong performance of the proposed ESSEN model, which outperforms the baselines (by a large margin in several tasks). However, unfortunately I was not able to assess the reproducibility of the results, since no code has been provided until the time of the present review. Strengths: - The experimental results suggest a very strong performance of the ESSEN model. - The idea of combining the von Neumann entropy of the original and virtual node graphs with the thermodynamic difference seems very interesting and can provide some insights on temporal networks. Weaknesses: - The authors do not provide any clear theoretical indication of the contribution of von Neumann entropy for representations of dynamic networks. - It would be really helpful for the community if the code for the reported results were published by the time of the rebuttal. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How does the approximation of von Neumann entropy impact the exact entropy computation? How would the representations of the dynamic networks look given the actual entropy terms? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: No limitations of the proposed method are discussed. No discussion on potential negative societal impact is discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the careful reading and helpful comments. Our anonymous code link is provided in the official comment for Area Chairs. **1) Theoretical indication of the contribution of Von Neumann entropy for representations of dynamic networks.** Von Neumann entropy has shown its efficacy for network characterization in the static graph [1]. Our work innovatively introduces von Neumann entropy as a framework for analyzing evolution states. Analyzing evolution states is crucial for the model to better fit network link development law. The link prediction probability $P(u,v|G(t))$ is influenced by the network evolution state at time $t$, and the evolution state can vary as the graph's structure changes over time. Von Neumann entropy measures the global uncertainty or randomness associated with a given network evolution state. In the context of link prediction, this uncertainty arises from the various factors that influence whether a link will be established, including past interactions and structural changes in the network. As the network evolves, new information is introduced through the formation of new connections, while existing connections may become obsolete or less relevant. Von Neumann entropy quantifies the information within the temporal network at different time points, shedding light on the information acquisition or loss rate by capturing $\sum_{(u, v) \in E} \frac{1}{d_u d_v}$ in the approximate expression. In the revised manuscript, we will add more details about the theoretical indication of the von Neumann entropy's contribution and make it more straightforward. [1] : Passerini, F., Severini, S.: Quantifying complexity in networks: the von Neumann entropy. 
International Journal of Agent Technologies and Systems (IJATS) 1(4), 58–67 (2009) **2) How does the approximation of von Neumann entropy impact the exact entropy computation?** First, the exact von Neumann entropy is defined as $$ S_{\textit{VN}}(G)=-\sum_{j=1}^{|V|} \frac{\hat{\lambda}_j}{|V|} \ln \frac{\hat{\lambda}_j}{|V|}, $$ where $\hat{\lambda}\_{1}$, $\ldots $, $\hat{\lambda}\_{|V|}$ are the eigenvalues of the normalized Laplacian matrix. The Taylor expansion of $\ln \frac{\hat{\lambda}_j}{|V|}$ is $$ \left(\frac{\hat{\lambda}_j}{|V|}-1\right)-\frac{1}{2}\left(\frac{\hat{\lambda}_j}{|V|}-1\right)^2+\frac{1}{3}\left(\frac{\hat{\lambda}_j}{|V|}-1\right)^3- \frac{1}{4}\left(\frac{\hat{\lambda}_j}{|V|}-1\right)^4+\cdots . $$ The key approximation step is keeping the first term of the Taylor expansion and discarding the remaining terms, which contribute only a small amount. That is, $\ln \frac{\hat{\lambda}_j}{|V|}$ is approximated by $\left(\frac{\hat{\lambda}_j}{|V|}-1\right)$, and the resulting entropy contribution holds well when $\frac{\hat{\lambda}_j}{|V|}$ is close to 0 or 1. Then we obtain $$ S_{\textit{VN}}(G)=-\sum_j \frac{\hat{\lambda}_j}{|V|} \ln \frac{\hat{\lambda}_j}{|V|} \simeq \sum_j \frac{\hat{\lambda}_j}{|V|}\left(1-\frac{\hat{\lambda}_j}{|V|}\right) =\frac{1}{|V|} \sum_j \hat{\lambda}_j-\frac{1}{|V|^2} \sum_j \hat{\lambda}_j^2 . $$ This expression can then be written in terms of node-degree combinations on the edges of the graph and computed with quadratic complexity. In the appendix, we provide the full derivation of the approximate von Neumann entropy on the temporal network. **3) How would the representations of the dynamic networks look given the actual entropy terms?** Graphs with low entropy tend to be tree-like or string-like and have more low-degree nodes. 
Those with high entropy have high-degree nodes and tend to be fully connected. We provide analysis figures about the von Neumann entropy and the structural evolution process in the attached PDF.
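The truncated-Taylor approximation above reduces to degree statistics on the edges: since the normalized Laplacian has trace $|V|$ and $\operatorname{Tr}(\hat{L}^2) = |V| + 2\sum_{(u,v)\in E} 1/(d_u d_v)$ (assuming no isolated nodes), the approximate entropy can be computed without any eigendecomposition. A minimal numpy sketch, with a 4-cycle as our own example graph:

```python
import numpy as np

def approx_vn_entropy(A):
    """Quadratic approximation: 1 - 1/n - (2/n^2) * sum over edges of 1/(d_u d_v),
    assuming no isolated nodes (so the normalized-Laplacian trace equals n)."""
    n = len(A)
    d = A.sum(axis=1)
    edge_term = sum(1.0 / (d[u] * d[v])
                    for u in range(n) for v in range(u + 1, n) if A[u, v])
    return 1.0 - 1.0 / n - 2.0 * edge_term / n ** 2

# 4-cycle: all degrees are 2, so the four edges each contribute 1/4
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
S_approx = approx_vn_entropy(A)  # 1 - 1/4 - 2/16 = 0.625
```

For this graph the normalized Laplacian spectrum is {0, 1, 1, 2}, and the spectral form $(1/n)\sum\hat{\lambda}_j - (1/n^2)\sum\hat{\lambda}_j^2 = 1 - 6/16$ gives the same 0.625, confirming the degree-based rewriting.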
null
null
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for dedicating their valuable time and providing insightful comments. We are greatly pleased to receive some positive reviews. Specifically, we appreciate that the reviewers find our work novel (w7vF), well motivated (oGt7), inspiring (jRZR), well presented (jRZR), and with promising experimental results (w7vF, jRZR). We will incorporate the suggestions and address the concerns in the revision. We have done our best to provide detailed responses addressing the concerns raised by each reviewer. Specifically, the primary responses are outlined below: - We provide figures of the connection between network structure change and von Neumann entropy. - We introduce the von Neumann entropy definition more comprehensively. - We elaborate on the contributions of von Neumann entropy to network representation learning. - We precisely explain how the approximation process impacts the computation of von Neumann entropy. - We provide more details of our method, which include providing more insights into the thermodynamic parameters $\mathcal{J}(G)$ and $\mathcal{K}(G)$, conducting an in-depth analysis of the relation between $|V|$ and $N$, elucidating the efficacy of pre-computation, and discussing the extensibility to directed dynamic graphs. Moreover, we would like to emphasize our motivation: - Von Neumann entropy measurements play a pivotal role in comprehending the structural and topological intricacies of network systems. Its efficacy in network characterization has been well-established. The ability of von Neumann entropy to capture information content aligns well with the changing nature of temporal networks. To the best of our knowledge, research has yet to explore the application of von Neumann entropy in the context of temporal networks. Our work pioneers the extension of approximate von Neumann entropy to temporal networks, aiming to stimulate future research in this area. 
- Accurate and adaptive evolution-state estimation is paramount for link prediction on temporal networks. By expanding von Neumann entropy to temporal networks, we aim to mitigate the following problems in evolution-state estimation: a) Different networks exhibit substantial variations in their evolution laws. Moreover, the evolution states may change over time within the same network. b) Temporal networks accumulate increasing numbers of nodes and edges as time progresses, leading to a rapidly expanding neighborhood for each node. Pdf: /pdf/e9d44ff2c5d3000f5312eafbb70d977551980897.pdf
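To make the relation between graph structure and von Neumann entropy discussed in this rebuttal concrete (e.g., that denser, more fully connected graphs carry higher entropy), here is a minimal, self-contained sketch. This is our own illustrative code, not the paper's implementation: it uses the exact Laplacian-spectrum definition rather than the paper's approximation, and the function name is an assumption.

```python
import numpy as np

def von_neumann_entropy(adj):
    """Exact von Neumann entropy of an undirected graph.

    Uses the density matrix rho = L / trace(L), where L is the
    combinatorial Laplacian, and S = -sum(lam * ln(lam)) over the
    eigenvalues lam of rho (with 0 * ln(0) taken as 0).
    """
    adj = np.asarray(adj, dtype=float)
    lap = np.diag(adj.sum(axis=1)) - adj     # combinatorial Laplacian
    rho = lap / np.trace(lap)                # unit-trace density matrix
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]                   # drop (near-)zero eigenvalues
    return float(-(lam * np.log(lam)).sum())

# A fully connected triangle vs. a 3-node path:
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(von_neumann_entropy(triangle))  # ln(2) ~ 0.6931
print(von_neumann_entropy(path))      # ~ 0.5623
```

Consistent with the observation in the review thread, the denser, fully connected graph has the higher entropy.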
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Statistical Knowledge Assessment for Large Language Models
Accept (poster)
Summary: This paper proposes a statistical approach called KaRR to assess the factual knowledge contained in Generative Language Models (GLMs). The authors use a large-scale assessment suite with 994,123 entities and 600 relations to evaluate 14 GLMs of various sizes. The results show that KaRR exhibits a strong correlation with human assessment and achieves lower variance across varying prompts. The experiments also reveal interesting insights into the scaling law of GLMs and the impact of tuning on instruction-following data. Strengths: 1. The proposed statistical approach is novel and effective in assessing the factual knowledge contained in GLMs. The proposed method effectively considers different surface forms of subject, object, and relation. 2. The large-scale assessment suite used in the experiments is comprehensive and orders of magnitude larger than prior studies. Weaknesses: 1. Some of the paper's findings, such as scaling laws of knowledge and fine-tuning instructions to improve consistency, have been found in previous work. 2. Previous work has systematically assessed the consistency of the same facts under multiple prompts, but the facts considered in this paper are larger in scale and prompts are more diverse. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate your effort in reviewing our paper and your acknowledgment of our paper’s contribution. We are very glad that you liked our statistical approach, the large-scale assessment suite, and the insightful findings. Our response to your further comments is as follows: **Q1 "Some of the paper's findings, such as scaling laws of knowledge and fine-tuning instructions to improve consistency, have been found in previous work."** We want to clarify our motivation for scaling the model size and highlight that our results are different from the findings in previous work. **Scaling laws of knowledge:** There may be some misunderstanding. Our primary objective in examining the evaluation results of scaling model size is to demonstrate that the knowledge assessment findings align with previous studies on scaling laws, thereby validating the performance of our approach. In addition, our research offers some unique insights. The figure in Table 2(b) is the first to show the distinct scaling-law curves of model knowledge for different large generative language models. Besides, Figure 5 illustrates the detailed differences in model knowledge across different relations, providing an interpretation of the disparities among models of 350M, 2.7B, and 175B. **Fine-tuning instructions:** Yes, previous studies have linked fine-tuning on instructions to consistency, but our finding is novel: tuning on instruction-following data may actually compromise the model's capability to consistently generate factually correct text. While instruction-tuning does enhance the understanding of instructions, it is not surprising that it also improves the general consistency of LLMs in response to prompts, as evidenced by earlier research. 
Our study, conversely, examines the consistency of knowledge, specifically in generating factually accurate answers consistently. Contrary to general consistency, our results reveal potential trade-offs between instruction understanding ability and consistent mastery of knowledge. We further present intriguing examples of this point in Table 1, located in Appendix 4. **Other findings:** In addition to these two findings, we have also uncovered many other findings that have not been previously investigated. For example, the spurious correlation in knowledge assessment (Table 3 (b)), the knowledge evaluation variance towards different prompts (Table 3 (a)), etc. We hope that these findings will enhance the research community's understanding of the knowledge assessment of GLMs and encourage further investigation. Again, we thank you for acknowledging that our approach is novel and effective, and experimental results are insightful. If you have any additional suggestions or concerns, we would be happy to discuss them further with you. **Reference:** [1] Petroni, Fabio, et al. Language models as knowledge bases? [2] Dhingra, Bhuwan, et al. Time-aware language models as temporal knowledge bases. [3] Roberts, Adam, Colin Raffel, and Noam Shazeer. How much knowledge can you pack into the parameters of a language model? --- Rebuttal Comment 1.1: Title: Follow up to Reviewer 4epC Comment: Dear Reviewer 4epC, We would like to thank you again for your reviews and your acknowledgment of our novelty. We have added replies to the weakness you mentioned and highlighted our other findings. Since the rebuttal deadline is approaching soon, we would appreciate it if you could let us know if our responses have addressed your concerns satisfactorily. If your concerns have not been resolved, could you please let us know about it so that we have the opportunity to respond before the deadline? We would really appreciate it if you are willing to increase your score. 
And we would be happy to have any follow-up discussions or address any additional concerns. Thanks very much! Looking forward to your reply. Paper9365 Authors
Summary: The paper introduces an automatic evaluation metric to assess the amount of factual knowledge kept by large language models (LLMs). This proposed metric considers various surface forms of factual knowledge presentation, allowing for an evaluation that not only measures the accuracy of the models in terms of factual knowledge but also considers the prediction robustness of the knowledge. Later, the authors demonstrated that this metric has a higher correlation with human annotations than previously proposed metrics. Additionally, the paper includes several robustness analyses for the metric, such as examining the impact of the prompting format and its relationship to co-occurrence statistics. And it shows the new metric's effectiveness compared to previous metrics. Strengths: 1. The proposed metric is commendable as it goes beyond capturing mere accuracy and considers model performance consistency. This aspect is crucial in evaluating the factual knowledge of large language models, as it reflects their ability to provide correct information consistently. 2. The strong invariance results achieved by the automatic evaluation metric compared to other baselines are a notable strength. 3. The paper's comprehensive analysis of the proposed metric is great. By covering multiple aspects, the authors thoroughly evaluate the metric's performance. This level of analysis contributes to a better understanding of the metric's strengths, limitations, and overall effectiveness in assessing the factual knowledge of large language models. Weaknesses: 1. The human correlation score of 0.43 for the automatic evaluation metric may be considered relatively low. 2. The proposed metric is not interpretable. To improve the interpretability of the score, calculating an oracle score and then comparing the current score to the oracle score ratio could be a useful approach. 
This ratio can aid in better understanding the effectiveness of the metric and provide a clearer picture of its performance relative to the best achievable outcome. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In The KaRR scoring details section in Appendix, you mentioned the selection of a threshold based on human judgment alignment. It would be valuable to explore the impact of different threshold values on the metric's performance and investigate the generalizability of the chosen threshold on other sets of knowledge. Does certain threshold value also result in high correlation in other datasets? 2. Minor suggestion: compress Figure 5 by representing the average performance of the best and worst relations. This modification could improve the readability of the figure and make it easier to interpret the performance trends. 3. Adding the best and worst relations to the appendix would help understand the model performance. I would be curious to see it personally. 4. There is a missing number in the equation mentioned in line 43 of the appendix. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No, the limitation of the work is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer zX87 for the positive recommendation as well as the valuable suggestions. We really appreciate your kind words that our metric is commendable and our analysis is comprehensive. Below we would like to give detailed responses to each of your comments. **Q1 "The human correlation score of 0.43 for the automatic evaluation metric may be considered relatively low."** We'd like to clarify that 0.43 Kendall's tau correlation is a relatively high score for an automatic metric. To resolve possible misunderstandings, it's worth mentioning Kendall's tau correlation scores in [1][2]. As our evaluation of the KaRR metric is similar to the segment-level metric evaluation for machine translation, below we list Kendall's tau scores for some widely accepted metrics for machine translation evaluation for comparison. As shown, Kendall's tau is a relatively strict metric compared to Pearson’s correlation, and 0.43 is a relatively high score for automatic metrics.

Metric | Pearson’s correlation | Kendall's tau
------------- | ------------- | -------------
METEOR | 0.484 | 0.324
NLEPOR | 0.483 | 0.281
SENTBLEU-MOSES | 0.465 | 0.266
DEP-REF-EX | 0.453 | 0.307

*(Pearson’s correlation and Kendall’s τ between WMT-13 segment-level metrics and human assessment for Spanish-to-English. Please refer to Table 2, Sec 4.2 in [1] for the whole table.)*

Metric | Kendall's tau
------------- | -------------
HTER | 0.4324
HMEANT gold - monolinguals \* | 0.4324
HMEANT auto - monolinguals \* | 0.3964
BLEU / METEOR / TER / PER | 0.1982

*(Sentence-level correlation with human adequacy judgments. The weights for individual roles in the metric are tuned by optimizing the correlation. Please refer to Table 8, Sec. 8.2 in [2] for the whole table.)*

In addition, we compared the Kendall-tau correlation of KaRR with humans and the baseline metrics for model knowledge evaluation such as LAMA with humans in Table 1 (b). 
KaRR shows a much stronger correlation (KaRR: 0.43 versus LAMA@1: 0.17, K-prompts: 0.32). Besides, the Recall of finding human-detected false knowledge in Table 1 (b) (KaRR: 95.18% versus LAMA@1: 83.25%, K-prompts: 78.00%) further supports KaRR's correlation with human evaluation. **Q2 "To improve the interpretability of the score, calculating an oracle score and then comparing the current score to the oracle score ratio could be a useful approach."** Many thanks for your insightful suggestion! Yes, we agree that calculating an oracle score and then comparing the current score to the oracle score ratio would be more interpretable. However, it is worth noting that obtaining the oracle score for each GLM on every piece of knowledge is hardly feasible. We can only sample a portion of knowledge and obtain the human evaluation results (Sec. 4.4), but the cost of the human evaluation is substantial. To improve the interpretability of the score, we have updated our draft. In the latest version, we have included the quantitative results for both $KaRR_r$ and $KaRR_s$. **Q3 "In the KaRR scoring details section in Appendix, you mentioned the selection of a threshold based on human judgment alignment. It would be valuable to explore the impact of different threshold values on the metric's performance and investigate the generalizability of the chosen threshold on other sets of knowledge. Does a certain threshold value also result in high correlation on other datasets?"** Thank you for making a great point. As suggested, we added experiments on different threshold values and the generalizability of the chosen threshold. The results below highlight the significance of selecting a human-aligned threshold for evaluation accuracy, as minimal human input enhances the correlation with human assessments. If the threshold is too low, the criteria become lenient, and variance slightly increases. 
Conversely, an excessively high threshold results in near-100% Recall for false knowledge detection due to strictness but slightly reduces the correlation with human judgment. Encouragingly, our approach displays commendable generalization for the chosen threshold on other sets of knowledge. This implies that once we select a threshold in alignment with human judgment in a specific knowledge base, it can be directly applied to other knowledge bases.

Threshold | KaRR Score | Variance | Recall | Kendall's τ
------------- | ------------- | ------------- | ------------- | -------------
22 (\*human-aligned) | 12.27 | 0.67 | 95.18 | 0.43
8 | 33.05 | 0.87 | 84.20 | 0.28
16 | 17.20 | 0.69 | 90.05 | 0.36
32 | 7.93 | 0.64 | 99.53 | 0.32

*(Impact of different threshold values.)*

Knowledge base | KaRR Score | Variance | Recall | Kendall's τ
------------- | ------------- | ------------- | ------------- | -------------
T-REx | 12.27 | 0.67 | 95.18 | 0.43
Google-RE | 9.76 | 0.54 | 91.78 | 0.39
ConceptNet | 10.02 | 0.68 | 87.06 | 0.39
SQuAD | 9.98 | 0.80 | 85.21 | 0.34

*(Generalizability of the chosen threshold on other sets of knowledge. For the three new databases, facts are randomly sampled and manually aligned since some of the knowledge bases do not correspond with entity ids in Wikidata.)*

Many thanks for your constructive suggestions on Figure 5 and the appendix. As suggested, we've added the best and worst relations for each GLM in Table 2(a) to Appendix 6, and added the equation number "(8)" in line 43 of the appendix. Thanks again for your detailed and constructive comments! We hope our answers have addressed your concerns. **Reference:** [1] Graham, Yvette, Timothy Baldwin, and Nitika Mathur. Accurate evaluation of segment-level machine translation metrics. [2] Lo, Chi-kiu, and Dekai Wu. MEANT: An inexpensive, high-accuracy, semi-automatic metric for evaluating translation utility based on semantic roles. 
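The claim in this rebuttal that Kendall's τ is stricter than Pearson's correlation can be checked directly: τ only rewards pairwise rank agreement, so a single swapped pair costs a full 2/10 of the score on five items, while Pearson's r barely moves. A minimal self-contained sketch (our own toy data and helper names, not the paper's evaluation code):

```python
import math
from itertools import combinations

def kendall_tau(x, y):
    """Kendall's tau (tau-a, assuming no ties): fraction of pairs
    ranked in the same order minus the fraction ranked oppositely."""
    pairs = list(combinations(range(len(x)), 2))
    s = sum(1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1 for i, j in pairs)
    return s / len(pairs)

def pearson_r(x, y):
    """Pearson's linear correlation coefficient."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

human = [1, 2, 3, 4, 5]             # human ranking of five items
metric = [1.1, 2.0, 2.9, 4.5, 4.2]  # metric scores; top two items swapped

print(kendall_tau(human, metric))   # 0.8 -- one discordant pair out of ten
print(pearson_r(human, metric))     # ~0.955 -- barely penalized
```

This is why a Kendall's τ of 0.43 against human judgment can coexist with a considerably higher Pearson's r on the same data.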
--- Rebuttal Comment 1.1: Title: Thanks for the reply Comment: I've read the author's response and will keep my score. I appreciate the explanation of Kendall's tau score and the additional analysis! --- Reply to Comment 1.1.1: Title: Thanks for the positive feedback Comment: We sincerely thank you for the positive feedback and we are grateful for the time you spent on our submission and rebuttal. We are also delighted that the explanation of Kendall's tau score and the analysis have been acknowledged. We hope our paper can provide contributions to further understanding and exploring the knowledge of GLMs. Thanks again!
Summary: The paper proposes KaRR, a statistical approach to assess factual knowledge for generative language models based on graphical models. An assessment suite is also proposed for future research. Experiments are conducted with 14 popular large language models and comprehensive analyses are also conducted to reveal related properties. Strengths: - This paper released a dataset for assessing factual knowledge in generative language models, which is large-scale (millions of entities and text aliases) and could be used in future works. - This paper proposed a statistical score KaRR based on graphical models and KaRR aligns well with human preferences as shown with human evaluation. - Lots of popular large language models are experimented with and the analyses are interesting, e.g., Table 2(b). Weaknesses: - The knowledge assessment focuses on entity-aware knowledge, which could be a relatively limited knowledge form. The current LLMs are good at identifying entity-aware knowledge. The major hallucinations actually come from numbers, dates, etc. Not a strict weakness, just wondering if KaRR could be extended to these aspects. - There are in total 600 relations in the assessment suite. Does that mean the suite can only be employed for the specific 600 relations? What if there are new relations? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The conclusion of "instruction tuning impairs knowledge reliability" should be considered more carefully (line 236). There are at least two factors contributing to the KaRR difference between Alpaca and Vicuna, data quality and tuning style. You do not know exactly whether data or tuning style is the major reason. - Minor comments: - line 228, repeated "KaRR score". - line 230, should be "with a 3.92 KaRR score difference" - GLM in Table 2 and GLM across the paper are a little bit confusing. - What are the distributions of relations in the dataset? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Refer to previous weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer adWh for the positive comments on our method and analyses as well as the valuable suggestions. We would like to give detailed responses to each of your comments. **Q1. “The knowledge assessment focuses on entity-aware knowledge, which could be a relatively limited knowledge form. The current LLMs are good at identifying entity-aware knowledge. The major hallucinations actually come from numbers, dates, etc. Not a strict weakness, just wondering if KaRR could be extended to these aspects.”** Thank you for this interesting point. Yes, we do rely on entity-aware knowledge; however, our definition of "entity" encompasses a wide range of categories, including dates, natural features, numbers, laws, and more. As a result, KaRR already incorporates the evaluation of facts associated with such diverse entities. In fact, dates can have aliases as well. For example, the fact <Barack Obama, date_of_birth, 4 August 1961 (Q69285218)> can be expressed in various ways within the prompt, including "August 4, 1961," "4 August 1961," and "1961-08-04." Similarly, for numbers, the number 7 (Q23350) can be represented as "the number 7," "seven," "number seven," "number 7," and so on. **Q2. “There are in total 600 relations in the assessment suite. Does that mean the suite can only be employed for the specific 600 relations? What if there are new relations?”** Thank you for highlighting the potential confusion. We would like to clarify that our statistical knowledge assessment approach is not limited to the 600 relations used in our experiments. The graphical model for knowledge assessment and the KaRR metric, as described in Section 3, can be implemented with various entities or relation types. For instance, researchers focusing on medical relations can easily substitute relations with predicates from a medical knowledge graph, such as RepoDB[1] and SemMedDB[2]. 
If new relations emerge, we can generate relation templates for these new relations following the same process outlined in Section 4.1, and subsequently incorporate them into the relation and fact sets. As our method is designed to serve as a general framework for GLM knowledge assessment, it can be flexibly adapted to different relation types based on specific requirements. **Q3. “The conclusion of "instruction tuning impairs knowledge reliability" should be considered more carefully (line 236). There are at least two factors contributing to the KaRR difference between Alpaca and Vicuna, data quality and tuning style. You do not know exactly whether data or tuning style is the major reason.”** Thanks for the great suggestion and sorry for the confusion. The conclusion is derived from the comparison of **the original LLaMA and the Alpaca**, as the Alpaca is finetuned on the LLaMA with instruction-following data. We agree that there are other factors contributing to the KaRR difference between Alpaca and Vicuna and we have utilized the phrase "could lead to" (Line 234) to convey our hypothesis regarding the potential cause. To be more rigorous, we have modified the claims accordingly in the revised version (i.e., "For larger GLMs, a comparison between the original LLaMA and the Alpaca model reveals that instruction-tuning might influence the model's ability to generate consistent and correct knowledge"). **Q4 "What are the distributions of relations in the dataset?"** Thanks for this great point! We've incorporated the following table into our Appendix 7. The distribution of relations exhibits a long-tail phenomenon, which is consistent with the distribution of relations in real-world knowledge and the distribution of relations within knowledge graphs[3].

Top-k | \# Avg. aliases | \# Related facts | Proportion of related facts
------------- | ------------- | ------------- | -------------
50 | 6.96 | 12638356 | 90.05\%
100 | 5.97 | 13538967 | 96.47\%
150 | 5.54 | 13802467 | 98.35\%
200 | 5.22 | 13921742 | 99.20\%
250 | 4.84 | 13977899 | 99.60\%

*(Distributions of top-k relations in the dataset.)*

Moreover, Figure 2 in the T-REx paper[3] demonstrates the distribution of the number of alignments created for each relation within the T-REx dataset. T-REx notably surpasses other datasets in terms of the number of examples provided, not only for the most prevalent predicates but also for those found in the long tail. **Q5. Typo and writing** Thank you for your gracious help in identifying typos and writing problems in our work. We have implemented all required changes in the newest version. **Reference:** [1] Brown, Adam S., and Chirag J. Patel. A standard database for drug repositioning. [2] Kilicoglu, Halil, et al. SemMedDB: a PubMed-scale repository of biomedical semantic predications [3] Elsahar, Hady, et al. T-REx: A large scale alignment of natural language with knowledge base triples --- Rebuttal Comment 1.1: Title: Further comments and discussions will be appreciated! Comment: Dear Reviewer adWh, Thank you for your valuable time reviewing our work and for your constructive feedback. We posted our response to your comments a week ago, and we wonder if you could kindly share some of your thoughts so we can keep the discussion rolling to address your concerns if there are any. In the previous response, 1. We clarified the scope of our "entity", which encompasses a wide range of categories, including dates and numbers as you mentioned. This enables our knowledge assessment to cover a wide range of knowledge. 2. We outlined the procedure for employing our method when new relations emerge. It is important to note that our method can be flexibly adapted to different relation types based on specific requirements. 3. 
To eliminate any misunderstanding, we explained the reasoning behind the conclusion on instruction-tuning and, as suggested, revised the claim to be more rigorous. 4. We addressed the distribution of relations in the dataset by providing a table on the proportion of related facts for the top-k relations. This table has been incorporated into Appendix 7 in the revised version of our paper. And we've modified the typos and writing problems you mentioned, thank you for your gracious help again. We would appreciate it if you could kindly take a look at both the revision and our response to your comments. We would really appreciate it if you are willing to increase your score. If you have any further questions, we are happy to discuss them! Best regards, Authors --- Rebuttal Comment 1.2: Title: Thanks for the response. Comment: I have read the author's responses and other reviewers' responses as well. I appreciate the further statistics and explanations for my questions. It would be great to include them in a future version of this paper.
Summary: This paper proposes a statistical method to probe the knowledge in generative language models, which aims at connecting symbolic knowledge and GLMs' text-format generation. More specifically, the KaRR comprises two components with regard to specifying the relation and subject entity. The authors also present the graphical model for model implementation on the text. The results show that the knowledge in GLMs follows the scaling law, but when the model is finetuned on instruction-following data, it may compromise the model's ability to consistently generate factually correct text. Strengths: 1. Very important and interesting problem; assessing the knowledge stored in generative language models is challenging and worth studying 2. The proposed KaRR method shows strong robustness to prompt variance 3. Interesting findings: instruction-following data compromises the model’s capability to generate factually correct answers. Weaknesses: 1. This work only focuses on generating the object; I believe the author can assess the model's ability to generate the subject (by reversing the triples/facts) to gain a more comprehensive assessment of knowledge stored in GLMs. 2. The average alias count for each subject is approx 1.39, and for each object it is about 1.78, which I believe could not support the "diverse" claim. 3. The proposed KaRR’s design to measure reliability is not clear enough. 4. In line 38, the authors claim "Prior methods are designed for masked language models (MLMs) and are incapable of measuring GLMs.". However, to the best of my knowledge, there are several works about assessing the knowledge in GLMs, such as [1, 2]. [1] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., & Steinhardt, J. (2020). Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300. [2] Dhingra, B., Cole, J. R., Eisenschlos, J. M., Gillick, D., Eisenstein, J., & Cohen, W. W. (2022). Time-aware language models as temporal knowledge bases. 
Transactions of the Association for Computational Linguistics, 10, 257-273. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. I believe it would be interesting to see the model's knowledge across different domains, such as encyclopedia, biomedical, etc. 2. Since high-quality aliases are often difficult to obtain, it would be interesting to see the impact of alias, such as alias count, etc. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer e9uD for your review and are grateful for the time you spent on our submission. We are also glad you think our research problem is important and our findings are interesting. Below we would like to give detailed responses to each of your comments. **Q1 “This work only focuses on generating the object, I believe the author can assess the model's ability to generate the subject (by reversing the triples/facts) to gain a more comprehensive assessment of knowledge stored in GLMs.”** Thank you for your comments. It is worth mentioning that the reversed facts have already been included in our knowledge base, T-REx. For example, both <Barack Obama (Q76), spouse, Michelle Obama (Q13133)> and <Michelle Obama (Q13133), spouse, Barack Obama (Q76)> are covered. Please note that not all facts can be reversed, so the final numbers of subject and object entities we obtain are slightly different. Moreover, our approach is intrinsically not restricted to particular entity types, as long as corresponding triplets are constructed. Formally, our method incorporates the probabilities $P(e_2|e_1, r)$, $P(e_2|r)$, and $P(e_2|e_1)$, where $e_1$ or $e_2$ can serve as either subject or object and maintain symmetric properties. We implement this method by employing naturally existing triplets within the current knowledge base, which covers a wide range of knowledge. **Q2 "The average alias count for each subject is approx 1.39, and for each object it is about 1.78, which I believe could not support the "diverse" claim.”** Thank you for highlighting the potential confusion. We'd like to clarify that real-world alias distribution exhibits a long-tailed pattern, with a substantial portion of entities having only one alias. This significantly affects the average number of aliases. However, the most frequent entities typically possess more than five aliases, related to a large proportion of facts (details shown in the table below). 
|\# Top-k | Type | \# Avg. aliases | Proportion of related facts |
|------------- | ------------- | ------------- | ------------- |
|50 | subject | 7.62 | 12.10\% |
|100 | subject | 6.73 | 16.13\% |
|500 | subject | 4.68 | 28.34\% |
|1000 | subject | 3.70 | 33.70\% |
|50 | object | 5.06 | 39.84\% |
|100 | object | 4.79 | 47.35\% |
|500 | object | 3.80 | 65.44\% |
|1000 | object | 3.48 | 71.93\% |

**Q3 “KaRR’s design to measure reliability is not clear enough.”** As mentioned in Lines 24-29 and illustrated in Fig. 1, reliability refers to the ability to consistently generate knowledge-correct text towards various possible prompts with the same semantics. To measure reliability, we take multiple text forms for the same entity or relation into knowledge assessment and build the graphical model of text forms and symbolic knowledge (triplets). As mentioned in Sec. 3.3, we expand the KaRR metrics based on the graphical model (Eq. 4-9), so as to evaluate a GLM on a single piece of knowledge with various possible prompts. **Q4 “There are several works about assessing the knowledge in GLMs, such as [1, 2].”** Thank you for mentioning related works. We'd like to clarify that existing methods, both open-form and closed-form, are tailored for specific models or tasks, but not well-suited for a comprehensive knowledge assessment of most GLMs. Open-form methods like TEMPLAMA [2] probe factual knowledge in MLMs and an exceptional GLM, T5, using a cloze-test format. However, they're not applicable to most GLMs without masked token/span prediction objectives (see Lines 35-42). Closed-form methods, such as multiple-choice questions [1], evaluate domain-specific knowledge-utilization and problem-solving abilities rather than assessing knowledge itself. These methods may also be biased toward specific option numbers. To avoid confusion, we've revised the statement to: "Prior methods target MLMs and don't provide a universal solution for assessing GLMs' knowledge." 
**Q5 "I believe it would be interesting to see the model's knowledge across different domains, such as encyclopedia, biomedical, etc."** Thank you for your constructive comments. We agree that it is valuable to assess and analyze model knowledge across different domains. Due to the lack of domain-specific entity aliases, we focus on encyclopedic knowledge using T-REx and Wikidata. Nonetheless, our proposed graphical model and KaRR metric provide versatile solutions for knowledge assessment, adaptable to diverse domains by replacing the knowledge base with relevant domain-specific data. This flexible approach sets the stage for future research in domain-specific knowledge evaluation. **Q6 "Since high-quality aliases are often difficult to obtain, it would be interesting to see the impact of alias, such as alias count, etc."** Thank you for making a great point. As suggested, we added experiments using different numbers of aliases. The results (listed in the following table) show that a larger number of aliases decreases the variance of the knowledge assessment, which is consistent with our intuition and the analysis of the sampling number K in Sec. 7.

| \# Avg. aliases | KaRR Score | Variance |
| ------------- | ------------- | ------------- |
| 1 | 19.67 | 0.96 |
| 2 | 18.98 | 0.72 |
| 4 | 18.78 | 0.55 |
| 8 | 18.82 | 0.51 |

*(GPT2-XL on 500\*20 facts of 500 frequent entities.)* Overall, we greatly appreciate your thoughtful comments on our paper. We hope our answers have addressed your concerns. We have revised the paper to address the issues you mentioned in your comments in the latest version. **Reference:** [1] Dhingra, B., Cole, J. R., Eisenschlos, J. M., Gillick, D., Eisenstein, J., & Cohen, W. W. (2022). Time-aware language models as temporal knowledge bases. [2] Hendrycks, D., Burns, C., Basart, S., Zou, A., Mazeika, M., Song, D., & Steinhardt, J. (2020). Measuring massive multitask language understanding.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents a framework for the quantitative assessment of the knowledge captured by large language models, which includes a proposed metric and a large set of relations. Given subject-relation-object triplets, a basic approach to quantify the model's knowledge would be to use the probability of the object entity being generated, given the subject and the relation. This naive approach has shortcomings, which the authors address in the following way: 1. Entities can have multiple synonyms, which are important to consider to accurately assess consistency. The authors therefore augment their proposed evaluation suite with a set of aliases for each entity, extracted from Wikidata, and adapt the metric to take these into account. 2. As this kind of probing often suffers from spurious correlations, the authors propose the "knowledge assessment risk ratio" metric, which also takes into account the expected generation probability when either the subject or the relation entity is not specified. The authors thoroughly evaluate the proposed approach on 14 generative models. They additionally measure the effectiveness of their approach compared to other pre-existing metrics, showing strong correlation with human judgement and robustness of the metric towards prompt variation. Strengths: * This is a sound and extensively evaluated approach towards the automatic quantitative assessment of the knowledge learned by LLMs. This method displays strong correlation with human judgements. While it cannot replace human annotation entirely, it should be extremely useful to the community as a cheaper, faster automatic way of performing knowledge assessment, much like how BLEU can be used during model development to validate the performance of translation models. * The authors should be commended for calling out robustness (consistency of generation given similar prompts).
This aspect is crucial for real-world applications of such models, and is often ignored in similar works. Weaknesses: * The proposed framework is limited to assessing knowledge in a fairly simplistic way, via the prediction of entities in subject-relation-object triplets. It would have been interesting to hear more about the limitations of such an approach, taking into account e.g. slightly more complex types of queries (see e.g. arXiv:2305.01157). Very minor: * lines 141-144, 148, 202, 291, 313-315: straight quotes should be turned into curly quotes Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * Do you think your proposed approach could be effective as a validation metric for the training of LLMs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The authors did a decent job of addressing limitations, and I don't expect any potential negative societal impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer bA4R for the positive feedback and we are grateful for the time you spent on our submission. We are also glad for the acknowledgment that the problem we are working on is realistic and that the method we propose is sound. We would like to provide comprehensive responses to your comments and questions. **Q1 "The proposed framework is limited to assessing knowledge in a fairly simplistic way, via the prediction of entities in subject-relation-object triplets. It would have been interesting to hear more about the limitations of such an approach, taking into account e.g. slightly more complex types of queries (see e.g. arXiv:2305.01157)."** Many thanks for your constructive comments! Yes, our knowledge assessment focuses on atomic knowledge: each piece of knowledge consists of a triplet. As suggested, we outline several limitations of the current method as follows: 1. It struggles to evaluate knowledge that requires multi-hop reasoning or an understanding of complex relationships between multiple entities. 2. It struggles to evaluate the model's mastery of context-dependent or time-varying information, which is often crucial for correct knowledge understanding and representation. Thanks again for your advice! We've incorporated these points into our limitation section, and we intend to address these limitations in future work. **Q2 "Do you think your proposed approach could be effective as a validation metric for the training of LLMs?"** Thank you for making a great point. Our approach is well-suited as a knowledge evaluation metric during LLM training, since our testing procedure aligns with the model's next-token-prediction objective. It can be applied to various base models, such as the LLaMA, GPT2-XL, and T5-large reported in Table 2. It is noteworthy that the goals of the language model training phase go beyond model knowledge alone; they may also include semantic understanding, among other abilities.
These goals and the learning of knowledge may involve certain trade-offs. If we directly apply KaRR as a validation metric, it could lead to new issues. Nevertheless, with reasonable improvements to our metric, these issues can be overcome. We appreciate your constructive feedback and are keen to explore this perspective further. **Q3. Typos and formatting.** Thanks for kindly pointing out our typos and formatting problems. We have revised them all in the latest version. Thank you very much for the constructive comments, which really help us further improve our work. We hope our answers have addressed your concerns. If you have any further questions, we are happy to address them. --- Rebuttal Comment 1.1: Title: Further comments and discussions will be appreciated! Comment: Dear Reviewer bA4R, We would like to thank you again for your detailed and constructive reviews. To better address your concerns regarding "the limitations of such an approach," we have incorporated discussions of two additional limitations in the latest version of our draft. Furthermore, we appreciate your reference to the related work on more complex query types. We have included a discussion of the mentioned paper [1] in both our Related Work and Limitation sections, as well as a discussion of possible extensions of our approach for such queries (i.e., complex query decomposition and a compound metric for complex knowledge) in our future work discussion. For other questions, we have updated our draft and added replies to your comments. Overall, many thanks for your insightful points and suggestions. These comments really help improve our paper. We hope our answers have addressed your concerns. If you have any further questions, we are happy to address them. We would really appreciate it if you are willing to increase your score. Thanks very much! Best regards, Authors **Reference:** [1] Choudhary, Nurendra, and Chandan K. Reddy.
"Complex Logical Reasoning over Knowledge Graphs using Large Language Models." arXiv preprint arXiv:2305.01157 (2023). --- Rebuttal Comment 1.2: Title: Response to rebuttal Comment: Many thanks to the authors for the responses to my questions. Having read the other reviews and rebuttals, I confirm my scores.
PRODIGY: Enabling In-context Learning Over Graphs
Accept (spotlight)
Summary: This paper proposes a pretraining framework that enables in-context learning on graph classification tasks (and potentially diverse graph machine learning tasks). Specifically, it proposes a prompt graph as a unified representation for diverse tasks, then designs a graph neural network architecture over the prompt graph and a corresponding family of in-context pre-training objectives. The experiments show that the model pretrained with this framework exhibits good in-context learning performance on various new tasks in the same domain as pre-training, without finetuning. Strengths: 1. This paper explores a problem of great current interest: "how to enable in-context learning for diverse graph machine learning tasks". 2. The approach is novel and technically sound. Interestingly, the "Task graph Message Passing" step can adjust the representation of the label nodes using the prompt examples and propagate label information back to the examples and the query graph representation for test-time tasks. I think this step allows the prompt to provide a much stronger constraint on generation than natural language prompting. 3. The paper is well-structured; the approach is clearly presented with descriptive figures. Weaknesses: 1. The performance of the approach may depend on the similarity between the test and pre-training data, and it is not clear how well it transfers to different datasets. 2. The fine-tuning comparison may be insufficient, as the work does not explicitly evaluate the disparity between in-context learning and fine-tuning settings for the proposed approach. 3. The performance improvement of this work over the baseline could be partially attributed to the larger model size. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper demonstrates the effectiveness of the proposed approach using evaluation datasets from the same domain as the pre-training dataset.
Does the success of this approach hinge solely on the close similarity between the test and pre-training data? As I am not particularly conversant with the specific evaluation dataset utilized in this study, I am wondering how strong the transfer capability of this approach is. 2. The Finetune baseline, sharing the same model as the Contrastive, appears to be weak, as the observed improvement over the Contrastive seems marginal according to the experimental results. I believe a more compelling comparison would be to evaluate the performance disparity between this work in the in-context learning setting versus the finetuning setting. 3. I have concerns that the performance enhancement of this work over the baseline Contrastive may be attributed to the larger number of parameters. This work's architecture includes two message-passing layers $M_D$ and two message-passing layers $M_T$, while the Contrastive only incorporates the two message-passing layers $M_D$. As shown in Table 4, it seems that the Contrastive model reaches saturation swiftly due to its limited capacity, while PRODIGY can keep learning due to its greater capacity. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: One limitation is that the work currently sits only on the graph classification task, and it would be nice to include some exploration of the generation task, but I won't fault the authors for that. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and valuable feedback. Below, we clarify a number of important points raised by the reviewer. > Re: Does the success of this approach hinge solely on the close similarity between the test and pre-trained data? The reviewer asks whether the success of our proposed framework requires close similarity between the test and pre-training datasets. We respectfully disagree: the pre-training and test data come from quite diverse domains. For example, we pretrain on the Wikipedia knowledge graph and perform tasks on ConceptNet and NELL, both of which are drastically different from WikiKG in terms of both node/edge features and graph structure. > Re: The Finetune baseline, sharing the same model as the Contrastive, appears to be weak The reviewer wonders if we adopt a weak baseline for finetuning. We designed our finetuning baseline around contrastive learning because it is the best existing approach. If we designed the finetuning method on top of our proposed method, it would be more of a variant of our own method than a baseline. The goal of this work is to show that with our PRODIGY framework, we for the first time enable in-context learning over graph tasks across diverse domains, removing the need for additional finetuning steps, and even outperform traditional finetuning methods that require additional backpropagation over data from the downstream domains. > Re: Number of parameters The number of parameters is ~1.8M for contrastive learning and ~2M for our method. We acknowledge ours has more parameters, but as this small difference shows, the parameters of $M_T$ account for only a small portion of the overall model. We respectfully do not think this is the main reason for the performance increase.
Summary: This paper introduces Prodigy, a method aimed at facilitating 'in-context learning' over graphs. The key contribution of this work is the formulation of a 'prompt' that can be utilized for in-context learning with graphs. This 'prompt' is defined as a data graph which incorporates typical (input, output) examples, akin to those used in few-shot prompting setups. Given that each node in the graph serves as an input to the method, it is crucial to contextualize each node. For this, the authors add a neighborhood to each node in the data graph. This data graph is then paired with a task graph. The task graph, designed to link different parts of the prompt (e.g., connecting nodes in the data graph that are part of the same class), comprises one node for each node in the data graph (referred to as 'data nodes'), and one node for each label (termed 'label nodes'). When running inference on a new example, the procedure begins with a Graph Convolutional Network (GCN) style message passing over the data graph, followed by the task graph. Eventually, the label is predicted based on the similarity between the representations of the query and label nodes in the task graph. To train this model, the authors propose two self-supervised pre training objectives: neighborhood prediction and a combination of link-prediction and node prediction. The experimental results on citation graphs and commonsense graphs indicate the potential of the Prodigy method, with it outperforming strong baselines, including fine-tuning. Strengths: The concept of extending few-shot learning to graphs is both intuitive and attractive. Based solely on the coherent design for graph-based few-shot learning, the paper is worth considering for acceptance. Weaknesses: The title's use of the term “in-context learning” is questionable. While the term is currently popular due to the rise of Large Language Models (LLMs), it may inadvertently mislead readers about the actual contributions of this paper. 
Traditional in-context learning: * Applies to a large number of tasks * Adapts to new domains * Allows fluid task definition However, none of these attributes seem to apply to the proposed model. Why call it in-context learning when it essentially seems to be K-shot link prediction and node classification? There's a crucial factor to consider: few-shot learning enables tasks to be undertaken even when no training data is available. However, PRODIGY appears to necessitate pre-existing data to operate effectively, as the experiments show. This mismatch between the implementation and the title is the main reason for my somewhat low score. In general, I would strongly recommend a change in title for this work (at the minimum, changing _Enabling_ to _Towards_). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Q1: *L264: Then, we construct a k-shot prompt for test nodes (or edges) from the test split by randomly selecting k examples per way from these available examples. This allows us to test the model's ability to learn in-context relationships and perform well on classification tasks with truly limited known labels. By default we use k = 3 shots in our experiments.* Generally, in a few-shot scenario, a fixed number of examples are included in the prompt, with the test example added at the end and supplied to the model for inference. However, the description here, which refers to the selection of both training and test examples, seems somewhat unclear. Could you please provide more clarity? Q2: Considering the baselines, is it the most effective approach to train a single Multi-Layer Perceptron (MLP) on top of a graph encoder? Wouldn't training the entire graph encoder end-to-end yield more effective results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and valuable feedback. Below, we clarify a number of important points raised by the reviewer. > Re: Difference from K-shot prediction. The reviewer raises a concern about our difference from K-shot prediction. As discussed in the related work section, most existing few-shot learning works are designed and tested for generalizing across different tasks on the same graph. They are shown to exhibit optimal performance only when trained on similar curated tasks. Our major contribution is to relax such constraints and enable GNNs to perform both node-level and link-level tasks across drastically different train/test domains (citation, Wikipedia, Freebase, commonsense) without the need for additional finetuning. Overall, we think Prodigy is a solid contribution that explores how to formulate, and learn, in-context prediction for various graph tasks across domains. We are happy to update the title to better reflect our contribution in the final version. > Re: Generally, in a few-shot scenario, a fixed number of examples are included in the prompt, with the test example added at the end and supplied to the model for inference. However, the description here, which refers to the selection of both training and test examples, seems somewhat unclear. Yes, your understanding is correct. Given a test datapoint, we sample k training datapoints for each of the m classes/ways. Together these serve as our prompt. Note there is no test datapoint in the prompt. We will rephrase the sentence in the final version to make it easier to understand. > Re: is it the most effective approach to train a single Multi-Layer Perceptron (MLP) on top of a graph encoder? Wouldn't training the entire graph encoder end-to-end yield more effective results? The reviewer wonders if end-to-end training can yield better results. We have run both configurations, and we did not notice a drastic difference in performance.
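The m-way, k-shot prompt construction and similarity-based classification described in this exchange can be sketched with a toy example. The dimensions, the mean aggregation for label nodes, and the cosine similarity below are our own simplifying assumptions for illustration, not PRODIGY's actual message-passing architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, m, k = 8, 2, 3  # embedding size; a 2-way, 3-shot prompt

# Embeddings of the m*k prompt examples and the query, as they might come
# out of a data-graph GNN (random placeholders here).
support = rng.normal(size=(m * k, dim))
labels = np.repeat(np.arange(m), k)  # [0, 0, 0, 1, 1, 1]
query = rng.normal(size=dim)

# Simplified "task-graph" step: each label node aggregates the examples
# connected to it, so the prompt examples shape the label representations.
label_nodes = np.stack([support[labels == c].mean(axis=0) for c in range(m)])

# The query is then classified by similarity to the label-node representations.
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cos(query, ln) for ln in label_nodes])
pred = int(scores.argmax())
```

Note that, matching the rebuttal's clarification, the query never appears among the support examples: the prompt consists only of the k sampled datapoints per class.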
Summary: I have read the authors' rebuttal; I think I misunderstood the in-context learning mentioned in this paper and now see the difference from other typical ICL works. I have no objection to accepting the paper if the AC thinks the contribution is sufficient. The paper introduces an in-context few-shot prompting approach for edge classification over graphs using the PRODIGY framework. The idea is to use a GNN to create few-shot prompts that can make an LLM do better at in-context learning. The pretrained model exhibits strong in-context learning performance on downstream tasks, surpassing contrastive pretraining baselines and standard finetuning methods. Strengths: Originality: The paper demonstrates some novelty by utilizing a graph to generate few-shot prompting examples for classification. Quality: The results presented show significant improvements over the baseline methods. Clarity: The methodology is described clearly, providing a clear understanding of the different components involved in task construction. Significance: The paper appears to lack a truly novel contribution, instead combining multiple existing approaches and claiming improvement by combining these strategies, e.g. using graphs to construct few-shot examples. How does this differ from superICL, which is a more generic model that uses different downstream models for constructing in-context examples? Weaknesses: The paper is not very easy to read, with some room for improvement in the writing and fluency. For example, there are some incomplete sentences in the writing: "give music product recommendations on spotify when being trained on Amazon book purchasing graph." The paper appears to lack a truly novel contribution, instead combining multiple existing approaches and claiming improvement by combining these strategies, e.g. using graphs to construct few-shot examples. How does this differ from superICL, which is a more generic model that uses different downstream models for constructing in-context examples?
Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and valuable feedback. Below, we clarify a number of important points raised by the reviewer. > Re: Novelty The reviewer claims that the paper lacks a truly novel contribution. We respectfully disagree. The paper proposes one of the first frameworks that allow for in-context learning over graph tasks. We are not using graphs to construct few-shot examples, but rather propose a way to unify the formulation of different types of graph tasks, including node classification and link prediction, such that the model is able to learn in context. > Re: how does this differ from superICL? We would appreciate it if the reviewer could provide a detailed reference to the paper instead of only a name. To the best of our knowledge, we found this paper [1] by looking up the model name superICL. However, we fail to find connections between our work (which focuses on enabling in-context learning for graph tasks) and superICL [1] (which leverages a combination of an LLM with smaller models to perform supervised tasks efficiently for in-context learning on text tasks). Please kindly let us know how these two are related. We will improve the writing and polish the narrative in the final version. [1] Xu, Canwen, et al. "Small models are valuable plug-ins for large language models." arXiv preprint arXiv:2305.08848 (2023).
Summary: This paper proposes a framework for graph in-context learning. The PRODIGY architecture consists of the prompt graph, the task graph, and an in-context learning pretraining objective. PRODIGY can directly perform downstream tasks without finetuning and shows strong performance on downstream classification tasks. Strengths: 1. The paper is overall well-organized and well-written. 2. To the best of my knowledge, this is the first work on graph in-context learning that can be directly applied to downstream tasks without tuning. 3. The proposed PRODIGY shows strong performance, significantly outperforming baseline methods. Weaknesses: 1. Missing important self-supervised graph learning baselines, such as GraphMAE [1]. 2. Since there is much recent progress in graph contrastive learning, including data augmentation and architecture design, I strongly encourage the authors to select more recent and competitive baselines. 3. The authors only report node classification results on one dataset, i.e., the arXiv dataset. [1] Hou, Z., Liu, X., Cen, Y., Dong, Y., Yang, H., Wang, C., & Tang, J. (2022, August). GraphMAE: Self-supervised masked graph autoencoders. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (pp. 594-604). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The proposed PRODIGY requires pretraining and testing on the same type of graph, such as MAG240M for pretraining and arXiv for testing, where both datasets are citation networks. It would be intriguing to observe PRODIGY's performance when the training and testing data belong to different domains. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and valuable feedback. Below, we clarify a number of important points raised by the reviewer. > Re: contrastive learning baselines Thank you for the reference. The reviewer suggests we compare our method with more contrastive learning methods. Here we would like to emphasize that our main contribution is to propose the first pretraining framework that enables in-context learning over graphs. We are not simply designing a new contrastive learning algorithm; rather, the framework we propose enables a given contrastive learning algorithm to learn in context for graph tasks. Hence we view contrastive learning algorithms as orthogonal to our contribution. We will implement more contrastive learning methods and do a thorough evaluation. > Re: training and test data from different domains Thank you for the suggestion. Our experiments on link prediction have demonstrated transfer across datasets from multiple domains. Enabling pre-training and testing on different domains is very challenging and requires the modeling capacity to capture the gap in both node/edge features and graph structure. This is exactly our future work: a more fundamental graph foundation model that can transfer and complete tasks in context across diverse domains and even diverse tasks (node-, link-, and graph-level prediction).
NeurIPS_2023_submissions_huggingface
2023
Kernelized Cumulants: Beyond Kernel Mean Embeddings
Accept (spotlight)
Summary: This paper proposes kernelized cumulants to extend classical cumulants in $\mathbb{R}^d$ and shows that the kernelized cumulants provide a new set of all-purpose statistics and are computationally tractable. The paper also shows advantages of kernelized cumulants both theoretically and empirically. Strengths: * The paper is well-written and easy to follow. * The method proposed in this paper, the kernelized cumulant, is novel and might be useful in real-world applications. For example, it can be used to provide metrics between distributions and to measure independence. Moreover, it includes the traditional MMD and HSIC as special cases and can even outperform them. Weaknesses: * In the experiments, more kernels, such as the neural tangent kernel, could be considered. * It would be better if the authors provided theoretical guarantees of the effectiveness of kernelized cumulants, such as consistency. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In the experiments, what is the criterion for the 'optimal value' of $\sigma$ (Line 265)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort invested, and for the kind review. Below, we answer the questions in detail. - "more kernels, such as the neural tangent kernel, could be considered." Yes, essentially any kernel can be used, and in our experiments we focused on the standard kernel choices of practitioners. We are not sure what the reviewer exactly means by using the neural tangent kernel in this context; it would certainly be interesting to use the presented kernelized cumulants to study how dependencies arise in the training of neural networks, although this is beyond the scope of the current work. - "provide theoretical guarantees" This is a good point: as the estimators are V-statistics, they inherit all of the nice properties of V-statistics, such as consistency, so this boils down to citing the standard references for V-statistics. We will clarify this in the main text. - "what is the criterion for the 'optimal value' of $\sigma$?" By 'optimal value' we mean the value for which the test has the highest power; we will clarify this in the main text. We hope that our answers resolve all the questions. --- Rebuttal Comment 1.1: Comment: Thanks for the response, but I will keep my score.
Summary: This paper generalizes the notion of cumulants to Hilbert-space-valued random variables. When these Hilbert spaces are RKHSs, the kernel trick applies, so that computations can be performed with the kernel function. This leads to higher-order two-sample and independence tests, which generalize MMD and HSIC. The efficiency of these tests is demonstrated numerically on both synthetic and real data. EDIT: I have read the authors' rebuttal, which partially addressed my concerns. Strengths: The paper is well written and clear, despite the fact that the notation is heavy due to the complexity of the considered objects. I appreciated that equivalent definitions of the kernelized cumulants are given, as well as special cases to build intuition and relate them to classical notions. Because of this, I think the paper should be accepted. Weaknesses: A first weakness in my opinion is that the main contribution of the paper is a straightforward combination of two known concepts: (i) cumulants and (ii) the use of kernel methods in statistical testing. Also, rather than using the tensor algebra framework used by the authors in Section 3 and Appendix C.2, it seems to me that the simpler route would be to use the generating function $K(\theta_1, \dots, \theta_d) = \log \mathbb E[ \exp(\sum_{i=1}^d \langle \theta_i, X_i \rangle ) ]$ and define the cumulants from its series expansion, or equivalently its derivatives at zero (which would be the correct equivalent of item 1. in Appendix C.1, rather than eq. (11)). The definitions are of course equivalent, but it would lighten the formalism in the main text. The second main weakness is that the relationship between MMD/HSIC and the higher-order versions *when one is allowed to change the kernel* is not discussed. If I'm not mistaken, the second-moment embedding of a probability distribution with kernel $k$ coincides with the mean embedding with the squared kernel $k^2$.
So one should expect a relationship between $d^{(2)}$ with kernel $k$ and MMD with some combination of $k$ and $k^2$ as the kernel. If that is the case, then the use of higher-order cumulants can be equivalently rephrased as the use of different kernels. In particular, it is not clear whether one could be better off by sticking to MMD/HSIC but with well-designed kernels, as the estimation of higher-order moments has a higher variance. I think this point should be discussed in the text. Two additional minor remarks: - Repetition of "energy distance" in lines 37-38, which I'm guessing is a typo. - Missing related work: [1] considers a second-moment kernel embedding and defines kernel information-theoretic quantities. [1] Bach, Francis. "Information theory with kernel methods." IEEE Transactions on Information Theory 69.2 (2022): 752-775. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - I am suspicious of the fact that the computational complexities of MMD/HSIC and the higher-order extensions are the same, and are bottlenecked by the computation of the kernel matrix. Surely the complexity must increase with $m = \mathrm{deg}(\mathbf{i})$, even if it remains quadratic in the sample size? - Is there a relationship between going to higher-order cumulants and changing the kernel? Can we achieve the same performance as the higher-order cumulants by adapting the kernel? - Isn't the $V$-statistic estimator a straightforward replacement of the expectations with empirical averages over the sample? I think this could be mentioned in the text. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors mentioned the two main limitations of their approach: - It is not clear how to design the kernels to maximize the performance of the statistical tests. - There is no theoretical analysis of the introduced tests. I agree that a complete resolution of these issues should be left to future work. However, I think the second weakness above, which is related to the first limitation, should be at least acknowledged. There is no foreseeable negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort invested, and for the kind review. Below, we answer the questions in detail. - "use the generating function" We agree that this is a more intuitive way to introduce the concept, but it is not very instructive when it comes to actually computing the statistics or writing down estimators; to do so, it is typical to work with combinatorial descriptions of cumulants. To save space we introduced only the combinatorial one, as it was necessary to derive the statistics in the later parts of the paper. - "higher-order cumulants can be equivalently rephrased as the use of different kernels" This is true for higher-order moments, but we stress that the cumulants are not equivalent to mean embeddings of different kernels. To illustrate this, the embedding for the second cumulant of a random variable $X$ is $ \mathbb{E}\big[k(X,\cdot)\otimes k(X,\cdot)\big] - \mathbb{E}\big[k(X,\cdot)\otimes k(Y,\cdot)\big] $, where $Y$ is an i.i.d. copy of $X$. The first term here is indeed a mean embedding of the product kernel $k^{\otimes 2}$, but the second term acts on a different product measure than the first. This structured cancellation of the various moments of the measure is central to the cumulants' ability to pick up the higher-order features, and makes them distinct from "only" using higher-order moments of kernels. - "Surely the complexity must increase with the degree" This is true; we apologize that this was not made sufficiently clear in the text. The complexity in the sample size is quadratic for all the proposed statistics, but the number of operations to compute them does increase with the degree and is determined by the number of partitions corresponding to that degree. The number of partitions does grow quickly as the degree gets large, but for reasonably sized degrees it is not an issue.
See also our reply to R1 on this topic, particularly the paragraph starting with 'Degree dependence'. - "Isn't the V-statistic estimator a straightforward replacement of the expectation" It is very similar, but there is some subtlety in which indices are summed over when computing the V-statistics. As an example, consider the two generalised moments of a random variable $X$, $\mathbb{E}(X^2)$ and $[\mathbb{E}(X)]^2$. Given $N$ samples $x_1, \ldots, x_N$, the V-statistic for the first one is $\frac{1}{N}\sum_{i=1}^N x_ix_i$ and the V-statistic for the other is $\frac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N x_ix_j$. - Minor remarks: Thank you, we will address this and add the relevant citation. Regarding the reference, (i) it falls under the umbrella of $k^{\otimes 2}$-based information-theoretic quantities, (ii) it relies on the uncentered covariance operator, (iii) it requires the kernel to be universal (in contrast to our significantly relaxed point-separating assumption), and (iv) its complexity (justified in the submission for the case of kernel entropy) is high, cubic in the sample size (whereas our estimator for fixed degree is quadratic in the sample size). We are happy to include a citation to the mentioned paper as related work. We hope that our answers resolve all the questions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed answer. - I agree the generating function is not useful in practice. I understand the choice of skipping it, though it could be mentioned in the appendix. - Your second point is important. In order to see the second cumulant as a mean embedding with a different kernel, we thus need to consider pairs by replacing the random variable $X$ with $(X, X')$, where $X'$ is an i.i.d. copy of $X$, and then use the kernel $k((x,x'),(y,y')) = k(x,y)^2 - k(x,y)k(x',y) - k(x,y)k(x,y') + k(x,y')k(x',y)$ (among other possibilities).
I think this equivalence between higher-order cumulants and kernels over tuples of data points, to be contrasted with the equivalence between higher-order moments and product kernels, is important to understand the expressivity of higher-order cumulants. I hope that this discussion will help the authors improve the clarity of the paper concerning these more technical points. I recommend acceptance.
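To make the V-statistic subtlety from the rebuttal above concrete, here is a minimal numerical sketch (the data are synthetic and purely illustrative): the estimator of $\mathbb{E}(X^2)$ sums over one index, while the estimator of $[\mathbb{E}(X)]^2$ sums over two indices independently, and their difference is the V-statistic for the second cumulant.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, size=1000)
N = len(x)

# V-statistic for E[X^2]: a single index is summed over.
v_second_moment = np.sum(x * x) / N

# V-statistic for (E[X])^2: two indices are summed over independently;
# this is exactly the square of the sample mean.
v_squared_mean = np.sum(np.outer(x, x)) / N**2
assert np.isclose(v_squared_mean, np.mean(x) ** 2)

# Their difference is the V-statistic for the second cumulant
# E[X^2] - (E[X])^2, i.e. the (biased) sample variance.
assert np.isclose(v_second_moment - v_squared_mean, np.var(x))
```

The double sum makes the second estimator a degree-2 V-statistic even though the target quantity only involves first moments, which is exactly the index-summation subtlety the authors point out.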
Summary: The authors introduce the kernelized cumulant and show that it can characterize distributions and statistical (in)dependence. Strengths: 1. The kernelized cumulant provides a natural generalization of the popular maximum mean discrepancy (MMD) as well as the Hilbert-Schmidt independence criterion (HSIC). 2. The authors illustrate several interesting properties of the kernelized cumulants and provide a two-sample test for non-characteristic feature maps as well as an independence test. 3. The authors demonstrate the utility of the proposed method on a variety of datasets with competitive results. Weaknesses: 1. Estimating the kernelized cumulant may require a large sample. Convergence rate of the estimator is not discussed, but the empirical results suggest comparable performance as the HSIC. 2. The proposed method could potentially be vulnerable to kernel misspecification. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the independence criterion kernel dependent? What happens if the kernel is misspecified? 2. What are the convergence rates for the finite-sample estimators? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort invested, and for the kind review. Below, we answer the questions in detail. - "Is the independence criterion kernel dependent?" Since we are working in a very general framework, the independence criterion works for a very wide family of kernels, even for simple ones such as the linear kernel on $\mathbb{R}^d$. However, our proposed methods suffer from the same drawbacks as independence testing with HSIC or 2-sample testing with MMD, in that the testing power can be severely affected by picking an inappropriate kernel. - "Convergence rates" As the estimator of our proposed kernelized cumulant is a sum of V-statistics, the rates are inherited from those of V-statistics. - "kernel misspecification" In our analysis we imposed minimal assumptions on the kernel (point-separating) to allow for the wide applicability of the proposed framework. Understanding the optimal choice of kernels (we assume that this is what the reviewer means by kernel misspecification), or even of the involved hyperparameters specifically for MMD (which corresponds to a degree-one object in our work), in specific downstream tasks is an important and highly non-trivial problem. Probably the simplest task in this context is 2-sample testing, where one can phrase the goal as achieving minimax optimality. In this very specific case, the analysis for constructing (almost) minimax optimal MMD-based adaptive tests can be worked out in 63 pages [7]. Extensions of their results could be the first step toward grounding kernelized cumulants in downstream tasks. Similarly, getting optimal rates even for the classic MMD and for radial universal kernels is non-straightforward; to our best knowledge, the only available result in this domain is [21].
Our investigated assumptions are weaker (hence the analysis is expected to be harder) in multiple aspects: the space is Polish (hence even the notion of the radial property is not defined), cumulants need not be 1st-degree objects (as MMD is), and our results (Theorems 2-3) hold for point-separating kernels (a requirement significantly weaker than the characteristic property, which itself is a specific case of universality). We agree (in line with the penultimate paragraph of Section 1) that these are important future research directions. We hope that our answers resolve all the questions. --- Rebuttal Comment 1.1: Title: After rebuttal Comment: Thank you for addressing my comments. I've increased my score accordingly.
Summary: This paper revisits advances in cumulants on real data by extending them to provide cumulants for random variables in an RKHS. Strengths: The idea of this work is interesting, as the paper proposes to go beyond the conventional kernel mean and kernel covariance. Moreover, a proposed kernel trick allows one to obtain the kernelized cumulants efficiently. The paper describes its contributions well. The derivations seem to be sound, and the experiments on synthetic and real datasets allow one to understand the relevance of the proposed approach. Weaknesses: While the paper includes some experimental results, there are only two settings: independence testing and two-sample testing (MMD-like). Are there any other tests (or applications beyond testing) where the proposed kernelized cumulants would be relevant? Technical Quality: 3 good Clarity: 3 good Questions for Authors: see question in weaknesses Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: OK Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort invested, and for the kind review. - "Are there any other tests (or applications beyond testing) where the proposed kernelized cumulants would be relevant?" The proposed kernelized-cumulant-based measures are general divergence and independence measures, hence they can be used in practically any application that relies on an information-theoretic objective, and, as our hypothesis-testing illustrations in the submission suggest, with expected improvements in sample efficiency. Examples include feature selection [2, 18, 9, 22], causal discovery [14, 13, 16, 3, 17], distribution classification [11, 23] and regression [20, 19, 8, 6, 15], or generative adversarial networks [5, 10, 1]. We hope that our answer resolves the reviewer's question. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for taking the time to respond to the raised question. Considering the issues brought up by myself and the other reviewers, as well as the rebuttal, I am maintaining my "accept" score.
Rebuttal 1: Rebuttal: **References for the rebuttal:** [1] Mikolaj Binkowski, Danica Sutherland, Michael Arbel, and Arthur Gretton. Demystifying MMD GANs. In International Conference on Learning Representations (ICLR), 2018. [2] Gustavo Camps-Valls, Joris M. Mooij, and Bernhard Schölkopf. Remote sensing feature selection by kernel dependence measures. IEEE Geoscience and Remote Sensing Letters, 7(3):587–591, 2010. [3] Shubhadeep Chakraborty and Xianyang Zhang. Distance metrics for measuring joint dependence with application to causal inference. Journal of the American Statistical Association, 114(528):1638–1650, 2019. [4] N. G. de Bruijn. Asymptotic Methods in Analysis. Dover, 1981. [5] Gintare Karolina Dziugaite, Daniel M. Roy, and Zoubin Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In Conference on Uncertainty in Artificial Intelligence (UAI), pages 258–267, 2015. [6] Zhiying Fang, Zheng-Chu Guo, and Ding-Xuan Zhou. Optimal learning rates for distribution regression. Journal of Complexity, page 101426, 2020. [7] Omar Hagrass, Bharath K. Sriperumbudur, and Bing Li. Spectral regularized kernel two-sample tests. Technical report, 2022. (https://arxiv.org/abs/2212.09201). [8] Ho Chung Leon Law, Danica Sutherland, Dino Sejdinovic, and Seth Flaxman. Bayesian approaches to distribution regression. International Conference on Artificial Intelligence and Statistics (AISTATS), 84:1167–1176, 2018. [9] Runze Li, Wei Zhong, and Liping Zhu. Feature screening via distance correlation learning. Journal of the American Statistical Association, 107(499):1129–1139, 2012. [10] Yujia Li, Kevin Swersky, and Richard Zemel. Generative moment matching networks. In International Conference on Machine Learning (ICML), pages 1718–1727, 2015. [11] David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, and Iliya Tolstikhin. Towards a learning theory of cause-effect inference.
International Conference on Machine Learning (ICML), 37:1452–1461, 2015. [12] László Lovász. Combinatorial Problems and Exercises. 2nd ed. Amsterdam, Netherlands: North-Holland, 1993. [13] Joris Mooij, Jonas Peters, Dominik Janzing, Jakob Zscheischler, and Bernhard Schölkopf. Distinguishing cause from effect using observational data: Methods and benchmarks. Journal of Machine Learning Research, 17:1–102, 2016. [14] Krikamol Muandet, Kenji Fukumizu, Francesco Dinuzzo, and Bernhard Schölkopf. Learning from distributions via support measure machines. In Advances in Neural Information Processing Systems (NIPS), pages 10–18, 2011. [15] Nicole Mücke. Stochastic gradient descent meets distribution regression. In International Conference on Artificial Intelligence and Statistics (AISTATS), pages 2143–2151, 2021. [16] Niklas Pfister, Peter Bühlmann, Bernhard Schölkopf, and Jonas Peters. Kernel-based tests for joint independence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):5–31, 2018. [17] Bernhard Schölkopf, Francesco Locatello, Stefan Bauer, Nan Rosemary Ke, Nal Kalchbrenner, Anirudh Goyal, and Yoshua Bengio. Toward causal representation learning. Proceedings of the IEEE, 109(5):612–634, 2021. [18] Le Song, Alex Smola, Arthur Gretton, Justin Bedo, and Karsten Borgwardt. Feature selection via dependence maximization. Journal of Machine Learning Research, 13(1):1393–1434, 2012. [19] Danica Sutherland, Junier Oliva, Barnabás Póczos, and Jeff Schneider. Linear-time learning on distributions with approximate kernel embeddings. In AAAI Conference on Artificial Intelligence (AAAI), pages 2073–2079, 2016. [20] Zoltán Szabó, Bharath K. Sriperumbudur, Barnabás Póczos, and Arthur Gretton. Learning theory for distribution regression. Journal of Machine Learning Research, 17(152):1–40, 2016. [21] Ilya Tolstikhin, Bharath Sriperumbudur, and Bernhard Schölkopf. Minimax estimation of maximum mean discrepancy with radial kernels.
In Advances in Neural Information Processing Systems (NIPS), pages 1930–1938, 2016. [22] Andi Wang, Juan Du, Xi Zhang, and Jianjun Shi. Ranking features to promote diversity: An approach based on sparse distance correlation. Technometrics, 64(3):384–395, 2022. [23] Manzil Zaheer, Satwik Kottur, Siamak Ravanbakhsh, Barnabás Póczos, Ruslan Salakhutdinov, and Alexander Smola. Deep sets. In Advances in Neural Information Processing Systems (NIPS), pages 3394–3404, 2017.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper generalises the notion of kernel mean embeddings to higher-order cumulants, proposing kernelized cumulants in the RKHS. While kernelized cumulants reside in the tensor product space of the RKHS, the paper shows that the Hilbert-space metric between cumulants can be exactly computed using the kernel trick. Based on this construction, the paper proposes: (1) a two-sample test statistic that generalises the MMD test statistic by considering the distance between cumulants, and (2) a generalisation of the HSIC statistic for independence testing, again by considering the distance between the cumulants of the joint distribution and the product of the marginals. The advantages of the construction and proposed tests include: (i) the tests are applicable to a broader class of "point-separating" kernels (unlike MMD/HSIC, which are only useful for characteristic kernels); (ii) the new statistics can be computed in quadratic time (same as MMD); and (iii) they empirically achieve higher power than classical MMD/HSIC statistics (both on synthetic and real data). Strengths: - The idea of considering higher-order moments/cumulants in an RKHS is quite natural, and yet unexplored in the literature (apart from the recent work of Makigusa (2020), which considers only 2nd-order moments). Hence, the contribution is novel and quite timely - The main strength of the paper is that the computational cost of the proposed statistics is still quadratic in the sample size (same as MMD), which implies that the advantages of higher-order moments do not come at significant additional cost - The work is technically sound, and the construction of the cumulant and use of the kernel trick is reasonably involved. Weaknesses: - While it is easy to imagine in general that higher-order cumulants can distinguish between more distributions, the advantage of kernelized cumulants is difficult to grasp. If a characteristic kernel is used, wouldn't the mean embedding (MMD) suffice?
- The line of argument used in the paper to demonstrate the advantage of kernel cumulants is that (empirically) they show higher power (can detect small differences better). Is there a theoretical justification for this? It would be sufficient if the authors provided a justification / reference that standard (non-kernel) cumulants are more sample-efficient in some cases where means already show separation - For the synthetic experiments, the null rejection rate should also be plotted to show that cumulants do not have a higher tendency to reject than MMD/HSIC. While this is true for the real data, unfortunately both 1st-order and higher-order terms reject at a rate higher than the significance level - Overall, it is not clear when tests based on higher-order cumulants are indeed needed in practice. I still feel the work is relevant, but some discussion of this would certainly increase the significance of the work for the broader community - The paper, although well-written, is quite dense and at times a bit difficult to follow, but this can be attributed to the content of the paper Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - see weaknesses - in addition, a precise statement on computational complexity (at least for d2, d3) would be useful Minor remarks: H in Lemma 3 is not defined in the main paper (but in the appendix), and the E et al. citation seems incorrect (there is a full name for E) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: the paper does not have immediate negative societal impact (although conclusions from hypothesis tests can always have).
Hence, the work could benefit from: - consistency results (similar to kernel two-sample tests) - characterisation of whether higher-order cumulant-based statistics typically tend to be larger than MMD (even under the null). The comment is about sample estimates and not the expected value (hence, more tied to concentration/consistency of the test statistics) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the time and effort invested, and for the kind review. Below, we answer the questions in detail. - "If a characteristic kernel is used wouldn't mean embedding (MMD) suffice?" This is true of course: a characteristic kernel is theoretically sufficient for 2-sample testing (in MMD), and a universal one is sufficient for independence testing (in HSIC). The main use case of cumulants is when either one is in a situation where a non-characteristic kernel is preferable, or one does have a characteristic kernel but the data is structured in a way where one gets additional test power from using the cumulants, and hence estimators which are more sample-efficient, as in the experiments shown in the main text. - "Is there a theoretical justification for the higher testing power?" Testing power for these kinds of tests does not normally come with a satisfactory description since the tests are infinite-dimensional. Closed-form expressions of power are usually expressed as infinite convergent series; see for example Theorem 3.1 of [16]. Because of this, trying to study theoretical properties of testing power in this setting is a very challenging problem, and while an interesting one, we believe that the experimental results are convincing, and that further theoretical study is beyond the scope of the article. - "both 1st order and higher order terms reject at a rate higher than significance level" It is true that the rejection rate is higher than 5%, but as it is consistent for both methods we do not feel that it gives any unfair advantages either way. The reason for this is most likely the way we choose the hyperparameter by optimising for test power, which would incur a bias in the rejection rate. We did include rejection rates for the bicycle data in Figure 8 in the appendix, where it can be seen that the rate is around 8% for both methods, and we can include the rejection rates for the synthetic experiments here too. - "(...) when tests based on higher-order cumulants are indeed needed in practice. I still feel the work is relevant, but some discussion of this would certainly increase the significance of the work for the broader community" One answer is that kernelized cumulants provide guarantees even when non-characteristic kernels are used. However, even for characteristic kernels the presented kernelized cumulants are advantageous since they lead to more sample-efficient estimators and hence tests. The simplest "unkernelized" example is testing whether two normally distributed random variables have the same distribution. Clearly, here the best answer is to just compute the sample means and sample covariances and then reject/accept if they are close enough; in contrast, computing means and second moments would lead to much noisier tests and more samples would be needed (see the discussion in Appendix A). The same arguments carry over when we use kernels for testing. Our experiments confirm this intuition and show that the tests given by statistics derived from kernelized cumulants are usually more efficient than the classic kernelized statistics. Hence, kernelized cumulants are usually preferable. However, the somewhat more complicated test statistics get harder to implement the higher m becomes. On the practical side, we believe using m = 2 is a good compromise between the gain in sample efficiency (hence better tests) and ease of implementation. We will add a sentence to emphasize this. - "A precise statement on computational complexity" **Sample size dependence:** The exact statistics used are expanded on in Examples E.1, E.2, and E.3 in the appendix. Since all these statistics can be phrased as polynomials of Gram matrices and some multiplication with centering matrices, the most expensive computation is computing the Gram matrices themselves. For example, when computing $d^{(2)}$, assume that one has N samples from one measure and M from the other.
One first computes the 3 relevant Gram matrices, which is $O(N^2 + NM + M^2)$ and depends on the specific kernel used. After that one computes the centered matrices, which involves $N^2 + 2NM + M^2$ additions and the same number of divisions. One then computes the Hadamard square of the centered matrices, which involves $N^2 + NM + M^2$ multiplications, and finally sums over all the elements and normalizes the output, which involves a further $N^2 + NM + M^2$ additions and 3 divisions, and then 3 more additions to add them up. Taking the total number of operations (in addition to computing the Gram matrices), this means $2N^2 + 2M^2 + 3NM + 3$ additions and the same number of multiplications/divisions. **Degree dependence:** There is no free lunch. The $m$-th Bell number $B_m$ (https://en.wikipedia.org/wiki/Bell_number) is defined as the number of elements in $P(m)$. The Bell numbers follow a recursion: $B_{m+1} = |P(m+1)| = \sum_{k=0}^m \binom{m}{k} B_{k}$, with the first elements of the sequence being $B_0 = B_1 = 1$, $B_2 = 2$, $B_3 = 5$, $B_4 = 15$, $B_5 = 52$, $B_6 = 203$, $B_7=877$, $B_8=4140$. By (6)-(7), in the worst case the number of operations to compute $d^{(i)}(\gamma,\eta)$ or $\Vert\kappa^{i}_{k_1,\ldots,k_d}(\gamma)\Vert^2$ is proportional to $B_m^2$ (it equals $3B_m^2$ and $B_m^2$, respectively). It is true that asymptotically $B_m$ gets very large [4, 12], but for reasonably small degrees the computation is still manageable. In addition, merging various terms in the estimator can often be carried out, which leads to computational savings. For instance, the estimators of $d^{(2)}$ (see Lemma 2, Example E.1), CSIC (Lemma 3, Example E.2) and $d^{(3)}$ (Example E.3) consist of only $2$, $11$ and $10 + 2\times 7 = 24$ terms compared to the predicted worst-case counts of $3B_2^2 = 12$, $B_3^2 = 25$, and $3B_3^2 = 75$ terms, respectively. - Minor remarks: Thank you for pointing these out; we will address them.
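The Bell-number recursion quoted in the rebuttal is easy to check numerically; a minimal sketch:

```python
from math import comb

def bell_numbers(n):
    """Compute B_0, ..., B_n via the recursion B_{m+1} = sum_k C(m, k) B_k."""
    B = [1]  # B_0 = 1
    for m in range(n):
        B.append(sum(comb(m, k) * B[k] for k in range(m + 1)))
    return B

# Matches the sequence quoted above: B_0 = B_1 = 1, ..., B_8 = 4140.
assert bell_numbers(8) == [1, 1, 2, 5, 15, 52, 203, 877, 4140]
```

For the worst-case operation counts above, the relevant quantity is $B_m^2$, e.g. $B_3^2 = 25$ for the degree-3 statistics.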
We hope that our answers resolve all the questions. --- Rebuttal Comment 1.1: Title: Thank you Comment: I thank the authors for the responses, and for clarifying the relevance of looking at higher-order cumulants. In the future, it would help to further investigate the practical implications of kernel cumulants. It feels like there is more potential than what is claimed in the paper/rebuttal.
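As a sanity check of the centered-Gram-matrix recipe described in the rebuttal (center, Hadamard-square, sum, normalize), note that for a linear kernel the degree-2 building block reduces to the squared Frobenius norm of the ordinary (biased) sample covariance matrix, so the kernel-trick route can be verified against the explicit one. The sketch below illustrates only that identity, not the paper's full $d^{(2)}$ statistic:

```python
import numpy as np

rng = np.random.default_rng(1)
N, d = 200, 3
X = rng.normal(size=(N, d))

# Kernel-trick route: doubly center the Gram matrix, Hadamard-square, sum.
K = X @ X.T                          # linear kernel, so the feature map is explicit
H = np.eye(N) - np.ones((N, N)) / N  # centering matrix
Kc = H @ K @ H
hs_norm_sq_kernel = np.sum(Kc * Kc) / N**2

# Explicit route: squared Frobenius norm of the biased sample covariance,
# i.e. the empirical second cumulant in feature space.
C = np.cov(X, rowvar=False, bias=True)
hs_norm_sq_explicit = np.sum(C * C)

assert np.isclose(hs_norm_sq_kernel, hs_norm_sq_explicit)
```

The dominant costs are exactly those itemized in the rebuttal: one Gram matrix, centering, an elementwise square, and a sum, all quadratic in the sample size.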
An active learning framework for multi-group mean estimation
Accept (poster)
Summary: This manuscript studies a special type of bandit problem: instead of maximizing the rewards, the learner aims to minimize the L_p norm of the variance vector of the mean estimators of each arm. The motivation for this problem is multi-group mean estimation, where a small total variance is desired. The authors proposed a variance-UCB algorithm, which maintains an upper confidence bound on the variance of each arm, and then chooses the arm which maximizes a certain quantity determined by its current UCB and number of pulls. This quantity is motivated by the expression for the optimal allocation in hindsight. This algorithm achieves a regret of O(T^{-2}) for all finite p, which is optimal by matching lower bounds. For p = infinity, this algorithm recovers the \Theta(T^{-1.5}) optimal regret obtained by [Antos et al. 2008, Carpentier et al. 2011]. Strengths: The problem setting is practically relevant, and the authors proposed a novel form of UCB-type algorithm to solve it. The resulting regret bound is also tight. Weaknesses: I have several major concerns about this work: 1. The target L_p norm in Eqn. (1) is not properly motivated. The authors should at least describe a scenario where minimizing this specific quantity is meaningful. For example, is p = 1 or infty the only interesting scenario? Can the current result be extended to some value of p < 1, say p = 1/2? (I believe this is relevant when one uses the absolute estimation error.) 2. Although the algorithm is a new UCB-type algorithm, both the intuition and analysis are very straightforward. By the KKT condition for the objective, the optimal allocation should give the same value to all arms for a certain quantity; so the algorithm simply pulls the arm with the largest quantity in order to reduce it. The analysis is then standard, possibly with the exception of Lemma 8 (the key part of the new UCB algorithm). However, this contribution alone does not reach the bar for a NeurIPS publication. 3.
Very importantly, the current manuscript does not have a tight regret dependence on the number of arms G. Obtaining this would require a more careful analysis in both the upper and lower bounds. In particular, the lack of tight dependence on G makes the current two-point lower bound much less interesting. 4. There seems to be an issue in the upper bound analysis. Note that Lemma 3 only proves a one-sided inequality, i.e. an upper bound on n - n^*. However, when the authors applied Lemma 4, a two-sided upper bound on |n - n^*| is needed, and this requires additional explanation. I understand this seems to be fine because the sum of n_g is always T, but arguing in this way would incur an additional factor of G, which I am not sure is necessary, and in my opinion (see above) the right dependence on G is important. Also, if one only wants to obtain an upper bound on n - n^*, the complicated arguments below Lemma 8 seem unnecessary. It seems that applying Lemma 8 to the last time a certain group g is chosen should be enough. Additional comments: Page 3: "the optimization program (3) can be NP-hard to prove". Please add a justification. Page 4, expression of C_T: the precise form is not interesting at all. Just say C_T = C * log(T) for a large enough constant C. Page 18, Eqn. (21): where does Sigma_p come from? Lemma 2 does not involve this term. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I'll likely increase my rating if the authors could work out the tight regret dependence on G. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
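The UCB rule summarized in this review (maintain an upper confidence bound on each group's variance, then pull the group whose index, derived from the hindsight-optimal allocation, is largest) can be sketched in code. This is a hypothetical reconstruction for illustration only: the inflation constant `c`, the `sqrt(log T / n)` bonus, and the exact index formula are assumptions, not the paper's specification.

```python
import numpy as np

def variance_ucb(sample, G, T, p=2.0, c=2.0, seed=0):
    """Hypothetical sketch of a variance-UCB allocation loop.

    sample(g, rng) draws one observation from group g. The UCB constant c
    and the index formula are placeholders, not the paper's exact choices.
    """
    rng = np.random.default_rng(seed)
    # Initialize: pull each group twice so sample variances exist.
    obs = [[sample(g, rng), sample(g, rng)] for g in range(G)]
    for _ in range(T - 2 * G):
        idx = np.empty(G)
        for g in range(G):
            n_g = len(obs[g])
            # Optimistic (upper confidence) estimate of the group's variance.
            ucb_var = np.var(obs[g], ddof=1) + c * np.sqrt(np.log(T) / n_g)
            # KKT intuition: the hindsight-optimal allocation equalizes
            # sigma_g^(2p) / n_g^(p+1) across groups, so pull the group
            # where this quantity is currently largest.
            idx[g] = ucb_var**p / n_g ** (p + 1)
        g_star = int(np.argmax(idx))
        obs[g_star].append(sample(g_star, rng))
    return [np.mean(o) for o in obs], [len(o) for o in obs]
```

Run on Gaussian groups with standard deviations (1, 2, 3), the loop allocates markedly more pulls to the high-variance group, mirroring the hindsight allocation n_g proportional to sigma_g^(2p/(p+1)).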
Rebuttal 1: Rebuttal: Thank you for your detailed comments on our paper. Q1 (Lp norms): The expression of the $p$-norm for $p \geq 1$ is $\|\boldsymbol{x}\|_p := (x_1^p + \ldots + x_n^p)^{1/p}$. The most frequent measures of error we find in the ML literature are $p = 1$ (absolute error), $p=2$ (squared error), and $p=\infty$ (worst-case error). General $p$-norms are less frequent in practice, but their study in this case allows us to connect results across $p = 1, 2, +\infty$, thus providing a framework for the three most common measures of error. We believe $p=2$ is also interesting as it prevents ``outliers'': no variance of any group's mean estimator will be too large, and it provides a nice trade-off between the sum of all the variances ($p=1$) and the worst-case variance ($p=\infty$). For $p < 1$, $\|.\|_p$ is no longer a norm, as it violates the triangle inequality. Maybe we misunderstood the comment: is it possible to provide more detail for the $p = 1/2$ case and how it relates to absolute estimation error? Q2 (algorithm and analysis): We believe that the algorithm is indeed intuitive, which is a desirable feature, although the analysis is far from immediate. Lemma 8 is only a first step to solving the first problem, as it gives us good control of the last pulled arm, but information about the previous arms is lost. Lemma 9 uniformly bounds the number of pulls with a quantity that is decoupled from the algorithm, while Lemmas 10 and 11 are necessary for the tightness of the bounding. We also require a Taylor expansion argument to deal with the complex curvature of our objective function, which does not arise in the traditional MAB setting. This has the benefit of giving the tightest first-order approximation. (This is also why we don't have large constants in front of the regret, as opposed to most bandit regret bounds.) We remark that our results address open questions in [Carpentier et al. 
2011], which directly asked if one can relax the Gaussian assumption to remove the dependency on $\sigma_{min}$, and also asked if tight lower bounds can be obtained. Q3 (dependence on $G$): You are indeed correct that the dependence on $G$ in the regret bounds is interesting, as well as the smallest variance $\sigma_{min}$ and total sum of variances $\Sigma_{\infty}$. Upon more careful analysis for the case where $p=\infty$, we are able to show an upper bound on the order of $\Sigma_{\infty} G^{1.5} T^{-1.5}$ and a matching lower bound of $\Sigma_{\infty} G^{1.5} T^{-1.5}$. This is a significant improvement in the $G$ and $\sigma_{\min}$ parameters, compared to the upper bound result of [Carpentier et al. 2011] for the case of sub-Gaussian distributions, where they provided a bound on the order of $\Sigma_\infty \sigma_{\min}^{-1} G^{2.5} T^{-1.5}$. We will add details of these more refined results to the paper. Q4 (upper bound analysis): The proof we submitted gives sub-optimal dependencies on $G$, and the reason is the one that you pointed out (the naive bounding $n - n^* \leq ||n-n^*||$ is sub-optimal). We did so because we were more focused on the dependency of $T$ in our bound. However, we can now derive a tight dependency on $G$, as we mentioned above. Here is a brief explanation on how to do so: first, we combine the upper bound in Lemma 3 with a lower bound derived from similar arguments found in [Carpentier et al. 2011]. Next, for $p$ finite, we approximate the objective with a quadratic function in $(n - n^*)$, and maximize it subject to the two-sided bound on $n - n^*$. For $p=\infty$, we can rewrite the objective function to get a more tractable expression in $n - n^*$. Responses to additional comments: -To simplify presentation, we will remove the mention of NP-hardness from the paper. -We provided the exact expression of $C_T$ for implementation and replicability purposes. -$\Sigma_p$ is introduced in line 172. 
The upper bound in Lemma 2 involves $n$, while the upper bound in Lemma 3 involves $n^*$. Carpentier, A., Lazaric, A., Ghavamzadeh, M., Munos, R., & Auer, P. (2011, October). Upper-confidence-bound algorithms for active learning in multi-armed bandits. In International Conference on Algorithmic Learning Theory (pp. 189-203). Berlin, Heidelberg: Springer Berlin Heidelberg. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed comments. It's great to see that the optimal dependence on $G$ could be obtained for the case $p=\infty$, and I'll happily increase my score. Just two additional questions: 1. For $p=\infty$, does your current lower bound technique already give you the right dependence on $G$, or do you need a different argument? 2. For $p\in (1,\infty)$, what is currently your best upper and lower bound in terms of the dependence on $G$? --- Reply to Comment 1.1.1: Comment: 1-The new lower bound proof uses the same technique, but with better adversarial instances to get the right dependency on $G$. Instead of using $2$ instances, we use $G+1$ instances. 2-For the case where $p$ is finite, we have not yet derived a (good) lower and upper bound that depends on all the parameters (including $G$). We conjecture that the same roadmap for $p=\infty$ should yield a tight upper bound (mainly using double bounds on $n - n^*$ with more careful approximations on the objective). We also conjecture that, for the lower bound, the same $G+1$ instances with more careful analysis of the dissimilarity function $d(\cdot,\cdot)$ will yield a good result. We hope to update our paper in the future with this extra analysis, but cannot guarantee concrete results for the finite $p$ scenario that depend on $G$. For $p=\infty$, we are happy to report tight lower and upper bounds that depend on $G$, as discussed in the earlier post.
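The "same value across all arms" KKT condition invoked in this discussion pins down the hindsight-optimal allocation in closed form under the standard model where group $g$'s mean estimator has variance $\sigma_g^2/n_g$. A quick numerical sketch (illustrative only, not the paper's code):

```python
import numpy as np

def optimal_allocation(sigmas, T, p):
    """Minimize ||(sigma_g^2 / n_g)_g||_p subject to sum_g n_g = T (n real).

    Stationarity makes sigma_g^(2p) / n_g^(p+1) equal across groups, hence
    n_g is proportional to sigma_g^(2p/(p+1)); p = inf recovers the classical
    variance-proportional rule n_g ~ sigma_g^2.
    """
    s = np.asarray(sigmas, dtype=float)
    expo = 2.0 if np.isinf(p) else 2.0 * p / (p + 1.0)
    w = s**expo
    return T * w / w.sum()

def objective(sigmas, n, p):
    """The l_p norm of the vector of estimator variances (sigma_g^2 / n_g)_g."""
    v = np.asarray(sigmas, dtype=float) ** 2 / np.asarray(n, dtype=float)
    return float(v.max()) if np.isinf(p) else float((v**p).sum() ** (1.0 / p))
```

Perturbing the allocation along any direction summing to zero can only increase the objective, which is exactly the quantity against which the regret of a learned allocation is measured.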
Summary: This paper focuses on an active learning algorithm for multi-group mean estimation. The authors focus on minimizing the $l_p$-norm of the variance vector. This paper proposes the variance-UCB algorithm to actively select which group to sample in each round. The sample complexities for $p<\infty$ and $p=\infty$ are provided, and the tightness of the proposed algorithm in $T$ is also verified. Strengths: This paper focuses on the novel problem of estimating the mean values of multiple groups with minimal variance. The authors adopt the $l_p$-norm to measure the variances of the group mean estimates. The variance-UCB algorithm is designed and analyzed for different values of $p$. In addition, the lower bound is established to justify the tightness of the proposed algorithm in its dependence on $T$. The simulation results are provided to corroborate the theoretical findings. Weaknesses: 1. The authors mainly focus on the dependency on $T$. However, the variance of each group also influences the sample complexity. It would be helpful to derive an upper bound that reflects these instance-dependent quantities. 2. A limitations section is missing in the paper. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The questions are provided in the weaknesses section. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your time reviewing our paper, and for your supportive comments. Below we address the two questions raised in your review. Q1 (Instance-dependent upper bound): Our primary focus in the analysis was dependence on $T$, which is the parameter that we envision growing large in most practical applications. However, our analysis also allows us to provide tight (upper and lower) bounds for all parameters, including $G$ and $\sigma$. For example, for $p = +\infty$, we can provide a refined upper bound of $\Sigma_{\infty}G^{1.5} T^{-1.5} + o(T^{-1.5})$, and a refined lower bound in $\Sigma_{\infty}G^{1.5} T^{-1.5}$. We will add these results to the final version. Q2 (limitations): We will add a brief limitations section to the final version of our paper. We will include discussions of the limitations arising from our two modeling assumptions, and possible future directions that may arise from removing these assumptions. --- Rebuttal Comment 1.1: Comment: Thank you very much for addressing these concerns.
Summary: This paper proposes the Variance-UCB algorithm to sequentially learn the means in a multigroup setting in order to minimize the variance over all mean estimates, and proves that the regret of the algorithm is optimal for both finite and infinite $p$ values. Strengths: 1. The Variance-UCB algorithm in this paper automatically achieves optimal regret in both cases, when $p$ is finite and when $p=\infty$, and the authors provide solid theoretical proofs for both the general lower bound and the matching regret of the algorithm. 2. The authors support their statements with empirical results by varying different parameters: the time horizon $T$, the norm parameter $p$, the number of groups $G$, and the sub-Gaussian parameters. 3. The paper is in general well-written. Weaknesses: The experimental results lack a comparison with other benchmark results in the literature. The authors only mention that varying the lowest variance has no effect on the regret when $p$ is infinite, which is a known result in the literature. One typo: line 65 "it is thekkir variances..." should it be "their"? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: As stated in the weaknesses, can the authors compare the Variance-UCB algorithm to other results in the literature? For example, the authors mention in the paper that this work lies in the framework of a multi-armed bandit setting, but other bandit works in this line do not take care of the variances of the mean estimates. It would be good to empirically show that the proposed algorithm outperforms these bandit algorithms from the literature on this task. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Lack of comparison to other benchmark algorithms in the literature. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for your kind words about our paper. Thank you also for pointing out the typo; we will correct that. 1- Comparison with other algorithms: For the case where the norm $p = +\infty$, two algorithms are known: [Antos et al. 2008, Carpentier et al. 2011]. [Carpentier et al. 2011] provides an algorithm (B-AS) and has already shown that it outperforms the algorithm (GAFS-MAX) derived in [Antos et al. 2008]. Our algorithm (Variance-UCB) coincides with (B-AS) when $p = +\infty$. We chose not to repeat the experiment made in [Carpentier 2011]. For the case where $p < +\infty$, we provide the first algorithm for this setting to the best of our knowledge. Applying traditional algorithms (Thompson sampling or $\epsilon$-greedy) without any adaptation would yield sub-optimal solutions. In our setting, the optimal strategy samples all groups asymptotically. It is not clear how to adapt traditional bandit algorithms so that, at the minimum, they converge asymptotically to the right solution. 2- Experiments around the smallest variance: [Antos 2008, Carpentier 2011] derive an upper bound that depends on $\sigma_{\min}^{-1}$ when $p=\infty$. While [Carpentier 2011] showed that the smallest variance has no effect when the distributions are exactly Gaussian, they did not show this for the general sub-Gaussian case, which our paper does. Our experiments simply verify our theoretical result that the smallest variance does not impact the regret significantly when $p=\infty$ and the distributions are sub-Gaussian. In stark contrast, we show that when $p$ is finite the smallest variance does still have an effect, even for Gaussian distributions. András Antos, Varun Grover, and Csaba Szepesvári. Active learning in multi-armed bandits. In Algorithmic Learning Theory, pages 287–302, 2008. Carpentier, A., Lazaric, A., Ghavamzadeh, M., Munos, R., & Auer, P. (2011, October). Upper-confidence-bound algorithms for active learning in multi-armed bandits. 
In International Conference on Algorithmic Learning Theory (pp. 189-203). Berlin, Heidelberg: Springer Berlin Heidelberg. --- Rebuttal Comment 1.1: Comment: Thank you for your response, and I will keep my score.
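The point above, that unadapted (e.g., uniform-exploration) strategies fall short, can already be seen at the level of deterministic allocations. A toy comparison for $p = \infty$ under the $\sigma_g^2/n_g$ estimator-variance model, with illustrative numbers:

```python
import numpy as np

sigmas = np.array([0.5, 1.0, 4.0])
T = 1200

# What a symmetric, unadapted scheme converges to: equal sampling of all groups.
n_uniform = np.full(len(sigmas), T / len(sigmas))
# Optimal for the worst-case (p = inf) objective: n_g proportional to sigma_g^2.
n_prop = T * sigmas**2 / (sigmas**2).sum()

worst_uniform = (sigmas**2 / n_uniform).max()  # = G * max(sigma^2) / T
worst_prop = (sigmas**2 / n_prop).max()        # = sum(sigma^2) / T

# Variance-proportional sampling strictly improves the worst-case estimator
# variance whenever the sigma_g are not all equal.
assert worst_prop < worst_uniform
```

Here the uniform scheme's worst-case variance is $G \max_g \sigma_g^2 / T = 0.04$, versus $\sum_g \sigma_g^2 / T \approx 0.0144$ for the variance-proportional allocation, a constant-factor gap that no amount of data closes.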
Summary: This paper studies the mean estimation problem under the multi-armed bandit setting. Here, we have a group of populations (random variables) with unknown means and standard deviations. The goal is to estimate the mean of each group on the fly and optimize the regret (measured by different kinds of norms). Their major contribution is a group of confidence-interval-based online learning algorithms and a proof of the optimality of these algorithms. Strengths: I think many working on multi-armed bandits and statistical learning have naturally wondered about this question. I am glad to see this result and that existing UCB-type algorithms still work in the new setting. Weaknesses: The text in the intro feels a bit strange to me (e.g., quite a bit of empty cells, and “analyst” also sounds odd to me). I think most people understand the importance of this problem. It would be helpful if the authors could relate this result to Stein-estimator-type research and stratified sampling. They feel related. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have a few questions: 1. The baseline considers a relaxed version of the original problem because the latter is NP-Hard. Nevertheless, the regrets are tight. Does that imply a certain kind of integrality gap bound between the NP-hard original problem and the relaxed problem? 2. How is the regret related to the number of populations, and is this also tight? 3. Is it possible to comment on your result in relation to the long line of research on Stein estimators? It seems that those statistics results also aim to estimate population means via shrinkage methods. 4. Is there a reason to call a random variable a population, e.g., do they have finite populations, or is the naming inconsequential? 5. Are there speculations and educated guesses on generalizations of the problem, e.g., what if my goal is to estimate the standard deviation/second moment of each population? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our work and for your kind words about our contributions to the bandits and statistical learning literature. Below we address the questions posed in your review. Q1 (integrality gap): Indeed, our tightness results are with respect to the original problem, which implies that the integrality gap goes to zero as $T$ grows large. Q2 (dependence on $G$): Our main focus in the analysis was dependence on $T$, since this is the parameter that we envision growing large in practice. Upon more careful analysis for the case where $p=\infty$, we are able to show an upper bound on the order of $\Sigma_{\infty} G^{1.5} T^{-1.5}$ and a matching lower bound of $\Sigma_{\infty} G^{1.5} T^{-1.5}$. This is a significant improvement in the $G$ and $\sigma_{\min}$ parameters, compared to the upper bound result of [Carpentier et al. 2011] for the case of sub-Gaussian distributions, where they provided a bound on the order of $\Sigma_\infty \sigma_{\min}^{-1} G^{2.5} T^{-1.5}$. We will add details of these more refined results to the paper. Q3 (Stein estimators): Stein estimators are biased, and we want to focus exclusively on unbiased estimators, which is why we look at the sample mean for each group. A key motivation for looking at the norm of the variances of the mean estimators is to make sure we are sampling each group enough, motivated by fairness reasons. Thus, using a biased estimator (such as the Stein method) is problematic in our setting since we care about fairness across groups. Q4 (population terminology): We use the term ``population'' in relation to a survey methodology setting, where the decision-maker/analyst/surveyor decides where to collect data in a dynamic fashion. However, we assume each group can be sampled as many times as we want, so essentially the population is infinite in this setting and the naming is inconsequential. 
Q5 (extensions to other statistics): We conjecture that our results and algorithmic framework would generalize to other well-defined statistics, as long as one can construct an estimator with high probability concentration guarantees. Carpentier, A., Lazaric, A., Ghavamzadeh, M., Munos, R., & Auer, P. (2011, October). Upper-confidence-bound algorithms for active learning in multi-armed bandits. In International Conference on Algorithmic Learning Theory (pp. 189-203). Berlin, Heidelberg: Springer Berlin Heidelberg.
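The Q1 claim, that the integrality gap vanishes as $T$ grows, can be illustrated by rounding the continuous optimum to an integer allocation and comparing objectives. A sketch under the $\sigma_g^2/n_g$ model; the largest-remainder rounding scheme here is a generic illustration, not the paper's procedure:

```python
import numpy as np

def round_allocation(n_real, T):
    """Largest-remainder rounding to a nonnegative integer allocation summing to T."""
    n_int = np.floor(n_real).astype(int)
    frac = n_real - np.floor(n_real)
    # Hand the leftover pulls to the groups with the largest fractional parts.
    for g in np.argsort(frac)[::-1][: T - n_int.sum()]:
        n_int[g] += 1
    return n_int

sigmas = np.array([1.0, 2.0, 3.0])
p = 2.0
w = sigmas ** (2 * p / (p + 1))          # KKT: n_g proportional to sigma_g^(2p/(p+1))
obj = lambda n: float(((sigmas**2 / n) ** p).sum() ** (1 / p))

gaps = []
for T in (100, 1000, 10000):
    n_real = T * w / w.sum()             # continuous optimum on {sum n_g = T}
    n_int = round_allocation(n_real, T)
    gaps.append(obj(n_int) - obj(n_real))
# Rounding moves each coordinate by O(1) along a direction summing to zero, so
# the first-order term vanishes at the optimum and the gap decays rapidly in T.
```

Since the integer allocation is feasible for the continuous relaxation, the gap is always nonnegative, and numerically it shrinks by orders of magnitude as $T$ grows.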
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper provides a general bandit-based active learning approach to the problem of learning the means of several disjoint groups so as to optimize for the variance of the resulting estimates as measured by the p-norm of the variance vector. An algorithm is proposed which samples based on a certain upper confidence bound designed for the variance p-norm, and its regret guarantees are studied. The authors identify and carefully study two regimes: p < infty and p = infty, and prove a somewhat surprising, and tight, dichotomy: namely, the regret is only Theta(1/T^2) for p < infty but increases to Theta(1/T^1.5) for p = infty. The tight lower bounds are achieved via a simple and explicit lower-bounding construction, whereas the upper bounds follow from two different appropriate upper bounds on the curvature of the variance vector p-norm. Some experiments are given to confirm the theoretically identified tight rates. Strengths: The main strength of the paper is that it presents a very clean and natural UCB-based algorithm which, for reasons that are transparently shown via a sequence of intuitive arguments (such as a simple Taylor-based curvature bound on the p-norm vector), achieves the tight rates for the important problem of identifying group means with small variance using as few samples as possible. I also consider it quite important and strong that the authors were able to locate the corresponding tight lower bounds, which are achieved by a surprisingly clean and simple construction (where two incompatible worlds are provided which are identical except for two groups' distributions that are slightly tweaked). In this manner, this simple case of optimizing for p-norms of the variance vector becomes, in a sense, resolved (up to potential tweaks to the setup which --- as mentioned in the future work section --- could involve constraining the sampling policies etc.) 
Another strength is the lucid and careful presentation of the results --- the paper is generally quite well-written (aside from a few typos that I encourage the authors to locate and fix). Weaknesses: In the theory part, there are in my opinion no big weaknesses --- the case is worked out quite comprehensively. Of course, a (small) weakness of the approach is the elusive nature of the subgaussian parameters that the proof assumes --- some experiments are later run to convince the reader that you don't necessarily need to know these constants exactly, but the intricate ways in which they are present in the bounds make the situation not as pleasant to deal with theoretically (on which note, I would like to ask the authors to --- for full transparency ---- more prominently mention the assumption, e.g. in the contributions section). Another relatively small weakness is that the experimental section could be tightened up, in the sense of some plots being expanded to be more informative and some off-the-cuff remarks elaborated on. (See the Questions section below.) Still, no major problems there in my opinion. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: First, for the theory part, and for the experimental part alike, I would like to better understand the nature of requiring the smallest variance to be nonzero as well as it featuring in the performance guarantees and --- in fact, as mentioned on line 275 --- somewhat paradoxically leading to higher regret as it decreases. If there is an intuitive explanation or some examples for why this should be so, I'd like to learn more. Secondly, could the authors plot out some other norms between l2 and linfty? This way the transition effect as p -> infty could be better portrayed and understood. Thirdly, can plots such as Figure 2 be extended to make the convergence of some "difficult" curves (such as C=0.001 in that example) more visible? 
Finally, do the authors have any insights into why the transition in Figure 4, even though it takes a long time to wait for it, kicks in so rapidly? Thanks! Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for your kind words about our paper and results. We will add a brief discussion about the sub-Gaussianity assumption in the contributions. For the "relatively small weakness" in the experimental section, corresponding to Q2 and Q3, we will incorporate your feedback to plot some other norms between $\ell_2$ and $\ell_\infty$ and extend the x-axis on Figure 2 to better show convergence of the $C=0.001$ case. Q1 (Requiring non-zero variance): The smallest variance appears in the regret bounds (although not in the dominant term for the case of $p=\infty$). Suppose the unknown distributions are Bernoulli. Intuitively, an analyst cannot distinguish (using finitely many samples) between a group with arbitrarily small variance and a group with zero variance, but the optimal strategy can be quite different in these two cases. Q4 (Rapid convergence in Figure 4): To better understand why this happens, we ran a similar experiment for $p = 1, 2, +\infty$, which we will add to the paper. Initially the algorithm samples (on average) uniformly across all groups due to the UCB constant outweighing the sample variance estimates, and each group must wait to be sampled enough times for the algorithm to estimate its optimal sampling rate. This delay will naturally increase with the number of groups. Once we are in the right range, the algorithm samples the highest variances first. This causes abrupt variations in the case where $p$ is small because the objective function is very sensitive to changes in one coordinate. --- Rebuttal Comment 1.1: Title: Acknowledgment Comment: Thank you for your response to my questions. In particular, with the help of the authors' answers, I now better understand the empirical properties of the proposed method (e.g. through the explanation about the rapid convergence region), and expect and encourage the authors to add such intuitions, as well as revised plots, to the updated version of the manuscript. 
I will happily keep my score and positive opinion of this work. I also enjoyed reading the discussions with the other reviewers --- in particular, appreciating their point that working out optimal dependence on the number of groups would be quite appealing in this setting. I look forward to seeing the added proof of the new tight bound in the revised manuscript.
Non-Smooth Weakly-Convex Finite-sum Coupled Compositional Optimization
Accept (poster)
Summary: This paper addresses the finite-sum coupled compositional optimization (FCCO) scenario, relaxing the requirement of Lipschitz gradients for the involved functions. Instead, they consider weakly convex functions with certain monotonicity conditions. The paper introduces new algorithms and provides oracle complexity guarantees for computing a point with an epsilon-gradient norm relative to the Moreau envelope. The authors also demonstrate the applicability of these methods to AUC maximization tasks. Strengths: It is always beneficial and sometimes nontrivial to extend the algorithm implementation and analytic technique for gradient Lipschitz functions to that of more general weakly convex functions. The authors also demonstrate practical applicability of the proposed new algorithms. Weaknesses: Since I'm not acquainted with the FCCO setting, I will concentrate on the broader technical aspects instead. * L137: I don't get the idea of the "monotonic property" of a multivariate mapping f here. What is the "input" referring to here? Let the function be f: R^n -> R^m. Is the input here referring to a vector in R^n? If so, how can we define the "monotonicity" of such a multivariate function? * The whole development in this paper is built on the weak convexity of F proved in Propositions 4.2 and 4.4. However, the arguments there are not transparent to me. Let me take the proof of Proposition 4.2 for example. (a) L809 first inequality: Why do we have v'(g(y) - g(x)) \geq v'( ... )? It seems that the authors assumed v <= 0 here as f is assumed non-increasing in L808 (though the term "non-increasing" is not clearly defined in my opinion). However, it is not clear to me why such a "monotonicity" of f will promise the non-positivity of subgradients v. For a smooth function f, the claim (v <= 0) holds trivially. But for a general non-smooth weakly convex function, even with the "monotonic assumption", this is nontrivial and a formal proof is required. 
(b) L809 third inequality: should L_2C_1 be L_2C_1*sqrt(n), where n is the input dimension of f? Please check. If that is true, then the weak convexity parameter would be depend on the dimension n of the input to f. Minor: * L805: should v'(g(x) - g(y)) be v'(g(y) - g(x))? * L145: \epsilon > 0 should be 1 > \epsilon >= 0. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer FiRa for the detailed and insightful review. We will fix the minor issues in the future revision. Here we would like to address the remaining concerns. **Q1**. L137: Regarding the "monotonic property" of a multivariate mapping $f$. **Response**: We apologize for this confusion. Here we present a more detailed definition of monotonicity: Consider a function $f:\mathbb{R}^d\to \mathbb{R}^m$. For simplicity, we write $f=(f_1,\dots,f_m)$. We say that $f$ is non-decreasing if for each $k=1,\dots,m$, $f_k:\mathbb{R}^d\to \mathbb{R}$ is non-decreasing with respect to each element in the input. We will add this to the revision. **Q2**. Regarding the proof of Proposition 4.2, L809 first inequality: why does such a "monotonicity" of f promise the non-positivity of subgradients v? **Response**: Thank you for raising this question. Here we present an additional proposition to address this issue. Before stating the proposition, we would like to mention the definition of subgradient we used in this work. For the class of weakly-convex Lipschitz continuous functions, the definition of Clarke subgradients [Theorem 9.61, Rockafellar and Wets, 2009] coincides with the definition of subgradient [Definition 8.3 in Rockafellar and Wets, 2009] [41]. Since we assume both weak convexity and Lipschitz continuity for the functions in our objectives, the definitions mentioned above are considered to be equivalent. Proposition. Consider a Lipschitz continuous function $f:O\to\mathbb{R}$ where $O\subset \mathbb{R}^d$ is an open set. Assume $f$ to be non-increasing (resp. non-decreasing) with respect to each element in the input; then all subgradients of $f$ are non-positive (resp. non-negative) element-wise. Proof. Let $D$ be the subset of $O$ where $f$ is differentiable. 
By Theorem 9.60 in [Rockafellar and Wets, 2009], a Lipschitz continuous function $f:O\to\mathbb{R}$, where $O\subset \mathbb{R}^d$ is an open set, is differentiable almost everywhere; in particular, $D$ is dense in $O$. Then by Theorem 9.61 in [Rockafellar and Wets, 2009], the subdifferential of $f$ at $x$ is given by $$ \partial f(x) = \operatorname{con}\{v \mid \exists x_k\to x \text{ with } x_k\in D,\ \nabla f(x_k)\to v\}, $$ where $\operatorname{con}(\cdot)$ denotes the convex hull. If we assume that $f$ is non-increasing with respect to each element of the input, then $\nabla f(x)\leq 0$ (element-wise) for all differentiable points $x\in D$. This implies that all vectors in $\{v \mid \exists x_k\to x \text{ with } x_k\in D,\ \nabla f(x_k)\to v\}$ are non-positive element-wise. Therefore, all subgradients of $f$ are non-positive element-wise. On the other hand, if we assume that $f$ is non-decreasing, one may follow the same argument and conclude that all subgradients of $f$ are non-negative element-wise. For functions $f:O\to\mathbb{R}^m$ where $O\subset \mathbb{R}^d$ is an open set, one may write $f=(f_1,\dots,f_m)$ and apply the above proposition to each $f_k:O\to\mathbb{R}$, $k=1,\dots,m$. **Q3**. In the proof of Proposition 4.2, L809 third inequality: should $L_2C_1$ be $L_2C_1\sqrt{d}$, where $d$ is the input dimension of $f$? Please check. **Response**: Thank you for pointing this out! You are correct. The weak convexity parameter of $F$ should involve the input dimension of the outer functions. The weak convexity constant in Proposition 4.2 should be $\rho_F= \sqrt{d_1}\rho_gC_f+\rho_f C_g^2$, where $d_1$ is the input dimension of $f_i$. The weak convexity constant in Proposition 4.4 should be $\rho_F = \sqrt{d_1}(\sqrt{d_2}L_hC_g+\rho_g C_h^2)C_f+\rho_f C_g^2C_h^2$, where $d_1$ and $d_2$ are the input dimensions of $f_i$ and $g_i$ respectively. --- Rebuttal 2: Title: Questions?
Comment: Dear Reviewer FiRa and AC, Given that the discussion period is ending soon, we would like to follow up if there are some other questions from the reviewer that need us to clarify. Thank you for providing valuable comments on our paper! Regards Authors --- Rebuttal Comment 2.1: Comment: Thank the authors for the clarification, which addressed my concerns and filled in the missing details in the statement and proof. I'll increase my score to 6.
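The sign claim in the proposition discussed above (a coordinate-wise non-decreasing Lipschitz function has element-wise non-negative gradients at its differentiable points) can be checked numerically. The sketch below is illustrative and not from the paper: it uses $f(x)=\max_i x_i$, which is non-decreasing in every coordinate, Lipschitz, and non-smooth, and estimates partial derivatives by central finite differences at random points (which are almost surely differentiable points with a unique argmax).

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Coordinate-wise non-decreasing, Lipschitz, non-smooth: f(x) = max_i x_i."""
    return np.max(x)

h = 1e-6
eye = np.eye(4)
grads = []
for _ in range(100):
    x = rng.normal(size=4)  # almost surely a point of differentiability
    grads.append(np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in eye]))

min_partial = min(g.min() for g in grads)  # should be >= 0 up to rounding
max_partial = max(g.max() for g in grads)  # gradient is a basis vector, so max ~ 1
```

At a point with a unique argmax the gradient of the max function is a standard basis vector, so every estimated partial derivative is (numerically) 0 or 1, consistent with the proposition.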
Summary: The paper considers a class of finite-sum coupled compositional optimization (FCCO) problems and a class of tri-level finite-sum coupled compositional optimization (TCCO) problems. Under the setting of non-smooth weakly-convex FCCO, the paper establishes the complexity of a single-loop algorithm with the tool of the Moreau envelope. The algorithm is then extended to solve the non-smooth weakly-convex TCCO. Numerical results on two-way partial AUC maximization and multi-instance two-way partial AUC maximization are also reported. Strengths: Convergence analysis of a single-loop stochastic algorithm for solving non-smooth weakly-convex FCCO is a nice contribution of the paper. Weaknesses: 1. The motivation for considering non-smooth weakly-convex FCCO and TCCO is lacking. Some additional applications other than two-way partial AUC maximization should be provided. 2. In the nonconvex setting, definitions need to be rigorously given/referred to. For example, $f$ at line 133 can be nonconvex; which definitions of "subgradient" and "subdifferential" is the paper using? There are several ways to define a "variance-reduced estimator"; which definition is the paper using? 3. The experiments only partially support the theoretical results of the paper. Numerical results comparing the evolution of the objective function over time when using the proposed algorithm and other known algorithms to solve the same non-smooth weakly-convex FCCO problem are expected. This would illustrate the importance of the obtained complexity. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Consider the TPAUC problem in Section 5. The corresponding NSWC FCCO problem of the regular setting and the corresponding NSWC TCCO problem for the MIL setting still have convex $f_i$. Could the complexity of the proposed algorithms be improved if the weakly-convex assumption on $f_i$ is replaced by convexity? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n.n. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer GoBV for the insightful review. Here we would like to address your concerns. **Q1**: Additional applications other than the two-way partial AUC maximization should be provided. **Response**: We would like to provide another important application of NSWC FCCO for regularized group distributionally robust optimization (group DRO), which is useful for addressing distributional shift [Sagawa et al. 2020]. Consider $N$ groups with different distributions. Each group $k$ has an averaged loss $L_k(w)=\frac{1}{n_k}\sum_{t=1}^{n_k}\ell (f_w(x_t^k),y_t^k)$, where $w$ is the model parameter and $(x_t^k, y_t^k)$ is a data point. For robust optimization, we assign different weights to different groups and form the following robust loss minimization problem: $$ \min_w \max_{p\in \Omega} \sum_{k=1}^N p_k L_k(w), $$ where $\Omega\subset\Delta$ and $\Delta$ denotes a simplex. A common choice for $\Omega$ is $\Omega=\{p\in\Delta : p_i\leq 1/K\}$ where $K$ is an integer, which yields the so-called CVaR losses (i.e., the average of the top-K group losses). As a result, the above problem is equivalent to (Curi et al, 2019): $$ \min_w \min_{s} F(w,s)=\frac{1}{K}\sum_{k=1}^N [L_k(w)-s]_+ + s. $$ We can map this problem into non-smooth weakly-convex FCCO when the loss function $\ell(\cdot,\cdot)$ is weakly convex in terms of $w$. Compared with solving the min-max problem, solving the above FCCO problem does not involve dealing with the projection onto the constraint set $\Omega$ and avoids the expensive sampling used in existing works (Curi et al, 2019). **Q2**: In the nonconvex setting, definitions of "subgradient" and "subdifferential" need to be rigorously given/referred to. **Response**: Thank you for the comments! We will add these definitions to the revision. In particular, the definitions of subgradient and subdifferential follow Definition 8.3 in [Rockafellar and Wets, 2009] and section 2.2 of [5].
In section 2.2 of [5], properties of subgradients for weakly-convex functions are discussed. For the class of weakly-convex Lipschitz continuous functions, the definition of Clarke subgradients [Theorem 9.61, Rockafellar and Wets, 2009] coincides with Definition 8.3 in [Rockafellar and Wets, 2009] [41]. **Q3**: Which definition of "variance-reduced estimator" is the paper using? **Response**: Our use of the term "variance-reduced estimator" follows the line of research on variance reduction techniques in the stochastic optimization literature for machine learning, including SVRG [Johnson and Zhang, 2013], SARAH [Nguyen et al., 2017], SPIDER [Fang et al., 2018], STORM [4], and MSVR [13]. In our context, "variance-reduced estimator" generally refers to estimation techniques that ensure that the average accumulated estimation error of the gradient estimators decays. We will clarify this in the revision. **Q4**: Numerical results on the comparison of the evolution of the objective function over time when using the proposed algorithm and other known algorithms for solving the same non-smooth weakly-convex FCCO problem are expected. **Response**: (1) Please note that this is **the first work** that studies and solves non-smooth weakly-convex FCCO problems. Hence, there are no known algorithms of the same style for solving the same non-smooth weakly-convex FCCO problems that we can compare with. (2) The experimental results serve the purpose of demonstrating the usefulness of our algorithms for solving ML problems. In particular, for TPAUC maximization we compared with the state-of-the-art baseline SOTAs [39] in order to illustrate the generalization performance of our algorithms. (3) We did provide experimental results to justify the impact of different algorithmic choices on the convergence.
For example, the results in Figure 3 in the global response demonstrate that using the MSVR estimator ($\gamma>0$) gives faster convergence than using the moving average estimator ($\gamma=0$), which is implied by our theoretical results. (4) To better address your concern, we have implemented a min-max optimization approach for optimizing the same TPAUC loss as ours, which was proposed in the supplement of [39] and named SOTA. Please note that SOTA leverages the convexity of the outer function $f_i$ and solves the equivalent min-max problem. The training convergence results shown in Figure 3 in the global response demonstrate that our method has competitive and sometimes even faster convergence, even though our method does not exploit the convexity of the outer function. **Q5**: Could the complexity of the proposed algorithms be improved if the weakly-convex assumption of $f_i$ is replaced by convexity? **Response**: It is unclear to us whether the convexity of $f_i$ can help improve the convergence. To the best of our knowledge, when $f_i$ is convex the problem is equivalent to a weakly-convex concave min-max problem, whose best complexity is also $O(\epsilon^{-6})$ [20,26,36,38]. Since there is no known lower bound for this problem, we do not know if it is possible to further improve the complexity. **References**: Yang et al. Two-way partial auc and its properties. Statistical methods in medical research, 28(1):184–195, 2019. Sagawa et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. ICLR, 2020. Curi et al. Adaptive sampling for stochastic risk-averse learning, 2019. Rockafellar and Wets. Variational analysis, volume 317. Springer Science & Business Media, 2009. Johnson and Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In NIPS, 2013. Fang et al. Spider: Near-optimal non-convex optimization via stochastic path-integrated differential estimator.
In Advances in Neural Information Processing Systems, 2018. Nguyen et al. SARAH: A novel method for machine learning problems using stochastic recursive gradient. In Proc. of the 34th ICML, 2017.
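As a toy illustration of the variance-reduction idea behind the STORM/MSVR-style estimators named in the Q3 response above, the sketch below tracks a drifting target $g(w_t)=w_t$ from noisy evaluations. It compares a plain moving average against an estimator with a correction term that reuses the same sample at two consecutive points (here with weight $\gamma=1-\beta$). This is a schematic of the mechanism under made-up dynamics, not the exact MSVR update from [13].

```python
import numpy as np

rng = np.random.default_rng(0)
T, beta, sigma, drift = 2000, 0.1, 0.1, 0.05
gamma = 1.0 - beta              # illustrative correction weight

w = 0.0
u_plain, u_corr = 0.0, 0.0
err_plain, err_corr = [], []
for t in range(T):
    w_prev, w = w, w + drift    # the target g(w_t) = w_t drifts each step
    xi = rng.normal(scale=sigma)
    g_now, g_prev = w + xi, w_prev + xi   # same sample evaluated at both points
    u_plain = (1 - beta) * u_plain + beta * g_now
    u_corr = (1 - beta) * u_corr + beta * g_now + gamma * (g_now - g_prev)
    err_plain.append((u_plain - w) ** 2)
    err_corr.append((u_corr - w) ** 2)

mse_plain = np.mean(err_plain[200:])      # discard burn-in
mse_corr = np.mean(err_corr[200:])
```

Because the noise cancels inside the difference $g(w_t;\xi_t)-g(w_{t-1};\xi_t)$, the corrected estimator's error recursion no longer contains the drift term, so its tracking error is far smaller than the plain moving average's steady-state bias.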
Summary: This paper handles the problems of non-smooth weakly-convex compositional optimization. The first problem, referred to as FCCO (finite-sum coupled compositional minimization), is given by $$ \min_{w \in \mathbb{R}^d} F(w) \triangleq \frac{1}{n} \sum_{i=1}^n f_i(\mathbb{E}_{\xi\sim \mathcal{D}_i}[g_i(w; \xi)])$$ The second problem, referred to as TCCO (tri-level coupled compositional minimization), is given by $$ \min_{w \in \mathbb{R}^d} F(w) \triangleq \frac{1}{n_1} \sum_{i=1}^{n_1} f_i\left(\frac{1}{n_2}\sum_{j=1}^{n_2} g_i( \underset{\xi \sim \mathcal{D}_{i,j}}{\mathbb{E}} [h_{i,j}(w; \xi)])\right)$$ Here, the outer functions $f_i$ and $g_i$ are Lipschitz and weakly convex, which has not been considered by previous works and significantly complicates the analysis. The authors propose two algorithms, SONX and SONT, for the two problems respectively. For the first problem, FCCO, $u_{i,t}$ are the estimates of $g_i(w_t)$, $\forall i \in \{1,2,\ldots, n\}$, which are required for an estimate of the complete function $F(w)$. In each step, a batch $B$ of coordinates from $\{1,2,\ldots, n\}$ is sampled and only the estimates in this batch, $u_{i,t}, i\in B$, are updated using a variance-reduced estimator. Using the estimates, a single gradient step is computed for $w_t$. The estimator has been adapted from [1]. As the functions are weakly convex, the authors show convergence to an $\epsilon$-stationary point of the Moreau envelope of $F$ at a rate of $\mathcal{O}(\epsilon^{-6})$. The authors show that both problems can be mapped exactly to two-way partial AUC (TPAUC) maximization of a parametrized network, even deep networks, with the TCCO problem mapping to a multi-instance learning version. TPAUC is the area under the ROC curve when the false positive rate is $\leq \beta$ and the true positive rate is $\geq \alpha$.
Over a set of medical datasets, the authors show that SONX and SONT outperform baselines for TPAUC maximization, as the baselines use poor approximations of TPAUC. **References** 1. Jiang et al. 2022. Multi-block-single-probe variance reduced estimator for coupled compositional optimization Strengths: - **Easy to Implement**: The algorithms are single-loop and use moderately sized batches, which makes them easy to implement. The algorithms can also be parallelized over the coordinates $i$. - **Better Rates**: In addition to being easy to implement, the algorithms achieve the best known rates for this class of functions. Existing works obtain similar or worse rates under an easier problem setting or with multiple loops or using large batches. - **Mapping to TPAUC**: The authors describe the TPAUC problem comprehensively and show the exact mapping with the two optimization problems. - **Detailed presentation**: The convergence analysis, the mapping to TPAUC and the related works are very detailed, which helps in understanding the paper. Weaknesses: - **Experimental and Theoretical Baselines do not match**: For theory, the baselines for comparisons are algorithms for simpler problem settings, for instance with smoothness or convexity. For experiments, the baselines for TPAUC maximization are used, which do not match the theoretical baselines. It is unclear if some theoretical baseline can be applied to TPAUC and perform better. This should not be the case ideally, but it has not been verified. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there a lower bound for this problem setting, which can be used to verify if the rates are optimal and cannot be improved? - Why is the dependence on $\frac{n_1}{B_1}$ worse for SONT as compared to SONX? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Described in Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer evM3 for the detailed and insightful review. Here we would like to address your concerns. **Q1**: Regarding the mismatch between theoretical baselines and experimental baselines. It is unclear if some theoretical baseline can be applied to TPAUC and perform better. **Response**: (1) We would like to point out that the baseline SOTAs is a state-of-the-art method for optimizing TPAUC, as demonstrated in [39]. (2) The complexity of the baseline SOTAs for TPAUC maximization [39] matches that of the theoretical baseline SOX in Table 1. SOTAs can be considered an extension of SOX for solving a smoothed surrogate of the TPAUC loss [39]. (3) A theoretical baseline named SOTA for optimizing the same TPAUC formulation as ours has been pointed out in [39] (as discussed in lines 114-117), which reformulates the problem into a min-max problem by leveraging the convexity of the outer function and adopts an existing double-loop algorithm. In comparison, our algorithm is single-loop and has the same iteration complexity. We have implemented SOTA and compared the training convergence on the two molecular datasets. The training convergence results shown in Figure 3 in the global response demonstrate that our method has competitive and sometimes even faster convergence, even though our method does not exploit the convexity of the outer function. **Q2**: Is there a lower bound for this problem setting, which can be used to verify if the rates are optimal and cannot be improved? **Response**: To the best of our knowledge, there is no known lower bound for NSWC FCCO. However, the complexity of our proposed methods matches the best known complexity for solving weakly-convex concave min-max problems. With an additional convexity assumption on $f_i$, NSWC FCCO can be rewritten as a weakly-convex concave min-max problem. Thus we consider the complexity of our proposed methods to be state of the art.
**Q3**: Why is the dependence on $\frac{n_1}{B_1}$ worse for SONT as compared to SONX? **Response**: The worse dependence on $\frac{n_1}{B_1}$ is caused by the two layers of block sampling in SONT. Since the TCCO problem is three-level compositional, we apply the block-sampling strategy in the estimation of both the 1st inner functions $\{g_i\}$ and the 2nd inner functions $\{h_{i,j}\}$. This results in increased inaccuracy in the estimation of the overall gradients and function values, and eventually leads to a worse dependence on $\frac{n_1}{B_1}$. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the detailed rebuttal, especially the clarification about the baseline SOTAs.
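The single-loop scheme described in the review above (sample a block of indices, refresh only those inner-function estimates, then take one subgradient step on $w$) can be sketched on a toy FCCO instance. This is an illustrative mock-up with a plain moving-average estimator ($\gamma=0$) and made-up deterministic inner maps $g_i(w)=A_i w$ and outer functions $f_i(u)=[u-c_i]_+$; it is not the authors' SONX algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T, B = 20, 5, 500, 4
A = rng.normal(size=(n, d))   # toy inner maps: g_i(w) = A[i] @ w
c = rng.normal(size=n)

def F(w):
    """Full objective: (1/n) * sum_i f_i(g_i(w)) with f_i(u) = max(u - c_i, 0)."""
    return np.mean(np.maximum(A @ w - c, 0.0))

w = rng.normal(size=d)
u = np.zeros(n)               # running estimates of g_i(w), refreshed blockwise
beta, eta = 0.5, 0.05
F_init = F(w)
for t in range(T):
    idx = rng.choice(n, size=B, replace=False)        # sample a block of indices
    u[idx] = (1 - beta) * u[idx] + beta * (A[idx] @ w)  # moving average (gamma = 0)
    # chain rule: a subgradient of f_i at u_i is 1{u_i > c_i}; g_i'(w) = A[i]
    active = (u[idx] > c[idx]).astype(float)
    w -= eta * (active[:, None] * A[idx]).mean(axis=0)  # one subgradient step
F_final = F(w)
```

On this convex piecewise-linear toy problem the loop drives the objective down even though each step only refreshes $B$ of the $n$ inner estimates.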
Summary: The purpose of this paper is to introduce a new approach to solving a specific type of optimization problem called non-smooth weakly-convex finite-sum coupled compositional optimization (NSWC FCCO) problems. The authors specifically focus on a variation of FCCO problems where the outer function is weakly convex and non-decreasing, and the inner function is weakly convex. To address these problems, the authors propose two new algorithms. The first algorithm is designed for two-level NSWC FCCO problems and operates in a single loop. The second algorithm is an extension of the first algorithm and is intended for tri-level NSWC FCCO problems. The authors provide a comprehensive analysis. They establish the complexity of the algorithms in terms of finding an $\epsilon$-stationary point of the Moreau envelope of the objective function. Strengths: The paper is of high quality as it provides a detailed and rigorous mathematical analysis of the proposed algorithms. The technique of using the Moreau envelope is intuitive and interesting. I have not checked the proof line by line, but it looks good. The authors also conduct extensive experiments to validate their theoretical findings. They compare the performance of their algorithms with other competitive methods across multiple datasets, demonstrating the effectiveness of their approach. Weaknesses: The authors have presented new convergence analysis for FCCO and TCCO, which I acknowledge. However, it seems that other multistage algorithms have already achieved the optimal rate. Consequently, I would lower my evaluation score for the novelty aspect. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I acknowledge and value the authors' dedication in exploring the intricacies of the technical aspects. Nevertheless, I fail to find the application of maximizing two-way partial AUC particularly inspiring. Could you please demonstrate any other compelling applications?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: no Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer mWSr for the insightful review. Here we would like to address your concerns. **Q1**: It seems that other multistage algorithms have already achieved the optimal rate. **Response**: Thank you for acknowledging our new convergence analysis. However, there is some **misunderstanding** of our results. No previous algorithms or analyses of non-convex FCCO (or even the broader family of non-convex compositional optimization) are applicable to our considered non-smooth weakly-convex FCCO/TCCO problems, as they assume that $f_i$ and $g_i$ are smooth. Our work is the first to study and solve non-smooth weakly-convex FCCO and TCCO problems, i.e., where $f_i$ and $g_i$ are non-smooth weakly-convex. All the possible existing solutions to non-smooth FCCO [20,26,36,38,15] need to reformulate FCCO as a min-max problem under the assumption that $f_i$ is convex. **Q2**: Could you please demonstrate any other compelling applications? **Response**: (1) We would like to point out that two-way partial AUC maximization is particularly important in medical domains, where it is important to restrict the false positive rate to be small and the true positive rate to be large (Yang et al. 2019). (2) We would like to provide another important application of NSWC FCCO for regularized group distributionally robust optimization (group DRO), which is useful for addressing distributional shift (Sagawa et al. 2020). Consider $N$ groups with different distributions. Each group $k$ has an averaged loss $L_k(w)=\frac{1}{n_k}\sum_{t=1}^{n_k}\ell (f_w(x_t^k),y_t^k)$, where $w$ is the model parameter and $(x_t^k, y_t^k)$ is a data point of the $k$-th group. For robust optimization, we assign different weights to different groups and form the following robust loss minimization problem: $$ \min_w \max_{p\in \Omega} \sum_{k=1}^N p_k L_k(w), $$ where $\Omega\subset\Delta$ and $\Delta$ denotes a simplex.
A common choice for $\Omega$ is $\Omega=\{p\in\Delta : p_i\leq 1/K\}$ where $K\leq N$ is an integer, which yields the so-called CVaR losses (i.e., the average of the top-K group losses). As a result, the above problem is equivalent to (Curi et al, 2019): $$ \min_w \min_{s} F(w,s)=\frac{1}{K}\sum_{k=1}^N [L_k(w)-s]_+ + s. $$ We can map this problem into non-smooth weakly-convex FCCO when the loss function $\ell(\cdot,\cdot)$ is weakly convex in terms of $w$. Compared with solving the min-max problem, solving the above FCCO problem does not involve dealing with the projection onto the constraint set $\Omega$ and avoids the expensive sampling used in existing works (Curi et al, 2019). References: Yang et al. Two-way partial auc and its properties. Statistical methods in medical research, 28(1):184–195, 2019. Sagawa et al. Distributionally Robust Neural Networks for Group Shifts: On the Importance of Regularization for Worst-Case Generalization. ICLR, 2020. Curi et al. Adaptive sampling for stochastic risk-averse learning, 2019. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed reply, the paper looks more promising now and I will increase the score.
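The equivalence stated in the group DRO example above can be verified numerically: the minimizer of $F(s)=\frac{1}{K}\sum_k [L_k - s]_+ + s$ can be taken as the $K$-th largest loss, at which point the objective equals the average of the top-$K$ group losses. A small check with made-up losses (illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=12)            # toy group losses L_k
K = 4

def obj(s):
    """F(s) = (1/K) * sum_k [L_k - s]_+ + s, the CVaR reformulation."""
    return np.maximum(L - s, 0.0).sum() / K + s

top_k_mean = np.sort(L)[::-1][:K].mean()   # average of the top-K losses
s_star = np.sort(L)[::-1][K - 1]           # K-th largest loss attains the minimum

# brute-force check that no s on a fine grid does better
grid = np.linspace(L.min() - 1.0, L.max() + 1.0, 100001)
min_on_grid = min(obj(s) for s in grid)
```

Since $F$ is convex and piecewise linear in $s$, a grid search confirms both that $s^\star$ is (up to grid resolution) a minimizer and that the minimum value is the top-$K$ average.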
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful reviews. Please find Figure 3 and Table 5 in the attached pdf file. Pdf: /pdf/7a8fc8565a6324afb450f3388248ace2870519c2.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The manuscript studies a class of non-smooth non-convex compositional optimization problems in which the objective function is given in the form of a finite-sum composition where the functions are assumed to be weakly convex. The authors present stochastic approximation algorithms to solve this class of problems and establish their sample complexity bounds. Strengths: The authors provide the sample complexity bound of $O(1/\epsilon^6)$ for a single-loop algorithm applied to non-smooth weakly convex compositional problems. Weaknesses: Given the existing works in the literature and the assumptions made in the manuscript, the presented theoretical results are incremental. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In Table 1, the authors compare their results with the existing ones for smooth problems. However, the dependence on $n$ is hidden, so the reader cannot compare this dependency. Moreover, they need to add more existing results (mentioned on page 3) to the table with their assumptions so that there is a clear picture for comparison. 2. The choice of batch sizes appears in the complexity bounds; however, there is very limited discussion of their role. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer SCt1 for the detailed and insightful review. Here we would like to address your concerns. **Q1**. Difference from the existing works in the literature. **Response**: We politely disagree with the reviewer that our work is incremental in light of existing works. There are fundamental differences between our work and existing works. This is the **first work** studying finite-sum coupled compositional optimization (FCCO) problems in **the non-smooth setting**, i.e., no smoothness assumption is imposed on either $f_i$ or $g_i$. (1) As discussed in lines 47-51 for previous works on smooth FCCO and lines 65-69 for our work on the analysis of convergence, there is a key difference in the analysis. Previous works heavily rely on the smoothness of the outer function $f_i$ for bounding the error of the stochastic gradient estimator, which is not applicable to our setting. (2) Our weak convexity assumptions on $f_i$ and $g_i$ are **weaker** than the smoothness assumptions made in existing works [13,21], as the latter imply the former. No existing method can solve non-smooth weakly-convex FCCO unless additional assumptions are added. (3) Moreover, our results bring new algorithms for solving a family of weakly-convex concave min-max problems of the form $\min_{x}\max_y \frac{1}{n}\sum_i [y_i g_i(x) - f^*(y_i)]$, where $f^*$ is the convex conjugate of a convex function $f$. The best existing methods are double-loop methods with complexity $O(\epsilon^{-6})$ [20,26,36,38]. However, our method is single-loop with the same complexity. Last but not least, the considered non-smooth weakly-convex TCCO is novel; it has important applications in ML, but no efficient solution is available in existing works. **Q2**. Regarding the dependence on $n$ and the roles of batch sizes in the complexity bounds.
**Response**: We summarize the detailed complexities exhibited in our theorems in Table 5 (which can be found in the pdf file from the global response), revealing the dependence on $n$ and the batch sizes. We have also discussed the dependence on the batch sizes and $n$ in detail at lines 230-234 for SONX, and lines 239-242 for SONT. Our SONX algorithm for non-smooth FCCO has the same dependence on the batch sizes and $n$ as MSVR [13] for smooth FCCO. The dependence of SONT's complexity on these parameters is more complex than that of SONX. Nevertheless, we have provided some discussion in lines 239-242. Overall, we can see that increasing the batch sizes plays a role in accelerating the convergence. **Q3**. Comparison with more existing results (mentioned on page 3) in the table with their assumptions so that there is a clear picture for comparison. **Response**: Please see Table 5 (which can be found in the pdf file from the global response) for the comparison with more existing results. We will update Table 1 in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I increased my rating to 5. --- Reply to Comment 1.1.1: Title: Thank you for raising your rating! Comment: We are glad that our rebuttal helped address your concerns. Thank you!
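For intuition on the min-max reformulation via the convex conjugate mentioned in the Q1 response above, one can sanity-check the biconjugate identity for the hinge function: with $f(g)=[g]_+$, the conjugate is $f^*(y)=0$ for $y\in[0,1]$ (and $+\infty$ otherwise), so $\max_{y\in[0,1]}\{y\,g - f^*(y)\}$ recovers $[g]_+$. A small numeric check (illustrative, not from the paper):

```python
import numpy as np

y = np.linspace(0.0, 1.0, 101)          # dual variable constrained to [0, 1]
g = np.linspace(-3.0, 3.0, 61)          # values of the inner function

# f*(y) = 0 on [0, 1], so the inner maximum reduces to max_y y * g
recovered = np.max(y[None, :] * g[:, None], axis=1)
hinge = np.maximum(g, 0.0)              # f(g) = [g]_+
```

For $g\leq 0$ the maximum is attained at $y=0$ (value $0$) and for $g>0$ at $y=1$ (value $g$), which is exactly the hinge; this is the mechanism that turns the compositional objective into a min-max problem when $f_i$ is convex.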
null
null
null
null
null
null
PID-Inspired Inductive Biases for Deep Reinforcement Learning in Partially Observable Control Tasks
Accept (poster)
Summary: The paper proposes two history encoders for RL in partially observable control tasks. The history encoders are designed with a PID-inspired inductive bias. Specifically, the inductive bias comes from lifting the observations into features consisting of summations and differences of observations over the historical horizon, analogous to the integral and derivative of the tracking error in PID control. It was shown that the proposed history encoders, especially the generalized PID encoder, could achieve better performance and robustness than GRUs and transformers in tracking and locomotion tasks when integrated with RL algorithms. Strengths: I particularly appreciate the new perspective presented by the authors in combining conventional control techniques with reinforcement learning. The idea of incorporating the PID architecture into history encoders is interesting and novel to the best of my knowledge. The advantages of the generalized PID encoders over GRUs and transformers, especially in the locomotion tasks, validate that the PID-inspired inductive bias is indeed effective in some control tasks. This paper could inspire the RL community to discover more powerful and generalizable architectures with insights from control theory and conventional control techniques. Weaknesses: 1. In lines 102-104, the authors stated: > > In the case of MIMO tracking problems, where there are M signals with M corresponding actuators, one can control the system with M separate PID controllers. However, this assumes there is a clear breakdown of which actuator influences which signal. > Decomposing a MIMO system into multiple SISO systems is indeed a practical way to synthesize MIMO PID controllers. However, there exist methods to directly synthesize PID controllers for MIMO systems without decomposition (e.g., [1]).
While I am okay with the authors only comparing the proposed methods with a decomposed PID controller, the authors should accurately characterize the capabilities of PID controllers based on the state-of-the-art literature. 2. The related work section should include a review of the literature on the combination of conventional control techniques with RL. For instance, there have been extensive efforts in tuning PID or learning an adaptive PID weight schedule with RL, which could be considered a special case of the control policy with the proposed PID encoder. I understand the paper is mainly targeting audiences from the RL community, but I think it is necessary to place the proposed method in the control literature as well. 3. While I am generally impressed by the experimental results, I am concerned about the failure of SAC-GPIDE in the HalfCheetah-V environment. It makes me wonder whether the PID-based inductive bias could limit the expressiveness of the policy network in some control tasks. For instance, I would doubt if the proposed method could still work when image-based observations are used. The authors do not provide any explanation or hypothesis for the failure in HalfCheetah-V. I would suggest providing more information and insights regarding it during the rebuttal and in the updated version of the paper. [1] Boyd, Stephen, Martin Hast, and Karl Johan Åström. "MIMO PID tuning via iterated LMI restriction." International Journal of Robust and Nonlinear Control 26, no. 8 (2016): 1718-1731. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Please see the three issues listed in the Weaknesses section. I would have a more positive impression of the paper if the authors could properly address these three issues during the rebuttal. 2. In addition, I want to ask the authors to clarify the role of attention heads in GPIDE.
In the ablation study, the authors stated: “It appears that the attention scheme for HalfCheetah is simply a poor reproduction of exponential smoothing, making it redundant and suboptimal. In fact, we found this phenomenon to be true across all attention heads and PyBullet tasks. We believe that the periodicity that appears here is due to the oscillatory nature of the problem and lack of positional encoding (although we found including positional encoding degrades performance).” If that is the case, does it mean that the attention heads could simply be replaced by exponential smoothing heads? This seems to be supported by the ablation study results (i.e., ES+SUM has similar or better performance than GPIDE with multiple types of heads). In lines 227-230, the authors said: “This choice was not optimized but rather was picked so that all types of heads were included and so that GPIDE has roughly the same amount of parameters as our GRU baseline.” I don't think this is a good justification for the experimental design. The authors should attempt to find an optimized configuration so that the audience can get a better sense of the necessity of each proposed head, especially the attention head. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors acknowledged the limitation of the proposed methods, which is that the proposed PID-inspired inductive bias might not apply to all control tasks. While I personally doubt whether the proposed method can ever be extended to systems with image-based observations, I do not think the authors are obligated to justify the feasibility of extension to visual control tasks in this paper.
However, as suggested above, I do think the authors should provide more information and insights on the failure of GPIDE in Cheetah-V. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your thoughtful review! We are glad to hear that you were generally impressed with the experimental results. For the failure of HalfCheetah-V, we agree that the current iteration of the paper could benefit from more information. Please see the global response for what we plan to add. We also agree we need to do a better job at appropriately framing the capacity of PID based on state-of-the-art literature. On top of this, while our paper is about how PID can help RL, we agree that we do need to reference literature on how RL can help PID to give readers the full picture. In addition to the works recommended by reviewer EZtX, we also plan on referencing the works shown at the end of this response in our discussion. For your second question, although there are a few cases where attention may be helpful, generally it seems that the best performing variant of GPIDE does not have attention. We thought the version of the paper that would be most coherent to readers is to first run experiments with a “complete” GPIDE. That is, a version of the method that, although possibly suboptimal, uses all of the heads discussed in the Methodology section. Through the ablation section we hoped to give readers insights into the importance of each of the proposed heads and hone in on a more optimal version of GPIDE. If you feel that we have adequately addressed your concerns, we would respectfully ask you to consider raising your score. Thank you! Lawrence, N. P., Stewart, G. E., Loewen, P. D., Forbes, M. G., Backstrom, J. U., & Gopaluni, R. B. (2020). Reinforcement learning based design of linear fixed structure controllers. IFAC-PapersOnLine, 53(2), 230-235. Guan, Z., & Yamamoto, T. (2021). Design of a Reinforcement Learning PID controller. IEEJ Transactions on Electrical and Electronic Engineering, 16(10), 1354-1360. Jesawada, H., Yerudkar, A., Del Vecchio, C., & Singh, N. (2022). 
A Model-Based Reinforcement Learning Approach for PID Design. arXiv preprint arXiv:2206.03567. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the response! After reading the response and other reviewers' comments, I would like to keep my current score.
Summary: The paper proposes a new, simplified architecture for reinforcement learning in the partially observable setting inspired by PID control. Through experiments on a number of tracking problems, and some locomotion environments, strong performance is obtained. Strengths: - The exposition of the argument is very clear, even for a reader somewhat unfamiliar with some aspects. - Simplification of complex (and computationally expensive) recurrent architectures is a worthwhile endeavor. - Ablations are informative, and come with demonstrations of issues (Figure 4). Weaknesses: - The largest weakness is in the evaluation. Most of the evaluation is devoted to low-dimensional tracking problems. The choice of state-based locomotion with only positions or velocities observable also seems a little artificial when there are many tasks that have existing sources of partial observability (long-horizon, first-person viewpoints). - One possibly beneficial use of an alternative state summary like GPIDE would be computational savings, but there isn't any measurement of that in the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the final weakness, it would be nice to see computational cost measured. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your review! For your note on computational savings, please refer to the global response. While it is true that the majority of the environment variants (10 out of the 18) are lower-dimensional tracking problems (either 3 or 6 dimensions), we do not believe that they are any less valuable than the high-dimensional ones. It would be one thing if all baselines performed optimally on the lower-dimensional experiments, but we find that is not the case and that pre-existing (and widely used) methods can be very suboptimal. How can we expect a method to perform optimally on high-dimensional tasks if it cannot on basic low-dimensional ones? Lastly, while we agree there are many different types of partial observability that occur in real-world tasks, partial observability of positions or velocities is a common benchmark in the literature (Han et al. 2019, Meng et al. 2021, Yang and Nguyen 2021, Ni et al. 2022). --- Rebuttal Comment 1.1: Comment: Thanks for your response, a couple of specific things below - **Computational savings**: I had a look at Appendix E, and the details are only for training. While it's helpful, it would also be nice to get a sense of the inference costs for using GPIDE instead of baselines. For example, does it lead to a qualitative difference that lets some tasks run in real time? This would be a significant win over baselines and I think is worth exposing. - **Low vs. high dimensional environments**: I'm not totally convinced by this line of argument. I believe the goal is to do research on interesting tasks. The most obvious existing case of partial observability to me is something like a first-person viewpoint. I would also take strategic games that have fog of war (Stratego, Starcraft, etc.) as other good examples. Another great example is Minesweeper.
While some of these domains are very hard for existing algorithms, others seem to be quite natural and have been used by many papers in different subfields (first-person navigation in particular). Such domains seem to be much closer to the eventual deployment scenario of the paper's method than masked tracking problems, though I would be convinced by a good argument as to the deployment uses of tracking. I do acknowledge that I am not familiar with the literature, and using standard domains is a good idea. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! Your point about test-time execution is a good one that we had not considered during the writing of the paper, and we now realize that Appendix E does not fully shed light on this. As noted in our global response, we speculate that GPIDE without attention should be faster than a policy that uses a GRU or LSTM; however, we cannot say for certain right now how much faster. We will work on creating a more efficient version of our code that assumes no attention is used to test this. We agree with you that research should be grounded in real, interesting applications. While the “fog-of-war” type of partial observability is interesting, we want to emphasize that there are many other types of partial observability worth studying, especially in robotic applications. For example, partial observability can come in the form of unknown system parameters (which are present in all of our tracking experiments) or unmeasured signals. While masking positions and velocities is one form of the latter that appears frequently in the literature, our tokamak control experiment has realistic assumptions on which states of the plasma can be measured in real time. Tokamak control using RL has gained attention from the community lately (e.g., see “Magnetic control of tokamak plasmas through deep reinforcement learning” by Degrave et al.,
which also considers a tracking problem), and this experiment highlights the partial observability challenges that this application faces.
Summary: This paper considers history encoding for deep RL POMDP control problems through a PID-inspired lens. Specifically, the authors introduce PIDE, a method of directly using PID control which extends to multiple-input multiple-output problems, and GPIDE, a PID-inspired encoder architecture. GPIDE consists of a series of heads that take as input observations and past history, and internally compute differences between observations (similar to the PID D term) and aggregation/summation over all past timesteps (similar to the PID I term). The method is then evaluated on tasks including mass-spring-damper, navigation, tokamak control, and PyBullet locomotion. Across these experiments, the method is compared against baselines of direct PID control, GRU, transformer, and several RL algorithms. GPIDE is found to have strong performance across both simple and complex tasks and to be robust in domain transfer scenarios. Further ablation studies provide insight into design choices for GPIDE heads, visualize attention schemes, and evaluate a GRU+VIB baseline. Strengths: #### Originality The ideas of the paper are generally original, building from prior deep RL work and focusing on the design of the feature encoder, especially designing a novel history encoding architecture which draws from PID controllers. The pieces of the architecture draw from general temporal aggregation principles and transformers. #### Quality The paper is generally high-quality. The experiments compare against relevant baselines and prior work, are run with multiple seeds, and the training / setup / tasks are clearly defined. The code is provided and looks cleanly written from initial observation, though I haven’t run it to verify. The ablation studies answer my initial questions about the architecture design, especially in how important the attention heads are.
#### Clarity The manuscript is well written and clearly conveys the derivation of the method from PID controllers and explains the architecture and design choices in a very understandable way. Relevant figures are provided for context of the architecture and tasks. The experiments are well motivated and help understand the performance of PIDE / GPIDE across tasks, RL methods, and transfer scenarios. #### Significance GPIDE seems broadly applicable across deep RL control tasks, especially those related to tracking and navigation / locomotion as demonstrated by experiments. The method is an approach to handling the POMDP problem on control problems as a drop-in architecture for history encoding. Weaknesses: #### Experiments Compared to RNNs such as GRU which consider only the current timestep and state to predict the next action, GPIDE seems to consider all past timesteps from 1 to t within each step, as part of the weighted summation resembling the PID I step, which may be more expensive than other methods if the time horizon / observations are large. It may be worthwhile to consider potential inference-time compute differences because of this, and also to consider ablations or baselines which also have such history context. #### Significance On lines 116-117, the authors mention that LSTMs, GRUs, and transformers were shown to be powerful tools for NLP because of complex relationships within the task. Because control tasks may not have such complexities, such methods may overfit rather than help, especially in cases of domain transfer. However, these methods have also been applied successfully in domains including robotics and visual encoding (e.g. https://arxiv.org/abs/2212.06817, https://arxiv.org/abs/2010.11929, https://arxiv.org/abs/2106.01345) - perhaps this indicates that the best targets for using GPIDE would be cases where highly-overparameterized / “large” models may overfit, which would be towards the simpler side.
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: #### Limitations The authors mention that control tasks which require memory may not be suitable - it would be further interesting to consider from the perspective of task similarity to that of PID control. Do the authors think that there would be correlation between similarity to PID / tracking tasks and successful application of GPIDE? For instance, a common image-based task would be visual grasping which involves reasoning about object geometry, where it may be less clear that results would directly transfer. Furthermore, some tasks may be less partially observable in nature than others - given this method focuses on a history encoder, would the relevance of history generally for certain tasks be relevant to consider? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The limitations are generally adequate (there are some questions in the section above). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
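For concreteness, the P/I/D-style feature construction that the reviews above summarize (differences of observations resembling the D term, summation over the history resembling the I term) can be sketched as follows. This is an illustrative sketch under our own assumptions, not the paper's implementation; the function name and feature layout are hypothetical.

```python
import numpy as np

def pid_features(history):
    """PID-inspired lifting of a history of observations (illustrative only).

    history: (t, obs_dim) array of observations o_1..o_t.
    Returns a 3 * obs_dim feature vector for the policy:
    current observation (P-like), running sum (I-like), last difference (D-like).
    """
    p = history[-1]                                   # P-like: current observation
    i = history.sum(axis=0)                           # I-like: sum over the horizon
    d = (history[-1] - history[-2]                    # D-like: difference of the
         if len(history) > 1 else np.zeros_like(p))   # two most recent observations
    return np.concatenate([p, i, d])

# Tiny worked example: 4 timesteps of 3-dimensional observations.
hist = np.arange(12, dtype=float).reshape(4, 3)
feats = pid_features(hist)
assert feats.shape == (9,)
assert np.allclose(feats[:3], [9., 10., 11.])    # P: last observation
assert np.allclose(feats[3:6], [18., 22., 26.])  # I: column sums
assert np.allclose(feats[6:], [3., 3., 3.])      # D: last difference
```

In the paper's generalized encoder these fixed sum/difference operations are replaced by learnable projections and smoothing/attention heads; the sketch above only shows the inductive bias being discussed.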
Rebuttal 1: Rebuttal: Thank you for your time and your thorough review! We were happy to see that you thought the paper was high-quality and that the method seems broadly applicable across RL tasks. We have touched on many of your points in the weakness section in the global response (particularly computational expense, ablation of history context, and extension to image-based tasks). You mentioned that perhaps the best targets for using GPIDE are cases in which the tasks or dynamics are relatively simple. This may very well be the case; however, we would like to point out that the papers you linked use a supervised loss function during training. Besides a few works we mention in the paper, it seems that most successes applying transformers to control use supervised losses. We found that training transformers in an online RL setting where there are bootstrapped target values is far more challenging. As such, it is not clear to us exactly when a transformer should be used over GPIDE for online RL.
Summary: This paper proposes a new way to encode features in partially observable environments using PID. Experiments show superior results over several domains compared with recurrent and transformer encoders. Strengths: The paper is easy to read overall. Experiments show promising performance on multiple domains. Weaknesses: Some assumptions and technical details need to be clarified. 1) Using the difference between observations as the derivative of the error (the D term) assumes that the reference value for the observation is the same across timesteps, which is not true in general. 2) The authors mentioned in Line 102 that using M separate PID controllers to control M signals has some clear shortcomings. This paper models the MIMO problem in a centralized manner, inputting all inputs and outputting a 3M-dimensional vector. However, this would not be feasible if there is a large number of signals that need to be controlled. 3) It is said that $f_{\theta}^h$ is a learnable linear projection, but how it is learned is not discussed; the same applies to $g_{\theta}$. 4) How is the suitable length of the history determined, i.e., the length $1:t$ in $v_{1:t}^{h}$? More ablations should be provided. 1) The ablations investigate the influence of different types of heads, but why these three settings (ES, ES+Sum, Attention) were chosen is not explained. What about ES+Attention? Why five ES heads + 1 Sum head? Are there any insights in choosing the number and combination of different heads? 2) The influence of different lengths of history should also be investigated. More limitations should be discussed. For example, the encoder processing time is highly dependent on the length of the history. Some related works are not discussed in this paper. For example, [1-3].
[1] An adaptive deep reinforcement learning approach for MIMO PID control of mobile robots [2] Self-Tuning Two Degree-of-Freedom Proportional–Integral Control System Based on Reinforcement Learning for a Multiple-Input Multiple-Output Industrial Process That Suffers from Spatial Input Coupling [3] Robust Adaptive PID Control based on Reinforcement Learning for MIMO Nonlinear Six-joint Manipulator Technical Quality: 3 good Clarity: 3 good Questions for Authors: Questions, Please see the pros and cons part about the assumptions, technical details, ablations, and limitations. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please see the pros and cons part about the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your review! We will respond to each of your weaknesses in a corresponding list. Starting with the list of assumptions and technical details: 1. For your first point, you are correct that the reference value need not be the same throughout time. However, GPIDE can still recover the correct D term. To demonstrate, consider an $M=1$ dimensional tracking problem. As per line 89, $o_t = (x_t, \sigma_t)$. Then, $o_t - o_{t - 1} = (x_t - x_{t-1}, \sigma_t - \sigma_{t - 1})$. If we let our $f$ be a projection that simply subtracts the second coordinate from the first, we get $f(o_t - o_{t - 1}) = (x_t - \sigma_t) - (x_{t - 1} - \sigma_{t - 1})$. Thus, we have recovered the correct D term up to a constant. 2. While we agree that learning a policy becomes difficult as the dimensionality $M$ grows, we assert that all approaches will struggle as $M$ becomes large, especially since policies need to remember a history of observations to perform well in the POMDP setting. For example, a GRU-based approach would learn a $D$-dimensional encoding of the history, which effectively makes the policy operate over an $(M + D)$-dimensional input. We found that $D$ needs to be relatively large for good performance (64 for tracking experiments and 256 for locomotion experiments), and it would likely need to grow as $M$ grows. Moreover, the mapping from the history to the $D$-dimensional encoding is often learned simultaneously with the policy. Adding up all these factors, we assert that learning a policy that takes in a fixed, $3M$-dimensional input is actually a less daunting task than using a GRU or LSTM. 3. $f$ and $g$ are simply fully connected layers with no activation functions. Everything is trained end-to-end using the Adam optimizer. We will add a mention of this in the final paper. 4. For the lookback $t$, we simply use the full episode for tracking problems and the same setting as was used by the baseline methods in Ni et al.
In practice $t$ is a hyperparameter; however, this is not unique to our method. Although recurrent networks can encode arbitrary history lengths at test time, one must often select how far to look back during training the recurrent network. Please refer to the global response for more discussion on this. For your comment on the ablations: 1. We chose the ablations we did because we wanted to investigate two questions: how important are heads that accumulate information (hence ES vs ES + Sum)? and do we even need ES if attention has the capacity to represent it already (hence ES vs Attention)? This reasoning was not presented well in the paper, and our final version will flesh this out. While the results of ES + Attention would be interesting to see, we did not feel the need to devote computational resources to this configuration since it would not answer either of our two questions. As to why we picked 5 ES heads and 1 summation head, ES can capture several different time scales depending on $\alpha$ whereas summation can only capture one. Therefore, it seemed logical to only have one summation head. 2. Please refer to the global comment for time horizon ablations. Thank you for bringing these related works to our attention. We will cite them in our final paper. While they cover how RL can improve PID rather than how PID can improve RL (like our paper), we still think it is important related work to discuss. If you feel that we have adequately addressed your concerns, we would respectfully ask you to consider raising your score. Thank you!
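The D-term recovery argument in the rebuttal above can be checked numerically. The sketch below is ours, not the authors' code, and the projection `f` is hand-picked rather than learned: with observations $o_t = (x_t, \sigma_t)$ and a time-varying reference $\sigma_t$, a fixed linear projection of observation differences equals the discrete derivative of the tracking error $e_t = x_t - \sigma_t$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10)        # tracked signal x_t
sigma = rng.normal(size=10)    # reference sigma_t, NOT constant over time
o = np.stack([x, sigma], axis=1)   # observations o_t = (x_t, sigma_t)

# f(u) = u[0] - u[1]: the linear projection subtracting the second coordinate.
f = np.array([1.0, -1.0])

d_from_encoder = (o[1:] - o[:-1]) @ f   # f(o_t - o_{t-1})
err = x - sigma                          # tracking error e_t
d_term = err[1:] - err[:-1]              # discrete D term: e_t - e_{t-1}

assert np.allclose(d_from_encoder, d_term)
```

In GPIDE the projection is learned end-to-end rather than fixed, but the check confirms that the required D term lies in the span of linear projections of observation differences even when the reference varies.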
Rebuttal 1: Rebuttal: Thank you to all of the reviewers for taking the time to read our paper and give thoughtful reviews. We value your feedback and hope to use it to strengthen our work. In this global comment we will address some points that were raised in multiple reviews. ### Performance of HalfCheetah-V Some reviewers pointed out that GPIDE achieves substantially worse performance on HalfCheetah-V and were concerned this may indicate the GPIDE architecture does not have enough flexibility to handle higher dimensional tasks. However, we assert this is not due to the GPIDE architecture but to the attention heads. Looking at Table 28 in the Appendix, we see that removing attention heads more than doubles the average performance ($20.39 \pm 29.60$ to $53.14 \pm 5.86$), making it competitive with TD3-GRU, the best-performing method ($59.03 \pm 2.88$). Moreover, GPIDE-Attention is one of the worst performers, showing that attention heads result in particularly poor performance in this environment. We agree that the current iteration of the paper is lacking this explanation, and we will update our paper with these details. ### Computational Cost of GPIDE Many reviewers inquired about the computational cost of GPIDE. This information can be found in Appendix E, and the final version of the paper will do a better job of referencing this in the main body. Using attention is slower since the current query must be compared with all previous keys in order to form the encoding. We see in practice this results in a roughly 20% slowdown on the tracking problems. However, we would like to emphasize that if no attention heads are used (which ablations suggest is optimal), $w_{t-1}$ can be cached at every time step and only $v_t$ need be computed to calculate $w_t$. Since $v_t$ is the result of a linear projection, this would likely be faster than using a GRU or LSTM.
Unfortunately, our implementation of GPIDE-ES and GPIDE-ESS does not take this shortcut into consideration since our code was created with flexibility of head type in mind. ### Impact of Different Lookback Lengths Multiple reviewers wished to see the impact of lookback length on policy performance. We were able to run preliminary experiments with varying lookback on the HalfCheetah environments for GPIDE (see the one-page pdf), and will run similar experiments for all locomotion environments and variations of GPIDE found in the ablations for the full paper. As a reminder, we picked a lookback of 64 since this is the sequence length that competitor algorithms trained on; however, we agree that investigating the lookback is important to understanding GPIDE. For HalfCheetah-P, it is clear that a greater amount of lookback corresponds to higher returns. In fact, it seems that we are able to get even stronger performance by increasing the history from 64 to 128. For HalfCheetah-V, there is a clear increase in performance going from 4 to 16 lookback; however, performance drops off after this and crashes for a lookback of 128. As stated in the part of this response addressing HalfCheetah-V, we believe attention heads are the cause of this, and it seems that there are instabilities that occur as lookback grows. We do not expect to see these same instabilities when we run the same experiments for GPIDE-ES and GPIDE-ESS. As initial evidence for this claim, we also have preliminary data for GPIDE-ESS with a lookback of 128. Although these jobs are still running, it is clear that the stability problems have been eliminated, and it seems the final performance will be greater than that of any of the other GPIDE variants. ### Extension to Image-Based Tasks Many reviewers also mentioned the extension to image-based tasks.
While we are optimistic about the application of GPIDE on such tasks given the right image encoding, working with images comes with its own set of challenges which we leave for future work. Pdf: /pdf/1b557f899165fa69d14d99726f429a3ca3c881f4.pdf
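The caching argument for attention-free exponential smoothing (ES) heads in the global response above can be sketched as follows. This is a hedged illustration with hypothetical names and shapes, not the released implementation: maintaining the encoding $w_t$ incrementally in O(1) per step via $w_t = (1-\alpha)\,w_{t-1} + \alpha\,v_t$ matches a full recomputation over the whole history.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, T, obs_dim, feat_dim = 0.3, 50, 4, 8
W = rng.normal(size=(obs_dim, feat_dim))   # hypothetical linear projection
obs = rng.normal(size=(T, obs_dim))        # a history of T observations

# Incremental (cached) computation: per step, only project the newest
# observation and blend it into the cached encoding w_{t-1}.
w = np.zeros(feat_dim)
for o_t in obs:
    v_t = o_t @ W
    w = (1 - alpha) * w + alpha * v_t

# Full recomputation at time T: exponentially decaying weights over v_1..v_T.
V = obs @ W
weights = alpha * (1 - alpha) ** np.arange(T - 1, -1, -1)
w_full = weights @ V

assert np.allclose(w, w_full)
```

This is why an attention-free variant could plausibly run faster at inference than a GRU or LSTM: each step costs one matrix-vector product and a weighted average, with no recurrence through nonlinear gates and no comparison against all past keys.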
NeurIPS 2023
Summary: This paper studies the problem of learning from histories in partially observable MDPs, where a key question is how to design an architecture that is general enough to work on a large set of problems, yet specific enough to be sample efficient. Inspired by the success of PID in the more classical control literature, the idea here is to design an architecture that is solely comprised of summation, attention, and linear operations. Experiments show that on tracking problems, which PID was originally proposed for, as well as more complicated control problems, this simple architecture is good enough to achieve competitive results. Strengths: The paper shows that using a complicated architecture is overkill for many of the domains that the RL community considers as benchmarks. On this note I'd like to mention that, while the contexts are somewhat different, a similar conclusion can be drawn from "Towards Generalization and Simplicity in Continuous Control" by Rajeswaran et al., where they show that simpler architectures such as linear models are equipped to deal with MuJoCo tasks. The difference is that in their paper they were considering reward maximization in the presence of the full state, whereas here the goal is to learn a memory-like function in conjunction with performing well in the given task. In any case, I generally like results that show simple stuff works, because simple things are easy to understand, implement, and execute. Weaknesses: Overall I like this paper but I also have some concerns: Fundamentally, I feel like the results in this paper are pertinent to a bigger question, namely how rich we want our hypothesis class to be for learning a certain target.
The solution adopted by the ML community nowadays seems to be: we really do not know how complicated a task is, so by choosing a very complicated architecture and relying on its universal function approximation capability, we can always in principle find the right fit. Moreover, the double descent phenomenon ("Deep double descent: where bigger models and more data hurt", Nakkiran et al.) tells us that empirically we are very likely to find a good fit with larger and more complicated models and that, despite more classical belief, such an overparameterized fit happens to also be quite robust ("A Universal Law of Robustness via Isoperimetry", Bubeck and Sellke). Now, moving to the RL setting, these insights from supervised learning do not really carry over, because we usually learn by bootstrapping off of Bellman targets, and by increasing the complexity of our function approximator, we also have to learn from increasingly more complicated targets (unlike supervised learning, where targets are fixed). So I think, at least with our current RL algorithms, unlike SL we always need to choose our function class carefully, and so adding inductive bias, such as the kind provided in this paper, can usually be helpful. My worry is that adding such biases is helpful only in a limited sense: it is not clear how much inductive bias I should add for a given task when I don't know too much about that task. Sure, I can always choose environments and problem settings where I know that my inductive bias is conducive to solving the problem, but do we really want to place the burden back on ML practitioners to carefully think about the kind of inductive biases that are useful for their task?
Grounding my concerns more in the experimental results provided here: for example, we see that the inductive bias provided is already giving small (or negative) gains as we move to some of the more complicated domains such as HalfCheetah and Ant (based on Figures 18 and 19 in the Appendix). So I ask the authors: are you not worried that as we move to much larger domains, it becomes increasingly more difficult to show performance gains for GPIDE compared to other approaches? In other words, does the inductive bias scale reasonably with problem complexity, or is there only so much it can be useful for? To further play devil's advocate, to say that GPIDE is capable of beating the transformer architecture is sort of a strange claim, because, as the authors say, GPIDE can be thought of as a special case of transformers, and further, as they show, the attention layer is often not very useful. This tells me that with proper hyperparameter tuning and regularization, the transformer architecture should in principle always be able to do at least as well as GPIDE, and we can still hope to find the transformer useful as we move to larger problems. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I would ideally like to see an acknowledgment that as we move to much more difficult domains, GPIDE may be incapable of doing well simply because the actual function to be represented is not in the function class. But such an example is currently absent. This may give a false sense that the paper is claiming that GPIDE is a general-purpose memory learner; that is not true, and I am not saying that the authors claim it is true, but having examples of failure cases would really clarify this point. If I may say, I would also like to see experiments on different types of problems. For example, can we expect GPIDE to perform reasonably on image-based domains?
Like the paper mentions, we can hope to apply GPIDE on the embedding space learned by some more capable function approximator, but it seems to me that to learn good embeddings that are useful for downstream memory learning, one needs to leverage some kind of GRU/Transformer/LSTM architecture, and so it is not immediately clear to me how to leverage GPIDE in this setting. My other question is that, as we know, RNNs struggle to learn important events from the distant past because of vanishing/exploding gradients. LSTMs and Transformers are less prone to this issue, but I wonder if we could better compare this capability of GPIDE against the existing architectures. In the HalfCheetah experiment, for example, do we have a good sense of how far back one needs to look to be able to accurately learn the position and/or the velocity signal? Also, the intro reads: "Another hurdle stems from the fact that policies are often trained in an imperfect simulator, which is likely different from the true environment. Combining these two challenges necessitates striking a balance between extracting useful information from the history and avoiding overfitting to modelling error." My thought reading this sentence was that we would see either actual sim2real experiments or some kind of transfer-learning benchmark, where we learn the memory function in one setting and test it in another. This would have been more convincing for the claim that GPIDE is not prone to overfitting. Did I get it right that this experiment is absent? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your thoughtful review! We greatly appreciate the feedback, and we are glad that you liked the paper. * Your comments highlight a fascinating problem, which is the tradeoff between the benefits of high-capacity modeling and the benefits of useful inductive biases. As you point out, the community is getting a better handle on these tradeoffs for supervised learning, but for RL many of these questions are fairly open. We agree that our Limitations section should better acknowledge GPIDE's reduced capacity being a potential problem for high-dimensional or complex environments, and we will expand our discussion in the final version of the paper. While a quick look at the margin of victory might suggest we do better on lower-dimensional environments, we note that GPIDE consistently improves upon or matches the performance of the best competitor (see the global response for more information on HalfCheetah-V). It is important to note that this best competitor is not always the same. In particular, while SAC-LSTM is the best performing competitor by a substantial margin on Ant-P, it is also the worst performing competitor on Ant-V. We believe this robustness across environments is encouraging for many other problems, including larger and more complex ones. Regarding the possibility of transformers always doing better than GPIDE, it is not clear to us that this is true even with the right hyper-parameters and regularization. We view this as analogous to CNNs vs. fully connected networks (FCNs) on image tasks. While FCNs are a more expressive class, they are rarely as performant as their CNN counterparts, which utilize an intelligent inductive bias. As you already pointed out, there are still many open questions around RL and whether these same phenomena apply. * We have done an additional experiment on the amount of lookback for HalfCheetah. Please refer to the global response. 
* We do have substantial experiments on sim2real and the generalization capabilities of PIDE and GPIDE. For the MSD and DMSD environments, we train the policy on a reduced set of system parameters and test on a larger set. This is the same type of experimental setup used in “Assessing generalization in deep reinforcement learning” (Packer et al 2018). We expand on this in the navigation experiments by having the “sim” not model friction whereas the “real” environment has friction. Perhaps most realistic, in our fusion experiment we create a simulator using first-principle equations. For the “real” environment we use a data-driven dynamics model trained using data from the actual device. In all of these experiments, PIDE and GPIDE showed an impressive level of generalization, especially when compared to using a GRU. --- Rebuttal Comment 1.1: Comment: Awesome, thanks for the rebuttal. As alluded to by a few other reviewers (64AN RWrF), we still have a lingering question about scaling up GPIDE to larger domains, or domains with different inputs such as text or image. That said, I am really encouraged by your acknowledging that in the limitation section a more nuanced discussion on the potential limitation of this inductive bias would be highly appropriate. This would hedge against potentially over-claiming the result. This seems to be the kind of inductive bias that can a) work surprisingly well in lower dimensional settings and b) provide enough insights for follow up works to propose more scalable inductive biases. So overall, I keep my score and remain supportive of this paper getting accepted. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and for your support for our paper's acceptance! We sincerely value your feedback.
Adversarial Examples Exist in Two-Layer ReLU Networks for Low Dimensional Linear Subspaces
Accept (poster)
Summary: The paper investigates the adversarial robustness of trained ReLU neural networks when the data lies within a linear subspace. In the theoretical part, which is the bulk of the paper, the networks have two layers and only the first layer is trained. The key observation is that the assumption on the data causes training to change only the projections of the first-layer weight vectors onto the linear subspace, i.e., their components in the orthogonal subspace remain as at initialisation. Based on this, the authors prove several results, including a lower bound on the projection onto the orthogonal subspace of the gradient of the network output at any point in the linear subspace, the existence of a direction in the orthogonal subspace for universal adversarial perturbations, and that either reducing the initialisation scale or using L2 regularisation can reduce the norm of the projections onto the orthogonal subspace of the gradients (in the latter case, the projections of the first-layer weight vectors onto the orthogonal subspace are changed during training only by the L2 regulariser). The results contain some assumptions; most notably, for the universal adversarial perturbation, the width of the network has to be roughly at most a constant fraction of the input dimension. The theory is supplemented by experiments on small examples in dimensions 2 and 3; in them all layers are trained and a network with five layers is also considered, which suggests that it may be possible to extend the theoretical results beyond the assumptions in the paper. Strengths: The paper is a nice, mostly theoretical follow-up to the mostly empirical "The Dimpled Manifold Model of Adversarial Examples in Machine Learning" by Shamir, Melamed, and BenShmuel (CoRR abs/2106.10151), which in particular brings in considerations of the effects of initialisation scale and L2 regularisation. The theorems are proved in detail in the appendix, with helpful sketches provided in the main text. 
The experiments and the resulting figures aid the intuition and suggest directions for future theoretical work. Weaknesses: The proofs of the theoretical results are perhaps not surprising or difficult. I believe the code for the experiments is relatively simple and essentially given by the text in Appendix E; however, it was not submitted with the supplementary materials. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Are you able to say how the theoretical results would change if the second-layer weights were trained as well? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: The assumption of small network width in relation to the input dimension is restrictive, although a similar assumption featured in the related paper by Daniely and Shacham (NeurIPS 2020). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and constructive comments. Code for the experiments: We indeed report all the experimental details in Appendix E. We will publish the full code with the camera-ready version. "Are you able to say how the theoretical results would change if the second-layer weights were trained as well?": This is a good question and an interesting future research direction. We believe it is possible to show a similar result when the second layer weights are also trained. However, such a result would require stronger assumptions on the input data, beyond residing on a low dimensional manifold. Therefore we tried to avoid it in our paper and keep the conclusions as general as possible. Empirically, we trained all layers and showed similar results in the experiments attached here and in the paper. --- Rebuttal Comment 1.1: Comment: Thank you for these responses.
Summary: This paper studies the vulnerability of two-layer ReLU networks to adversarial attacks when the data lies in a low-dimensional manifold. The paper also observes that adding L2 regularization during clean training improves adversarial robustness. Strengths: The intuition of this paper is interesting, and the paper is of high quality and good clarity. Although existing literature suggests that the vulnerability of clean training is caused by off-manifold attacks, this is the first work to study the theoretical perspective in detail. Weaknesses: (1) My major concern is that this paper mainly studies the side off the data manifold, but not the side on the data manifold. Some claims need more justification. For example, in the abstract the authors mention that "the standard gradient methods lead to non-robust neural networks, .. and are susceptible to small adversarial L2 perturbations in these directions". To claim that the perturbation is "small", the authors need to provide a result on the smallest on-manifold attack strength that results in an incorrect prediction. A comparison is essential in order to claim a quantity is "small"; i.e., while Theorem 4.1 presents the L2 norm of the off-manifold gradient, what is the on-manifold gradient? (2) For the numerical experiments, please also share the code or report the observations when the number of training iterations T is large enough that the loss converges to zero. In the paper Ba, J., Erdogdu, M., Suzuki, T., Wu, D., & Zhang, T. (2020). Generalization of two-layer neural networks: An asymptotic viewpoint. In International Conference on Learning Representations, it is observed that training a neural network for regression in their setting is the same as penalized linear regression, i.e., the training already introduces some sort of regularization. 
It would be great if the authors of this submission could provide the code or some more experimental results to support the claims in this submission. Due to concerns (1) and (2), my current score is 5. I like the intuition of this paper, so I will consider raising my score if these major concerns are addressed properly. For (1), a full theoretical proof may take too much time; a brief justification would be helpful enough. Some other comments: (3) While Theorem 6.2 provides justification for how L2 regularization improves adversarial robustness, it would be helpful if the authors could also provide the convergence and generalization performance of the regularized neural network, which may imply a trade-off between clean performance and adversarial robustness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address my concerns mentioned in the weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and helpful comments. Weaknesses: 1) We emphasize that an analysis of perturbations on the data manifold would require additional assumptions on the data, e.g., about its distribution. We believe that one of the major benefits of our work is that the results are general, in the sense that the only assumption we make on the data is that it resides on a low-dimensional subspace. Hence, we can only analyze the gradient and the adversarial perturbations off the manifold. We think it is of great interest to study on-manifold perturbations and compare them to those off-manifold; however, this is beyond the scope of our paper. 2) We will include all the code for the experiments in the camera-ready version. The visualizations in the paper demonstrate the evolution of the decision boundary, rather than the loss of the predictor on the train/test data. Therefore we measured convergence using a minimal-margin criterion, as noted in Appendix E. We added several new experiments which demonstrate the theoretical phenomenon described in the paper, on both synthetic and real datasets. In these experiments, we trained the network until reaching a cross-entropy loss of $10^{-3}$. Full details about the experiments can be found in our global comment above, and the results are in the attached PDF. 3) We agree it would be helpful to provide the convergence and generalization performance of the trained predictors and potentially observe a trade-off between robustness and performance. However, to analyze these, we would need further assumptions on the data beyond residing on a low-dimensional manifold. This is because different data distributions (on the manifold) would mean different generalization or convergence guarantees. 
We think that a very interesting follow-up work would be to study specific on-manifold distributions and use the tools we developed in this paper to compare on- and off-manifold robustness with generalization and convergence. --- Rebuttal Comment 1.1: Title: Comment Comment: I appreciate the authors providing more experiments on real data in the PDF file. Regarding my review comments, I'm still not convinced: 1. I agree that on-manifold perturbations require extra assumptions, but **an example of such a result is necessary to support the claim that "off-manifold perturbation is small"**. (If I say the US is close to Australia, I mean compared to the distance from the Earth to the Sun, not to the distance from New York to California.) 2. Could you summarize the observations of the additional simulations in the comment? --- Reply to Comment 1.1.1: Title: Re: Comment Comment: Thank you for the reply. Below we provide three arguments for why we consider the off-manifold perturbations small: 1) As we elaborated in lines 186-191 and 249-253, if we consider data where each coordinate is of size $\Theta(1)$, then the norm of each data point is $\Theta(\sqrt{d})$ (where $d$ is the input dimension). In our results, we show that the off-manifold perturbations are much smaller; namely, they are typically on the order of $\text{polylog}(d)$. The size $\Theta(\sqrt{d})$ is a good reference point for comparison, since it is the trivial upper bound on the size of on-manifold perturbations (as we can always flip the output's sign by moving to a point with the opposite label). We note that previous works on adversarial perturbations in random neural networks also used the inputs' norms as the reference point for measuring whether a perturbation is small [1,2,3,4]. Thus, comparing the perturbation's size to the inputs' norms is already common in the relevant literature. 
2) Empirically, this can be observed in the new experiments we added (please see the attached PDF in the comment to all reviewers, Figure 2(a)). Specifically, in the synthetic experiment we considered, where the data lies on a low-dimensional sphere, the size of the off-manifold perturbation for a network trained with standard initialization ($1.0$ on the x-axis) is roughly half of the on-manifold perturbation. This indicates that without small initialization (or weight decay), in certain situations the off-manifold perturbation is smaller than the on-manifold one. 3) It is easy to construct a contrived example of a data distribution where the on-manifold perturbation is arbitrarily large compared to the off-manifold perturbation. Consider a one-dimensional data manifold, and a dataset consisting of a single data point sampled from this manifold with norm $A$. Suppose also that the network consists of a single neuron (for ease of analysis). In that case, training the network on this dataset until reaching some fixed margin will result in the neuron being a sum of two components: (1) a component in the direction of the data point; and (2) a component orthogonal to the point, with norm depending on the magnitude of the initialization. An on-manifold perturbation moves only in the direction of the point, hence flipping the label would require a perturbation of magnitude $A$ which moves to the opposite halfspace. On the other hand, the off-manifold perturbation is independent of $A$. We will be happy to elaborate more on this theoretical example if required. Regarding the observations of the additional experiments: we conducted two sets of experiments. 1) We perform PCA on MNIST and CIFAR10 and show that most of the variance lies in only a few dimensions, further motivating our assumption that the data lies on a low-dimensional manifold. 
2) On both a synthetic dataset and MNIST, we compare the off-manifold, on-manifold, and overall (i.e., both on- and off-manifold) perturbations for networks trained with different weight initializations. The experiments show that smaller initialization increases the off-manifold and overall robustness (by increasing the perturbation size required for changing a label), while having almost no effect on the on-manifold robustness. [1] Amit Daniely and Hadas Shacham. Most ReLU networks suffer from ℓ2 adversarial perturbations, 2020. [2] Sébastien Bubeck, Yeshwanth Cherapanamjeri, Gauthier Gidel, and Rémi Tachet des Combes. A single gradient step finds adversarial examples on random two-layers neural networks, 2021. [3] Peter Bartlett, Sébastien Bubeck, and Yeshwanth Cherapanamjeri. Adversarial examples in multi-layer random ReLU networks, 2021. [4] Andrea Montanari and Yuchen Wu. Adversarial examples in random neural networks with general activations, 2022.
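The subspace-constrained attacks described in point 2) above can be sketched in a few lines. The following is an illustrative numpy sketch, not the authors' code: the function name, step sizes, and the `grad_fn` interface are our own assumptions. The idea is simply that each L2 PGD step is projected onto the data subspace ("on"), onto its orthogonal complement ("off"), or left unconstrained.

```python
import numpy as np

def projected_pgd(grad_fn, x0, V, steps=40, lr=0.1, eps=1.0, mode="on"):
    """L2 PGD attack whose perturbation is constrained to a subspace.

    grad_fn : maps a point to the gradient of the attack loss at that point
    V       : (d, k) matrix with orthonormal columns spanning the data subspace
    mode    : "on"   -> perturbation kept in span(V)
              "off"  -> perturbation kept in the orthogonal complement of span(V)
              "free" -> unconstrained attack
    """
    P = V @ V.T  # orthogonal projector onto the data subspace
    if mode == "on":
        proj = lambda v: P @ v
    elif mode == "off":
        proj = lambda v: v - P @ v
    else:
        proj = lambda v: v
    delta = np.zeros_like(x0)
    for _ in range(steps):
        g = proj(grad_fn(x0 + delta))     # gradient step restricted to the subspace
        n = np.linalg.norm(g)
        if n > 0:
            delta = delta + lr * g / n    # normalized (steepest-ascent) step
        dn = np.linalg.norm(delta)
        if dn > eps:                      # project back onto the L2 ball of radius eps
            delta = delta * (eps / dn)
        delta = proj(delta)               # keep delta in the chosen subspace
    return delta
```

The minimal perturbation sizes the rebuttal reports would then come from sweeping `eps` (or measuring when the network's prediction flips) for each of the three modes.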
Summary: This paper focuses on the setting where the data lies on a low-dimensional manifold (a linear subspace $P$). There are no additional assumptions on the number of data points or their structure (e.g., orthogonality). The paper considers perturbations in the orthogonal space $P^\perp$. The paper claims that standard gradient descent leads to non-robust neural networks, while decreasing the initialization scale or adding an L2 regularizer can make the trained network more robust. Strengths: 1. Studying adversarial perturbations relative to a low-dimensional subspace is an interesting problem. 2. The paper is well written and easy to follow. The mathematical derivations seem sound. Weaknesses: 1. This paper specifically assumes the dataset lies on a linear subspace and that the perturbation lies in the direction orthogonal to the subspace. The analysis of Theorem 4.1 still relies on part of the weights being unchanged after training, and the gradient lower bound depends on the unchanged weights. To my understanding, such analysis still relies on random-initialization properties and is therefore not much different from previous work, which suggests that previous analyses might also hold under this paper's low-dimensionality assumption. To that end, the constraint that the perturbation lies in the direction orthogonal to the subspace is trying to bypass the gradient update algorithm. In other words, does it mean that Theorem 4.1 still holds even if you consider adversarial training? 2. Although the authors motivate the idea by saying real-world datasets mostly lie in low-dimensional subspaces, the experiments are based on extremely simple synthetic data. I'm wondering whether similar results hold for a simple dataset like MNIST. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In Theorem 5.1 or Corollary 5.1, the perturbation size depends on the output size. However, in classification problems, you can always rescale the predictor ($w,u$) while leaving the sign of the prediction unchanged. 
To that end, $N(x_0)$ can be arbitrarily small, implying the size of the perturbation can be arbitrarily small, regardless of the size of $k_{y_0}$ and $\ell$. Then the result is a bit counterintuitive. 2. I'm wondering whether the lower bound in Theorem 4.1 can also depend on the network initialization so that it matches Theorem 6.1. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and helpful comments. Regarding the weaknesses: 1) We acknowledge in the paper that some of our proof techniques were also used in previous works on robustness. The main difference is that in this work we consider trained networks, and our results are general, with only very minor assumptions on the data, while previous works considered random, untrained networks. While the weights corresponding to the off-data dimensions are indeed unchanged, the other weights are updated and affect those gradients via the activations. Therefore, even with stronger data constraints, results based on random networks do not apply here. Analyzing adversarial training would require additional assumptions on the input data; this goes beyond the scope of the paper and is an interesting future research direction. If we didn't understand the question correctly, we would be happy if the reviewer could clarify it. 2) To show that our results indeed extend to the MNIST dataset, we use a simple PCA decomposition that demonstrates the implicit low-dimensional linear data subspace. One can see that a small fraction of the dimensions captures most of the variance. We also added an additional experiment on the MNIST dataset demonstrating the phenomena presented in the paper. Details about the experiments can be found in our global comment above, and the results are in the attached PDF. Regarding the questions: 1) Theorem 5.1 and Corollary 5.1 indeed depend on the size of the output, but also on the scale of the initialization. Please note that scaling the output (by scaling the weights) in these results is equivalent to scaling the size of the initialization, which, as we show, affects the size of the adversarial perturbation. 2) This is a good question. 
Yes, Theorem 4.1 could have been written with the scale of the initialization as a parameter of the problem, and then the upper and lower bounds in Theorems 4.1 and 6.1 would have been tight (up to a constant factor). We wrote the theorem this way to emphasize the phenomenon under standard initializations, and will consider changing it for the camera-ready version. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing my concerns and presenting a new experiment. I have some follow-up questions. Regarding weakness 1, can the authors point me to the part of the theorem that uses the fact that "the other weights are updated and affect those gradients via activations"? I might be wrong, but after briefly skimming the proof, all I find is that, based on the assumption on the perturbation, the weights that are updated are no longer important, as demonstrated in Theorem A.2. Regarding question 1, yes, I understand the size of the perturbation depends on the scale of initialization. To some extent, Theorem 5.1 shows that if the initialization is super small, then there's no adversarial robustness at all. But if the initialization is large, it does not mean the model is more robust. Please check the correctness of this statement. The reason I made the above statement is that in Theorem 5.1 and Corollary 5.1 the size of the perturbation is given in the form of an upper bound. I believe it would be more appropriate to give a lower bound on the size of the perturbation, demonstrating that as long as the perturbation is this large, there is no robustness at all. --- Reply to Comment 1.1.1: Title: Re: Comment Comment: Thank you for the reply. Regarding weakness 1, there are indeed some similarities between our work and previous works on robustness for random networks. However, there are also some significant differences, and our results cannot be derived from these previous works. 
One important difference is that the works on random networks consider an adversarial perturbation in the direction of the gradient. In this paper (Theorem 5.1), we consider a specific perturbation which is \emph{not} in the direction of the gradient; indeed, a perturbation in the direction of the gradient would not work in our setting. The reason is that the gradient depends only on active neurons. For random networks, each neuron is active independently w.p. $1/2$, and hence it is possible to use a probabilistic argument to show that such a perturbation will work. For a trained network, the active neurons depend on the data, and could possibly be chosen "adversarially" such that a perturbation in the direction of the gradient will not change the labels of the data (because the gradient changes significantly along such a perturbation). For this reason, we needed to devise a different perturbation, and the analysis is also quite different. Regarding question 1, Theorem 5.1 shows that for standard initialization (which is quite large) there exists an adversarial perturbation of upper-bounded size. The reason we give an upper bound here is to show that the perturbation is small, meaning that the network is not robust. For small initializations, we show in Section 6 an upper bound on the gradient's norm, which indicates that the network may be robust, at least to off-manifold perturbations. Specifically, regarding the statement "To some extent, thm5.1 shows that if the initialization is super small, then there's no adversarial robustness at all", it is the opposite: Theorem 5.1 shows that standard (i.e., large) initialization means non-robustness. We also emphasize that in the new experiments we added, in Figure 2, the x-axis indicates the scale by which we divided the initialization of the network. This means that larger values correspond to smaller initialization scales, so smaller initializations result in larger minimal perturbations off-manifold.
Summary: The paper shows that for two-layer neural networks trained on data lying on a low-dimensional linear subspace, standard gradient methods lead to non-robust networks: the trained networks have large gradients in directions orthogonal to the data subspace and are susceptible to small adversarial L2 perturbations in these directions. The paper also shows that decreasing the initialization scale of the training algorithm or adding L2 regularization can make the trained network more robust to adversarial perturbations orthogonal to the data. Strengths: The paper is well written and organized, and is innovative and original. The structure of the paper is clear and rigorous. Weaknesses: The experiments are insufficient; the number of data points used to evaluate the proposed method is small. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Does the proposed method, which decreases the gradient in directions orthogonal to the data subspace, hurt the accuracy on clean data? 2. The paper claims that large gradients exist in directions orthogonal to the data subspace and that networks are susceptible to small adversarial L2 perturbations in these directions; do adversarial perturbations in other norms, such as L1 or L∞, also exist in directions orthogonal to the data subspace? 3. Is the proposed method effective in deeper networks? 4. Small typos, e.g., line 287 has an extra ')'. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The number of data points is small; it would be better to evaluate on more data points. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and constructive comments. "The experiments are insufficient, the number of data points used to evaluate the proposed method is few.": The experiments in the paper were done mostly for visualization purposes, with low dimensions and dataset sizes. We added several new experiments on larger datasets, both synthetic and MNIST/CIFAR10; please see the comment above to all the reviewers and the attached PDF. These experiments show empirically that real datasets approximately lie on a low-dimensional subspace, and that a small initialization scale indeed increases the robustness outside of the data subspace. Regarding the questions: 1) Our methods, that is, changing the initialization scale and adding $L_2$ regularization, may affect the accuracy on clean data. We note that they do not necessarily hurt it; they may as well improve the accuracy. Indeed, $L_2$ regularization is often used in practice for improving accuracy, regardless of its effect on adversarial robustness. Analyzing the effect of our methods on the accuracy requires additional assumptions on the data, since our results are general in the sense that we have almost no assumptions on the data, besides residing on a low-dimensional subspace. 2) In this work, we focused on adversarial perturbations w.r.t. the $L_2$ norm. We believe that there are also adversarial perturbations w.r.t. other norms in directions orthogonal to the data, although proving that such perturbations exist requires a different analysis, which is beyond the scope of the paper. 3) Yes, our methods are effective for deeper networks, at least empirically, as demonstrated in Figure 4 in Appendix E. In the attached PDF, we describe further experiments on a low-dimensional sphere and on the MNIST dataset using deeper networks, demonstrating the same robustness effect described in the paper. 
--- Rebuttal Comment 1.1: Comment: Thank to the authors for their response, I have no more questions.
Rebuttal 1: Rebuttal: We thank the reviewers for the thorough reviews and constructive comments. In the attached PDF we add two experiments, aiming to show empirically the effects of small initialization and that real datasets approximately lie in a low-dimensional subspace. 1) To show that real datasets approximately lie in a low-dimensional subspace, we performed PCA on MNIST and CIFAR10, and calculated the cumulative variance. For MNIST, it reached $90\%$ of the total variance by accumulating $86$ components, and $95\%$ variance by accumulating $153$ components. Similarly, CIFAR10 reaches $90\%$ of the total variance by accumulating $98$ components, and $95\%$ variance by accumulating $216$ components. Note that MNIST is a $784$-dimensional dataset, and CIFAR10 is a $3072$-dimensional dataset. This indicates that most of the information for both of these datasets indeed lies in a low-dimensional subspace. 2) We trained a $3$-layer fully-connected neural network on two datasets for different initialization scales of the first layer of the network, and used a standard projected gradient descent adversarial attack to calculate the effect of the initialization scale on the distance from the decision boundary. The datasets are: - MNIST projected on a $32$-dimensional subspace using PCA. - $500$ random samples from a $20$-dimensional sphere which lies in a $784$-dimensional space. The adversarial attack is either: (a) Projected on the data subspace; (b) Projected off the data subspace (i.e. the space orthogonal to it); or (c) Unconstrained. Each experiment was repeated $5$ times (with newly trained networks each time), and error bars are given in the plot. This experiment shows the dramatic effect of changing the initialization scale on the attacks projected off the data subspace. Also, changing the initialization scale improves robustness for the unconstrained attacks, while having almost no effect for the attacks projected on the data manifold. 
Pdf: /pdf/04cee2e7e977eeb7c5c61eb09a786056f10fd2a2.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Optimal Exploration for Model-Based RL in Nonlinear Systems
Accept (spotlight)
Summary: This paper addresses the problem of exploration when learning to control nonlinear systems. In particular, the authors derive a task-driven exploration method which focuses on learning the system parameters that are relevant for the specific task that a controller is trying to achieve. To derive the algorithm, they rely on a quantity which estimates how much the task loss varies, under a certainty-equivalence controller and the current model estimate. This quantity is the key to estimating which parameters in the model are relevant for the task at hand. It can be used by existing exploration algorithms (DynamicOED) as a minimization target. The exploration routine is presented as a second contribution which extends an algorithm originally designed for linear MDPs to nonlinear systems. Strengths: The premise of the article that all parameters of the system might not be useful to know for the task of the controller makes a lot of sense, and the problem of task-driven exploration is interesting and relevant. Overall, there is novelty as they extend a lot of existing concepts to a much broader class of systems (nonlinear) and the contribution seems significant. However, it would be quite hard to implement the proposed algorithm from reading just the paper itself. The introduction of the motivating example and associated discussion is quite useful to understand the problem space. The related work section helps to position the scope of the paper and mixes both RL and more traditional control. The experiment section shows that the proposed algorithm outperforms non-task-driven exploration methods. The set of baselines is quite limited but, as the concept is quite specific to this article, I believe they are a sound choice. Weaknesses: Although the paper is well structured and written, the main weakness is the presentation. 
Since the paper introduces a lot of mathematical concepts and assumptions, it is useful to help the reader by describing intuitively what they mean. The assumptions presented are justified by citing previous work, but some description would be useful to make the paper more self-contained. Specifically, for Assumption 3, it would be useful if the authors provided an intuitive justification. Assumption 8 lacks some general explanation. How does the choice of the underlying algorithm affect the performance? And why is the cost different from the goal $\Phi$? A lot of the concepts are derived from reference [2], which essentially solves the problem in the case of linear systems. This makes the contribution more incremental; however, the jump from linear to nonlinear is still significant. Theorems 1 and 2 introduce bounds on the exploration loss which are claimed to be novel; it would still be useful to compare with the bounds for algorithms that are not task-dependent. Is there a trade-off? Since Section 5 is presented as a potentially independent contribution, it would have been useful to add more background about the experiment design problem and clearly highlight the difference with previous work. The notation in DynamicOED is hard to follow, and the explanation helps only a little. It would be quite hard to implement the algorithm just from reading the paper. It refers a lot to the LC3 algorithm, which is not explained (this is certainly not my main expertise). The systems used in the experiments have fairly small scales (6 dimensions for the state). One could wonder whether the approach scales in the number of dimensions. A linear dependency on the system parameters is mentioned for the convergence. What about the computational cost of the algorithm? In the experimental section, it is mentioned that MPC sampling-based methods are used for exploration. How do we know that they verify Assumption 8, which is key for convergence? 
Maybe the authors could have tried linearizing the systems and applying the linear version of the algorithm that is cited. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there any physical system that would be like the motivating example? - What is D-optimal design? What do the optimization targets presented as examples at line 270 in Section 5 represent? - Why does DynamicOED return several exploration policies and not just one per step in Algorithm 1? - How is the optimal controller $\pi_*$ obtained to compute the excess controller loss? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: They assume a specific form of nonlinear system which could limit the applicability of the solution (at least the theoretical results); however, a few references are mentioned indicating that many systems fit into this category. The bounds scale polynomially in the number of system parameters; it is not commented whether this is desirable or not. Couldn’t systems have a large number of parameters? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and will work to incorporate the feedback into the final version. We address specific comments below. > *“However, it would be quite hard to implement the proposed algorithm...” “The notation in dynamicOED is hard to follow, the explanation helps only a little...”* While the algorithm description in the main body is somewhat brief due to space constraints, we provided many more experimental details in Appendix G, which we believe will be helpful in implementing our algorithm. Furthermore, we plan to release code for our approach, and will work to clarify notation in the final version. > *“Although the paper is well structured and written, the main weakness is the presentation...”* We were somewhat limited by space constraints in the original submission, but will plan to add more explanation as space permits in the final version. We would be happy to provide explanations to any particular points the reviewer has. > *“The assumptions presented are justified by citing previous work but some description would be useful...”* Intuitively, Assumption 3 states that every feature direction can be reached, and is relevant to learning a good controller. If this assumption is not satisfied, then $\phi$ is overspecified, and could be projected down to a lower-dimensional space without changing the behavior. If our smoothness assumptions (Assumptions 4 and 5) are not met, small changes in the system parameters could lead to arbitrarily large changes in the cost of the learned controller. To ensure that a (reasonably) good controller can be learned on an estimate of the system parameters, we must assume that the cost of a controller varies smoothly as the parameters of the system change. We will add these intuitions in the extra page. 
> *“Assumption 8 lacks some general explanation...”* As shown in Theorem 3, the parameters ($C_{\mathcal{R}}, p_{\mathcal{R}}$) of the algorithm employed by Assumption 8 affect the convergence rate of our experiment design procedure—besides this, the choice of algorithm does not affect the performance. The cost in Assumption 8 is different from the goal cost as we do not apply the algorithm of Assumption 8 to optimize the goal cost directly—instead, we apply it to a cost which incentivizes exploration (and the data collected during exploration is then used to solve the actual goal). This is illustrated and described in Section 5.1. > *“A lot of the concepts are derived from reference [2]...”* We emphasize that moving from linear to nonlinear systems is usually very non-trivial, which is the case here, and achieving our guarantee in the nonlinear setting requires non-trivial analysis. For example, in linear systems, random excitation is sufficient, while in nonlinear systems random excitation is not guaranteed to excite each direction, and estimates of system parameters could be arbitrarily bad. Addressing this required developing exploration techniques that navigate within the system to ensure all directions are excited. > *“Theorem 1 and 2 introduce bounds on the exploration loss which are claimed to be novel...”* Only a few works that we are aware of can be directly compared with our results in Theorems 1 and 2. In particular, [1,2] both provide guarantees in this setting, but their rates translate to suboptimality guarantees that scale as $O(1/\sqrt{T})$, significantly slower than our $O(1/T)$ rate. The work of [3] does not give an end-to-end guarantee on controller loss (and furthermore they consider the non-episodic setting). 
However, applying their estimation guarantee in Proposition 1, we obtain a bound scaling as roughly $O( \| \mathcal{H}(A_\star) \|_{\mathrm{op}} \cdot \min [ d_x, d_{\phi}] \cdot \frac{H (d_x + d_\phi)}{\lambda_{\min}^\star T})$, which could be significantly worse than our bound, as we demonstrate to be the case experimentally in Section 1.1 (we emphasize again that these are not directly comparable though—to obtain a direct comparison the results of [3] would need to be reworked in the episodic setting we consider). We will add further discussion on this in the final version. [1] Kakade, Sham, et al. "Information theoretic regret bounds for online nonlinear control." Advances in Neural Information Processing Systems 33 (2020): 15312-15325. [2] Song, Yuda, and Wen Sun. "PC-MLP: Model-based reinforcement learning with policy cover guided exploration." International Conference on Machine Learning. PMLR, 2021. [3] Mania, Horia, Michael I. Jordan, and Benjamin Recht. "Active learning for nonlinear system identification with guarantees." arXiv preprint arXiv:2006.10277 (2020). > *“The systems use in the experiments have fairly small scales (6 dimensions for the state)...”* We refer the reviewer to points 1., 3., and 4. of our response to all reviewers, and to Appendix G for additional experimental details. As noted, we believe our contribution is primarily theoretical, but we believe our experimental results do provide empirical evidence that task-driven exploration yields non-trivial gains on realistic systems, and motivates future experimental work in this direction. Furthermore, efficient approximations of our algorithm exist, which we believe could be scaled to higher-dimensional systems without issue. > *“In the experimental section, it is mentioned that MPC sampling based methods are used for exploration. How do we know that they verify assumption 8 which is key for convergence?”* In practice our MPC procedure may not have a guarantee satisfying Assumption 8 directly. 
However, we see this as an advantage: our experimental results demonstrate that our approach can be effectively applied even when our assumptions are not necessarily met. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications, please add the explanation about the assumptions. Comment: I acknowledge reading the rebuttal of the authors and thank them for answering all my questions clearly. I recommend that you add the explanation about assumption 3 and assumption 8 to the paper to improve the presentation. The discussion of theorem 1 and 2 and their relations to related work could also be added.
Summary: This paper considers the problem of task-relevant system identification. It starts with the motivation that not all parameters in the dynamics model are relevant to the given task; thus, the exploration should be placed to prioritize the identification of the relevant dynamics parameters, eventually leading to improved efficiency of the model-based policy learning. The authors begin by translating the task loss to the dynamics prediction loss, which is weighted by the cost Hessian at the ground-truth dynamics model. The main result of this paper is to show that this dynamics loss can be approximated tightly by the DynamicOED method. The proposed method is validated using some toy problems. Overall, the paper is well motivated and presented. This paper did a great job of explaining the theorems. Although the parametric dynamics models of interest are restricted to weighted features, the main results of this paper may be of interest to the reinforcement learning community. Strengths: See my comments in Summary Weaknesses: - In the motivating example, the explanation of task-irrelevant or relevant parameters is confusing. Shouldn't $a_{3:12}$ be irrelevant to learning the optimal controller, while $a_{1:2}$ is critical? - Please further explain "In practice, though they may not formally satisfy the guarantee of Assumption 8, deep RL approaches could be used." - I think Section 5 deviates a bit from the main problem formulation, especially starting with a new system Eq. (5), although I understand the authors are trying to introduce DynamicOED generally for a broad interest of the readers. - The authors could give some perspectives on the promise of the results applied to more general dynamics. - I strongly suggest adding more or more complex tasks to demonstrate the efficiency of the proposed methods over non-optimal-exploration model-based policy learning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See my comments in Weaknesses. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See my comments in Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and will work to incorporate the feedback into the final version. We address specific comments below. > *“In the motivating example, the explanation of task-irrelevant or relevant parameters are confusing. Shouldn't $a_{3:12}$ be irrelevant to learning the optimal controller, while $a_{1:2}$ critical?”* $a_1$, $a_2$, and $a_3$ are all critical to learning the optimal controller—$a_1$ and $a_2$ since they are active everywhere, and $a_3$ since it is active at the point the optimal controller seeks to reach (and therefore significantly influences the dynamics at the optimal point). $a_1$ and $a_2$ would be learned efficiently by essentially any approach, however—our approach is able to outperform existing approaches by targeting its exploration toward $a_3$ (while simultaneously learning $a_1$ and $a_2$). We will add additional explanation to the final version to make this clear. > *“Please further explain "In practice, though they may not formally satisfy the guarantee of Assumption 8, deep RL approaches could be used."”* Our approach is very modular—for example, the policy parameterization and optimizer used, as well as the regret minimization oracle for DynamicOED, could be replaced without changing our end-to-end guarantee. One could therefore instantiate these with a variety of methods, including deep RL approaches (e.g. PPO with neural network function approximation could be run on the estimated system to obtain a good policy). While these approaches may lack the theoretical guarantees necessary for our formal results to hold, we expect that, if they are used effectively, they could still yield strong empirical results. > *“I think Section 5 a bit deviates from the main problem formulation, especially starting with a new system Equ. 
(5), although I understand the authors are trying to introduce DynamicOED generally for a broad interest of the readers.”* As we hoped to make the results of Section 5 as applicable as possible to a wide range of systems (and make clear that the results in this section do not require that our system be of the form (1.1)), we chose to make this section as general as possible. We will make clear in the final version, however, its connections to systems of the form (1.1). > *“The authors could give some perspectives on the promise of the results applied to more general dynamics.”* We refer the reviewer to point 2. of our response to all reviewers—the dynamics considered are quite general and can encompass nearly any continuous dynamical system. We believe extending our results to even more general settings is an interesting future direction, but is beyond the scope of this work. In particular, extending to more general systems would require new estimation procedures, as the least-squares estimator is not necessarily optimal in such settings, which could significantly affect how the observations translate to estimation error and therefore to controller loss, introducing non-trivial analysis challenges. > *“I strongly suggest adding more or complex tasks to demonstrate the efficiency of the proposed methods over non-optimal exploration model-based policy learning.”* We refer the reviewer to points 1. and 3. of our response to all reviewers. As stated there our contribution is primarily theoretical, but we believe our experimental results do provide empirical evidence that task-driven exploration yields non-trivial gains on realistic systems, and motivates future experimental work in this direction.
Summary: The paper proposes an active exploration algorithm for systems of the form $x_{k+1} = A \phi(x_k, u_k) + w_k$ with known features $\phi$ and unknown matrix $A$, where $w_k \sim \mathcal{N}(0,1)$. The problem is reduced to minimising $\mathrm{tr}(\mathcal{H}(A_\star)\check{\Lambda}_T^{-1})$ where $\mathcal{H}(A_\star)$ is the Hessian at the optimal $A_\star$ (which is iteratively estimated) and $\check{\Lambda}_T^{-1}$ is roughly the expected covariance $\phi\phi^T$ over $T$ episodes of length $H$. Thorough analysis is presented. The algorithm achieves instance-optimal rate. Examples on simulated systems are provided. Strengths: Very strong theoretical paper, solving optimal exploration for a class of linear-in-features systems. Originality, quality, clarity, are excellent. Significance is strong, but since the features are assumed known, it limits the applicability of the algorithm. Weaknesses: - The class of systems is quite restricted. If one needs to learn the non-linear features $\phi(x, u)$ first, then one could just as well use the collected data to estimate $A$ afterwards. It would be interesting to see an extension where features are learned as well. - The computational complexity is unclear. Does the algorithm run real-time? Is the numerical implementation stable? - There is no Discussion/Conclusion/Limitations section. - Empirical comparisons are only done with respect to very naive baselines (random and uniform exploration). At least some variance or entropy minimising active exploration could be considered, as e.g., in [1]. But for this paper it is fine, the theoretical contributions are already sufficient, no further experiments are necessary. [1] Schultheis, M., Belousov, B., Abdulsamad, H., & Peters, J. (2020). Receding horizon curiosity. In Conference on robot learning (pp. 1278-1288). PMLR. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could one further improve the algorithm for control-affine systems? 
I.e., $x' = A\phi(x) + Bu$ with unknown $A$ and $B$? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: No. The authors are encouraged to add at least one paragraph of Conclusion, which also discusses the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
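The system class summarized in this review, $x_{k+1} = A \phi(x_k, u_k) + w_k$ with known features and unknown $A$, can be made concrete with a short sketch. This is not the paper's algorithm: the feature map below is a toy assumption for illustration, and the fit is the plain least-squares estimator that the rebuttals elsewhere describe as the underlying estimation step.

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x, u):
    # Toy feature map, assumed here purely for illustration;
    # the paper's features are problem-specific and known in advance.
    return np.concatenate([x, u, np.sin(x)])

d_x, d_u = 2, 1
d_phi = 2 * d_x + d_u
A_star = 0.1 * rng.normal(size=(d_x, d_phi))  # unknown ground-truth parameters

# Roll out T steps of x_{k+1} = A* phi(x_k, u_k) + w_k under random excitation.
T = 5000
x = np.zeros(d_x)
feats, nexts = [], []
for _ in range(T):
    u = rng.normal(size=d_u)
    f = phi(x, u)
    x = A_star @ f + 0.01 * rng.normal(size=d_x)
    feats.append(f)
    nexts.append(x)
Phi = np.array(feats)   # (T, d_phi): regressors phi(x_k, u_k)
Xn = np.array(nexts)    # (T, d_x): targets x_{k+1}

# Least-squares estimate: A_hat = argmin_A sum_k ||x_{k+1} - A phi(x_k, u_k)||^2
A_hat = np.linalg.lstsq(Phi, Xn, rcond=None)[0].T
residual = np.mean((Xn - Phi @ A_hat.T) ** 2)  # ~ process-noise variance
```

The paper's contribution is in *choosing* the excitation (task-driven, via the Hessian-weighted objective) rather than using the random inputs above; this sketch only fixes the model class and estimator being discussed.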
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and will work to incorporate the feedback into the final version. We address specific comments below. > *“The class of systems is quite restricted. If one needs to learn the non-linear features first, then one could just as well use the collected data to estimate afterwards. It would be interesting to see an extension where features are learned as well.”* We refer the reviewer to point 2. in our response to all reviewers. We agree that there are interesting cases where one may want to learn the features of the system, but leave fully addressing this for future work. We remark, however, that in many settings (e.g. the inverted pendulum) the features (i.e., structure from physics) are known, and all that must be learned are the coefficients. Furthermore, in many settings it may be possible to learn an expressive set of features from one task and transfer them to another; therefore, we do not believe the assumption that the features are known is particularly restrictive (though it is nonetheless an interesting direction for future work). > *“The computational complexity is unclear. Does the algorithm run real-time? Is the numerical implementation stable?”* We refer the reviewer to point 4. in our response to all reviewers for discussion of computational complexity. The current implementation is essentially real-time—the primary computational burden is in the policy optimization step and the computation of the Hessian. While these cannot be done in real-time currently, they only need to be performed a limited number of times (on the order of $\log T$) and can be performed offline between episodes. Furthermore, as noted, our approach is very modular and these more computationally expensive components could be replaced with more computationally efficient procedures if desired—we did not optimize for this. The implementation is numerically stable and we plan to release code for it. 
> *“There is no Discussion/Conclusion/Limitations section.”* We have omitted this due to space constraints but will add for the final version. In particular, we will make sure to highlight some of the limitations mentioned by reviewers here. > *“Empirical comparisons are only done with respect to very naive baselines (random and uniform exploration). At least some variance or entropy minimising active exploration could be considered, as e.g., in [1]. But for this paper it is fine, the theoretical contributions are already sufficient, no further experiments are necessary.”* We refer the reviewer to points 1. and 3. of our response to all reviewers. As stated there (and noted by the reviewer) our contribution is primarily theoretical, but we believe our experimental results do provide empirical evidence that task-driven exploration yields non-trivial gains on realistic systems, and motivates future experimental work in this direction. Furthermore, we believe the “Uniform” baseline we compare against is very strong (see point 3. for more explanation). > *“Could one further improve the algorithm for control-affine systems? I.e., $x' = A \phi(x) + Bu$ with unknown $A$ and $B$?”* As our result is optimal for any choice of $\phi$, in the control-affine case, where $\phi(x,u) = [\phi(x),u]$, our statistical complexity cannot be improved (since it is already optimal). However, there might exist more computationally efficient implementations in such control-affine settings. --- Rebuttal Comment 1.1: Title: Rebuttal Acknowledged Comment: Thank you for your answers and clarifications
Summary: This paper tackles the challenge of controlling unknown nonlinear dynamical systems in reinforcement learning and control theory. The authors propose a novel algorithm, inspired by recent work in linear systems, focusing on the most critical parameters for learning a low-cost controller on the actual system. This method efficiently explores the system to reduce uncertainty in a specific, task-dependent metric. They demonstrate the algorithm's effectiveness on nonlinear robotic systems and provide a theoretical lower bound showing that their approach learns a controller at a near instance optimal rate. This work could have significant implications for reinforcement learning and control theory. Strengths: The author presents an algorithm which achieves the instance-optimal rate, with controller loss matching the lower bound on the loss of any sufficiently regular control rule. Weaknesses: The regularity assumptions in 3.1 should be addressed clearly and the author could give a toy example to illustrate when these assumptions are satisfied. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The formulation of the nonlinear system in (1.1) seems restricted, can you explain more about the formulation? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The simulations are conducted on low dimension systems and the experiment setting should be stated more clearly. Besides, this work faces a high computational burden. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and will work to incorporate the feedback into the final version. We address specific comments below. > *"The regularity assumptions in 3.1 should be addressed clearly and the author could give a toy example to illustrate when these assumptions are satisfied."* We would like to emphasize that, as noted, the majority of the assumptions in Section 3.1 are standard in the theoretical literature [1,2]. These will all be met for classes of problems such as LQR. We also want to emphasize that, intuitively, such assumptions are needed to learn efficiently. For example, if our smoothness assumptions (Assumptions 4 and 5) are not met, small changes in the system parameters could lead to arbitrarily large changes in the cost of the learned controller. To ensure that a (reasonably) good controller can be learned on an estimate of the system parameters, we must assume that the cost of a controller varies smoothly as the parameters of the system change. [1] Mania, Horia, Michael I. Jordan, and Benjamin Recht. "Active learning for nonlinear system identification with guarantees." arXiv preprint arXiv:2006.10277 (2020). [2] Kakade, Sham, et al. "Information theoretic regret bounds for online nonlinear control." Advances in Neural Information Processing Systems 33 (2020): 15312-15325. > *"The formulation of the nonlinear system in (1.1) seems restricted, can you explain more about the formulation?"* We refer the reviewer to point 2. of our response to all reviewers. As stated there, system (1.1) is actually very general and can encompass essentially any continuous dynamical system. > *"The simulations are conducted on low dimension systems and the experiment setting should be stated more clearly. Besides, this work faces a high computational burden."* We refer the reader to point 1., 3., and 4. of our response to all reviewers, and to Appendix G for additional experimental details. 
As noted, we believe our contribution is primarily theoretical (with a focus on statistical optimality rather than computational complexity), but we believe our experimental results do provide empirical evidence that task-driven exploration yields non-trivial gains on realistic systems, and motivates future experimental work in this direction. Furthermore, efficient approximations of our algorithm exist, which we believe could be scaled to higher-dimensional systems without issue.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their feedback and will do our best to incorporate as much of it as possible into the final version. We address several key points of clarification below: 1. **Primary Contributions:** We want to emphasize that the primary contribution of this paper is theoretical. To our knowledge, ours is the first work to provide instance-optimal guarantees for nonlinear control. We also want to emphasize that the extension from linear to nonlinear systems introduces significant technical difficulties (e.g. in linear systems, random excitation allows for efficient learning, which is not the case here), which our work overcomes. Furthermore, the study of instance-optimality in decision making is of much interest to the theory community and is a very active area of research (see e.g. [1,2,3] below and references therein)—we believe our results are an important contribution in this direction. 2. **System Model:** Several reviewers remarked that the class of systems we consider is too restrictive. We have several comments on this. First, as is stated in bullet point 1, our setting can model any continuous dynamics model given expressive enough $\phi$ (which can be accomplished without prior knowledge of the system by, for example, using random fourier features [7]), and is therefore quite general. In particular, our setting encompasses many standard control settings: linear systems, control affine systems, etc. Second, this setting is standard in the theory community, and has seen much recent work [4,5]. Finally, similar settings have been considered in a variety of empirical studies, demonstrating its applicability to a wide range of real-world problems in robotics and model-based RL [6,7,8]. 3. **Experiments:** Several reviewers suggested that we provide additional experimental results, or compare against more baselines. We first want to reiterate that our primary contribution is theoretical. 
Our experiments, then, serve as a proof-of-concept of the theory. They are also, however, the first experimental results, to our knowledge, to demonstrate that task-driven exploration can yield non-trivial improvements over more naive exploration in realistic systems. We believe this motivates future empirical research extending this to higher-dimensional systems, but this is beyond the scope of the current work. We also want to reiterate that our method is agnostic to the choice of policy parameterization and optimizer, and could be combined with many different approaches—our method provides a statistically optimal exploration routine for any given policy parameterization and optimization method. Given this, for the sake of clarity in our experiments we chose to use relatively straightforward control methods, but the extension to more general methods is straightforward. On our choice of baselines, we want to emphasize that the method referred to as “Uniform” (the approach of [4]) is a very strong baseline as it explores in a targeted manner to optimally reduce uncertainty in the estimated model. It is essentially performing a variant of maximum entropy exploration specialized to the class of systems we consider—we would not expect any existing approaches for MBRL (which either rely on random exploration or entropy-driven exploration) to improve on this baseline. 4. **Computational Efficiency:** Several reviewers brought up the computational efficiency of our approach. While our approach is not computationally efficient in general, we want to emphasize it is statistically optimal. In addition, existing work on learning in systems of the form (1.1) is also computationally inefficient, as efficient planning in systems of the form (1.1) is in general not tractable. For special cases—such as linear systems—where efficient planning is possible, however, our algorithm is computationally efficient. 
Furthermore, as we demonstrate experimentally, there exist computationally efficient approximations (e.g., using a Jacobian approximation to the Hessian computation) to our algorithm which allow us to run it on a range of systems. [1] Wagenmaker, Andrew J., and Dylan J. Foster. "Instance-optimality in interactive decision making: Toward a non-asymptotic theory." The Thirty Sixth Annual Conference on Learning Theory. PMLR, 2023. [2] Kirschner, Johannes, et al. "Asymptotically optimal information-directed sampling." Conference on Learning Theory. PMLR, 2021. [3] Xu, Haike, Tengyu Ma, and Simon Du. "Fine-grained gap-dependent bounds for tabular mdps via adaptive multi-step bootstrap." Conference on Learning Theory. PMLR, 2021. [4] Mania, Horia, Michael I. Jordan, and Benjamin Recht. "Active learning for nonlinear system identification with guarantees." arXiv preprint arXiv:2006.10277 (2020). [5] Kakade, Sham, et al. "Information theoretic regret bounds for online nonlinear control." Advances in Neural Information Processing Systems 33 (2020): 15312-15325. [6] Richards, Spencer M., et al. "Adaptive-control-oriented meta-learning for nonlinear systems." arXiv preprint arXiv:2103.04490 (2021). [7] Boffi, Nicholas M., Stephen Tu, and Jean-Jacques E. Slotine. "Regret bounds for adaptive nonlinear control." Learning for Dynamics and Control. PMLR, 2021. [8] O’Connell, Michael, et al. "Neural-fly enables rapid learning for agile flight in strong winds." Science Robotics 7.66 (2022): eabm6597.
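As a concrete illustration of the random Fourier features construction mentioned in point 2 of the rebuttal above (modeling continuous dynamics with an expressive feature map $\phi$ without prior knowledge of the system): the sketch below is our own illustrative reading, not the authors' code, and assumes an RBF kernel; the function name `random_fourier_features` is hypothetical.

```python
import numpy as np

def random_fourier_features(x, num_features, bandwidth=1.0, seed=0):
    """Map inputs x of shape (n, d) to features phi(x) of shape
    (n, num_features) whose inner products approximate an RBF kernel
    exp(-||x - y||^2 / (2 * bandwidth^2)), following the random
    Fourier features recipe of Rahimi & Recht (2007)."""
    rng = np.random.default_rng(seed)
    d = x.shape[-1]
    # Frequencies drawn from the kernel's spectral density, plus random phases.
    W = rng.normal(scale=1.0 / bandwidth, size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(x @ W + b)
```

With enough features, `phi(x) @ phi(y).T` approximates the kernel value, so a model that is linear in such a $\phi$ can represent smooth nonlinear dynamics, which is one way the setting in (1.1) can be made general without prior knowledge of the system.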
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper seeks to formally quantify, in the setting of nonlinear dynamical systems, (a) which parameters are most relevant to learning a good controller and (b) the best exploration strategy to minimize uncertainty in such parameters. This paper develops theoretical insights showing that minimizing the controller loss in nonlinear systems translates to estimating the system parameters in a particular task-dependent metric. In light of this characterization, the paper presents an algorithm that achieves the instance-optimal rate for efficient exploration, with controller loss matching a proven lower bound. Several numerical experiments are conducted on nonlinear systems to validate the analysis and the approach. Strengths: - **S1** The paper is very well written. The problem definitions and contributions are clear. - **S2** For a particular class of nonlinear systems of the form (Eqn. 1.1), the paper characterizes how minimizing the controller loss in nonlinear systems translates to estimating the system parameters in a task-dependent metric, and provides a lower bound on the loss of any control rule. - **S3** The paper proposes an algorithm that achieves the instance-optimal rate, with controller loss matching the proven lower bound. - **S4** The proposed approach demonstrates improvements compared to the baselines in several numerical experiments. Weaknesses: - **W1** The paper could benefit from having stronger experimental results. The authors could include more simulated environments that fit into the particular class of nonlinear systems, both in simulation and the real world, such as [1]. The authors could also consider comparing with more baseline methods, such as [2] and [3], from the deep model-based RL community. - **W2** Could the authors elaborate on how theoretical insights from the paper relate to problems in practical applications of the particular class of nonlinear systems (1.1)? 
I believe this would improve the paper's impact on the broader robotics, control, and RL communities. For instance, could applications in safety-critical scenarios benefit from these insights (e.g., flight maneuvers in [1])? [1] O’Connell et al., "Neural-fly enables rapid learning for agile flight in strong winds." Science Robotics, 7(66):eabm6597, 2022. [2] Kaiser et al., "Model-based reinforcement learning for atari." arXiv preprint arXiv:1903.00374, 2019. [3] Hafner et al., "Dream to Control: Learning Behaviors by Latent Imagination." arXiv preprint arXiv:1912.01603, 2020. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - The authors mentioned in line 378 that they believe the approach will also scale to deep model-based RL settings. Could the authors explain how their approach can be scaled with deep RL? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: - The authors target a particular class of nonlinear systems characterized in (1.1). This characterization would determine what kind of systems the theoretical insights and the approach can be directly related to. - The authors provide experimental analysis on simulated domains. It would be interesting to see the results in real-world environments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and will work to incorporate the feedback into the final version. We address specific comments below. > *Response to W1* We refer the reviewer to points 1. and 3. of our response to all reviewers. As stated there our contribution is primarily theoretical, but we believe our experimental results do provide empirical evidence that task-driven exploration yields non-trivial gains on realistic systems, and motivate future experimental work in this direction. > *Response to W2* The primary practical insights from our theory are in what and how to explore. Our theoretical analysis provides a precise quantification of which parameters of the system are most relevant to learn if our goal is to find a good policy, and therefore provide an exploration objective: explore so as to minimize the estimation error in these parameters. Furthermore, our analysis provides insight into how to achieve this: running the DynamicOED procedure will ensure that the estimation error in relevant parameters is minimized as much as possible. The particulars of a given application can be easily encoded in our cost. For example, one could add a penalty to the cost to incentivize safe behavior, or could choose the cost to track a particular trajectory. In such settings, our approach will adapt to direct exploration to whatever parameters are most relevant to the particular cost. In addition, as our exploration policy set is generic, it could be restricted to only include “safe” exploration policies (for example, policies guaranteed to maintain stability of the drone during exploration)---our theoretical results still provide an end-to-end guarantee in such settings. Moreover, the learned features in [1] exactly match our setting in (1.1), so our theory applies directly to guide practice in such applications. 
> *"The authors mentioned in line 378 that they believe the approach will also scale to deep model-based RL settings. Could the authors explain how their approach can be scaled with deep RL?"* Our approach is very modular—for example, the policy parameterization and optimizer used, as well as the regret minimization oracle for DynamicOED could be replaced without changing our end-to-end guarantee. One could therefore instantiate these with a variety of methods, including deep RL approaches (e.g. PPO with neural network function approximation could be run on the estimated system to obtain a good policy). While these approaches may lack the theoretical guarantees necessary for our formal results to hold, we expect that, if they are used effectively, they could still yield strong empirical results. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: Thank you for your answers and clarifications. I recommend that you add the practical insights to the paper to improve its presentation.
Summary: In this paper, the authors propose a method for learning an optimal controller that is able to focus the exploration so as to minimize the estimation error of relevant parameters. They show that the proposed method is indeed optimal with tight upper and lower bounds on the gap. Experiments show that the proposed algorithm performs better compared to the baseline reaching lower loss at any given number of learning episodes. **After authors' rebuttal:** Thanks for the added clarification. I agree that the paper's contributions are largely theoretical and there's enough here to merit a publication. The authors mention their intention for addition of proof sketches and intuitive discussion which will improve the readability of the paper further. Given these, I am keeping my rating at 7! Strengths: The paper is clearly-written, well-motivated and systematically-structured. The assumptions are explicit and the theorems are described to develop intuition. While the idea of certain parameters dictating the system dynamics might make abstract sense, having the motivating example in section 1.1 made the follow-up analysis more grounded for me. Using the second order moment to quantify parameter relevance is neat but that appears to be adopted from previous work. In general, the theory part of the paper is crisp and laid out to facilitate understanding. Weaknesses: On the other hand, the experiment section is quite thin. However, as a largely theoretical contribution, this is acceptable. I didn’t look through the appendix so I missed out on the proofs, and the paper could have benefitted from some proof sketches without hashing out all the details. Technical Quality: 3 good Clarity: 3 good Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their comments and will work to incorporate the feedback into the final version. We address specific comments below. > *"On the other hand, the experiment section is quite thin. However, as a largely theoretical contribution, this is acceptable."* We refer the reviewer to points 1. and 3. of our response to all reviewers. As stated there (and as the reviewer and other reviewers also state) our contribution is primarily theoretical, but we believe our experimental results do provide empirical evidence that task-driven exploration yields non-trivial gains on realistic systems, and motivate future experimental work in this direction. > *"the paper could have benefitted from some proof sketches without hashing out all the details"* We agree that proof sketches would be helpful and plan to include them in the final version with the extra page of space. In particular, we will include intuition on how the lower bound was proved, and additional intuition on the exploration procedure (DynamicOED).
What Makes Good Examples for Visual In-Context Learning?
Accept (poster)
Summary: This paper studies in-context learning abilities of large vision models and finds downstream task performance to be highly sensitive to the choice of examples. They observe that the closer the in-context example is to the query, the better the performance. Since manually designing prompts would be time-intensive, they propose a (contrastive) supervised and an unsupervised version of a framework for prompt retrieval guided by a score function. They evaluate the proposed framework on the foreground segmentation, single object detection and image colorisation tasks. Strengths: - In-context learning in vision is a new and active area of research and methods for selecting better examples for in-context learning will likely be of interest to the computer vision community. - The proposed method for prompt retrieval outperforms randomly selected examples on all tasks considered. - Distribution shift results are interesting, suggesting that the supervised prompt retrieval method acquires in-context knowledge that is robust to distribution shift. - The analysis and ablations are interesting and insightful. Weaknesses: - The finding that examples in the prompt should be semantically close to the test example is known from language and not that surprising. - The proposed method is technically relatively straightforward compared to an average NeurIPS paper. - The analysis is limited to one model for in-context learning and may not transfer to other models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. It is surprising to me that the smallest gains are observed on the colorisation task and supervision is not helpful, despite the fact that both training and evaluation are done on ImageNet. Do you have any intuition for why this may be the case? 2. In Sec 3.3, you mention that 20% of the data is enough for good performance. Is this 20% of 50,000, i.e. 1000 images? 3. What is the intuition behind MAE in Table 5 performing worse than UnsupPR? 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments. We respond to the concerns below: **Q1**: Semantically close to the test example is known from language and not that surprising. **A1**: Yes, we acknowledge the parallels between some of our observations and those from the natural language domain. However, most of our study presents findings that are significantly novel to the computer vision community. For example, while the concept of semantic similarity might seem intuitive initially, the specific attributes that constitute this similarity in the context of computer vision remain largely unexplored. As highlighted in Figure 3, elements such as background, pose, appearance, and viewpoint play crucial roles. Furthermore, our study identified differences between language and vision models. Notably, while in-context learning in language models shows sensitivity to the sequencing of in-context examples [1], such order doesn't significantly influence vision models. This underscores the unique challenges and considerations intrinsic to computer vision tasks. **Q2**: Technically relatively straightforward compared to an average NeurIPS paper. **A2**: Our proposed methodology, specifically for the SupPR, introduces an innovative approach to prompt retrieval that efficiently boosts the downstream capabilities of the pre-trained model without fine-tuning it. In addition, as shown in Table 5, the superiority of prompt retrieval becomes apparent only when we meticulously develop an appropriate learning pipeline, such as our SupPR, which effectively leverages this feedback to enhance the downstream performance of the pre-trained model. We believe our prompt retrieval design is a beneficial contribution to the community. **Q3**: One model for in-context learning and may not transfer to other models **A3**: Thanks for your suggestion. Please refer to the **General Response** for the detailed explanation. 
**Q4**: Smallest gains are observed on the colorization task and supervision is not helpful **A4**: A possible explanation for this observation is that the current pre-trained inpainting model does not efficiently perform in-context learning on the colorization task (for a detailed illustration, please refer to Figures 14-15 in the supplementary material). If the pre-trained model's in-context learning ability for the colorization task is weak, we face difficulties in ranking positive and negative examples relative to a query image. This, in turn, impedes the effective training of a robust feature extractor for SupPR. **Q5**: 20% of the data is enough for good performance **A5**: Given that each data split comprises approximately 3,000 samples, this implies that a subset of 600 samples (20% of the data) is deemed sufficient for effective model performance. **Q6**: Intuition behind MAE in Table 5 performing worse than UnsupPR **A6**: The differential in performance is fundamentally attributed to the complexity of designing a robust feature extractor for SupPR. The strength of SupPR comes from our careful design rather than the supervision signal. Without such careful design, SupPR can even perform worse than UnsupPR. Moreover, this finding highlights that semantic similarity is a key factor in selecting an effective prompt. The inferior performance of the fine-tuned MAE suggests that features optimized by foreground segmentation may be deficient in encoding semantic information. On the other hand, given CLIP's well-known ability in encoding semantic information, our UnsupPR is equally good at retrieving semantically similar examples. [1] Agrawal, Sweta, et al. "In-context examples selection for machine translation." arXiv preprint arXiv:2212.02437 (2022). --- Rebuttal Comment 1.1: Title: Follow-up Comment: Dear Reviewer, In our rebuttal: 1) We highlight the insight found in our paper. 
We claim that most of our study presents findings that are significantly novel to the computer vision community. And we indicate that Table 5 in our paper shows that, despite our method being straightforward, the superiority of our method becomes apparent only when we meticulously develop an appropriate learning pipeline. 2) We conduct more experiments showing that our method generalizes easily to visual-language tasks such as image captioning and VQA. 3) We clearly answer your questions about the details of our paper. We would love your feedback on whether our answer has solved your concern or if you have further questions.
Summary: The paper identifies that visual prompting is sensitive to the choice of input-output example(s). To address this, the authors propose a retrieval framework to better select the examples. The authors propose supervised and unsupervised retrieval approaches which significantly improve the performance compared to random selection. Strengths: - Through extensive empirical study, the authors demonstrate the role of input-output examples for visual prompting. - The authors present two different retrieval strategies to choose the best visual prompting example and find that both approaches are superior to random choice. - The authors show that choosing the right example can also improve performance under distribution shifts. - It is also nice that the retrieval similarity function is class-agnostic. But regardless, empirically it extends to three different tasks. Weaknesses: Overall, the main two weaknesses in my opinion are: 1. The assumption that we have a set of tagged examples to retrieve from. 2. The supervised/unsupervised similarity function is straightforward but currently very specific to mIOU similarity. Minor: 3. I think there is a broader spectrum that can be analyzed for visual prompts. Please see the questions for extended comments/feedback. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. The main weakness is the assumption that we have a set of tagged examples. In this case, why should we retrieve rather than use our tagged examples? e.g., train a new model overall tagged examples or use an ensemble? Perhaps the best way to address this would be to add another ensemble/supervised-trained baseline in Figure 5 (left). These baselines should utilize the same pool of examples. 2. The supervised/unsupervised similarity function is straightforward but currently very specific to mIOU similarity. Can we improve this by utilizing similarity in feature space? (e.g., using existing CLIP or DINO, etc?). 
To be fair, from the empirical results it seems like by optimizing mIOU there is some improvement for colorization and single object detection as well. Minor: 3. I think there is a broader spectrum that can be analyzed for visual prompts. E.g., using synthetic task data as examples, using examples from different classes, then examples from within classes as analyzed in the paper. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors clearly stated and discussed the limitations of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments. We respond to the concerns below: **Q1**: The assumption that we have a set of tagged examples to retrieve from. Why should we retrieve rather than train on our tagged examples? **A1**: It is essential to note that our method is fundamentally designed for scenarios where there is a certain number of tagged examples, which aligns with the standard presumption of in-context learning. In addition, we conduct studies to demonstrate the advantages of prompt retrieval. The following table, referring to Figure 5 (left), compares full-set and 1% of the full-set scenarios. The first two columns compare the performance of our SupPR model with the fine-tuned MAE model (the supervised-trained baseline that might concern you). Importantly, SupPR outperforms the fine-tuned model, especially when training data is scarce, underlining the superiority of the prompt retrieval approach. Furthermore, to accentuate the potential of visual in-context learning via prompt retrieval, we present an upper-bound performance, achieved by manually selecting the best in-context samples from the training dataset for each query. The noticeable performance difference between this upper bound and the fine-tuned MAE model highlights the potential and efficacy of prompt retrieval. If optimized, it can substantially enhance the performance of visual in-context learning, spurring further research in this field.

| | Fine-tune (MAE) | SupPR | Upper bound |
|------------|-----------------|-------|-------------|
| Seg. (mIoU), 1% of full set | 20.15 | 30.22 | 34.53 |
| Seg. (mIoU), full set | 34.98 | 35.56 | 40.23 |

**Q2**: Supervised/unsupervised similarity function is straightforward but currently very specific to mIOU **A2**: Our similarity function indeed uses cosine similarity between two CLIP embeddings in the feature space. It might be a misconception that it is inherently specific to mIOU. 
We emphasize that our methods have applicability beyond mIOU, including diverse generative tasks like image caption generation and VQA. Please refer to the **General Response** section, demonstrating the extendability of our method to other vision in-context learning tasks (image captioning and VQA), not limited to inpainting. **Q3**: A broader spectrum that can be analyzed, e.g., using examples from different classes, then examples from within classes **A3**: Thank you for the insightful suggestion to broaden the analysis spectrum. In response, based on the foreground segmentation task, we conducted random sampling experiments with three distinct scenarios: (a) examples randomly sampled from the entire training dataset, (b) examples randomly sampled across different classes, and (c) examples sampled from within classes. Our findings indicate that the alignment of example data within the same classes as the query data is crucial. This reaffirms our observation that an effective in-context example should be semantically similar to the query.

| | Different classes | Whole training data | Within classes | UnsupPR | SupPR |
|------------|-------------------|---------------------|----------------|---------|-------|
| Seg. (mIoU) | 24.02 | 24.46 | 27.45 | 33.56 | 35.56 |

--- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for addressing my concerns; following the authors' rebuttal I raise my score and I now support the acceptance of the paper. --- Reply to Comment 1.1.1: Title: Thanks for raising your score. Comment: Thanks for raising your score! We’re very encouraged that our rebuttal addressed your concerns and appreciate your support for the paper's acceptance.
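The unsupervised retrieval (UnsupPR) discussed in the rebuttals above, nearest-example search by cosine similarity in an off-the-shelf feature space such as CLIP's, can be sketched as follows. This is an illustrative sketch based on our reading of the rebuttal, not the authors' implementation; the function name `retrieve_prompt` and the toy feature vectors are hypothetical.

```python
import numpy as np

def retrieve_prompt(query_feat, pool_feats):
    """Return the index of the pool example whose feature vector has the
    highest cosine similarity to the query feature (nearest-example search).

    query_feat: 1-D feature vector for the query image.
    pool_feats: 2-D array, one feature vector per retrieval-pool example.
    """
    q = query_feat / np.linalg.norm(query_feat)
    p = pool_feats / np.linalg.norm(pool_feats, axis=1, keepdims=True)
    return int(np.argmax(p @ q))  # cosine similarity = dot product of unit vectors
```

The retrieved index would select the image-annotation pair used as the in-context example. The supervised variant (SupPR) described in the rebuttal instead ranks pool examples by downstream performance (e.g., mIoU) to train the feature extractor contrastively, but the retrieval step at inference time has the same nearest-neighbor form.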
Summary: This paper investigates in-context learning for large vision models. Specifically, the authors propose to automatically retrieve prompts for vision models via two methods: (1) nearest example search with an off-the-shelf model, and (2) a supervised prompt retrieval method. Experimental results show that the proposed methods bring non-trivial improvements compared to random selection on foreground segmentation, single object detection and colorization. Strengths: - the motivation of this paper is important: investigating how vision models could benefit from in-context learning. - the writing is clear and easy to follow. - The experiments show improvement comparing with random selection. Weaknesses: 1. Visual in-context learning emerged from large autoregressive language models. In-context learning originally referred to using language models for a wide range of downstream tasks without updating the model itself. But this paper, though it claims to study visual in-context learning, only studies a limited set of tasks (assign labels to pixels) with an inpainting model. I wonder if the authors could provide thoughts on how to apply it to wider visual tasks, like classification, or standard segmentation tasks (semantic and instance). 2. The finding of "a good in-context example should be semantically similar to query and closer in context" seems to be a problem if the testing examples are unique, and there is no semantically similar {image, annotation} pair in the retrieval pool. This goes to the practicality of the method, and goes to the claim "potential of using prompt retrieval in vision applications". Technical Quality: 3 good Clarity: 3 good Questions for Authors: please refer to the weakness section Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments. We respond to the concerns below: **Q1**: Only studies a limited set of tasks (assign labels to pixels) with an inpainting model; extension to wider visual tasks, like classification, or standard segmentation tasks (semantic and instance). **A1**: Thanks for your suggestion. We would like to clarify that the SupPR/UnsupPR model primarily aims to enhance the in-context learning capabilities that emerged during pre-training, rather than creating new abilities. Consequently, we posit that our model could be feasibly extended to standard segmentation tasks, provided the pre-trained inpainting model demonstrates in-context learning on such tasks, notwithstanding its current limitations on standard segmentation tasks. Additionally, our method shows potential for extension to a broader range of visual tasks. The UnsupPR/SupPR can be easily extended to other tasks. As detailed in the **General Response** section, we extend our method to the image captioning and VQA tasks based on a vision-language model. We appreciate your feedback and encourage you to refer to that section for a comprehensive understanding. **Q2**: Seems to be a problem if the testing examples are unique **A2**: We appreciate the reviewer's point regarding unique testing examples. However, it is essential to note that our method is fundamentally designed for scenarios where there is a certain number of testing examples, which aligns with the standard presumption of in-context learning. Also, due to the relative nature of similarity, the most semantically similar pair can always be identified within the retrieval pool. Our Figure 5 (left) illustrates that even with a retrieval set of just 0.01% of the entire dataset (about 30 image-annotation pairs), which may not include the most semantically similar pairs, SupPR and UnsupPR consistently surpass random selection. This highlights the resilience and practicality of our method. 
--- Rebuttal Comment 1.1: Title: Follow-up Comment: Dear Reviewer, In our rebuttal: 1) Regarding concerns about the limited set of tasks (assign labels to pixels) with an inpainting model: - Our method has demonstrated easy generalization to visual-language tasks such as image captioning and VQA. 2) On the point of "Seems to be a problem if the testing examples are unique": - We underscore that (1) Our study follows the in-context learning paradigm, where a given set of testing examples is assumed. (2) Even without perfectly semantically similar pairs in the retrieval pool, our SupPR and UnsupPR methods consistently outperform random selection, underscoring our method's practicality. We would love your feedback on whether our answer has solved your concern or if you have further questions. --- Reply to Comment 1.1.1: Title: Follow-up Comment: Dear reviewer, In our rebuttal, we hope we have effectively clarified your confusion, and our added experiments serve to bolster the strength of our approach. Given that our rebuttal has effectively resolved concerns raised by other reviewers, we eagerly await your input on whether our response adequately addresses your apprehensions, or if you require additional clarification.
Summary: This paper aims to address the problem in visual in-context learning that performance highly depends on the choice of visual in-context examples. In the paper, the authors propose automatically retrieving prompts in unsupervised and supervised ways without accessing the internal weights of large vision models. Besides, this paper comprehensively studies how to select good examples for visual in-context learning and shares some valuable insights with the community on choosing good visual in-context examples. Strengths: The paper discusses several aspects that influence visual in-context learning and proposes methods to choose in-context learning samples automatically to optimize visual in-context learning with the inpainting method. The proposed methods both outperform the random-selection baseline, and the supervised version, which requires training, performs better than the unsupervised method on several tasks. Also, this paper discusses other factors that might influence visual in-context learning, including the number of examples, order of examples, and size of the retrieval set, which shares some useful practical experience with the community. Weaknesses: 1. This paper only discusses the previous paper using inpainting as visual in-context learning as the visual ICL framework and experiments on the dataset from that paper. It's hard to tell if experiments and conclusions conducted on a specific framework will generalize to more general "visual in-context learning" settings. Besides, although the supervised example-retrieval method outperforms the unsupervised one, the additional model and training seem to contradict the main advantage of ICL, which requires no additional training. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper discusses the impact of the order and number of in-context examples. While more in-context examples generally lead to better performance, is there an upper limit to this?
Will adding more examples become redundant or even detrimental at some point? 2. The paper compares the proposed methods with random selection. Are there other baselines that the authors considered for comparison? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback and comments. We respond to the concerns below: **Q1**: Whether this method can be generalized to more general "visual in-context learning" settings. **A1**: Thanks for your suggestion. Please refer to the **General Response** for the detailed explanation. **Q2**: SupPR contradicts the main advantage of ICL, which requires no additional training. **A2**: The advantage of in-context learning (ICL) comes from requiring no additional training of the **pre-trained model**, which might be provided only as a service, e.g., ChatGPT, Flamingo, PaLM 2. SupPR only uses the pre-trained model as a mechanism for producing in-context outputs, while training an independent scoring model for prompt retrieval. This approach uses downstream few-shot data effectively and does not train the pre-trained model. The goal of SupPR is to enhance the ICL capability (such as foreground segmentation) that emerged during pre-training rather than to create new capabilities. **Q3**: The upper bound on the number of in-context examples. **A3**: We appreciate the reviewer's suggestion to investigate the upper bound on the number of in-context examples. In response, we conducted an additional experiment on the foreground segmentation task, where we employed SupPR as the prompt retrieval method. We increased the maximum number of in-context examples to 32. However, we would like to note that any further increase in in-context examples would result in out-of-memory errors on NVIDIA V100 GPUs. The findings, presented below, detail the performance improvement with an increase in the number of shots. The performance gain over the previous shot count is indicated in round brackets. From this table, it is evident that while there is an almost linear increase in performance from 1 shot to 16 shots, the improvement diminishes from 16 shots to 32 shots.
| | 1 shot | 4 shot | 8 shot | 16 shot | 32 shot | |------------|--------|---------------|--------------|--------------|--------------| | Seg. (mIoU) | 21.90 | 25.04 (+3.1) | 28.45 (+3.4) | 31.25 (+2.8) | 32.64 (+1.4) | **Q4**: Other baselines that the authors considered for comparison. **A4**: Thank you for your query on baselines. In line with Bar et al. [1], our primary baseline employs a within-class random selection of prompts matching the query image class. To broaden our comparisons, we introduced another baseline where prompts are randomly chosen from the entire training dataset. The significant performance difference between UnsupPR/SupPR and this (entire) Random baseline underscores the effectiveness of our method in real-world scenarios. | | (entire) Random | (within-class) Random | UnsupPR | SupPR | |------------|-----------------|-----------------------|---------|-------| | Seg. (mIoU) | 24.46 | 27.45 | 33.56 | 35.56 | [1] Bar, Amir, et al. "Visual prompting via image inpainting." Advances in Neural Information Processing Systems 35 (2022): 25005-25017. --- Rebuttal Comment 1.1: Title: Follow-up Comment: Dear Reviewer, In our rebuttal: 1) Regarding concerns about the generalization of our method to broader "visual in-context learning" settings: - Our method has demonstrated easy generalization to visual-language tasks such as image captioning and VQA. 2) On the point of "additional training" potentially contradicting ICL advantages: - We clarify that since SupPR does not train the pre-trained model, it does not conflict with the benefits of ICL. Furthermore, we've incorporated experiments exploring: 1) The upper bound on in-context examples: - Our findings indicate an almost linear performance increase from 1 shot to 16 shots. However, the gains plateau from 16 to 32 shots.
2) Additional baseline comparisons: - We add a baseline that randomly selects the in-context examples from the entire training dataset, which further strengthens the superiority of our proposed UnsupPR and SupPR. We would love your feedback on whether our answer has resolved your concern or if you have further questions. --- Reply to Comment 1.1.1: Title: Follow-up Comment: Dear reviewer, In our rebuttal, we hope we have effectively clarified your concerns, and our added experiments serve to bolster the strength of our approach. Given that our rebuttal has effectively resolved concerns raised by other reviewers, we eagerly await your input on whether our response adequately addresses your apprehensions, or if you require additional clarification. --- Rebuttal Comment 1.2: Comment: Thank you for the additional experiments and reply. I think the rebuttal basically addresses my concerns, and I have decided to raise my rating to borderline accept. --- Reply to Comment 1.2.1: Title: Thanks for raising your score Comment: Thanks for raising your score! We're very encouraged that our rebuttal basically addressed your concerns and appreciate your support for the paper's acceptance.
Rebuttal 1: Rebuttal: **General Response** Dear Reviewers, We sincerely appreciate the time and insightful comments provided by all reviewers, which have been instrumental in enhancing our paper. We are encouraged by the positive feedback regarding the motivation behind our work (QAph), our strong performance across various tasks (mHCs, QAph, EQqT, vwKB), and the view that our analysis is useful, interesting, and insightful (mHCs, vwKB). **We would like to highlight the key contributions of our work** (1) Our study is the first comprehensive examination of selecting effective examples for the emerging field of visual in-context learning. We uncover a critical issue: the choice of in-context examples significantly impacts performance. (2) From a technical standpoint, we introduce a prompt retrieval framework that automates the prompt selection process, offering two straightforward implementations: an unsupervised method and a supervised method. (3) Through extensive experimentation on three visual in-context learning tasks not encountered during pre-training (foreground segmentation, single object detection, and image colorization), we provide the community with valuable insights on identifying suitable visual in-context examples. For instance, our findings indicate that the supervised method consistently outperforms other approaches and often identifies examples that are semantically related and/or spatially similar to a given query. **Our methods are adaptable to broader "Visual In-Context Learning" scenarios** Our strategies - SupPR and UnsupPR - are designed with a broad scope of applicability, capable of extending to a variety of "visual in-context learning" frameworks. We have substantiated this claim by conducting experiments on the COCO Caption dataset and the TextVQA dataset, utilizing the recently released OpenFlamingoV2-9B model [1].
OpenFlamingo, a visual-language model distinct from the inpainting model, can generate natural language descriptions from visual-language inputs. In these experiments: - For the UnsupPR method, we used CLIP ViT-L/14 as the feature extractor. - For SupPR, a prompt is defined either by an image-caption pair (COCO caption) or by an image-question-answer triplet (TextVQA). A prompt that leads a query image to the maximum/minimum CIDEr (COCO caption) or accuracy (TextVQA) is labeled as a positive/negative prompt. Using these positive/negative prompts, we trained a new feature extractor following the methodology outlined in Sec. 2.2.2. The comparative performance between random selection and our proposed SupPR and UnsupPR is presented below. The results indicate the broader applicability of our methods to more visual in-context learning tasks and visual models. | | Random | UnsupPR | SupPR | |----------------------|--------|---------|-------| | COCO caption (CIDEr) | 84.2 | 87.3 | 88.9 | | TextVQA (Acc.) | 25.4 | 30.2 | 33.4 | Once again, we thank the reviewers for their invaluable feedback and support. [1] Awadalla, Anas, et al. "OpenFlamingo: An Open-Source Framework for Training Large Autoregressive Vision-Language Models." (2023).
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Causal Effect Identification in Uncertain Causal Networks
Accept (poster)
Summary: This paper considers two problems: 1. Finding the most probable graph that makes a desired causal query identifiable. 2. Finding the graph with the highest aggregate probability over those of its edge-induced subgraphs that make a desired causal query identifiable. This paper shows that both problems reduce to a special combinatorial optimization problem called the edge ID problem. They prove that the edge ID problem is NP-hard, implying that both of the problems are also NP-hard. The paper presents several exact and heuristic algorithms for these problems and evaluates their performance through different experiments. The experiments show that the heuristic algorithms performed remarkably well across all metrics. The paper also discusses the application of these algorithms to four real-world datasets. Strengths: - The paper addresses a complex causal inference problem, specifically identifying the most probable graph that makes a desired causal query identifiable. This is a significant contribution to the field. - The authors provide a detailed analysis of the problem's complexity, demonstrating that it reduces to a special combinatorial optimization problem, the edge ID problem, which they prove to be NP-hard. This rigorous theoretical analysis is a strength of the paper. - The authors propose several exact and heuristic algorithms to solve the problem and evaluate their performance through different experiments. The heuristic algorithms, in particular, performed remarkably well across all metrics, demonstrating the practical applicability of the proposed methods. - The paper also applies the proposed algorithms to real-world datasets, further demonstrating their practical utility. The authors provide a detailed comparison of runtimes, solution costs, and failure rates, offering valuable insights into the performance of the algorithms in real-world scenarios. Weaknesses: - The authors made the assumption that the edges in the graph are mutually independent.
This assumption may not hold in all cases. - The external validity of the derived subgraph is not guaranteed. This means that the subgraph may not be correctly specified with respect to the corresponding real-world process. This could limit the applicability of the results in practical scenarios. - The EDGEID algorithm, one of the exact algorithms proposed in the paper, had large runtime variance, which depended heavily on the specifics of the graph under evaluation, particularly for graphs with fewer vertices. This could limit its utility in certain scenarios. - The EDGEID algorithm also timed out on all but one of the real-world structures tested, indicating that it may not be as consistent as other algorithms in terms of runtime. - The runtimes for the MCIP variants exceeded those of the HEID variants due to the required transformation. This could be a potential drawback in scenarios where computational efficiency is a priority. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Could you elaborate on the assumption of mutual independence of edges in the graph? How might the results change if this assumption does not hold? The external validity of the derived subgraph is not guaranteed. Could you discuss potential strategies for validating these subgraphs in real-world scenarios? - The EDGEID algorithm showed large runtime variance and often timed out on real-world structures. Are there any plans to improve the performance of this algorithm in future work? - The paper mentions that the heuristic algorithms performed remarkably well across all metrics. Could you provide more details on the specific scenarios or types of data where these heuristic algorithms are most effective? - The paper suggests that future work should explore cases where the assumption of mutual independence of edges does not hold. Could you provide some insights into how this exploration might be conducted? - The paper applies the proposed algorithms to four real-world datasets.
Are there plans to test these algorithms on a wider variety of datasets, particularly those with different characteristics or from different domains? - The paper presents a detailed analysis of the problem's complexity. Could you discuss the implications of this complexity for the practical application of the proposed algorithms? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: - The authors made the assumption that the edges in the graph are mutually independent. This assumption may not hold in all cases, and the authors themselves acknowledge that future work should explore scenarios where this assumption does not hold. - The external validity of the derived subgraph is not guaranteed. This means that the subgraph may not be correctly specified with respect to the corresponding real-world process. This could limit the applicability of the results in practical scenarios. - The EDGEID algorithm, one of the exact algorithms proposed in the paper, had large runtime variance which depended heavily on the specifics of the graph under evaluation, particularly for graphs with fewer vertices. This could limit its utility in certain scenarios. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the thoughtful review and for acknowledging the significance of our work. We have addressed the main points and questions below. --- >Could you elaborate on the assumption of mutual independence of edges in the graph? How might the results change [...]? In the case of violation of this assumption, the NP-hardness result would still be intact, but achieving a proper reduction to the EdgeID problem becomes challenging, unless at least some structure analogous to our discussion in Appendix C is maintained. With the current modelling of the problem, the independence assumption is central for our reduction to hold. However, this assumption can potentially be relaxed by modelling the EdgeID problem on a hypergraph rather than a simple graph. On the other hand, it is worth noting that certain instances can be easily analyzed when they violate the independence assumption. For instance, an extreme case where independence is violated is when all of the edges are highly correlated, and only two graphs (one with every edge existing and the other where none exists) are likely. This example illustrates that the more interesting cases to analyze are those in which at least some structure on the dependence of the edges is maintained, e.g., our model in Appendix C. --- >The EDGEID algorithm showed large runtime variance and often timed out on real-world structures. Are there any plans [...]? Given the hardness results proved in this work, we do not see improving the efficiency of this algorithm as the main focus of future work. Instead, we are planning to develop approximation algorithms that would run in polynomial time but guarantee an upper bound on the cost of the recovered solution in terms of the optimum. --- >The paper mentions that the heuristic algorithms performed remarkably well across all metrics. Could you provide more details on the specific scenarios [...]?
As indicated in our response to reviewer 'eheu' as well, the result of [1] below suffices to show that these heuristic algorithms cannot be expected to exhibit a 'worst-case' performance better than a log(n) factor of the optimum. However, our experiments suggest that these algorithms perform quite well in the 'average case'. We believe that unless the data is generated adversarially, these algorithms would achieve a near-optimal solution. Proving this claim, however, is left as future work. It is noteworthy that since both of these heuristic algorithms are quite efficient, it would make sense to run both and choose the better of the two solutions. By doing so, the performance would be unsatisfactory only if both heuristic algorithms fail to achieve a near-optimal solution. This indeed would result in a level of robustness, since the performance of HEID1 relies highly on the structure of the directed edges of $\mathcal{G}$, while the performance of HEID2 depends more on the structure of the bidirected edges of $\mathcal{G}$. [1] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4): 634–652, 1998. --- >The paper suggests that future work should explore cases where the assumption of mutual independence of edges does not hold. Could you provide some insights [...]? We have covered a special case where the independence assumption is violated in Appendix C. This special case still allows us to reduce the problem to an instance of EdgeID. However, in a more general setting this would not be the case. Of course, as mentioned earlier, allowing for the most general cases is not the best course of action (considering, for instance, the example above where only two possible configurations exist). One reasonable way to extend our work would be to consider the case where each edge is correlated with at most $k$ other edges for some parameter $k$.
In such a scenario, classic results such as Lovász's local lemma might be helpful to tackle the problem. Another promising direction might be modelling the EdgeID problem on hypergraphs, assigning each subset of variables a cost. This direction is indeed what we would like to explore next. --- >The paper applies the proposed algorithms to four real-world datasets. Are there plans to test these algorithms on a wider variety of datasets [...]? We will indeed consider applying these algorithms to a wider range of datasets. We thank the reviewer for their suggestion. --- >The paper presents a detailed analysis of the problem's complexity. Could you discuss the implications of this complexity for the practical application [...]? There are several implications. First, as NP-hard problems are known for their high computational complexity, the 'exact' algorithms developed in this paper may face challenges when dealing with large-scale or real-world datasets. In practice, this complexity may result in intractable run times for the exact algorithms on extensive datasets, leaving only the heuristic algorithms on the table. Second, the NP-hard nature of the problem also motivates the development of approximation algorithms that can provide near-optimal solutions in polynomial time. While we have introduced heuristic algorithms that performed well in our experiments, further research in designing more efficient approximation algorithms could enhance the practical applicability of our work. Last but not least, understanding the NP-hardness of the problem encourages us to explore scalable techniques, such as parallelization or distributed computing, to tackle larger instances efficiently. These strategies can help mitigate the impact of the problem's complexity on the practical application of the algorithms.
While the NP-hardness of the problem presents challenges in terms of computational complexity, it also motivates further research in developing more efficient algorithms to handle larger datasets. We will discuss these implications in the revision. --- Rebuttal Comment 1.1: Comment: My concerns are well answered and I maintain the primary score.
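To make the independence assumption discussed in this thread concrete: under it, the probability of an edge-induced subgraph factorizes over the edges, and that is the quantity Problems 1 and 2 optimize. The sketch below is purely illustrative; the edge probabilities are invented and the identifiability check is a stub (a real implementation would run an ID/hedge-based test on each subgraph):

```python
from itertools import combinations

# Hypothetical edge-existence probabilities; edges are assumed mutually independent.
edge_probs = {("X", "Z"): 0.9, ("Z", "Y"): 0.8, ("X", "Y"): 0.3}

def subgraph_probability(kept, edge_probs):
    # P(subgraph) = prod of p_e over retained edges times (1 - p_e) over removed ones.
    prob = 1.0
    for e, p in edge_probs.items():
        prob *= p if e in kept else (1 - p)
    return prob

def is_identifiable(kept):
    # Placeholder predicate standing in for a real identifiability test:
    # here the query counts as identifiable iff the confounding-like
    # edge ("X", "Y") was removed.
    return ("X", "Y") not in kept

edges = list(edge_probs)
# Brute force over all 2^|E| edge-induced subgraphs (Problem 1's objective);
# the exact/heuristic algorithms in the paper avoid this exponential search.
best = max(
    (frozenset(sub) for r in range(len(edges) + 1)
     for sub in combinations(edges, r)
     if is_identifiable(frozenset(sub))),
    key=lambda kept: subgraph_probability(kept, edge_probs),
)
print(sorted(best), round(subgraph_probability(best, edge_probs), 3))
# -> [('X', 'Z'), ('Z', 'Y')] 0.504
```

The extreme case mentioned in the rebuttal (all edges perfectly correlated) is exactly where this product form, and hence the reduction, breaks down.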
Summary: This paper is about causal identification in a setup where the assumption of reliable knowledge of the causal graph is relaxed. The authors assign a probabilistic weight to the possible confounders. The combinatorial task of finding the most probable causal graph such that a given causal query is identifiable is considered. The same problem with an aggregate probability over the induced subgraphs is also considered. Not surprisingly, both these problems are hard. The authors first deliver an exact solution. Two approximate procedures are obtained: (i) a recursion calling min-cut; (ii) a reduction to MCIP. Experiments on real and synthetic data are finally discussed. Strengths: The idea of an uncertain causal network with distributive semantics looks very reasonable. The hardness results are not surprising, but they appear important for properly characterising the problem. The heuristics presented are non-trivial and effective. Weaknesses: - Both heuristic algorithms lack a worst-case complexity characterisation. - In the case of MCIP, it needs to be clarified whether the polynomial reduction considered by Proposition 4.2 can be implemented in practice. - Problems 1 and 2 are not necessarily the only/best ways to cope with uncertain causal networks. No robust discussion to advocate such a choice is reported. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I understand that min-cut has a poly solution, but what is the overall complexity of the first heuristic algorithm? Similarly, is it possible to implement the polynomial transformation in Proposition 4.2, and if so, which is the overall complexity of reducing the problem to MCIP and solving the corresponding instance? The authors assume that the query should be identifiable, and in this perspective, Problems 1 and 2 are very reasonable ways to face the problem. But why should the query be identifiable? There is a lot of literature on non-identifiable queries.
E.g., if the output is identifiable only in a very unlikely graph, how should we proceed? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I don't see relevant issues on this point. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's thoughtful and positive feedback. We have carefully considered their comments and concerns and addressed them in the following points: --- > is it possible to implement the polynomial transformation in Proposition 4.2, This transformation is indeed implemented in the code provided as supplementary material. We have also reported the time required to perform this transformation in Figure 4.c, where even accounting for this overhead, the MCIP-based algorithm is faster on a fraction of instances. >which is the overall complexity of reducing the problem to MCIP and solving the corresponding instance? The time complexity of the transformation is linear in the number of vertices and edges of $\mathcal{G}$, and cubic in the size of the set $Y$. More precisely, with the notation used in the paper, the time complexity would be $O(\vert V^\mathcal{G}\vert + \vert E_b^\mathcal{G}\vert + \vert E_d^\mathcal{G}\vert + \vert Y\vert^3)$. It is straightforward to verify this complexity bound through the construction of Appendix A.2. We thank the reviewer for bringing this up, and we will include this analysis in the revision. --- >Both heuristic algorithms lack a worst-case complexity characterisation. >I understand that min-cut has a poly solution, but what is the overall complexity of the first heuristic algorithm? The necessary transformation for the heuristic algorithms, i.e., the transformation from $\mathcal{G}$ to $\mathcal{H}$, requires an initial step of finding the maximal hedge (Algorithm 3 in the appendix), which requires running $O(\vert V^\mathcal{G}\vert)$ depth-first searches in the worst case. After this step, both algorithms are linear-time in the number of edges and vertices of $\mathcal{G}$. The final step then is to solve the minimum cut in the graph $\mathcal{H}$.
The real bottleneck of both of these algorithms in terms of computational complexity is therefore the minimum-cut problem, for which there exist cubic-time (and even more efficient) algorithms. Therefore, a cubic-time complexity upper bound can be proved for the heuristics, and this bound is tight, for instance for a complete graph. We will include the complexity analysis in the revision. As for the implementation, we relied on the existing min-cut-max-flow algorithms in the scipy package, which use the Ford-Fulkerson algorithm by default. The transformation from $\mathcal{G}$ to $\mathcal{H}$ is also included in the code we have provided. --- > There is a lot of literature on non-identifiable queries. E.g., if the output is identifiable only in a very unlikely graph, how should we proceed? This is a valid point, and we agree with the reviewer. As mentioned in our conclusion, it is necessary for the practitioner to evaluate the assumptions they want to make. Assuming identifiability yields point estimates at the expense of the possibility of leaving us with the choice of a very unlikely graph, as the reviewer correctly states. The upside is that the practitioner would at least be aware of the fact that this structure is very unlikely and may choose to disregard the causal conclusions. Also, as pointed out by reviewer 'eheu,' one other possible option would be to consider partial identification of the effect and settle for bounds on the query of interest rather than point estimates, which can be thought of as a generalization of the work we presented here. Our main goal was to advocate for a probabilistic model that takes uncertainties on the structure into account, rather than assuming a certain structure is correct. We will discuss this matter further in the text. --- >Problems 1 and 2 are not necessarily the only/best ways to cope with uncertain causal networks. 
We consider this work as a first step towards bringing up the idea of incorporating the uncertainty on the structure into uncertainty about inference. While our work may not be the best possible approach, we believe it is a reasonable step forward. --- Rebuttal Comment 1.1: Title: Thanks for the clarification. I will keep my positive score. Comment: I thank for authors for their rebuttal and their detailed clarifications. I am happy to maintain my positive recommendation (weak accepts).
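For readers who want to reproduce the final min-cut step mentioned in this thread, a toy version using scipy's max-flow routine might look like the following; the graph and capacities are invented for illustration, and by the max-flow/min-cut theorem the returned flow value equals the minimum s-t cut capacity:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import maximum_flow

# Toy directed graph with integer edge capacities (e.g., edge-removal costs).
# Vertex 0 is the source s, vertex 3 is the sink t.
capacities = np.array([
    [0, 3, 2, 0],   # s -> 1 (cap 3), s -> 2 (cap 2)
    [0, 0, 1, 2],   # 1 -> 2 (cap 1), 1 -> t (cap 2)
    [0, 0, 0, 4],   # 2 -> t (cap 4)
    [0, 0, 0, 0],
], dtype=np.int32)  # scipy's maximum_flow expects integer capacities
graph = csr_matrix(capacities)

result = maximum_flow(graph, 0, 3)
print(result.flow_value)  # equals the minimum s-t cut capacity: 5
```

Here the cut separating {s} from the rest has capacity 3 + 2 = 5, matching the maximum flow, which illustrates why a polynomial min-cut solver suffices for this step of the heuristics.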
Summary: The paper considers a setting in which we have a probability distribution over causal graphs. This paper considers the problem of finding the most probable subgraph in which a given query is identified, and then the subgraph with the highest sum of probabilities of its own subgraphs in which the query is identified. (The latter problem is significant since the result of the identification must agree among all these subgraphs.) A reduction is given to a known NP-hard problem, for which various algorithms are evaluated empirically. Strengths: The paper is well-written and the results are clearly set out, extensive, and correct as far as I can tell. Weaknesses: I had a little bit of difficulty understanding the problem initially. I am currently a little dubious about the paper's ultimate significance, but if the authors can make the links to structure learning more explicit then it will represent an interesting synthesis with the other approach, in which the ADMG encapsulates "a priori" facts about the causal structure that are known for certain. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the authors point to any other literature besides citation [12] in which probability distributions over ADMGs are considered? Or provide motivating examples, perhaps involving statistical tests in structure learning. - Line 47: "to addresses" -> "to address" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes, they have (for instance, the assumption of independence between edges).
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable input. We are delighted that the reviewer has found our results clear, extensive, and sound. We address the main points and questions below. --- >Can the authors point to any other literature besides citation [12] in which probability distributions over ADMGs are considered? Or provide motivating examples perhaps involving statistical tests in structure learning. One compelling reference to consider is [1], where the authors use concepts of monotone triangular transport maps to define a score for the conditional independence of variables. In particular, these scores are interpreted as indicators of whether an edge is a 'strong' or 'weak' edge. Moreover, it is noteworthy that a considerable fraction of causal discovery algorithms relies on statistical tests, which inherently introduce uncertainty. These statistical approaches often involve hypothesis testing to assess the conditional independence of variables, which inherently involves probability distributions over graphs. We believe that this fact has simply been overlooked in the literature because it complicates the identification process and analysis. [1] Learning non-Gaussian graphical models via Hessian scores and triangular transport. R. Baptista, Y. Marzouk, R. E. Morrison, O. Zahm. --- >Line 47: "to addresses" -> "to address" We thank the reviewer for pointing out this typo and other minor comments. --- Rebuttal Comment 1.1: Comment: Thanks, this is a helpful response.
Summary: The paper studies the problem of causal identification in a setting in which the structure of the causal graph (of interest) is probabilistically uncertain, i.e., known only up to a certain degree of belief or with the confidence afforded by a particular statistical test, and asks for the most likely subgraph in which the causal effect is identifiable. In doing so, the paper reduces it to a combinatorial optimisation problem, i.e., edge ID (as they call it), whose computational complexity is shown to be NP-hard through a reduction from the minimum vertex cover problem. The authors also introduce exact and approximate algorithms and show empirically that the algorithms work well. Strengths: - Deep theoretical paper. - The theoretical results are complemented with good empirical evaluation. - Good exposition. Weaknesses: I did not spot any major weakness. Only a very few minor issues: - The causal query had better be formally introduced in *the introduction* (i.e., what it looks like formally). - "problem To be surmountable" (missing dot). - In the Figure 1 caption, also mention the longer version of Q[y] to help following it. - Definition 2.2: explain what you mean when you say functional (again, formally). - (potential typo): In Example 1, I sense the direction of the subset should be the other way around: Gˆ⊆G1. - Assumption 2.1: form (1) -> Equation 1. - "In the next [s]ection". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - What about the NP-completeness case? - Can the work of Eiter and Lukasiewicz (2001) be useful for, or connect to, further results in your setting? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: I did not see any particular limitation. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback. We also thank the reviewer for pointing out typos and minor issues. **Questions** >how about NP-completeness case? -Can the work of Eiter Lukasiewicz 2001 be any useful/connect for further results in your setting? The EdgeID problem is indeed NP-complete, since the ID algorithm of [1] can be used to validate an answer to EdgeID in polynomial time. We thank the reviewer for raising this question and we will include it in the revision. [1] Shpitser, Ilya, and Judea Pearl. "Identification of joint interventional distributions in recursive semi-Markovian causal models." Proceedings of the National Conference on Artificial Intelligence. AAAI Press, 2006. --- Rebuttal Comment 1.1: Comment: Thanks authors for their responses. I will keep my positive score.
null
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors consider the problem of causal effect identification assuming the given causal graph G is probabilistic, where each edge (directed or bi-directed) is associated with a measure of confidence. Under this setting, the objective is to identify a subgraph of G with the highest plausibility such that the target causal effect Q[Y] is identifiable. The authors propose an exact algorithm that is asymptotically exponential, two heuristic approaches, and a reduction to an NP-hard problem that has approximation algorithms. Finally, they evaluate the proposed methods on synthetic and real-world graphs. Strengths: The formulated problem of identifying the most plausible subgraph where a target effect is identifiable is algorithmically interesting, and the authors highlight its complexity by showing it to be NP-hard. Moreover, the paper is well-written, for the most part, with examples and discussions to illustrate the problem (i.e., Section 2.1). The experimental evaluation of the heuristic algorithms shows favourable performance. Weaknesses: 1- Motivation for the problem: Whenever the target causal effect is not identifiable in the full graph G, it is not justified why we would drop edges from G just to make the effect identifiable; an alternative to exact identification could be bounding the target effect. The authors can provide more motivation in the introduction to justify their approach. 2- Lack of performance guarantees for the heuristic approaches: There is no theoretical analysis for the heuristic algorithms discussed in Section 4.2 in terms of an upper bound on the cost. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1- "when $w^{th} = 0$, Algorithm 1 will output the optimal solution": Suppose we have G={Z -> X, X -> Y, X <--> Z, Z <--> Y}, $W_G$={$\infty, \infty, 10, 1$}, $w^{ub}=\infty$, and $w^{th}=0$. Running this example does not lead to any solution unless I'm mistaken. 
It should optimally return the subgraph after removing the bi-directed edge between Z and Y. Unless you mean that whenever $w^{th} = 0$, an alternative formulation of Algorithm 1 is run which discards the threshold and simply searches for the optimal solution. 2- Implication of Remark 2.1: Can you elaborate on the need for this assumption to only consider Q[Y] instead of general causal queries? Is it a necessary assumption for the problem to be defined, or are general queries more challenging? 3- Typos: + Line 68: period after "and open problem" + Line 224: (Line 12) --> (Line 13) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the constructive feedback of the reviewer. We will address the concerns raised in the weaknesses and questions sections. ----------------------------------------- **- Motivation & bounding the effect:** We agree that in some cases, relying on partial identification and bounding the target causal effect can be more justifiable than striving for exact identification. However, we aim to highlight the importance of taking the uncertainty in the structure into account in the context of causal identification. It is noteworthy that even in the context of partial identification, nontrivial bounds can only be achieved if at least some pairs of variables are assumed to be unconfounded. In particular, if every bidirected edge in our model has a positive probability, even partial identification would become subject to this probabilistic model, which is a nice generalization of our problem. We thank the reviewer for their suggestion and we will mention this direction in the revision as well. **- Performance guarantees for the heuristic approaches**: We acknowledge the lack of theoretical analysis for the heuristic algorithms discussed in Section 4.2. In average-case scenarios, the cost of the heuristic algorithms can be bounded in terms of the optimal cost and the parameter $p$ in a straightforward manner, which we will include in the revision. On the other hand, since both of these algorithms are polynomial-time, it is natural to expect their 'worst-case' performance to be at best log(n) times worse than that of the optimal solution. This can be justified based on [1] below. We will elaborate further in the revision. [1] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4): 634–652, 1998. 
----------------------------------------- **Questions** >1- "when $w^{th}=0$, Algorithm 1 will output the optimal solution": Suppose we have $G=\\{Z -> X, X -> Y, X <--> Z, Z <--> Y\\}, W_G=\\{ \infty,\infty,10,1\\}$, $w^{ub}=\infty$ and $w^{th}=0$. Running this example does not lead to any solution [...]. Running Algorithm 1 on the instance with the provided graph G and the given weights will return the optimal solution as follows: At the first iteration, the link between $Z$ and $Y$ is removed, which yields the optimal solution with cost 1, and this is stored as E_min. But since there could possibly be solutions with lower cost, the algorithm does not terminate here. Instead, this link is returned to the graph, but with infinite cost this time (see line 14). The next iteration finds the solution of removing the link between Z and X, but since it is not optimal, it is disregarded and this link is also updated to have infinite cost. Now the graph has 4 edges with infinite cost, and it returns the stored E_min at line 7 of the algorithm. >2- Implication of Remark 2.1: Can you elaborate on the need for this assumption to only consider Q[Y] instead of general causal queries? Is it a necessary assumption for the problem to be defined or are general queries more challenging? The assumption of considering Q[Y] in Remark 2.1 is not a necessary constraint for defining the problem. We appreciate the reviewer's observation, and indeed, the problem can be posed in a more general form without restricting the queries to Q[Y]. The general formulation of the problem would complicate the analysis, potentially invalidating our specific reduction to EdgeID, but the NP-hardness and Corollary 3.5 would remain intact. We will clarify this in the revision. >3- Typos: We thank the reviewer for pointing out the typos. We will correct them in the revised version of the paper.
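For intuition, the search loop the rebuttal walks through for $w^{th}=0$ can be sketched in toy form. This is a hypothetical simplification (single-edge removals only, and a stand-in `is_identifiable` oracle where the real check would run the ID algorithm of Shpitser and Pearl), not the paper's actual Algorithm 1:

```python
import math

def search(edges, weights, is_identifiable):
    """Toy sketch: repeatedly try removing the cheapest finite-weight edge;
    if the query becomes identifiable, record the cheapest such solution,
    then put the edge back with infinite cost so other candidates are tried."""
    w = dict(weights)
    best_set, best_cost = None, math.inf
    while any(math.isfinite(c) for c in w.values()):
        e = min((x for x in edges if math.isfinite(w[x])), key=lambda x: w[x])
        remaining = [x for x in edges if x != e]
        if is_identifiable(remaining) and w[e] < best_cost:
            best_set, best_cost = remaining, w[e]
        w[e] = math.inf  # edge returned to the graph with infinite cost
    return best_set, best_cost

# Reviewer's example: removing the Z<->Y bidirected edge (cost 1) suffices.
edges = ["Z->X", "X->Y", "X<->Z", "Z<->Y"]
weights = {"Z->X": math.inf, "X->Y": math.inf, "X<->Z": 10, "Z<->Y": 1}
oracle = lambda remaining: "Z<->Y" not in remaining  # stub identifiability check
best_set, best_cost = search(edges, weights, oracle)
```

On this instance the loop first removes Z<->Y (cost 1, stored as the best solution), then tries X<->Z (not identifiable under the stub oracle), and finally returns the stored solution, mirroring the trace in the rebuttal.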
null
null
null
null
null
null
FlatMatch: Bridging Labeled Data and Unlabeled Data with Cross-Sharpness for Semi-Supervised Learning
Accept (poster)
Summary: Inspired by the Sharpness-Aware Minimization (SAM) technique, this paper proposes a new semi-supervised learning method, FlatMatch. The main idea is in Eqn.(4), where FlatMatch picks both $\theta$ (the current weight parameters) and $\tilde{\theta}$ (the parameters close to $\theta$ that maximize the loss on the labeled dataset), and then optimizes $\theta$ so that both $\theta$ and $\tilde{\theta}$ agree on the unlabeled dataset. The authors also propose an efficient version of FlatMatch called FlatMatch-e, which saves some computation. In experiments, the authors demonstrate that their method FlatMatch with the fix label trick outperforms all the existing methods. Strengths: I think it is an interesting idea to use SAM-like techniques for semi-supervised learning. The presentation of this paper is fairly good, so I find the paper very easy to follow. The experiment section is fairly comprehensive. Originality: good, because it is my first time hearing of this technique being used to solve the problem. Quality: poor, as I will mention below. Significance: fair. If the authors can improve the experiments to validate the real improvement of their algorithm, I think it would be a nice contribution to semi-supervised learning. Weaknesses: I think the experimental setting, the FlatMatch (fix label) part, has a serious issue. Specifically, I think the main motivation of semi-supervised learning is to consider the case where the labeled dataset is fairly small and the unlabeled dataset is huge. In that case, we hope to combine the information of both labeled and unlabeled data to improve the performance of the model. However, in their setting, they mention that they pretrain a common SSL method, such as FixMatch, on the dataset, select high-confidence unlabeled data with pseudo labels, and **convert them into labeled data. After that, they use this "expanded labeled dataset" for their FlatMatch training**. 
Interestingly, although they claim they are only "slightly augmenting the number of labels", they in fact add a large set of labels: 500, 2500, 500, 5000 for CIFAR10, CIFAR100, SVHN, STL10, respectively. Without this "fix label" trick, their method is not as competitive as the existing methods. I personally feel that this is not a fair comparison. If the "fix label" trick is indeed helpful, I think the authors should at least compare FlatMatch with the existing methods with fix label as well. If all the other methods show no improvement after using the fix label trick, then FlatMatch (fix label) will indeed look promising. Technical Quality: 1 poor Clarity: 3 good Questions for Authors: I only have one question: can you improve Table 1 to include the cases for other methods with the fix label trick? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 1 poor Presentation: 3 good Contribution: 2 fair Limitations: I think limitation is not a main issue for this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: R5: We thank Reviewer 9MiC for your feedback. Here we justify that our Fix label setting is a common SSL technique which is reasonably designed in our method. *Q1*: **Whether our Fix label setting violates the motivation of SSL.** *A1*: We want to stress that transforming unlabeled data to labeled data is not a violation of SSL. * The proposed fix label strategy is essentially a well-known SSL technique: Pseudo label, which also selects highly confident examples as labeled data. * So, the Fix label setting can be considered as combining FlatMatch with Pseudo label, which just takes advantage of an SSL technique to solve the SSL problem. *Q2*: **Claim of ''slightly augmenting the number of labels''.** *A2*: The chosen number of labels in every setting is less than or equal to the largest ''# Label'' value in our experimental setting. Therefore, all of the expanded labels amount to at most 10% of the total number of training data. **Note:** In particular, we apologize: the number 5000 reported for STL10 is a typo; the actual number of labels is **1000**. *Q3*: **Limited performance of FlatMatch without fix label trick.** *A3*: We want to stress that the proposed FlatMatch method shows non-trivial performance improvement on most of the settings among various datasets. * For example, FlatMatch achieves the best results on most of the settings, as listed below. | Dataset | CIFAR10 | CIFAR10 | CIFAR10 | CIFAR10 | CIFAR100 | CIFAR100 | CIFAR100 | SVHN | SVHN | SVHN | STL10 | STL10 | |----------|---------|---------|---------|---------|----------|----------|----------|------|------|------|-------|-------| | # labels | 10 | 40 | 250 | 4000 | 400 | 2500 | 10000 | 40 | 250 | 1000 | 40 | 1000 | | Best? | | | &check; | &check; | | &check; | &check; | | &check; | &check; | | &check; | * Additionally, we have conducted additional experiments on the ImageNet dataset and the ViT backbone and found that FlatMatch can still surpass strong baseline methods such as FreeMatch and FlexMatch. For details, please see the general response. *Q4*: **Comparison to other SSL methods with Fix label for fairness.** *A4*: Thanks for the helpful advice, we have conducted experiments on several SSL methods with the Fix label strategy on the small-scale dataset CIFAR10 and the large-scale dataset ImageNet, and we show the results below: | Dataset | CIFAR10 | CIFAR10 | CIFAR10 | CIFAR10 | |-----------------------|:------------------------------------:|:------------------------------------:|:------------------------------------:|:------------------------------------:| | # Label | 10 | 40 | 250 | 4000 | | FlexMatch (Fix label) | 14.24&pm;0.25 | 6.02&pm;0.73 | 5.36&pm;0.34 | 4.92&pm;0.29 | | FreeMatch (Fix label) | 9.39&pm;3.72 | 6.35&pm;2.30 | 5.34&pm;1.21 | 4.66&pm;0.53 | | FlatMatch (Fix label) | **7.36**&pm;5.62 | **4.28**&pm;1.61 | **3.90**&pm;1.72 | **3.55**&pm;0.64 | | Dataset | ImageNet | |-----------------------|:--------------:| | # Label | 100 per class | | FlexMatch (Fix label) | 43.06 | | FreeMatch (Fix label) | 41.25 | | FlatMatch (Fix label) | **38.01** | Note that the Fix label setting seems to only favor FlatMatch over other SSL methods, but there are reasonable explanations: Our FlatMatch only uses the expanded labeled data to compute the gradient, and our risk minimization is still conducted on the original labeled dataset and unlabeled dataset. As a result, there are two benefits: * The gradient is diversified by more labeled data, which makes the worst-case model vulnerable to a wide range of examples. In turn, our sharpness minimization can effectively make up for the vulnerability. 
* Our risk minimization is still based on the original dataset and does not introduce too many labels in the early stage of training. As a result, we can avoid introducing noisy labels. On the other hand, if the expanded labels are used directly for risk minimization in SSL, as shown by the baseline results, they do not bring any performance improvement. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. 1. I understand pseudo labels, but I think directly setting them as the "real labels" during training is a bit strange. 2. Even if your expanded labels are less than 10% of the dataset, they are much more numerous than the true labels (e.g., 10, 40, 250, etc.). 3. I observe that your algorithm works worse in the case when the number of true labels is small. However, I personally feel that that is the setting where semi-supervised learning is most attractive. 4. Thank you for the additional experiments! The results look strange, in the sense that the performance of other algorithms is worse when using this fix label trick, comparing this table with Table 1 in the paper. Therefore there are essentially two questions that I have in mind: 1. Your algorithm works worse when there are fewer labels. Is this a problem for a semi-supervised learning algorithm? 2. Your fix label trick is a bit strange; it seems that it brings worse performance to other algorithms. If you can have a detailed discussion on this point, explaining why this trick is particularly useful for your method, that would be great. Therefore, I will still maintain my score right now. --- Reply to Comment 1.1.1: Title: Further Comments Comment: Dear Reviewer 9MiC: We appreciate your prompt reply, **1. The strange setting for the Fix label:** Here we would like to clarify that we do not leverage the expanded labels for risk minimization. The confusion may stem from the misunderstanding that we simply create "real labels". 
In fact, we only use them to compute the gradients that are utilized to perturb the model, which can stabilize the sharpness minimization process. **2&3. Performance on barely-supervised learning:** Indeed, the capability of SSL can be manifested with extremely scarce labels, but we respectfully disagree that an SSL method should be judged only by its performance in an extremely-limited label setting. On the other hand, as many powerful large-scale models and pre-training techniques are emerging, we think generalization performance is one promising direction to promote the development of SSL, and therefore we leverage SAM optimization to assist SSL. **4. The performance drop of SSL on the Fix label setting:** The results address your concern about creating "real labels" for SSL. They show that directly using the "real labels" for risk minimization, as in the misunderstanding above, is harmful. Therefore, the results are actually reasonable. Thanks again for your discussion, we hope to hear your further opinions soon. Kind regards, Authors.
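As a loose illustration of the mechanism described above (labeled data, including any expanded labels, used only to find a SAM-style worst-case perturbation, with a cross-sharpness penalty tying the perturbed and current models together on unlabeled data), here is a toy NumPy sketch. The logistic model, function names, and hyperparameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def labeled_loss(w, X, y):
    # cross-entropy on the (possibly expanded) labeled set
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def labeled_grad(w, X, y):
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def cross_sharpness(w, w_adv, X_u):
    # prediction disagreement between current and worst-case models on unlabeled data
    return np.mean((sigmoid(X_u @ w) - sigmoid(X_u @ w_adv)) ** 2)

def flatmatch_step(w, X_l, y_l, X_u, rho=0.05, lr=0.1, lam=1.0, eps=1e-4):
    # SAM-style ascent: labeled gradient only perturbs the model
    g = labeled_grad(w, X_l, y_l)
    w_adv = w + rho * g / (np.linalg.norm(g) + 1e-12)
    # numeric gradient of labeled risk + cross-sharpness penalty w.r.t. w
    f = lambda v: labeled_loss(v, X_l, y_l) + lam * cross_sharpness(v, w_adv, X_u)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        grad[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return w - lr * grad

rng = np.random.default_rng(0)
X_l = rng.normal(size=(8, 3))
y_l = (X_l[:, 0] > 0).astype(float)   # tiny labeled set
X_u = rng.normal(size=(32, 3))        # larger unlabeled set
w = np.zeros(3)
for _ in range(50):
    w = flatmatch_step(w, X_l, y_l, X_u)
```

The key design point mirrored from the rebuttal is that `w_adv` is computed only from the labeled gradient; the unlabeled data enter solely through the cross-sharpness penalty, so no expanded labels are used for risk minimization.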
Summary: The paper focuses on the problem of semi-supervised learning (SSL). The authors first study the loss landscapes of labeled data and unlabeled data and find a generalization mismatch. Based on these findings, they propose FlatMatch to encourage consistent learning performance between labeled data and unlabeled data. Extensive experiments demonstrate the effectiveness of their method. Strengths: - To the best of my knowledge, this paper is the first to introduce sharpness-aware minimization into SSL, so the work is novel. - The experiments are wide-ranging and cover several different datasets. The experimental analysis is also relatively comprehensive. - The paper is well written and easy to follow. Weaknesses: 1. The analysis of the loss landscape is somewhat unclear and contradictory. - The paper states in lines 42-43: The learning on scarce labeled data converges faster with lower errors than on unlabeled data. However, in Figure 1, it can only be observed that the loss landscape of labeled data is sharper at both epoch 60 and epoch 150. If one wants to compare convergence speed, it may be more intuitive to compare the change in loss over time. - The paper states in lines 44-46: the abundant unlabeled data…generalizing better than labeled data. However, the accuracy on labeled data is better than that on unlabeled data in Figure 1 (right). - The paper states that using slight data augmentation to plot the loss landscape is common in SSL. It is recommended to add relevant references to justify the drawing method. Based on the above questions, the reasonableness of the two critical flaws proposed in the paper needs further clarification. 2. The paper states in lines 8-9 that FlatMatch minimizes cross-sharpness measures to ensure consistent learning performance between the two datasets. But further verification is lacking. Does the cross-sharpness measure achieve consistent learning performance? If yes, why is it possible? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the function $l_d$ used in the experiment? 2. For "sharpness on $D^{l}$ ($D^{u}$)" in Table 2, is $D^{u}$ (resp. $D^{l}$) used? If yes, how is it used? If not, it is recommended to compare the performance of using SAM on the full dataset ($D^{l} \cup D^{u}$). 3. The significance of this work is somewhat limited, as the focus is only on visual datasets. The effectiveness and practicality of the method would be further improved if results from more types of datasets (such as text classification) can be provided. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors point out the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: R4: We thank Reviewer Z9Pn for your helpful feedback. We have carefully addressed all your concerns as follows. *Q1*: **Why faster convergence leads to sharper loss curves and how to compare the convergence speed between labeled data and unlabeled data.** *A1*: Good point. * Here we clarify that convergence does not lead to flatness: converged accuracy or loss can only indicate a first-order property. However, flatness stands for the relative loss under parameter change; it is a second-order statistic. When the labeled data lie in sharp minima, the sharpness can be large even though the loss is small. Therefore, our claim does not contradict our empirical findings. * To illustrate the convergence of loss curves, we show the loss values from both FlatMatch and FixMatch during training. Moreover, we subtract the loss value of unlabeled data from the loss of labeled data to compute a loss difference, which gives an illustration of the performance difference between the two datasets. * Firstly, the loss value of labeled data quickly converges to zero and is significantly smaller than that of unlabeled data. This phenomenon occurs in both methods, which supports our claim that learning on labeled data converges faster than on unlabeled data. * Secondly, the loss difference between the two datasets for FlatMatch is significantly smaller than for FixMatch, which indicates that our FlatMatch can alleviate the unmatched convergence speed of the two datasets and helps decrease the loss value of unlabeled data. * Moreover, to understand how FlatMatch affects the sharpness curve, we compare the sharpness between labeled data and unlabeled data using FixMatch and FlatMatch. Similarly, we also compute a sharpness difference. The results are shown in Fig. 4 in the rebuttal pdf. * We can see that the sharpness of FlatMatch on both datasets is smaller and more stable than that of FixMatch. 
* Moreover, the sharpness difference of FlatMatch is obviously smaller than that of FixMatch. Hence, we can conclude that FlatMatch successfully alleviates the flatness gap between the two datasets, further leading to consistent learning performance. *Q2*: **Why the better generalization performance of unlabeled data shows inferior accuracy to labeled data.** *A2*: Similar to the last concern, we want to stress that generalization performance is not equivalent to training accuracy. * As shown in Fig. 1 in the main paper, we can see that the loss curve of unlabeled data is smoother than that of labeled data, because the amount of unlabeled data is much larger, thus containing abundant knowledge which benefits generalization to unseen data. * However, as the supervision is absent, it is reasonable that the model is less accurate on unlabeled data than on labeled data. *Q3*: **Justification of using data augmentation for generating the loss landscape.** *A3*: Our choice of using data augmentation to demonstrate generalization performance is well-supported by existing studies. Engstrom et al. show that when training data is augmented, the loss landscape can be significantly non-concave [D1], thus exposing the limited generalization performance on slightly changed data. Moreover, Gotmare et al. [D2] also use data augmentation when plotting loss curves. [D1] Engstrom et al., Exploring the Landscape of Spatial Robustness, in ICML 2019. [D2] Gotmare et al., Using Mode Connectivity for Loss Landscape Analysis, in ICMLW 2018. *Q4*: **Verification of ensuring consistent learning performance between two datasets.** *A4*: Two major findings show that our FlatMatch can benefit consistent learning performance between labeled data and unlabeled data: * The flatness on labeled data has been largely improved, leading to low sharpness on both labeled and unlabeled datasets. In Fig. 
4 in the main paper, we have illustrated that our FlatMatch can smoothen the loss curves of labeled data by leveraging cross-sharpness minimization. As a result, the generalization performance on labeled data is improved, which is supported by many studies regarding flatness [D3, D4, D5]. * The error rate and sharpness gap between the two datasets have been effectively decreased. As shown in Figs. 3 and 4 in the rebuttal pdf, our FlatMatch can benefit both loss and sharpness minimization of unlabeled data compared to FixMatch. Therefore, the performance gap between the two datasets is effectively alleviated. Based on the above reasons, our FlatMatch can benefit consistent learning. [D3] Neyshabur et al., Exploring generalization in deep learning, in NeurIPS 2017. [D4] Foret et al., Sharpness-Aware Minimization for Efficiently Improving Generalization, in ICLR 2021. [D5] Chaudhari et al., Entropy-sgd: Biasing gradient descent into wide valleys, in J. Stat. Mech. 2019. *Minor*: **1) function $\ell_d$; 2) ablation study on sharpness with the full dataset; and 3) requirements on language/text tasks.** *A*: * In line 154, we have stated that ''the function $\ell_{d}(\cdot)$ denotes the loss criterion for unlabeled data''. * We did not use $D^{u}$ (or $D^{l}$) when sharpness is computed on $D^{l}$ (or $D^{u}$). Thanks to the advice, we have conducted an ablation study on the full dataset as shown in Tab. 2 in the rebuttal pdf. We find that leveraging the sharpness of the full dataset cannot surpass the performance when only using labeled data; this could be caused by the inaccurate gradient signal from unlabeled data. * We follow the most common SSL learning process in the community to evaluate our method. However, adapting to text classification would require substantial work, such as reformulating the data augmentation, re-designing the loss objectives, and re-training all the baseline methods. 
Hence, it is too challenging to fulfill this heavy requirement in such a short time. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. After reading the rebuttal, most of my concern has been addressed. The figures provided in the rebuttal pdf are clearer and better illustrate the motivation than those provided in the main text, and I strongly recommend that the authors revise the paper. Furthermore, despite the empirical observation that flatness measures are consistently strongly correlated with generalization, why and under what circumstances flatness is correlated is still an open theoretical question, so I would not advise the authors to conflate the two (like Q2). Overall, I decide to raise my rating from 4 to 5. --- Reply to Comment 1.1.1: Title: Reply Comment: Dear Reviewer Z9Pn: Thanks for your valuable advice and quick response, There would be three major modifications in our future versions: * Detailed discussions between generalization performance and learning accuracy will be added. We will further supplement our empirical justification of flatness consistency between labeled and unlabeled sets. * The motivation of our Fix label setting will be carefully demonstrated and how it is conducted will be summarized in an Algorithm. * The formulation of our equations and unclear descriptions will be carefully improved. It is really our pleasure to address all your questions and we sincerely appreciate your raising your score. Best, Authors. --- Rebuttal 2: Title: Further Discussion Comment: Dear Reviewer Z9Pn: We really appreciate your constructive opinions that helped us improve this paper. If there are any concerns unresolved, we would be glad to have further discussions. Thanks again for your time, looking forward to hearing from you soon. Best, Authors.
Summary: This paper focuses on Semi-supervised Learning (SSL) where the training data consists of scarce labeled data and a massive amount of unlabeled data. The authors argue that the propagation of label guidance from labeled data to unlabeled data is challenging. This can cause the learning process on labeled data to progress much faster than on unlabeled data, leading to sub-optimal generalization performance. To alleviate this issue, the authors propose FlatMatch. This method seeks to increase the empirical risk on labeled data to obtain a worst-case model and then penalize the prediction difference (i.e., cross-sharpness) between the worst-case model and the original model, which benefits the final model. Strengths: 1. This paper is well-written and easy to follow. The motivation is clear, and the unlabeled loss is indeed always larger than the labeled loss in the SSL setting. The idea is interesting. 2. The experiments on CIFAR, SVHN, and STL-10 demonstrate the effectiveness of FlatMatch. 3. The original FlatMatch algorithm is computationally complex, which makes it difficult to run FlatMatch on large-scale datasets like ImageNet. However, the authors propose an efficient version of FlatMatch to make the runtime more manageable. Weaknesses: 1. Although FlatMatch performs well on smaller datasets like CIFAR, SVHN, and STL-10, its performance on large-scale datasets remains unknown. Experiments on ImageNet are necessary to prove the generalization capabilities of the proposed method. 2. The authors train wide-resnets from scratch. However, many recent works have conducted SSL research using Vision Transformers [1,2], which can greatly improve the performance of SSL algorithms in many settings. The performance of FlatMatch when Vision Transformers are used as the backbone remains unknown. 3. While FlatMatch can achieve high performance, it remains computationally complex. 
The running time of the efficient version of FlatMatch is manageable, but its performance cannot surpass that of SOTA SSL algorithms. [1] Cai, Zhaowei, et al. "Semi-supervised vision transformers at scale." Advances in Neural Information Processing Systems 35 (2022): 25697-25710. [2] Wang, Yidong, et al. "Usb: A unified semi-supervised learning benchmark for classification." Advances in Neural Information Processing Systems 35 (2022): 3938-3961. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The performance of FlatMatch on large-scale datasets remains unverified, and the computational complexity is high to achieve strong performance. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R3**: We thank Reviewer yYJy for your helpful comments. Here we carefully conducted experiments on large-scale datasets (ImageNet) as well as sophisticated models (ViT) and justified the efficiency and effectiveness of FlatMatch. *Q1&Q2*: **Performance on the ImageNet dataset and the Vision Transformer architecture.** *A1*: Please see the general response. *Q3*: **Limited efficiency of FlatMatch and limited effectiveness of FlatMatch-e.** *A3*: We would like to note that requiring both efficiency and effectiveness at the same time is quite demanding for any method. * The proposed FlatMatch method shows non-trivial performance improvement in most of the settings across various datasets. Although efficiency is one limitation, we propose an efficient variant that significantly reduces the computational cost. * Empirically, our FlatMatch-e takes 0.29 sec/iter, compared to 0.28 sec/iter for FreeMatch; * Meanwhile, FlatMatch-e brings a 2.67% performance improvement on CIFAR100 with 10000 labels, and it is much faster than FlexMatch (0.34 sec/iter). * Regarding effectiveness, we want to clarify that the proposed FlatMatch-e can still surpass existing SSL methods in most of the settings. * In the main result, FlatMatch-e surpasses existing SSL methods in the following settings: | Dataset | CIFAR10 | CIFAR10 | CIFAR10 | CIFAR10 | CIFAR100 | CIFAR100 | CIFAR100 | SVHN | SVHN | SVHN | STL10 | STL10 | |----------|---------|---------|---------|---------|----------|----------|----------|------|------|------|-------|-------| | # labels | 10 | 40 | 250 | 4000 | 400 | 2500 | 10000 | 40 | 250 | 1000 | 40 | 1000 | | Best? | | | &check; | &check; | | &check; | &check; | | &check; | &check; | | &check; | * Moreover, as the additional experiments on the ImageNet dataset and ViT backbone show, FlatMatch-e can still surpass strong baseline methods such as FreeMatch and FlexMatch.
--- Rebuttal 2: Title: Further Discussion Comment: Dear Reviewer yYJy: We want to express our appreciation for your valuable suggestions, which greatly helped us improve the quality of this paper. We are also glad that you agreed that our idea is novel and the performance is effective. We have made our best effort to address your concerns regarding large-scale datasets and ViT models. Your further opinions are very important for evaluating our revised paper, and we hope to hear from you. Thank you so much. Best, Authors. --- Rebuttal 3: Title: Remaining Concerns Comment: Dear Reviewer yYJy: We thank you again for your valuable time in reviewing this paper; your constructive advice is really helpful. By carefully addressing all your concerns, this paper has been improved in its scale of application and in its validation of efficiency and effectiveness. We hope to know whether our rebuttal resolves your concerns. Since the NeurIPS conference supports interactive discussion, we hope we can have the chance to further polish our work. Thanks again for your previous help; we hope to hear from you soon! Best, Authors. --- Rebuttal 4: Title: Rebuttal Recap Comment: Dear Reviewer yYJy: We again thank you for reviewing our paper. Here we provide a recap to help you revisit our rebuttal. Your initial concerns can be summarized into two points: * Lack of experiments on the ViT model and ImageNet. * The effectiveness and efficiency are limited. Our rebuttal carefully addresses these concerns by: * Conducting experiments on ViT and ImageNet, which re-validated the performance of FlatMatch. * Carefully discussing effectiveness (FlatMatch achieves the best performance in most cases) and justifying efficiency (our FlatMatch-e achieves higher performance with lower computational cost). It would mean a lot to us if you could kindly re-read our rebuttal and raise further questions if any remain.
Thank you so much, and we hope you have a good day. Best, Authors.
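The cross-sharpness idea this review thread discusses — ascend the empirical risk on labeled data to obtain a worst-case model, then penalize its prediction disagreement with the original model on unlabeled data — can be sketched on a toy problem. The following is a minimal numpy illustration of our reading of that description, not the paper's implementation; the logistic model, the finite-difference gradients, and the hyperparameters `rho` and `lam` are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def labeled_loss(w, X, y):
    # binary cross-entropy on the scarce labeled set
    p = sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def num_grad(f, w, eps=1e-5):
    # central finite differences; fine for a 3-parameter toy model
    g = np.zeros_like(w)
    for i in range(w.size):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

def cross_sharpness(w, Xl, yl, Xu, rho=0.5):
    # worst-case model: one normalized ascent step on the LABELED loss
    g = num_grad(lambda v: labeled_loss(v, Xl, yl), w)
    w_adv = w + rho * g / (np.linalg.norm(g) + 1e-12)
    # penalize prediction disagreement on the UNLABELED data
    return np.mean((sigmoid(Xu @ w_adv) - sigmoid(Xu @ w)) ** 2)

# toy data: two well-separated blobs; 4 labeled points, 100 unlabeled
Xl = np.array([[2.0, 2.0, 1.0], [2.5, 1.5, 1.0],
               [-2.0, -2.0, 1.0], [-1.5, -2.5, 1.0]])
yl = np.array([1.0, 1.0, 0.0, 0.0])
yu = rng.integers(0, 2, 100)                      # held-out truth, unseen by training
Xu = rng.normal(0.0, 0.5, (100, 3)) + np.where(yu[:, None] == 1, 2.0, -2.0)
Xu[:, 2] = 1.0                                    # bias feature

lam = 1.0                                         # cross-sharpness weight (illustrative)
total = lambda v: labeled_loss(v, Xl, yl) + lam * cross_sharpness(v, Xl, yl, Xu)

w = np.zeros(3)
for _ in range(100):
    w = w - 0.5 * num_grad(total, w)

acc = np.mean((sigmoid(Xu @ w) > 0.5) == (yu == 1))
```

On this separable toy set the labeled loss drops quickly while the disagreement penalty keeps the worst-case and original predictions close on unlabeled points; the real method applies the same idea to deep networks with analytic gradients.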
Summary: The authors propose a semi-supervised learning (SSL) method that applies the sharpness-aware minimization (SAM) method for consistency regularization. Unlike FixMatch, it does not apply perturbation to the unlabeled training sample but evaluates the consistency between posterior distributions of predicted labels between neural networks with original and perturbed parameters. Experimental results show that the proposed method outperforms existing SSL methods with the help of initialization pre-processing to increase labeled samples when their numbers are small. Strengths: The paper proposes a new SSL algorithm that shows consistently improved performance. Weaknesses: The authors should show results of existing SSL methods that use SAM as the optimizer instead of SGD. The equations contain errors, mixing the definition of the objective function with its derivatives. For example, the second term in the first line of equation (2) seeks the optimal \theta that minimizes SAM's objective, but the third term is the derivative of the objective function. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Equation (4) explains the proposed method. Is min missing in the second term? Similarly, min is missing in the second and the third terms in equation (3). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The authors explain the condition where the proposed method is weak when they use it alone. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R2**: We thank Reviewer Y5aq for your constructive comments. We have carefully included several typical baselines with the SAM optimizer and reformulated our objectives, derivatives, and optimization. *Q1*: **Comparison to existing SSL methods with the SAM optimizer.** *A1*: Thanks for the advice. Here we conduct experiments by applying SAM to several SSL baseline methods such as FreeMatch, FlexMatch, and FixMatch on CIFAR100 with a varied number of labels. The results are shown below: | Dataset | CIFAR100 | CIFAR100 | CIFAR100 | |---------------|:---------:|:---------:|:---------:| | # Label | 400 | 2500 | 10000 | | FixMatch | 46.42 | 28.03 | 22.20 | | FixMatch-SAM | 47.59 | 28.62 | 22.67 | | FlexMatch | 39.94 | 26.49 | 21.90 | | FlexMatch-SAM | 39.89 | 27.05 | 22.38 | | FreeMatch | 37.98 | 26.47 | 21.68 | | FreeMatch-SAM | 39.02 | 27.64 | 22.55 | | FlatMatch | **38.76** | **24.28** | **18.21** | * We find that such a vanilla combination of SAM and SSL methods does not bring further performance improvement. As we demonstrated in the gradient angle analysis in Fig. 3 in the main paper, the gradient can contain considerable noise, especially when many unlabeled data are incorporated. As a result, the SAM optimization might be misled by the uncertainty of unlabeled data, thus causing performance drops. * Such an issue is also found in Cha et al. [B1], where SAM is not beneficial to generalization across datasets. [B1] Cha et al., Swad: Domain generalization by seeking flat minima, in NeurIPS 2021. *Q2&Q3*: **Mixing the objective function and its derivatives. Missing the optimization notation ''min''.** *A2&A3*: Thanks for pointing this out. We have rigorously improved our formulation by clarifying the differentiation and by stating the optimization objective explicitly. --- Rebuttal 2: Title: Further Discussion Comment: Dear Reviewer Y5aq: We really appreciate your efforts to help improve this paper.
We have carefully addressed the mentioned concerns, such as the experiments on SAM and the formulation problems. Having further discussions really helps to achieve consensus and clarify misunderstandings, so we are eager to know if there are any remaining problems. We will try our best to address them. Best, Authors. --- Rebuttal 3: Title: Discussion for Remaining Concerns Comment: Dear Reviewer Y5aq: We appreciate your opinions on our paper. We have tried our best to address your concerns, but it has been a while since your initial comments. During our discussions with other reviewers, some misunderstandings have been clarified, which helps to make a proper judgment of our paper. Since your opinion of our paper is still leaning negative, we assume there are still several concerns remaining. However, it would be unfortunate if some of the concerns are based on misunderstandings. We hope to avoid this situation and wish you could share your further opinions soon. Thanks again for your valuable time; we wish you a good day. Best, Authors. --- Rebuttal 4: Title: Request for Further Discussion Comment: Dear Reviewer Y5aq, It has been a while since we heard from you; we understand that you might be very busy. However, we have put plenty of effort into this paper as well as into addressing your concerns. It would be much appreciated if you could kindly spend 5 to 10 minutes going through our response and tell us if there are any remaining questions. We really thank you for reviewing our paper and hope to hear from you soon! Best, Authors. --- Rebuttal 5: Title: Rebuttal Recap Comment: Dear Reviewer Y5aq: We hope this letter finds you well. As it has been a while since you read our paper, we **recap** your initial comments as well as our improvements to make sure that nothing is forgotten. Your major concerns can be summarized into two points: * Lack of comparisons to baseline methods with SAM optimizers.
* Some equations are not rigorously formulated. To address these concerns, we have: * Validated our method through a fair comparison with SAM optimizers. * Carefully reformulated all the equations. Since you rate our soundness and presentation as ''fair'' and even rate our contribution as **''good''**, we are confused as to why your rating remains negative. To give a proper judgment of this paper, we sincerely hope you can provide further opinions soon. Thank you so much! Best, Authors.
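For reference, the `-SAM` baselines discussed in this thread follow the standard sharpness-aware minimization update: one normalized gradient-ascent step to a worst-case point within an L2 ball of radius `rho`, then a descent step using the gradient evaluated there. A minimal numpy sketch on a toy quadratic loss (`rho` and `lr` are illustrative values, not those used in the paper):

```python
import numpy as np

def loss(w):
    # toy quadratic loss standing in for the SSL training objective
    return 0.5 * np.sum((w - 1.0) ** 2)

def grad(w):
    return w - 1.0

def sam_step(w, rho=0.05, lr=0.1):
    g = grad(w)
    # ascend to the worst-case point within an L2 ball of radius rho
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_sharp = grad(w + eps)      # gradient at the perturbed weights
    return w - lr * g_sharp      # descend using the sharpness-aware gradient

w = np.array([3.0, -2.0])
for _ in range(200):
    w = sam_step(w)
```

The extra forward/backward pass at the perturbed point is exactly why the `-SAM` variants above cost roughly twice as much per iteration as their base methods.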
Rebuttal 1: Rebuttal: ***General Response***: We thank the reviewers for their insightful and constructive reviews of our manuscript. We are delighted that the reviewers found that: * Our idea is **novel, interesting, and well-motivated**. (Reviewers 9rMs, Z9Pn, 9MiC) * The presentation of our paper is **clear and easy to follow**. (Reviewers 9rMs, yYJy, Z9Pn, and 9MiC) * Our experiments are comprehensive and show **effective or efficient** performance. (Reviewers 9rMs, Y5aq, yYJy, Z9Pn, and 9MiC) Based on all the comments from the Reviewers, here we provide a general response to the concerns raised by multiple reviewers. The individual responses are posted below each review. * **Regarding extension to large-scale datasets and sophisticated architectures.** * Although we conducted experiments on ImageNet30 in the initial appendix, here we follow the advice to validate the performance of FlatMatch on full ImageNet: we use only 100 labels per class and provide the test error results of FlatMatch, FlatMatch-e, FixMatch, FlexMatch, and FreeMatch. Additionally, we consider a sophisticated architecture, Vision Transformer (ViT), as the training backbone and compare the proposed methods with several SSL methods. The results are shown below: | Dataset | Architecture | FlatMatch | FlatMatch-e | FreeMatch | FlexMatch | FixMatch | |----------|------------------|:---------:|:-----------:|:---------:|:---------:|:--------:| | ImageNet | Wide-ResNet-28-2 | **38.70** | 39.92 | 40.57 | 41.95 | 43.66 | | ImageNet | ViT-Base (86M) | **21.57** | 22.07 | 23.55 | 23.78 | 25.52 | * We can see that on large-scale datasets such as ImageNet, the performance of both FlatMatch and FlatMatch-e remains superior to the other methods. * Moreover, we can still observe the effectiveness of both of our methods. Therefore, it is reasonable to conclude that the performance of FlatMatch extends to large-scale datasets and sophisticated architectures.
* **Regarding convergence analysis on accuracy, loss, and sharpness.** * We have carefully conducted experiments to analyze the convergence of the accuracy and loss curves. * We also demonstrated how FlatMatch affects the sharpness of labeled and unlabeled data during training. * **Fair comparisons with other SSL methods under the same optimization and experimental settings.** * For the optimization strategy, we have carefully compared our methods with several SSL methods implemented with SAM to justify our design choice. * For fairness of the experimental setting, we have applied the Fix-label setting to several SSL methods and compared their performance with our FlatMatch. For other detailed concerns, we have tried our best to fully address every single concern raised by each reviewer. Thanks to all reviewers for your time in reviewing this paper; your help and support have greatly improved this manuscript. Pdf: /pdf/1d6a14a65e07d1d74cca3d71d60305af8fd455d0.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduced FlatMatch, a new SSL method which encourages consistent learning performance between labeled and unlabeled data and aims to boost the performance of SSL methods without being limited by insufficient label information. The paper analyzes the loss landscapes of labeled and unlabeled data and proposes to use a worst-case model alongside the original SSL model to achieve agreement on unlabeled data through a novel cross-sharpness regularization method. The approach is tested on 4 computer vision datasets from image classification and obtains state-of-the-art results. Strengths: - The presentation of the approach is clear and concise. Overall, this paper is very well written. - First, I want to say that the idea is very interesting. SSL models usually obtain perfect accuracy on labeled data very fast, and it is intuitive that at this point there is not much information gain. This paper addresses this drawback in a very clever way. - The performance of FlatMatch is impressive, obtaining considerable improvements especially in “harder” datasets where there was room for improvement over prior approaches. - Additionally, extensive analyses are performed, which offer valuable insights and help further understand the approach. Weaknesses: - A notable weakness is the absence of experiments on large-scale datasets (ImageNet?). It’s clear that FlatMatch performs well on “harder” datasets; however, CIFAR-100 and STL are still small-scale. I believe ImageNet experiments are vital since it’s the closest to a real-world setup and would further solidify the effectiveness of your approach. At the same time, all recently proposed SSL methods verify performance on ImageNet (100/300K). - (Minor) The training efficiency is not that good. This is okay, but the graph in Figure 7 is built around FixMatch as the most efficient method. I think this is a bit misleading.
FlexMatch has the same training time as FixMatch but higher accuracy, so I think FlexMatch should be the reference. Moreover, it would be interesting to also see a convergence analysis as done in FlexMatch (plot accuracy/iteration), since I believe this would give a better sense of the training efficiency compared to how the plot is structured here. Technical Quality: 3 good Clarity: 3 good Questions for Authors: For Figures 1 and 4, last column: how do the curves look at later iterations? The plots are at epochs 60 and 150, which is at the very beginning of training (e.g., CIFAR-100 trains for 9K epochs). Also, what is the dataset used for those plots? I haven’t seen it mentioned in the text. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper states limitations. I believe the computational cost limitation should be explicitly stated here. I don't think this work has any negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R1**: We thank Reviewer 9rMs for your positive opinion. We have carefully conducted experiments on ImageNet, added an efficiency comparison with FlexMatch, analyzed the convergence of training accuracy, and showed the loss curves at the final iteration. *Q1*: **Result on ImageNet.** *A1*: Please see the general response. *Q2*: **Efficiency comparison with FlexMatch.** *A2*: Thanks for the suggestion; we have included the efficiency comparison with FlexMatch, as shown in Fig. 1 in the rebuttal pdf. * We would like to clarify that the efficiency of FlexMatch is not the same as that of FixMatch, as FlexMatch requires an indexing process with a noticeable computational cost, as confirmed by the empirical results. Moreover, the efficiency of FreeMatch on the CIFAR100 dataset is also inferior to that of FixMatch. * When comparing our methods with the SSL baseline methods, we find that FlatMatch achieves the best performance and FlatMatch-e can still bring non-trivial improvements without introducing too much computational cost. *Q3*: **Convergence analysis of accuracy.** *A3*: Thanks for the advice; here we compare our FlatMatch with FreeMatch, which has the best performance among most SSL baseline methods. The accuracy curve is shown in Fig. 2 in the rebuttal pdf. * We observe that the performances of the two methods are almost comparable in the early stages, but FlatMatch continues to improve in the middle and later stages and finally achieves better accuracy than FreeMatch at the final point. Therefore, we can conclude that our method converges to better performance than FreeMatch. *Q4*: **Loss curves of later epochs, and the dataset used.** *A4*: Thanks; our loss curves are generated on the CIFAR10 dataset, and we have made this clear in the main paper. * We also show the loss curves of FlatMatch and FixMatch at the last training epoch in Fig. 5 in the rebuttal pdf.
We can see that FlatMatch produces a wider (flatter) loss landscape than FixMatch. * Moreover, the loss curve of labeled data from FlatMatch is smoother than that of FixMatch. Therefore, we can again conclude that FlatMatch benefits generalization.
null
null
null
null
null
null
Exponential Hardness of Optimization from the Locality in Quantum Neural Networks
Reject
Summary: In this work, the authors characterize the problem of the Barren Plateau from different perspectives: (1) the effect of a local unitary within a QNN on the cost function, particularly the randomness of the generic cost function; (2) quantum information theory; (3) the optimization methods used during training. This work discusses the factors impacting the Barren Plateau landscape. Strengths: (1) The work provides a theoretical understanding of the Barren Plateau problem and determines what factors actually impact the training of VQCs, which is very interesting. (2) A solid mathematical formulation is given, and the experiments corroborate the theoretical analysis. Weaknesses: (1) Some of the latest work on the Barren Plateau problem in the training of VQCs should be included, such as Refs. [1], [2], and [3]. Ref. [1] aims at the QNN architecture for dealing with the Barren Plateau problem, Ref. [2] focuses on the initialization strategy, and Ref. [3] puts forth a pre-training method for mitigating the VQC training problem of Barren Plateaus. [1] Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hsiu Hsieh, "Theoretical Error Performance Analysis for Variational Quantum Circuit Based Functional Regression," npj Quantum Information, Vol. 9, no. 4, 2023 [2] Zhang, Kaining, Hsieh, Min-Hsiu, Liu, Liu, and Tao, Dacheng. Gaussian Initializations Help Deep Variational Quantum Circuits Escape From the Barren Plateau. In Neural Information Processing Systems, 2022. [3] Jun Qi, Chao-Han Huck Yang, Pin-Yu Chen, Min-Hsiu Hsieh, "Pre-Training Tensor-Train Networks Facilitate Machine Learning with Variational Quantum Circuits," arXiv:2306.03741v1 Technical Quality: 3 good Clarity: 3 good Questions for Authors: How is the locality property of the unitaries associated with other methods for dealing with the Barren Plateau problem, such as the DNN architecture and pre-training method? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Experimental simulations on real data, such as the MNIST dataset, are expected in order to demonstrate the effectiveness of the proposed analysis approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly thank the reviewer for their time and the helpful feedback. Here we respond to the comments and questions. > $\textbf{Comment 1:}$ ``Some of the latest work on the Barren Plateau problem in the training of VQCs should be included, such as Refs. [1], [2], and [3]. Ref. [1] aims at the QNN architecture for dealing with the Barren Plateau problem, Ref. [2] focuses on the initialization strategy, and Ref. [3] puts forth a pre-training method for mitigating the VQC training problem of Barren Plateaus.'' $\textbf{Re 1:}$ Many thanks for bringing these recent works on the QNN training problem to our attention. We will include and discuss these references to provide a more comprehensive account of related research in our revision. --- > $\textbf{Comment 2:}$ ``How is the locality property of the unitaries associated with other methods for dealing with the Barren Plateau problem, such as the DNN architecture and pre-training method?'' $\textbf{Re 2:}$ We appreciate the reviewer's thoughtful comment. Our theorem is mainly based on two assumptions: (a) the QNN is composed of local quantum gates; (b) the QNN is randomly initialized and approximates a 2-design. Thus, to enhance trainability, one can break assumption (a) by constructing global parameterized gates, e.g., by correlating parameters. On the other hand, one can break assumption (b) by designing a problem-tailored QNN architecture or using pre-training to escape from being a 2-design. We will address this important aspect in our revision. --- Rebuttal Comment 1.1: Title: Follow-up for the rebuttal letter Comment: Thank the authors for the rebuttal letter. The authors' responses have fully resolved my major concerns. Since the trainability of QNNs is a significant issue, this paper makes a significant contribution in this respect. So, I highly recommend this paper be accepted by NeurIPS.
Summary: The paper examines the critical issue of trainability in quantum neural networks (QNNs) by adopting a perspective centered around the locality. Through extensive analysis, the authors convincingly demonstrate that the adjustment of local quantum gates within a diverse range of QNNs results in an exponential decay of the loss function range as the number of qubits scales up. The authors bolster their claims with carefully conducted numerical simulations, providing compelling evidence that locality plays a fundamental role in shaping the behavior of QNNs. Building upon prior research on barren plateaus, the paper makes a technically sound contribution, albeit with an incremental advancement in the field. Strengths: The analysis of Theorems and Propositions, which shows the exponential decay of the loss function range by adjusting local quantum gates, is technically sound. Additionally, the ideas, concepts, and results are well presented. The authors effectively communicate their methodology, theoretical framework, and experimental simulations, making it easier for readers to comprehend and follow their arguments. Weaknesses: The main weakness of this paper lies in its limited impact. While the authors conduct a clear and thorough analysis of how the concentration results of random circuits depend on the locality unitary, the technical tools employed bear a striking resemblance to prior literature concerning barren plateaus. The achieved results can be derived from existing works, with the only notable distinction being the introduction of a parameter, m, related to the locality in the derived bound. Additionally, the authors' claim that few rigorous scaling results exist for generic QNNs is contradicted by the abundance of relevant research, as evidenced by references [1], [2], [3], and [4], which address similar theoretical aspects the authors aim to explore. 
Previous studies have already established that deep ansatz can lead to the concentration of the cost function, rendering the observation regarding the exponential vanishing of the loss function range via the adjustment of local quantum gates less novel. [1] Leone, Lorenzo, et al. "On the practical usefulness of the Hardware Efficient Ansatz." arXiv preprint arXiv:2211.01477 (2022). [2] Thanasilp, Supanut, et al. "Subtleties in the trainability of quantum machine learning models." Quantum Machine Intelligence 5.1 (2023): 21. [3] Garcia, Roy J., et al. "Barren plateaus from learning scramblers with local cost functions." Journal of High Energy Physics 2023.1 (2023): 1-79. [4] Larocca, Martin, et al. "Diagnosing barren plateaus with tools from quantum optimal control." Quantum 6 (2022): 824. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The results presented in Theorem 4 are somewhat perplexing. It appears that the upper bound can be independent of the qubit number n when utilizing the global unitary U_A. For example, when m=n, the concentration phenomenon can be avoided. I am not sure if I have overlooked certain details. - In Remark 2, what is the necessity of decomposing and preserving the local unitary for subsequent analysis of U_A? A more detailed explanation is expected. - The similar findings discussed in Section 4 can be derived from existing research. Specifically, the concentration phenomenon of the cost function, as outlined in reference [5], in conjunction with the parameter shift rule (for first-order and higher-order) or the finite-difference method (for zero-order), can effectively yield similar outcomes as presented in Section 4. It would be valuable if the authors could acknowledge and discuss these connections with prior findings to provide a clearer context and further highlight the novelty of their results. If the only difference lies in the introduced parameter related to the locality m? [5] Arrasmith, Andrew, et al. 
"Equivalence of quantum barren plateaus to cost concentration and narrow gorges." Quantum Science and Technology 7.4 (2022): 045015. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: No, the authors did not address the limitations of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
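The parameter-shift rule invoked in the question above can be verified numerically for a single-qubit rotation: with $C(\theta)=\langle 0|R_y(\theta)^\dagger Z R_y(\theta)|0\rangle=\cos\theta$, two cost evaluations shifted by $\pm\pi/2$ reproduce the exact derivative. A minimal numpy check (illustrative, not tied to the paper's circuits):

```python
import numpy as np

def ry(theta):
    # single-qubit Y rotation
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])
psi0 = np.array([1.0, 0.0])

def cost(theta):
    psi = ry(theta) @ psi0
    return float(psi @ Z @ psi)      # equals cos(theta) analytically

theta = 0.7
# parameter-shift rule: exact gradient from two shifted cost evaluations
shift_grad = (cost(theta + np.pi / 2) - cost(theta - np.pi / 2)) / 2.0
analytic_grad = -np.sin(theta)       # d/dtheta of cos(theta)
```

The same two-evaluation recipe is what makes gradients accessible on hardware for gates with a Pauli-type generator, which is why cost concentration transfers directly to gradient concentration in the argument quoted above.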
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and for acknowledging that our paper is technically sound and well presented. Below is a detailed response to the questions raised by the reviewer. > $\textbf{Comment 1:}$ ``The main weakness of this paper lies in its limited impact ...'' $\textbf{Re 1:}$ Thank you for your feedback; we will make the following clarifications and enhancements to the paper: i) Prior literature: We thank the reviewer for pointing out other relevant research concerning the scaling results of QNNs. We will discuss the prior literature (Refs. [1], [2], [3], and [4]) in the final version. We would like to clarify our differences from the prior literature. Most of the previous results characterizing QNN training concern the vanishing gradient, whereas we present the maximum variation range of the cost function when optimizing a local unitary. Our theorem $\textbf{cannot be derived}$ from existing works. For instance, the process of taking extreme values of the cost function in our work is not involved in other works considering cost function differences. ii) Novelty and impact: The study of QNN training is pivotal for harnessing the potential of quantum computing. Different from the celebrated barren plateau phenomena, our paper delivers a new understanding of the training landscape in a more intuitive way. We agree with the reviewer that previous studies have already established the concentration of the cost function, mainly from the perspective of gradients with respect to the QNN parameters. However, we believe that the entire variation range of the cost function when optimizing a local gate has an intuitive meaning in QNN training, and its properties cannot be derived from existing works. Besides the implications for VQAs, our results can be regarded as a basic property of random quantum circuits. [1] Leone, Lorenzo, et al. "On the practical usefulness of the Hardware Efficient Ansatz."
arXiv preprint arXiv:2211.01477 (2022). [2] Thanasilp, Supanut, et al. "Subtleties in the trainability of quantum machine learning models." Quantum Machine Intelligence 5.1 (2023): 21. [3] Garcia, Roy J., et al. "Barren plateaus from learning scramblers with local cost functions." Journal of High Energy Physics 2023.1 (2023): 1-79. [4] Larocca, Martin, et al. "Diagnosing barren plateaus with tools from quantum optimal control." Quantum 6 (2022): 824. --- > $\textbf{Comment 2:}$ ``The results presented in Theorem 4 are somewhat perplexing ...'' $\textbf{Re 2:}$ Thanks for this careful question. We apologize for any confusion resulting from the reference to ``Theorem 4''. In case you are referring to Theorem 1, we would like to explain it as follows. We agree that the utilization of a global unitary $U_A$ can lead to a trivial upper bound independent of the qubit number $n$. This is because we define the variation range by taking the optimum over all possible unitaries $U_A$ with support $m$. Thus it is not surprising that a universal global unitary can lead to a non-vanishing range. In this case, our result indeed does not indicate a limitation of training, but parameterizing and optimizing such a universal unitary is impractical. In practical cases, however, the unitaries we can optimize are usually single-qubit and two-qubit quantum gates, corresponding to the cases $m=1,2$, where our theorem gives an exponentially small upper bound. --- > $\textbf{Comment 3:}$ ``In Remark 2, what is the necessity of decomposing and preserving the local unitary for subsequent analysis of U_A? ...'' $\textbf{Re 3:}$ Thanks for this careful question. Remark 2 points out that the exponentially small bound in Theorem 1 with $m=1$ can be extended to the case where $U_A$ is a global unitary satisfying the parameter-shift rule.
However, for the case where $U_A$ does not satisfy the parameter-shift rule, such as the controlled Pauli rotation gates, Remark 2 does not generally hold and we have only Theorem 1. We will make this point clearer in our revised manuscript. --- > $\textbf{Comment 4:}$ ``The similar findings discussed in Section 4 can be derived from existing research ...'' $\textbf{Re 4:}$ Thank you for the careful comment regarding the similarity of our findings in Section 4 to existing research and the potential connections with prior work, particularly Ref. [5]. We appreciate the opportunity to address this concern and clarify the context of our results. Ref. [5] considers the cost function difference between two points, either both chosen at random or one chosen at random and the other at a deterministically chosen distance from it. In contrast, we consider the difference between the maximum and minimum within the whole subspace w.r.t. a local unitary. The process of taking extreme values in our work is not involved there. Their relation is further clarified in Eq. (9) after line 198 in our paper. We also acknowledge Ref. [6], which presents related results that may address your concern. [6] Andrew Arrasmith, M. Cerezo, Piotr Czarnik, Lukasz Cincio, and Patrick J. Coles, “Effect of barren plateaus on gradient-free optimization,” Quantum 5, 1–9 (2020). --- Rebuttal Comment 1.1: Comment: Thank you for your response. After carefully reviewing the authors' feedback, I have some additional comments to share. Paper's Impact: Let me showcase how to use existing works to attain a similar result achieved in the paper. For simplicity, denote the cost function in Definition 1 as $C(\theta)$, where $U_A = \exp(-i\theta H)$.
Then, applying Taylor expansion to the cost function, we obtain $$C(\theta)=C(\theta_0)+C'(\theta_0)(\theta-\theta_0)+\frac{C''(\theta_0)}{2!}(\theta-\theta_0)^2+\frac{C^{(3)}(\theta_0)}{3!}(\theta-\theta_0)^3+\cdots+\frac{C^{(n)}(\theta_0)}{n!}(\theta-\theta_0)^n.$$ This expansion allows us to establish a connection between the cost function range examined in this paper and previous literature. Recalling that the gradient and higher-order derivatives of the cost function tend to exponentially approach zero with the increase in the number of qubits [M. Cerezo and Patrick J. Coles, Quantum Science and Technology 6, 035006 (2021)], it becomes conceivable that the variability range of the cost function also experiences an exponential diminishment as the qubit count grows, i.e., $|C(\theta)-C(\theta')|\rightarrow O(\exp(-n))$. I acknowledge the technical endeavors undertaken by the authors, which might not have been directly deduced from existing literature. However, it's worth noting that there may exist multiple straightforward avenues to achieve similar insight (at least in a rough sense) through the utilization of results obtained from prior works. Reply 2 Clarification: In Reply 2, the authors assert that *the unitaries suitable for optimization typically pertain to single-qubit and two-qubit quantum gates in practical scenarios, corresponding to the cases of m=1 and m=2. Here, our theorem offers an exponentially small upper bound*. I respectfully hold a divergent perspective on this matter. A wide array of ansatzes, such as the quantum approximate optimization ansatzes and group-invariant ansatzes, typically align with the scenario where $m\gg 2$. From this perspective, it becomes important to engage in a thorough discussion regarding the broader impact and applicability of the obtained results. --- Reply to Comment 1.1.1: Comment: We greatly appreciate the reviewer's thoughtful comments.
After carefully reviewing these additional comments, we provide our response as follows. > $\textbf{Comment~1:}$ ``Paper's Impact: Let me showcase how to use existing works ...'' $\textbf{Re 1:}$ The reviewer provides a rough argument using Taylor expansion in order to show that the cost function difference $C(\theta)-C(\theta')$ between two parameter points is exponentially small given exponentially small derivatives. In fact, this idea has already been formulated in a more rigorous manner by Ref. [26] mentioned in our manuscript (lines 175~208). However, we emphasize that the cost function difference $C(\theta)-C(\theta')$ is very different from the variation range $$ \max_{U_A} C(\mathbf{U})-\min_{U_A} C(\mathbf{U}), $$ studied in our work: the former focuses on two *fixed* parameter points (independent of the probability ensemble), while the latter takes the maximal possible range over all parameters within $U_A$. In other words, the latter implies that the cost function difference between any two parameter points *simultaneously* vanishes, which is beyond the former result. Furthermore, we would like to emphasize that the significance of our work lies not only in providing technical outcomes, but also in its foundational implication. Our work unifies the restrictions on gradient-based and gradient-free optimizations from a new perspective that is independent of gate parameterization, reveals a fundamental property of random quantum circuits, and deepens the understanding of the role of locality in QNNs. [26] Andrew Arrasmith, M. Cerezo, Piotr Czarnik, Lukasz Cincio, and Patrick J. Coles. Effect of barren plateaus on gradient-free optimization. Quantum, 5:1–9, nov 2020. ISSN 2521327X. doi:10.22331/q-2021-10-05-558.
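The logical relationship between the two quantities can be seen with a trivial numerical illustration (synthetic cost values, not from the paper): the variation range upper-bounds the difference between *any* pair of points, so bounding the range is the strictly stronger statement.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in cost values over a parameter grid (purely synthetic).
C = rng.normal(size=1000)

# Differences between randomly chosen pairs of points (the quantity in Ref. [26]).
i = rng.integers(0, C.size, 50)
j = rng.integers(0, C.size, 50)
pair_diffs = np.abs(C[i] - C[j])

# The variation range max C - min C (the quantity studied in the paper).
var_range = C.max() - C.min()

# The range bounds every pairwise difference simultaneously.
print(bool(np.all(pair_diffs <= var_range)))  # True
```

So a bound on the variation range immediately bounds all pairwise differences at once, while the converse does not hold for randomly sampled pairs.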
> $\textbf{Comment~2:}$ ``Reply 2 Clarification: In Reply 2, the authors assert ...'' $\textbf{Re~2:}$ We appreciate your perspective and understand the importance of encompassing a wide range of ansatzes and scenarios in the discussion of our results' applicability. We would like to offer some further clarifications to address your concerns. Firstly, it is important to underscore that our theorem involves a scaling relationship of the gate locality $m$ with respect to the qubit count $n$. Our key emphasis lies in the situation of local gates, where $m$ does not scale with $n$, so that our theorem gives a non-trivial exponential upper bound. This situation is frequently encountered in practice, such as in the variational quantum eigensolver and quantum state learning, considering that common elementary quantum gates available on digital quantum computers (e.g., Pauli rotation gates and CNOT gates) are inherently local. This aligns with the hardware constraints and technological advancements shaping current quantum devices. Secondly, we agree with the reviewer that there are indeed instances utilizing global unitaries, like QAOA, which are beyond the scope of our theorem. These ansatzes usually realize a parameterized global unitary by correlating parameters among many local parameterized gates. This is actually one of the implications of our theorem: one possible strategy to escape from the vanishing variation range of the cost function is to correlate parameters across multiple local gates, as in QAOA. We value the reviewer's insights in this regard and will certainly engage in a more comprehensive discussion of the broader impact and applicability of our derived results in the upcoming revised version of our paper.
Summary: This paper investigates the trainability of random quantum circuits from the perspective of their locality and demonstrates that the variation range of the cost function obtained by adjusting any local quantum gate vanishes exponentially in the number of qubits. This theorem unifies the restrictions on gradient-based and gradient-free optimizations. The paper also verifies the theorem on three applications with numerical simulations and deepens the understanding of the role of locality in QNNs. Strengths: 1. The paper is well-written and provides a rigorous analysis of QNN trainability and scalability from the perspective of their locality. 2. The paper applies the proposed theorem to three representative QNN models, including the VQE, quantum autoencoder, and quantum state learning, and provides the numerical simulation results. Weaknesses: 1. Although Lines 66-73 provide the advances of the proposed method, the comparison with previous works is not clear enough. It is important to review the previous methods and compare their specific differences. 2. The contribution of the paper seems weak; the novelty, comparisons with related works, and guidance for future QNN training or design need to be highlighted and enhanced. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Compared with previous approaches that prove barren plateaus, what new enlightenment does the proposed method bring to the design and training of QNNs? Specifically, barren plateaus are a well-known phenomenon in the field of quantum machine learning, and we usually design QNNs with single-qubit and two-qubit parametric gates for practicality on NISQ devices, so the barren plateau property w.r.t. a local unitary seems to be taken as a default. What guidance can the proposed finding give to the design of QNNs? Moreover, based on the finding, what are some specific suggestions to design more effective training strategies? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper can be improved by considering the above weaknesses and questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's recognition of our work as technically solid and well written. We also thank the reviewer for the helpful feedback. A detailed response to the reviewer's comments and questions is provided below. > $\textbf{Comment 1:}$ ``Although Lines 66-73 provide the advances of the proposed method, the comparison with previous works is not clear enough. It is important to review the previous methods and compare their specific differences.'' $\textbf{Re 1:}$ Thanks for your comments regarding the lack of clarity in the comparison between our work and previous work. The specific differences between our work and previous work are as follows. i). Refs. [1], [2], [3] analyze the gradient of parameters in QNNs. Basically, their results for characterizing QNN training are related to the vanishing gradients along any reasonable direction. In contrast, our work does not start from gradient calculations. We present the maximum variation range of the cost function when optimizing a local unitary, a quantity that has not been analyzed before. We believe this is practically valuable regarding the optimization strategy and also theoretically meaningful in studying random quantum circuits. ii) Ref. [4] also considers the cost function difference. However, the quantity they consider is the difference between two points, either both chosen at random or one chosen at random and the other at a deterministically chosen distance from it. Again, we consider the difference between the maximum and minimum within the whole subspace w.r.t. a local unitary. The process of taking extreme values of the cost function in our work is not involved in Ref. [4]. [1]. Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush, and Hartmut Neven, “Barren plateaus in quantum neural network training landscapes,” Nature Communications 9, 1–7 (2018). [2] Garcia, Roy J., et al. "Barren plateaus from learning scramblers with local cost functions."
Journal of High Energy Physics 2023.1 (2023): 1-79. [3] M. Cerezo and Patrick J. Coles, Quantum Science and Technology 6, 035006 (2021). [4] Arrasmith, Andrew, et al. Quantum Science and Technology 7.4 (2022): 045015. --- > $\textbf{Comment 2:}$ ``The contribution of the paper seems weak; the novelty, comparisons with related works, and guidance for future QNN training or design need to be highlighted and enhanced.'' $\textbf{Re 2:}$ We greatly appreciate the reviewer's feedback on the contribution of our paper. In what follows, we would like to explain the novelty, comparisons with related works, and guidance for QNN training and design. i): For the novelty and comparisons with related works, we clarify that the main quantity we focus on in this work is the entire variation range of the cost function obtained by adjusting any local unitary within the circuit, which has not been analyzed before, and our results unify the restrictions on gradient-based and gradient-free optimizations. Such a quantity has an intuitive meaning in QNN training, and its properties cannot be derived from the existing works. ii): One direct guidance for QNN training and design is that the gate-by-gate optimization strategy (which tries to avoid barren plateaus using gradient-free optimization for each gate) is ineffective no matter what optimizers are utilized. Reparameterization within local unitaries is also unhelpful. Our theorem is mainly based on two assumptions. (a) The QNN is composed of local quantum gates. (b) The QNN is randomly initialized and approximates a 2-design. Thus, to enhance the trainability, one can construct global parameterized gates by, e.g., correlating parameters to break assumption (a). On the other hand, one can design a problem-tailored QNN architecture or use pre-training to escape from being a 2-design and break assumption (b). We will address this important aspect in our revision.
--- > $\textbf{Comment 3:}$ ``Compared with previous approaches that prove barren plateaus, what new enlightenment does the proposed method bring to the design and training of QNNs? ...'' $\textbf{Re 3:}$ Thanks for this very good question. Indeed, barren plateaus are a well-known phenomenon in the field of QML that limits the training of QNNs. However, phenomenological calculations such as gradient analyses and their descendants have not revealed enough information on the training landscape. The quantity we consider is the entire variation range of the cost function when optimizing a local gate. We believe this quantity has an intuitive meaning in QNN training, and its properties cannot be derived from the existing works. Also, its importance lies in the unification of the existing explanations of the hardness of QNN training. Our results offer new enlightenment in two aspects, both theoretical and practical. Theoretically, we show that the effect of a local operation on a physical observable will vanish exponentially after a chaotic evolution. In practice, a direct consequence is that the gate-by-gate optimization strategy (which tries to avoid barren plateaus using gradient-free optimization for each gate) is ineffective no matter what optimizers are utilized. Reparameterization within local unitaries is also unhelpful. Our theorem primarily relies on two underlying assumptions. Firstly, the QNN is built using local quantum gates. Secondly, the QNN is initialized randomly and approximates a 2-design. Therefore, for the purpose of improving trainability, one approach is creating global parameterized gates by, for instance, establishing correlations among parameters. Conversely, to deviate from being a 2-design, one could devise a QNN architecture tailored to a specific problem or employ pre-training strategies. We will make this point clearer in the revision. --- Rebuttal Comment 1.1: Comment: Thanks for your answers.
I have no further questions and believe the quality of the paper will improve if the above discussion can be incorporated into the revised paper. I have adjusted my score to reflect this (4 -> 5).
Summary: The paper proves a result on the range of possible values that the cost function of a variational quantum algorithm can take when one optimises over a given unitary placed before or after random gates that form unitary 2-designs. This quantity vanishes exponentially with the number of qubits. This generalises previous results on barren plateaus, concerned with the vanishing of the gradient of the cost function. The material is presented clearly and the paper also has numerical verification of the scaling in the case of VQE, quantum autoencoder and quantum state learning. Strengths: - The paper is clearly written, with figures explaining concepts. It presents both theory and numerical checks - The problem studied is relevant in scaling up quantum neural networks - The main theorem allows the authors to recover and unify previous results on exponentially vanishing gradients and cost function differences Weaknesses: - The paper does not comment on recommendations to avoid the exponentially vanishing variation range - The comparison to previous works is limited. The authors mention that their work opens a new venue for analysing trainability of QNNs but it is not clear to me what new insights are gained. It would be useful to comment on what is gained wrt previous literature. Also, on whether the methods used to prove their main theorem are similar to those used in the literature or not. - The VQE experiments are taken with circuits of depth 10 x n. That depth was chosen so that the hardware aware ansatz approximates a 2-design. However no comment on the required depth to compute the ground state of the Hamiltonian is presented and it is not clear whether the choice of ansatz and depth is something that a practitioner would actually do.
Minor - Sentence Line 71 - 73 does not read very well, you could rephrase it - Line 153 - 155: it would be helpful for the reader to have an explanation of the connection between the parameter-shift rule and $e^{-i\theta \Omega}$ with $\Omega^2=I$. Also why does this imply the existence of $W$ as claimed? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you comment on your choice of VQE ansatz explaining whether 10x n is needed or a smaller depth would be enough to solve this problem? Also, do other ansatzes, such as the alternating ansatz, suffer from an exponentially vanishing variation range? - What are the additional insights that you get on top of previous works for quantum neural network trainability? - Can you expand on the previous literature? Do the previous results also require approximate 2-designs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - The method applies when the $V_1$ or $V_2$ are 2-designs. This limits applicability. - Strategies to overcome the exponentially vanishing range are not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's positive assessment of the correctness and significance of our work. We also thank the reviewer for the very helpful feedback. Below is our point-by-point response to the comments and questions. > $\textbf{Comment 1:}$ ``The paper does not comment on recommendations to avoid the exponentially vanishing variation range.'' $\textbf{Re 1:}$ Thanks for the valuable feedback. Problem-inspired architectures are highly recommended since prior information is used to escape from being a 2-design. We will include more related discussion in our revision. --- > $\textbf{Comment 2:}$ ``The comparison to previous works is limited. The authors mention that their work opens a new venue for analysing trainability of QNNs but it is not clear to me what new insights are gained. It would be useful to comment on what is gained wrt previous literature. Also, on whether the methods used to prove their main theorem are similar to those used in the literature or not.'' $\textbf{Re 2:}$ The main differences are summarized as follows. i): Most previous works focus on the gradient, which usually provides limited information around the vicinity, while our work analyzes the entire variation range when tuning a local unitary. This brings the new insight that the locality of a QNN also plays an important role in the trainability limitation. ii): Besides the standard Haar integration, the proof involves new techniques including Hamiltonian splitting, diverse norm inequalities in quantum information theory, as well as tensor network diagrams to compute high-order integrals. --- > $\textbf{Comment 3:}$ ``The VQE experiments are taken with circuits of depth 10 x n. That depth was chosen so that the hardware aware ansatz approximates a 2-design.
However no comment on the required depth to compute the ground state of the Hamiltonian is presented and it is not clear whether the choice of ansatz and depth is something that a practitioner would actually do.'' $\textbf{Re 3:}$ Thanks for this highly practical comment. $10\times n$ is an empirically chosen depth which is also discussed in previous works [1-3]. Unfortunately, the practically required depth in VQE depends on the specific Hamiltonian, and there is no universal guarantee for estimating the depth tailored to each Hamiltonian. We will highlight the discussion of this part in the revision. [1] Jarrod R. McClean, et al. Nature Communications 9, 1–7 (2018). [2] Aram W. Harrow, et al. Physical Review Letters 103, 150502 (2009). [3] Fernando G. S. L. Brandão, et al. Communications in Mathematical Physics 346, 397–434 (2016). --- > $\textbf{Comment 4:}$ ``Sentence Line 71 - 73 does not read very well, you could rephrase it.'' $\textbf{Re 4:}$ Many thanks for your careful comment. We will improve the statement in our revision. --- > $\textbf{Comment 5:}$ ``Line 153 - 155: It would be helpful for the reader to have an explanation of the connection between the parameter-shift rule and $e^{-i\theta\Omega}$ with $\Omega^2=I$. Also why does this imply the existence of $W$ as claimed?'' $\textbf{Re 5:}$ Thanks a lot for this careful question. The condition $\Omega^2=I$ ensures the equality $e^{-i\theta\Omega}=I\cos\theta - i\Omega\sin\theta$, and hence the expectation value w.r.t. a single parameter must be a trigonometric function, the derivative of which can be exactly expressed as a finite difference, i.e., satisfying the parameter-shift rule. The conditions $\operatorname{tr}(\Omega)=0$ and $\Omega^2=I$ imply that half of the eigenvalues of $\Omega$ are $1$ and the other half are $-1$. Thus $\Omega$ has the same eigenspectrum as, e.g., the local operator $Z\otimes I\otimes \cdots\otimes I$, so that there must exist a unitary $W$ to diagonalize $\Omega$.
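A quick numerical sanity check of the identity $e^{-i\theta\Omega}=I\cos\theta-i\Omega\sin\theta$ and the resulting exact parameter-shift rule. This is an illustrative sketch, not code from the paper: the generator, observable, and state below are synthetic, and for a generator with eigenvalues $\pm 1$ the cost is $a+b\cos 2\theta+c\sin 2\theta$, so the exact shift is $\pi/4$.

```python
import numpy as np

rng = np.random.default_rng(0)

Z = np.diag([1.0, -1.0]).astype(complex)        # Omega with tr(Omega)=0, Omega^2 = I
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
O = A + A.conj().T                              # random Hermitian observable
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)                      # random pure state

def cost(theta):
    # Omega^2 = I  =>  exp(-i*theta*Omega) = cos(theta) I - i sin(theta) Omega
    U = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * Z
    phi = U @ psi
    return (phi.conj() @ O @ phi).real

theta = 0.37
# Exact parameter-shift derivative for a +/-1 generator: shift of pi/4, prefactor 1.
shift = cost(theta + np.pi / 4) - cost(theta - np.pi / 4)
# Compare against a central finite difference.
h = 1e-6
fd = (cost(theta + h) - cost(theta - h)) / (2 * h)
print(abs(shift - fd))  # tiny, limited only by finite-difference error
```

Note that the commonly quoted rule $f'(\theta)=\tfrac12[f(\theta+\pi/2)-f(\theta-\pi/2)]$ corresponds to the other common convention, a generator with eigenvalues $\pm\tfrac12$.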
We will make this point clearer in the revision. --- > $\textbf{Comment 6:}$ ``Can you comment on your choice of VQE ansatz explaining whether 10x n is needed or a smaller depth would be enough to solve this problem? Also, do other ansatzes, such as the alternating ansatz, suffer from an exponentially vanishing variation range?'' $\textbf{Re 6:}$ The proximity between a randomly initialized ansatz and a 2-design can serve as a measure of its expressibility [4]. $10\times n$ is an empirically chosen depth [1-3]. Regarding other ansatzes, all the ansatzes that form a 2-design will suffer from the exponentially vanishing variation range. Some problem-inspired ansatzes with specially designed architectures may avoid the vanishing variation range problem because they escape from the 2-design regime using prior information. [4] Holmes, Zoë, et al. PRX Quantum 3.1 (2022): 010313. --- > $\textbf{Comment 7:}$ ``What are the additional insights that you get on top of previous works for quantum neural network trainability?'' $\textbf{Re 7:}$ Most of the previous works are based on gradient analysis, which requires a specific parameterization like $e^{-i\Omega\theta}$. However, here we focus on the vanishing variation range of the cost function. Our result is a fundamental property of random quantum circuits regardless of parameterization, which rules out all optimization methods focusing on optimizing local quantum gates independently. --- > $\textbf{Comment 8:}$ ``Can you expand on the previous literature? Do the previous results also require approximate 2-designs?'' $\textbf{Re 8:}$ Thanks for your question. The requirement of a 2-design is a common assumption in analyzing QNNs [5-7]. Barren plateaus were first demonstrated using 2-design random circuits. The expressibility was also shown to have a close connection with the distance to a 2-design [4]. We will make this point clearer in the revision. [5] Marrero, Carlos Ortiz, Mária Kieferová, and Nathan Wiebe.
PRX Quantum 2.4 (2021): 040316. [6] Sack, Stefan H., et al. PRX Quantum 3.2 (2022): 020365. [7] Holmes, Zoë, et al. Physical Review Letters 126.19 (2021): 190501. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and clarifications. I do not have further comments.
Rebuttal 1: Rebuttal: Dear PC, We are grateful to the PCs for their efforts in shaping the conference's scientific program and to the reviewers for their dedicated time and efforts in reviewing our paper. We thank reviewers J4bd and dzQA for recognizing our paper's sound technique and clear writing style, acknowledging its potential to contribute to the field, and both recommending acceptance. Reviewer T1RK is concerned about novelty and significance, questioning the comparison with related works and our contribution to the existing body of knowledge. In the rebuttal, we clarified that our paper considers a novel feature of the QNN training landscape, and one key distinction is that our results unify the restrictions on gradient-based and gradient-free optimizations in training variational quantum circuits. We also explained our differences with previous results precisely and highlighted what new enlightenment our results deliver to the study of QNNs. Reviewer n6Yp has concerns about our paper's impact and its comparison with previous results on QNN training limitations. In the rebuttal, we explained the main differences between our results and prior research. Our main result on the entire variation range of the cost function has an intuitive meaning in QNN training, and such a property cannot be derived from existing works. As recognized by reviewers J4bd and dzQA, our work establishes a theoretical understanding of the barren plateau problem and characterizes the behavior of training QNNs via gradient-based and gradient-free methods, which is significant to the study of quantum machine learning. Besides the implications for QNN training, our results can be regarded as a basic property of random quantum circuits, which is expected to have potential applications in other areas involving random quantum circuits. We have addressed the reviewers' comments in the rebuttal. We want to thank all the PCs and ACs for their time and efforts, regardless of the result of this paper.
Yours Sincerely, Authors of Paper 10546.
NeurIPS_2023_submissions_huggingface
2023
Latent Space Translation via Semantic Alignment
Accept (poster)
Summary: This work proposed a method to translate learned representations between two pre-trained networks, using surprisingly simple transformations. The method is demonstrated on models with different architectures, trained on different modalities and across different tasks. Strengths: 1. The paper is well written and easy to follow. 2. To my understanding, the discussed setting where models are ad-hoc zero-shot stitched, as opposed to being trained to have compatible representations is original. This distinction is important as it makes the approach applicable for the myriad of pre-trained models that already exist. 3. The method is surprisingly simple and yet produces very convincing results as demonstrated through sufficient evaluation. Weaknesses: 1. The exact setting leading to Figure 2 is not clear. Caption mentions results being produced on “CIFAR-100”. However, I was not able to find the entire setting. What two models were used? Are both trained on CIFAR-100? What are the architectures used? Are reported results averaged across multiple settings? 2. Also regarding Figure 2 - the two trends not being monotone is surprising to me. Especially the very clear outliers. I suppose this behavior might occur if each point represents a single experiment and some randomness might affect performance. In that case, it would probably be better to average these results across several experiments as done in other tables in the paper. Can the authors explain this behavior? Minor comments: 1. Typo line 49 - mphasizing -> emphasizing. 2. Notation line 139 - $\mu$ is used in equation 3 but $\bar{x}$ in text. 3. Clarity line 167 - I suggest the “naive absolute baseline” should be concisely explained. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: See questions mentioned in the Weaknesses section. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our work, providing valuable feedback, and firmly pushing for its acceptance. - **Figure 2 (Additional Details)**: In response to the concern about missing details, we will provide comprehensive information in the appendix. For the classification aspect (left), the results are derived from averaging across all potential stitching combinations (pairs) among the utilized pre-trained encoders and their classification heads (SVMs) trained on CIFAR100. With a total of 7 distinct encoders (refer to the appendix, Table 4), this results in a pool of 42 pairs for each "number of anchors" configuration. Regarding the generation aspect (right), we train vanilla convolutional autoencoder pairs on the CIFAR100 dataset for each "number of anchors" configuration. No normalization is applied to the dataset images, and each autoencoder has ~2 million trainable parameters and a size-500 bottleneck. - **Figure 2 (Trends)**: The presence of outliers and the non-monotonic pattern observed in the lstsq curve can be attributed to a confluence of factors, including numerical instability and the PyTorch **driver selection** policy. This policy adapts based on the condition number of the matrix. In particular, we observed that outliers tend to emerge as the number of anchors approaches the number of dimensions, leading to a square matrix. - **Naive Absolute Baseline**: this baseline entails direct stitching between models using their vanilla representations, without any transformation, to verify any pre-existing compatibility and establish it as a lower bound. We agree it deserves further explanation; we will integrate this description into the preliminaries, since it is also used in Moschella et al. **Figure 2 Updates**: In response to these observations, we systematically repeated all classification experiments using five distinct random seeds, influencing initialization and anchor selection.
Additionally, we have transformed the plot into a violin chart format to present the variance clearly. We will do the same for the generation experiments. The updated visualization we will include in the paper can be found in the rebuttal PDF. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal! It addresses the comments I've raised in my review.
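The least-squares, anchor-based translation discussed in this thread can be sketched on synthetic data. This is a minimal illustrative example, not the paper's actual setup: the dimensions, noise level, and the hidden ground-truth linear map are all assumptions. It also illustrates the conditioning caveat from the rebuttal: the fit is well-posed once the number of anchors exceeds the latent dimensionality.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_anchors, n_test = 32, 64, 200  # illustrative sizes

# Synthetic "space 1" latents and a hidden linear map to "space 2".
X = rng.normal(size=(n_anchors + n_test, d))
M = rng.normal(size=(d, d))
Y = X @ M + 0.01 * rng.normal(size=(n_anchors + n_test, d))  # slightly noisy

# Fit the translation T on the anchor pairs only, via least squares
# (the numpy analogue of torch.linalg.lstsq used in the paper's setting).
T, *_ = np.linalg.lstsq(X[:n_anchors], Y[:n_anchors], rcond=None)

# Zero-shot translation of the unseen latents.
Y_hat = X[n_anchors:] @ T
err = np.linalg.norm(Y_hat - Y[n_anchors:]) / np.linalg.norm(Y[n_anchors:])
print(f"relative translation error: {err:.4f}")
```

When `n_anchors` shrinks toward `d`, the design matrix becomes square and more poorly conditioned, which is consistent with the outliers the authors report near that regime.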
Summary: In this paper the authors propose to translate latent spaces using an effectively angle-preserving affine transformation learned from pairs of anchors. Experiments are conducted on a wide range of tasks, showing reasonable performance when swapping in latent spaces of embeddings/features. Strengths: **Strengths** The authors have carried out an extensive suite of experiments encompassing a diverse set of tasks, including cross-training, cross-architecture, and several downstream tasks. The experimental design demonstrates a comprehensive approach to assessing the proposed method *per se*. Furthermore, this work shows that aligning two latent spaces with zero-shot stitching is possible without any retraining. This is an important observation, and the proposed method could benefit a wide range of downstream tasks. Weaknesses: **Weaknesses** The following concerns regarding the design of the transformation and its intrinsic limitations warrant consideration. - **Assumption of Linearity and Lack of Analysis in Generative Models**: The proposed method is effectively an angle-preserving affine transformation, which fundamentally assumes linearity of the mapping between two latent spaces (of embeddings and/or features). While "linearity of the transformation between seemingly different latent spaces has a theoretical foundation in research on identifiability in neural models," as the authors pointed out in the rebuttal, in the context of generative models the efficacy is still likely contingent on the extent to which the two latent spaces exhibit a linear mapping, because a good generative model may fill the slight mismatch with its own capacity. In detail, it is likely that an affine transformation may yield only a coarse mapping between two latent spaces and may not capture fine-grained relations (consider style transfer in generative models).
Thus, the good performance in classification tasks may be attributed to the capacities of the downstream models rather than the transformation itself. This is a critical assumption that necessitates further scrutiny, but the submission does not provide an in-depth analysis of it. In the current form of the manuscript, only autoencoding on MNIST/Fashion MNIST/CIFAR is shown in this regard. There is neither a qualitative comparison with the reconstruction from the VAE itself, nor quantitative/qualitative results against other stitching and zero-shot methods. The lack of such comparisons unfortunately weakens the justification of the proposed method. Adding such comparisons is thus suggested. Optionally, conducting such comparisons on common image datasets of higher resolution, like CelebA or ImageNet (with more powerful VAE models), may provide more insight in this regard, since it may potentially highlight the capacity of VAE models and the quality of latent space alignment. (**After author response**: See the discussion below) - **Ambiguity in Technical Contribution**: The technical novelty and significance of the proposed method remain unclear. The method effectively is an angle-preserving affine transformation. While this per se is not necessarily an issue, the missing studies mentioned above make it hard to assess the impact and implications of the proposed method. (**After author response**: This is not a major issue anymore, see the discussion below) - **Position of the Work within the Broad Field**: Latent space matching has been the subject of extensive research, albeit in varying contexts such as text, image processing, and generative models. A few works (not even an attempt to cover the overall picture of this widely studied topic) that share technical similarity with the proposed method are cited in the References.
However, the submission does not include a substantive comparison or discussion with the wide body of works in this field, making it challenging to ascertain the relative standing and uniqueness of the proposed method. (**After author response**: This is not a major issue anymore, see the discussion below) **References**: - Set Prediction in the Latent Space https://proceedings.neurips.cc/paper/2021/hash/d61e9e58ae1058322bc169943b39f1d8-Abstract.html - Formality Style Transfer with Shared Latent Space https://aclanthology.org/2020.coling-main.203.pdf - Homomorphic Latent Space Interpolation for Unpaired Image-To-Image Translation https://openaccess.thecvf.com/content_CVPR_2019/html/Chen_Homomorphic_Latent_Space_Interpolation_for_Unpaired_Image-To-Image_Translation_CVPR_2019_paper.html **Updates** - The previous version of this review mistakenly referred to the page limit. As the authors and the AC pointed out, this is NOT the case, and it has been corrected. - In Weaknesses, updated the Assumption of Linearity point to focus on generative models. - Added references to the discussion below. - The rating has been increased. Technical Quality: 3 good Clarity: 3 good Questions for Authors: N/A Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the review and comments. We appreciate the opportunity to address the concerns they've raised and offer clarifications on the following points: **Page Limit**: As per the **official 2023 Call for Papers**, submissions are indeed limited to **nine** content pages and **not eight**. Therefore, we don’t see any reason for the paper to be desk-rejected. **Assumption of Linearity and Lack of Analysis**: - The assumption of linearity of the transformation between seemingly different latent spaces has a theoretical foundation in research on identifiability in neural models [a, b, c, e, f]. In particular, in [a], it is proved that representations learned by a large family of supervised and generative models are identifiable up to a linear transformation. We will incorporate these additional references in the related works section. Our work is intrinsically centered around exploring the emergence of these properties in practical settings within deep neural networks (DNNs). - While we acknowledge the broader context of latent space matching, the references the Reviewer kindly suggested involve the introduction of constraints, penalties, or adjustments *during training*, which lie outside the scope of our investigation. - Our focus, instead, aligns closely with the works highlighted in the "Stitching and Zero-Shot" section of our manuscript (Section 2). These works exploit the linearity assumption and focus on finding a transformation that aligns the latent spaces ex-post. For example, in a very recent work, Moayeri et al. [d] further prove that these emerging properties exist, can be exploited, and deserve more attention from the community. [a] Roeder, Geoffrey, Luke Metz, and Durk Kingma. "On linear identifiability of learned representations." *ICML*, PMLR, 2021. [b] Khemakhem, Ilyes, et al. 
"Ice-beem: Identifiable conditional energy-based deep models based on nonlinear ica." NeurIPS, 2020. [c] Willetts, Matthew, and Brooks Paige. "I Don't Need u: Identifiable Non-Linear ICA Without Side Information." ArXiv. [d] Moayeri et al. "*Text-To-Concept (and Back) via Cross-Model Alignment*", ICML 2023. [e] Hyvarinen et al. "Nonlinear ICA using auxiliary variables and generalized contrastive learning." PMLR, 2019. [f] Sorrenson et al. "Disentanglement by nonlinear ica with general incompressible-flow networks (gin)", 2020. **Coarse-only mapping:** We would like to emphasize that we worked with both classification and generation as decoder tasks, observing good performance even in the latter scenario. This contrasts with the following comment: “*an affine transformation may yield only a coarse mapping between two latent spaces, and may not capture fine-grained relations*”. We will be happy to include other datasets/tasks to broaden our study, following any suggestions from the Reviewer. --- Rebuttal Comment 1.1: Comment: I would like to first thank the authors and the AC for pointing out the page limit. I also appreciate the authors' clarification on linearity and zero-shot stitching, and agree with it. Both have been reflected in the revised review. My concern on the assumption of linearity and lack of analysis is about generative models. While it's clear, as the authors pointed out, that for classification tasks linearity of representation has been identified with a theoretical foundation, I'm still not fully convinced of the degree to which such a case holds for generative models. Please see the revised review for details in this regard. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for taking the time to reconsider their evaluation and adjust their review, particularly concerning the page limit, linearity, and zero-shot stitching. We respectfully reiterate our answer to the *concern of Insufficient Comparison with Works within the Broad Field*. 
We will be happy to discuss the references provided by the Reviewer in the related work section. However, we still consider them out-of-scope for an experimental comparison: **the referenced methods require the enforcement of additional constraints during training.** In contrast, **our post-hoc method can be applied directly to pre-trained models**. Regarding the *assumption of linearity in generative models*: There is a large collection of results in the literature on the identifiability of generative models with auxiliary information [b, c, d, e, f, g, h, i]. When this side information is not available, it is not possible to have strict theoretical guarantees, as shown in [a] and [m]. However, a recent line of work demonstrated that it is possible to have the same characterization in unsupervised generative models, either by providing experimental evidence [Moschella et al. 2022, p], by measuring high identifiability scores for unsupervised generative models [n], or by providing theoretical evidence with a weaker notion of identifiability [p, o], but no auxiliary information. Our experimental assumptions are supported by these findings, suggesting that **in most cases** it is possible to connect latent spaces of generative models via simple transformations. We believe that our method's simplicity is a strength, not a weakness, of our work. The novel insights and practical applicability of our findings contribute to advancing the field, and we are committed to sharing this knowledge with the community. Once again, we thank the Reviewer for their valuable feedback and remain available to address any additional queries. **[a]** Francesco Locatello, Stefan Bauer, Mario Lucic, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf, Olivier Bachem. Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations. ICML 2019 **[b]** A. Hyvärinen, H. Sasaki, and R. E. Turner. Nonlinear ICA Using Auxiliary Variables and Generalized Contrastive Learning. PMLR 2019 [**c**] P. 
Sorrenson, C. Rother, and U. Köthe. Disentanglement by Nonlinear ICA with General Incompressible-flow Networks (Gin). ICLR 2020 [**d**] Khemakhem, R. P. Monti, D. P. Kingma, and A. Hyvärinen. ICE-BeeM: Identifiable Conditional Energy-Based Deep Models Based on Nonlinear ICA. NeurIPS 2020 [**e**] A. Hyvärinen and H. Morioka. Unsupervised Feature Extraction by Time-Contrastive Learning and Nonlinear ICA. NeurIPS 2016 [**f**] Locatello, Francesco, et al. "Weakly-supervised disentanglement without compromises." International Conference on Machine Learning. PMLR, 2020. [**g**] Locatello, Francesco, et al. "Disentangling Factors of Variation Using Few Labels”. ICLR 2020 [**h**] Khemakhem, Ilyes, et al. "Variational autoencoders and nonlinear ICA: A unifying framework." International Conference on Artificial Intelligence and Statistics. PMLR, 2020. [**i**] Von Kügelgen, Julius, et al. "Self-supervised learning with data augmentations provably isolates content from style.". NeurIPS 2021 [**l**] Zimmermann, Roland S., et al. "Contrastive learning inverts the data generating process." International Conference on Machine Learning. PMLR, 2021. [**m]** Hyvärinen, et al. "Nonlinear independent component analysis: Existence and uniqueness results." *Neural networks* 12.3 (1999): [**n]** Willetts, Matthew, and Brooks Paige. "I Don't Need u: Identifiable Non-Linear ICA Without Side Information." ArXiv [**o**] Barin-Pacela, Vitória, et al. "Identifiability of Discretized Latent Coordinate Systems via Density Landmarks Detection.". ICML Workshop on Structured Probabilistic Inference & Generative Modeling, 2023. [**p]** Asperti, et al. "Comparing the latent space of generative models." *Neural Computing and Applications* 35.4 (2023) [**q]** Kivva, Bohdan, et al. "Identifiability of deep generative models without auxiliary information." NeurIPS 2022
Summary: This article proposes a method of latent space translation through semantic alignment, exploiting the similarity of latent spaces learned by different neural models on semantically similar data. The method allows for direct conversion of learned representations between different pre-trained networks, and achieves zero-shot stitching of encoders and decoders without additional training. The method is widely validated in various experimental settings, tasks (classification and generation), and modalities, demonstrating the ability to zero-shot stitch neural models under different architecture and modality changes. Strengths: 1. The paper proposes a method for zero-shot stitching between encoders and decoders on various tasks, enabling seamless integration without the need for additional training. 2. Compared to previous methods, this article extends to different latent space dimensions, architectures, and modalities. 3. An extensive set of experiments has been conducted to show that it is possible to zero-shot stitch neural models across different architectures and modalities. Weaknesses: 1. In the experiments, only the most basic abs method is compared; no comparison is made with other methods. 2. In the generation task, zero-shot stitching is performed under the same model framework and training data. Would it be more practical to migrate between different model frameworks and training data? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How much performance difference is there between the proposed zero-shot stitching method and concatenation followed by finetuning? 2. How is the selection of anchor points determined, and how much impact do different selections have on performance? 3. In the experiments, zero-shot stitching is performed between different frameworks on the same dataset. Can this method be effectively used for neural frameworks trained on datasets from different domains? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable input, which has prompted us to further enhance the soundness of our work. Here, we provide deeper clarification and offer additional insights regarding: **Additional Comparisons**: we will add several columns to our tables (Table 1 and Table 2): **i)** performance of relative decoders; **ii)** stitching performance of relative decoders (to be used as a direct comparison with the Moschella et al. method); **iii)** stitching performance when optimizing an affine transformation with SGD (using the same anchor set as the other methods), namely “Linear”, as a standard and robust alternative stitching method [a, b]. We remain open to additional baseline suggestions. [a] Lenc and Vedaldi “*Understanding image representations by measuring their equivariance and equivalence*”, CVPR 2015. [b] Moayeri et al. “*Text-To-Concept (and Back) via Cross-Model Alignment*”, ICML 2023. A preview of the new Table 1 can be found in the rebuttal PDF. **Cross-Architecture stitching (generation)**: in the rebuttal PDF, we show a stitching experiment between the same autoencoder architecture used throughout the whole paper but with varying training seed (2 different ones) and bottleneck size (250 and 500) on multiple datasets. The performance is similar to that in Table 3 of the main manuscript, showing that cross-architecture stitching is possible even between autoencoders and achieves good performance. **Fine-tuning (Question 1)**: the link between fine-tuning on a concatenation of the two representations and stitching is unclear to us. What would be the downstream training objective? Could the reviewer kindly expand on this? We’d be open to adding it as an alternative/baseline method. **Anchor selection (Question 2)**: Our approach adheres to the standard RelRep setting, where anchor points are uniformly drawn from the training samples. Notably, Moschella et al. 
indicate that more intricate policies do not yield discernible improvements, a finding supported by their supplementary analysis in the appendix (Moschella et al., Section A2). We will further clarify this in the Preliminaries section. **Multi-domain (Question 3)**: if we interpreted the question correctly, the reviewer is asking whether our method can be applied to translate across samples represented in different domains. We see our “Cross-modality” section (4.2), where we translate from the text domain to the image one and vice-versa, as a generalization of this idea, given that a modality change is supposedly stronger than a domain one. Nevertheless, we present in the rebuttal PDF, and will incorporate in the paper, a new experiment where we stitch between classifiers trained on different versions of the CIFAR10 dataset: the standard one and a grayscale version. The performance is comparable to that in Table 1 of the main manuscript (CIFAR10 row), showing that cross-domain stitching is possible and achieves good performance. We thank the reviewer again for their feedback and remain available to discuss any further concerns or questions. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal.
Summary: This paper shows that the latent spaces of different pretrained models can be translated between each other with simple transformations using anchors. With this method, a variety of encoder architectures can be cross-stitched to different classifier heads or within autoencoding models. Strengths: This paper tackles an interesting problem: how to semantically align different trained latent spaces. The paper is a natural extension of Moschella et al. 2023, going from just relative encoding to full latent space transformations. The results are clear in showing that zero-shot stitching is possible without retraining the decoders. Weaknesses: It wasn't clear on my first read of the paper that this work is a direct follow-up to Moschella et al. 2023. Without that context, the paper was very difficult to read the first time. After reading Moschella et al., I was better able to follow the proposed work. Multiple details are missing, such as the construction of the anchor set, which are glossed over in this work and caused confusion about its zero-shot nature. Another major missing key detail is that the prior work's decoders must operate on the relative embedding; this crucial detail was not clear until after multiple read-throughs. This improvement should be front and center. I find it surprising that Tables 1 and 2 do not report Moschella's results. I realize that the decoders must be trained with the relative embeddings, but the lack of comparison feels like an obfuscation rather than a highlight of the difference between methods. Overall, I find the clarity of the paper to be a major hindrance. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: None Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 1 poor Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's insightful feedback, highlighting important aspects warranting clarification and enhancement in our work. In light of their review, we have identified these steps to improve our work: - Emphasize the relationship with RelRep in the Preliminaries section as already stated in the **general response** to better contextualize the foundations of our work; - Add a direct comparison with the RelRep method in both Table 1 and 2. A preview of the new results to be added to Table 1 can be found in the rebuttal PDF. - The anchor set construction procedure, aligned with the standard RelRep sampling (a uniform random sampling over the training set), will be incorporated into the Preliminaries section. This will both elucidate the process and draw a parallel with the original work's findings in Moschella et al.. - We will offer a more transparent depiction of our work's "zero-shot" nature. As correctly highlighted by the reviewer, our distinction from RelRep lies in removing the requirement for relative training of the decoder. Instead, we achieve (better) stitching performance by solving a closed-form problem. This methodology will be emphasized to highlight our innovation. We hope to have clarified the reviewer's concerns, and we respectfully remark that R2, R3, and R4 didn't consider clarity a significant weakness, assigning higher scores to it. Nevertheless, we remain available to discuss any further concerns or questions. --- Rebuttal Comment 1.1: Title: Thanks authors. Comment: The extra context is great. The clarity issues seem to be due to my unfamiliarity, I will increase my score to weak accept. I still am a little concerned with the final framing, but that's not a content issue.
Rebuttal 1: Rebuttal: We sincerely appreciate the constructive feedback provided by the reviewers. We would like to address the relationship between our work and the concepts presented in "relative representations" (Moschella et al.). While we draw upon the foundational principles of Moschella et al., it is essential to clarify that our work takes a different path. Therefore, we do not consider it a direct extension. - Moschella et al. showed that when different latent spaces share semantics: 1) a simple angle-preserving transformation connects them; 2) a reduced set of key points (anchors) can be used to reconcile them into a **new, distinct (relative) representation**. - Instead, we directly estimate the **transformation** from one space to another using the provided anchor points, without relying on an auxiliary representation. A key advantage of this approach is that it circumvents the necessity of training the decoder on relative representations, simplifying the integration of arbitrarily trained, diverse models. Considering the reviewers' valuable input on clarifying this aspect to strengthen its message, we commit to adding a new section to better specify the connection to Moschella et al. We'll present this relationship accurately and concisely, expanding the “Preliminaries” section, utilizing available space within the nine-page limit. Pdf: /pdf/ed0ac0d41fdb209a84cf0b9c0dbf2e9303188dd9.pdf
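The distinction drawn in this general response can be sketched as follows, assuming the usual cosine-similarity form of relative representations; all names, sizes, and the orthogonal toy map are our own illustrative assumptions.

```python
import numpy as np

def relative_rep(z, anchors):
    # Relative representation à la Moschella et al.: cosine similarity to each anchor.
    z_n = z / np.linalg.norm(z, axis=-1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=-1, keepdims=True)
    return z_n @ a_n.T

def estimate_translation(anchors_src, anchors_tgt):
    # Direct approach: estimate the map from one space to the other in closed form.
    T, *_ = np.linalg.lstsq(anchors_src, anchors_tgt, rcond=None)
    return T

rng = np.random.default_rng(0)
d = 32
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # an angle-preserving (orthogonal) map
anchors = rng.standard_normal((128, d))
z = rng.standard_normal((5, d))

# Relative representations are invariant under the angle-preserving map ...
rel_src = relative_rep(z, anchors)
rel_tgt = relative_rep(z @ Q, anchors @ Q)

# ... while the direct estimate translates absolute coordinates, so a decoder
# trained in the target space needs no retraining on relative representations.
T = estimate_translation(anchors, anchors @ Q)
```

The invariance of `rel_src`/`rel_tgt` is what lets RelRep decoders trained on relative representations be stitched, while `T` translates vanilla embeddings directly.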
NeurIPS_2023_submissions_huggingface
2023
Demystifying the Optimal Performance of Multi-Class Classification
Accept (poster)
Summary: This paper tackles the important problem of performance estimation in a multi-class setting. The contributions come in the form of several Bayes Error Rate estimators for different multi-class variants. Several theoretical guarantees are provided throughout the paper, as well as empirical results on synthetic and real datasets. Strengths: # Motivation The problem of estimating and validating the performance of a classifier in a multi-class setting has been a hot topic in Machine Learning. As such, the results provided in this paper are of prime interest to the ML community. Estimating the BER can lead to a better understanding of the multi-class problem and to novel methods for this setting. # Completeness The main strength of this paper resides in providing theoretical results for various settings. The first result is provided for the vanilla case of multi-class learning, where the data is drawn i.i.d. from a hidden distribution. The authors then provide a robust version of the previous result. Finally, the paper provides several adaptations of the previous results to the noisy setting, i.e., when noise is added to the labels. # Experimental results The experimental section provides extensive empirical validation of the theoretical results. Several synthetic and real-world datasets are analyzed and discussed in Section 5, thus shedding light on the benefits of the proposed estimates. Weaknesses: # Clarity While the main motivations are easy to follow, and the paper is overall very well organized, on several occasions definitions and notations are poorly introduced. In lines 76-78, the definition of Y is misleading since it uses the same notation as in multi-class multi-label learning, particularly since no assumption is made on c and how it is retrieved from y. In line 79, it's not clear what $e$ in $P_e$ stands for and how it is related to the definition that follows. In line 86, "an optimal classifier." implies that several classifiers can achieve BER. 
In other words, looking at the problem from an optimization point of view, it means that several global minima exist. I'm not quite sure I fully grasp the interest of such an assumption. In line 111 the notation $Y_{M:M}$ is used without being properly introduced before or in the remainder of the paper. In Definition 4, it's unclear whether an assumption on the class distribution exists when defining the K subsets. The absence of such an assumption might explain the trade-off discussed in line 163, in which case it should be clearly noted in the paper. I'm not quite sure I understand the role of the Notation paragraph (lines 65 to 71), when most of the notations are loosely introduced in the remainder of the paper. Equation (11) is quite puzzling. If I'm reading it right, the denoise estimator is based on samples that have the same exact $\bf x$, i.e., it implies that there are duplicated data in the learning dataset, which seems odd compared to the i.i.d. assumption of line 76. The denoise estimator of Equation (14) seems like a better and more realistic choice. # Conclusion While the experimental section provides several discussions on the proposed estimators, I strongly think that a Conclusion section is needed. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Can the framework be extended to multi-label classification? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and effort put into reading and reviewing our manuscript. Clarity 1. The focus of our paper is on multi-class single-label learning, where each data sample is associated with only one true class. Our rationale behind representing $\mathbf{y}$ as a vector stems from the fact that we consider soft labels and hence, $\mathbf{y}$ provides the soft-valued information of each class (see eq.(3)). The Bayes error rate (BER) is denoted with $P_e$ (see Definition 1), where the subscript $e$ stands for error. Clarity 2. We define a classifier as optimal if it achieves the BER that is unique (i.e., there is only one global minimum). However, there may exist several classifiers that achieve the BER. For example, if two different classifiers (with different complexity or algorithm) achieve the BER, both would be considered as optimal. This is the reason for using the wording ``an optimal classifier''. Clarity 3. We defined $Y_{M:M}$ in the Notation (lines 69 and 70). In particular, $Y_{M:M}$ is the $M$-th order statistic of $\mathbf{Y}$, that is, the largest value of $\mathbf{Y}$. Clarity 4. Definition 4 is just a definition, which holds without any assumption on the class distribution. However, in Section 3, we consider the case of soft labels and hence, Theorem 2 holds under this assumption. In the revised version of the paper, we will add this assumption in the statement of Theorem 2. Clarity 5. We acknowledge that part of the notation introduced in lines 65-71 is repeated in the paper and may be familiar to an audience in the machine learning community. However, we believe that these few lines clearly introduce the reader to some notation that will be extensively used throughout the paper. Clarity 6. We made the assumption that the feature set $\mathcal{X}$ is finite, which implies the potential existence of duplicated data samples with a non-zero probability. 
However, this does not create any conflict with the i.i.d. assumption. An illustrative example is i.i.d. sampling from the uniform distribution supported on {1, 2, 3}; each data sample can take one of these three values with equal probability, and two different data samples can have the same value. As another example, in the MovieLens dataset in our experiment, two different users can give a different rating to the same movie. Conclusion. We thank the Reviewer for this suggestion, with which we agree. We will make sure to add a Conclusion section in the revised paper. Question. This is a good question. Extending our framework to multi-label classification is a very interesting area for future work. To the best of our knowledge, several metrics (other than the BER) have been adopted for multi-label classification. Examples are the exact match ratio, the F1 score, and the Hamming loss. We believe that extending our framework to estimate the minimum Hamming loss of multi-label classification is not too challenging. However, extending it to estimate the other metrics may not be straightforward and may require the development of new tools. --- Rebuttal Comment 1.1: Comment: Thank you for your replies, I have a better understanding of the contribution. I'm changing my rating to 6, mostly due to the changes needed for the overall structure of the paper. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for reading through our rebuttal, and for changing their original score. The comments of the Reviewer greatly improve our paper.
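As we understand the estimator under discussion (this is a sketch of the plug-in form implied by the $Y_{M:M}$ clarification above, not necessarily the paper's exact eq. (3)), the direct soft-label BER estimate averages $1 - Y_{M:M}$, i.e., one minus the largest entry of each soft label:

```python
import numpy as np

def ber_soft(soft_labels):
    """Plug-in BER estimate: average of (1 - largest soft-label entry) per sample."""
    soft_labels = np.asarray(soft_labels, dtype=float)
    return float(np.mean(1.0 - np.max(soft_labels, axis=1)))

# One-hot (noiseless) soft labels give zero estimated error ...
print(ber_soft(np.eye(3)))               # 0.0
# ... while maximally ambiguous labels give the chance-level error for 4 classes.
print(ber_soft(np.full((10, 4), 0.25)))  # 0.75
```

This also makes the duplicated-$\bf x$ point concrete: the estimator operates per sample, so repeated feature values pose no difficulty.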
Summary: This paper studies the estimation of the Bayes error rate (BER) in the multiclass classification problem. BER is the best classification error (in terms of expectation) that can be achieved by the Bayes optimal classifier. First, this paper studies the soft labels case, where the direct extension of the estimator proposed by Ishida et al. [ICLR2023] can be shown to be valid in the multiclass case. Next, this paper studies the case where the soft labels are noisy and proposes the median-of-means estimator to improve the robustness. Furthermore, denoise and cluster-based BER estimators are also proposed. Experiments are also provided to justify the usefulness of the proposed methods. Strengths: 1. The writing is easy to follow: the paper's motivation, objectives, and solutions are clear. 2. The proposed methods are intuitive, easy to implement, and theoretically guaranteed. Many estimators are provided for different use cases. 3. I found the discussion about the robustness of the estimator $\psi_{\mathrm{soft}}$ interesting, and the proposed median-of-BERs estimator to mitigate this problem is theoretically guaranteed and practically effective in experiments. In theory, I found the discussion in Lines 158-166 provides interesting comparisons between $\psi_{\mathrm{soft}}$ and its MoB estimator. 4. Experiments are convincing in showing that the extensions beyond the straightforward estimator $\psi_\mathrm{soft}$ (MoB, cluster, denoising) can significantly outperform $\psi_\mathrm{soft}$. Weaknesses: 1. Experiments comparing with other methods may not be sufficient. In my understanding, only experiments on synthetic datasets with a limited hyperparameter choice are conducted to justify the superiority of the proposed methods (results in Figure 2). Additional experiments in Appendix C also use only synthetic datasets with the same hyperparameter choice as Figure 2. 2. 
For the non-synthetic datasets, it is difficult to know whether the Bayes error rate is well estimated since, as also suggested in the paper, the labels themselves can be unreliable because of human errors (as discussed in Lines 312-314). But this issue is very problematic by itself, and I have no idea how to better deal with it. 3. Soft labels can be restrictive in many scenarios, which may make the proposed method prohibitive. Nevertheless, the cluster estimator in (15) sounds reasonable as a good heuristic for BER estimation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Regarding Section 3.1, it is still not clear to me why having a higher value of the breakdown point guarantees more robustness of the estimator (Lines 139-141). What is the meaning of robust here? Does it mean a more accurate estimator under high noise? 2. I feel Definition 3 is quite difficult to interpret, and more discussion could be useful for readers to grasp its importance and how to use it. I failed to fully appreciate it and would like to ask some questions. - 2.1 What is the definition of $\mathcal{D}^{(\kappa + \tau)}$? - 2.2 How should the meaning of Eq. (5) be interpreted? Is there an intuitive explanation of the value of the breakdown points? - 2.3 Why is $B(\psi_\mathrm{soft}) = \frac{1}{n}$ not robust? Why can we conclude that this makes the estimator not robust (Line 143)? What value of $B(\psi)$ is robust? 3. Questions about the Figure 2 experiments: - 3.1 I don't see $\psi_\mathrm{soft}$ but rather $MoB_K(\psi_c)$; is this a typo? If so, which methods are compared? I think it would also be useful to see the performance of the other proposed estimators as well. - 3.2 Is it reasonable to use $k=1$ for the $k$-NN bound? I am not familiar with this method, but I feel $1$-nearest neighbor may not be robust enough. - 3.3 How is hyperparameter selection carried out?
I feel the choice of $K=44$ for the proposed estimator is quite unintuitive and was wondering about the sensitivity of this hyperparameter. 4. Is there a theory to justify the statistical properties of the cluster estimator (15)? If not, is there a way to theoretically validate this estimator? 5. It is difficult to know the accuracy of the BER estimation from Table 1. Is it possible to also put a reference of a model trained for these classification tasks, similar to Figure 4 in Ishida et al.? 6. It is discussed that the MoB estimator is more robust and therefore gives a lower value in Table 1. I was wondering if it is possible to have an outlier that has a low value as well? Typos: 1. Line 229: nosy -> noisy? 2. In some places $MoB$ is used instead of $MoB_K$, e.g., the legend in Figure 2 and some other places. Maybe it should be unified if this is not intentional. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I have no further things to add here. Please see the weaknesses and questions section for some discussions on limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
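The median-of-BERs estimator questioned above (Questions 3.1 and the $K=44$ choice) can be sketched in a few lines. This is our own hedged reconstruction, not the paper's exact definition: we assume the group-wise plug-in form "mean of $1 - \max$ soft-label entry" following Ishida et al.'s soft-label estimator, the function names are ours, and $K = \lfloor\sqrt{n}\rfloor$ mirrors the $K=44$ choice (for $n \approx 2000$).

```python
import numpy as np

def soft_ber(soft_labels):
    """Plug-in BER estimate from soft labels: mean of (1 - max class prob.)."""
    return float(np.mean(1.0 - soft_labels.max(axis=1)))

def mob_ber(soft_labels, K):
    """Median-of-BERs: median of the plug-in estimate over K disjoint groups."""
    groups = np.array_split(soft_labels, K)
    return float(np.median([soft_ber(g) for g in groups]))

n = 2000
p = np.tile([0.9, 0.05, 0.05], (n, 1))   # clean soft labels, plug-in BER = 0.1
p_noisy = p.copy()
p_noisy[:100] = 1.0 / 3.0                # 100 maximally uncertain outlier rows

K = int(np.sqrt(n))                      # K = floor(sqrt(n)), i.e. 44 here
plain = soft_ber(p_noisy)                # the plain mean is dragged up by outliers
robust = mob_ber(p_noisy, K)             # the median discards the corrupted groups
```

Because the 100 outliers are confined to a couple of the 44 groups, the median over group-wise estimates still returns the clean value 0.1, while the plain mean is biased upward.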
Rebuttal 1: Rebuttal: We thank the Reviewer for their thorough reading of our manuscript and their effort in carefully reviewing it. Weakness 1. We agree with the Reviewer that comparisons with SOTA BER bounds are performed only on synthetic datasets. However, this is not due to a limitation of our approach, but rather to the lack of SOTA BER estimators that are applicable to complex classification tasks. SOTA BER estimators either require finding the data distribution/divergence or have prohibitive complexity. Instead, we feel that our BER estimation methods are well-suited for such challenging tasks, as we have shown for the MovieLens dataset. That said, as the Reviewer has also suggested, we are training a model using the benchmark and real-world datasets in Table 1. Unfortunately, we could not obtain the results by the time this rebuttal was due, but we will incorporate them in the revised version. Weakness 2. It is true that labels can be unreliable, and this is one of the major challenges in understanding whether the BER is well estimated. This is the main motivation behind our contribution of proposing denoising methods to reduce the noise that can corrupt the data labels. To the best of our knowledge, this concept of leveraging denoising methods for BER estimation is new. Our results also suggest that this technique is effective and hence, we feel that it could be well-suited to mitigate the unreliability of the data labels. Weakness 3. It is true that soft labels are currently less used than hard labels. However, we feel that the use of soft labels is growing considerably (see the first paragraph of Section 3). In Section 4.2, we have also shown that the noisy soft-label framework can be used to study the case of one-hot labels. This study leads to the BER estimator in eq.(15), which the Reviewer deems reasonable. Question 1. We consider robustness to outliers, where an outlier is a data sample that is corrupted by high noise.
The breakdown point captures how robust a model is with respect to outliers. For example, the sample mean (i.e., an estimate of the mean of $X$, given by $\hat{\mu} = \frac{1}{n}\sum_{i=1}^n x_i$ where $\{x_i\}_{i=1}^n$ is a set of realizations of $X$) would fail to estimate the true mean even when there is a single outlier (e.g., $x_i = \infty$ or an $x_i$ with a value very different from the true mean). Thus, the sample mean has a breakdown point equal to $\frac{1}{n}$, meaning that one outlier can break the estimate. Differently, the median is more robust since it has a breakdown point of $\frac{1}{2}$, i.e., half of the samples need to be outliers to break the estimate. Question 2.1. Per Definition 3, we have that $\mathcal{D}^{(\kappa + \tau)} = \{ D_1,\ldots,D_{\kappa+\tau} \}$. Question 2.2. Eq.(5) quantifies the breakdown point for an estimator $\psi: \Omega^{\tau+\kappa}\to\Theta$. An estimator $\psi$ breaks if $\psi(\mathcal{D}^{(\kappa+\tau)})$ -- i.e., $\psi$ evaluated on the clean dataset with $\kappa + \tau$ data samples -- changes by at least $||\mathsf{rad}(\Theta)||$ when $\tau$ clean data samples are replaced with $\tau$ outliers. The breakdown point is defined as the minimum value of $\frac{\tau}{\tau+\kappa}$, i.e., the ratio between the number of outliers ($\tau$) and the total number of data samples ($\kappa + \tau$), such that $\tau$ outliers are sufficient to break $\psi$. Question 2.3. Per Definition 3, the breakdown point (see also our response to Question 2.2) is the minimum value of $\frac{\tau}{\tau+\kappa}$ such that $\tau$ outliers suffice to break the estimator $\psi$. In this sense, $B(\psi_{{\rm{soft}}}) = \frac{1}{n}$ is not robust. In general, $\frac{1}{2}$ is the maximum value of a breakdown point. Question 3.1. This is a typo and we will fix it: in the caption, it should be $\mathsf{MoB}_K (\psi _{\mathsf{C}})$.
Among the proposed estimators, in Figure 2 we only evaluated $\mathsf{MoB}_K(\psi _{\mathsf{C}})$. This is because we observed that it performs well with respect to SOTA methods; hence, we omitted the other estimators. Question 3.2. The optimal value of $k$ is not known, and this might be one of the reasons why it is generally set to $1$. Moreover, the value of $k$ should be a small fraction of the total number of samples, and this could be another reason for setting it to $1$. That said, it may happen (and this is the case asymptotically) that increasing $k$ leads to a more accurate BER estimation. In the revised version, we will also provide the BER curve relative to the optimal value of $k$ (which we will find numerically). Question 3.3. We chose $K=\lfloor \sqrt{n} \rfloor$ based on Theorem 2, which shows that if $K = o(\sqrt{n})$ and $n\to\infty$, then $\mathsf{MoB}_{K}$ is asymptotically normal. Thus, we chose $K$ to be large enough (for robustness), yet smaller than $\sqrt{n}$. Question 4. We did not investigate the statistical properties of the cluster estimator in eq.(15); this is an interesting future direction. As Reviewer 2nax pointed out, this estimator resembles the Nadaraya-Watson estimator with the Parzen window kernel. Thus, we believe that a theoretical validation of the estimator in eq.(15) may be doable. Question 5. This is a great suggestion! We have started working on this, but we could not obtain the results by the time this rebuttal was due. We will incorporate these suggestions in the revised version. Question 6. In general, there might be outliers that lead to lower values of the estimated BER. However, since the labels belong to $[0,1]$ and the estimated BER values in our examples are very small ($0.004\sim0.04$), we believe that outliers with large values would be more impactful on the final estimate than outliers with small values.
For instance, if the true estimate is around $0.05$, outliers with a value smaller than this might not even be considered outliers. Typo 1 and Typo 2. We will fix these in the revised version of the paper. --- Rebuttal Comment 1.1: Title: Thank you very much for your kind explanation. Comment: I have read the rebuttal, and it helped me understand much more about the parts that I could not fully understand at first. Thank you! I raised my score to 6. In particular, I found the responses about breakdown points (Q1-2), low-value outliers (Q6), and how to choose K (Q3.2) very satisfactory, and they are the main reasons I updated the score. Maybe it is just me, but I found the explanation of the breakdown point in the current paper not that easy to understand (as I am not familiar with it). I found the explanations in the rebuttal for Q1 and Q2.2 highly useful to grasp this concept. It might be useful to include such explanations in the paper if possible. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for having read our rebuttal, and for raising their original score from 5 to 6. The comments of the Reviewer greatly helped improve our paper, and we will for sure include the explanations from the rebuttal in the revised version of the paper.
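The sample-mean vs. median breakdown-point contrast from the rebuttal (Question 1) can be reproduced numerically. The concrete numbers below are ours, chosen purely for illustration:

```python
import numpy as np

clean = np.full(100, 0.05)         # 100 clean observations of a quantity near 0.05

one_bad = clean.copy()
one_bad[0] = 1e6                   # a single extreme outlier
mean_est = one_bad.mean()          # breakdown point 1/n: one outlier ruins the mean
median_est = np.median(one_bad)    # the median is unaffected

many_bad = clean.copy()
many_bad[:49] = 1e6                # 49 of 100 outliers: still below half
median_half = np.median(many_bad)  # breakdown point 1/2: the median still holds
```

One outlier already makes the sample mean useless, while the median stays at the clean value until (almost) half of the samples are corrupted, matching the breakdown points $\frac{1}{n}$ and $\frac{1}{2}$ discussed in the rebuttal.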
Summary: This paper proposes a few methods for estimating the Bayes error rate of multi-class classification in different scenarios. The proposed methods generalize the previous soft-label approach for binary classification [Ishida et al., 2023] and also extend it for robustness to label noise and outliers. The first extension uses the median-of-means in place of the ordinary average for better robustness, and the second one uses the feature-wise average, assuming the features are discrete. The third one uses a Nadaraya-Watson estimator with the Parzen kernel to incorporate data points near each point. The paper provides theoretical analyses of the estimation errors, convergence rates, and robustness guarantees. Finally, the paper presents experiments using synthetic data and three benchmark datasets demonstrating the superiority of the proposed methods. Strengths: - The paper is well-written and pleasant to read. - The extensions to the noisy cases, including the one with hard (one-hot) labels, may be practically useful because many datasets do not have soft labels. - The use of the median of means for this problem seems interesting. The theoretical results about this part are also insightful. - The experiments show the clear superiority of the proposed method over the baseline methods. Weaknesses: - The multi-class extension when soft labels are provided seems straightforward. - The denoising method in Eq. (11) is a bit disappointing because it heavily relies on the assumption that the features are discrete. - The denoising method in Definition 6 could be applied to any $\mathcal{X}$ with a distance metric, but this essentially estimates the conditional probability $P_{Y|X}$, which contradicts the following claim: "This is an appealing feature of our work, different from taking a plug-in approach that first estimates the distribution from which the data is drawn, and then evaluates the BER.
Indeed, our BER estimators, which are proved to be unbiased, consistent and robust to label noise and outliers, do not require the estimation of the data probability density to perform an effective BER estimation." - Regarding the previous point, if the method in Definition 6 is no better than the SOTA method for conditional probability estimation, the estimate will be an upper bound of the SOTA score (and of the Bayes error). However, it is not convincing to claim that the proposed method, which looks like a classical method, is better than the SOTA method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: ### Major comments - How does one choose $r$? More importantly, how did the authors choose it in the experiments? - The proposed denoising method essentially estimates the class posterior probability using the kernel method. What is the motivation for using this method when there are many other classification methods performing well? - The footnote on page 2 says "We assume that $\mathcal{X}$ is a finite set, but several of our results easily extend to the case when $\mathcal{X}$ is an infinite set." Having a finite $\mathcal{X}$ is very restrictive. Why not work on the infinite case in the first place if the extension is easy? What about the continuous case? ### Minor comments - Line 137, "$\operatorname{rad}(\theta)$ is the vector consisting of $L$ radiuses along the main axes of the largest $L$-dimensional ellipsoid in replaced by outliers.": I could not understand this. What are the $L$ radiuses and the vector consisting of them, more precisely? - Lines 164-165, "For example, setting $K = \ln n$ leads to a higher breakdown point (and hence, is more robust) than setting $K = \sqrt{n}$." Doesn't a larger $K$ have better robustness? - Line 178, "We assume that z has i.i.d. components each with zero mean (without loss of generality)." I think this does lose generality. - In footnote 3 on page 5, is the definition of sub-Gaussianity correct?
- I am not very comfortable calling the method in Definition 6 a "nearest-neighbor" method because it uses a fixed kernel (with a fixed $r$). I would call it a Nadaraya-Watson estimator with the Parzen window kernel. - I suggest using different letters for $P_e$ and $P_e(\cdot)$. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: A limitation is that the paper assumes that the set of features is finite. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
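The review's reading of the Definition 6 estimator as a Nadaraya-Watson estimator with a Parzen (box) window can be made concrete. This is a hedged sketch of that reading only, not the paper's exact estimator; the data, radius $r$, and function names are our own assumptions:

```python
import numpy as np

def parzen_nw_denoise(X, Y, r):
    """Nadaraya-Watson with a Parzen (box) kernel: each point's label is the
    average of the (noisy soft) labels of all points within distance r of it,
    itself included, so the denominator is never zero."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise dists
    W = (d <= r).astype(float)                                  # box kernel
    return W @ Y / W.sum(axis=1, keepdims=True)

# Toy data (values are ours): two well-separated clusters with noisy soft labels.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(3, 0.1, (50, 2))])
true_p = np.vstack([np.tile([0.9, 0.1], (50, 1)), np.tile([0.2, 0.8], (50, 1))])
noisy = np.clip(true_p + rng.normal(0, 0.2, true_p.shape), 0, 1)
denoised = parzen_nw_denoise(X, noisy, r=1.0)
# With r = 1.0, each window covers exactly one cluster, so the zero-mean label
# noise is averaged down within each cluster.
```

With this radius, each box window contains only points from the same cluster, so averaging shrinks the label noise without mixing the two classes.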
Rebuttal 1: Rebuttal: We thank the Reviewer for their thorough reading of our manuscript, and for the positive assessment of our contribution. Weakness 1. We agree that the multi-class extension may seem straightforward, but we also feel that this is not a weakness. Indeed, it is just the starting point for further analysis of the Bayes error rate (BER) estimator. Moreover, one needs to formally prove the theoretical properties of this estimator (Theorem 1) before exploring more advanced estimators (e.g., with denoising or robust versions). Weakness 2. The motivation to consider discrete features mainly stems from the fact that in practice many classification problems utilize such features (or quantized values from real-valued features). As we pointed out in the footnote on page 2, one can easily extend several of our results on discrete features to real-valued ones (see also our response to Major Comment 3). For example, as the Reviewer commented, the estimator in Definition 6, which extends the one in eq.(11), works well for real-valued (as well as for discrete or mixed-valued) features. Weakness 3. In our problem setting, the soft labels are available and they may be corrupted by some noise. In particular, the soft labels contain the value of $P_{Y|X}$ with noise; our proposed approach reduces the noise effect in the labels instead of estimating the data distribution. This was the main reason for that sentence. We feel that, although the denoising method in Definition 6 applied to one-hot labels is equivalent to estimating the conditional probability, our approach proposes a novel perspective which could lead to new directions for effectively estimating the BER. Denoising methods have been widely studied and applied to several signal processing problems; this paper shows that they are also effective for BER estimation. Weakness 4.
We believe that the considered SOTA BER bounds, namely the GHP and the $k$-NN error, leverage effective methods for estimating either the conditional probability or the divergence between the class-conditional distributions. In this sense, our evaluations show that we outperform SOTA methods. That said, we agree with the Reviewer that a comparison with an additional approach that uses a SOTA technique for estimating the conditional probability, and then uses this to evaluate the BER, would further validate our statement. We will add this comparison in the revised version. Major Comment 1. The parameter $r$ is a hyperparameter that should depend on the data feature space and on the underlying classification problem. Thus, it is not possible to choose an optimal $r$ before we look at the data and problem, and finding an optimal $r$ is a difficult task in general. In our experiments, we performed a heuristic, empirical search for the value of $r$. We chose the value of $r$ at the point where the estimated BER becomes stable (see Figure 9 in the supplementary material). The heuristic method that we used to find $r$ is similar to the elbow method for finding the number of clusters in $k$-means clustering algorithms. Major Comment 2. The main motivation for proposing our method lies in exploring new types of labels, i.e., the soft labels, which are starting to be utilized in many applications. By means of Theorem 1, using soft-valued labels with noise, we can estimate the BER by reducing the noise effect instead of estimating the data distribution, which is the approach pointed out by the Reviewer. We believe that by "many other classification methods performing well" the Reviewer means methods based on estimating the class posterior probability. The main reason why we did not focus on such estimators is the hardness of estimating the data distribution when the data is complex or high-dimensional.
Instead, our estimator just uses a cluster-like method that is applicable to most datasets (see also Remark 2). In the revised version, we will add this discussion. Major Comment 3. We believe that the assumption of discrete features is somewhat without loss of generality in practice. Indeed, several classification problems utilize discrete features, or quantized values from real-valued features. This is the main reason for our assumption. That said, we highlight that Theorem 1 and Theorem 2 also hold for an infinite feature space. Theorem 3 is the only result that requires careful analysis for the case of an infinite feature space. However, we feel that this is doable. In the revised version, we will add this. Minor Comment 1. Any $L$-dimensional ellipsoid (after rotation) can be defined as $\frac{x_1^2}{a_1^2} + \cdots + \frac{x_L^2}{a_L^2} = 1$. The $L$ radii (and not radiuses) are $\{a_1, a_2, \cdots, a_L\}$. In a $2$-dimensional space, one can imagine that an ellipsoid has $2$ main axes defining it, and the radii are the half-lengths of the main axes of the ellipsoid. Minor Comment 2. Yes, a larger $K$ gives more robustness; $K=\ln n$ and $K=\sqrt{n}$ should be interchanged. We will fix this in the revised version. Minor Comment 3. We believe that the assumption of zero-mean noise is without loss of generality (wlog). If a noise distribution has a non-zero mean, we can subtract the mean of the noise from the noise distribution, which then becomes zero mean. However, the assumption of i.i.d. is not wlog; we feel that this may be the source of confusion for the Reviewer. In the revised version, we will clarify that only the zero-mean assumption is wlog. Minor Comment 4. We will fix the definition of a $\gamma^2$-sub-Gaussian random variable, i.e., a random variable $X$ satisfying $\mathbb{E}[e^{\lambda (X-\mathbb{E}[X])}] \leq e^{\frac{\lambda^2 \gamma^2}{2}},~\forall \lambda \in\mathbb{R}$. Minor Comment 5.
We will refer to it as the Nadaraya-Watson estimator with the Parzen window kernel. Minor Comment 6. We will replace $P_e(\cdot)$ with $\mathcal{E}(\cdot)$. --- Rebuttal Comment 1.1: Title: Re: Rebuttal by Authors Comment: I thank the authors for the response. I have a few follow-up questions. > Weakness 2. The motivation to consider features that are discrete mainly stems from the fact that in practice many classification problems utilize such features (or quantized values from real-valued features). As we pointed out in the footnote on page 2, one can easily extend several of our results on discrete features to real-valued ones (see also our response to Major Comment 3). For example, as the Reviewer commented, the estimator in Definition 6, which extends the one in eq.(11), works well for real-valued (as well as for discrete or mixture valued) features. There are also many problems with continuous features, especially in modern applications. I would consider it a limitation. > Major Comment 2 (...) the Reviewer intends methods based on estimating the class posterior probability. Do we really need an estimate of class posterior probabilities? If we only need the argmax of them (like idx(.) does), we may only need a classifier. > Minor Comment 3. We believe that the assumption of zero mean noise is without loss of generality (wlog). If a noise distribution has a non-zero mean, we can subtract the mean of the noise from the noise distribution, which then becomes zero mean. However, the assumption of i.i.d. is not wlog; we feel that this may be the source of confusion for the Reviewer. In the revised version, we will clarify that only the assumption of zero mean is wlog. If the mean of the noise is unknown, one has to estimate it for centering. Can we still make the zero-mean assumption for free? --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for their thorough reading of our rebuttal, and for the additional follow-up questions.
We hope that our response properly addresses the Reviewer’s comments. ### Regarding Weakness 2. We totally agree with the Reviewer that continuous features are very relevant, and we apologize if our response was perceived as understating their importance. While some of our results (Theorem 1 and Theorem 2) can be easily shown to hold also for an infinite feature space, the estimator in eq.(11) assumes that the set of features is finite. As the Reviewer also pointed out, this is a limitation of this estimator, and it was one of the main motivations for proposing the cluster-based estimator in eq.(15). This estimator indeed extends the one in eq.(11), and it works well for real-valued (as well as for discrete or mixed-valued) features. The next step will consist of proving a result similar to Theorem 3 for the estimator in eq.(15). We are currently investigating this problem, and we are hopeful that this is doable. We will also add this discussion in the Discussion/Conclusion section that we will include in the revised version of the paper. ### Regarding Major Comment 2. Our approach in Definition 5 consists of two phases: we first propose a denoising label estimator in eq.(11) – which, as the Reviewer pointed out, estimates the class posterior probabilities – and then we use this to estimate the BER – by using eq.(12). In other words, we are not using a classifier to estimate the BER. We may have missed something in the Reviewer’s concern and, if this is the case, we would appreciate further elaboration on which existing classification method(s) can be used to estimate the BER. ### Regarding Minor Comment 3. Yes, the Reviewer is absolutely correct in saying that, if the mean is not known, then it has to be estimated. The wording ‘without loss of generality’ conveys the fact that estimating the mean of the noise (which is beyond the scope of our paper) is often regarded as a trivial task in many applications when the mean is unknown.
However, we acknowledge that this might cause confusion, and we will make sure to remove the ‘without loss of generality’ in the revised version of the paper.
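The two-phase approach described in the rebuttal (denoise the labels via a feature-wise average as in eq.(11), then plug the denoised labels into the BER formula of eq.(12)) might be sketched as below. This is our own hedged reconstruction: the exact forms of eq.(11) and eq.(12) are not reproduced, the plug-in step "mean of $1 - \max$ class probability" is our assumption, and all data values and function names are ours.

```python
import numpy as np

def featurewise_denoise(x_ids, Y):
    """For discrete features, average the noisy soft labels of all samples
    sharing the same feature value: zero-mean label noise averages out."""
    out = np.empty_like(Y, dtype=float)
    for v in np.unique(x_ids):
        mask = x_ids == v
        out[mask] = Y[mask].mean(axis=0)
    return out

def plugin_ber(soft_labels):
    """Plug-in BER from (denoised) soft labels: mean of (1 - max class prob.)."""
    return float(np.mean(1.0 - soft_labels.max(axis=1)))

# Toy setting: 5 discrete feature values, two classes, zero-mean label noise.
rng = np.random.default_rng(0)
x = rng.integers(0, 5, size=500)
true_p = np.stack([0.1 + 0.15 * x, 0.9 - 0.15 * x], axis=1)
noisy = true_p + rng.normal(0, 0.1, true_p.shape)
denoised = featurewise_denoise(x, noisy)
ber = plugin_ber(denoised)    # close to the true BER of about 0.3
```

Each feature value here groups roughly 100 samples, so the feature-wise average shrinks the label-noise standard deviation by about a factor of 10 before the plug-in step.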
Summary: This paper aims to design a tighter estimate of the multi-class classification error. The paper analyzes several theoretical aspects of the suggested Bayes error rate estimator, including its consistency, unbiasedness, convergence rate, variance, and robustness. Moreover, the authors utilize a denoising method to reduce label noise if such noise is present, and improve robustness to outliers by incorporating the median-of-means estimator. They validate the effectiveness of the theoretical results via experiments both on synthetic data under various noise settings and on real data (CIFAR-10H, Fashion-MNIST-H, and MovieLens -- a movie recommendation dataset). Strengths: The paper raises an important issue of estimating the performance limit of multi-class classifiers. It provides a thorough theoretical analysis, backed up by some experimentation. Weaknesses: I'm struggling to see the impact of this paper. It aims to demystify the optimal performance of multi-class classification, but I did not see any illustration of this in the paper. I would expect to see the performance of a SOTA method on a complex task (e.g., ImageNet) vs. the provided estimator, but the paper uses relatively easy classification tasks to illustrate the bounds. So it is not clear what the applicability of this paper is for more challenging tasks in Computer Vision and NLP. Also, I would expect some theoretical derivation of what a good feature space is for which the suggested denoising holds. Note that the denoising itself is not novel, and it seems like a somewhat ad-hoc solution to improve the accuracy of the estimator. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I would love the authors to explain the impact of their work and experiments (maybe in a Discussion/Conclusion section, which is currently missing). Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: My main concern is that the paper does not have a real impact on more complex Deep Learning tasks of NLP and Computer Vision. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for their time and effort put into thoroughly reading and reviewing our manuscript. Regarding the impact/applicability of our paper. We would like to first note that we did consider a complex classification task for the evaluation of our bounds using the MovieLens dataset. This is indeed a challenging task in computer vision, where a multi-modal network model classifies a two-hour movie (input feature) with audio. None of the SOTA Bayes error rate (BER) estimators would be applicable to such a complex task since they either require finding the data distribution/divergence or have prohibitive complexity. Differently, our estimator simply requires labels and a denoising method. Second, we would like to point out that the reason we evaluated the bounds on some easy classification tasks (e.g., Gaussian noise) was to showcase their effectiveness in estimating the BER in scenarios where the BER is indeed known. In summary, we agree that including more experiments on complex tasks would strengthen the paper. However, we also feel that this would go beyond the scope of this single paper. As also suggested by another Reviewer, in the revised version of the paper, we will add a Conclusion section with such a discussion. Regarding the denoising method. The results presented in this paper show that the proposed denoising method is consistent for any (finite) feature space in the asymptotic regime (see Theorem 3). We agree that finding a good feature space could be the next natural step, and we thank the Reviewer for this comment. In particular, one could analyze the rate of convergence of our BER estimator paired with the denoising method as a function of the characteristics (e.g., cardinality, distribution) of the feature space. We will add this discussion in the Conclusion section in the revised version of the paper.
Finally, we would like to note that our denoising technique is theoretically universal (see Theorem 3) and hence, we feel that it is not an ad-hoc solution. Of course, if we have knowledge about the noise distribution in the dataset, then we can improve the accuracy of the estimator, which might indeed be an ad-hoc solution. However, here we make a very general assumption on the noise distribution (i.e., i.i.d. and zero-mean). In summary, we agree that the denoising technique can be further theoretically analyzed, but we do not feel that this represents a weakness of our work. --- Rebuttal Comment 1.1: Title: Follow up on rebuttal Comment: I would like to thank the authors and other reviewers for the discussion on this page. I do agree with Reviewer 2nax that the denoising method is disappointing. I believe there should be a thorough discussion about the limitations of the proposed approach. This goes back to my comment on impact. However, I also agree with the other reviewers that, overall, the paper might be interesting to the community and encourage further research. Given the feedback from other reviewers and the thoughtful follow-ups of the authors, I changed my feedback to 5. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for providing further feedback on our paper and raising the score to 5. We acknowledge that the denoising method presented in eq.(11) is specifically suitable for discrete features, which does represent a limitation of our work. However, it is important to highlight that we are aware of this limitation, as shown by our introducing a cluster-based denoising approach (eq.(15) in Definition 6) designed to be effective with any type of feature. We are grateful to the Reviewer for pointing out the absence of a discussion addressing this limitation. We will include a detailed discussion in the revised version of the paper.
NeurIPS_2023_submissions_huggingface
2023
From Trainable Negative Depth to Edge Heterophily in Graphs
Accept (poster)
Summary: This paper tackles the problem of graph heterophily in training Graph Neural Networks. Through graph spectral analysis, the authors decompose the graph Laplacian matrix into eigengraphs. Building on the observation that low/high-frequency eigengraphs correspond to homophily/heterophily, the authors propose to use a continuous and trainable depth parameter to model the weight of the eigengraphs, so the trained GNN can adapt to the level of heterophily in the target graph. The authors also recognize the efficiency issue of eigendecomposition in large graphs and hence propose an approximation method that only considers the eigenvectors associated with the top-k largest/smallest eigenvalues, together with two trainable depth parameters, to improve efficiency. Experimental results show promising improvement over baseline methods on heterophily datasets. Strengths: I enjoyed reading the paper, as the authors explain the details and intuition behind the equations very well. The experiments are also solid verification of the proposed methods. The functional calculus trick to transfer the depth from the integer domain to the real domain is interesting and novel. The approximation method TEDGCN-D seems like a good alternative to using the full eigendecomposition and is organically developed from the original version. Weaknesses: My primary concern lies in the comparison to existing work. The related work section only mentions some work in the related field. However, a large body of work also recognizes the importance of incorporating high-frequency components in heterophilic graphs. How is the proposed method different from the existing ones? More specifically, note that in the formulation of the paper, the eigenvalues are shifted to (0,1] and d is applied uniformly to both low- and high-frequency eigengraphs, meaning that d only shifts the focus between high and low frequencies (also amplifying the gap between eigenvalues); what makes this different from the other adaptive methods that the authors themselves point out in the paper?
Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Line 187. I do not fully understand the negative depth as explained for integer values. Since the proposed method only uses either a positive or a negative depth, when does the model perform both positive message passing and negative message passing? - The value of K is max(1000, 0.1n); I think 0.1n is still a very large value. (1) What is the processing time for TEDGCN-D on a larger dataset, e.g., ogbn-arxiv? (2) What happens when a smaller K is used for a larger graph? Are the top few components sufficient? - Line 305. How is the optimal depth calculated? If it is a trainable parameter, how do you plot the full range of all depths? - Is using a single depth parameter d necessary? For example, you could use a trainable d for each different eigengraph. Note that this should bring minimal extra computation cost, as the complexity is still the same. This could potentially provide more flexible adaptation to different graphs. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The proposed method uses the graph spectrum and hence requires additional computation for new and changed graphs. The cost of computing the eigendecomposition for large graphs, even with the approximation method, is still unclear. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: line 187. I do not fully understand the negative depth explained for integer values. Since the proposed method only uses either positive or negative depth, when does the model perform both positive message passing and negative message passing?** **Response**: When the depth is negative, it does not mean **all** edges in the augmented graph are negatively weighted. As we point out in the analysis part, every graph can be regarded as a combination of many eigengraphs (Figure 1 and Eq. (3)). (1) Even the eigengraph corresponding to the highest frequency (e.g., the red eigengraph 3 in Figure 1) may still contain some positively weighted edges (e.g., $\frac{1}{6}$ in eigengraph 3 in Figure 1). (2) Once we obtain the negative depth, it is equivalent to merging all eigengraphs back into an augmented graph, where the eigengraph corresponding to the highest frequency may receive a large weight after merging. Due to reasons (1) and (2), the positively weighted edges in the high-frequency eigengraphs may remain positively weighted, while some other edges may change from positively weighted to negatively weighted. In this case, both positive and negative edges exist in the augmented graph, and the model performs positive and negative message passing simultaneously. This can also be verified in Figure 6: some red points show that the original positive weights of some edges in the graph are even increased. **Q2 and Limitations: The value of K is max(1000, 0.1n), I think 0.1n is still a very large value. (1) What is the processing time for TEDGCN-D on a larger dataset, e.g., ogbn-arxiv? (2) What happens when a smaller K is used for a larger graph? Are the top few components sufficient?** **Response**: It is true that 0.1n could still be large for a very large graph. (1) The processing time (most of which is spent on the eigendecomposition) for ogbn-arxiv is about 1 hour.
(2) If the graph is dominated only by low/high-frequency components, a smaller $K$ is sufficient for TEDGCN-D, which is validated by the results in Table 1 and Table 2. However, if the dataset contains a non-trivial portion of intermediate-frequency components, the top/bottom few components would not be sufficient, and TEDGCN-D may perform relatively poorly, which can be observed in Table 2 on the Actor dataset: TEDGCN-S's accuracy (35.3\%) is much better than TEDGCN-D's (27.6\%). One possible solution to this problem (also a possible solution to the scalability issue) is as follows: since each node is closely related to the nodes within a multi-hop receptive field, we can divide the large graph into several connected components (i.e., subgraphs), set a maximum node number limit for each subgraph, and run TEDGCN-S on each subgraph. For example, for a graph with 10000 nodes, we can divide it into 10 connected components/subgraphs, each with about 1000 nodes. With a vanilla implementation of eigendecomposition with time complexity $O(n^3)$, this solution can accelerate the algorithm on each subgraph by about 1000 times. In addition, this design does not sacrifice/discard low-, high-, or intermediate-frequency components. We leave the exploration of this tentative idea to future work. **Q3: line 305. How is the optimal depth calculated? If it is a trainable parameter, how do you plot the full range of all depths?** **Response**: Firstly, for each result in Tables 1 and 2, we set an initial depth value and then start training the model. During this process, the depth changes. When the best **validation accuracy** is reached, we compute the testing accuracy. Secondly, for Fig. 4, it is slightly different. Since it is only for illustration purposes, we skip the validation step and the initial unstable training epochs.
We directly plot the test accuracy as it changes with the depth value on the fly during training, so the plotted testing accuracy for the optimal depth in Fig. 4 is slightly higher than that in Tables 1 and 2. Different initial depths were tried, but the optimal values are very similar/close. Fig. 4 shows one realization in each case. **Q4: Is using a single depth parameter d necessary? For example, you can use trainable d for all different eigengraphs. Note that this should bring minimal extra computation cost as the complexity is still the same. This potentially provides more flexible adaptation to different graphs.** **Response**: This is a really good suggestion. It may open the door to a broad family of future models. While the single-depth and dual-depth designs are simple and effective, they are not the only strategies. For example, the current TEDGCN-D can capture both low- and high-frequency components but may discard the intermediate-frequency components. We may design models that capture both low- and intermediate-frequency components, or both high- and intermediate-frequency components, in future work. We may also design models with three/four depths, or even more complex/specific models with stronger expressive power in the spectral domain. **Response to Weakness**: Graph spectrum analysis is indeed a commonly used tool in many works on handling heterophily. The differences between various models lie in the specific ways of manipulating the high/low-pass filters. As we describe in the introduction, our **core theoretical contribution** can be summarized as follows: to our best knowledge, we are the *first* to unveil the intrinsic relationship between the *negative GCN depth* and edge heterophily in graphs. We also provide in-depth geometric and spectral explanations for negative depth. We will include a discussion of this point in the related work section in the revised version of our paper.
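The TEDGCN-D approximation discussed in Q2 (keep only the $K$ lowest- and $K$ highest-frequency eigengraphs, each group with its own depth) can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function `dual_depth_filter` and the toy graph are assumptions, and in practice the $K$ extreme eigenpairs would come from a sparse Lanczos-type solver rather than a full eigendecomposition.

```python
import numpy as np

def dual_depth_filter(L_sym, K, d_low, d_high):
    """Sketch of a TEDGCN-D-style propagation matrix: only the K lowest- and
    K highest-frequency eigengraphs are kept, with separate depths d_low and
    d_high; intermediate frequencies are discarded (hence the Actor caveat)."""
    lam, U = np.linalg.eigh(L_sym)            # eigenvalues in ascending order
    g = 1.0 - 0.5 * lam                       # shift spectrum into (0, 1]
    n = L_sym.shape[0]
    P = np.zeros_like(L_sym)
    for i in range(K):                        # low-frequency augmented graph
        P += g[i] ** d_low * np.outer(U[:, i], U[:, i])
    for i in range(n - K, n):                 # high-frequency augmented graph
        P += g[i] ** d_high * np.outer(U[:, i], U[:, i])
    return P

# Toy connected, non-bipartite graph (a triangle plus a tail).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D_inv_sqrt = np.diag(A.sum(axis=1) ** -0.5)
L_sym = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# d_high < 0 amplifies the high-frequency eigengraphs, as in the heterophilic case.
P = dual_depth_filter(L_sym, K=2, d_low=1.0, d_high=-2.0)
```

When $2K = n$ nothing is discarded, so the dual filter with equal depths reduces to the full spectral filter; for large graphs $K \ll n$ is what makes the approximation cheap.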
--- Rebuttal Comment 1.1: Comment: After reading the authors' response, I still believe this is a solid paper with a good contribution, and the answers cleared up some of my concerns. I am happy to raise my score from 6 to 7. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you very much for reading our response and raising your score!
Summary: The paper proposes a new GCN that is capable of handling both homophilic and heterophilic networks. The method can be interpreted as generalizing the depth of the GCN, which is ordinarily a natural number and a hyperparameter, to an arbitrary real number that is a trainable parameter. The first version of this method is essentially a variant of the well-known SGC method that makes the exponent on the graph matrix (corresponding to the depth) a trainable parameter. This can be computationally expensive, so the paper also proposes a second version which extracts the lowest-frequency and highest-frequency components of the graph, and assigns a tunable depth parameter to each set of components. Experiments show that these versions generally achieve comparable or superior performance on the standard set of benchmark networks for homophilic/heterophilic node classification, relative to some prior methods. Other experiments examine the depth parameters found by the method on various networks, as well as other insights from the method. Strengths: - The paper is organized well and easy to read. - The proposed method is relatively simple compared to similar recently proposed methods like GPR-GNN, but still demonstrates comparable or superior performance. - The paper proposes and evaluates both a conceptually simple version of the method and a slightly more complex but scalable version. Weaknesses: - As the authors note, this method is only directly applicable to the most basic setting of static undirected graphs in the transductive setting. Many of the competing methods to which this paper provides comparisons also apply to the inductive setting. - While the simplicity of the method is appealing, since its dual variant (which is the only scalable variant) discards intermediate frequency components, it may struggle with graphs like Actor for which these components also seem crucial. The paper notes this.
- While the number of datasets is quite reasonable, and these have been the most popular ones in this field recently, I would have liked to see evaluations on different datasets. Three of the datasets are very small, and two others (Squirrel/Chameleon) have recently been shown to have other issues - see the paper cited below. That paper also introduces other heterophilic datasets, which could yield a better evaluation. - This is not really a weakness, but I do not see a significant conceptual contribution here prior to the introduction of the new method. There has been an abundance of recent papers on handling heterophily, and hence an abundance of discussion of various spectral filters and their interpretations. ### Typos/minor - Line 101: "and the corresponding eigengraph $\mathbf{u}_1 \mathbf{u}_1^\top$ has an identical value $\frac{1}{n}$ for all entries" - isn't this only true for regular graphs? - Line 125: "also corresponds a" - Line 153-154: "applying an arbitrary function on a graph Laplacian is equivalent to applying the same function only on the eigenvalue matrix" - I think "arbitrary" has to be specified a bit here, e.g., this is true for a matrix squaring function but not an entrywise squaring function - Table 2 caption: "heterphilic" Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The Squirrel and Chameleon datasets were recently found to have a lot of erroneous duplicates ("A critical look at the evaluation of GNNs under heterophily: are we really making progress?" by Platonov et al.). Is this paper using the revised datasets or the originals? - The current method does not apply to the inductive setting, which is noted as a limitation of the work. Do the authors have any ideas on how the insights here could apply to that setting? - The depth parameter found by the single variant often seems to be interpretable. Are the two depth parameters found by the dual variant also interpretable?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations are very briefly discussed in the appendix and pertain to the limited setting to which the method is applicable (static/transductive). There could be more discussion of limitations beyond this, such as limitations about the evaluation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: The Squirrel and Chameleon datasets were recently found to have a lot of erroneous duplicates ("A critical look at the evaluation of GNNs under heterophily: are we really making progress?" by Platonov et al.). Is this paper using the revised datasets or the originals?** **Response**: We use the original datasets in our submitted paper. Thanks a lot for reminding us of the issues reported in this ICLR 2023 paper. We will add this paper and some other works discussed in it to the related work section in the revised version of our paper. In addition, during the rebuttal period, we have evaluated TEDGCN-S on 5 new datasets provided in this ICLR 2023 paper by Platonov et al. (Revised Chameleon, Revised Squirrel, Tolokers, Minesweeper and Roman Empire) with the same semi-supervised setting as the other datasets in our paper. **The results are presented in Table 1 in the general response above**. We can observe that TEDGCN-S outperforms all baselines on Revised Chameleon, Revised Squirrel and Tolokers and achieves the second-best performance on Minesweeper and Roman Empire, which demonstrates the effectiveness of TEDGCN-S on these 5 new datasets from Platonov et al. **Q2 and Weakness 1: The current method does not apply to the inductive setting, which is noted as a limitation of the work. Do the authors have any ideas on how the insights here could apply to that setting?** **Response**: For the inductive setting, one possible solution is to borrow the idea from GraphSAGE about how to generalize the transductive GCN to the inductive setting. Following a similar idea as in GraphSAGE, since each node is closely related to the nodes within a multi-hop receptive field, we can sample a multi-hop subgraph (e.g., 3-5 hops) for the center node $v$ whose embedding needs to be optimized. Then, we can run TEDGCN-S on this small subgraph around the center $v$ to learn the optimal depth on this small subgraph.
This design offers the flexibility for different subgraphs to learn different optimal depths. Furthermore, this can be applied to the inductive setting: when an unseen/new node arrives, we can simply sample such a subgraph and run TEDGCN-S on it without re-training TEDGCN or conducting eigendecomposition on the whole large graph, which can save a lot of time/space. **Q3: The depth parameter found by the single variant often seems to be interpretable. Are the two depth parameters found by the dual variant also interpretable?** **Response**: The two depths can be explained as follows: as we point out in the analysis part, every graph can be regarded as a combination of many eigengraphs (Figure 1 and Eq. (3)). TEDGCN-D merges the $K$ eigengraphs corresponding to the highest frequencies into a high-frequency augmented graph and the $K$ eigengraphs corresponding to the lowest frequencies into another, low-frequency augmented graph. Then, the high-frequency augmented graph and the low-frequency augmented graph are considered together to learn the optimal combination of depths on both augmented graphs. The obtained depth on each augmented graph can be interpreted similarly to the single depth in TEDGCN-S. **Response to Weakness 2**: It is true that TEDGCN-D may discard intermediate-frequency components, and this is reflected in the results of TEDGCN-D on Actor. One possible solution to this problem (also a possible solution to the scalability issue) is as follows: since each node is closely related to the nodes within a multi-hop receptive field, we can divide the large graph into several connected components (i.e., subgraphs), set a maximum node number limit for each subgraph, and run TEDGCN-S on each subgraph. For example, for a graph with 10000 nodes, we can divide it into 10 connected components/subgraphs, each with about 1000 nodes.
With a vanilla implementation of eigendecomposition with time complexity $O(n^3)$, this solution can accelerate the algorithm on each subgraph by about 1000 times. In addition, this design does not sacrifice/discard low-, high-, or intermediate-frequency components. We leave the exploration of this tentative idea to future work. **Response to Weakness 3**: Thanks for the suggestion. Please refer to our response to **Q1** for this point. **Response to Weakness 4**: Graph spectrum analysis is indeed a commonly used tool in many works on handling heterophily. We will consider moving part of the background on spectral graph theory to the appendix in the revised version of our paper. Furthermore, as we describe in the introduction section, our core theoretical contribution can be summarized as follows: to our best knowledge, we are the first to unveil the intrinsic relationship between the **negative GCN depth** and edge heterophily in graphs. We also provide in-depth geometric and spectral explanations for negative depth. **Minors 1: Line 101: "and the corresponding eigengraph $\mathbf{u}\_{1}\mathbf{u}\_{1}^{\top}$ has an identical value $\frac{1}{n}$ for all entries" - isn't this only true for regular graphs?** **Response**: It is true for the symmetrically normalized graph Laplacian ($\mathbf{L}\_{sym}$) for **any** connected undirected graph. This is because any $\mathbf{L}\_{sym}=\mathbf{I} - \mathbf{D}^{-\frac{1}{2}}\mathbf{A}\mathbf{D}^{-\frac{1}{2}}$ for a connected undirected graph has an eigenvalue $0$, which has one corresponding eigengraph $\mathbf{u}\_{1}\mathbf{u}\_{1}^{\top}$. **Response to other minors/typos**: Thanks for the comments. We will fix the typos and polish the text according to your suggestions in the revised version of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I appreciate your running your method on the datasets from Platonov et al. Regarding Q3, apologies if my question was unclear.
What I was looking for is not an explanation of the math of the two depth parameters, but rather something like Figure 4, where you show that for a real-world network with a single trainable depth, the depth that is found reflects whether the network is considered homophilous or heterophilous. Are the two depth parameters similarly interpretable on real-world datasets? Regarding the eigenvector with zero eigenvalue: >Response: It is true for the symmetrically normalized graph Laplacian ($\mathbf{L}_\text{sym}$) for **any** connected undirected graph. According to [this](https://people.orie.cornell.edu/dpw/orie6334/Fall2016/lecture7.pdf), while the all-ones vector is a zero-eigenvalue eigenvector of the unnormalized Laplacian for any graph, for the normalized Laplacian, the eigenvector depends on the degrees. --- Reply to Comment 1.1.1: Title: Further Discussion with Reviewer iB1Z Comment: **Further P1: Regarding Q3, apologies if my question was unclear. What I was looking for is not an explanation of the math of the two depth parameters, but rather something like Figure 4, where you show that for a real-world network, with a single trainable depth, the depth that is found reflects whether the network is considered homophilous or heterophilous. Are the two depth parameters similarly interpretable on real-world datasets?** **Response**: Thanks for clarifying your question. For the real-world datasets: first, for homophilic datasets, similar results are obtained as with TEDGCN-S; both $d\_{l}$ and $d\_{h}$ are positive, and $d\_{l}$ and $d\_{h}$ are close to the single $d$ in TEDGCN-S (e.g., $d\_{l}=5.65$ and $d\_{h}=5.98$ for Cora). Second, for heterophilic datasets, the results are a little different and interesting. For example, for Chameleon, $d\_{l}=1.56$ and $d\_{h}=-2.76$. This can be explained as follows: our adopted transformation function is $g(\lambda)=(1-0.5\lambda) \in (0, 1]$. When $d\ge 0$, $0<g(\lambda)^{d}\le 1$.
When $d < 0$, $g(\lambda)^{d} > 1$, which is reflected in Figure 5. Assume that we have two eigenvalues $0<\lambda\_{l}<\lambda\_{h}<2$ and $d\_{h} < 0 < d\_{l}$. If we adopt one single negative depth $d\_h$, we have $g(\lambda\_{h})^{d\_h} > g(\lambda\_{l})^{d\_h} > 1 > g(\lambda\_{l})^{d\_l}$. Thus, to capture heterophily, a positive $d\_{l}$ and a negative $d\_{h}$ may further enlarge the relative weight strength between the high- and low-frequency eigengraphs (i.e., more flexibility), compared to a single negative depth. This is also validated in Table 2: TEDGCN-D outperforms TEDGCN-S on Squirrel/Chameleon/cornell5. To conclude, because of the adopted transformation function ($g(\lambda)=(1-0.5\lambda) \in (0, 1]$), the weights of high-frequency eigengraphs will be amplified to $\gg 1$ with a negative $d\_{h}$ or reduced to $\ll 1$ with a positive $d\_{h}$. So $d\_{h}$ plays an important role in adjusting the weights. According to the results of TEDGCN-D, a positive $d\_{h}$ (with a close positive $d\_{l}$) reflects homophily, and a negative $d\_{h}$ ($d\_{h} < 0 < d\_{l}$) reflects heterophily. **Further P2: Regarding the eigenvector with zero eigenvalue: Response: It is true for the symmetrically normalized graph Laplacian ($\mathbf{L}\_{sym}$) for any connected undirected graph. According to this, while the all-ones vector is a zero-eigenvalue eigenvector of the unnormalized Laplacian for any graph, for the normalized Laplacian, the eigenvector depends on the degree.** **Response**: We thank the reviewer for pointing out that $\mathbf{u}\_{1}$ should be $\mathbf{D}^{0.5} \mathbf{e}$ (up to normalization), where $\mathbf{e}$ is an all-ones vector. We will correct this point in the revised version of our paper. One thing we want to clarify is that when $\mathbf{u}\_{1} = \mathbf{D}^{0.5} \mathbf{e}$, the first eigengraph still corresponds to edge homophily in message passing, with all edges weighted positively, which does not affect the key idea and the model design of our work.
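The inequality chain in this exchange is easy to verify numerically. In the sketch below, the eigenvalues $\lambda_l=0.4$ and $\lambda_h=1.8$ are illustrative values I chose, while the depths are the Chameleon values quoted in the response above.

```python
# g shifts eigenvalues of L_sym (which lie in [0, 2)) into (0, 1].
g = lambda lam: 1.0 - 0.5 * lam

lam_l, lam_h = 0.4, 1.8        # illustrative low/high frequencies
d_l, d_h = 1.56, -2.76         # Chameleon depths reported in the response

w_low = g(lam_l) ** d_l        # ~0.71: low-frequency weight shrinks below 1
w_high = g(lam_h) ** d_h       # ~575:  high-frequency weight amplified far above 1
```

With a single negative depth `d_h` applied everywhere, even the low-frequency weight `g(lam_l) ** d_h` exceeds 1; letting the two depths differ in sign widens the gap between the two weights, matching the "more flexibility" argument.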
Summary: This paper shows a connection between the depth of a GCN and its suitability for homophilic/heterophilic graphs by analyzing graph spectra. It proposes to address heterophily with negative depth and presents a GNN architecture called TeDGCN which allows for trainable and negative depth. TeDGCN outperforms or is competitive with other GCN variants on a number of graph datasets. It further leads to a graph augmentation approach when the optimal depth is known. Strengths: This paper tries to extend GCN with trainable real-valued depth. This has consequences for automatic tuning of the GCN depth and for better addressing heterophilic datasets, both of which make it a significant novel contribution to the community. It establishes the connection between positive/negative depth, low/high-frequency components of the graph signal, and homophily/heterophily. The analysis results in a new architecture, TeDGCN. The paper further extends it to two variants, dealing with both low- and high-frequency components and addressing scalability challenges for large graphs. The method gives superior results, especially on heterophilic datasets. The trained depths are analyzed and also used to propose a graph augmentation mechanism which sometimes outperforms TeDGCN. The paper is well-written and the ideas are presented clearly. Weaknesses: To enable trainable depth, the paper operates on a simplification of GCN which removes the intermediate non-linearities. Thus the resulting model is a linear model, which limits the expressive power of the approach. Also, as a GCN variant it is not applicable to the inductive setting. It is not clear whether the lessons from this paper can be applied to more advanced GNNs or hold more broadly. Essentially, trainable depth results in an augmented graph. Can this augmented graph be used with non-linear GNNs to still capture heterophily? Some baselines with trainable depth (ODE-based GNNs) could be compared against.
Is it possible to realize negative depth by inverting the graph Laplacian first? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: On line 262 the split should be 60/20/20, right? Recent work [1] has shown pressing issues in old benchmarks of heterophilic datasets, which have been used here. Can you also include, or replace these with, results on the newer datasets proposed in [1]? The same paper also shows that standard architectures like SAGE and GAT can outperform heterophily-specific architectures. Can you include these as baselines? Even if not adding datasets from [1], these baselines would be representative of models with higher expressive power than GCN or GCN variants. [1] Platonov et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?" ICLR 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitation to transductive problems is mentioned briefly in the appendix, but the limitation to linear models and expressive power is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: On line 262 the split should be 60/20/20 right?** **Response**: Thanks for your question. As we mentioned in line 254, we adopt the **semi-supervised** node classification task to evaluate TEDGCN and the other baselines, with a 20/20/60\% split for the training/validation/testing sets. The reason is that in real-world applications, it is very difficult to obtain the fully supervised setting, in which a large ratio of node labels is known and marked as training data points (e.g., 48\% or 60\%). Therefore, we believe the semi-supervised setting, with much less than 50\% of the nodes functioning as the training set, is more reasonable for evaluating different methods, so without loss of generality we choose 20\%. For the fully supervised setting: (1) we have the results on benchmark datasets with the given 48/32/20\% split provided in [2] in Appendix A.5, which demonstrate the effectiveness of TEDGCN. Please refer to Appendix A.5 for more details. (2) We also evaluated TEDGCN on the given splits of the revised Chameleon dataset and the revised Squirrel dataset provided by [1] during the rebuttal period. The accuracies for TEDGCN-S are $42.12 \pm 4.33$ (\%) on the revised Chameleon and $41.70 \pm 1.97$ (\%) on the revised Squirrel, both of which outperform all results from Table 2 in [1]. [1] Platonov et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?" ICLR 2023. [2] H. Pei, B. Wei, K. C.-C. Chang, Y. Lei, and B. Yang. Geom-GCN: Geometric graph convolutional networks. arXiv preprint arXiv:2002.05287, 2020. **Q2: Recent work [1] has shown pressing issues in old benchmarks of heterophilic datasets, which have been used here. Can you also include / replace with the results on the newer datasets proposed in [1]? The same paper also shows that standard architectures like SAGE and GAT can outperform heterophily-specific architectures. Can you include these as baselines?
Even if not adding datasets from [1], these baselines would be representative of models with higher expressive power than GCN or GCN-variants.** **Response**: Thanks a lot for reminding us of this ICLR 2023 paper; we will add it and some other works discussed in it to the related work section in the revised version of our paper. Following the reviewer's suggestion, we have evaluated TEDGCN-S on 5 new datasets provided in [1] (Revised Chameleon, Revised Squirrel, Tolokers, Minesweeper and Roman Empire) with the same semi-supervised setting as the other datasets in our paper. We also include SAGE and GAT as baselines. **The results are presented in Table 1 in the general response.** We can observe that (1) as mentioned in **Q2**, SAGE is a strong method and achieves the best accuracy on Minesweeper and Roman Empire; and (2) TEDGCN-S outperforms all baselines on Revised Chameleon, Revised Squirrel and Tolokers and achieves the second-best performance on Minesweeper and Roman Empire, which demonstrates the effectiveness of TEDGCN-S on these 5 new datasets in [1]. **Other questions in Weaknesses/Comments/Limitations**: **P1: The resulting model is a linear model, which limits the expressive power of the approach. Also, as a GCN variant it is not applicable to the inductive setting.** **Response**: (1) For the limited expressive power of the linear model: first, our results show that with a proper depth, nonlinearity may not be a critical factor in determining the performance, and the simplicity is a natural byproduct of our framework that requires no sacrifice of performance. Second, one possible solution is to leverage the augmented graph obtained by TEDGCN as the input for other non-linear GNNs, in which we can add non-linearity between layers. (2) For the inductive setting, one possible solution is to borrow the idea from (Graph)SAGE about how to generalize the transductive GCN to the inductive setting.
Following the same idea as in (Graph)SAGE, since each node is closely related to the nodes within a multi-hop receptive field, we can sample a multi-hop subgraph (e.g., 3-5 hops) for the center node $v$ whose embedding needs to be optimized. Then, we can run TEDGCN-S on this small subgraph around the center $v$ to learn the optimal depth on this small subgraph. This design offers the flexibility for different subgraphs to learn different optimal depths. Furthermore, this can be applied to the inductive setting: when an unseen/new node arrives, we can simply sample such a subgraph and run TEDGCN-S on it without re-training TEDGCN or conducting eigendecomposition on the whole large graph, which can save a lot of time. We will leave the implementation of this idea to future work. **P2: Essentially trainable depth results in an augmented graph. Can this augmented graph be used with non-linear GNNs to still capture heterophily?** **Response**: Yes, the augmented graph can be used with non-linear GNNs, which only need to treat the augmented graph as input. One example: in Subsection 4.4, we use the augmented graph as the input for a normal (non-linear) GCN, which achieves even better performance than TEDGCN-S/TEDGCN-D in capturing heterophily on the Cornell dataset and the Wisconsin dataset. **P3: Is it possible to realize negative depth by inverting the graph Laplacian first?** **Response**: We are afraid that this is not mathematically feasible. Actually, we did think about directly inverting the graph Laplacian first. --- Rebuttal Comment 1.1: Comment: Minor comment regarding the following: >We are afraid that this is not mathematically feasible. Actually, we did think about directly inverting the graph Laplacian first.
However, the graph Laplacian for any connected graph has an eigenvalue 0, which makes the graph Laplacian non-invertible. It is possible to use a pseudoinverse of the Laplacian instead. This is a well-studied technique, see, e.g., [this link](https://arxiv.org/abs/2109.14587). --- Reply to Comment 1.1.1: Title: Response to the minor comment by Reviewer iB1Z Comment: **Response**: Since the graph Laplacian for any connected graph is non-invertible, it was natural to come up with the idea of the transformation proposed in the paper, which enjoys (1) positive eigenvalues; (2) monotonicity with respect to the eigenvalues; and (3) good geometric interpretability. We thank the reviewer for pointing out the pseudoinverse technique. We agree with the reviewer that by relaxing the **inverse requirement to the pseudoinverse**, it is possible to realize negative depth by inverting the graph Laplacian first. To better answer the reviewer's question, we have tried a simple implementation of the direct pseudoinverse of the symmetrically normalized graph Laplacian $\mathbf{L}\_{sym}$ on the revised Squirrel and the revised Chameleon with the same semi-supervised setting as in our paper. The accuracy on the revised Chameleon is 37.07\% and on the revised Squirrel is 34.71\%, both of which are worse than TEDGCN-S (40.03\% on the revised Chameleon and 37.21\% on the revised Squirrel). One possible reason is that the direct pseudoinverse might have somewhat different properties from the **real** inverse/the transformation function. We leave the detailed analysis and improvement of the pseudoinverse technique to future work.
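Both points of this exchange, the zero eigenvalue that blocks direct inversion and what the pseudoinverse does instead, can be seen on a toy graph. This is an illustrative sketch (the 4-node graph and variable names are my own): `np.linalg.pinv` inverts only the nonzero eigenvalues and maps the kernel direction to zero, which is one concrete way the pseudoinverse behaves differently from the paper's transformation $g(\lambda)^{d}$ with $d=-1$.

```python
import numpy as np

# Toy connected 4-node graph (a triangle plus a tail).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(deg ** -0.5)
L_sym = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

lam, U = np.linalg.eigh(L_sym)   # ascending; lam[0] is (numerically) zero

# A connected graph has exactly one zero eigenvalue of L_sym, with eigenvector
# proportional to D^{1/2} e (the reviewer's correction), so L_sym is singular.
u1 = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))

# The Moore-Penrose pseudoinverse inverts the nonzero part of the spectrum;
# the kernel direction u1 is sent to zero instead of blowing up.
L_pinv = np.linalg.pinv(L_sym)
```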
Summary: This paper proposes two algorithms to learn GNNs with a trainable depth. First, the authors exploit previous theoretical results about the correlation between frequency and zero crossings, and the intuition that capturing heterophily needs more zero crossings, to motivate adjusting the weights of eigen-graphs. It is then natural to achieve this goal by tuning the depth of a GNN, where the authors propose to extend the possible space of depths to real numbers. To my knowledge, it is the first attempt to train a GNN with a trainable and possibly negative depth, especially since the proposed algorithms are not built upon continuous-time diffusions. Extensive experiments are conducted, showing that the proposed algorithms are quite effective for heterophilic graphs, and they tend to learn the optimal depth. Strengths: 1. This well-written paper enables me to pick up its core ideas effortlessly. Particularly, the provided examples are quite helpful for understanding the corresponding definitions and formulas. 2. Learning a negative depth to encourage larger weights for eigen-graphs that provide more zero crossings interests me. More importantly, it is well-motivated and demonstrates significant advantages in capturing heterophilic graph patterns. 3. The empirical studies seem to be comprehensive. First, both homophilic and heterophilic graphs are included. I am glad to see that the authors honestly report the results on those four homophilic graphs, where the proposed algorithms have not achieved a consistent improvement. And it is great to see the dramatic improvements on heterophilic graphs, which strongly support the idea of this paper. Second, the detailed analysis confirms that the proposed algorithms can truly learn the optimal depth. This lets us attribute the successes in capturing heterophilic graph patterns to such a mechanism and its consequence of up-weighting zero crossings. 4. 
I am quite fond of the interpretation of augmenting the original graph structure, which perfectly matches our intuition. Weaknesses: 1. How to apply the proposed algorithms to large graphs is a critical question that has not been fully resolved. 2. The motivation of the proposed algorithms seems to be entirely built upon previous theoretical results. I am curious about what is exactly the contribution of this paper. 3. To achieve a trainable and negative depth, the designed GNN is in a similar form to SGC, which also sacrifices some non-linearity (or expressiveness) to gain simplicity. 4. The rescaling of the base into (0, 1) is reasonable and explainable, which is satisfactory to me. Yet, it would be better to justify its expressiveness and numerical properties relative to the original bases (i.e., original frequencies). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In line 130, the cardinalities of examples are 1 and 2. As the considered graph is undirected, why not 2 and 4? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: There is a complexity analysis of the proposed algorithms, which can be regarded as a discussion of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q1: In line 130, the cardinalities of examples are 1 and 2. As the considered graph is undirected, why not 2 and 4?** **Response**: Thank you for raising this question. In an undirected graph, we regard edge $(v\_{j}, v\_{k})$ and edge $(v\_{k}, v\_{j})$ as one edge. We will add a footnote to clarify this point in the revised version of this paper. **Other points in Weaknesses/Comments/Limitations**: **P1: How to apply the proposed algorithms to large graphs is a critical question that has not been fully resolved.** **Response**: The TEDGCN-D in the paper is proposed from the *spectral domain* to reduce the time complexity on large graphs. An alternative solution is from the *spatial domain*: since each node is closely related to the nodes within a multi-hop receptive field, we can divide the large graph into several connected components (i.e., subgraphs), set a maximum node number limit for each subgraph, and run TEDGCN-S on each subgraph. For example, for a graph with 10000 nodes, we can divide it into 10 connected components/subgraphs, each with about 1000 nodes. With a vanilla implementation of eigen-decomposition with time complexity $O(n^3)$, this solution can accelerate the computation on each subgraph by about 1000 times. We leave the exploration of this tentative idea to future work. **P2: The motivation of proposed algorithms seems to be entirely built upon previous theoretical results. I am curious about what is exactly the contribution of this paper.** **Response**: Graph spectrum analysis is indeed a commonly used tool in many works (e.g., GCN). We look at the existing theory from a novel angle, and our core contributions can be summarized as follows: (1) From the theoretical aspect: to the best of our knowledge, we are the first to unveil the intrinsic relationship between negative GCN depth and edge heterophily in graphs. 
We also provide in-depth geometric and spectral explanations for negative depth. (2) From the model aspect, we propose a simple and powerful model TEDGCN with two variants (TEDGCN-S and TEDGCN-D) and discuss a novel graph augmentation method based on TEDGCN. **P3: To achieve a trainable and negative depth, the designed GNN is in a similar form to SGC, which also sacrifices some non-linearity (or expressiveness) to gain simplicity.** **Response**: First, our results show that with a proper depth, non-linearity may not be a critical factor determining the performance, and the simplicity is a natural byproduct of our framework which requires no sacrifice of performance. Second, as pointed out by Reviewer `wZLL`, one possible solution for this question is to leverage the augmented graph obtained by TEDGCN as the input for other non-linear GNNs, in which we can add non-linearity between each layer. **P4: The rescaling of the base into (0, 1) is reasonable and explainable, which is satisfactory to me. Yet, it would be better to justify its expressiveness and numerical properties as original bases (i.e., original frequencies).** **Response**: Actually, we did consider using the original frequencies in TEDGCN. However, every connected graph has an eigenvalue 0, which leads to problems when we have a negative depth (e.g., -1) because $0^{-1} = \frac{1}{0}$ is undefined, and this makes it hard to explain the numerical properties in the original spectrum. Thus, we propose a transformation function $g(\lambda)=1-\frac{1}{2}\lambda\in (0,1]$ to solve this problem. --- Rebuttal Comment 1.1: Title: Discussions Comment: Thanks for your response! Most of my concerns are resolved. I will keep my score to support this paper. --- Reply to Comment 1.1.1: Title: Thanks Comment: Thanks for reading our response and supporting our paper!
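For illustration, the transformation $g(\lambda)=1-\frac{1}{2}\lambda$ and a real-valued (possibly negative) depth can be sketched numerically as follows. This is our own minimal reconstruction with a toy triangle graph and our own variable names, not the paper's implementation:

```python
import numpy as np

# A triangle graph: connected and non-bipartite, so all g(lambda) > 0 here.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(3) - D_inv_sqrt @ A @ D_inv_sqrt

lam, U = np.linalg.eigh(L_sym)   # lam[0] is ~0 for a connected graph
g = 1.0 - 0.5 * lam              # g(lambda) = 1 - lambda/2, here all positive

# With g(lambda) > 0, any real depth t is well defined, including t < 0,
# which up-weights the high-frequency (heterophilic) eigen-graphs.
t = -1.0
P_t = U @ np.diag(g ** t) @ U.T  # spectral filter at depth t
```

Note that raising the *original* eigenvalues to the power $t=-1$ would require $0^{-1}$, which is exactly the problem the transformation avoids.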
Rebuttal 1: Rebuttal: **General Response for All Reviewers**: We sincerely thank all reviewers for their valuable time and insightful feedback, which is very helpful for further improving the quality of our paper. We are grateful that the reviewers appreciate the novelty of our work (`wZLL`: "a significant novel contribution to the community", `yXi3`: "to my knowledge, it is the first attempt", `jrHF`: "interesting and novel"). We are also encouraged that the reviewers think that (1) our paper is well-written and the key idea is presented clearly and is easy to follow (`yXi3`, `wZLL`, `iB1Z` and `jrHF`); and (2) the empirical experiments are comprehensive (`yXi3`) and solid (`jrHF`). We have provided our point-by-point responses to the questions of each reviewer below, and we sincerely invite the reviewers to further discussion. In addition, during the rebuttal period, as suggested by Reviewer `wZLL` and Reviewer `iB1Z`, we have evaluated TEDGCN-S on 5 new datasets provided in the ICLR 2023 paper by Platonov et al. [1] (Revised Chameleon, Revised Squirrel, Tolokers, Minesweeper and Roman Empire), with the same semi-supervised setting as the other datasets in our paper. The results are presented in Table 1. [1] Platonov et al. "A critical look at the evaluation of GNNs under heterophily: Are we really making progress?" ICLR 2023. 
Table 1: Performance comparison on 5 new datasets in [1] ($ACC \pm std$(\%)) | Model | Revised Chameleon | Revised Squirrel | Tolokers | Minesweeper | Roman Empire| | ------------ | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | | SGC | $35.92 \pm 2.06$ | $29.70 \pm 1.25$ | $78.31 \pm 0.64$ | $80.02 \pm 0.39$ | $39.01 \pm 0.78$ | | GCN | $36.70 \pm 2.72$ | $30.75 \pm 2.09$ | $78.49 \pm 0.51$ | $79.92 \pm 0.41$ | $39.27 \pm 0.42$ | | SAGE | $35.43 \pm 3.80$ | $31.75 \pm 1.67$ | $\textit{78.54} \pm \textit{0.65}$ | $\textbf{82.22} \pm \textbf{0.32}$ | $\textbf{65.57} \pm \textbf{0.92}$ | | GAT | $36.59 \pm 1.18$ | $31.53 \pm 0.89$ | $78.01 \pm 0.51$ | $80.02 \pm 3.30$ | $38.71 \pm 0.98$ | | APPNP | $37.00 \pm 2.18$ | $31.91 \pm 1.19$ | $78.05 \pm 0.49$ | $80.04 \pm 0.37$ | $62.03 \pm 0.41$ | | GPRGNN | $\textit{37.08} \pm \textit{1.88}$ | $\textit{36.60} \pm \textit{0.73}$ | $78.09 \pm 0.51$ | $80.06 \pm 0.78$ | $57.28 \pm 0.49$ | | FAGCN | $35.96 \pm 2.61$ | $32.50 \pm 4.58$ | $78.37 \pm 0.54$ | $80.11 \pm 0.40$ | $62.81 \pm 0.89$ | | TEDGCN-S | $\textbf{40.03} \pm \textbf{1.60}$ | $\textbf{37.21} \pm \textbf{1.70}$ | $\textbf{78.58} \pm \textbf{0.78}$ | $\textit{80.41} \pm \textit{0.70}$ | $\textit{63.05} \pm \textit{0.96}$ | We can observe that TEDGCN-S outperforms all baselines on Revised Chameleon, Revised Squirrel and Tolokers and achieves the second best performance on Minesweeper and Roman Empire, which demonstrates the effectiveness of TEDGCN-S on these 5 new datasets from Platonov et al.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Modality-Independent Teachers Meet Weakly-Supervised Audio-Visual Event Parser
Accept (poster)
Summary: This paper proposes a method called VALOR to assign segment-level audio-visual (AV) event/object labels given weak labels at the video level. It makes use of pseudo-labels obtained from the unimodal pretrained models CLIP and CLAP to derive additional guidance for the AV model. The final model is trained using a combination of AV, audio-only and video-only losses, and the AV component is based on the hybrid attention network (HAN) structure. Experiments are performed on two tasks, AV video parsing and AV event localization. In both cases, the VALOR approach outperformed previous studies on the respective tasks by a clear margin. Some additional ablation experiments further validate the design choices that have been made in the system. * After rebuttal Given the authors' response and their clarification of my concerns, I am increasing my score, especially for presentation. Strengths: The main strength is that the experimental results show clear improvements over older approaches on the two tasks described in the paper. * Originality: Even though the individual sub-components such as CLIP, CLAP, and HAN are not novel, the paper brings them together in a certain way (unimodal guidance with dense labeling of the video segments) to solve the AV video parsing problem on a weakly labeled dataset. * Quality: Experimental evaluations successfully support the use of the newly proposed VALOR technique. * Clarity: Language is mostly clear. * Significance: The paper probably establishes the new state of the art for the AV video parsing task on the LLP dataset. Weaknesses: * The main weakness of the paper is probably the flow of the paper. 1. There is a section called Preliminaries, and then in Section 3.1, CLIP and CLAP are introduced. They could go to Preliminaries. 2. The motivation behind adding Section 4.2 is not clear. Similar applications have already been included in the Introduction. 
* Another weakness is a lack of clarity about some of the experimental settings. Instead of Section 4.2, it could be better to save some space and describe the dataset a little more clearly. For example, 1. how many labels are there, and are they coming from a closed vocab? (We later get the answers to these questions by looking at Figs. 3 and 4) 2. Even though the timestamps are not given, do we know the order of the labels, or is it more like a bag of labels at the video level? b. How does CLIP+CLAP without modality labels work in Table 2? Do we take the union of CLIP and CLAP labels and then use this as the ground truth for both audio and video losses? * The improvements in F-scores are mostly stated between HAN and the VALOR approach, but as mentioned at times, there are other, stronger baselines to compare against (JoMoLD and CMBS in Tables 1 and 4, respectively). It could be better to mention improvements w.r.t. those ones, although the gains will look slightly smaller in that case. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Do we know the order of the weak labels? 2. In AV event localization evaluations, did the authors allow a margin when evaluating the F-scores? 3. The paper describes Type@AV as the average of the A-only, V-only and AV F-scores. It might make it clearer if the name reflects this fact. 4. The Event@AV score is also not very self-explanatory. How does it differ from the AV F-score? 5. Do the authors consider how the number of occurrences of an event in the training dataset correlates with the final performance? Is there any bias due to class imbalance? 6. The paper mentions class-dependent thresholds to binarize the CLIP/CLAP outputs. Are they manually or automatically tuned? If automatic, which subsets are used to determine these numbers? 7. Instead of binarizing the labels from CLIP/CLAP, have the authors tried to use the soft labels directly in the loss computation? Soft labels have sometimes been useful in other semi/self-supervised applications. 
8. What is the chosen segment length (duration)? Have the authors tested various granularity levels? 9. The notation in Section 2 can be improved. Even though we can understand their meaning from the context, there are many f's in various forms: $f$, $F$, $\mathrm{F}$ 10. The level of detail in the case of MMIL looks a little long given that the rest of the paper does not mention it that frequently. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Some limitations have been mentioned in the paper. For example, whether the model will be effective in the case of large-vocabulary labels, and how the inherent limitations of CLIP can harm the proposed VALOR approach. 1. It is not clear whether there are any issues due to class imbalance in the dataset, or if the LLP data has a similar label distribution in the train/dev/test splits. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer uN7g** We thank Reviewer uN7g for the constructive comments and helpful suggestions. Please see our responses below for each raised issue. **Q1. Improvement of presentation flow. Sect. 2 and Sect. 3.1. The motivation behind adding Section 4.2.** Sect. 2 (Prelim) defines the task of AVVP, including the baseline HAN [72]. While Sect. 3 introduces our proposed model, Sect. 3.1 explains the idea of exploiting pre-trained cross-modality language models for zero-shot label transfer, in which CLIP and CLAP are utilized. As suggested by Reviewers 82wA and t3gt, we have verified that our method is not limited to the use of CLIP/CLAP. Our Section 1 explains audio-visual learning tasks and highlights the challenge of the “modality unaligned” setting, which is also the main focus of our paper. To maintain the fluency and clarity of the Introduction, we only provide a few examples of related studies. As for Section 4 (Related Work), we provide thorough literature reviews covering relevant research on audio-visual learning in various settings. **Q2. Clarity in some experimental settings. How many labels? Are they from a closed vocab? Is the label order available? What’s CLIP+CLAP w/o modality labels in Table 2?** As noted in Sect. 5.1, there are 25 events (labels) in total, which are from a closed/predefined vocabulary. With the dataset and settings defined in [72], the order of the video labels is not available. Table 2 is for ablation study purposes. CLIP+CLAP without modality labels means the pseudo labels derived for each segment are the union of audio and visual labels produced by VALOR, followed by training the HAN model for AVVP. The degraded performance verifies the effectiveness of our VALOR in predicting modality-aware pseudo labels for cross-modal learning. **Q3. 
Possible improvements on strong baselines (JoMoLD and CMBS)?** In the second paragraph of Section 5.2, we have presented the improvement over JoMoLD. We note that VALOR scored higher across all metrics, including a 5.4 F-score improvement for segment-level Type@AV under a fair setting. In the quantitative results of Section 5.4, we also noted that we surpassed CMBS on the weakly-supervised AVE task with an improvement of 4.4% in accuracy. **Q4. Margin to evaluate the F-scores?** No, and we are not able to do so. This is because, according to [71], F-score is not the evaluation metric defined for the AVE task. Since the AVE dataset consists of video segments with equal and pre-determined length, we adopt accuracy as the evaluation metric. Moreover, since each video contains only one event, each segment is labeled with only one event label, which is why accuracy is suitable as an evaluation metric. **Q5. Type@AV naming confusion** We thank the reviewer for the advice. We agree that Type@AV might not be sufficiently intuitive to represent the average of the A-only, V-only, and AV F-scores. However, since the term Type@AV was introduced in the first paper of the AVVP task [72], the existing literature (including ours) addressing AVVP follows this notation for presentation consistency. **Q6. How Event@AV differs from the AV F-score?** As defined by [72], Event@AV calculates the F-score over all audio/visual events (i.e., across all segments, modality-aligned or not); as for the AV F-score, it is the F-score calculated for the “audio-visual” events only (i.e., across modality-aligned segments). **Q7. Does the number of event occurrences correlate with performance? Is there class imbalance bias?** To address this issue, we show Figure 2 in the rebuttal PDF, which indicates that the number of event occurrences does not correlate with the final performance on the validation split. Note that the leftmost sub-figure in Fig. 
2 shows the number of videos containing each event, the middle sub-figure shows the event-wise audio F-scores, and the rightmost figure shows the event-wise visual F-scores. Moreover, since the F-score is calculated as (2 * TP) / (2 * TP + FP + FN) for each class, its value would not be biased even with the presence of class imbalance. **Q8. Determination of class-dependent thresholds to binarize the CLIP/CLAP outputs** For simplicity and fairness, we select such thresholds for each class based on its best segment-level F-score on the validation split. As suggested by Reviewer veqt, if using a class-independent threshold, which is determined by the best overall segment-level F-score on the validation split, we observe comparable performance in Table 1 of our global rebuttal PDF. Thus, the performance is not sensitive to the thresholds selected from the validation set. **Q9. Possible use of soft labels?** Yes, we have tried to let our model directly learn from the logits output by the pre-trained models (without thresholding into hard labels), which is the knowledge distillation (KD) experiment in the second row of Table 3. From this table, we observe that such a labeling strategy only resulted in F-scores of 51.1/64.0 for audio/visual events, while our VALOR achieved 62.7/66.3. Thus, the use of our proposed VALOR to leverage video-level labels for pseudo label prediction is desirable. **Q10. What is the chosen segment length? Have the authors tested various granularity levels?** Each segment length (duration) is 1 second. Since the length of each video segment is pre-determined and fixed by [72], we are not able to consider video segments at different granularity levels. **Q11. The notation in Section 2 can be improved (e.g., many f's in various forms: $f$, $F$, $\rm F$)** Thank you for pointing this out. 
$f\in R^d$ represents a segment audio/visual feature, $F\in R^{T \times d}$ represents the collection of all segment audio/visual features, and $\rm F\in R^{2 \times T \times d}$ represents the collection of all segment features in both modalities. To avoid confusion, we will replace $\rm F\in R^{2 \times T \times d}$ with $X$ instead. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for providing detailed answers to my questions, including the generation of graphs in the rebuttal PDF. In Q7, for the class-imbalance vs. F-measure issue, I think the answer depends on how exactly the F-measure $(2 \cdot TP) / (2 \cdot TP + FP + FN)$ is calculated for multi-class classification. In one version, each of TP, FP, FN is the total sum over all classes, i.e., $TP = \sum_k TP_k$, where $k$ is the class index (there are $K$ classes). In the other version, one computes the F-measure per class and then averages those across classes, i.e., $\mathrm{F} = \frac{1}{K} \sum_k F_k = \frac{1}{K} \sum_k \frac{2 \cdot TP_k}{2 \cdot TP_k + FP_k + FN_k}$. Especially in the second version, it is possible, for example, that a class with a small number of samples will have a noisier F-measure estimate, and that may skew the results. That was why I was interested in the class distribution. So, I disagree with the claim that "its value would not be biased even with the presence of class imbalance". However, the rebuttal shows that this is not the case here, hence there is not a problem. Overall, I am going to increase my score, especially for presentation/clarity. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the input and the further clarification. We agree that class imbalance can still affect the F-measure, depending on how it is calculated. We will revise the description in the manuscript accordingly, so that it does not cause potential concerns.
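The micro- vs. macro-averaging distinction raised in this exchange can be made concrete with toy counts (the numbers below are our own, purely illustrative):

```python
import numpy as np

# Per-class confusion counts for a 2-class toy problem:
# class 0 is common and well-predicted; class 1 is rare and noisily predicted.
tp = np.array([90, 1])
fp = np.array([5, 4])
fn = np.array([5, 4])

# Micro: pool the counts over classes, then compute one F-score.
micro_f = 2 * tp.sum() / (2 * tp.sum() + fp.sum() + fn.sum())

# Macro: compute a per-class F-score, then average across classes.
per_class_f = 2 * tp / (2 * tp + fp + fn)
macro_f = per_class_f.mean()

# The rare, noisy class drags the macro average well below the micro average.
print(round(micro_f, 3), round(macro_f, 3))
```

Here the micro F-score is dominated by the common class, while the macro F-score is pulled down by the rare one, which is exactly why the class distribution matters for the second version.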
Summary: The paper aims to improve the performance of audio-visual video parsing (AVVP) on the LLP dataset via a novel pseudo-labelling technique. This is because LLP contains only coarse video-level labels, yet expects the model to match fine-grained, temporally dense labels during testing. This mismatch (due to the intense labelling effort that would be required to have temporally dense labels in the training set) is obviously an issue, and has been approached by previous works using the Multi-modal Multiple Instance Learning (MMIL) loss for soft-selection and label smoothing, without much success. This paper proposes to use CLIP and CLAP, two pre-trained image/audio encoders, to extract pseudo-labels for each video/audio in an attempt to create temporally dense labels for the training set. This work also uses CLIP and CLAP as feature encoders for better results. This method considerably outperforms previous works, and its design choices are motivated by an ablation study. The method also performs well on audio-visual event localization. Strengths: The paper is well written and does a good job at introducing the reader to the LLP task, the challenges it entails and why previous works have struggled with it. The figures are great, in particular Figure 2, and the tables are all clear and have a clear purpose. The narrative of the paper is consistent and has good momentum, and the initial questions and issues it poses are adequately solved/answered by the end of the paper, with good justifications and conclusions. The method is rather simple but quite elegant, and seems to be a clear low-hanging fruit to improve the LLP task. The preliminaries are well-explained and the choice not to perform distillation (which, again, would seem like a low-hanging fruit) is well-justified. Related work seems to be adequately, comprehensively, and fairly referenced and discussed. When compared with other works, the results are very convincing. 
Ablations are also welcome and have clear motivations and conclusions. Table 3 is especially good in my opinion - it answers a lot of questions. Weaknesses: I don't think the method has any particularly noticeable weaknesses, apart from the limitations that are brought up after the conclusion, which are understandable and are largely outside of the scope of this work. However, I think the paper would benefit heavily from more experiments. CLIP and CLAP are great, and they are compared with HAN (which is effectively a completely different model), but some comparisons with other audio/image encoders would help justify the choice of using CLIP and CLAP specifically. Also it would be great if the authors could experiment with different datasets or even new tasks, if this is possible, as it would give us a wider breadth of results to draw conclusions from. It would be good to have the best numbers in bold for Table 2, and to highlight the second and third best so as to highlight VALOR and VALOR+ rather than just VALOR++. Some extra comments on the poorly performing classes in Figure 3 would be nice. I would suggest that the authors focus more on the term "pseudo-labelling" rather than "teacher", since student-teacher modelling is a broad topic with multiple techniques, but clearly here the authors are performing pseudo-labelling, which is a much more specific technique. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: When do you plan to make the code available? To the best of the authors' knowledge, are there any other works that use CLIP or CLAP for pseudo-labelling, perhaps in audio-only or image-only literature? If so, it would be very relevant to discuss such approaches in the related work. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Some limitations are mentioned, which is good. However, there should also be a broader impact discussion (can be brief) about the potential misuse of this technology (e.g., aiding automated audio-visual surveillance, what happens if this model is deployed and gives a wrong output, etc.). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer t3gt** We thank Reviewer t3gt for the positive comments and helpful suggestions. Please see our responses below for each raised issue. **Q1. Possible to consider visual/audio-language models other than CLIP or CLAP? What about other datasets?** A1: In Table 2 of the rebuttal PDF, CLAP and CLIP are replaced by AudioCLIP and OpenCLIP, respectively. In the left subtable of Table 2, the audio performance of the model slightly drops when it is trained with AudioCLIP's generated labels. On the other hand, in the right subtable of Table 2, the visual performance of the model trained with OpenCLIP's generated labels is nearly the same as that of the model trained with CLIP's. Although most existing papers addressing the AVVP task only consider one dataset, we also tackled the task of audio-visual event localization using the AVE dataset [71], which is a dataset different from the LLP dataset for AVVP. The AVE experiment results are in Table 4 in the main paper, where we observed a 5.1 improvement in accuracy compared to the baseline method of using the HAN model. It is worth noting that both the AVE and AVVP tasks perform learning from audio-visual data in the challenging “modality unaligned” setting. **Q2. Better visualization for top cases/numbers in Table 2** A2: In Table 1 in the main paper, as suggested, the best numbers are highlighted in bold and the second-best numbers are already underlined. We will take the suggestion and also highlight the third-highest numbers and update the table caption accordingly. **Q3. Comments on poorly performing classes in Figure 3** A3: When comparing to the model simply using video-level labels as audio segment pseudo-labels, we found that “cat”, “baby cry” and “cheering” were the audio events with slightly degraded F-scores, as shown in Fig. 3. For these three audio events, a drop of up to 3 in F-score was observed. 
When performing error analysis, we found that this was mainly due to a decrease in true positives rather than an increase in false positives. This suggests that these three audio events exhibit large intra-class variations. Nevertheless, as also shown in Fig. 3, most audio events reported improved F-scores (up to 13). **Q4. Suggest focusing more on the term "pseudo-labeling" rather than "teacher"** A4: We agree that using the terms “teacher” and “student” might lead readers to think that we are employing knowledge distillation techniques for teacher-student model learning. Since the focus of our work is to provide modality-specific segment-level labels by leveraging large-scale pre-trained open-vocabulary models (CLIP and CLAP), we will just use “pseudo labeling” in our paper to avoid confusion. **Q5. Will the code be publicly available?** A5: Definitely. We have enclosed the code regarding the model architecture and loss calculation in the supplementary materials, and we will release the code within a week after the author notification. **Q6. Any other works that use CLIP or CLAP for pseudo-labeling, even in audio-only or image-only literature?** A6: As discussed in Sect. 4.1, VPLAN [95] addresses the AVVP task by utilizing CLIP for pseudo labeling, leveraging CLIP to generate visual segment-level labels. Recently, LSLD [A] was proposed (after the NeurIPS submission deadline) to address the AVVP task with a similar method. Note that they focus on visual pseudo labeling only, while we generate pseudo labels in both modalities. We will update Sect. 4 accordingly. [A] Fan et al. "Revisit weakly-supervised audio-visual video parsing from the language perspective." arXiv:2306.00595. **Q7. Discussion on potential misuse of this technology** A7: Since our VALOR requires pre-trained audio/visual-language models to predict pseudo-labels, the limited ability to describe fine-grained or unseen semantic concepts would be the major limitation. 
Misclassification of audio or visual events due to the above constraint might endanger video analysis tasks like surveillance or autonomous driving. On the other hand, since our VALOR is not a generative model, we do not expect that its misuse would result in DeepFake-like deceptive content. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for answering my questions. You have clarified all my doubts. Very happy to hear that the code will be made available. Happy to raise my score slightly based on the convincing responses to all reviewers. I am not completely familiar with this specific field in depth, which is why my confidence score is not 5. But overall, I think the paper is quite good. --- Reply to Comment 1.1.1: Comment: We are glad that our responses have sufficiently clarified your raised issues, which definitely help us strengthen our work. We will make the code available as promised.
Summary: The paper proposes visual-audio label elaboration (VALOR) for weakly supervised audio-visual video parsing. It generates fine-grained temporal labels in the audio and visual modalities by harnessing the large-scale pretrained contrastive models CLIP and CLAP, providing explicit supervision to guide the learning of AVVP models. The paper shows that utilizing modality-independent pretrained models and generating modality-aware labels are essential for AVVP. Strengths: 1 The paper proposes a simple and effective AVVP framework, VALOR, to harvest modality and temporal labels directly from video-label annotations, with an absolute improvement of +8.0 F-score. 2 The paper is the first to point out that modality independence could be crucial for audio-visual learning in the unaligned and weakly-supervised setup. 3 VALOR achieves new state-of-the-art results with significant improvements on AVVP (+5.4 F-score) and generalization to AVE (+4.4 accuracy) jointly verified. Weaknesses: 1 The paper is really well-written and clear. It would be great if the paper could have a wider usage scenario. The current paper's contributions and experiments are limited to the LLP dataset. It would be great if the paper could extend the method to other datasets. Adopting other widely used datasets to validate the proposed method would be a big plus for the paper. 2 The paper achieves larger improvements than previous methods. It would be great if the paper could add FLOPs, throughput, and number-of-parameters comparisons with previous methods. Model complexity comparison is a common practice for video-related methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please check [*Weaknesses] Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer Qd1s** We thank Reviewer Qd1s for the positive comments and suggestive remarks. Please see our responses below for each raised issue. **Q1. Extension to datasets other than LLP?** A1: In addition to audio-visual video parsing (AVVP), we also tackled the task of audio-visual event localization using the AVE dataset [71], which is a dataset different from the LLP dataset used for AVVP. The AVE experiment results are in Table 4 in the main paper, where we observed a 5.1 improvement in accuracy compared to the baseline method using the HAN model. It is worth noting that both the AVE and AVVP tasks involve learning from audio-visual data in the challenging “modality unaligned” setting. **Q2. Details on FLOPs, throughput, and number of parameters when compared with previous methods.** A2: Please see the table below for the implementation details and comparisons. We also add this information in Table 3 of the rebuttal PDF.

|                        | VALOR | VALOR+ | HAN  | MM Pyramid | MGN   | JoMoLD | CVCMS |
|------------------------|:-----:|:------:|:----:|:----------:|:-----:|:------:|:-----:|
| FLOPs (K) ↓            | 17.3  | 27.6   | 17.3 | 84273      | 901.5 | 17.3   | 98.9  |
| throughput (K) ↑       | 45.1  | 40.8   | 48.3 | 9.0        | 16.2  | 46.8   | 14.7  |
| trainable params (M) ↓ | 5.1   | 5.0    | 4.6  | 44.0       | 4.4   | 4.6    | 11.4  |

--- Rebuttal Comment 1.1: Title: Thank you for the rebuttal! Comment: Thank you for these results! I do not have further concerns about the paper. It seems the paper and responses are good to all reviewers. Thus, I increase the score a little bit. Thanks for the contribution! --- Reply to Comment 1.1.1: Comment: We are glad that our responses have sufficiently cleared all your concerns. Your suggestions will definitely help us strengthen our work in the next revision.
Summary: The paper tackles the task of audio-visual event parsing, where the goal is to independently recognize and localize the events occurring in the visual and audio modalities. The paper argues that modality-independent processing can be crucial for this task compared to joint modeling of the two modalities. Consequently, the paper makes use of pre-trained vision and audio models like CLIP and CLAP to harvest temporal labels independently for each modality. The proposed method achieves a clear boost in scores across the board. The experiments are clear and sufficient. Rebuttal: I have read the rebuttal. The rebuttal answers all my questions. So I have raised my rating. Strengths: I think the paper is clearly written (in most places), has a clear motivation, and proposes simple, intuitive methods to solve the discussed issues. The improvement in performance is significant and is useful for future works in this space. Weaknesses: The proposed automatic label harvesting using CLIP and CLAP has not been evaluated directly. Since a test set is available with segment labels, is it possible to evaluate the automatic annotation technique using the ground-truth labels on these test sets to directly evaluate how well the automatic annotation procedure works? The automatic training signal extraction technique could have been explored in more detail, as it is the crux of the paper. To start with, lines 156-157 are quite unclear. What does “contrastive models understanding logits” mean? The intersection operation with y is not described in the text at all. Also, how are the threshold values decided for each class? Is it helpful to keep a slightly lower confidence label (z_t < theta) that is present in “y”? If both CLIP and CLAP models predict the same label with low probability (because something else is dominating the scene), can we keep that? Also, some captions that are used to query CLIP or CLAP could be too fine-grained. E.g.
the exact musical instrument or vehicle (car or motorcycle) might be hard to identify, so during the automatic training label harvesting, multiple variations of the captions could have been tried, such as using synonyms. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer JWv6** We thank Reviewer JWv6 for the positive comments and suggestive remarks. Please see our responses below for each raised issue. **Q1. How accurate are our pseudo labels (derived via CLIP/CLAP)?** A1: In Table 7 of the supplementary material, we have evaluated the quality of the pseudo labels generated by VALOR on the test split. As seen in Table 7, the audio segment-level F-score reached 84.92, and the visual segment-level F-score reached 82.8. These scores are significantly better than those of the approach directly leveraging video-level labels as segment labels, whose audio and visual segment-level F-scores are 79.33 and 69.30, respectively. In addition, we also evaluate the pseudo labels simply predicted by CLIP/CLAP (i.e., without our video-label filtering), resulting in 17.74 and 31.89 F-scores in the audio and visual modalities, respectively. This indicates the effectiveness of our VALOR, which exploits video-level labels to guide the learning in each modality. **Q2. More details about the automatic training signal extraction technique. What does “contrastive models understanding logits” mean? The intersection operation with y is not described in the text at all.** A2: We thank the reviewer for giving us the opportunity to improve our paper. Regarding “contrastive models understanding logits”, they are the logits output from the pre-trained contrastive models (CLIP or CLAP), denoted as $q^P$, where the superscript $P$ represents CLIP or CLAP. This term comes from our use of pre-trained contrastive models for describing the associated segments; however, we will simply rephrase this term as modality-aware logits to avoid confusion. As for the intersection operation, it is the logical AND between the hard labels obtained from the modality-aware logits (with thresholding) and the video-level labels. This operation eliminates the events that are erroneously predicted by the pre-trained models but not present in the video.
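As a minimal illustration of this thresholding-plus-AND step (toy logits and hypothetical helper names, not the actual VALOR implementation):

```python
def filter_pseudo_labels(logits, thresholds, video_labels):
    """Hard pseudo labels: threshold each per-class logit in every segment,
    then keep only events the video-level annotation marks as present."""
    return [[(z >= th) and (v == 1)
             for z, th, v in zip(segment, thresholds, video_labels)]
            for segment in logits]

# Toy example: 3 one-second segments, 4 event classes.
logits = [[0.9, 0.2, 0.8, 0.1],
          [0.1, 0.7, 0.6, 0.4],
          [0.5, 0.1, 0.9, 0.8]]
thresholds = [0.5, 0.5, 0.5, 0.5]  # per-class thresholds (chosen on validation)
video_labels = [1, 0, 1, 0]        # classes at index 1 and 3 absent from video

pseudo = filter_pseudo_labels(logits, thresholds, video_labels)
# The confident detection of the last class in the last segment is dropped,
# because the video-level label says that event does not occur in the video.
```

The logical AND with `video_labels` is what removes high-confidence false positives for events the weak video-level annotation rules out.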
We will add this to the main paper for completeness. **Q3. How are the threshold values decided for each class? Possible to exploit labels with lower confidence (i.e., $z_t < \theta$)? What if both CLIP and CLAP predict the same label but with low probability?** A3: For simplicity and fairness, we select such thresholds for each class based on its best segment-level F-score on the validation split. If using a class-independent threshold, which is determined by the best overall segment-level F-score on the validation split, we observe comparable performance in Table 1 of our global rebuttal PDF. Thus, the performance is not sensitive to the thresholds selected from the validation set. Regarding the second issue, reducing the thresholds for events occurring in the video may not be helpful. This is because such an approach is equivalent to reducing all event thresholds in our method. In our work, since such thresholds are selected via validation, any further adjustment to the thresholds would only result in poorer pseudo labels. **Q4. Possible to exploit captions (e.g., variants or synonyms) for CLIP or CLAP to improve pseudo label quality?** A4: Following the suggestion of the reviewer, we now utilize caption variants using different prefixes for CLIP/CLAP. For CLIP, the prefixes are 'A photo of', 'An image of', and 'This photo contains'; as for CLAP, the prefixes are 'This is the sound of' and 'This audio contains'. In addition, we also follow the suggestion and consider synonyms when describing event labels. For instance, synonyms for 'car' are 'automobile' and 'motorcar', synonyms for 'cat' are 'feline' and 'kitty', and synonyms for 'laughing baby' are 'chuckling infant' and 'giggling baby'. With the above caption manipulation/augmentation, we observe a 2.1 improvement in the pseudo labels' visual F-score when employing three captions as opposed to a single caption per event.
However, in terms of audio, this approach does not yield superior results and actually results in a 1.8 decrease in F-score. The complete results are presented in Table 4 of our rebuttal PDF. How to properly exploit captions for improved pseudo label prediction will be among our future research directions. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed answers. I have raised my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer again for the positive remarks. The discussions above will definitely help us strengthen our work.
Rebuttal 1: Rebuttal: ## **General Response** We sincerely appreciate the valuable time and insightful feedback provided by the reviewers. We are grateful for the opportunity to address the concerns raised by each reviewer, which fundamentally strengthens our work. The strengths pointed out by the reviewers include: **Method**: Motivation is clear. [Reviewer JWv6, t3gt]. Our VALOR is simple, technically sound, and novel. [Reviewer veqt, 82wA, JWv6, Qd1s, t3gt, uN7g]. **Experiment**: Extensive experiments (on two types of audio-visual tasks) and ablation studies are provided. [Reviewer veqt] **Performance**: Strong performance was shown to prove the effectiveness of the proposed method. [Reviewer veqt, 82wA (good performance), JWv6, Qd1s, t3gt, uN7g] **Presentation**: The paper is mostly clear and easy to follow. [Reviewer veqt, 82wA, JWv6, t3gt] **Importance of modality-independent learning**: The claim that modality-independent learning is crucial for audio-visual learning under noisy audio-visual conditions is a good point to highlight. [Reviewer 82wA, Qd1s] We would also like to address the particular concerns raised, as listed below. Please refer to the responses to each reviewer for further details. 1. Selection of class-dependent thresholds [Reviewer veqt, JWv6, uN7g] 2. Variations of text prompts [Reviewer veqt, JWv6] 3. Ablation studies on video-label filtering [Reviewer 82wA] 4. Flexibility of re-trained model selection [Reviewer 82wA, t3gt] 5. Performance on runtime/speed [Reviewer Qd1s] 6. Label distribution analysis and its effect [Reviewer uN7g] We thank the reviewers again for the suggestions and the raised issues. Should the reviewers have follow-up questions, we will be more than happy to answer them in the next phase. Given the recognized strengths in the initial reviews, together with the additional experiments and clarifications provided during the rebuttal, we hope this work will be of great value to the audio-visual learning community.
Pdf: /pdf/e52fa6d625411db3334fbeac0d5bc106590bbe10.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper proposes methods for weakly-supervised audio-visual event learning. The method relies primarily on pre-trained visual-language (CLIP) and audio-language (CLAP) models to guide the learning process. These pre-trained models serve as teachers and provide pseudo-labels, which are then used to compute the loss that guides the overall learning process. Experiments are done on the LLP dataset, and the proposed method improves the state of the art by a good margin. Strengths: – The proposed approach is simple – it relies on pre-trained CLIP and CLAP models and achieves good performance at both the segment level and event level. – One key claim the authors make is that modality-independent learning can be crucial for audio-visual learning in some conditions. While most multimodal learning works focus on using multiple modalities to improve performance on a given task, the significance of learning independently from each modality is a good point to highlight. – The paper is mostly clear and easy to follow except for some mathematical descriptions which I believe can be improved in a few places. Weaknesses: — The proposed approach relies heavily on the pre-trained audio-language and image-language models. Considering that, it might be important to understand the impact of these models themselves. I believe that the paper is missing some crucial analysis in those areas, some of which might help paint a better picture. --- What kind of information (pseudo-labels) do these pre-trained models provide for the audio and video modalities? Can we get some insight into the distribution of \hat{y}^m_t and how it is related (if at all) to the actual video-level labels? Do the audio and visual models provide complementary or similar information? This can be analyzed before any training. --- It’s not fully clear (Table 3) what happens if the ground-truth event labels are not used at all and just the pre-trained models are used to train.
--- What is the impact of using different pre-trained models -- different CLIP/CLAP-type models? — I think experiments on some other weakly supervised data might be helpful. — While the “non-alignment” of the modalities in the videos does sound like a valid problem to address – it is not clear how much of it is present in the LLP dataset. How often do the video and the audio align with the event label? There is no analysis of the extent to which this is present in the current dataset and the extent to which it gets addressed. Some other comments/questions -- What is KD in Table 3? Is it the one where ground-truth labels are not used? Just knowledge distillation from teachers? -- Why is label smoothing applied to modality training targets (line 106) but not applied anywhere else? -- For event localization, I am not sure accuracy is the right metric. Some metric as well as a visualization which factors in the beginning and end of the event would be more informative. -- The notations (lines 145-147) q^P and q^m are confusing. Simplifying and clearly explaining what precisely they represent will improve clarity. -- How are the threshold parameters \theta^P obtained for each class? -- In the current VALOR formulation, only CLIP/CLAP outputs for classes marked to be present are used (the logical AND with y). What if you --- Updated review after rebuttal---- Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please follow-up on the questions/concerns in the weakness section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors describe limitations of their work in terms of whether the approach will generalize to larger settings.
Societal impact and other such limitations are not discussed. Given the nature and scope of this paper, that might be okay. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer 82wA** We thank Reviewer 82wA for the positive comments and suggestive remarks. Please see our responses below for each raised issue. **Q1. Missing analysis on pre-trained audio/image-language models.** **(1a) What info is extracted from the pre-trained single-modality models?** A1a: The information produced by the pre-trained models is their confidence (i.e., logit) of the occurrence of each event in a video segment. The hard pseudo labels derived from these logits and video-label filtering indicate the presence of the events in each video segment. **(1b) Pseudo labels and their relation to the actual video labels.** A1b: In the supplementary material Table 7, we first validate the accuracy of the pseudo labels generated by VALOR before using them to train the model. The audio pseudo labels achieve an audio F-score of 84.92, and the visual pseudo labels achieve a visual F-score of 82.8. In contrast, if we use video labels directly as segment labels, the audio F-score is only 79.33, and the visual F-score is only 69.3. This demonstrates that the pseudo labels we generated are much more reliable.
**(1d) What if no ground truth labels are used during training?** A1d: If the video-label filtering is not performed (i.e., if we discard the intersection operation with the video-level labels), significantly degraded performance would be expected because this hurts the pseudo label quality. We now provide such results as an additional ablation in the initial rows of the two subtables found in the rebuttal PDF Table 1. For example, the resulting audio/visual F-score is merely 47.9/53.8, compared to 63.4/65.9 produced by our VALOR. If the reviewer's intention is to suggest not only avoiding the use of video labels for filtering but also not using video labels as the ground truth for calculating the loss, then the task becomes purely unsupervised. This deviates too much from the original (weakly-supervised) setting and goes beyond the scope of our work. **(1e) What’s the impact of pre-trained models?** A1e: To further assess the flexibility of using pre-trained models in VALOR, we now replace CLAP and CLIP with AudioCLIP [24] and OpenCLIP [A], respectively. The experimental results of substituting CLAP with AudioCLIP are presented in the left subtable of Rebuttal PDF Table 2, while the results of replacing CLIP with OpenCLIP are shown in the right one. The segment-level audio F-scores before and after the substitution are 63.4 and 61.0, respectively. The visual F-scores before and after the substitution are 62.3 and 61.6, respectively. We can conclude that our VALOR is not limited to the use of a specific instance of CLIP/CLAP. [A] Cherti et al., “Reproducible scaling laws for contrastive language-image learning.” In CVPR, 2023. **Q2. Missing analysis on how often visual-audio misalignment occurs** A2: To address this issue, we consider the validation split of LLP. We found that there are a total of 9126 event segments with labels assigned for at least one modality, and 4048 of them appear to be non-aligned, i.e., the label is assigned to exactly one modality.
The baseline method, HAN [72], only correctly predicts 188 segments (accuracy of 4.6%), while our VALOR and VALOR+ were able to predict correct per-modality labels for 1518 and 1850 segments (accuracy of 37.5% and 45.7%), respectively. The SOTA method JoMoLD only predicted 1471 of them and thus resulted in a poorer accuracy of 36.3%. Thus, it shows that our method is better at handling the “modality misalignment” problem. **Q3. What is KD in Table 3? Is it the one where ground-truth labels are not used?** A3: KD in Table 3 means knowledge distillation, indicating that the model learns directly from the logits output by CLIP and CLAP. The details of KD are mentioned in Sect. 3.2 in the main paper. Yes, the ground-truth video labels are not used in KD. **Q4. Why is label smoothing applied to modality training targets (line 106) but not anywhere else?** A4: Label smoothing is a heuristic method introduced by Tian et al. [72], which multiplies the ground-truth video labels by modality-dependent coefficients to assign labels to audio and visual data. However, [72] did not explicitly explain how such coefficients are determined. On the other hand, our method directly predicts pseudo labels in each modality by utilizing pre-trained audio/visual-language models, under the guidance of video-level labels. **Q5. Proper metric for event localization** A5: In the AVE task defined by [71], each video is divided into segments of equal and fixed length, with each segment containing only one event label. With such a setting, only per-segment prediction accuracy is required for evaluation. Note that accuracy is the official metric [71] used in the AVE task. **Q6. Confusing notations (lines 145-147) of $q^P$ and $q^m$** A6: $q$ represents the probabilities/logits of event labels. The superscript $P$ denotes the output from pre-trained models (CLIP or CLAP) and $m$ denotes the data modality (audio or visual).
For example, $q^{CLIP}$ denotes the probabilities output by CLIP and $q^v$ denotes the visual probabilities output by the HAN model. **Q7. Determination of the threshold parameters $\theta^{P}$** A7: For simplicity and fairness, we select such thresholds for each class based on the performance on the validation split, i.e., the best segment-level F-score. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. Several of my concerns were addressed. I have made my overall score more positive. --- Reply to Comment 1.1.1: Comment: We are glad that our responses have sufficiently clarified your raised issues. Your comments and suggestions are greatly appreciated and will definitely help us strengthen our work.
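The per-class threshold selection described in A7 above (pick, for each class, the threshold whose hard labels score best on the validation split) can be sketched as follows; the helper names and toy data are illustrative, not from the paper:

```python
def f_score(pred, gold):
    """Binary F1 over a list of per-segment predictions for one event class."""
    tp = sum(p and g for p, g in zip(pred, gold))
    fp = sum(p and not g for p, g in zip(pred, gold))
    fn = sum(g and not p for p, g in zip(pred, gold))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

def select_threshold(logits, gold, candidates):
    """Pick the threshold whose hard labels maximize F1 on validation data."""
    return max(candidates,
               key=lambda th: f_score([z >= th for z in logits], gold))

# Toy validation data for a single event class.
val_logits = [0.9, 0.8, 0.4, 0.3, 0.7, 0.2]
val_gold   = [True, True, False, False, True, False]
best = select_threshold(val_logits, val_gold, [0.1, 0.3, 0.5, 0.7, 0.9])
```

Running this per class yields class-dependent thresholds; sweeping one shared threshold over all classes gives the class-independent variant the rebuttal compares against.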
Summary: In this work, the authors propose a unified weakly supervised audio-visual scene understanding framework for audio-visual video parsing and audio-visual event localization. Different from previous works, the proposed Visual-Audio Label Elaboration (VALOR) method is simple and effective. It leverages large-scale audio-text and visual-text contrastively pre-trained models as the modality teachers to predict individual labels for audio and visual modalities to tackle the modality and temporal uncertainty issue and boost event parsing performance. Extensive experiments and ablation studies on the LLP and AVE datasets can validate the effectiveness of the proposed approach. Strengths: + The proposed method is new and technically sound. To alleviate drawbacks in past approaches, the proposed VALOR leverages large-scale audio-text and visual-text contrastively pre-trained models as the modality teachers to predict individual labels for audio and visual modalities for weakly-supervised audio-visual event parsing tasks. + Extensive experiments and ablation studies on two datasets are provided, and the strong results can demonstrate the effectiveness of the proposed method. + The core code is provided in the supplementary material, and the authors promised to release the source code. + The paper is easy to follow. Weaknesses: + The process for selecting class-dependent thresholds in VALOR is not clearly outlined in the paper. These thresholds are crucial hyperparameters for generating audio and visual labels for training, and it appears that different classes may require different thresholds. Further explanation is needed on how these thresholds are chosen and how changes in these thresholds could potentially impact the performance of event parsing. 
+ The authors did not provide a clear explanation as to why directly using Knowledge Distillation (KD) with audio and visual semantic class embeddings results in even worse parsing performance (especially for audio event parsing) on the LLP dataset, as shown in Table 3. While the proposed VALOR method does improve performance, a more detailed analysis on the KD models would be beneficial for further understanding. + The CLIP model uses a text prompt that is created by adding a "A photo of" prefix to the event's natural language form. However, the visual data involved in audio-visual event parsing are videos, not single images. This raises the question of whether there is a better prompt that could be used to improve model performance by taking into account the video nature of the data. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address questions in Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discussed method limitations in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## **Response to Reviewer veqt** We thank Reviewer veqt for the positive comments and suggestive remarks. Please see our responses below for each raised issue. **Q1. Selection of class-dependent thresholds in VALOR. What about other threshold choices?** A1: For simplicity and fairness, we select such thresholds for each class based on its best segment-level F-score on the validation split. If using a class-independent threshold, which is determined by the best overall segment-level F-score on the validation split, we observe comparable performance in Table 1 of our global rebuttal PDF. Thus, whether the thresholds are class-dependent or class-independent does not affect the results significantly.

| CLAP Event Thresholds | Segment-level Audio F-score |
|-----------------------|:---------------------------:|
| class-dependent       | 62.7                        |
| class-independent     | 63.4                        |

| CLIP Event Thresholds | Segment-level Visual F-score |
|-----------------------|:----------------------------:|
| class-dependent       | 66.3                         |
| class-independent     | 65.9                         |

**Q2. Why does Knowledge Distillation (KD) with audio and visual semantic class embeddings result in worse parsing performance?** A2: In order to perform knowledge distillation (KD), the Softmax function is applied to the logits predicted by the pre-trained models (CLAP/CLIP) and those produced by the HAN model, followed by calculating their KL-divergence. Since the Softmax function predicts a single dominant label, which does not align with the multi-label setting of the AVVP task, the KD-trained model is expected to obtain degraded performance compared to VALOR. **Q3. Possible to exploit visual prompts to describe visual information in videos?** A3: We note that visual/video data in the AVVP task are presented as a collection of consecutive one-second video clips, with very few visual changes in each clip. Thus, it is sufficient to apply the image-text pre-trained CLIP model with the standard prompt of “A photo of” to extract the visual information.
Although we did not design prompts specifically for video data, we have created a variety of prompts tailored for images as suggested by Reviewer JWv6. Through this, we aimed to explore whether generating more prompts for each event could lead to more accurate pseudo labels. The experimental results are presented in Table 4 of the rebuttal PDF. We observe that generating more prompts for each event slightly improves the accuracy of the generated visual pseudo labels, resulting in a 2.1 increase in segment-level visual F-score. However, this approach does not improve the accuracy of the generated audio pseudo labels; instead, there is a slight decrease of 1.8 in segment-level audio F-score. --- Rebuttal Comment 1.1: Title: Response to Authors Comment: The rebuttal can address my questions. I will keep my positive rating. Thanks! --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive remarks. We also appreciate the opportunity to clarify the raised issues, which definitely help us strengthen our work.
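The softmax-versus-multi-label mismatch described in Q2 of the response above can be illustrated numerically (toy logits, values chosen for illustration only): when two events genuinely co-occur, softmax forces their probabilities to compete for a single unit of mass, pushing one below a 0.5 threshold, while independent per-class sigmoids keep both high.

```python
import math

def softmax(zs):
    """Numerically stable softmax: classes compete for probability mass."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def sigmoid(z):
    """Independent per-class probability, as used in multi-label training."""
    return 1.0 / (1.0 + math.exp(-z))

# Two events genuinely co-occur in this segment (both logits are high).
logits = [4.0, 3.5, -2.0, -2.0]

soft = softmax(logits)                # the two true events split the mass
multi = [sigmoid(z) for z in logits]  # both true events stay near 1.0
```

Thresholding `soft` at 0.5 would drop the second co-occurring event, which is one way to see why distilling softmax-normalized logits fits a multi-label task poorly.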
null
null
null
null
DIN-SQL: Decomposed In-Context Learning of Text-to-SQL with Self-Correction
Accept (poster)
Summary: In their study on the Text2SQL task, the authors demonstrate the efficacy of breaking down the generation problem into sub-problems when utilizing LLMs to improve performance on the Spider dataset. The approach is simple yet effective on the Spider dataset. It yielded consistent improvements across three LLMs: GPT-4, Codex Davinci, and Cushman. Notably, the paper achieves a new SOTA performance on the Spider holdout test set. Strengths: - SOTA on Spider holdout test set - The approach consistently improves results on three OpenAI LLMs Weaknesses: - The analysis and experiments conducted in the study were solely focused on a single dataset, namely Spider. - The study does not provide sufficient evidence to support the generalizability of the proposed approach to Text2SQL tasks in general. - It is important to highlight the existence of several large-scale and robust Text2SQL datasets that could have strengthened the results in terms of generalizability. Notably, the BIRD-bench dataset (https://bird-bench.github.io/) provides an additional resource for evaluating the proposed approach. Furthermore, the Spider dataset has various variants, such as Spider-SYN, Dr.Spider, Spider-realistic, and Spider-DK, which offer opportunities to assess the effectiveness of the approach in different contexts and scenarios. Conducting experiments on these datasets would have enhanced the overall robustness and applicability of the findings. - The foundation models utilized in the study are exclusively based on OpenAI models. - While the classification step for SQL difficulty proves effective on the Spider dataset, it is unlikely to exhibit strong generalization capabilities to other real-world Text2SQL data. The classification of SQL difficulty may be influenced by annotation artifacts specific to the Spider dataset. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. How did the authors come up with the self-correction prompts?
Was it through a lot of trial and error from the training and validation set? 2. How did the author decide on this set of in-context learning exemplars? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: No negative societal impacts Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The first, second, and third weaknesses mentioned by the reviewer have been thoroughly addressed in the general response to all reviewers. Regarding the weakness related to other LLMs besides OpenAI models, at the time of writing this paper, GPT-based models, PaLM, and the OPT model were the widely used language models. Access to other LLMs besides the GPT family was limited as they were not accessible through API calls. Moreover, due to the large number of parameters, we faced difficulties in loading those models onto our available infrastructure. Concerning the last weakness regarding the applicability of our proposed approach to real-world scenarios with more complex SQL query structures, our current classification structure effectively handles numerous benchmarks in the text-to-SQL domain and real-world applications. To accommodate more complex scenarios, we can readily add another class for intermediate steps and include additional steps in the existing most complex class. As the Spider dataset already contains the most complex SQL queries, it serves as a valuable benchmark for assessing intricate SQL structures. Question: How did the authors come up with the self-correction prompts? Was it through a lot of trial and error from the training and validation set? Answer: The concept of using LLMs to correct themselves was inspired by an application mentioned on the OpenAI website. LLMs were suggested as a tool for debugging code. Recognizing the LLMs' potential in debugging, we proposed utilizing them to self-correct their own responses. This approach aligns with the ideas presented in independent work conducted around the same time as ours, such as the paper "self-debug" by Google DeepMind, where LLMs were employed to debug their generated SQL queries and Python code. Question: How did the author decide on this set of in-context learning exemplars? 
Answer: The examples in our prompts are deliberately selected to include at least one instance for almost all widely used SQL keywords. This careful selection ensures that the LLM has access to all the necessary information before generating the SQL query. --- Rebuttal Comment 1.1: Comment: Thanks for your response and new experiment results. I have updated my ratings accordingly
Summary: This paper addresses the task of text-to-SQL prediction using large language model (LLM) prompting. The authors propose DIN-SQL, a chain-of-thought (CoT) prompting method which decomposes the SQL prediction process into four substeps: schema linking, complexity classification, SQL prediction, and self-correction. For each step, a manually designed prompt with few-shot in-context learning samples is provided to the LLM to obtain the prediction. In particular, after complexity classification, samples with different complexity levels are addressed with different prompts. Experiments on the Spider dataset demonstrate the effectiveness of the proposed method. Strengths: - The overall pipeline is conceptually simple but practically effective. It appears to have successfully leveraged the power of LLMs and achieved promising results. - The idea of applying different prompts for samples with different complexity is justifiable. It is based on prior observations that CoT prompts are effective on harder samples but can hurt the performance on easy samples. The ablation studies also echoed this observation, and verified the usefulness of this proposed mechanism. Weaknesses: - The proposed method does not carry much technical innovation, as CoT is already a widely studied and applied technique, and the intermediate representation is imported from previous work NatSQL. - The complexity categorization and corresponding prompt designs are more or less tailored for the Spider dataset. It might be unclear whether the method is directly applicable in real-world scenarios, where the variety of target SQL might be much larger, and may include additional mechanisms such as self-join. - Based on my observations on the "sub-questions" generated for nested complex samples, it seems that many of them (though not all) are identical to the original questions. This might not be the desired behavior and leaves room for further improvements. 
For example: - "Find the title, credit, and department name of courses that have more than one prerequisites?" can be solved by knowing the answer to the following sub-question "What is the title, credit value, and department name for courses with more than one prerequisite?". Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Around L231: The SQL-prediction step prompt for nested complex questions is designed in a comprehensive way, allowing multiple sub-questions to be included. However, looking at the samples in the appendix, it seems that at most one sub-question is predicted. Is the method actually able to handle very complex queries by generating >1 sub-questions? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Several limitations are discussed, although I think there are additional points as I mention above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
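The four-stage pipeline this review summarizes (schema linking, complexity classification, SQL prediction, self-correction) might be sketched roughly as follows. This is not the authors' implementation: `call_llm`, the prompt tags, and the toy `singer` schema are hypothetical stand-ins, and the real prompts contain lengthy few-shot demonstrations.

```python
# Hypothetical sketch of a DIN-SQL-style four-stage prompting pipeline.
# `call_llm` is a stub standing in for an actual LLM API call, so the
# sketch is self-contained and runnable.

def call_llm(prompt: str) -> str:
    """Stub LLM: routes on a prompt tag instead of calling a real model."""
    if prompt.startswith("[schema-link]"):
        return "singer.name, singer.age"
    if prompt.startswith("[classify]"):
        return "easy"
    if prompt.startswith("[generate]"):
        return "SELECT name FROM singer WHERE age > 30"
    if prompt.startswith("[correct]"):
        # Self-correction returns the query unchanged when it finds no bug.
        return prompt.rsplit("SQL:", 1)[1].strip()
    raise ValueError("unknown prompt")

def din_sql(question: str, schema: str) -> str:
    # 1) Schema linking: identify tables/columns referenced by the question.
    links = call_llm(f"[schema-link] schema: {schema} question: {question}")
    # 2) Classification (easy / non-nested complex / nested complex);
    #    harder classes get chain-of-thought prompts with sub-questions.
    difficulty = call_llm(f"[classify] links: {links} question: {question}")
    # 3) SQL generation with a prompt matched to the difficulty class.
    sql = call_llm(f"[generate] class: {difficulty} links: {links} "
                   f"question: {question}")
    # 4) Zero-shot self-correction pass over the generated query.
    return call_llm(f"[correct] schema: {schema} SQL: {sql}")

print(din_sql("Names of singers older than 30?", "singer(name, age)"))
# -> SELECT name FROM singer WHERE age > 30
```

Each stage consumes the previous stage's output, which is why the review notes that at most one sub-question appearing in the demonstrations constrains what stage 3 tends to produce.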
Rebuttal 1: Rebuttal: The first weakness raised in the review has been thoroughly addressed in the general response to all reviewers. Regarding the second weakness concerning the applicability of our proposed approach to real-world scenarios with more complex SQL query structures, our current classification structure is effective for many benchmarks in the text-to-SQL domain, which aim to resemble queries in real-world applications. However, it is fair to say that the queries in real-world applications can be more complex. To handle more complex scenarios which fall outside our current query classes, we may simply add more query classes to our query classification module and provide more relevant steps for each class in our query generation module. We could only evaluate our model on existing benchmarks, and Spider, at the time of writing this paper, stood out as the most complex text-to-SQL benchmark we could use. The third weakness pertains to identical questions and sub-questions. During our experiments with query decomposition, we noticed instances where the query did not have a clear subquery or could not be broken down into independent components. In those cases, the model generated a paraphrased version of the original question. To promote this behavior, especially when a question cannot be broken into sub-questions, we included examples of such cases in our prompt. Our evaluation shows that including such cases overall helps the model. Question: Is the method actually able to handle very complex queries by generating >1 sub-questions? Answer: The presence of only one sub-question in all the few-shot demonstrations can be attributed to the database selection process from the training set of Spider. For the samples in our chosen database, we did not encounter queries that required more than one sub-question. However, during the inference stage, we noticed that the model was capable of generating multiple sub-questions, even though some were unnecessary. 
It was evident that often one sub-question, along with a SQL query utilizing JOIN operators, proved to be sufficient. --- Rebuttal Comment 1.1: Comment: Thank you for the response. My overall evaluation of the contributions of the work is not changed, thus I will keep my current score. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful comments. In our effort to address your concerns about the applicability of our approach, we have included the performance of our approach on the BIRD benchmark in the comment section of the rebuttal to all reviewers, where DIN-SQL establishes itself as the state-of-the-art performer. This additional evaluation further underscores the effectiveness of our approach and strengthens its applicability in real-world scenarios. We believe that the results we've presented can provide valuable insights into the reliability and impact of our method. Your dedication to thoughtful evaluation is invaluable to us, and we're pleased to have been able to provide additional evidence that reinforces the merits of our work. If you have any further questions or observations, please feel free to share them with us.
Summary: The paper proposed to decompose the text-to-SQL reasoning into multiple steps and solve them with large language models. The authors began by conducting an error study of LLMs with few-shot learning and identified the common errors, such as "schema linking" and "JOIN". To address these common errors, they proposed a new method called DIN-SQL which breaks the text-to-SQL task down into four modules: (1) schema linking, (2) query classification and decomposition, (3) SQL generation, and (4) self-correction. In the schema linking and query classification modules, examples from the Spider dataset's training set are used to guide LLMs in identifying the relevant database schema and potential SQL structure. The SQL generation modules leverage different examples to handle various SQL structures. For more complex SQL structures, such as non-nested complex and nested-complex queries, demonstration examples with hand-written intermediate reasoning steps are utilized. Finally, the self-correction module asks the LLMs to correct the predicted SQL if they realize that their initial generation was incorrect. The proposed framework demonstrates remarkable performance, surpassing state-of-the-art models on the Spider test set. Strengths: 1. The proposed method DIN-SQL demonstrates very impressive results on the hold-out test set in the Spider dataset, showing the effectiveness of DIN-SQL on Spider. 2. The error analysis reveals the drawbacks of LLMs with standard few-shot demonstrations, and the proposed method is designed to address these drawbacks. Weaknesses: 1. The proposed methods seem highly tailored to the examples in the Spider dataset. Previous work [1-2] has highlighted the spurious correlations in the Spider dataset, such as the strong lexical matching between the database and the question. It seems important to evaluate the proposed method on other datasets such as Spider-syn [1], Spider-DK [2], Dr. Spider [3], or KaggleDBQA [4]. 2. 
The proposed SQL generation modules require hand-written intermediate reasoning steps. I don't think it is a problem given the limited number of examples requiring annotation but my concern is whether these examples will still serve as good demonstrations for the datasets other than Spider. I hope this can be addressed along with 1. [1] Gan, Yujian, et al. "Towards robustness of text-to-SQL models against synonym substitution." arXiv preprint arXiv:2106.01065 (2021). [2] Deng, Xiang, et al. "Structure-grounded pretraining for text-to-sql." arXiv preprint arXiv:2010.12773 (2020). [3] Chang, Shuaichen, et al. "Dr. Spider: A diagnostic evaluation benchmark towards text-to-SQL robustness." arXiv preprint arXiv:2301.08881 (2023). [4] Lee, Chia-Hsuan, Oleksandr Polozov, and Matthew Richardson. "KaggleDBQA: Realistic evaluation of text-to-SQL parsers." arXiv preprint arXiv:2106.11455 (2021). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I am somewhat confused by the disparity between the results of DIN-SQL in the Spider development set and the Spider test set. On the test set, DIN-SQL demonstrates superior performance compared to the state-of-the-art model [1], achieving a margin of 5.4 points higher (85.3 vs. 79.9). However, on the development set, it falls short by 9.9 points (74.2 vs. 84.1) and is outperformed by other in-context learning methods [2]. Do you know what factors contribute to DIN-SQL's high performance on the test set but comparatively low performance on the development set? [1] Li, Haoyang, et al. "Resdsql: Decoupling schema linking and skeleton parsing for text-to-sql." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 11. 2023. [2] Ni, Ansong, et al. "Lever: Learning to verify language-to-code generation with execution." arXiv preprint arXiv:2302.08468 (2023). Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Looks good to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The first weakness highlighted by the reviewer has been thoroughly addressed in the general response to all reviewers. Regarding the second weakness raised in the review, we would like to clarify that our examples were chosen from the training set of Spider and were selected deliberately to include at least one example for each of the widely used SQL keywords. Additionally, the cross-domain nature of the Spider dataset ensures that achieving the highest performance on this benchmark translates to strong performance across various domains. As a result, we believe that the examples used in our prompts are highly applicable to a wide range of existing benchmark datasets. Question: Why is there a disparity between the results of DIN-SQL in the Spider dev set and the test set? Answer: The test set of Spider, which is intended to be hidden, has not been published, making it challenging to determine the exact reason for the performance disparity. However, we have some speculations that may shed light on potential contributing factors: 1) Our proposed approach relies solely on the database schema to generate answers for given questions. This strategy poses challenges in cases where the model needs knowledge of how values are stored in the database. For instance, when the question is "Give me the name of female users with age over 40," the model may not know whether the stored values in the gender column are "female" or "F." Consequently, it relies on the question's context and chooses "female." We speculate that the number of questions requiring knowledge of specific database values is lower in the test set compared to the dev set of Spider. 2) Ambiguities exist in the database schema of the dev set of Spider. In some databases within the dev set, tables have columns that store the same information about entities but in a slightly different format. 
While using any of those columns may be treated as correct, a generated query is deemed correct if its column choice matches that of the reference query. The higher performance of our model on the test set might be attributed to the test set having a more well-structured database schema, reducing such ambiguities. While these speculations offer insights into potential reasons for the observed performance disparity, the lack of access to the test set hinders us from confirming these hypotheses conclusively. --- Rebuttal Comment 1.1: Title: Evaluation on other datasets Comment: Thank you for providing the response. I think that the disparity between the results of DIN-SQL in the Spider dev set and the test set, when compared to other approaches, raises a crucial concern about the robustness/stability of DIN-SQL. Given that, it is essential to evaluate the proposed method on another text-to-SQL dataset. --- Reply to Comment 1.1.1: Title: DIN-SQL performance on other datasets Comment: Thank you for your thoughtful comment. We greatly value your concern about the robustness and stability of the DIN-SQL approach. In response to this concern, we have conducted a comprehensive evaluation of our method on the BIRD benchmark, another challenging text-to-SQL benchmark. We are pleased to share that our evaluation on the BIRD benchmark development and test sets reaffirms the effectiveness and robustness of the DIN-SQL approach. We have included the results of our evaluation, where DIN-SQL establishes itself as the state-of-the-art performer, in the comment section of the rebuttal to all reviewers. These results further highlight the consistency and reliability of our approach across different datasets. We appreciate your attention to these critical aspects and hope that our evaluation on the BIRD benchmark adds clarity and confidence in the stability of the DIN-SQL method.
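The value-format problem described in this rebuttal (the model cannot know whether a gender column stores "female" or "F" from the schema alone) can be made concrete with a toy example. The `users` table and its data below are made up purely for illustration; a query guessed from the question text alone silently returns no rows when the stored encoding differs.

```python
# Illustration of the schema-only value-format ambiguity: guessing the
# wrong stored encoding produces an empty (but syntactically valid) result.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, gender TEXT, age INT)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [("Ada", "F", 45), ("Bob", "M", 50), ("Eve", "F", 41)])

# Query guessed from the question "female users with age over 40":
guessed = conn.execute(
    "SELECT name FROM users WHERE gender = 'female' AND age > 40").fetchall()
# Query matching how the values are actually stored:
actual = conn.execute(
    "SELECT name FROM users WHERE gender = 'F' AND age > 40").fetchall()

print(guessed)  # []
print(actual)   # [('Ada',), ('Eve',)]
```

Because both queries execute without error, this kind of mistake is invisible to a self-correction step that only checks syntax, which is consistent with the rebuttal's speculation about the dev/test disparity.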
Summary: This paper proposes to improve few-shot prompting of LLMs for the text-to-SQL task. It first provides a detailed error analysis of existing few-shot prompting LLM approaches, grouped into six categories. Then the paper proposes a new approach to decompose the task into a few sub-tasks, solve each individually, and compose the sub-tasks into the final answer. Experiments show the proposed approach achieves state-of-the-art on the Spider benchmark dataset. Strengths: 1. The result is solid, as it achieves state-of-the-art on the challenging Spider benchmark dataset. In addition, this paper provides a detailed ablation study and analysis, therefore it's straightforward to understand the strengths and weaknesses of the proposed approach. 2. The paper is well-written, easy to follow the motivation and details. Weaknesses: Overall, my biggest concern is there's little additional novelty beyond "just another prompt engineering paper". It's expected that with larger LMs, few-shot approaches could outperform existing fine-tuning based approaches. However, after reading this paper, it's still unclear whether scaling LLMs can solve this text-to-SQL task and, if not, where the bottleneck is. More specific concerns: 1. The manual error analysis (Section 3) only applies to Codex; it's unclear whether these errors still exist for larger language models such as GPT-4. 2. There's only marginal improvement compared with chain-of-thought prompting (Table 4). And I guess the margin would be even smaller if we use GPT-4 as the LLM. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What's the difference in errors between Codex and GPT-4? 2. What's the result of chain-of-thought prompting for GPT-4? 3. What do you think is the upper bound for this dataset, and do you think scaling LLMs can achieve the upper bound? Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: A detailed explanation of the novelties of our prompting method is provided in the general response to the reviewers. The argument regarding marginal improvement over the chain-of-thought method is invalid because the cited performance of “decomposed COT prompting” is not for the chain-of-thought method alone. The reported performance here is for our comprehensive framework that includes all our components except classification. This means that the approach contains not only COT prompting but also Schema Linking, Self-Correction, and the NatSQL Intermediate Representation, all of which are significant contributions of our work. It is essential to consider the entire framework's performance, as each module contributes to the overall effectiveness of our approach. Question: What's the difference in errors between Codex and GPT-4? Answer: Our error analysis reveals that the majority of errors in the area of schema linking remain when transitioning from Codex to GPT-4. Ambiguities within the database schema pose a significant challenge, even for advanced models like GPT-4, in accurately identifying correct columns and tables. While using a larger model addresses certain issues, particularly in cases where the model struggles to generate accurate SQL queries (miscellaneous class of errors), schema-linking complexities remain the primary obstacle. In summary, larger models generally exhibit enhanced SQL query generation capabilities, yet the challenges arising from schema-linking ambiguities remain prominent. Addressing these challenges will be a key focus for further improvement and refinement of our approach. Question: What's the result of chain-of-thought prompting for GPT-4? Answer: We did not specifically test the pure chain-of-thought approach on the Spider dataset with GPT-4. However, we evaluated the performance of the chain-of-thought approach when integrated with the three other modules proposed by us. 
Using the chain-of-thought approach uniformly for all questions, irrespective of their complexity, resulted in a performance degradation, as highlighted in Table 4. The decomposed chain-of-thought result presented in this table refers to employing the most complex prompt, developed for the nested complex class, for all questions instead of adopting a classification-based approach to determine prompt complexity based on the question's level of difficulty. Question: What do you think is the upper bound for this dataset, and do you think scaling LLM can achieve the upper bound? Answer: We think we are approaching the upper bound on the Spider dataset because there are certain ambiguities in the database, which make it challenging to achieve a flawless performance even if larger LLMs are employed. Some natural language questions have multiple interpretations, and relying on a single reference query as the gold standard is insufficient to address these diverse interpretations. While scaling LLMs can address some issues related to generating complex SQL queries, the primary challenge persists in the area of schema linking, where comprehending database schema based on less-descriptive table and column names remains a difficult task. --- Rebuttal Comment 1.1: Title: Please Reply to Author Rebuttal Comment: Dear reviewer, Thanks a lot for your efforts and valuable reviews. Would you please check this author rebuttal and see how they address your concerns on novelty, manual error analysis, and marginal improvement? Please reply to authors by adding your following comments below this author rebuttal. As the author-reviewer discussion is closed soon, we would appreciate if you could submit your reply to authors by Aug 21st 1pm EDT. Thanks! Best, AC
Rebuttal 1: Rebuttal: Our method draws inspiration from chain-of-thought and decomposed prompting techniques and brings valuable contributions to prompting across various domains, including text-to-SQL. These contributions are as follows: 1) Adaptive Prompting Based on Task Complexity: Our technique involves classifying the input task complexity and adjusting the prompt complexity accordingly. Tailoring prompts based on input question complexity outperforms using generic or overly complex prompts, as demonstrated in Table 4. Moreover, this classification based on task complexity can reduce the number of tokens that are used for simple questions, hence reducing the cost. 2) Schema-Linking Module: Our work introduces a schema-linking module, which has not been utilized in the context of prompting approaches. Inspired by our work, a few other prompting methods (e.g. C3 paper and LangChain SQLAgent) boost their performance using schema linking. The schema-linking template we propose is optimized through numerous iterations to mimic human thought processes. 3) LLMs for Self-Correction: Our research is among the first to propose using Language Models (LLMs) to self-correct their generated responses. The effectiveness of this method is demonstrated by other independent work conducted around the same time as ours, including the self-debug paper, across various domains, not limited to text-to-SQL. Our approach influenced the widely used text-to-SQL agent of LangChain, with practical applications in the industry. In addition to the aforementioned contributions in the prompting techniques domain, our method stands out with the highest performance, surpassing not only fine-tuning approaches but also other prompting methods. For our evaluation, we utilized the Spider dataset, a comprehensive cross-domain benchmark with databases from various domains. Given its extensive coverage, a successful approach on this dataset is expected to perform well across all domains. 
There are also a few other datasets that are derived from or are based on Spider: 1) Spider-Syn replaces question terms in Spider with synonyms. As reported in the Spider-Syn paper, 99% of those modifications are done on schema words and the remaining 1% on cell values. 2) Spider-DK modifies the questions in Spider by adding domain knowledge or question paraphrases. For example, "in the order of birth date" in Spider is replaced with "order of their birth date from old to young" in Spider-DK, and "dog" is replaced with "abandoned dogs". 3) Spider-realistic modifies the questions in the Spider dataset to remove the explicit mentions of column names. 4) Dr.Spider applies perturbations to Spider databases, natural language questions, and SQL queries to measure the robustness of the models. We did not evaluate our work on these variants for two reasons: 1) Many of these benchmarks share a similar SQL query structure with the Spider dataset but extend the benchmark in one direction. Based on our experience with these datasets, the modified queries in these variants tend to be less complex, and our classification and decomposition techniques are expected to do well on these variants. Also, with the wealth of information stored within the parameters of LLMs, and the lesser reliance of our method on the training data, our model is expected to do well on these variants compared to fine-tuned models that heavily rely on the training data. For example, LLMs have demonstrated exceptional paraphrasing capabilities, making them resilient to synonym substitutions. Experimental results in the paper "A comprehensive evaluation of ChatGPT’s zero-shot Text-to-SQL capability" support these claims, as the model's performance remained stable under various perturbations to the natural language questions. 2) Cost was the main factor in evaluating our model. 
We had to use OpenAI API calls in our evaluation, and the cost of making those calls for the large number of queries in the Spider dataset was substantial. Running our model on other variants would have doubled or tripled the cost. Although we could not provide results for these other datasets due to the expensive nature of employing GPT-4, the resilience of the previous prompting method on these variants suggests a similar trend should hold for our method. BIRD, another cross-domain dataset similar to Spider, has been very recently proposed. At the time of writing this paper, the BIRD dataset was not published, hence it was not included in our study. This dataset intentionally focuses on database values and external knowledge provided in queries, whereas our queries are generated independently of database values and based only on the database schema for generality reasons (as discussed in the paper). Our prompts for this dataset will need to include such information to be effective. We are currently working with the authors of BIRD to evaluate our model on their datasets.
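Point (1) of this rebuttal, adapting prompt complexity to the question's predicted difficulty class, can be sketched as a simple routing table. The templates and token counts below are illustrative assumptions, not the paper's actual prompts or measurements.

```python
# Hypothetical sketch of adaptive prompting: route each question to a
# prompt matching its difficulty class, so easy questions do not pay the
# token cost of the long chain-of-thought prompt.

PROMPTS = {
    # difficulty class -> (prompt template, rough prompt-token cost)
    "easy": ("Translate to SQL:\n{q}", 400),
    "non-nested complex": ("Think step by step, then write SQL:\n{q}", 1800),
    "nested complex": ("List sub-questions, solve each, then compose:\n{q}", 3200),
}

def route(question, predicted_class):
    """Return the prompt for this difficulty class and its token cost."""
    template, cost = PROMPTS[predicted_class]
    return template.format(q=question), cost

prompt, cost = route("How many singers are there?", "easy")
print(cost)  # 400, instead of 3200 if every question used the hardest prompt
```

This is also why Table 4's "decomposed COT" row (the hardest prompt for every question) underperforms the classification-based variant: the routing both saves tokens and avoids hurting easy questions with overly complex prompts.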
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper's contributions are the following: - The authors examined the common failure modes of doing text-to-SQL with few-shot prompting of LLMs: schema linking, JOIN, GROUP BY, nested queries and set operations, invalid SQL and miscellaneous. - The authors propose a method to do text-to-SQL by decomposing the task into 4 sub-problems, and solving each with a few-shot / zero-shot prompt to an LLM: schema linking, classification and decomposition, SQL generation, and self-correction. Their in-context learning method is reported to attain the highest execution accuracy on the test set of the Spider dataset, without making use of database content. - The work shows that LLMs can be used for text-to-SQL via prompting with performance equivalent to or better than methods that make use of fine-tuning. Strengths: - The proposed method is a good application of chain-of-thought style problem decomposition to in-context learning techniques for text-to-SQL. - The method is well motivated by first conducting an examination of common failure modes when using LLMs for text-to-SQL in the few-shot setting. Investigating the error improvements using their method in Figure 4 is a very clear way to show how their method helps with better in-context learning. - The paper is written in a largely clear manner that makes it easy to follow. - The paper's results are a relevant contribution to the text-to-SQL community, in that their performance is significantly better than other LLM techniques like those in Rajkumar et al., 2022 and Liu et al., 2023. Weaknesses: - There could have been more discussion about how the proposed four modules have been implemented without using prompting, to better situate the work in the literature. - The paper should have been clearer about how the intermediate representation of NatSQL bridges the gap between queries and SQL statements. 
An example each for the non-nested complex class and nested complex class would have been helpful in the main paper, instead of leaving it to the Appendix. In particular, it is not clear in the paper 1) how the intermediate representation is obtained, 2) how the removal of operators like GROUP BY or the WHERE clause from the syntax of the intermediate representation can help the LLM still generate the right SQL statements, and 3) how the LLM is induced to solve sub-queries for the nested complex class. - The paper could have been clearer about the latency of its proposed method. While high-performing, it is likely that making several sequential calls to an LLM will be high latency. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How is the intermediate representation obtained? I refer to Appendix A.5.2 to see that it is inserted into the prompt, but it is not clear where this is generated. - How do the authors think the removal of operators like GROUP BY or the WHERE clause from the syntax of the intermediate representation helps the LLM still generate the right SQL statements? - Can the authors illustrate how a query is decomposed into sub-queries for the nested complex class? It is not clear to me still in Appendix A.5.3. - What is the latency of the proposed decomposed in-context learning method, and how does it compare to other methods (like RESDSQL-3B + NatSQL or Graphix-3B+PICARD)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, limitations are adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Question: How is the intermediate representation obtained (ref Appendix A.5.2)? Answer: For our few-shot examples, we used the intermediate representation from the NatSQL Github repo. The repo gives the intermediate representation for all queries in the training set of Spider. Question: How does removing operators in the intermediate representation help? Answer: Removing/merging operators makes the transition from natural language to SQL easier and is part of our problem decomposition. Expressions in natural language queries may not clearly map to a unique SQL clause or they may map to multiple clauses. For example, some conditions are mapped to the WHERE clause whereas others are mapped to the HAVING clause. Some SQL clauses do not have a clear counterpart in text descriptions (e.g. JOIN and GROUP BY). Dispensing or merging the operators makes the generation task easier and pushes the LLM to focus on correctly predicting the query structure before refining it in the next step. Question: Can the authors illustrate how a query is decomposed into sub-queries for the nested complex class? It is not clear to me still in Appendix A.5.3. Answer: The sub-questions are extracted in the classification and decomposition module. For example, as shown in Fig 3-b, for the question “how many courses that do not have prerequisites,” the sub-question: “which courses have prerequisites” is extracted. In SQL generation, SQL is generated for sub-questions in a step-by-step reasoning process before generating the whole query. For example, consider the last demonstration in the prompt in Appendix A.5.3, where the question “Give the title and credits for the course that is taught in the classroom with the greatest capacity” has the sub-question “What is the capacity of the largest room?”. The mapping of the sub-question to SQL, i.e. “ (SELECT max(capacity) FROM classroom)”, is provided in the reasoning process before generating the final answer. 
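The sub-query splicing described in this rebuttal can be sketched as follows. This is our own illustrative reconstruction, not the authors' code: the helper name `compose_nested_sql` and the exact JOIN conditions in the outer query are hypothetical, but the sub-question and its SQL are taken from the example above.

```python
# Illustrative reconstruction (not the authors' code) of how the SQL generated
# for a sub-question is spliced into the final query for the nested complex
# class. The helper name and the outer query's JOIN conditions are hypothetical.

def compose_nested_sql(outer_template: str, sub_query_sql: str) -> str:
    """Embed a sub-question's SQL as a scalar subquery in the outer query."""
    return outer_template.format(subquery=sub_query_sql)

# Sub-question: "What is the capacity of the largest room?"
sub_sql = "SELECT max(capacity) FROM classroom"

# Outer question: "Give the title and credits for the course that is taught
# in the classroom with the greatest capacity."
outer = (
    "SELECT course.title, course.credits "
    "FROM classroom JOIN section ON classroom.building = section.building "
    "AND classroom.room_number = section.room_number "
    "JOIN course ON section.course_id = course.course_id "
    "WHERE classroom.capacity = ({subquery})"
)

final_sql = compose_nested_sql(outer, sub_sql)
```

In the actual method this composition is performed by the LLM itself during the step-by-step reasoning, not by a deterministic template; the sketch only makes the nesting explicit.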
Question: Latency compared to other methods? Answer: Our latency depends heavily on that of the OpenAI API calls, which varies considerably over time. For GPT-4, this was typically under 1 min. This time can be significantly improved by using the LLM on Microsoft Azure. The latency of other approaches also depends heavily on the hardware and the GPU. --- Rebuttal Comment 1.1: Title: Please Reply to Author Rebuttal Comment: Dear reviewer, Thanks a lot for your efforts and valuable reviews. Would you please check this author rebuttal and see how it addresses your concerns on more discussion of implementation without prompting, the intermediate representation of NatSQL, and the latency of the proposed method? Please reply to the authors by adding your comments below this author rebuttal. As the author-reviewer discussion closes soon, we would appreciate it if you could submit your reply to the authors by Aug 21st 1pm EDT. Thanks! Best, AC --- Rebuttal Comment 1.2: Comment: Thank you for your reply and the encouraging results on the BIRD dataset. I have read the other comments and will maintain my rating.
Riemannian stochastic optimization methods avoid strict saddle points
Accept (poster)
Summary: The authors study saddle point avoidance of stochastic gradient algorithms formulated on Riemannian manifolds. They prove a number of results on saddle avoidance for which the Euclidean analogs, possibly under slightly different technical assumptions, are known. These results show that stochastic gradient descent algorithms also avoid strict saddle points even when working in a geometric setting. This is a purely theoretical paper and does not include any experiments or empirical examples. EDIT: I've read the rebuttal, and am happy to keep my score where it was. Strengths: This paper addresses an important topic and is very well written, particularly its main text. It is very readable to those with theoretical training, but not necessarily experts in optimization - a common pitfall among works written at a similar technical level. Due to the field's historical emphasis on convex optimization at the expense of non-convex optimization, theory on saddle point avoidance is only beginning to become available. At the same time, Riemannian optimization is a rapidly growing area, and has the potential to help bring improved understanding to optimization at a time when doing so is increasingly important due to the rise of deep learning. Therefore, the contribution is timely and valuable. The technical appendix reads as if it was written carefully. I spotted no hints of sloppiness that would call the results into question, but at the same time this work is heavy enough that typos - including potentially in references the work relies on - are a concern. I did not do a full check, since my primary work is in a different (albeit related) area, and this paper is technical enough that it needs an expert familiar with other saddle-avoidance papers for the verification to be trustworthy. I encourage the authors to do a careful read-over after not thinking about this paper for at least a month, so as to go over the logic and spot typos. 
With this caveat stated, the paper clearly meets the NeurIPS bar from a technical viewpoint. The introduction is particularly well put together, makes clear what is already done, what the authors are adding, and what the key takeaways are. I would use this paper as an example to students for how to write a good introduction. Weaknesses: No experiments. This paper would have been made significantly stronger with even a minimal empirical study to illustrate how a few examples of RSGD behave on common manifolds. It would have been particularly interesting to see a case or two where RSGD does not get stuck, confirming the theory, and one which violates the theory's assumptions where it does get stuck, illustrating the limitations. Sec 2: consider a more informative title. Optimization-theoretic assumptions involving the setting should be stated clearly all in one location, so that optimization theory people can easily see how strong the obtained results are: * Lipschitz gradient * Avoidance defined in terms of strict saddle points * Geodesically complete manifold * Maybe consider adding a \paragraph called "Key Assumptions"? The paper takes a bit too long to get to the point; page 5 is still describing the setting. Even though the description is good, consider simplifying by having fewer examples: * Consider splitting the examples into one or two key examples, then have the results more in the middle of the paper, then a final section which describes additional examples. The paper talks way too many times about how intricate the analysis is. My experience is that most publishable forms of analysis are either intricate or they are obvious, so stating this just makes it feel like the authors want to make themselves look fancy and important to the readers. This is distracting and takes away from the paper's actual content. Rather than saying it's intricate, maybe instead say what specific elements of the analysis make it so. 
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Certain points that describe what is new could be given in more detail: * Line 336: why does this work's probabilistic analysis diverge significantly? Step 2: would it be fair to say this part of the proof is effectively via a reduction to the Euclidean case? General question: why is the defined "(strict) saddle manifold" actually a manifold? If the saddle points are mere points, and not ridges, this'll (trivially) be a zero-dimensional manifold, as it is a finite or countable point set, but what if there are ridges? Does some assumption here that I've missed prevent singularities from appearing? May help to include some intuition here. Am I right to think of the RRM scheme as traveling some distance in a tangent space, then mapping back onto the manifold? Thought in this way, it starts to feel a bit like mirror descent, which also projects back onto the space. Am I right to think about the retraction version as "viewing the difference between the retraction and exponential map as noise, which we then simply merge with the gradient noise"? Typos: * 48: "scopein" -> typo * Line 626: wrong \eqref, this should instead refer to (13), not to (12) * Appendix B: title in the PDF table of contents is "Proof of thm:repeller", use \texorpdfstring to fix this (you can look up what it does) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, this is a theory paper and the technical assumptions are clearly stated throughout Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are sincerely grateful for your encouraging remarks and thoughtful suggestions. We reply to your questions below, and we will revise our paper accordingly in the next revision opportunity. > This paper would have been made significantly stronger with even a minimal empirical study to illustrate how a few examples of RSGD behave on common manifolds. It would have been particularly interesting to see a case or two where RSGD does not get stuck, confirming the theory, and one which violates the theory's assumptions where it does get stuck, illustrating the limitations. We took your point to heart and we generated a series of plots to illustrate how RSGD avoids saddle points on the torus (everything is included in the one-page pdf that we were allowed to upload). Regarding your second point, the stochasticity involved makes it quite difficult to construct an example where RSGD does indeed get stuck with positive probability but, on the flip side, we performed a series of experiments with the extra-gradient method RSEG to illustrate that our avoidance results are not limited to RSGD. > Sec 2: consider a more informative title. Will do - we will highlight that it is intended to fix some basic definitions of Riemannian geometry and optimization. > Optimization-theoretic assumptions involving the setting should be stated more clearly all in one location, so that optimization theory people can easily see how strong the obtained results are. Will do - we will modify the beginning of Section 4 accordingly, along the lines you recommended. > Consider splitting the examples into one or two key examples, then have the results more in the middle of the paper, then having a final section which describes additional examples. Point taken. We did want to emphasize the flexibility afforded by the RRM template, but we also understand the benefit of getting to the point faster. We will restructure accordingly (sending some examples to the appendix if needed). 
> The paper talks way too many times about how intricate the analysis is. Apologies, we did not mean to make the exposition sound bloated or over-inflated. We will remove all such instances that we spot, and we will do a full editorial pass to remove any adverbs and/or adjectives that are subjective. > Line 336: why does this work's probabilistic analysis diverge significantly? This is due to the disparities in the step-size and noise assumptions: To provide a simplified overview, an essential step in the analysis is to control the fluctuation of cumulative noise $\sum_{n\geq m} \gamma_nU_n$, where $\gamma_n$ is the step-size and $U_n$ is the noise. In [31] (cf. numbering as in the full paper, including the appendix), because the step-size schedule exhibits a significantly faster decrease rate $\Omega(1/\sqrt{n})$, the Burkholder-Davis-Gundy inequality serves as the appropriate tool for that scenario. However, this analysis is inadequate when dealing with gradually decreasing step-sizes of the form $\mathcal{O}(1/(\log n)^{1+\epsilon})$ in our particular context. >Step 2: would it be fair to say this part of the proof is effectively via a reduction to the Euclidean case? Yes, this is the driving idea. >General question: why is the defined "(strict) saddle manifold" actually a manifold? If the saddle points are mere points, and not ridges, this'll (trivially) be a zero-dimensional manifold, as it is a finite or countable point set, but what if there are ridges? Does something assumption here that I've missed prevent singularities from appearing? May help to include some intuition here. Apologies for any confusion. When we wrote "smooth compact component" in the definition of a strict saddle manifold, the "smooth" was meant in the manifold sense. As you recommended above, we will bring this definition in Section 4 to collect all relevant information in a single place and ensure there is no ambiguity or confusion. 
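The gap between the two step-size regimes contrasted in this rebuttal can be illustrated numerically. This is a sketch of ours, not from the paper; the function names and the choice eps = 0.1 are illustrative assumptions.

```python
import math

# Numerical illustration (ours, not from the paper) of the two step-size
# regimes discussed above: the fast schedule Omega(1/sqrt(n)) handled in [31]
# versus a slowly decreasing schedule O(1/(log n)^(1+eps)).

def fast_schedule(n: int) -> float:
    return 1.0 / math.sqrt(n)

def slow_schedule(n: int, eps: float = 0.1) -> float:
    return 1.0 / math.log(n) ** (1.0 + eps)

# After a million iterations the slow schedule is still ~50x larger, which is
# why noise-control arguments calibrated to the fast decay (e.g. via the
# Burkholder-Davis-Gundy inequality) no longer suffice in this regime.
ratio = slow_schedule(10**6) / fast_schedule(10**6)
```

The cumulative noise $\sum_{n \geq m} \gamma_n U_n$ is correspondingly much harder to control under the slow schedule, which is what drives the divergence in the probabilistic analysis.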
>Am I right to think of the RRM scheme as traveling some distance in a tangent space, then mapping back onto the manifold? Thought in this way, it starts to feel a bit like mirror descent, which also projects back onto the space. This is not our intuition. For us, RRM is traveling on the manifold directly, along a search direction defined by an "approximate" / "surrogate" gradient, including noise and/or a (possibly non-random) bias/offset. This bias/offset term is crucial for our purposes as it allows us to capture several different algorithms, possibly with a forward-backward-forward structure (like ROG and RSEG). However, the RRM iterates per se never leave the underlying manifold. When embedded in real space, the part $X_n + \gamma_n \hat V_n$ of (16) which represents the RRM update minus the geodesic offset does exhibit the behavior that you describe. If this is what you are referring to, then we agree with your intuition - with the "geodesic offset" essentially standing in for the projection mechanism. >Am I right to think about the retraction version as "viewing the difference between the retraction and exponential map as noise, which we then simply merge with the gradient noise"? Your intuition is accurate, with a minor caveat. The gradient "noise" is conventionally assumed to have zero mean, while the difference between the retraction and exponential map does not possess this property. To circumvent any potential confusion, we opt for the term "error" instead of "noise" when discussing the differences between random and systematic errors. > {Typos} Will fix, thanks for spotting them and bringing them to our attention! --- We hope that these points address your questions - please let us know if any of the above is not sufficiently clear. Thank you again for your input and positive evaluation, The authors --- Rebuttal Comment 1.1: Comment: Thanks for your response. I continue to like this paper and am happy for it to be accepted. 
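The retraction-versus-exponential-map point can be checked numerically on the unit sphere, where both maps have closed forms. The following is an illustrative sketch of ours; the projection retraction is one common choice, not necessarily the one used in the paper.

```python
import math

# Sketch (ours) on the unit sphere S^2: compare the exponential map exp_x(v)
# with the projection retraction R_x(v) = (x + v) / ||x + v||. The gap between
# them shrinks at least quadratically in the step length, which is what lets
# it be absorbed into the (non-zero-mean) error term of the RRM template.

def exp_sphere(t: float):
    # x = e1, v = t * e2 in the tangent space, so exp_x(v) = (cos t, sin t, 0).
    return (math.cos(t), math.sin(t), 0.0)

def retract_sphere(t: float):
    # Projection retraction: normalize x + v back onto the sphere.
    nrm = math.sqrt(1.0 + t * t)
    return (1.0 / nrm, t / nrm, 0.0)

def gap(t: float) -> float:
    return math.dist(exp_sphere(t), retract_sphere(t))
```

Halving the step shrinks the gap by at least a factor of four, consistent with treating the retraction error as a higher-order systematic offset rather than zero-mean noise.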
--- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you for the in-depth review and encouraging comments.
Summary: This paper proves that under a rather general RRM scheme (akin to vanilla GD in Euclidean space), strict saddles can be avoided when stochastic approaches are used. Strengths: 1. Proves that Riemannian optimization, much like its Euclidean counterpart, can have its strict saddles easily avoided by using a simple update rule (RRM in this case) coupled with perturbations. 2. This paper analyzed a number of different stochastic algorithms (algorithms 1-5) and shows that they all can escape saddle points. 3. The proof outline is nice. It provides readers a new proof technique that can deal with Riemannian optimization. Weaknesses: 1. For unfamiliar readers, it is hard to gauge how limiting the "our assumptions 1-3" are, and I would like to see the authors provide a bit more analysis on the limitations of these assumptions. 2. Maybe it's better to include the convergence analysis of the RRM update rule to let users understand better the limitations of this result (it doesn't need to be self-derived, it's perfectly fine to quote well-known results). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I want to know how important the noise perturbation is to this analysis. For the RRM, if no random error is observed, will the theorems still work? Also I want to know whether random perturbations is more important for saddle-escaping or the stochastic oracles are more important. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input, insightful questions, and positive evaluation. We address each of your questions in a point-by-point thread below, and we will revise our manuscript accordingly in the next revision opportunity. >For unfamiliar readers, it is hard to gauge how limiting the "our assumptions 1-3" are, and I would like to see the authors provide a bit more analysis on the limitations of these assumptions. We understand your concern. It is for this reason that we had included a specific paragraph for discussing each assumption in detail but, at the same time, we were constrained by the 9-page limit and could not include a more extensive positioning for unfamiliar readers. We will be happy to take advantage of the extra page afforded in the upcoming revision to expand on our discussion and provide more details for readers less familiar with the relevant literature. >Maybe it's better to include the convergence analysis of the RRM update rule to let users understand better the limitations of this result (it doesn't need to be self-derived, it's perfectly fine to quote well-known results). Point well-taken. Again, due to space limitations, we had to make some tough choices in terms of what to present and what to leave out, but we will be happy to take advantage of the extra page provided in the revision stage to include the basic steps of the convergence analysis for RRM. >I want to know how important the noise perturbation is to this analysis. For the RRM, if no random error is observed, will the theorems still work? We did not address the noiseless scenario because, to a certain extent, it had already been examined in various prior studies for RGD [34,35]. 
The high-level conclusion is that, depending on the initialization, Riemannian first-order methods may or may not evade saddle points; however, the set of initial conditions that result in convergence to saddle points has Lebesgue measure zero, aligning conceptually with our result that $\mathbb{P}(X_n \rightarrow \mathcal{S}) = 0$ for stochastic methods. It is worth noting that the analysis of the noiseless scenario follows a straightforward trajectory: it emerges as a direct outcome of linearization near saddle points and the implicit function theorem. However, this approach is not sufficient in the presence of noise, leading to a substantially more intricate proof. >Also I want to know whether random perturbations is more important for saddle-escaping or the stochastic oracles are more important. The direction the Reviewer is seeking to emphasize would determine the context: Our paper asserts that, in the presence of stochastic oracle feedback, *no random perturbations* are needed to avoid saddle points. Previous research indicates that if the primary objective is to elude saddle points *efficiently*, then random perturbations can be employed to accelerate the process of escape - but, otherwise, in this "efficient escape" literature, the optimizer is assumed to have full access to the function's gradients. 
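The contrast drawn here (measure-zero initializations in the noiseless case versus probability-zero convergence under oracle noise) can be seen in a Euclidean toy model. This sketch is ours; the function, step size, and noise level are illustrative choices, not the paper's.

```python
import random

# Euclidean toy model (illustrative only): at the strict saddle of
# f(x, y) = x**2 - y**2, plain GD started exactly on the stable manifold
# (y = 0) never leaves it, whereas ordinary oracle noise -- with no injected
# perturbation -- pushes the iterate out along the unstable y-direction.

def gd_step(x, y, gamma, sigma, rng):
    gx, gy = 2.0 * x, -2.0 * y                    # gradient of f
    nx = gamma * sigma * rng.gauss(0.0, 1.0) if sigma else 0.0
    ny = gamma * sigma * rng.gauss(0.0, 1.0) if sigma else 0.0
    return x - gamma * gx + nx, y - gamma * gy + ny

def run(sigma, steps=300, gamma=0.05, seed=0):
    rng = random.Random(seed)
    x, y = 0.5, 0.0                               # on the stable manifold
    for _ in range(steps):
        x, y = gd_step(x, y, gamma, sigma, rng)
    return x, y
```

The noiseless run converges to the saddle (an initialization of measure zero makes this possible), while the noisy run is expelled along the unstable direction without any deliberately injected perturbation.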
Summary: The paper presents a focused study on the avoidance of saddle points for stochastic Riemannian optimization algorithms. To tackle this issue, the authors introduce Riemannian Robbins-Monro (RRM) schemes in the context of Riemannian manifolds, which include fundamental Riemannian stochastic optimization methods as special cases, such as Riemannian stochastic Gradient Descent (GD), Riemannian proximal GD, and Riemannian optimistic GD. In their analysis, the authors model the RRM scheme within a Euclidean stochastic approximation framework via the famous Nash embedding theorem, introducing a geodesic offset term. The authors make the assumption that this offset is $\mathcal O(\gamma_n^2)$, a condition found to be met in the case of a bounded-variance stochastic gradient estimator as indicated in Proposition 1. Based on this assumption, they show that RRM schemes avoid saddle points with probability 1. Strengths: - Originality: The analysis presented in Theorem 1 and Theorem 2 is highly innovative. In the analysis, the authors apply the Nash embedding theorem to isometrically embed the Riemannian manifold into a Euclidean space. This approach allows them to approximate the intrinsic geometry of the manifold using its extrinsic (now Euclidean) geometry, an approach that seems highly original and may potentially shed light on further developments in Riemannian optimization methods. - Quality: The quality of the paper is commendable. The paper is well-written, and the results are convincing. - Clarity: The paper is well-structured. The assumptions made are either standard in the field of Euclidean optimization or are accompanied by discussions regarding their implications, making the paper easy to follow for readers. - Significance: The paper is the first to generalize saddle point avoidance for stochastic optimization methods on Riemannian manifolds. 
Additionally, the proposed Riemannian Robbins-Monro (RRM) schemes encompass numerous state-of-the-art Riemannian stochastic optimization algorithms, illustrating potential applications in tackling Riemannian machine learning problems. Weaknesses: - Possible typo: l52 shcemes -> schemes. - It would be better if there are numerical experiments to validate the findings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In the conclusion section of your paper, you mention 'zeroth-order optimization' as a potential avenue for future work. There have been several studies related to the zeroth-order (or so-called 'bandit') gradient descent method ([1,2]). I am interested to know whether such zeroth-order methods fit within your proposed Riemannian Robbins-Monro (RRM) scheme. If not, could you please clarify the primary gap or obstacle in incorporating these methods into the RRM framework? - A lot of work on Riemannian optimization tends to focus on complexity analysis in terms of iteration rounds, instead of asymptotic behavior. I was wondering if it would be possible to extend your proposed RRM framework to a non-asymptotic setting. In other words, can the avoidance probability be expressed in terms of the number of iteration rounds, like some approaches in Euclidean space? If this is not feasible, could you elaborate on the main hindrances or challenges in doing so? Ref: [1] J. Li et al., Stochastic Zeroth-order Riemannian Derivative Estimation and Optimization [2] X. Wang et al., Online Optimization over Riemannian Manifolds Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have outlined the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your input and remarks. We reply to your questions below, and we will revise our manuscript accordingly in the next revision opportunity. > It would be better if there are numerical experiments to validate the findings. Done - please see the "global rebuttal" where we included a series of plots on the torus to illustrate how RSGD and RSEG avoid saddle points, despite being initialized close to spurious saddle points. [The plots are all in the one-page pdf that we were allowed to upload as part of our rebuttal.] >In the conclusion section of your paper, you mention 'zeroth-order optimization' as a potential avenue for future work. There have been several studies related to the zeroth-order (or so-called 'bandit') gradient descent method ([1,2]). I am interested to know whether such zeroth-order methods fit within your proposed Riemannian Robbins-Monro (RRM) scheme. If not, could you please clarify the primary gap or obstacle in incorporating these methods into the RRM framework? This question is fairly intricate: 1. It is not hard to show that the methods in the provided references, as well as a number of other zeroth-order Riemannian methods, indeed fall under the RRM template with certain explicit expressions on the noise and bias/offset terms ($U_n$ and $b_n$ respectively). 2. However, a simple calculation reveals that $$ \|U_n\| = O(\mu_n^{-2}), \quad \|b_n\| = O(\mu_n). $$ where, in line with the notation in reference [1] that you provided, $\mu_n$ is the *sampling radius* of the zeroth-order scheme under scrutiny (also known as smoothing parameter, exploration parameter, etc.). In zeroth-order methods, this parameter typically goes to $0$ as $n\to\infty$ to enable convergence so, even though the bias/offset assumption in (12) is not violated, the noise bound becomes time-varying, and our results do not immediately apply in this context. 
Addressing these challenges, potentially through a careful balance between the step-sizes $\gamma_n$ and $\mu_n$ (plus a suitable extension of the probabilistic estimates of Lemma B.1), is an intriguing research question - but, at the same time, it would require orthogonal work to the current paper, so we left it as a direction for future research. >A lot of work on Riemannian optimization tends to focus on complexity analysis in terms of iteration rounds, instead of asymptotic behavior. I was wondering if it would be possible to extend your proposed RRM framework to a non-asymptotic setting. In other words, can the avoidance probability be expressed in terms of the number of iteration rounds, like some approaches in Euclidean space? If this is not feasible, could you elaborate on the main hindrances or challenges in doing so? It is certainly possible to enhance our analysis further through the imposition of more stringent conditions, such as the capability to intentionally introduce artificial noise, as discussed in references like [20,63], or additional structure on the function being optimized - e.g., like the "$(\alpha,\beta,\gamma,\delta)$-strict-saddle" property of Ge et al. [22] and Jin et al. [25]. In this more restrictive context, we believe it should be possible to obtain a result along the lines that you suggest - i.e., providing an upper bound on the number of iterations required to produce "an" iterate which is sufficiently far from a saddle point. However, this would not imply avoidance, but a "best iterate with high probability" guarantee - i.e., in principle, the sequence of iterates could still converge to a strict saddle, even if, with high probability, it has produced an iterate which is close to a local minimum. [We should perhaps also emphasize at this point that this phenomenon is not exclusive to the manifold context, as similar scenarios are encountered in Euclidean optimization problems.] 
--- We hope and trust that these points address your questions - but please let us know if any of the above is not sufficiently clear. Thank you again for your input and positive evaluation, The authors --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. In light of your response, I have increased my score from 6 to 7. --- Reply to Comment 1.1.1: Comment: Thank you for your input and upgraded re-assessment!
Summary: This paper studies the problem of when Riemannian first-order optimization algorithms evade strict saddle points. The proof leverages a connection to the Euclidean case and to the continuous-time Riemannian gradient flow, and shows that the deviation from them is bounded. Intuitions are given in the main paper to facilitate readership. Strengths: The paper has done a great job in writing, connecting with existing literature, and giving intuitions on how the proof is done. I agree with the claimed contribution in Lines 42-48 and 276-292. Weaknesses: I don't see major problems. I do have a few questions and minor comments below. ## Questions 1. For the definition of a stable manifold: why do we need positive eigenvalues lower bounded by c+>0? What goes bad if the positive eigenvalues could get arbitrarily close to 0? 1. Equation 6: Consider any update rule whose output is in the range of the exponential map. We can cast the update as (RRM), as is also done in (6). I understand that the gain is that we can then analyze just the exponential map such as the arguments around (15). However, the difficulty is then transferred to checking whether the logarithm of the update satisfies assumption 2, which is much more indirect than checking whether V satisfies assumption 2 as in Algorithm 1. Is this a problem? Could you comment more on this? 1. Related to the above, for algorithm 3, why do I view it as a special case of algorithm 2 and hence of RRM, rather than saying oh, algorithm 3 is a special case of RRM, because one can just take logarithm and exponential map of (SMD)? 1. Does (RGF) always stay within the manifold? If so, why? If not, doesn’t the ‘grad’ become not defined outside the manifold? ## Comments 1. Line 124-125: The definition looks recursive hence confusing: to understand vn_hat I need to understand Un, bn, both of which depend on vn_hat. 1. 
Line 18: “end state of a stochastic Riemannian algorithm can only be a local minimizer.” Do you mean something like the limiting state? Even for convex problems in Euclidean space, gradient descent is only guaranteed to approach a solution - the more iterations, the closer. So, in general, the end state is something close to a local minimizer. 1. Line 102: the “S” is not defined - I suppose you mean a stable manifold? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you again for your input and remarks. We reply to your questions below, and we will revise our manuscript accordingly in the upcoming revision. >For the definition of a stable manifold: why do we need positive eigenvalues lower bounded by $c_{+}>0$? What goes wrong if the positive eigenvalues could get arbitrarily close to 0? What is really needed is to be able to invoke the stable manifold theorem. In this regard, the $c_{+}$ lower bound is simply a convenient way to ensure that the positive, zero and negative eigenspaces of the Hessian of $f$ do not change dimension across the saddle manifold, and that they induce a well-defined separation of the tangent bundle over $S$ (as a sub-bundle of the tangent bundle of $M$). It would be possible to relax this condition with no impact on our results, but this is the most straightforward condition that we're aware of in the literature. >Equation 6: Consider any update rule whose output is in the range of the exponential map. We can cast the update as (RRM), as is also done in (6). I understand that the gain is that we can then analyze just the exponential map, as in the arguments around (15). However, the difficulty is then transferred to checking whether the logarithm of the update satisfies Assumption 2, which is much more indirect than checking whether $V$ satisfies Assumption 2 as in Algorithm 1. Is this a problem? Could you comment more on this? Indeed, if an algorithm has a "look-ahead" structure (potentially involving a parallel transport step, like RSEG or the optimistic version of RSGD), the verification of Assumption 2 might require some work, and it is not an immediate consequence of the oracle / stochastic gradient assumptions. So, yes, in theory, if one seeks to employ the RRM framework for a given algorithm, they would have to verify that Assumption 2 holds for said algorithm.
In practice, however, in all the examples that we tested from the literature (Algorithms 1-7, and several that we did not include to avoid bloating the presentation), this verification is a relatively straightforward affair that did not present any major difficulties (conceptual or technical). >Related to the above: for Algorithm 3, why should I view it as a special case of Algorithm 2, and hence of (RRM), rather than saying that Algorithm 3 is a special case of (RRM) because one can just take the logarithm and exponential map of (SMD)? This is indeed related to your previous point. In the case of Bregman-based methods, the main difficulty lies in the verification of **Assumption 2**, which explains our preference for viewing (SMD) as a special case of retraction-based methods, as opposed to a direct adoption of the logarithm map. >Does (RGF) always stay within the manifold? If so, why? If not, doesn't 'grad' become undefined outside the manifold? Yes, $\mathrm{grad}\, f$ is a section of the manifold's tangent bundle, so, by completeness, (RGF) gives rise to a global flow on the manifold, and hence always remains there, cf. [36, Chaps. 9 and 10]. >Lines 124-125: The definition looks recursive and hence confusing: to understand $\hat{v}_n$ I need to understand $U_n$ and $b_n$, both of which depend on $\hat{v}_n$. Whoops, yes, Line (125) was supposed to be a consequence of (124). We will fix this, apologies for any confusion. >Line 18: "end state of a stochastic Riemannian algorithm can only be a local minimizer." Do you mean something like the limiting state? Even for convex problems in Euclidean space, gradient descent is only guaranteed to approach a solution - the more iterations, the closer. So, in general, the end state is only something close to a local minimizer. Yes, by "end state" we meant the "limit state", not that a minimizer can be reached in a finite number of steps (in general, this is not possible). Consider this fixed in our revision.
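For concreteness, the casting into (RRM) discussed earlier in this thread can be sketched as follows. This is only a sketch in the paper's notation: $F$ denotes a hypothetical one-step update map whose output lies in the range of the exponential map, and $\gamma_n$ is the step-size.

```latex
x_{n+1}
  = F(x_n)
  = \exp_{x_n}\!\bigl(\gamma_n \hat{v}_n\bigr)
\qquad\text{with}\qquad
\hat{v}_n
  = \frac{1}{\gamma_n}\,\log_{x_n}\!\bigl(F(x_n)\bigr)
```

Verifying Assumption 2 for such an algorithm then amounts to verifying it for the induced $\hat{v}_n$, which is the indirect step the reviewer points out.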
--- We hope that the above addresses your questions - but please let us know if any of the above is not sufficiently clear. Thank you again for your input and positive evaluation, The authors
Rebuttal 1: Rebuttal: Dear AC, dear reviewers, We are sincerely grateful for your time, input and positive evaluation. To streamline our rebuttal, we reply to each reviewer's questions in a separate point-by-point thread below. We only include in this global rebuttal a PDF with two figures showing the avoidance of saddle points under the RSGD and RSEG methods (Riemannian stochastic gradient and extra-gradient, respectively). For illustration purposes, we used a $2$-dimensional torus and an objective function with three saddle points and one minimizer, depicted in black and red, respectively. Despite being initialized close to the spurious saddle points, both methods avoid them and ultimately converge to a (global) minimum, as suggested by the theory of the paper. We defer all other points to the reviewer-specific threads below, and we look forward to the discussion phase if any further questions remain. With our kindest regards, The author team of Paper 6491 Pdf: /pdf/1988ec530d3d95b028c6e5ecb179f335046575b2.pdf
NeurIPS_2023_submissions_huggingface
2023
How does GPT-2 compute greater-than?: Interpreting mathematical abilities in a pre-trained language model
Accept (poster)
Summary: The paper attempts to improve our understanding of the source of a certain ability (the use of "greater than" in mathematical tasks) in LMs. To this end, the authors outline a circuit in GPT-2 with interpretable structure and semantics, adding to the evidence that circuits are a useful way of understanding pre-trained LMs. Compared to existing work, the use of circuits is more fine-grained, and reveals special insights into this ability. The experiments are not only conducted on math tasks, but also extended to adjacent tasks that involve "greater than" relations. Although there might be some concerns about whether these findings generalize to ever-larger models, the reviewer still thinks it is a good paper, as it shows a unique way to understand deep neural networks. Strengths: 1. Novel approach to understanding the abilities of LMs. 2. Extensive experiments and clear presentation. Weaknesses: 1. The conclusions might be limited to certain model types (auto-regressive LMs such as the GPT series). 2. Relatively narrow choice of tasks (only studying the "greater than" relationship). Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Have you ever conducted experiments on "smaller than"? I'm curious about whether the findings are a mirrored version of "greater than", or totally different from each other. If we can find some "semantic relationship" among the circuits, that would make such a pipeline easier to generalize. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: As mentioned in the Weakness section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review! Here are our responses: > The conclusions might be limited to certain model types (auto-regressive LMs such as the GPT series). We agree that some of our conclusions, such as our specific circuit, are limited. Other findings, such as the way attention heads and MLPs work together to solve problems, or the form of partial generalization we observe, could surface in other models as well. For more discussion, see the **Scaling** section of the global response. More broadly, our methodology—from circuit finding to analysis—can easily be applied to other autoregressive or masked transformer LMs. > Relatively narrow choice of tasks (only studying the "greater than" relationship). This is true. We chose our task in part because GPT-2 small is not very mathematically competent: its performance on other math tasks is generally not high enough to be worth interpreting. However, we found that a small problem scope also allowed us to dig deeper and find a detailed circuit; see the **Problem Scope** section of the global response for more details. > Have you ever conducted experiments on "smaller than"? I'm curious about whether the findings are a mirrored version of "greater than", or totally different from each other. If we can find some "semantic relationship" among the circuits, that would make such a pipeline easier to generalize. Yes, we have conducted experiments on less-than! These experiments are in the generalization section (Sec. 5) of our paper. To summarize, given prompts like "17YY is less than 17", GPT-2 tended to predict YY. Given prompts like "The war ended in 17YY and started in 17", GPT-2 used the greater-than circuit to predict a year >YY! This is to say that GPT-2's less-than behavior is totally different from its greater-than behavior: there is no consistent behavior, and thus no real circuit underlying it.
This might seem disappointing—as you say, it would be nice to find a semantic relationship between two circuits that reflects some sort of deeper understanding of mathematics or the order of numbers. However, this is still interesting: it hints that models might perform something in between simple memorization and full generalization. GPT-2 uses our circuit in distinct greater-than scenarios, so there is some partial generalization; however, this does not stem from real mathematical abilities. We think that this partial, incomplete generalization reflects LMs' abilities: close to humans' on the surface, but far away in terms of deeper understanding. Let us know if you have any other questions or suggestions to improve our work! --- Rebuttal Comment 1.1: Title: Thank you for the response! Comment: Thank you for the response. I would like to keep my current rating.
Summary: This work investigates in depth the mechanism by which GPT-2 small computes the "greater than" function. Specifically, the work isolates a portion of the computational units that are causally related to making plausible predictions for inputs similar to: "The war lasted from 1731 to 17__" The isolation of computational units is done via a recently proposed method called "path patching". The resulting isolated subnetwork is shown to be necessary and approximately sufficient for making plausible predictions for this task. The authors also show that this subnetwork generalizes to other types of templates that require prediction of a greater number. Strengths: - In-depth mechanistic analysis of a language model that would be of interest to some of the NeurIPS audience - Strong scientific validity. I appreciate that the authors thought through possible confounders and did extra experiments to validate their findings Weaknesses: 1. In the current format, the contributions of this work are difficult to disentangle from previous work. The impact can be strengthened by clearly stating what the current work contributes beyond previous work, and by a detailed explanation of how a researcher can build on the analysis framework to investigate other emergent skills/properties of language models. 2. The work investigates only one model - GPT-2 small. Investigating additional models can help understand how these mechanisms emerge, and how they are influenced by scale and amount of training data. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Major: Q1. Related to weakness 2: Which parts of the interpretability techniques are borrowed from previous work, and what is newly proposed here? How would a researcher interested in investigating a different function or a different model do this using the proposed framework? Q2.
For the sufficiency and necessity experiments in Section 3.2: if the identified circuit is sufficient and necessary, then does it matter what input is provided to nodes outside of the circuit? How would providing random input to the nodes outside the circuit change the results, while providing the original dataset to the nodes inside the circuit? Q3. Which of the results do you expect to hold in larger GPT-2 variants, and how can the proposed analysis framework be used to investigate the emergence of properties at different scales of models / training data? Minor: - Fig 3 is difficult to understand on its own. The change in the direct connection between MLP 10 and the logits is difficult to spot between A and B. I recommend leaving the direct connection where it is in A but coloring it in a different color, and writing a more descriptive caption. Also the caption in C says that MLP11 receives input 01 but there are no direct or indirect arrows between the 01 input and MLP11. - Also it’s not quite clear what it means to “patch” a path. How does one ensure that the perturbed input that is provided to the patched MLP 10 does not influence all downstream components of MLP 10? - The use of footnotes is standard for NLP venues but not so much for ML venues. I personally find footnotes distracting. - L178: typo: “and and” Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 2 fair Contribution: 3 good Limitations: See Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review! We’ve answered your questions below, omitting the weaknesses, as they correspond roughly to Q1 and Q3. Regarding our contributions: please see the **Contributions** section of the general response, and Q1. We will revise our paper to clarify our contributions, and add details regarding how to apply our methods to other models. > Q1. Related to weakness 2: Which parts of the interpretability techniques are borrowed from previous work, and what is newly proposed here? How would a researcher interested in investigating a different function or a different model do this using the proposed framework? The **Methodological Contributions** section of the global response tackles the contributions aspect of this question. Our methods are quite general and can be applied to other tasks/models. Generally, one needs to define a task that the model can perform, with a corresponding dataset and metric. One also defines a corrupted (patch) dataset that induces different model behavior, measurable by the metric. Then, one can corrupt connections using the patch dataset, and determine which connections are important. We suggest patching specific connections, rather than patching the output from one node to all other nodes, in order to gain a finer-grained understanding of how tasks are performed. Our work also shows the potential of testing the found circuit on new datasets for similar tasks. Note that doing this research on new models can involve some model reimplementation, in order to enable manipulation of the computational graph or to allow intervention on its intermediate values. > Q2. For the sufficiency and necessity experiments in Section 3.2: if the identified circuit is sufficient and necessary, then does it matter what input is provided to nodes outside of the circuit?
How would providing random input to the nodes outside the circuit change the results, while providing the original dataset to the nodes inside the circuit? This is an interesting question! Our circuit contains all nodes that compute greater-than, given that the quantity being computed is greater-than. That is, if the context requires greater-than output, our circuit computes it. Thus, when we ablate our model, we give the ablated portion input tokens that still require greater-than output, while altering the quantity YY that the output should be greater than. But if the input to the ablated nodes suggests that some other sort of output is needed, behavior could differ. Our circuit might not function, or another circuit, in the ablated portion of the model, triggered by the different input, could dominate model behavior. And, if the other nodes’ values were set to something totally out of distribution, it’s tough to know precisely what the model’s output would look like. We think that your question is deeply tied to questions like “What contexts cause this circuit to activate, and how do LMs detect such contexts?” which should be studied in future work. > Q3. Which of the results do you expect to hold in larger GPT-2 variants, and how can the proposed analysis framework be used to investigate the emergence of properties at different scales of models / training data? The **Scaling** section of the global response answers this question. In essence, the mechanistic motifs that we find—the co-operation of attention heads and MLPs, as well as partial circuit generalization—seem likely to generalize to larger models. As for our methods, the circuits framework scales well on a technical level, especially when automated. It is quite general: if you can define a task with a normal and a patched dataset, as well as a metric, you can apply circuit-finding techniques. The more challenging question is how to assign semantics to these circuits at scale. 
Others have tried to automate this, e.g. by using other models to assign semantics to model neurons, but this sort of technique is still immature. However, we think that automatically assigning semantics to model internals is challenging for most interpretability frameworks, not just ours. > Fig 3 is difficult to understand on its own[...] Good suggestion; we’ll use a different color for the removed connection between MLP10 and the logits. And indeed, the caption of part C is wrong: MLP 11 does not receive 01-input. We’ll make both of these changes—thanks for your careful attention to our diagrams. > Also it’s not quite clear what it means to “patch” a path. How does one ensure that the perturbed input that is provided to the patched MLP 10 does not influence all downstream components of MLP 10? In the framework we use (rust-circuit), we directly manipulate the model’s computational graph. So, to patch the direct MLP10->logits path, we can copy MLP10 and its ancestor nodes, and corrupt the token inputs to that copy. We then replace the original MLP10->logits edge with a corrupted MLP10->logits edge; we don’t touch the edges between the original MLP10 and other nodes. Thus the corrupted MLP10 is an ancestor of only the logits; any other components with MLP10 as an ancestor see the original MLP10. Simple path-patching can be done using vanilla PyTorch models / hooks, or via a package like TransformerLens—let us know if you’d like details! For a more formal exploration of path patching, see [2] in the global response. Please let us know if you have any other questions or ideas regarding how to improve our work! --- Rebuttal Comment 1.1: Comment: Thanks for the response. The added clarifications re path patching and the contributions are helpful. To me, the major limitation to how impactful this work will be remains the ease with which other researchers will be able to build on the interpretability approach proposed here.
I urge the authors to include a more specific discussion of this in a future revision. Specifically statements like "one needs to define a task that the model can perform, with a corresponding dataset and metric." are not sufficient, and the characteristics of the desired datasets and metrics, with respect to the task, need to be discussed. That said, I do believe in the importance of the work, even if it ends up being a one-off scientific investigation. --- Reply to Comment 1.1.1: Comment: Thanks for your response! We’re glad to hear that the clarifications were helpful, and that you believe in the importance of our work. We also value enabling others to build on our work. As part of this, we will release our code, included in the supplementary material. As suggested, we will also add details about how to choose tasks, metrics, and datasets, in order to aid future researchers. While we can’t upload revisions to our paper, we’re happy to share a rough outline of this information here. Let us know if you’ve got any other questions, and if there’s anything we can do to help you raise your score! --- ### Task Our path-patching approach is compatible with a variety of tasks. The chosen task should: 1. Have a clearly delimited set of correct and incorrect answers for each example. 2. Require only one forward pass of the model (as opposed to e.g. generation tasks which require multiple passes). 3. Be solvable by your model: if your model cannot solve the task, there may be no circuit. At minimum, the model should exhibit consistent behavior (even if it’s not exactly correct) Keep in mind that the granularity of insights will depend on the granularity of the task chosen. Complex tasks like natural language inference could require different (sub-)circuits depending on the specific question; it might be hard to find one precise circuit responsible for the task. Smaller, simpler tasks will likely yield easier to interpret results. 
For the purpose of this example, we’ll consider the task of fact retrieval. Each input will have an (ideally single-token) correct answer, which can be predicted with one forward pass. Moreover, it seems possible that facts are mostly stored / retrieved using the same circuit. ### Dataset Path-patching requires two datasets: a normal and a corrupted dataset. The normal dataset is just a collection of examples/inputs for the task; its examples should: 1. Clearly indicate the task at hand. LMs perform language modeling, and do not natively perform other tasks; they may leak probability to answers that are not correct or incorrect, but simply task-irrelevant. Your inputs should push as much probability as possible onto the task's output space. 2. Allow evaluation based on only the distribution over possible next tokens (generated via one forward pass) 3. Be representative of your task. Your choice of datasets effectively defines the scope of the phenomenon you study—make sure the scopes of your datasets and your intended task match! Each example from the normal dataset should have a corresponding corrupted example / input. The corrupted input should: 1. form a minimal pair with the normal input: they should differ minimally from each other (being the same length, and differing by only one or two tokens) 2. elicit a different model response, with a distinct correct answer, compared to the normal input 3. belong to the same sort of task. Remember that we locate the circuit by activating the same circuit with two different inputs. For fact retrieval, a normal input could be "Paris is the capital of"; the corrupted counterpart could be "Rome is the capital of". Both of these examples are reasonable inputs for fact retrieval, but the two will elicit very different responses. Note that an input like "Paris is in" would be less appropriate, because it doesn't clearly indicate that the task is fact retrieval, or what fact should be retrieved.
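To illustrate the minimal-pair construction described above on the greater-than task itself, here is a small sketch. The template follows the "The war lasted from 1731 to 17" prompts discussed in this thread; the noun list, year range, and the fixed "01" corruption value are illustrative assumptions on our part, not the paper's exact data-generation code.

```python
import random

# Illustrative event nouns (hypothetical; the paper uses its own noun list).
NOUNS = ["war", "siege", "voyage", "dynasty"]

def make_pair(rng, century=17):
    """Return (normal, corrupted, yy): two prompts differing only in YY."""
    yy = rng.randint(2, 98)  # start year YY; avoid 00/01/99 edge cases
    noun = rng.choice(NOUNS)
    normal = f"The {noun} lasted from {century}{yy:02d} to {century}"
    # The corrupted twin swaps YY for a fixed value (01 here), so the two
    # inputs form a minimal pair: same length, same task, different answers.
    corrupted = f"The {noun} lasted from {century}01 to {century}"
    return normal, corrupted, yy

rng = random.Random(0)
normal, corrupted, yy = make_pair(rng)
```

Each pair would then be scored under the task metric; the exact nouns and year ranges are a design choice constrained by the tokenizer (years must be single tokens).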
### Metric The metric is a function that takes in model logits and labels. It should: 1. Output a real number measuring model behavior / performance on the task. 2. Detect small changes in whether the model is behaving according to the normal or corrupted input, or somewhere in between. A continuous loss is thus preferable to metrics like 0-1 loss/accuracy. One family of metrics used in previous work is the probability assigned to the correct answer(s), minus the probability of the incorrect answer(s) induced by the corrupted input. In the greater-than case, this is p(y>YY) - p(y<=YY). For fact retrieval, we would compute p(France) - p(Italy). This family of metrics is implicitly sensitive to the model generating off-task output, as this generally takes away from the probability of correct answers. It is explicitly sensitive to the probability of the corrupted input’s answer. Other metrics are possible. For example, if it is difficult to quantify task performance, but still possible to create minimal pairs, one could simply measure the KL-divergence between the original and (partially) patched / corrupted distributions. However, this is harder to interpret, and less targeted at your actual task of interest.
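As a concrete sketch of the probability-difference family described above (a hypothetical helper, not the authors' code): given the model's next-token probabilities restricted to the 100 two-digit year tokens 00–99, PD for a prompt with start year YY is the mass above YY minus the mass at or below it.

```python
import numpy as np

def probability_difference(year_probs, yy):
    """PD = p(y > YY) - p(y <= YY), summed over year tokens y in 00..99.

    `year_probs[y]` is the model's next-token probability of year token y,
    taken from the full softmax output. It need not sum to 1: probability
    on non-year tokens appears in neither term, pulling PD toward 0.
    PD = 1 iff all mass sits on years strictly greater than YY;
    PD = -1 iff all mass sits on years <= YY.
    """
    p = np.asarray(year_probs, dtype=float)
    return p[yy + 1:].sum() - p[:yy + 1].sum()
```

Per-example PD values would then be averaged over the dataset, as described in the thread.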
Summary: This paper explores how GPT-2 performs the ``greater-than'' operation by analyzing its circuit. The authors construct a template of the operation and define two scores to evaluate the performance of GPT-2. They first verify that the found circuit is indeed important for performing the greater-than operation, and then find that top MLP layers in GPT-2 compute the operation. --- Rebuttal response: Thanks for the detailed explanations. Some of my concerns about the details are addressed. Thus, I would like to raise the soundness score. Regarding the contribution, I agree that the method part under the mechanistic interpretability scenario is sufficient. However, the findings are not attractive to me. Thus, I will raise the score to 2. Regardless of whether the paper is accepted or not, I would like to suggest that the presentation could be improved. Strengths: This paper explores a very interesting question for LM understanding: how LMs perform tasks inside their architecture. The analysis method, i.e., looking at the circuit of the LM, is suitable for this kind of interpretation. Weaknesses: 1. The key contribution is not clear. This paper uses the existing analysis method for interpretation. But, to research the question (how LM performs math calculation), only a simple template and a toy task (greater-than operation) are discussed for GPT-2. There is a limited contribution to the analysis. Besides, for findings, such as GPT-2 performs the operation on top MLP layers, many existing works (e.g., probing works) have also found that, and existing findings are even more comprehensive. Thus, the contribution of this work is not clear. I would suggest that the authors should focus on one side (e.g., analysis method, interpretation task/data/template design) and dig deeper. 2. The presentation needs to be improved. The essential information should be clearly introduced and technical details could be moved to the appendix.
For example, in section 2, it says that only single-token numbers are considered in this work (lines 65-72). We know that this is because of GPT-2's BPE tokenizer. There is no need to give specific examples for that. But for the dataset introduction—what is its size, how did you construct it (and why in that way)—these parts are not well introduced. Besides, it would be better to use some tables in section 3 to make it more readable. 3. Generalization. As mentioned above, generalizing the analysis method to large LMs with more complex math problems is not intuitive. With different tokenizers (e.g., the LLaMA tokenizer), the analysis template in this paper would fail. Besides, recent instruction-tuned LLMs may yield very different conclusions compared to GPT-2. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: Why is the curve in the right figure of Figure 2 continuous? Some tokens (e.g., 00) are not considered, as described in the paper. (Line 81): Why mention ``GPT-2's training data''? Do you finetune GPT-2 or just use it in the zero-shot setting? (Line 87): This should be average instead of sum, right? Otherwise, it would not range from -1 to 1. (Line 305): Typo: there are double ``from''. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your critique! We hope to resolve your concerns by clarifying both the intent of our work and our contributions. > The key contribution is not clear. …There is a limited contribution to the analysis. We clarify our key contributions and analyses in the global response’s **Contributions** section. > This paper uses the existing analysis method for interpretation. This isn’t quite correct. We introduce new analyses within the circuits framework: we patch multi-node paths, and test our circuit in scenarios distinct from the one in which we found it. Even when using existing methods, we don’t rely on a set of established tools for circuits analysis: the circuits literature is young, and no such toolkit exists. One goal of this paper is to set a methodological standard for circuits research. > But, to research the question (how LM performs math calculation), only a simple template and a toy task (greater-than operation) are discussed for GPT-2. The greater-than task is not intended to explain how LMs perform all math, but rather to examine one small part of LMs’ mathematical abilities in detail. In fact, our results suggest that the greater-than circuit may not be part of a larger set of math capabilities; instead it is a mechanism between memorization and generalization. By studying a small task, we found an interesting mechanism with broader implications for math and generalization in LMs. > Besides, for findings, such as GPT-2 performs the operation on top MLP layers, many existing works (e.g., probing works) have also found that. We respectfully disagree; our findings are not a repetition of existing work. Our core finding is a precise and novel circuit for a math operation in a pre-trained LM—a contribution beyond superficial, layer-wise analyses of top MLP layers. Our circuit generalization results are also absent from prior work. See the **Contributions** section of the global response for more details. 
We stress that even simple conclusions like “top MLP layers do a lot of work” are not obvious or settled, but rather up for debate. While existing work has posited a role of certain (not top) MLPs in e.g. factual associations [1], other work disagrees, localizing this elsewhere [2]. Our methodology is meaningfully distinct from non-causal methods like probing. Because it is not causal, probing often finds information in model representations that is not actually used by models [3]; our methods avoid this pitfall. Thus, even if probing work has found similar results, reproducing these with more trustworthy causal methods is still valuable. That said, we are happy to discuss more related work in the paper; feel free to highlight such work, especially if you feel it overlaps with ours. > The presentation needs to be improved. The essential information should be clearly introduced and technical details could be moved to the appendix…the dataset…[is] not well introduced. Besides, it would be better to use some tables in section 3... Some of the dataset information unintentionally bleeds over from the Task and Dataset section into the following Qualitative Evaluation section. We will fix this, prioritize essential dataset information, and organize some section 3 results in tables. >Generalizing the analysis method to large LMs with more complex math problems is not intuitive. With different tokenizers…the analysis template in this paper would fail. Recent instruction-tuned LLMs may have very different conclusions. We discuss this in the **Scaling** section of the global response. Circuits scale and have been used at larger scales in the time since submission. While our template may not work for all math problems, the challenge of crafting templates and data is part of all interpretability work, and is not unique to our methods. 
Regarding findings, we can’t know for sure if large or instruction-tuned LMs process math as GPT-2 does; however, work like ours is a necessary first step towards understanding math in those large models.

> Why the curve in the right figure of Figure 2 are continuous? Some tokens (e.g., 00) are not considered as described in the paper.

We could have used a scatterplot, as we only have probabilities for discrete years, but the curve had more visual appeal. We do not include 00 as a potential start year; however, the figure’s x-axis is the predicted year y, and the y-axis is p(y|prompt). We do measure p(00|prompt); it stays low, as it is never a correct answer, and is an unnatural tokenization of centuries.

> (Line 81): Why mention ``GPT-2's training data''? Do you finetune GPT-2 or just use it in the zero-shot setting?

No, we don’t fine-tune GPT-2—we do zero-shot evaluation. We meant to suggest that the durations that GPT-2 predicts for each event are likely related to patterns in its (pre-)training data. However, we can remove this speculation.

> (Line 87): This should be average instead of sum, right? Otherwise, it would not range from -1 to 1.

No, the sum is correct. The probability difference (PD) for an example with starting year YY is the sum of probability assigned to years > YY minus the sum of the probability assigned to years <= YY. PD is maximized (PD=1) when all probability is assigned to years > YY, and none to any other tokens, including non-year tokens. It's minimized (PD=-1) when all probability is assigned to years <= YY. We aggregate PD by averaging over examples in our dataset.

We hope this answers your questions. Other reviewers found our analyses satisfying, and our findings interesting; let us know how we can improve the paper, and make you feel the same way too!

### References

[1]: Meng et al. 2022. Locating and Editing Factual Associations in GPT. NeurIPS

[2]: Hase et al. 2023. Does Localization Inform Editing?
Surprising Differences in Causality-Based Localization vs. Knowledge Editing in Language Models. ArXiv

[3]: Belinkov. 2022. Probing Classifiers: Promises, Shortcomings, and Advances. Computational Linguistics

---

Rebuttal 2:

Title: Thank you for your update!

Comment: Thanks for your response—we’re glad to hear that you’re more convinced about the contributions and soundness of our work. We will revise our paper to improve its presentation and clarity as you suggest.
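The probability-difference (PD) metric discussed in the rebuttal thread above is simple enough to sketch in code. This is a hedged illustration only: the two-digit year-token naming (`"00"`–`"99"`) is an assumption for the sketch, and the paper's actual tokenization may differ.

```python
def prob_diff(probs, start_yy):
    """Probability difference (PD) for one example with start year `start_yy`.

    PD = (sum of probability on year tokens > start_yy)
       - (sum of probability on year tokens <= start_yy).
    Mass on non-year tokens counts toward neither sum, so PD = 1 only
    when ALL probability sits on correct (> start_yy) year tokens.
    """
    year_value = {f"{y:02d}": y for y in range(100)}  # "00".."99" (illustrative)
    p_greater = sum(p for tok, p in probs.items()
                    if tok in year_value and year_value[tok] > start_yy)
    p_leq = sum(p for tok, p in probs.items()
                if tok in year_value and year_value[tok] <= start_yy)
    return p_greater - p_leq

# Example: 0.6 mass on a correct year, 0.2 on an incorrect one, 0.2 elsewhere.
pd = prob_diff({"50": 0.6, "30": 0.2, "the": 0.2}, start_yy=40)  # 0.6 - 0.2 = 0.4
```

Dataset-level PD is then just the mean of `prob_diff` over all examples, as the rebuttal notes.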
Summary: This paper presents an analysis on how the greater-than operator is implemented in the weights of GPT-2 small. They do so by tasking the model with completing sentences of the form "[something, e.g., a war or time period] lasted from $y_1$ to $y_2$" where $y_1, y_2$ are years. The idea is that GPT often assigns values $>y_1$ much higher probability than values $\leq y_1$, when trying to predict $y_2$. This is likely learned due to statistical co-occurrence, but the authors ask _what mechanism_ encodes this behavior in the weights.

Strengths:
* The paper is well-written, easy to follow, and well-motivated. The explanations are clear, with limitations stated clearly.
* I enjoyed the exposition of how the circuit was discovered and the interpretation of its semantics.
* It's generally hard to find simple, easy-to-evaluate tasks that GPT-2 small is proficient at, so kudos here.

Weaknesses: Most of my "concerns" are really more aptly formulated as questions. Interpretability is still very nascent, and to me this kind of contribution is meaningful, in spite of (or actually because of) the many unanswered questions it raises.
* I will say that I'm not 100% convinced by the argument at the end of Section 5, that this circuit is not using memorization and actually generalizes. It's hard to know what that means and to show it, especially on a small toy dataset with only four-digit numbers (really, two-digit completions). I think you'll need to demonstrate the same phenomenon on much larger datasets, as it's not super unlikely that the model could memorize the relationship between numbers from 0-99. You even show via ablation that the representational structure discovered by PCA is not a complete explanation. Maybe reframe these claims, or perhaps provide more convincing evidence?
* I think you can _really_ nail the "so what" of the paper by showing how knowledge of this circuit can be used to improve the reliability of the greater-than operation.
Do you think you could design a circuit by hand that outperforms the existing one on this task? Or modify the existing one? What would happen if you integrated it into the model? Can you move the circuit to another location? Why is the circuit the way it is?

Technical Quality: 3 good
Clarity: 4 excellent

Questions for Authors:
* In Section 3.2: what does the PD look like when you give most of the network the 01-dataset but patch the regular dataset into a circuit _other than_ the one you discovered? Based on Figure 4 I'm guessing it'd be lower, but I don't have a great intuition for how much lower.
* In Section 3.3:
  * Figures 6, 7 restrict the plotted tokens to the numbers. In absolute terms, were those numerical tokens promoted the most by the logit lens? Did you find any surprises, i.e., other tokens being ranked higher than numbers in inner product?
  * MLP 8 is interesting! What's going on with it? What does it mean for an MLP to contribute indirectly? What could it be doing? Relatedly, you might have answered this already, but do you have a better intuition for how MLPs 8-11 interact? Whether and why you can't just remove one of them?
* What kinds of analyses did you have to come up with for this paper? I'm not very familiar with circuits; are most of the techniques here standard or did you have to come up with new tricks while, e.g., explaining the semantics of the circuit? How do these tricks generalize to other problems in interpretability?

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good
Presentation: 4 excellent
Contribution: 2 fair
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thanks for your attentive review! We answer your questions below; let us know how else we can improve the paper!

> I will say that I'm not 100% convinced by the argument at the end of Section 5, that this circuit is not using memorization and actually generalizes. [...]

Thanks for this comment—we’ve been thinking about this post-submission as well. We agree that our original framing suggested generalization too strongly. There is a real possibility that our circuit has memorized greater-than; its limited ability to activate in new contexts could come from the model’s ability to recognize relevant contexts / patterns, as NNs are suggested to do. We will revise the paper to reframe our claims, emphasizing this as an interpretation. We do want to add, though, that we find this possibility very interesting as well, and believe that mechanistic evidence of this sort of memorization is also valuable.

> Using our circuit to improve reliability of greater-than/outperform the existing circuit/etc.

We agree that this sort of contribution would be valuable, and regret that it falls out of the scope of our work. This study mostly answers “When GPT-2 computes greater-than, how does it do so?”. However, the best way to improve GPT-2’s greater-than reliability would be to cause the circuit to activate in situations where it should, but doesn’t currently. To achieve this, it would’ve been better to ask “How does GPT-2 decide if it should compute greater-than?”. If we knew precisely when/why our circuit activated, we could cause it to activate in new scenarios. This would be exciting follow-up work.

There is other work that uses circuits for more practical purposes; for example, other studies have cut relevant edges from model graphs to curtail bad behavior [1]. We hope to investigate such techniques in the future.

[1]: Li et al. 2023. Circuit Breaking: Removing Model Behaviors with Targeted Ablation.
ICML Deployable Generative AI Workshop

> In Section 3.2: what does the PD look like when you give most of the network the 01-dataset but patch the regular dataset into a circuit other than the one you discovered?

Interesting question! Choosing the “other” circuit is tricky, but we tested this, selecting circuits around the same size/location as the original. We found that performance is related to how well input->attention->MLP (especially 9/10)->logits paths are preserved. If the path is interrupted (i.e. no attention head or no MLP overlap with the original circuit), PD is very low (-37%). If at least one path is preserved, PD improves with each additional component in common with the original circuit; MLPs have the biggest impact.

> Figures 6, 7 restrict the plotted tokens to the numbers. In absolute terms, were those numerical tokens promoted the most by the logit lens? Did you find any surprises, i.e., other tokens being ranked higher than numbers in inner product?

Yes, numerical tokens were the top-ranked tokens for both MLPs and attention heads. For the attention heads, only the top ~3 tokens were numbers, while for the MLPs, the top-k tokens were numbers at higher k. We attribute this to the fact that the MLPs upweight a set of numbers, while the attention heads generally upweight only the one (their top-1 token, generally). That the numerical tokens are the top tokens is a little surprising—in theory, other tokens could be boosted, and later pushed down by other modules; modules might also be making high-magnitude changes to other words’ logits at the same time as they upweight the right answer. However, other work has also observed that LMs create predictions by promoting the right answer (not downweighting others). This seems like a useful observation to add to the paper.

> MLP 8 is interesting! What's going on with it? What does it mean for an MLP to contribute indirectly? What could it be doing?
> Relatedly, you might have answered this already, but do you have a better intuition for how MLPs 8-11 interact? Whether and why you can't just remove one of them?

We also find MLP 8 interesting! Its output, like that of the attention heads, is important to making MLPs 9-11 upweight the correct values; unlike them, though, it doesn’t clearly upweight YY! Its contributions may relate to the logits of non-years—or perhaps logit space isn’t the correct one to view its contributions in. That is, we know that MLP 8 helps the other MLPs, but we’re still unsure of how.

MLPs 8-11 are interconnected in our circuit. Removing one is possible, but removing e.g. an MLP with strong direct effects like MLP 10 or 11 would drastically reduce the portion of the circuit upweighting the right tokens. Not all connections are equally important though: we consider all MLPs interconnected for simplicity, but for some MLP pairs, the edges between them can be cut without a performance drop.

> What kinds of analyses did you have to come up with for this paper? [...A]re most of the techniques here standard or did you have to come up with new tricks while, e.g., explaining the semantics of the circuit? How do these tricks generalize to other problems in interpretability?

We address this in the **Methodological Contributions** section of the global response. Many of our semantics techniques are from other work. However, the circuits literature is rather young, with little in the way of standard techniques; with this paper, we aimed to develop a toolkit for studying circuit semantics. Here are some techniques we used, and how we feel they generalize to new categories of problems:

- The logit lens characterizes individual nodes; combining it with complex path-patching can tell you what nodes compute, and what nodes use the information computed. These techniques are generally applicable.
- Neuron-level interventions seem useful for MLP semantics.
- PCA on the output of individual nodes was visually appealing and interesting, but difficult to transform into concrete, causal insights.

---

Rebuttal Comment 1.1:

Comment: Thanks for all the care you put into this response, and the others as well! I'll adjust my score accordingly.
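For readers unfamiliar with the logit lens mentioned in this thread: it reads off a component's output by projecting it through the model's unembedding matrix and inspecting which tokens are boosted. A toy numpy sketch, with illustrative shapes and names; real implementations usually apply the model's final layer norm first, which is omitted here.

```python
import numpy as np

def logit_lens(component_output, W_U, vocab, top_k=3):
    """Project a residual-stream vector (d_model,) through the unembedding
    matrix W_U (d_model x vocab_size) and return the top-k promoted tokens.
    (Simplification: no final layer norm is applied before unembedding.)"""
    logits = component_output @ W_U
    top_idx = np.argsort(logits)[::-1][:top_k]
    return [vocab[i] for i in top_idx]

# Toy example: a 4-dim "residual stream" and a 3-token vocabulary.
W_U = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])
tokens = logit_lens(np.array([0.1, 0.2, 0.9, 0.0]), W_U, ["a", "b", "05"], top_k=1)
```

With this toy setup, the third vocabulary entry gets the largest logit, so it is the top promoted token.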
Rebuttal 1:

Rebuttal: We thank the reviewers for their insightful reviews. We're glad you found our problem and approach interesting (Nf2K, GMC8, BwzU, WeSY), our experiments extensive and scientifically sound (BwzU, WeSY), and our paper clear and well-written (eiM2, WeSY). Still, we want to address some key concerns: our choice of task, our contributions, and how our methods and results scale.

## Problem Scope (Nf2K, GMC8, WeSY)

Many reviews noted that we studied a small task. We agree that greater-than is simple; however, it lies in the middle of a major gap in the interpretability literature: how do LMs implement math abilities? Our simple task allowed us to dig deep into the model, developing a detailed explanation. While our case study cannot explain math in all LMs, it provides intuition and hypotheses useful across tasks and models. For example, we find that GPT-2 performs greater-than using a mechanism that lies between memorization and generalization; this could explain LMs’ inconsistent math performance more broadly. As circuits techniques mature, these insights could help us tackle larger problems in interpretability.

## Contributions (eiM2, GMC8, BwzU)

Many reviewers were unsure of how our contributions differed from those of other work—we will revise our paper to clarify this. Here, we outline our scientific findings and the methodological contributions that enabled them.

### Scientific Findings

- We find a circuit for greater-than in GPT-2. At submission time, no other causal interpretability work had been done on math in pre-trained LMs. Moreover, there is only one prior work published on circuits in pre-trained LMs [1]; it focuses on a circuit that copies one token from the input. In contrast, our task has a wider output space and richer structure: our circuit must interpret tokens as numerical quantities, and upweight a specific set of tokens not present in the input.
Our circuit is a highly detailed account of an LM’s algorithm; prior work does not make such fine-grained, causally motivated claims. This is exemplified by our finding that attention heads pass YY information to MLPs, which then compute greater-than. Although these concrete findings are specific to our circuit, the broad motifs—attention heads moving information into MLPs, which act both directly and indirectly—may generalize. By understanding the mechanisms via which LMs implement specific capabilities, we hope to better understand the overarching mechanisms by which LMs work.

- We show that GPT-2’s implementation of greater-than lies between memorization and generalization. We claim this because in our generalization experiments, the circuit does activate in new greater-than scenarios; however, it cannot support related computations like less-than or equal-to, and doesn’t clearly involve general math representations. It thus reflects neither full math competence nor simple memorization. Through this work, we hope to add nuance to the memorization-generalization dichotomy, and take the first step towards a rich characterization of the states in between them.

We thank reviewers Nf2K and eiM2 for their questions about our circuit’s generalization. We agree that our original framing suggested generalization too strongly, and that this new framing better represents both our evidence and how LMs work. We will update the paper.

### Methodological Contributions

Our methods contributions over previous circuits work [1] are as follows:

- Instead of patching individual edges in the model, we patch full subgraphs of our model, allowing us to find a complex circuit with many interconnecting components. This is enabled by our use of rust-circuit [2], a framework that allows for direct manipulation of models’ computational graphs. We did not create rust-circuit, but are the first to use it in this fashion.
- We used separate datasets to conduct our circuit-finding study and to assess the generalization of the hypothesis designed on the first dataset, similar to the train/test split used in machine learning. In contrast, preceding LM-circuits work found their circuit and tested it on the same set of prompts.

Finally, some reviewers asked: aren’t many of our methods standard, established methods? Not so: as circuits research is very young, there is not yet a standardized methodology. In writing this paper, we hoped to establish such a toolkit for future circuits researchers by bringing together diverse techniques like the logit lens, PCA on representations, and neuron-level interventions.

## Scaling (GMC8, BwzU, WeSY)

Some reviewers were concerned that our methods or findings would not scale. Regarding circuits methods, automatic circuit-finding [3] is already in development (contemporaneously with our work); moreover, in the time since submission, circuits methods have already been applied to 70B-parameter LMs [4]. Our complex subgraph ablations scale less well; we foresee them being used to zoom in on particular components, rather than to study entire large models.

Our findings, too, have potential to generalize at scale. While our circuit is specific to GPT-2 small, other findings need not be so restricted. Larger models too may coordinate their attention heads/MLPs as we observed, or rely on circuits that only partially generalize. These are only hypotheses; however, via these hypotheses, we lay a foundation for future work that explores these phenomena at scale.

We hope that this rebuttal has addressed your concerns! Please reach out with any other questions—we’d be happy to chat more.

### References

1: Wang et al. 2023. Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 Small. ICLR

2: Goldowsky-Dill et al. 2023. Localizing Model Behavior with Path Patching. ArXiv

3: Conmy et al. 2023.
Towards Automated Circuit Discovery for Mechanistic Interpretability. ArXiv

4: Lieberum et al. 2023. Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla. ArXiv
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies and tries to explain how GPT-2 (small) could be computing the mathematical operation of "greater than". A "circuit" (a subgraph of the GPT-2 model's computation graph) is identified by iteratively "patching" individual components to find which components are most responsible for making the correct prediction for this task. The main claim of the paper is that they identify this circuit and show that this circuit is used by GPT-2 small in other contexts that need the greater-than operation.

Strengths:
- The identification of circuits looks sound and an interesting approach. I enjoyed reading that part of the paper -- thank you.
- The problem being tackled is very interesting and well motivated, and some of the experimental results are really insightful.

Weaknesses: Main Concerns:
- The task formulation seems severely constrained. For instance, GPT-2 is prompted with the year prefix "XX" already given. Effectively, the approach is evaluating whether GPT-2 can perform the 2-digit greater-than operation.
- Even for the restricted case of the year-span prediction task, I think it is important to compute correctness of the entire year string, without providing the XX prefix. Would the results hold in that case? I wonder how much probability mass GPT-2 will assign to the correct XX year token. This might significantly alter the "Prob Difference" metric and the results and conclusions.
- The Prob Difference metric might also be hiding inaccuracy of GPT-2 in generating the correct year. How often is it the case that GPT-2 assigns high prob to one of the correct answers in its top K tokens?
- I also think it is important to establish a "prior". What is the default behavior of the model when it is shown even simpler prompts that do not require the generation of a number greater than the given number? i.e., is GPT-2 predisposed to generating monotonically increasing numbers, even when the context doesn't need it to?
- In a similar vein, a control task that prompts the model to generate a smaller number is needed to claim that GPT-2 can indeed perform the greater-than operation.
- The other main concern is the fact that each prompt template yields a somewhat different circuit. The tasks posed in the experiments in Section 5 are the same as the initial setup: predict a 4-digit year given the 2-digit prefix of the correct year. Given this, I do not think the results support the conclusion that GPT-2 has a generalized mechanism to perform greater-than across tasks and contexts.

Minor:
- While "emergent capabilities" have been used as motivation for this work, I am not sure if GPT-2 small's abilities can be referred to as "emergent". From what I understand, they are capabilities of much larger models. Moreover, next-token prediction (how the main task in this paper is formulated) is well within the kinds of capabilities GPT-2 small was trained on.

Overall, IMO, this is a very interesting line of work which can be made much stronger and more conclusive with some additional experiments and analysis.

Technical Quality: 2 fair
Clarity: 4 excellent

Questions for Authors: Minor:
- In Fig 5, did you intend to flip the instance coming from the actual vs 01-dataset?
- Fig 6 is not legible. Please increase the font size.
- I don't think I understood what "input residual stream" meant. Please clarify.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair
Presentation: 4 excellent
Contribution: 3 good
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your thorough review and thoughtful questions!

> The task formulation is constrained.

True—for why, see Problem Scope in the general response.

> Compute the correctness of the entire year string, w/o the XX prefix: how much probability mass does GPT-2 assign to the correct XX year token?

When predicting XX, GPT-2 almost always predicts a valid continuation (numeric token >= XX). On average, considering the top-100 tokens, which comprise 0.95 of GPT-2’s output probability mass, valid continuations receive 0.89 (94%) of the probability mass. The year XX is almost always the top token (mean rank 1.125, mean probability: 0.41); most top tokens are other centuries > XX. So GPT-2 generally wants to generate XX or some other valid continuation.

> Prob diff might hide inaccuracy of GPT-2 in generating the correct year. How often does GPT-2 assign high prob to a correct answer in its top-k tokens?

We find this unlikely, given our high baseline probability difference of 0.81. This indicates that at least 0.81 of all of GPT-2’s probability mass—not just 81% of all probability assigned to numeric tokens—is assigned to a correct token. However, we also tested this and found that 100% of top-1 tokens, and 98.6% of top-5 tokens, are valid (>YY). We thus feel confident that GPT-2 predicts a correct answer with high probability.

> Is GPT-2 predisposed to generating monotonically increasing numbers, even when it doesn't need to?

Yes. To test this, we gave GPT-2 sequences of 4-digit numbers like “XXY1, XXY2, ..., XXYN, XX”. We chose XX, Y1, …, YN randomly for each sequence, respecting tokenization. Sequences were generally not monotone, so any 2-digit continuation YY could have been valid. We found that GPT-2’s behavior depends on the sequence length N. At low N, GPT-2 produces mostly numbers > YN; the proportion of continuations < YN increases smoothly until ~50% by N=20.
So even in the context of random sequences, GPT-2 often generates increasing numbers. We found that our greater-than circuit also underlies this behavior.

What does this mean for our study? Greater-than behavior emerges even when no greater-than is required; however, we already knew this, having observed it in the less-than case. In our study, we acknowledge that GPT-2 performs greater-than in incorrect scenarios, and do not claim to find a mechanism unique to greater-than scenarios. Rather, we want to connect GPT-2’s greater-than behavior to its underlying mechanisms, be they unique or not to the correct contexts. Our claim is that we have found a circuit that computes greater-than regardless of whether GPT-2 ought to perform the greater-than operation. This claim holds up in the face of this new evidence.

> A control task that prompts GPT-2 to generate a smaller number

We test less-than in Sec. 5 of our paper. Given prompts like “17YY is less than 17”, GPT-2 predicts YY. Given “The war ended in 17YY and started in 17”, GPT-2 uses the greater-than circuit to predict a year >YY! So, GPT-2 can perform greater-than but not less-than. This suggests that GPT-2’s greater-than ability is not supported by general mathematical understanding; it can’t e.g. use the greater-than circuit to produce less-than by upweighting the opposite tokens. GPT-2’s greater-than circuit is thus in an interesting position: it’s not fully general, but can still be applied in some new contexts.

> Given that each prompt-template yields a different circuit, and the generalization tasks are the same as the initial setup—predict a 4 digit year given the 2-digit prefix of the correct year—the results don’t support the conclusion that GPT-2 has a generalized mechanism to perform greater-than across tasks and contexts.

We disagree with some aspects of this critique. Not all tasks are about years: we shift the context from years to prices, and monotonically increasing numbers.
The circuits are mostly identical except in the last case, where additional components seem to be relevant. However, the number of nodes/edges that differ between the two is small; even our original circuit isn’t a bad fit. We could have quantified this circuit overlap better, and will do so in our revision.

However, we agree that we suggested generalization a bit too strongly. The evidence points more towards a mechanism that can perform greater-than (narrowly defined) in different contexts, but is not necessarily integrated into a broader suite of math capabilities. GPT-2’s greater-than mechanism thus lies between memorization and generalization.

> I am not sure if GPT-2 small's abilities can be referred to as "emergent". From what I understand, they are capabilities of much larger models.

We share this hesitation. Though Wei et al. list math as an emergent ability in LLMs, GPT-2 is smaller than most LMs claimed to have emergent abilities. We’re happy to change that phrasing—we believe that math abilities in LMs are an interesting phenomenon, emergent or not.

> In Fig 5, did you intend to flip the instance coming from the actual vs 01-dataset?

Yes! To find the circuit, we corrupt parts of the computational graph using the 01-dataset; those edges that hurt performance when corrupted are part of our circuit. However, during evaluation (Fig. 5), we instead give normal input to the circuit nodes, and 01-input to the other nodes. The idea is that if the circuit (Fig. 5 center, blue / purple) gets normal input, and the rest of the model (Fig. 5 center, red / purple) gets 01-input, the model should perform the task correctly, as if it were receiving only normal input. This is because the circuit controls model behavior on the task. When we give the circuit 01-input, and the rest of the model normal input, model performance is correspondingly very poor.

> Meaning of "input residual stream"

The input residual stream to e.g.
MLP 10 is the 768-dim input vector that serves as input to MLP 10; this is opposed to the token-level input.

Feel free to send follow-up questions / suggestions!

---

Rebuttal Comment 1.1:

Title: Thank you for the response

Comment: Thanks for providing more details and clarifications. I appreciate this work and studying mathematical abilities of LMs is interesting. My comment re. the scope is not to say that "only greater-than is studied", but that "greater-than has been studied using a handful of prompts". This is why I think the results about generalization of the "greater-than" circuit are not completely supported. That said, I found the rest of the response helpful. I am happy to bump up my score a bit.
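The patching logic described throughout this thread (run the model on one input while swapping a chosen component's activation for one cached from a different input) can be illustrated with a toy model. This is only a structural sketch under assumed toy components; the functions here are made up for illustration, not GPT-2's actual computations.

```python
import math

def toy_model(x, patch=None):
    """A toy two-component 'model': A = tanh(2x), then B = A + 1.
    `patch` maps a component name to a cached activation that overwrites
    that component's output -- the patching step."""
    patch = patch or {}
    cache = {}
    cache["A"] = patch.get("A", math.tanh(2.0 * x))
    cache["B"] = patch.get("B", cache["A"] + 1.0)  # B reads A's output
    return cache["B"], cache

def patch_component(clean_x, corrupt_x, name):
    """Run on clean_x, but give component `name` the activation it
    produced on corrupt_x (analogous to patching in 01-dataset inputs)."""
    _, corrupt_cache = toy_model(corrupt_x)           # cache the corrupt run
    patched_out, _ = toy_model(clean_x, patch={name: corrupt_cache[name]})
    return patched_out
```

In this toy, patching component A with a corrupt-input activation makes the clean-run output match the corrupt run, since B depends only on A; in the paper's setting, the analogous comparison is between PD with and without the circuit's activations swapped.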
Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision
Accept (spotlight)
Summary: The paper proposes a new method that generates diversified, principle-guided synthetic data from the LLM itself, easing the requirement for large amounts of annotated instruction-following data in supervised fine-tuning of LLMs. The generation process follows four steps. The first step is to generate adversarial, topic-guided instructions; the second step is to generate responses using in-context learning with pre-defined principles; the third step is supervised fine-tuning on the generated instruction-following data. The fourth step is to leverage context distillation for verbose cloning. The `LLaMA-65b` model is fine-tuned with the data generated from the aforementioned approach. Evaluations are conducted on TruthfulQA and the BigBench HHH Eval dataset. The fine-tuned model shows strong performance.

Strengths: The paper combines the approaches of the Self-Instruct paper and the Constitutional AI paper, proposing a topic-guided, principle-following self-instruct way of generating data, which could ease the requirement for large amounts of annotated data and improve the diversity of instruction-following data. Although nothing is new, the paper follows a clear and logical path to generate synthetic data in the era of LLMs. Given a strong pretrained model, we have reason to believe this will work pretty well, which is to some extent verified in the evaluation results.

Weaknesses:
- The pretrained LLMs are not instruction fine-tuned. It could be challenging to generate clean topic-guided instructions and principle-following responses. The paper didn't discuss whether there is any filtering step following these generation steps.
- How much data is generated, and how much data is in the eval set? These numbers are not shown clearly in the paper.
- Evaluation is not enough. We might need more evaluations on instruction-following datasets.
- Verbose cloning is not working well, and no reason or discussion is given as to why this is the case.
Technical Quality: 3 good
Clarity: 3 good

Questions for Authors:
- The topics are generated in a red-teaming (adversarial) way, where the LLM is not supposed to perform well. Then how do we guarantee the quality of the generated responses?
- Can you give more intuition about the verbose tax in the paper instead of just giving the concept? The `alignment tax` is only seen on smaller-scale models.
- Can you tell how much data is generated in the process and used to fine-tune the model?
- At the starting point, when the model is not trained to understand instructions, can it generate topics and new instructions well?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good
Presentation: 3 good
Contribution: 3 good
Limitations: None is given
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.
Code Of Conduct: Yes
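The four-step pipeline summarized in the review above can be outlined in code. This is a purely structural sketch: `FakeLM` and every name in it are hypothetical stand-ins, not the paper's actual implementation, and the two training steps (3 and 4) are represented only by the data they would consume.

```python
class FakeLM:
    """Stand-in for a base (non-instruction-tuned) LM."""
    def generate(self, prompt):
        return f"[completion of: {prompt}]"

def self_align_data(lm, seed_topics, principles):
    # Step 1: topic-guided self-instruct -- have the base LM produce
    # diverse (including adversarial/red-teaming) instructions.
    instructions = [lm.generate(f"Write an instruction about {t}:")
                    for t in seed_topics]
    # Step 2: principle-driven responses -- answer each instruction
    # with the principles (and, in the paper, ICL exemplars) in context.
    pairs = [(ins, lm.generate(f"{principles}\nInstruction: {ins}\nAnswer:"))
             for ins in instructions]
    # Steps 3-4 (not shown): fine-tune on `pairs` WITHOUT the principles
    # in context, then use context distillation for verbose cloning.
    return pairs

pairs = self_align_data(FakeLM(), ["history", "science"], "Be helpful; be harmless.")
```

The point of the sketch is the data flow: principles appear only at generation time (step 2), and are dropped from the context before fine-tuning.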
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and positive feedback on our work. Your questions/comments/suggestions are invaluable for the improvement of our method and the revision of the manuscript. Let us address each individual comment/question below.

> Concerning the quality of response generation for red-teaming prompts:

We've designed principles including ethical, question assessment, candor, dated knowledge, static, etc., as guidelines for the model to follow, which are trained with red-teaming and other prompts. It's worth noting that if we only trained our model without red-teaming prompts, that would be an issue, because the anti-red-teaming principles would not be triggered. However, this is not the case.

> On providing a deeper understanding of the "Verbose Tax":

Certainly! The concept of "Verbose Tax", as mentioned on line 287, hints at an inherent balance that the model tries to maintain between offering extensive (i.e., verbose), helpful responses and ensuring the safety and coherence of those responses [1].

> Regarding the volume of data generated and utilized for fine-tuning:

Detailed specifications of the generated data's volume and its usage in the fine-tuning process are elaborated in Appendix D.

> On the model's initial capability to generate topics and understand instructions:

In the literature, base (unaligned) large language models have demonstrated the ability to interpret instructions and generate relevant topics [2-3]. This proficiency can be attributed to the expansive data they've been trained on, coupled with their in-context learning capability when suitably prompted [4]. Our approach inherits such a capability.

---

[1] Llama 2: Open Foundation and Fine-Tuned Chat Models
[2] Language Models are Few-Shot Learners
[3] The Capacity for Moral Self-Correction in Large Language Models
[4] Self-Instruct: Aligning Language Models with Self-Generated Instructions
Summary: This paper proposes Self-Align, a method for aligning a language model from scratch (without previous RLHF training) with fewer annotations. Self-Align works by using the LM to generate a set of example instructions/tasks, generating from the LM conditioned on the instruction plus a human-written set of principles, and then distilling this back into the model by finetuning without the principles and demonstrations in context. The paper shows that this method improves performance/accuracy and (synthetic human) ratings on a variety of benchmarks.

Strengths:
- simple and clearly presented method
- strong performance / thorough comparison to both open- and closed-source baselines

Weaknesses:
- generally seems to underperform Vicuna, even though the Dromedary model is 5x larger. I still see that the Self-Align methodology could be a contribution (with a better understanding of how much the different parts of the methodology matter), but there may not be a reason to build on the Dromedary model when Vicuna is available and more accessible in size.
- If this is the case, then perhaps the paper could be stronger if it showed that the Self-Align methodology still provides improvements *on top of* Vicuna's alignment (i.e., applying Self-Align with Vicuna as the base model).
- lacking in ablations: I don't have a great sense of how much the different design choices contribute to the final performance of the model, e.g.: sensitivity to the self-instruct instructions, particularly the 20 "topic-specific prompts" (were these hand-crafted to match the downstream tasks?), and the importance of / sensitivity to ICL examples during self-alignment (both the specific 5 examples and the number of examples). It would be great to get a sense of, e.g., variance across prompts to understand how much of the method works because of the specific prompts the authors crafted, as opposed to the methodology in general.
Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
- l36: what are "topic-specific prompts"?
- sec 3.4 verbose cloning: are the model's brief responses primarily because the ICL examples are short?
- is the verbose cloning process simply repeating principle engraving with a longer demonstration in context, or is there something else going on here?
- any idea why verbose cloning often decreases performance? Can readers conclude that verbose cloning hurts accuracy but makes the model's answers more "stylistically" aligned (hence the improvement on the Vicuna evals)?
- sec 4.2.1 truthfulQA: does Dromedary outperform Vicuna on MC? Why weren't all the models tested on MC?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and positive feedback on our work. Your questions/comments/suggestions are invaluable for the improvement of our method and the revision of the manuscript. Let us address each individual comment/question below.

> Regarding the difference between Dromedary and Vicuna:

Firstly, Vicuna is designed for knowledge distillation from ChatGPT and is constrained to non-commercial usage; Dromedary, on the other hand, allows the flexibility of applying the Self-Align methodology to commercially usable models like LLaMA-2. Secondly, Vicuna, being dependent on the pre-aligned ChatGPT model, cannot surpass the performance of ChatGPT. In contrast, the Self-Align approach aims at, and successfully achieves, the goal of enhancing the base LM with minimal human supervision, as we have shown in the paper.

> On enhancing the alignment of models like Vicuna using Self-Align:

This suggestion is indeed valuable. We recognize that a model adept at instruction-following could boost the in-context learning and reasoning of the base LM, potentially rendering it a more fitting candidate for the Self-Align pipeline. However, the central goal of our approach is to minimize human supervision, and building upon an already aligned model (such as those demanding massive human annotations) would deviate from that goal.

> Concerning the dependency of our methodology on the specificity of prompts:

We understand and appreciate your concern. The intrinsic relationship between the Self-Align methodology and the specific prompts we utilize makes it a considerable challenge to conduct an ablation study with varied prompts. However, with the advent of LLaMA-2 and its extended 4k context length, we've experimented with responses modeled in the general-specific-general style in ICL examples.
These have shown that Dromedary-2 achieves superior outcomes, even without the need for verbose cloning and inference-time few-shot examples. The results on Vicuna benchmark questions are shown below:

| v.s. Vicuna-13b | | W | T | L |
|---|---|---|---|---|
| Dromedary-1-65b (final) | 0-shot | 16 | 1 | 63 |
| Dromedary-1-65b (final) | 2-shot | 19 | 8 | 53 |
| Dromedary-2-70b (self-align-only) | 0-shot | 28 | 32 | 20 |
| Dromedary-2-70b (self-align-only) | 2-shot | 52 | 17 | 11 |

We will incorporate these findings in our revised paper.

### Clarifications

1. "Topic-specific prompts" refer to the 20 Seed Prompts designed for Topic-Guided Red-Teaming Self-Instruct, as detailed in Appendix M.
2. The succinctness of the model's responses is indeed influenced by the brevity of the ICL examples.
3. For the verbose cloning process, we leverage a distinct prompt, detailed in Appendix K, guiding the initially aligned model to produce more extensive responses.
4. The observed "Verbose Tax" (mentioned on line 287) could be attributed to an inherent balance between the model's helpfulness and safety, as delineated in [1].
5. Our decision not to test all models on MC was driven by our focus on presenting results for the base and aligned language models via RLHF and Self-Align. To maintain clarity and succinctness, we've chosen not to include results for models like Alpaca or Vicuna, which are derivatives of Text-Davinci-003 and ChatGPT.

---

[1] Llama 2: Open Foundation and Fine-Tuned Chat Models
Summary: The authors use the self-instruct approach combined with principle-driven prompting to self-instruct a pre-trained LLM. The instruction/response generation generally follows what Self-Instruct and Alpaca did. The principle-driven prompting can be treated as an SFT version of Constitutional AI. Self-instructed LLaMA-65B achieves slightly worse performance than ChatGPT, while outperforming the base LLaMA and Alpaca.

Strengths:
1. The principle-driven prompting is a very insightful idea to generate a large amount of fine-tuning data with minimal human annotation. The cost of labeling data for LLM alignment is a pain point in today's LLM development. The proposed method can be a worth-attempting approach for researchers with a limited labeling budget.
2. The verbose-version training and the discussion of the verbose tax are very insightful. How to balance a model's performance on specific tasks against its HHH is always an important question in the field. The authors show that simple SFT with context distillation has some limitations but can still reach better performance than non-principle-driven distilled models.
3. The prompt design is very detailed and careful. Although the idea is clearly inspired by Self-Instruct and Constitutional AI, the details of the prompt design should still be considered a novel contribution and helpful to the community.

Weaknesses:
1. The paper doesn't cover preference data generation and is thus not applicable to RL tuning. A lot of OpenAI's work and talks (InstructGPT etc.) have stated the importance and necessity of RLHF. The model may finally face a performance ceiling if only using the current SFT-style self-instruct, which might be where the performance gap comes from. This is not only a con of this paper but also of those LLaMA-family papers including Alpaca, Vicuna, etc.
2.
The self-instruct style of data generation may lead to a narrow data distribution; e.g., the base LLM is not likely to generate a complex math problem and its answer, and therefore the tuned model would fail to solve difficult math problems (e.g., possibly poor performance on the MATH benchmark). This problem may be solved by adjusting the seed prompts and principles. But in general, how this approach generalizes to different tasks is not well discussed.

Technical Quality: 4 excellent Clarity: 4 excellent

Questions for Authors: line 152, 153: "new instructions" / "novel instructions". Duplicated.

Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent

Limitations: In the paper, there is actually no fair comparison of fine-tuning on human-labeled data vs. fine-tuning on self-instruct-generated data. Dromedary vs. ChatGPT/InstructGPT have different base models; for Dromedary vs. Alpaca/Vicuna, the latter's data are not human-labeled. But this is a very important question for developers on LLM alignment teams, as they have to decide whether to spend money on human labeling and how much. A discussion of this topic is somewhat beyond this paper, but it is an important question that remains for future work.

Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and positive feedback on our work. Your questions/comments/suggestions are invaluable for the improvement of our method and the revision of the manuscript. Let us address each individual comment/question below.

> On the potential performance ceiling of SFT-style self-instruct:

Thanks for this insightful comment. We concur that relying solely on the current SFT-style self-instruct would not be good enough in the future, and we are actively exploring ways to push up the ceiling, such as better ICL exemplars and reinforcement learning training, as part of our upcoming work.

> Regarding the narrow-data-distribution concern with self-instruct-style data generation:

Great point! We envision that target-informed prompt generation, as outlined in [1], is promising for addressing this potential issue. We also recognize that tailoring the prompt generation process to distinct problem domains, such as mathematics or coding, is crucial. An alternative route could be leveraging openly collected user intentions [2] or gleaning instructions from comprehensive NLP datasets [3]. All of these directions are exciting topics for future research.

> Comparing fine-tuning on human-labeled data and self-instruct-generated data:

Excellent point! At the time we embarked on the Dromedary project, the availability of open-source human-labeled datasets was limited. In future work, we plan to evaluate the performance of Dromedary (or its successor, Dromedary-2) against LLaMA (or LLaMA-2) fine-tuned on human annotation datasets like Dolly [4], OpenAssistant [5], or LIMA [6] for comparison.
---

[1] Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models
[2] ShareGPT
[3] Orca: Progressive Learning from Complex Explanation Traces of GPT-4
[4] Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM
[5] OpenAssistant Conversations -- Democratizing Large Language Model Alignment
[6] LIMA: Less Is More for Alignment

---

Rebuttal Comment 1.1: Title: Acknowledging Having Read The Rebuttal Comment: Thanks for the rebuttal. I've read the content and have no further questions.
Summary: The authors study the problem of language model alignment and propose leveraging hand-crafted prompts, principles, and examples to provide guidance, instead of relying on manually annotated human preference data. The authors make comparisons with various AI systems, and the results demonstrate the effectiveness of the proposed approach.

Strengths: The studied problem is important, the proposed solution is novel, and the empirical performance is good. Also, systematic analyses are provided to better understand the effectiveness of the proposed method.

Weaknesses: The experiments mainly focus on demonstrating the effectiveness of the proposed algorithm. It would be better to have more analyses of the algorithm design, for example, whether the effectiveness of the algorithm depends heavily on the number/quality of prompts.

Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the computation requirements of the proposed algorithm? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As mentioned before, it would be better to have analyses of the performance with various numbers of prompts/examples. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and positive feedback on our work. Your questions/comments/suggestions are invaluable for the improvement of our method and the revision of the manuscript. Let us address each individual comment/question below.

> Regarding the dependence of our algorithm on the number/quality of prompts:

This question is indeed perceptive. After the recent release of LLaMA-2 and its extended 4k context length, we have experimented with responses that adhere more closely to the general-specific-general style within ICL examples. The results are highly promising: our Dromedary-2 model, trained from LLaMA-2 with improved ICL examples, has enhanced performance even without the verbose cloning phase or inference-time few-shot examples. The results on Vicuna benchmark questions are shown below:

| v.s. Vicuna-13b | | W | T | L |
|---|---|---|---|---|
| Dromedary-1-65b (final) | 0-shot | 16 | 1 | 63 |
| Dromedary-1-65b (final) | 2-shot | 19 | 8 | 53 |
| Dromedary-2-70b (self-align-only) | 0-shot | 28 | 32 | 20 |
| Dromedary-2-70b (self-align-only) | 2-shot | 52 | 17 | 11 |

We will include these new findings in the revised version of our manuscript.

> In relation to the computational demands of our algorithm:

As outlined in our released code, a general setup entails a minimum of 2 x 6 V100-40G GPUs for generating synthetic responses, with a more extensive requirement of 8 x 6 V100-40G GPUs for the training (LoRA fine-tuning) process.
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents an SFT approach for instruction fine-tuning with minimal supervision. (1) It first uses the self-instruct approach to augment instructions; (2) using human-written rules and in-context demonstrations of the thought process behind a response, final responses are generated by foundation models and then distilled into the model; (3) it further conducts context distillation to make the output verbose. Results show pretty powerful performance compared to other open-source and API models.

Strengths: The proposed approach is easily comprehensible and showcases its effectiveness in producing instruction question-answer pairs to train instruction-following models with limited human supervision. This approach holds significant importance for the open-source community, as it demonstrates a strong commitment towards democratizing large language models.

Weaknesses: The paper shows a cost-efficient approach to creating powerful instruction-tuned models, but I have the following minor concerns.

1. According to the evaluation results depicted in Figure 5, it becomes evident that incorporating few-shot examples is essential for achieving high-quality answers. This leads to the suspicion that the model needs in-context examples to generate high-quality answers, since the distilled outputs come from in-context learning (Appendix Figure 5 also shows that zero-shot performance is worse compared to other "instruction-tuned multi-turn" models). Does this observation suggest that there is still a requirement for answers written by humans (e.g., LIMA and LLaMA-2) or responses derived from instruction-tuned models (e.g., Alpaca and Vicuna), rather than depending on outputs generated by in-context learning, despite the associated costs?

2. The in-context learning pipeline described in this paper incorporates a combination of instructional guidance (rules) and few-shot examples (the thought process of a response).
I believe instruction-tuned models may have better capability for this kind of in-context learning than foundation models (Wei et al., 2023).

[1] LIMA: Less Is More for Alignment, Zhou et al., 2023
[2] Larger Language Models Do In-Context Learning Differently, Wei et al., 2023

Technical Quality: 3 good Clarity: 3 good Questions for Authors: (Please refer to Weaknesses) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for the thorough review and positive feedback on our work. Your questions/comments/suggestions are invaluable for the improvement of our method and the revision of the manuscript. Let us address each individual comment/question below.

> On the need for in-context examples to produce high-quality responses:

We do observe that Dromedary's performance is better with in-context examples, but this does not imply that Dromedary essentially needs them.

1. We recognize that GPT-4 and similar models tend to favor lengthier responses and their own inherent response patterns [1,2,3]. For instance, a typical GPT-4 response might initiate with an overview, delve into specifics, and wrap up with a summary (known as the general-specific-general style). Given that the ICL examples we used aren't particularly lengthy or aligned with this style, we leverage the two-shot examples primarily to enhance the response's length and style, as a post-training fix.

2. After the recent release of LLaMA-2 with its extended 4k context length, we have experimented with responses that adhere more closely to the general-specific-general style within ICL Self-Align examples. The results are highly promising: the Dromedary-2 model, trained from LLaMA-2 with improved ICL exemplars, has enhanced performance even without the verbose cloning phase or inference-time few-shot examples. The results on Vicuna benchmark questions are shown below:

| v.s. Vicuna-13b | | W | T | L |
|---|---|---|---|---|
| Dromedary-1-65b (final) | 0-shot | 16 | 1 | 63 |
| Dromedary-1-65b (final) | 2-shot | 19 | 8 | 53 |
| Dromedary-2-70b (self-align-only) | 0-shot | 28 | 32 | 20 |
| Dromedary-2-70b (self-align-only) | 2-shot | 52 | 17 | 11 |

We will include these new findings in the revised version of our manuscript.
> On the potential of instruction-tuned models for in-context learning:

It's conceivable that a FLAN-style multi-task training process would boost the in-context learning and reasoning prowess of the base LM, rendering it an enhanced component in the Self-Align pipeline. However, the benefits of such instruction-tuned models rely on the availability of human-annotated data, while our focus in this paper is to minimize the dependency on large volumes of human annotations.

---

[1] How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources
[2] The False Promise of Imitating Proprietary LLMs
[3] Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena

Once again, thank you for taking the time to review our manuscript. We will polish it further and include the discussions above. We're eager to discuss and address any concerns you may have.
FlowPG: Action-constrained Policy Gradient with Normalizing Flows
Accept (poster)
Summary: Handling constraints in reinforcement learning is a fundamental problem with applications in areas such as robotics and resource allocation. A common solution is to incorporate a projection step to compute feasible actions, which involves a computationally expensive optimization solver in the loop. This can be prohibitively slow, especially when constraints are non-convex. When used as part of the policy with a differentiable solver, it can lead to the zero-gradient problem if the policy output is far outside the feasible space. To circumvent the need for an optimization solver, the authors propose to learn an action mapping which respects the constraints with high probability by design. Specifically, they train a normalizing flow offline on samples from the distribution of feasible actions. These samples are generated via Hamiltonian Monte Carlo for continuous action spaces and probabilistic sentential decision diagrams for discrete action spaces. During RL, the policy then outputs a latent action, which is mapped to the space of feasible actions via the normalizing flow. The weights of the flow are frozen, but the gradients are still propagated through the network to train the policy. They evaluate their approach, FlowPG, on two continuous control robotics tasks and a discrete resource allocation problem. FlowPG achieves the highest average return across ten random seeds for all problems and has the lowest number of constraint violations (prior to a projection step). When it does violate a constraint, the magnitude of the violation is lower, meaning it is closer to the feasible action space. And it achieves these benefits with reduced wall-clock time compared to the best baseline.

Strengths:
- Addressing action constraints in reinforcement learning is an important problem and critical for many real-world robotics and decision-making tasks.
- FlowPG is a novel solution to this problem and effective in improving performance while reducing constraint violations and wall-clock time, all while being fairly straightforward to implement.
- The finding that a uniform prior reduces constraint violations when used in conjunction with RL is a useful insight. The bounded support of the uniform distribution also works nicely with the fact that policy gradient algorithms often work well when the output is passed through a squashing function to respect box constraints on action limits.
- Constraint violations which do occur, however infrequently, can still be remedied by solving an optimization problem. Importantly, this does not need to be done when performing the policy updates, just during rollouts, which makes training more efficient.
- The paper is well organized and clearly written. It does a good job explaining the novelty and results and provides enough information to support its claims. In particular, I liked the visualizations in Figure 3 of the learned action space for the Reacher task.

Weaknesses:
- The tasks considered in the paper are fairly standard benchmarks and showcase the effectiveness of FlowPG. However, the constraints in each task are relatively simple, with only Half Cheetah having constraints which depend on a portion of the state. This makes it difficult to gauge how this approach will scale to more challenging, state-dependent constraints. Tasks with more complex constraints would significantly strengthen the paper.
- It seems difficult to scale the sample generation procedure to higher-dimensional action spaces. And for state-dependent constraints, it seems challenging to span the relevant portions of the state space which will be visited by the intermediate and final policies used during learning. Again, more challenging tasks would help prove the concept. It may be that an iterative approach which refines the flow on relevant portions of the state space will be necessary.
- There does not appear to be any discussion of how the discrete actions in the BSS environment are handled. My guess is that the flow is trained on the integer values, but the output will still be continuous. We can then just round the output, but this could result in constraint violations. A better discussion of how this is handled in the main paper would help.
- There is a lengthy discussion of gradient computation for the normalizing flow, but this is usually just handled by the deep learning framework. It seems unnecessary to get into these details unless the form of these gradients is leveraged to improve the speed of training. If auto-differentiation is still used as normal, then this feels a bit like filler and could be replaced with more relevant details about training, the tasks, or results.
- The motivation for using a normalizing flow, rather than other generative models, seems missing from the paper. The authors do mention something, but it felt a bit hand-wavy. If we were optimizing a stochastic policy with an on-policy algorithm, such as PPO, having a tractable likelihood would be really important; this would be a great motivation for using a normalizing flow. But since we are using a deterministic policy and training with DDPG, it seems more arbitrary.

Technical Quality: 3 good Clarity: 3 good

Questions for Authors:
- How do you think this approach would scale to more complex, state-dependent constraints? Would it be too difficult to generate samples which cover the state space sufficiently? And if so, would an iterative approach which refines the flow on states encountered under the current policy be feasible?
- How are the discrete actions in the BSS environment handled? If the output of the flow is rounded, does this lead to more constraint violations?
- Is the form of the normalizing flow leveraged to more efficiently compute gradients while training? Or are we using standard auto-differentiation?
- What is the rationale for using a normalizing flow over other generative models in the deterministic policy case?
- How do you think this approach would work for an on-policy algorithm, such as PPO, which requires the log-likelihood?
- How was the "hard wall" for handling constraints implemented with HMC, which needs a differentiable log-likelihood? Were they just barrier functions?

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discuss how their approach can still lead to constraint violations, requiring the use of an optimization solver in this case. However, they show that the probability of constraint violation is significantly reduced. A possible limitation not really discussed is scalability due to the need for sample generation to train the flow. It may be hard to scale this approach to higher-dimensional action spaces and constraints which heavily depend on the state. The Half Cheetah task definitely shows it can work, so at the very least, this is a good preliminary study to illustrate the potential of the approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
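To make the latent-to-feasible action mapping summarized in this review concrete, here is a toy numerical sketch (not the paper's implementation): a frozen element-wise affine map stands in for the trained normalizing flow, and a manual chain-rule step shows how the critic's gradient passes through the frozen flow to the latent policy output. All names and numbers below are illustrative assumptions.

```python
import numpy as np

# Frozen "flow": a toy affine map a = g(z) = W z + b, standing in for a
# trained RealNVP-style model (hypothetical stand-in, not the paper's model).
W = np.array([[0.5, 0.0],
              [0.0, 0.25]])
b = np.array([0.1, -0.1])

def flow(z):
    return W @ z + b

def flow_jacobian(z):
    return W  # constant Jacobian for an affine map

# The policy head outputs a latent action z in [-1, 1]^2, matching the
# bounded support of a uniform prior (e.g., via a tanh squashing function).
z = np.tanh(np.array([0.2, -1.5]))   # pretend policy output
a = flow(z)                           # feasible action executed in the env

# DDPG-style update: the flow's weights stay frozen, but the critic's
# gradient dQ/da still propagates through it via the chain rule.
dQ_da = np.array([1.0, -2.0])         # pretend critic gradient
dQ_dz = flow_jacobian(z).T @ dQ_da    # gradient w.r.t. the latent action
```

In practice an autodiff framework performs the Jacobian-vector product automatically; the sketch only makes explicit that the policy receives a nonzero gradient as long as the flow's Jacobian is nonsingular, which is the point of mapping through an invertible model.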
Rebuttal 1: Rebuttal:

1. How to generate samples to cover the state space sufficiently? Refine the flow during training?

Indeed, this is a good point. We note that our approach models the space of feasible actions for different states. Therefore, even if the feasible state space is large, as long as the relationship between different states and the feasible action space is roughly similar, our approach can work well. If the space of feasible actions varies significantly across states, then training the flow in a static manner would be challenging. Refining the flow during training so that it can focus on the reachable part of the state space, similar to training the Q function using a replay buffer, is indeed a good idea. Our overall approach can be easily modified to perform this online flow training, and we shall explore it actively for larger problems with more complex state-action constraints. In our current settings, we were able to get good accuracy and recall without this online flow training, as noted in Table 1 in the rebuttal PDF.

2. How to handle discrete actions?

The simulator for BSS is equipped with a rounding procedure that applies a heuristic method to extract an integer solution from a given action. In our experience, this method was successful in most cases. When the rounding method did not give an integer solution satisfying all the constraints, we solved an integer program to perform the projection. This approach is applied consistently to all the baselines.

3. Is the form of the normalizing flow leveraged to more efficiently compute gradients while training? Or are we using standard auto-differentiation?

We use auto-differentiation. However, we wanted to explicitly highlight how the policy update in Eq. 11 (main paper) utilizes the gradient from the flow model; in particular, Eqs. 13 and 14 highlight the policy-gradient component coming from the flow.

4. Rationale for using normalizing flows

Please see the common response (Sec. 1).

5.
PPO+Flow Assuming the trained flow model achieves a high accuracy rate and recall rate, the outputs of the flow model are likely to be valid actions. In this case, the PPO approach can be effectively employed. This is due to the fact that the log probabilities of actions can be computed using the flow and that the optimization of policy is conducted within the feasible region. However, in cases where projection is required and the projected action is mapped using the inverse transformation of the flow to an area beyond the domain of the latent variable (e.g., outside $[-1, 1]$ for uniform prior), then we need to compute the closest point in the latent space and use this as a proxy to compute the log-probability of the action. 6. How was the "hard wall" for handling constraints implemented with HMC, which needs a differentiable log-likelihood? Were they just barrier functions? We initially tried with differentiable barrier functions. However, we found that better or comparable results could be achieved through a non-differentiable function with sharp barriers. Specifically, we employed a piecewise function that takes on a value of $-\infty$ if there is constraint violation, and $0$ otherwise. We set the gradient to be zero everywhere. For further details and precise implementation, you may refer to the `sample_generation/hmc.py` file in the supplementary materials. --- Rebuttal Comment 1.1: Comment: > Rationale for using normalizing flows The empirical results provided in the common rebuttal response are convincing and should be included in the final paper. If placed in the appendix, there should be a reference to it in the main paper. > Other action-constrained RL domains (from common response) Thank you for these additional results, they are definitely promising! What are the state features used in Hopper and Half Cheetah? Are they also velocities? > PPO + Flow That makes sense! 
An additional question, though, is that when using DDPG, you do not project during the policy update, right? Could you not also do this in the case of PPO? The projection, if needed, could be considered part of the environment, and the action added to the replay buffer is the one output by the flow. However, I could see how this may reinforce bad actions and lead to more constraint violations. --- Reply to Comment 1.1.1: Comment: Thank you very much for the response and comments. We shall include these new experimental results in the next revised version. > State features used in Hopper and Half Cheetah Yes, the reviewer is correct. The state features involved in the constraints for Half-Cheetah, Hopper, and Walker2d are angular velocities. > When using DDPG, you do not project during the policy update, right? We do not project the action during the policy update in DDPG. > Could you not also do this in the case of PPO? We agree that storing the actions generated by the flow model (pre-projected actions) is possible for PPO. We note that when the trained flow model achieves high accuracy and recall rates, even if a pre-projected action violates the constraints, the distance between the pre-projected action and the projected action tends to be relatively small (as shown in Column 3, Figure 4 in the main paper). In such cases, the constraint violations might still remain within reasonable limits, and the zero gradient problem would not be a major issue. However, if the distance between projected and pre-projected actions is significant (e.g., when the flow is not trained well), then, as the reviewer also mentions, it may lead to higher constraint violations and the zero gradient problem.
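The hard-wall HMC target described in the rebuttal above (log-probability $-\infty$ on constraint violation, $0$ otherwise, with the gradient set to zero everywhere) can be sketched as follows. This is a minimal illustration, not the authors' `sample_generation/hmc.py`: with a zero gradient, the leapfrog integrator degenerates to straight-line motion along the resampled momentum, and the Metropolis test simply rejects any proposal that crosses the wall. The `ball` constraint below is the Reacher-style example $a_1^2 + a_2^2 \leq 0.05$ used elsewhere in the discussion.

```python
import numpy as np

def hard_wall_log_prob(a, feasible):
    # Piecewise target: 0 inside the feasible region, -inf outside.
    return 0.0 if feasible(a) else -np.inf

def hmc_step(a, feasible, step_size, n_leapfrog, rng):
    # With the gradient set to zero everywhere, the leapfrog update
    # reduces to straight-line motion along the resampled momentum.
    p = rng.standard_normal(a.shape)
    a_prop = a + step_size * n_leapfrog * p
    # Metropolis test: exp(-inf) = 0, so infeasible proposals are rejected.
    if hard_wall_log_prob(a_prop, feasible) == 0.0:
        return a_prop
    return a  # reject: stay at the current feasible point

ball = lambda a: float(a @ a) <= 0.05  # Reacher-style constraint
rng = np.random.default_rng(0)
a = np.zeros(2)  # start from a feasible point
samples = []
for _ in range(1000):
    a = hmc_step(a, ball, step_size=0.02, n_leapfrog=5, rng=rng)
    samples.append(a.copy())
# Every retained sample is feasible by construction.
```

Since the target is flat over the feasible set and the proposal is symmetric, the chain's stationary distribution is uniform over the feasible region, which is exactly what the flow training data requires.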
Summary: This paper addresses the problem of action-constrained reinforcement learning. The authors utilize a flow model to learn a projection from the action to the latent variable, and then integrate DDPG to construct the FlowPG framework. Empirically, FlowPG outperforms its competitors with both fewer constraint violations and faster elapsed time. Strengths: 1. The manuscript is clearly structured and well presented, and it is easy to read. 2. It is interesting to apply a generative model to a constrained optimization problem. Introducing the flow model is novel in action-constrained scenarios, and it effectively avoids solving a QP problem after the policy network. 3. The authors utilize HMC and PSDD to sample from the set of valid actions. Weaknesses: The action-constrained problem is somewhat similar to the constrained-RL problem, and there are some important works such as [1,2] and other related works. It is suggested that the authors provide a discussion about whether other constrained RL methods could be applied in action-constrained scenarios, and provide some experimental results if possible. [1] Constrained Policy Optimization https://arxiv.org/pdf/1705.10528.pdf [2] Safety-Constrained Reinforcement Learning for MDPs. https://arxiv.org/abs/1510.05880 Another issue is that the action-constrained RL problem could be seen as an offline RL problem (the invalid action space could be OOD actions). Various offline RL methods have appeared [3,4] in recent years ([5] also applies the flow model), and the authors are suggested to consider offline RL methods in this manuscript. [3] Off-Policy Deep Reinforcement Learning without Exploration. https://arxiv.org/abs/1812.02900 [4] Stabilizing off-policy q-learning via bootstrapping error reduction. https://arxiv.org/abs/1906.00949 [5] APAC: Authorized Probability-controlled Actor-Critic For Offline Reinforcement Learning. 
https://arxiv.org/abs/2301.12130 Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. There is some confusion about generating the valid action space: even with HMC and PSDD, can it be assured that $\tilde{a}$ is transformed to a valid action after the flow model projection in Figure 2(a)? In addition, since we want to maximize the cumulative reward in an RL problem, should the input of the flow model be ($\tilde{a}, s, s'$), where $s'$ is the next state? It is also noted that when $\tilde{a}$ becomes a valid action $a$, the next state $s'$ may also change. 2. The authors claim that the flow model is more efficient than VAE and GAN for data generation. Could the authors provide some ablation study on this issue, especially for generating valid action samples? 3. The experiments seem not convincing enough. The authors only consider Half-Cheetah and the Bike Sharing System. Half-Cheetah belongs to the D4RL tasks, and there are many other tasks inside (some could be constrained RL tasks); it is suggested to consider more datasets for comparison. 4. In addition, MCMC-style methods are often time-consuming; what is the entire training time, compared with other competitors? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: see weakness and questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Other constrained RL methods In our paper, we consider the scenario where constraints are imposed on actions at each RL step, and these constraints have closed forms. Unlike the standard constrained MDP, we do not define cost functions for individual state-action pairs. Therefore, the direct application of well-known techniques like CPO [1] and Lagrangian Relaxation [2] becomes challenging for solving action-constrained RL. Additionally, these approaches do not provide a guarantee of constraint satisfaction. We shall clarify this point in the revision. 2. Connection to Offline RL methods Thanks for pointing to this line of work; we shall certainly discuss it in the revision. We do highlight some key differences from offline RL methods: – Offline RL methods require a dataset for training, collected from expert or random policies. This is not required in our method, as the action constraints are known, and we use HMC/PSDD to sample feasible actions for different states. – Notice that collecting data from a random policy for offline RL is generally trivial. However, this is not trivial in our action-constrained setting. Even a random policy must select actions uniformly from *only* the feasible action space, which is itself a key component we are trying to address. 3. Input to flow? Is the output of the flow guaranteed to be valid? We shall clarify that the actions generated by the trained flow model are not guaranteed to be valid actions. However, a well-trained flow model exhibiting a high accuracy rate, such as 99.98% in the Reacher domain, is highly likely to generate valid actions. Even in instances where an invalid action is generated, our experiments show that these invalid actions are generally closer to the feasible region when compared to actions generated by the DDPG+P and NFWPO approaches. 
We also note that the projection of invalid actions into the feasible region is required in all approaches since the environment only accepts valid actions. The inputs to the normalizing flow model are $s$ and $\tilde{a}$, where $\tilde{a}$ belongs to the latent distribution of the flow model. This $\tilde{a}$ is transformed into the environment action $a$ by the flow model, which is then fed into the RL simulator (after projection, if required). The dynamics of the environment (i.e., the next state) only depend on $s$ and $a$. The input to the flow cannot include the next state $s'$, as the policy output $\tilde{a}$ cannot be executed by the environment directly before mapping by the flow model. Thus $s'$ is not available as an input for the flow model. Reward maximization is primarily handled by the policy gradient part connected with the flow model, as shown in Figure 2a in the paper. 4. Other generative models such as VAE and GAN Please see the common response. (Sec. 1) 5. More domains in D4RL tasks Please see the common response. (Sec. 3) 6. Entire training time when considering HMC data generation In our approach, generating valid actions and training a normalizing flow model are conducted prior to RL training. Specifically, the generation of data using HMC takes less than 5 minutes to produce 1 million samples for all the domains. This rapid data generation is attributed to the efficiency of HMC. An ablation study comparing the efficiency of HMC and traditional rejection sampling has been included in the general response. The training time for the flow model for different domains is shown in Table 2 of the rebuttal PDF. We do highlight that in our approach, once the flow is trained, retraining is not required if some problem parameters change, such as the initial state, reward function, etc. 
However, for approaches such as NFWPO, every time we run the approach, projection steps will be required, thus leading to higher runtime (as shown in Figure 4, last column in the main paper). --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Thank you for your detailed reply. Your response has addressed some of my concerns, and I do appreciate your work. **Other generative models such as VAE and GAN** Section 1 in the common response is not strong enough to convince me, and the comparisons between the flow model, WGAN, and VAE are somewhat weak. More empirical studies are suggested. **Entire training time when considering HMC data generation** Thank you for the supplemental studies. I am OK with the training time study. --- Reply to Comment 1.1.1: Comment: Thank you for your response. As noted in the rebuttal PDF, measuring accuracy alone for a model can be misleading due to a potentially low recall rate (i.e., limited coverage of the feasible action space). A low recall rate implies that the RL policy does not optimize over the entire feasible region. As we noted in our common rebuttal (under “Rationale for using normalizing flows”), it is not straightforward to compute the recall rate for WGAN and VAE, as these are not invertible models, unlike normalizing flows. Nonetheless, we can use the Wasserstein distance (W-Dist) as a proxy to approximate the recall rate for non-invertible models. We follow the steps below: 1. Generate data points using HMC (i.e., from the feasible action space) 2. Generate data points using generative models such as Flow/WGAN 3. Compute the Wasserstein distance between the two datasets generated in Steps 1 and 2 using the paper [5] (and its publicly available GitHub implementation “geomloss”) If the W-Dist is low (close to zero), then the difference between the two distributions is small. 
That is, in our case, the recall rate of the model used in Step 2 is high (which is desirable), as the data generated from the model is close to the feasible actions generated from the HMC sampling. If the W-Dist is high, it analogously represents a low recall rate. Our results for different domains are as follows (we generated 100K samples in Steps 1 and 2 each):
```
+--------------+----------+----------+
| Problem      | WGAN     | Flow     |
+==============+==========+==========+
| Reacher      | 0.000058 | 0.000002 |
+--------------+----------+----------+
| Half-Cheetah | 1.181345 | 0.096802 |
+--------------+----------+----------+
| Hopper       | 0.315453 | 0.009221 |
+--------------+----------+----------+
| Walker2D     | 0.302288 | 0.088734 |
+--------------+----------+----------+
```
The W-Dist of the Flow model is smaller than that of WGAN by more than an order of magnitude in most cases. This indicates a significantly higher recall rate for the Flow model. Despite tuning WGAN's hyperparameters over multiple days, we were not able to further improve WGAN’s results. We also point the reviewer to the paper "An Empirical Comparison of GANs and Normalizing Flows for Density Estimation" [3], which also shows that flow models outperform WGAN on several different types of distributions. The VAE model was much worse than WGAN (see Figure 3b in the rebuttal PDF). Therefore, given the limited time, we did not explore VAE further. [3] Liu, Tianci, and Jeffrey Regier. "An Empirical Comparison of GANs and Normalizing Flows for Density Estimation." arXiv preprint arXiv:2006.10175 (2020). [5] Feydy, Jean, et al. "Interpolating between optimal transport and MMD using sinkhorn divergences." The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.
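The three-step W-Dist procedure above can be sketched as follows. This is a hedged stand-in: the authors used the Sinkhorn divergence from `geomloss` on the joint samples, while this sketch averages per-coordinate 1-D Wasserstein distances via SciPy, which likewise shrinks toward zero as the generated distribution matches the feasible one. The two "generators" here are synthetic placeholders, not trained models.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def marginal_w_dist(x, y):
    # Average 1-D Wasserstein distance over action coordinates; a cheap
    # proxy for the Sinkhorn divergence used in the rebuttal.
    return float(np.mean([wasserstein_distance(x[:, j], y[:, j])
                          for j in range(x.shape[1])]))

rng = np.random.default_rng(0)
# Step 1: stand-in for HMC samples from the feasible action space.
hmc_like = rng.uniform(-1, 1, size=(5000, 2))
# Step 2: a generator with full coverage vs. one covering only half the space.
full_cov = rng.uniform(-1, 1, size=(5000, 2))
collapsed = rng.uniform(0, 1, size=(5000, 2))
# Step 3: low W-Dist => high recall; high W-Dist => low recall.
d_full = marginal_w_dist(hmc_like, full_cov)
d_half = marginal_w_dist(hmc_like, collapsed)
assert d_full < d_half  # the full-coverage generator scores better
```

For the collapsed generator, the per-coordinate distance between U(-1, 1) and U(0, 1) is 0.5 in expectation, so the gap between the two generators is large even under this crude marginal proxy.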
Summary: This paper provides a new method for ACRL, which incorporates normalizing flow methods to alleviate the action violation problem. It achieves better results on MuJoCo compared with other methods. Strengths: - Introduces normalizing flows into action control -- mapping the original hard-to-control action space into another, easier-to-control space. - Designs an HMC-PSDD framework to efficiently train the flow model. - Finds an appropriate prior distribution for the flow model. Weaknesses: Main issue: - An ablation study for PSDD may be necessary, as we don't know the quality of the generated valid actions. For example, a possible baseline can be: interact with the environments many times to obtain the possible valid actions. Minor issue: - DDPG is relatively outdated. More recent SoTA methods should be added to the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I am not an expert in ACRL, so I wonder if there are other ACRL benchmarks? Only 3 environments seem not enough. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No obvious negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Ablation study for PSDD and HMC Please see the common response. (Sec. 2) 2. Comparison with recent ACRL algorithms + more domains Please see the common response. (Sec. 3 and 4) --- Rebuttal 2: Comment: Thanks for the detailed explanation! --- Rebuttal Comment 2.1: Comment: Thank you very much for reviewing our response. We hope that we have addressed your concerns, and we shall include the ablation study and additional experiments in our revised version.
Summary: The paper introduces a novel action-constrained reinforcement learning (ACRL) algorithm called FlowPG, which utilizes a normalizing flow model to generate actions within the feasible action region. Experimental results demonstrate that FlowPG effectively handles action constraints and outperforms two existing ACRL algorithms by reducing the number of constraint violations. Strengths: The paper is well-written and provides good motivation. Weaknesses: The paper lacks an ablation study to validate the importance of the proposed HMC and PSDD. The advantages in training speed are not obvious. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. The motivation behind the use of Hamiltonian Monte Carlo (HMC) and probabilistic sentential decision diagrams (PSDD) is not sufficiently clear. Additionally, an ablation study is necessary to validate the importance of these proposed techniques. Q2. The paper claims that the proposed method achieves faster training speeds compared to the baselines. However, Figure 4(a) shows no clear advantage in terms of convergence speed for FlowPG compared to the two baselines. This discrepancy needs to be addressed and clarified. Q3. The paper would benefit from including more comparisons between the proposed method and recent ACRL algorithms. By providing such comparisons, the authors can further highlight the strengths and weaknesses of their approach and provide a more comprehensive evaluation. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: see Questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Motivation behind HMC and PSDD, and ablation study to justify their use Please see the common response. (Sec. 2) 2. Convergence speed of FlowPG For Half-Cheetah and Reacher, we highlight that our approach has a training curve similar to NFWPO's. However, the key difference is that each training step is much faster in our approach than in NFWPO, as shown in Figure 4 (last column). Therefore, in terms of runtime, our approach converges much faster than NFWPO. For the BSS domain, our approach has a better training curve than NFWPO (in terms of the number of training steps), and is also significantly faster, as shown in Figure 4 (first and last columns). 3. Comparison with recent ACRL algorithms Please see the common response. (Sec. 4) Citations for all our responses: [1] Achiam, Joshua, et al. "Constrained policy optimization." International conference on machine learning. PMLR, 2017. [2] Tessler, Chen, Daniel J. Mankowitz, and Shie Mannor. "Reward constrained policy optimization." arXiv preprint arXiv:1805.11074 (2018). [3] Liu, Tianci, and Jeffrey Regier. "An Empirical Comparison of GANs and Normalizing Flows for Density Estimation." arXiv preprint arXiv:2006.10175 (2020). [4] Lin, Jyun-Li, et al. "Escaping from zero gradient: Revisiting action-constrained reinforcement learning via Frank-Wolfe policy optimization." Uncertainty in Artificial Intelligence. PMLR, 2021. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the additional experiments and explanation, which address my concerns. --- Reply to Comment 1.1.1: Comment: Thank you very much for reviewing our response. We shall include the ablation study and additional experiments in our revised version.
Rebuttal 1: Rebuttal: We thank all the reviewers for their thoughtful feedback and suggestions. We would like to address a few common questions as follows. 1. Rationale for using normalizing flows Our reasons are: High Accuracy: We conducted an ablation study comparing different generative models such as VAE and WGAN (which is more stable than GAN) in the Reacher domain with the constraint $a_1^2 + a_2^2 \leq 0.05$. We evaluated the accuracy by calculating the percentage of valid actions among 100k actions generated by each model. The accuracy of the various generative models was as follows: Normalizing Flow: 99.98%; WGAN: 98%; VAE: 83%. Recall Through Invertible Transformations: Normalizing flows provide the ability to measure recall due to their invertible bijective transformations. The recall rate (we also call it “coverage”) indicates the fraction of valid actions that can be generated from the latent space. Given a state $s$, it can be computed as follows: $recall(s) = \frac{\sum_{a\in\tilde{\mathcal{C}}(s)} \mathbb{I}\_{\textbf{dom}f_{\psi}}\big(f^{-1}\_{\psi}(a,s) \big) }{|\tilde{\mathcal{C}}(s)|}$ where $\tilde{\mathcal{C}}(s)$ is a set of valid actions that are uniformly distributed in the feasible region, and $f^{-1}\_{\psi}$ is the inverse transformation function of the normalizing flow model $f\_\psi$. We compute the average recall over all uniformly sampled states to obtain the recall of our flow model. The achieved recall rates for our trained normalizing flow model are as follows: 97.85% for the Reacher, 78.01% for the Half Cheetah, and 82.35% for the BSS environment. In contrast, the recall rate cannot be computed in a straightforward fashion for VAE and WGAN, since determining the corresponding latent action for a given valid action is not possible [3]. Nonetheless, we can still visualize the coverage of VAE and WGAN in Reacher. In Figure 3 of the rebuttal PDF, we can see that the feasible region is not fully covered by either the VAE or WGAN model. 
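The recall computation above can be illustrated with a toy invertible map standing in for the trained flow $f_\psi$; only invertibility and a known latent domain (here $[-1, 1]^2$) matter for the metric. The map `a = 0.15 * z` is a deliberately under-covering, hypothetical stand-in (not a trained model), so the computed recall comes out strictly below 1.

```python
import numpy as np

# A stand-in "trained flow": a = f(z) = 0.15 * z with latent domain
# z in [-1, 1]^2.  A real model would be an invertible neural network.
f_inv = lambda a: a / 0.15
in_domain = lambda z: np.all(np.abs(z) <= 1.0, axis=-1)

# Uniformly sample valid actions for the Reacher-style ball constraint
# a1^2 + a2^2 <= 0.05 via rejection from a bounding box.
rng = np.random.default_rng(0)
cand = rng.uniform(-0.23, 0.23, size=(200000, 2))
valid = cand[np.sum(cand**2, axis=1) <= 0.05]

# recall = fraction of valid actions whose inverse image lies in the
# latent domain, i.e. actions the flow is actually able to generate.
recall = float(np.mean(in_domain(f_inv(valid))))
assert 0.0 < recall < 1.0  # the toy flow covers only part of the region
```

Here the flow's range is the square $[-0.15, 0.15]^2$ inside the feasible disc, so the expected recall is the area ratio $0.09 / (\pi \cdot 0.05) \approx 0.57$; an invertible model makes this coverage measurable, which is the point of the metric.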
A higher accuracy rate indicates fewer projection operations. A higher recall rate indicates that the agent is able to explore a nearly complete feasible region. Table 1 in the rebuttal PDF summarizes the accuracy and recall rates. 2. Ablation study on HMC and PSDD To measure the efficiency of sample generation, we employ a success-rate metric, defined as the percentage of valid actions per 100 generated sample points. In both domains, the HMC method achieves a success rate of 100%. For rejection sampling, the success rates are 3.93% and 4.7% in the Reacher and Half Cheetah domains, respectively. Figure 1 in the rebuttal PDF shows the density of generated sample points within the feasible region. The HMC method results in a significantly higher number of data points uniformly distributed across the feasible region. This indicates that HMC is more efficient at sample generation than the rejection sampling method. When action space constraints are expressed as (in)equalities (such as in the BSS environment), generating valid actions through either rejection sampling or HMC becomes challenging (e.g., rejection/HMC sampling does not produce any action that satisfies all (in)equality constraints within a practical time limit). The advantage of using PSDDs lies in their ability to represent a probability distribution over all valid actions, which implies that any action sampled from the PSDD is guaranteed to satisfy the constraint. Furthermore, PSDD enables fast sampling of actions with complexity linear in its size and can easily represent a uniform distribution over the feasible action space (Section 3.2 in the main paper). 3. Other action-constrained RL domains We have evaluated our proposed framework in two more domains, Hopper and Walker, which belong to the Gym-MuJoCo continuous control tasks. In the Hopper domain, we consider the state-dependent constraint $\sum\_{i=1}^3 \text{max}(w\_i a\_i, 0) \leq 10$, where $w\_i$ is the state feature. 
This is similar to the constraint in Half Cheetah. Similarly, we consider the constraint $\sum\_{i=1}^6 \text{max}(w\_i a\_i, 0)\leq 10$ in the Walker domain. All experimental settings remain the same as in the paper. Figure 2 in the rebuttal PDF shows the different training curves in Hopper and Walker. In the Hopper domain, compared to DDPG+P, our approach has fewer constraint violations before the projection operation, and achieves comparable results in terms of average return, magnitude of constraint violation, and running speed. In contrast to the NFWPO approach, our method performs better across all metrics, except for minor differences in the magnitude of constraint violation. In the Walker domain, our approach outperforms DDPG+P across all metrics except running time. When compared with NFWPO, while our approach has slightly more cumulative constraint violations, the achieved magnitude of constraint violation and average return remain comparable. Significantly, our approach has a much faster runtime compared to NFWPO. To summarize, in the two new domains featuring state-dependent constraints, our approach consistently achieves comparable or better average returns than the other baselines. Moreover, our approach also demonstrates significantly reduced constraint violations and faster running time in comparison to the other baselines. 4. Comparison with recent ACRL algorithms In constrained RL, most works focus on tackling cumulative constraints such as $\mathbb{E}\_{\pi}[\sum_{t=0}^\infty C(s\_t,a\_t)] \leq c$. There exist a limited number of works that specifically target ACRL, where constraints are expressed as closed-form conditions on actions. Among the existing solutions for ACRL, representative approaches include DDPG+Projection, SAC+Projection, DDPG+OptLayer, and NFWPO. Among these, NFWPO has been demonstrated to have better performance in terms of return and constraint violations before projection [4]. 
This is why we have selected NFWPO as our primary baseline for comparison. Pdf: /pdf/8531c11d51f6804defcc479f418acabadc052aea.pdf
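The 3.93% rejection-sampling success rate reported for Reacher in Sec. 2 of the common response follows directly from the geometry of the constraint: proposing uniformly from $[-1, 1]^2$ and accepting when $a_1^2 + a_2^2 \leq 0.05$ has acceptance probability equal to the area ratio $\pi \cdot 0.05 / 4 \approx 3.93\%$. A quick Monte Carlo check:

```python
import numpy as np

# Plain rejection sampling for the Reacher constraint a1^2 + a2^2 <= 0.05
# with proposals drawn uniformly from the action box [-1, 1]^2.
rng = np.random.default_rng(0)
proposals = rng.uniform(-1, 1, size=(1_000_000, 2))
accepted = np.sum(proposals[:, 0]**2 + proposals[:, 1]**2 <= 0.05)
success_rate = accepted / len(proposals)

# Acceptance probability equals the area ratio of the feasible disc to
# the proposal box: pi * 0.05 / 4 ~ 0.0393, i.e. ~3.93%.
assert abs(success_rate - np.pi * 0.05 / 4) < 1e-3
```

HMC sidesteps this waste by walking inside the feasible region rather than proposing blindly, which is consistent with the 100% success rate it achieves in the same domains.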
NeurIPS_2023_submissions_huggingface
2023
Temperature Balancing, Layer-wise Weight Analysis, and Neural Network Training
Accept (spotlight)
Summary: This paper proposes TempBalance for temperature balancing based on the theory of heavy-tail self-regularization (HT-SR), which is a simple yet effective layer-wise policy applicable to general global temperature allocations in deep learning regularization. This paper proposes learning rate balancing across layers, which has received less attention compared to global or parameter-wise learning rate assignment. The HT-SR motivated capacity control metrics characterize the layers to achieve maximum temperature balance during model training, resulting in improved performance during testing. Extensive experiments show that TempBalance significantly outperforms ordinary SGD and carefully tuned spectral norm regularization, as well as a number of state-of-the-art optimizers and learning rate schedulers. Strengths: 1. The article proposes a simple yet effective layer-wise learning rate schedule TempBalance based on HT-SR theory. 2. The article compares TempBalance with SGD and SNR on various training tasks, including different network architectures (such as ResNet, VGG, WideResNet), different datasets (such as CIFAR10, CIFAR100, SVHN, TinyImageNet), and extensive ablation studies (such as varying widths, depths, initial learning rates and HT-SR layer-wise metrics). 3. TempBalance outperforms a range of state-of-the-art optimizers and learning rate schedulers, and maintains stable performance over SGD baselines when the model size changes. 4. The designed algorithm is simple and easy to understand, and the paper is well-written. Weaknesses: 1. The algorithm implementation is too simple and lacks theoretical innovation. 2. The motivation is not clear enough, and it does not explain why it is necessary to design layer-wise learning rate schedule strategy based on HT-SR theory. 3. The paper only conducts experiments on classification tasks and does not conduct experimental validation on downstream tasks or other areas. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. What are the advantages of a layer-wise learning rate schedule strategy based on HT-SR theory over other layer-wise learning rate schedule strategies? What are the advantages over global and parameter-wise learning rate schedule strategies? 2. Optimizers are closely related to the model architecture, e.g., Transformers typically use second-order optimizers, while CNNs typically use first-order optimizers. The paper does not analyze or experimentally verify this common phenomenon, and only uses CNNs in combination with first- and second-order optimizers for classification tasks, which may lead to "We do not find them to provide better results than the SGD baseline with cosine annealing". Could more extensive experimentation on this issue be done when conditions permit? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: This article provides necessary discussions on the limitations and future directions that can be explored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Experiments on other areas In Table 6 of the rebuttal PDF, we provide experiments applying our method TempBalance (TB) to two different tasks: object detection and language modeling. In both tasks, TB consistently improves generalization, outperforming the baseline scheduler cosine annealing (CAL) when both are combined with the Adam/AdamW optimizers. Here are the experimental settings for object detection. We utilized the PASCAL VOC2007 [1] dataset and the YOLO-v8n [2] model with the pre-trained weights from the official website. The baseline methods are CAL combined with Adam/AdamW, while our method is TB with Adam/AdamW. For both methods, we trained for 200 epochs with batch size 64 and set the same hyperparameters for the optimizers: weight decay = $5.0\times10^{-4}$, $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, $\epsilon$ = $10^{-8}$. We searched the initial learning rate for all methods among {$7.5\times10^{-6}$, $1\times10^{-5}$, $2.5\times10^{-5}$}. We report the mean and standard deviation of mean Average Precision (mAP, higher is better) over five random seeds on the test set. Here are the experimental settings for language modeling. We studied the Penn Treebank (PTB) dataset [3] using a three-layer "tensorized transformer core-1" [4]. We compared our method TB with the baseline scheduler CAL, with both applied to the Adam optimizer. For both methods, we trained the models for 40000 iterations with a batch size of 120 and a dropout rate of 0.3. We searched the initial learning rate for all methods among {0.000125, 0.00025, 0.0005, 0.001, 0.00125, 0.0025, 0.005}. The hyperparameters for Adam are $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, $\epsilon$ = $10^{-8}$. The mean and standard deviation of perplexity (PPL, lower is better) across five random seeds on the test set are reported. 
## Experiments with Transformer and Adam We agree that optimizers are closely related to the architecture and that using Adam with a Transformer instead of a CNN is a more suitable experimental setup. In our rebuttal PDF Table 6 (b), we trained a tensorized transformer using Adam. Further details are in "Experiments on other areas." Our results show that TB outperforms the Adam baseline with the CAL scheduler, demonstrating its compatibility with Transformers and Adam. ## Advantages over different learning rate scheduling 1. **Compared to layer-wise learning rate scheduling (e.g., LARS):** Our method utilizes a more precise generalization metric, the alpha metric from heavy-tailed self-regularization (HT-SR) theory, to enhance the generalization performance of deep models during training. This "shape-based" metric estimates the shape of the eigenspectrum of weight matrices. In contrast, LARS uses a "norm-based" metric, such as the layer-wise gradient norm. A recent study in HT-SR [5] has shown that the shape-based metric alpha surpasses norm-based ones in assessing model generalization performance. Figure 4 of the submitted paper confirms that our method outperforms the layer-wise scheduler LARS in test accuracy. 2. **Compared to parameter-wise learning rate scheduling (e.g., Adam):** Similarly, our method employs the "shape-based" metric alpha to improve generalization, an approach not incorporated in traditional parameter-wise methods. Our updated results in Figure 18 of the rebuttal PDF confirm that our method outperforms the parameter-wise schedulers Adam and LAMB in test accuracy. Moreover, our updated experiments in Table 6 show that combining our method with Adam/AdamW further improves generalization. Here are the experimental setups for Figure 18. For Adam, we searched the initial learning rate over {0.00005, 0.0001, 0.001, 0.01, 0.1}, and used $\epsilon$ = $10^{−8}$.
For LAMB, we searched the initial learning rate over {0.005, 0.01, 0.02}, and used $\epsilon$ = $10^{−6}$. Both used a weight decay of $5.0×10^{−4}$, $\beta_{1}$ of 0.9, $\beta_{2}$ of 0.999, learning rate decay with cosine annealing. Each experiment was conducted with five random seeds. All other hyperparameters were consistent with those described in the paper. The experimental details of Table 6 can be found in our response titled "Experiments on other areas". ## Lack of theoretical innovation and simple algorithm design We wish to emphasize that this is the first study to design learning rate schedulers based on the heavy-tailed self regularization (HT-SR) theory. We draw the theoretical insights from HT-SR, noting that the weight matrices of layers in well-trained models typically exhibit a heavy-tailed eigenspectrum. We propose to use the Hill estimator to quantify the heavy-tailed pattern of each neural network layer, utilizing this value for more steady layer-wise learning rate scheduling. This theory-guided method is novel, efficient in improving test accuracy, and straightforward to implement, as affirmed by the reviewer in the "Strengths" section. ## Unclear motivation: why design layer-wise learning rate schedule strategy based on HT-SR theory The heavy-tailed self regularization (HT-SR) theory suggests that the empirical spectral densities of weight matrices in a well-trained neural network typically display a heavy-tailed pattern. This heavy-tailed characteristic can indicate the quality of each layer, signifying whether it is overtrained or undertrained. Such theoretical insights motivate us to balance the overtrained and undertrained levels of different layers by designing a layer-wise learning rate schedule based on the heavy-tailed pattern measurement. 
Specifically, we assign higher learning rates to undertrained layers (as indicated by less pronounced heavy-tailed patterns) and lower rates to overtrained layers (indicated by more pronounced heavy-tailed patterns). We dynamically monitor these measurements and adjust the scheduling throughout the training process. ## Reference [1] Everingham et al, 2010. [2] Redmon et al, 2016. [3] Mikolov et al, 2011. [4] Ma et al, 2019. [5] Martin and Mahoney, 2021.
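To make the scheduling rule described above concrete, here is a minimal sketch in Python. It assumes the linear interpolation rule maps each layer's PL_Alpha_Hill value linearly between the scaling ratios $s_1$ and $s_2$ of the base learning rate; the function name and exact form are illustrative, not the authors' Eq. 2.

```python
def tempbalance_lrs(alphas, base_lr, s1=0.5, s2=1.5):
    """Assign per-layer learning rates by linear interpolation over alpha:
    the layer with the smallest alpha (most heavy-tailed, presumed
    overtrained) gets s1*base_lr; the largest alpha gets s2*base_lr.
    This is a hedged reading of the paper's linear-interpolation design."""
    a_min, a_max = min(alphas), max(alphas)
    if a_max == a_min:  # degenerate case: all layers look equally trained
        return [base_lr] * len(alphas)
    return [base_lr * (s1 + (a - a_min) / (a_max - a_min) * (s2 - s1))
            for a in alphas]

# Example: three layers, base LR 0.1 -> per-layer LRs span [0.05, 0.15]
lrs = tempbalance_lrs([2.0, 3.0, 4.0], base_lr=0.1)
```

With the default $(s_1, s_2) = (0.5, 1.5)$ from the hyperparameter study, the scaling stays centered around the global rate while undertrained layers (larger alpha) receive up to 1.5x the base learning rate.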
Summary: The paper proposes a layer-wise learning rate scheduler that adapts the learning rate to the eigenvalue distribution of the weight covariance matrix. As explained in the appendix, output and weight covariance matrices as well as the Hessian and Fisher information matrix are closely related, and they often show a power-law decay in their eigenvalues. The hypothesis is that layers with a smaller exponent in the power-law decay are more "overtrained" than those with larger values, and thus should receive smaller updates. This behavior is implemented by interpolating learning rates linearly by (fitted) decay exponents. Experiments show an improved generalization performance over baseline methods for a variety of standard computer-vision benchmarks and (CNN) architectures. Strengths: The paper follows an interesting and less explored approach to understanding the training dynamics of networks. The observation of an emergence of power laws as such is fascinating, and studying the connection to training dynamics adds an interesting and novel perspective. Further, the empirical results are encouraging: It seems that the proposed criterion for adaptive training does indeed improve generalization performance, which is an unexpected, non-trivial result that on its own raises some eyebrows (i.e., should be discussed and explored). The paper is well-written and in particular the discussion of the background in the appendix provides an interesting and insightful read. Weaknesses: My main concern with the submission in its current form is that it is light on the analytical side. In short, the paper does not try to explain why there is a connection between training success and the proposed criterion/schedule, and there is a risk of overlooking unmodelled or indirect effects: In my experience, the training process of multi-layer networks is affected by a mix of numerical and fundamental issues that are usually difficult to separate.
A very significant problem is, for example, gradient magnitude excursions (explosion/vanishing). For networks that contain batch normalization layers (or most variants thereof), the standard He initialization leads to exploding gradients at initialization. This causes early layers to "learn" at an exponentially larger rate than later layers, but the effect vanishes over time. It seems likely that large rank-1 updates from initially exploding gradients distort the estimation of the decay coefficients ("alpha"), and likely in the "correct" direction of dampening exploding layers. Residual architectures still suffer from this effect within each stack in each residual block (but to a lesser degree). In contrast, non-normalized networks such as the traditional VGG suffer from vanishing gradients for other reasons. Then, there is the problem of concentration of the overall singular value spectrum for stacks of random matrices (higher-order powers of Wigner spectra), which again affects the effective rank of the updates. In order to understand better if and how the proposed method could (or does) work, it would be useful to take such effects into account, for example by measuring gradient magnitudes over layers, or by a theoretical model. This could inform the reader better with respect to the technical justification of the proposed scheme and might help remove (or justify) ad-hoc choices, such as linear interpolation of learning rates. It would also be helpful to carefully consider architectural aspects such as normalization and residual connections, and (at least) monitor the layer-wise statistics of gradient magnitudes and decay coefficients in order to understand better what is going on. There are some smaller issues that could be improved, such as experimenting on a more diverse set of architectures and data sets and comparing against a larger set of baseline schedulers.
In the former case, I would consider the current effort sufficient (as computational costs are skyrocketing easily, and the effect does seem quite pronounced already), in the latter case a more comprehensive study (maybe only for one or a few representative examples) could help in solidifying the result further. Overall, I found this paper very interesting and the results are unexpectedly strong. The only reason for my slightly negative overall assessment is that (for a paper at NeurIPS) more effort could be put into understanding the results better (empirically and/or analytically). Technical Quality: 3 good Clarity: 3 good Questions for Authors: Are the new results state-of-the-art in terms of generalization performance (compared with other LR-schedulers)? That does not need to be the case to have an interesting paper, but it would be good to know which characteristics of other approaches reach similar goals. Conceptually: Would it be possible that the whole effect observed boils down to implicitly addressing exploding or vanishing gradients? EDIT: Post rebuttal, I have raised my score to accept (and soundness and contribution accordingly), as the response of the authors shows very clearly that the new approach has effects beyond gradient magnitude excursions, which was my main concern in terms of empirical evidence for the effect described. I would encourage the authors to clarify this in a revision of the paper. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: My main concern in terms of limitations is the possibility that other primary effects cause the spectral criterion to trigger as a secondary feature. This could be excluded by additional experiments (or even some theory).
One could also discuss limitations of the empirical study a bit more in detail, but I would think that the limitations are obvious to an attentive reader, so there would be no serious issues in this regard (I would not understand this as a "new technique" paper but rather a paper that explores a novel methodological approach, not yet arriving at a deployable method). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Does TempBalance (TB) boil down to addressing gradient excursions We first summarize the reviewer's questions and our primary responses, with subsequent detailing of our experiments and supporting results. 1. **Does gradient excursion exist?** We discovered that gradient explosion does exist, but it is confined to the first epoch out of a total of 200 training epochs, leading us to believe it does not significantly impact the test accuracy. We observed no gradient vanishing. 2. **Does the observed gradient explosion impact the estimation of alpha?** We discovered that the large rank-1 updates resulting from the gradient explosion indeed affect the Empirical Spectral Density (ESD) as well as the alpha estimation. However, this effect is again restricted to the first epoch. 3. **Does TB boil down to addressing gradient explosion?** We found that postponing the start of TB until after the gradient explosion and its effect on alpha estimation have subsided does not compromise the test accuracy. To support the above answers, we conducted three experiments (see figures in the rebuttal material). We discuss the setup of these experiments first and then analyze the results. * (Figures 12,13) We aim to detect gradient excursion by tracking the gradient norm across layers during training. We examine the model every 30 iterations over the first 10 epochs, calculating the $L_2$ norm of each gradient update across layers using training batches of size 128. This produces an empirical gradient norm distribution with a total sample size of (number of updates) $\times$ (number of layers). Figure 12 presents the maximum/minimum/mean of the distribution, while Figure 13 visualizes these distributions for several iterations of Epoch 1. * (Figure 14) We aim to assess the impact of gradient explosion on alpha estimation by monitoring the ESD.
Figure 14 examines the change of the ESD of a single weight matrix over several iterations, tracked in the experiment depicted in Figures 12 and 13. * (Figure 15 (a)) We aim to see if TB enhances generalization by implicitly addressing gradient explosion. Since the gradient explosion and its effect on alpha estimation only transpire in the first epoch, we postpone the starting epoch of TB to Epochs 2, 5, and 10 and see if it affects the test accuracy. Our responses to the questions are supported by the results obtained from the three experiments: 1. **First question (Figure 12 and 13):** We observed that the notable exploding gradients only occur in the initial 200 iterations of the first epoch. In Figure 12, we pinpoint a singular peak of maximum gradient norm within the first epoch. This aligns with the abnormal distribution with a large gradient norm in the subfigure of Figure 13 titled "Epoch 1, iteration 30". 2. **Second question (Figure 14):** Note that large rank-one updates have been studied in random matrix theory, which manifests as a "bulk+spike" pattern. This has been analyzed in, e.g., Theorem 2.13 of [1]. Figure 14 shows this "bulk+spike" pattern, but only in the first epoch. The ESD exhibits a heavy-tail distribution in subsequent epochs, suggesting the influence of rank-one updates is limited. 3. **Third question (Figure 15 (a)):** Delaying the application of TB until after the first epoch does not adversely affect the test accuracy. Figure 15 (a) illustrates that applying TB from Epochs 2, 5, and 10 results in test performance comparable to when TB is applied from Epoch 1. Since the gradient explosion only occurs in the first epoch and its effect on alpha estimation diminishes after this, the effectiveness of TB does not rely on addressing gradient explosion or biased alpha estimation from large rank-one updates. 4. 
**Third question (continued)**: We compare TB with the baseline method LARS, which uses gradient norms to determine layer-wise learning rates in combating gradient vanishing/explosion issues. As illustrated in Figure 4 of the submitted paper, TB outperforms LARS in terms of generalization performance. ## Light on the analytical side 1. Our method is founded on the HT-SR theory, detailed in both the introduction and Appendix A. This explains: **(1) our use of alpha for better generalization:** The paper [2] showed that modern neural networks' Empirical Spectral Densities (ESDs) typically exhibit a heavy-tail distribution. Alpha, the decay coefficient of the ESD, effectively gauges generalization. [3] provided a rigorous bound for this, and [4] pinpointed an optimal alpha value close to 2. **(2) our approach to learning rate schedule based on alpha:** Based on insights from [2] and [5], we set layer-wise learning rates according to alpha values, as a higher learning rate decreases alpha. Thus, layers with larger alpha get higher learning rates. Our linear interpolation design for learning rate assignment was based on its better performance over other designs (e.g., square root, log2, step). See Figure 16 in the rebuttal PDF. 2. The success of TB and its relationship with HT-SR are elucidated in Appendix B (Figures 10, 11). It reveals that TB effectively controls the shape of the ESDs. Compared to CAL, TB consistently attains a more concentrated distribution, with both the mean and median approaching the theoretically optimal PL_Alpha_Hill value of 2 [4]. This is consistently observed across various settings. ## Smaller issues: comparison to other schedulers Figure 4 in our paper demonstrates our method's advantage over multiple baselines. Our rebuttal's Figure 18 further confirms its consistent improvement over parameter-wise schedulers such as Adam and LAMB. Also, our rebuttal PDF Table 6 reveals our method's performance gains in object detection and language modeling.
## Reference [1] Couillet and Liao, 2022 [2] Martin and Mahoney, 2021 [3] Simsekli et al, 2020 [4] Bartlett et al, 2020 [5] Gurbuzbalaban et al, 2020 --- Rebuttal Comment 1.1: Comment: Dear Authors, thanks for the very detailed reply and extensive additional analysis. The additional findings and explanations indeed resolve the open problems I saw previously. The issue of gradient magnitude excursions is usually limited to the first few steps of optimization in networks with batch normalization, as the normalization layer will counteract weight excursions from large gradient updates and effectively freeze the affected layers. That your method still provides an improvement after the first epoch is a clear indication (in my perception) that it does more than "just" counteracting gradient magnitude differences. I also overlooked the LAMB results already in the paper (LAMB's per-layer adjustment avoids such problems completely, but the new method is still better). I would correspondingly adjust my score and recommend mentioning the differentiation to "just" exploding gradients in the final version (maybe providing some of the new results in a suitable way). If possible, it would also be good to add a few more sentences on the HT-theory (maybe in the appendix) to help readers less familiar with the background. Thanks again for the detailed feedback! --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thank you for the constructive comments. We are glad that our rebuttal helps answer your questions and clarify that our method is more than mitigating the issue of gradient explosion/vanishing. We agree that addressing gradient magnitude excursions is vital for improving training. We will ensure that these discussions on gradient magnitude excursions, along with the details on HT-SR theory and the results on LAMB, are highlighted in the final version.
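The layer-wise gradient-norm tracking behind Figures 12 and 13 can be sketched as follows. This is a minimal, framework-agnostic illustration (assuming gradients are already flattened into per-layer lists of floats), not the authors' instrumentation code:

```python
import math

def layerwise_grad_norms(grads):
    """grads: mapping of layer name -> flat list of gradient entries for
    one update.  Returns the per-layer L2 norms together with the
    max/min/mean summary used to spot exploding or vanishing layers."""
    norms = {name: math.sqrt(sum(g * g for g in flat))
             for name, flat in grads.items()}
    vals = list(norms.values())
    return norms, max(vals), min(vals), sum(vals) / len(vals)

# Hypothetical example: layer "conv1" shows a much larger update than "fc"
norms, g_max, g_min, g_mean = layerwise_grad_norms(
    {"conv1": [3.0, 4.0], "fc": [0.0, 1.0]})
```

Logging these statistics every few dozen iterations, as in the rebuttal's setup (every 30 iterations over the first 10 epochs), makes a one-epoch explosion like the one reported in Figure 12 easy to spot.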
Summary: This paper proposes TempBalance, an adaptive lr schedule that assigns an lr to each layer based on its heavy-tail characterization. The authors estimate PL_Alpha, the exponent of the power law distribution that fits the heavy-tail part of the empirical spectral density, for the weight at each layer. They propose to assign a higher lr to the layer with larger PL_Alpha and a lower lr to the layer with smaller PL_Alpha, as a larger PL_Alpha often indicates a layer is under-trained while a smaller PL_Alpha often indicates a layer is over-trained. Strengths: 1. The proposed TempBalance is novel in that it adjusts lr based on metrics from HT-SR theory, providing a new perspective from statistical mechanics of learning to neural networks optimization. 2. The experimental results demonstrate that TempBalance exhibits a regularization effect and improves the generalization performance upon some existing lr schedulers, optimizers and spectral norm regularization methods. Weaknesses: 1. Eq. 2: The authors design the relation between the lr and the value of PL_Alpha_Hill to be linear, which seems a bit arbitrary to me. Have the authors tried other designs? 2. The authors do not provide a convergence analysis. 3. Eq. 2: TempBalance requires the computation of eigenvalues of the weight at each layer, inducing some computational overhead. Therefore, it can be difficult to scale TempBalance to models with larger widths and depths. The authors reduce the lr update frequency to alleviate the cost, but this might compromise the model performance. 4. Missing baseline: LAMB [1]. No hyper-parameter study on $s_1$ and $s_2$. 5. The authors only evaluate on small models. It would be good if the authors can provide an experiment on a ResNet-101 to demonstrate its potential for larger models. [1] You, Yang, et al. "Large batch optimization for deep learning: Training bert in 76 minutes." arXiv preprint arXiv:1904.00962 (2019).
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Could the authors provide a visualization of the layerwise lr with some insights on how the lr varies across layers and how the layerwise lr changes through training? 2. See weaknesses. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Other design of learning rate assignment? Our selection of the linear interpolation design for learning rate assignment in our TempBalance (TB) method was based on its superior performance in our ablation study, as provided in rebuttal PDF Figure 16. We evaluated three alternative learning rate assignment functions: Square root (Sqrt), Log2, and Step: * Sqrt: $f_t(i)=\eta_t\frac{\sqrt{\alpha_t^i}}{\frac{1}{L} \sum_{j=1}^{L}\sqrt{\alpha_t^j}}$, * Log2: $f_t(i)=\eta_t\frac{\log_2(\alpha_t^i)}{\frac{1}{L} \sum_{j=1}^{L}\log_2(\alpha_t^j)}$, * Step: For layer $i$ with the $k$-th minimum alpha among all the layers, $f_t(i)= \eta_t (s_1 + (k-1)\frac{s_2 - s_1}{L-1})$. Here, $\eta_t$ denotes the base global learning rate at epoch $t$, $(s_1, s_2)$ represents the minimum and maximum learning rate scaling ratios relative to $\eta_t$, $\alpha_t^i$ is the PL_Alpha_Hill estimate of layer $i$ at epoch $t$, and $L$ is the total number of model layers. All these notations are consistently used in the main paper. As depicted in Figure 16, our method, TB, surpasses the other designs when tested on VGG and ResNet architectures on CIFAR100. All hyperparameters are consistent with the main paper. Each experiment was conducted with five random seeds. ## Computation problem 1. **Is our method difficult to scale to large models?** * In rebuttal PDF Figure 17, we conducted a scaling experiment to show that our method is applicable to large models. The experiment setup is based on the ResNet series on CIFAR100. We studied models of depth in {18, 34, 50, 101} and ResNet18 models of width in {512, 768, 1024, 2048}. For each model size, we recorded the duration of a single training epoch and the time taken to apply our method once. From this, we calculated the percentage increase in time when using TB once per epoch, using this as an indicator of computational overhead.
Our findings reveal that the computational overhead remains low (less than 9%) even when applied to exceptionally wide or deep models (ResNet18 with width 2048 or ResNet101). We report the mean and the standard deviation of the results over 10 runs. The test platform was one Quadro RTX 6000 GPU with Intel Xeon Gold 6248 CPU. * The computation overhead is not large because the most computation-intensive part of our method is SVD decomposition, which we have optimized using GPU implementation and batch processing. 2. **Does reducing the SVD computation compromise the test accuracy?** We conducted an experiment on reducing the update interval of learning rate schedule to see if it affects the test accuracy of TB. Figure 15 (b) shows the experiments conducted with ResNet18 on CIFAR-100. We reduce the update interval from 390 iters used in our paper (equivalent to one epoch) to 300, 200, 100, and 50. We observed that there indeed exists a trade-off between the computation time and test accuracy, but reducing the update interval only brings mild improvement. ## Missing baseline and hyperparameter study 1. **Missing baseline.** In rebuttal PDF Figure 18, we provide additional results by comparing our method with Adam and LAMB. We found that our method outperforms both baseline methods. Furthermore, we also found that the Adam-based methods do not provide better results than the SGD baseline with cosine annealing (CAL) in our experiment setting, which was mentioned in line $267-268$ in the paper. For Adam, we searched the initial learning rate over {0.00005, 0.0001, 0.001, 0.01, 0.1}, and we used $\epsilon$ = $10^{−8}$. For LAMB, we searched the initial learning rate over {0.005, 0.01, 0.02}, and we used $\epsilon$ = $10^{−6}$. Both methods used weight decay = $5.0×10^{−4}$, $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, learning rate decay with cosine annealing. Each experiment was conducted with five random seeds. 2. 
**Missing hyperparameter study.** In Figure 19, we provide additional results of a hyperparameter study on $(s_1, s_2)$, in which we consider five different settings for $(s_1,s_2)$: $[(0.5,1.5), (0.6,1.4),(0.7,1.3), (0.8,1.2), (0.9,1.1)]$. We run tasks on CIFAR100 with four VGG and ResNet architectures, each with five random seeds. Our results show that a larger learning rate scaling range $(0.5,1.5)$ performs best. This hyperparameter setting is the default setting used in our paper. All hyperparameters are consistent with those described in the paper. ## Experiments on ResNet 101 We present additional results in rebuttal PDF Figure 20, showing the application of our method (TB) to ResNet 101 on CIFAR-100, and we compare it with the baseline (CAL). We searched the initial learning rate among {0.05, 0.1, 0.15} for both the baseline and our method. The results report the mean and standard deviation across five seeds. We found that TB offers improvements for the larger ResNet101 model comparable to those observed for ResNet18/34, demonstrating its potential for larger models. We used the same hyperparameters as those for ResNet18/34 in Appendix C. ## Visualization of layer-wise learning rate In rebuttal PDF Figure 21, we visualize the layer-wise learning rates for ResNet 18/34 trained on CIFAR-100. We report the learning rate (or alpha) every epoch throughout the 200-epoch training duration. 1. **How does the learning rate vary across layers?** We observed a correlation between the layer-wise learning rate and the layer-wise alpha distribution: layers with larger alphas are allocated larger learning rates, whereas those with smaller alphas receive smaller learning rates. 2. **How does the layer-wise learning rate evolve during training?** The variations in layer-wise learning rates closely reflect shifts in the layer-wise alpha distribution. 
Initially, alpha is distributed uniformly across layers but eventually converges to a layer-wise pattern where earlier layers have smaller alphas and later layers have larger ones. --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: The new experiment results and analysis have addressed my concerns. My score has been updated. --- Reply to Comment 1.1.1: Comment: Thank you for the valuable feedback. We will ensure the new results and analysis are incorporated into the updated version.
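The three alternative assignment functions compared in the ablation (Figure 16) can be written out directly. A hedged Python sketch follows; the function names are illustrative, and the Log2 variant is taken at base 2, matching its name:

```python
import math

def sqrt_assign(alphas, eta):
    """Sqrt variant: f_t(i) = eta * sqrt(alpha_i) / mean_j sqrt(alpha_j)."""
    mean = sum(math.sqrt(a) for a in alphas) / len(alphas)
    return [eta * math.sqrt(a) / mean for a in alphas]

def log2_assign(alphas, eta):
    """Log2 variant: f_t(i) = eta * log2(alpha_i) / mean_j log2(alpha_j)."""
    mean = sum(math.log2(a) for a in alphas) / len(alphas)
    return [eta * math.log2(a) / mean for a in alphas]

def step_assign(alphas, eta, s1=0.5, s2=1.5):
    """Step variant: the layer with the k-th smallest alpha receives
    eta * (s1 + (k-1)*(s2 - s1)/(L-1)), for k = 1..L."""
    L = len(alphas)
    order = sorted(range(L), key=lambda i: alphas[i])
    rank = {i: k for k, i in enumerate(order)}  # rank 0 .. L-1
    return [eta * (s1 + rank[i] * (s2 - s1) / (L - 1)) for i in range(L)]
```

All three keep the average scaling near the global rate $\eta_t$, so the cosine-annealed schedule is preserved on average; per the ablation, the linear interpolation used by TB outperformed all three.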
Summary: The paper proposes a way of modulating the learning rate, independently for each layer, when training deep networks via gradient descent. This modulation keeps the average learning rate (over all layers) on a predefined path (e.g., cosine decay), but "balances" it according to the relative training stage of each layer: layers comparatively "overtrained" get a smaller scaling factor, layers comparatively "undertrained" get a larger one. That comparison is done by leveraging the theory of "heavy-tail self-regularization" (HT-SR), estimating the heavy-tail characteristics of the empirical spectral density (ESD), the spectrum of eigenvalues of the weight correlation matrix, specifically the alpha coefficient of a power law fitting the heavy tail. A wide range of experiments on small-scale image datasets, across architectures (ResNet, VGG) and variants (width and depth), show this method leads to better generalization, compared to single-learning-rate optimizers, and can be further improved when combined with spectral norm regularization (SNR). Strengths: Originality -------------- Layer-wise scaling factors for learning rates is an under-explored area. Exploring it by applying the results of theoretical models of deep network training is novel, and welcome. Quality ---------- The paper presents extensive experiments supporting the proposed methods, and the paper's conclusions. They feature a breadth of variants for the VGG and ResNet architectures, and hyper-parameters that have been carefully tuned. Experiments have been replicated with 5 different seeds, and error bars reported. Experimental settings are clearly reported in the appendix. Overall really solid experimental validation. Clarity --------- The paper is clearly organized, and explains really clearly the theoretical bases it builds on, existing algorithms, as well as the proposed new method.
Significance ----------------- A method that improves the generalization of existing architectures is extremely interesting, especially when the computation overhead is reasonable. This method could be quite impactful, either directly or through further improvements or further research in a similar direction. Weaknesses: No major weaknesses, just a few limitations. 1. It would have been interesting to see one series of experiments on larger-scale data (full ImageNet?), or maybe non-image data. 2. No explicit comparison with parameter-wise scaling schemes (e.g., a variation of Adam) 3. No mention of the algorithmic complexity, or overhead of computing these scales, before the penultimate paragraph. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Clarification questions 1. l. 129, $\lambda_\mathrm{min}$ as the medium (median?) of the ESD. Is that what is shown on Figure 1? It was unclear to me how $\lambda_\mathrm{min}$ was selected based on the figure. 2. Should $(s_1, s_2)$ be mentioned in the Hyperparameters section (l.235)? Further information / curiosity 3. In structured architectures like ResNet, is there a pattern of under / overtraining, either within each block, or between them? For instance, are layers within a block usually at the same "stage", or do you see correlations between the first layers of each stage? This might suggest ways in which to share a scaling factor within a block, for instance. 4. Is there a usual way in which the $\alpha_t^i$ usually evolve during regular SGD training, or a pattern? If there is, is it disrupted or modified when using `TempBalance`? 5. Similarly, how do the $f_t(i)$ evolve through time? For instance, smoothness may indicate that increasing the frequency of update may not be beneficial, but instabilities may suggest the opposite. 6. Is there evidence that convergence with `TempBalance` could be faster or slower than SGD? 
Could it compensate for the overhead of computing `PL_Alpha_Hill`, or worsen it? *Update after rebuttal* I believe all my questions were addressed. The additional experiments are thorough and lead me to increase my score. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Properly addressed within the body of the article. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9: Very Strong Accept: Technically flawless paper with groundbreaking impact on at least one area of AI/ML and excellent impact on multiple areas of AI/ML, with flawless evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal:

## Experiments with non-image data

In Table 6 (b) of our rebuttal PDF, we present a new experiment using a language dataset. Our TempBalance (TB) method performs better than the baseline cosine annealing learning rate schedule (CAL) when both use the Adam optimizer for language modeling. Here are the experimental settings for language modeling. We studied the Penn Treebank (PTB) dataset [1] using a three-layer "tensorized transformer core-1" [2]. We compared our method TB with the baseline scheduler CAL, with both applied to the Adam optimizer. For both methods, we trained the models for 40000 iterations with a batch size of 120 and a dropout rate of 0.3. We searched the initial learning rate for all methods among {0.000125, 0.00025, 0.0005, 0.001, 0.00125, 0.0025, 0.005}. The hyperparameters for Adam are $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, $\epsilon$ = $10^{−8}$. The mean and standard deviation of perplexity (PPL, lower is better) across five random seeds on the test set are reported.

## Explicit comparison with parameter-wise scaling schemes

In Figure 18 of the rebuttal PDF, we compared our method with parameter-wise learning rate schedulers including Adam and LAMB. We show that our method outperforms them with ResNet18/34 on CIFAR100. Here is the experimental setup. For Adam, we searched the initial learning rate over {0.00005, 0.0001, 0.001, 0.01, 0.1}, and used $\epsilon$ = $10^{−8}$. For LAMB, we searched the initial learning rate over {0.005, 0.01, 0.02}, and used $\epsilon$ = $10^{−6}$. Both methods used weight decay = $5.0×10^{−4}$, $\beta_{1}$ = 0.9, $\beta_{2}$ = 0.999, and learning rate decay with cosine annealing. All results were obtained by running five random seeds.

## $\lambda_{min}$ as the median of the ESD (l.129)?

In line 129, we state that $\lambda_{min}$ is fixed as the median of all eigenvalues in the Empirical Spectral Density (ESD), represented by the black vertical line in Figure 1.
The histogram plot's log scales on both axes might make this less intuitive. We thank the reviewer for pointing out the typo, and we will fix and clarify it in the revised version of the paper.

## Should $(s_1,s_2)$ be mentioned in the hyperparameters section (l.235)?

We have listed the settings for $(s_1,s_2)$ for each experiment of the main paper in Appendix C's Tables 1-5 (Hyperparameter settings) under the last column. We will mention it in line 235 in the revised draft.

## No mention of computation overhead

We provide an additional study on the computation overhead of our TB method, with results presented in rebuttal PDF Figure 17. We conducted a scaling experiment to demonstrate that the computational cost remains low for different sizes of ResNet models. Our findings reveal that the computational overhead remains low (less than 9%) even when applied to exceptionally wide or deep models (ResNet18 with width 2048 or ResNet101). The experimental setup is based on the ResNet series on CIFAR100. We studied models of depth in {18, 34, 50, 101} and ResNet18 models of width in {512, 768, 1024, 2048}. For each model size, we recorded the duration of a single training epoch and the time taken to apply our method once per epoch. From this data, we calculated the percentage increase in time when using TB once per epoch, using this as an indicator of computational overhead. We report the mean and standard deviation of the results over 10 runs. The test platform used was a Quadro RTX 6000 GPU with an Intel Xeon Gold 6248 CPU. We will be sure to include these discussions in the updated draft of the paper.

## Patterns of under/overtraining between ResNet blocks

In the rebuttal PDF, Figure 21 (b, d) illustrates the layer-wise alpha of ResNet 18 and 34 during training. Note that the ResNet 18/34 architecture is organized into four stages, with each stage comprising multiple residual blocks.
Our primary focus is on the blue curves, which represent the alpha value at the end of training. One can recognize the patterns between stages: earlier stages (closer to the input) tend to contain more layers with lower alpha values (indicative of overtraining) than later stages. Within one stage of the network, the second convolutional layer typically exhibits a lower alpha than the first convolutional layer. We will be sure to include more discussions about over/undertraining patterns in network architectures, as well as future work on saving computation by implementing a shared scaling factor for certain layers, in the updated draft of the paper.

## How $\alpha_{t}^{i}$ and $f_{t}(i)$ evolve through training

In Figure 22 of the rebuttal PDF, we present visualizations of the $\alpha_t^i$ (alpha) and $f_t(i)$ (learning rate) of two layers within the same ResNet18 during the training process. From Figure 22 (b, d), we can see that with the baseline CAL scheduler (blue curves), the earlier layer (index=1) reaches a smaller alpha value than the later layer (index=15). In contrast, our TB method (orange curves) narrows this gap, indicating our approach balances the undertraining/overtraining levels (as signified by alpha) of different layers. This is further corroborated by Figures 10 and 11 in the submitted paper, where our method consistently refines the layer-wise alpha distribution. Regarding the learning rate plots in Figure 22 (a, c), our TB method allocates a lower learning rate for earlier layers and a higher one for later layers than the baseline does. This leads to a more balanced alpha distribution between layers as mentioned above. Additionally, we noted instability in the learning rate curves during early training phases, while smoother transitions emerge in later phases.
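As an aside on the alpha estimation discussed in this thread: the `PL_Alpha_Hill` metric is a Hill-type estimator of the power-law exponent of a layer's ESD, with $\lambda_{min}$ fixed at the median eigenvalue (l.129). The sketch below is an illustrative reconstruction under that description, not the authors' implementation; the function name and tail convention are our assumptions.

```python
import numpy as np

def pl_alpha_hill(weight_matrix):
    """Hill estimator of the ESD power-law exponent of one layer.

    The ESD consists of the eigenvalues of W^T W, i.e. the squared
    singular values of W; lambda_min is fixed at the median eigenvalue.
    """
    evals = np.linalg.svd(weight_matrix, compute_uv=False) ** 2
    lam_min = np.median(evals)          # lambda_min = median of the ESD
    tail = evals[evals > lam_min]       # upper tail of the spectrum
    # Hill estimator: alpha = 1 + k / sum_i log(lambda_i / lambda_min)
    return 1.0 + len(tail) / np.sum(np.log(tail / lam_min))
```

A layer-wise scheduler like TB would then map lower alpha (more overtrained) to a smaller per-layer learning rate and higher alpha to a larger one.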
## Comparison of convergence rate

We observed that our method converges at the same rate as the SGD baseline, but it achieves higher test accuracy upon convergence.

## References

[1] Mikolov et al., 2011. [2] Ma et al., 2019.

--- Rebuttal Comment 1.1: Title: Thank you! Comment: Thanks for the update and the additional thorough experiments. I believe all my questions are addressed. --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: We appreciate your valuable feedback and will make sure to include the discussions and new experiments in the revised paper.
Rebuttal 1: Rebuttal: We want to thank all the reviewers for the constructive feedback, which helps us improve our paper. Please refer to the attached PDF for our new experiments and see below for our responses to each comment. Pdf: /pdf/d9bed6d608d2baf0a2eb178d010ef651b37e4814.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Language-based Action Concept Spaces Improve Video Self-Supervised Learning
Accept (poster)
Summary: This paper proposes to transfer CLIP to the video domain for self-supervised learning. Textual features of video categories are used to obtain text classifiers, which are fixed during pre-training to obtain transferable information. Multiple complementary loss functions are designed for pre-training. Experimental results on three datasets demonstrate the effectiveness of the method. Strengths: Pros: 1. A new paradigm is proposed to transfer the knowledge of CLIP to the video domain for self-supervised learning. 2. Impressive performance has been achieved on multiple datasets. 3. The results of the ablation experiments demonstrate the effectiveness of the method. Weaknesses: Cons: 1. The authors claim that they did not use labeled or captioned videos in the paper. But they used the labels of the Kinetics-400, UCF101, and HMDB51 datasets during training. Does this violate the self-supervised setting? 2. Unfair comparison. The proposed LSS leverages the pretrained CLIP for weight initialization while existing methods such as SVT and VideoMAE do not. 3. LSS is pre-trained on the Kinetics dataset. Why were the labels of HMDB51 and UCF101 added to the pre-training? Does this break the downstream transfer setup, since the target labels have been leaked into the pre-training? 4. The fully-finetuned experiments and SSv2 results are missing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses for more details. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Limitations have been listed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. 
Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and address all concerns below. 1. `Using dataset labels for training:` We understand this shortcoming and run two new experiments that use no textual labels from datasets for our action concept spaces. We report these results in the main rebuttal PDF (LSS-B & C in Tables 1, 2) and highlight the on-par performance of these additional experiments. We also include these results for linear probing (top) and zero-shot (bottom) top-1 accuracy below for quick reference.

| Method | ITP | HMDB-51 | UCF-101 |
|:------------:|:---:|:----:|:----:|
| LSS-A (ours) | yes | 69.2 | 91.0 |
| LSS-B (ours) | yes | **69.4** | **91.1** |
| LSS-C (ours) | yes | 69.1 | 90.8 |

| Method | Action Labels | HMDB | UCF |
|--------|------------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400+U+H (LSS-A) | 49.5 | 72.0 |
| Ours | GPT labels (LSS-B) | 50.2 | 73.8 |
| Ours | I-VLM labels (LSS-C) | **51.4** | **74.2** |

This highlights how LSS can operate without using dataset labels for training. 2. `Unfair comparison:` We have updated Table 1 to compare against methods using CLIP pre-trained weights for initialization (reported in the main rebuttal PDF). We confirm that LSS also outperforms these prior works taking advantage of CLIP pre-training. We also highlight the zero-shot abilities inherent to our LSS, different from traditional SSL approaches. 3. `Adding UCF & HMDB to pre-training action set:` We thank the reviewer for this suggestion. Our experiments with LSS-B and LSS-C confirm that adding downstream dataset action classes is not necessary for strong performance. We will highlight this more in the final version of our paper. The motivation behind adding these UCF & HMDB labels in our initial experiment was to confirm the capabilities of LSS given all possible action categories. 
Our further experiments as suggested above illustrate that even without such downstream labels, LSS achieves strong performance, better than prior work. These results are presented in Table 2 (rebuttal PDF). We repeat these results below for quick reference.

| Method | Action Labels | HMDB | UCF |
|--------|---------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400+U+H | 49.5 | 72.0 |
| Ours | K400 only | 48.4 | 71.1 |
| Ours | GPT labels (LSS-B) | 50.2 | 73.8 |

4. `Experiment on more datasets:` We report mAP results for the *zero-shot multi-label classification* task on the Charades video dataset below as an additional point of comparison:

| Method | Charades mAP |
|:------:|:----:|
| CLIP | 19.7 |
| LSS-B | 23.1 |

In the case of motion-heavy datasets like SSv2, we note that LSS has limitations (since it has no language awareness for motion). We hope to explore this direction further in future work. --- Rebuttal Comment 1.1: Comment: Thanks to the authors' responses, some of my concerns were addressed. But I still question whether the task setting itself is self-supervised, because it needs the category annotations of the actions. Although the authors also carried out the GPT-labels experiment, in fact, producing the GPT labels also requires real action labels. Based on the above observations and the comments of other reviewers, I decided to keep the original score, and hope that the authors can modify and improve the paper according to the reviewers' comments. --- Reply to Comment 1.1.1: Title: LSS-B & C require no annotated action labels Comment: We thank the reviewer for their comments, but we would like to clarify that, 1. Experiments with LSS-B & LSS-C **require no annotated action labels** for learning. They use the same data (only videos) as traditional SSL methods during the self-supervised learning phase. 2. These variants obtain on-par (or better) performance compared to using annotated action labels
Summary: The paper proposes a new self-supervised approach to adapt image-level CLIP features to video. The key idea is to use a teacher-student self-supervised learning framework, and distill the knowledge in the action concept space, derived from text action concepts using CLIP's text encoder. The resulting framework produces SoTA self-supervised video features. Strengths: - The work tackles an important problem - adapting the CLIP visual encoder to video. Doing it without supervision is a nice bonus here. - The approach is simple and sound. - The evaluation is extensive and the results are strong. Weaknesses: - Since the set of concept vectors is taken from Kinetics-400, UCF-101, and HMDB-51, where the evaluation is performed, it is a little unfair to call the method completely unsupervised. While it is true that no video labels are used during training, the training process does use the knowledge of the actions vocabulary, which would aid the evaluation. To avoid this shortcoming, the authors may consider constructing the action concepts in a dataset-agnostic way. - I could not find the ablation of the uniform distribution prior regularization. It is important to understand the influence of that prior on the training. Could the authors please include this? - The writing could be significantly improved. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: in line 79, the authors probably meant “self-supervised learning” instead of “semi-supervised”? Semi-supervised assumes you have access to some labeled data. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: It is not very clear whether knowing the text action concepts is important for the method to work well. 
This may be a potential limitation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and address all suggestions below. 1. `Use dataset-agnostic action concepts:` We take the reviewer's advice and develop two alternate strategies to construct action concepts in a dataset-agnostic way: GPT-based category generation (LSS-B) and image VQA-based labels (LSS-C). These methods have no reliance on dataset textual information and obtain results (rebuttal PDF Tables 1, 2) on par with our default setting (LSS-A). We also include these results for linear probing (top) and zero-shot (bottom) top-1 accuracy below for quick reference.

| Method | ITP | HMDB-51 | UCF-101 |
|:------------:|:---:|:----:|:----:|
| LSS-A (ours) | yes | 69.2 | 91.0 |
| LSS-B (ours) | yes | **69.4** | **91.1** |
| LSS-C (ours) | yes | 69.1 | 90.8 |

| Method | Action Labels | HMDB | UCF |
|--------|------------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400+U+H (LSS-A) | 49.5 | 72.0 |
| Ours | GPT labels (LSS-B) | 50.2 | 73.8 |
| Ours | I-VLM labels (LSS-C) | **51.4** | **74.2** |

This highlights how LSS can operate without knowing the textual action concepts of the training or downstream datasets. 2. `Add uniform distribution prior (UDP) ablation:` This UDP regularization is crucial for stable training. Without it, the pre-training stage leads to collapse. We include ablations here and in the rebuttal PDF (Table 3).

| Method | HMDB | UCF |
|-------------|------|------|
| Default LSS | 48.4 | 71.1 |
| w/o UDP | 33.4 | 54.3 |

3. `Fix typos and writing:` Thank you for pointing out the typo on L79 - we will fix that to be *self-supervised learning*. We will also revise our final manuscript further to improve our writing. --- Rebuttal Comment 1.1: Title: Response Comment: I thank the authors for their response. I think point 1 makes the paper look stronger and point 2 gives important intuition about the problem. I advise including them in the final paper. Otherwise, the rebuttal answers all my questions. 
I keep my score as weak accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their comments and all useful feedback. We will update our final manuscript to reflect all these proposed modifications.
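As context for the uniform distribution prior (UDP) ablation discussed above: the exact form of the regularizer is not spelled out in this thread, but a common anti-collapse term of this kind penalizes the divergence between the batch-mean assignment over the concept space and the uniform distribution. The sketch below is our illustrative assumption, not the authors' implementation.

```python
import numpy as np

def uniform_prior_loss(logits):
    """KL(mean batch assignment || uniform) over a K-dim concept space.

    Equals log(K) - H(mean assignment): near zero when assignments
    average out to uniform across the batch, large when all samples
    collapse onto a single concept axis.
    """
    # Softmax over concept axes, then average the assignments over the batch
    z = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p = (z / z.sum(axis=-1, keepdims=True)).mean(axis=0)
    k = p.shape[-1]
    return np.log(k) + np.sum(p * np.log(p + 1e-12))
```

Adding such a term to the distillation objective discourages the trivial solution in which every clip projects onto the same concept, which is consistent with the collapse the authors report without UDP.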
Summary: The paper introduces a language-tied self-supervised learning approach to adapt an image CLIP model to the video domain. The method employs two video-specific self-supervised learning objectives: concept distillation and concept alignment, for training the model. The authors showcase that the proposed method achieves good zero-shot and linear probing performance on three action recognition benchmarks. Strengths: 1. The motivation behind using language for video self-supervised learning is good as it addresses the existing challenges associated with video datasets and holds the potential to offer effective solutions. 2. The paper presents good zero-shot performance 3. The paper is well written and the overall message is well understood. Weaknesses: 1. While this paper refers to itself as a self-supervised method, it relies significantly on labeled information. For instance, the text classifier utilizes language embeddings derived from the dataset categories. Additionally, the construction of action concept spaces and category concept spaces takes into account the awareness of dataset categories. This information leak introduces a potential unfairness in comparing the proposed self-supervised learning method with other SSL methods. 2. Consider adding experiments that involve removing the awareness of dataset-level labels to align with other SSL methods. 3. It is not fair to directly compare the proposed method, which utilizes pre-trained CLIP weights, with other SSL methods that are trained from scratch. This distinction should be discussed, and the attribute of pre-training should be added to Table 1. Technical Quality: 3 good Clarity: 3 good Questions for Authors: One of the main issues with this paper is the misleading setting employed by LSS. As mentioned in the weaknesses section, LSS is not a purely self-supervised method as it requires some dataset-level labels and also uses CLIP pre-trained weights. 
It is important to differentiate this setting from traditional SSL settings and explain the practical utility of the proposed LSS setting in applications. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful comments and address all concerns below. 1. `Experiments removing dataset-level label awareness:` We eliminate the need for dataset-level labels in 2 additional variants (GPT-based category generation and image VQA-based labels) and run experiments for these variants. Results (presented in rebuttal PDF Tables 1, 2) demonstrate how LSS can be trained without any textual information from video datasets, removing all dataset-level label awareness. We also include these results for linear probing (top) and zero-shot (bottom) top-1 accuracy below.

| Method | ITP | HMDB-51 | UCF-101 |
|:------------:|:---:|:----:|:----:|
| LSS-A (ours) | yes | 69.2 | 91.0 |
| LSS-B (ours) | yes | **69.4** | **91.1** |
| LSS-C (ours) | yes | 69.1 | 90.8 |

| Method | Action Labels | HMDB | UCF |
|--------|------------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400+U+H (LSS-A) | 49.5 | 72.0 |
| Ours | GPT labels (LSS-B) | 50.2 | 73.8 |
| Ours | I-VLM labels (LSS-C) | **51.4** | **74.2** |

2. `Fair comparison:` We update Table 1 (in the main rebuttal PDF) to distinguish methods using CLIP pre-training and include more methods using such pre-training. LSS performs the best among prior work using CLIP pre-training, showcasing our unique strengths. 3. `Differentiate from traditional SSL:` LSS additionally uses image-text pre-training compared to traditional SSL approaches. However, LSS has zero-shot capabilities, unlike prior SSL works. This unique ability creates multiple practical utilities of LSS over prior SSL works. We will update our final manuscript to discuss this more clearly. --- Rebuttal Comment 1.1: Comment: Thanks for your response. However, the rebuttal does not address my major concerns. Even though two additional variations have been introduced (LSS-B and LSS-C), it appears that they still rely on dataset-level label awareness. 
To illustrate, the I-VLM labels continue to be derived from the visual contents within the datasets. I kindly request clarification if my interpretation is inaccurate. Furthermore, I recommend that the authors provide comprehensive explanations regarding the methodology employed for generating these newly introduced labels. --- Reply to Comment 1.1.1: Title: No reliance on dataset-level labels Comment: We apologize for the lack of clarity on our part. LSS-B uses *no dataset-level information at all* - it contains generic action labels. LSS-C uses captioning models (trained on images similar to CLIP, no access to videos) to generate labels, and *accesses only videos in the training dataset* (the same videos used for SSL training). We explain in detail below. For LSS-B, we use GPT to generate a large set of action labels. We first prompt GPT to categorize all common human actions / activities into 20 groups. For each group, we again ask GPT to generate at least 100 visually diverse action categories. These are all collected to create a set of 2000 action labels. We then use projections of these labels in the CLIP text-encoder representation space to eliminate labels of high semantic similarity, leaving 1000 diverse action categories. So our **1000 action categories for LSS-B are generic, and not tied to any of our training datasets**. This experiment demonstrates the scalability of our approach without accessing annotated dataset labels. For LSS-C, we generate a label set using only videos from the training dataset. We use PCA-based clustering to identify 2000 representative videos from a randomly sampled subset (50,000) of our training dataset and then use image-captioning models on video center frames to generate a diverse set of 2000 action labels. This is further reduced to 500 by eliminating labels that are similar in the feature space of the CLIP text encoder. In this case, our generated **labels are tied to the training dataset, but use no annotated category labels**. 
We use only the videos (the same videos used for SSL training) and an image-to-text captioning model (trained on images) to generate our label set. These generated label sets are then used (in place of ground-truth category labels from the dataset) to construct our proposed action concept spaces. They are treated as the action concept set for the rest of our SSL training. We will be sure to highlight these details better in our revised manuscript.
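The similarity-based label pruning described above (dropping labels that are near-duplicates of an already kept label in the CLIP text-encoder embedding space) can be sketched as a simple greedy filter. The function name, the greedy keep-first order, and the threshold value are our illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def dedup_labels(labels, embeddings, threshold=0.9):
    """Greedily drop labels whose text embedding has cosine similarity
    above `threshold` with any label already kept."""
    # L2-normalize so a dot product equals cosine similarity
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept = []
    for i in range(len(labels)):
        if all(emb[i] @ emb[j] <= threshold for j in kept):
            kept.append(i)
    return [labels[i] for i in kept]
```

With CLIP text embeddings of the 2000 generated labels as input, a filter of this form would shrink the set to the diverse subset described above (1000 labels for LSS-B, 500 for LSS-C, depending on the threshold).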
Summary: The paper presents a method to adapt a vision-language model (CLIP) to represent videos. The method extends the image encoder of CLIP to a video encoder via factored space-time attention. The paper introduces a self-distillation-based objective to adapt the video encoder to train on unlabelled videos (no video captions are required). This self-distillation is performed in a so-called "action concept space" which results from projecting visual embeddings into the space spanned by 0-shot action classifiers or text embeddings of action descriptions. The approach is evaluated on Kinetics 400, UCF101, and HMDB51 in 0-shot and linear action classification experiments. Strengths: - The adaptation approach does not require additional video-level captions (only knowledge of action categories in the training data is required) - The method performs well in "zero-shot" and linear probing experiments on the considered action datasets - The use of fixed projections obtained through 0-shot classifiers for distillation is interesting and seems effective when the classes observed in transfer are known - The paper is overall well written and the method well presented Weaknesses: - The experiments are limited to 0-shot and linear classification of the same action classes also observed and used in the adaptation training. It is unclear if the method can generalize to new actions (ones not used to define the action concept spaces) - The experiments also lack any text-to-video retrieval benchmarks. These would be essential to demonstrate the claim that the method "preserves and improves the strengths of CLIP ... for video operation" L49 - The comparisons to most methods in Table 1 are unfair since they are all fully self-supervised (no captions used in pre-training), and most are trained exclusively on videos (much less effective training data). Also, it is unclear why many methods are included without any performance numbers. 
Instead, it would be better to compare to other CLIP-based methods as in Tab 2 in this benchmark as well - The difference in Tab 2 and 3 suggests the method is very sensitive to the set of actions used to define the projection space Technical Quality: 2 fair Clarity: 3 good Questions for Authors: I would appreciate it if the authors could address my concerns listed in the weaknesses above. Most importantly, I would like to know: - How does the model perform for text-to-video retrieval? - How does the model generalize to actions not used during pre-training? For example, what if Tab 5 only consists of K400 classes? Additionally, I'm wondering: - What is the importance of w_s (Eq 5)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Some of the limitations I see (see weaknesses) have not been addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed feedback and address all comments below. 1. `Text-to-video retrieval:` We run experiments on the MSR-VTT text-to-video retrieval benchmark to demonstrate how LSS improves over our baseline CLIP. The performance increase is significant and consistent with our prior results.

| Method | R@1 | R@5 | R@10 |
|--------|------|------|------|
| CLIP | 30.6 | 54.4 | 64.3 |
| LSS | 33.8 | 58.2 | 70.3 |

2. `Generalize to actions not used during pre-training:` We run 3 new experiments to demonstrate how LSS generalizes to unseen actions. First, pre-training on K400 labels only (action classes overlapping with UCF/HMDB are removed here) with UCF/HMDB evaluation is reported in Table 2 (rebuttal PDF) and below (zero-shot top-1 accuracy).

| Method | Action Labels | HMDB | UCF |
|--------|---------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400 only (w/o U, H) | 48.4 | 71.1 |
| Ours | K400+U+H (original) | 49.5 | 72.0 |

Next, we show results for experiments on two variants introduced in the main rebuttal, LSS-B and LSS-C, that use **no dataset textual labels** for pre-training. These results are reported in Tables 1 & 2 (rebuttal PDF) and surpass our default setting. We also report these below (zero-shot top-1 accuracy).

| Method | Action Labels | HMDB | UCF |
|--------|------------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400+U+H (original) | 49.5 | 72.0 |
| Ours | GPT labels (LSS-B) | 50.2 | 73.8 |
| Ours | I-VLM labels (LSS-C) | 51.4 | 74.2 |

3. `Importance of w_s:` This term represents the confidence of the target concept-space projection for a given sample. Since each basis of the concept space corresponds to an action category, if a sample is more aligned to a single basis, we assume higher confidence in that sample, leading to a higher w_s value (the max element of the softmax-normalized projected vector). The reverse holds when a sample is equally aligned to a number of basis axes. 
Since each sample during training is a clip sampled from a video (covering a temporal crop of the video), our intuition for this weight is that it acts as a way of prioritizing more important clips over less important ones. We also include an ablation for w_s below.

| Method | HMDB | UCF |
|-------------|------|------|
| Default LSS | 48.4 | 71.1 |
| w/o w_s | 47.2 | 70.3 |

4. `Unfair comparison in Table 1:` We update Table 1 (please see the main rebuttal PDF) to include comparisons to CLIP-based methods using image-text pre-training (ITP). Results show how our proposed LSS performs significantly better than prior ITP approaches and retains the strengths of CLIP. The missing performance numbers for some methods were a LaTeX typo - we have fixed that, thanks for the pointer! We add the updated Table 1 below too (linear probing top-1 accuracy).

| Method | ITP | HMDB-51 | UCF-101 |
|------------|:---:|:----:|:----:|
| SVT | no | 57.8 | 90.8 |
| VideoMAE | no | 60.3 | 84.7 |
| MERLOT | yes | 55.4 | 80.1 |
| VATT | yes | 66.4 | 87.6 |
| TVTS | yes | 58.4 | 83.4 |
| LaViLa | yes | 61.5 | 88.1 |
| LSS-A (ours-original) | yes | 69.2 | 91.0 |
| LSS-B (ours-new) | yes | **69.4** | **91.1** |
| LSS-C (ours-new) | yes | 69.1 | 90.8 |

--- Rebuttal Comment 1.1: Comment: I have read the rebuttal and the other reviews. I appreciate the comprehensive author's response and novel results. The new results make the paper much more convincing and largely resolve my concerns. I'm happy to increase my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their consideration and highly useful feedback. We will update our final manuscript accordingly with all additional material from the rebuttal.
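The concept-space projection and confidence weight w_s described in this thread (the max element of the softmax-normalized projected vector) can be sketched as below. The function name and temperature value are our illustrative assumptions; the projection uses cosine similarity to each concept basis axis, consistent with CLIP-style embeddings.

```python
import numpy as np

def concept_projection(v, concepts, tau=0.1):
    """Project a clip embedding onto a concept space and compute w_s.

    v:        (d,) visual embedding of one clip
    concepts: (K, d) text embeddings spanning the concept space
    Returns the softmax-normalized projection p and w_s = max(p).
    """
    v = v / np.linalg.norm(v)
    c = concepts / np.linalg.norm(concepts, axis=1, keepdims=True)
    sims = c @ v                          # cosine similarity per concept axis
    z = np.exp((sims - sims.max()) / tau) # temperature-scaled softmax
    p = z / z.sum()
    return p, p.max()                     # w_s: high when one axis dominates
```

A clip strongly aligned with a single concept axis yields w_s close to 1, while a clip equally aligned with several axes yields w_s close to 1/K, matching the intuition in the rebuttal.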
Rebuttal 1: Rebuttal: We want to thank all reviewers for the positive comments: results show high transferability and generality of the method (R-Cb1q); interesting and seems effective for transfer on known classes (R-WHFS); presents good zero-shot performance, holds the potential to offer effective solutions (R-mCow); tackles an important problem, extensive evaluation and strong results (R-aWeV); impressive performance on multiple datasets, ablations demonstrate effectiveness (R-vhEc). We discuss our two main modifications in response to concerns raised by reviewers below.

1. `Concern: Reliance on textual action categories of datasets`
* We run experiments on two additional variants of LSS that do NOT rely on any dataset labels. They utilize different concept spaces (variants B & C in rebuttal PDF Table 1) that use no textual class information from training or downstream datasets. These obtain similar performance (superior to prior work) in both linear probing and zero-shot settings, highlighting how **the proposed LSS can operate without access to textual action categories of datasets**.

2. `Concern: Fairer comparison against prior SSL work`
* Following suggestions from the reviewers, we modify Table 1 (in rebuttal PDF) to explicitly highlight our use of image-text pretraining (ITP). We also include comparisons to related works that use image-text pretraining. Results show how our proposed LSS performs significantly better than these image-text pretraining approaches.
* We also reiterate the additional zero-shot capabilities of our method that traditional SSL methods do not possess.

Please refer to the attached PDF for tables. Further rebuttals are written as responses to each review addressing the concerns raised by reviewers. Pdf: /pdf/84017a1b5ef6325e2ab98751620f3f3eac08cb83.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: The paper proposes a novel language-based self-supervised learning framework (LSS) for video representation learning. It extends self-distillation based SSL approaches like BYOL and SimSiam by replacing the randomly initialized projection network with the text classifier defined by language embeddings extracted from the image CLIP text encoder. With two novel self-supervised learning objectives, the pretrained video model retains and improves the transferability and generality of image CLIP representations in comparison to existing video SSL methods. LSS achieves state-of-the-art results under linear probing settings and competitive zero-shot transfer performance on HMDB-51 and UCF-101. Strengths: + The paper is overall well written and easy to follow. + The idea of replacing the projection network with the text classifier defined by image CLIP embeddings is really interesting and makes sense. Video representation learning suffers from relatively expensive and noisy annotations; we can easily distill knowledge obtained from abundant and diverse image-based datasets using LSS. + The experimental results show the high transferability and generality of the proposed method, LSS. Weaknesses: - LSS uses action categories of Kinetics-400, UCF-101 and HMDB-51 for defining the action category and description concept spaces. However, if those action concept spaces can be used only for pretraining on one of the three datasets used for concept space construction, this means that we can only use manually annotated action recognition datasets for pretraining. This limits the data scalability of LSS. The authors should provide experimental results with pretraining on more general, non-labeled video data using the same action concept spaces. Otherwise, for using abundant web videos without labeling, the authors should come up with an action space construction method that does not require manually annotated action categories. 
- Please fix the typos; e.g., the l2 norm, not the squared l2 norm, should be used in Eq. (1) and (3) for normalization. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The key idea of the paper, replacing the projection network with the text classifier, really makes sense and is very interesting. I will lean towards acceptance if the authors address my concerns in the rebuttal. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations and potential societal impact of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments and helpful feedback. We address all concerns below. 1. `Pre-training without dataset labels:` As described in the main rebuttal, we run 2 additional experiments on variants of our method that use NO textual labels from any datasets. Results (LSS-B & LSS-C in Tables 1, 2 of the rebuttal PDF) showcase equally strong performance by these variants. We also report the same linear probing top-1 accuracy below.

| Method | ITP | HMDB-51 | UCF-101 |
|------------------|:---:|:----:|:----:|
| LSS-A (ours-original) | yes | 69.2 | 91.0 |
| LSS-B (ours-new) | yes | **69.4** | **91.1** |
| LSS-C (ours-new) | yes | 69.1 | 90.8 |

Zero-shot top-1 accuracy compared to a CLIP baseline is also reported below again for quick reference.

| Method | Action Labels | HMDB | UCF |
|--------|------------------|------|------|
| CLIP | - | 47.2 | 70.3 |
| Ours | K400+U+H (LSS-A) | 49.5 | 72.0 |
| Ours | GPT labels (LSS-B) | 50.2 | 73.8 |
| Ours | I-VLM labels (LSS-C) | **51.4** | **74.2** |

These results indicate that LSS can be used with pre-training video datasets that contain no action annotations. 2. `Label-free action space construction:` The LSS-B variant uses GPT-3 to generate a set of 2000 common activity labels. This is reduced to 1000 by eliminating labels that are similar in the feature space of the CLIP text encoder and then used to construct the action space. The LSS-C variant generates a label set using only videos from the training dataset. We use PCA-based clustering to identify 2000 representative videos from a randomly sampled subset of our training dataset and then use image-captioning models on video center frames to generate a diverse set of 2000 action labels. This is further reduced to 500 by eliminating labels that are similar in the feature space of the CLIP text encoder and then used to construct the action space. We will elaborate on these details further in our revision. 3. 
`Typos:` Thank you for pointing out these typos - we have fixed them. --- Rebuttal Comment 1.1: Comment: I appreciate the efforts of the authors to provide rebuttals. The authors addressed most of the reviewers' concerns, and especially Tables 1 and 2 make the proposed approach stronger. Please add them in the final draft. I will raise my score to weak accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the useful feedback and positive comments. We will update the final draft with all these details.
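The label de-duplication step described in the rebuttal above (reducing a generated label set by removing entries that are too similar in the CLIP text encoder's feature space) could be sketched as below. This is a minimal illustration only: the greedy strategy, the cosine threshold, and all names are assumptions, not the authors' actual implementation.

```python
import numpy as np

def dedup_labels(labels, embeddings, threshold=0.9):
    """Greedily keep a label only if its embedding is not too similar
    (by cosine similarity) to any already-kept label's embedding.

    `embeddings` is an (N, d) array of text-encoder features for `labels`.
    The 0.9 threshold is an illustrative assumption, not the paper's value.
    """
    # Normalize rows so dot products equal cosine similarities.
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept_idx = []
    for i in range(len(labels)):
        if kept_idx:
            sims = emb[kept_idx] @ emb[i]
            if sims.max() >= threshold:
                continue  # near-duplicate of an already-kept label
        kept_idx.append(i)
    return [labels[i] for i in kept_idx]
```

In practice `embeddings` would come from encoding each candidate label with the CLIP text encoder; the same routine applies whether the candidates are GPT-generated (LSS-B) or caption-derived (LSS-C).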
Deep Discriminative to Kernel Generative Networks for Calibrated Inference
Reject
Summary: The paper presents an algorithm to learn an auxiliary model. The goal is to enhance the out-of-distribution calibration performance while maintaining the in-distribution performance of a base discriminative model. This base model is either a random forest or a multi-layer perceptron. The algorithm follows three main steps: Firstly, it constructs an adjacency matrix using the training samples. The entries of this matrix depend on the "rule/activation patterns" of the corresponding discriminative model. Secondly, it performs clustering to divide the input space into regions. Lastly, for each region, the algorithm uses a Gaussian kernel to regress the output of the original model (by learning the center and the covariance matrix). The algorithm reaches convergence to the ground truth posterior distribution in the limit of infinite training data. The experiments are conducted on two-dimensional toy data and datasets from the OpenML-CC18 Benchmark Suites, thus demonstrating the algorithm's superior performance in terms of out-of-distribution calibration compared to the base models. Furthermore, it achieves comparable accuracy and in-distribution calibration performance to the base models. Strengths: 1. The algorithm to identify the partition induced by a classifier (random forest, MLP) on the input space is original and novel. Indeed, it can help to shed light on the calibration properties of existing classifiers (**Originality**) 2. Asymptotic convergence of the algorithm to the true posterior is guaranteed in the infinite sample regime (**Soundness**). 3. Overall, the paper is clear (**Clarity**). The text could be improved by providing more details about the algorithm, rather than focusing on the discussion of discriminative versus generative models, which seems a bit out of scope. 4. Code is available, but I haven’t tried to reproduce the experiments (**Reproducibility**) Weaknesses: 1. 
It’s unclear how the algorithm performs in the finite sample regime and in high dimensions, and how these two quantities are related. The scope of the analysis is therefore limited to a regime with infinite data (**Significance**). In the absence of an analysis of the number of dimensions, the authors could provide an experimental analysis with higher dimensional datasets, like CIFAR10 or CIFAR100. 2. Previous works have shown that neural networks are over-confident/wrong on OOD data [1] and have suggested a set of experiments to check the phenomenon. Providing an analysis on such experimental settings might strengthen the work (**Quality**) - See Questions. 3. The authors could relate the partitioning technique with the literature on approximation based on splines (**Quality**) - See Questions. Technical Quality: 3 good Clarity: 3 good Questions for Authors: **Major questions:** 1. [1] provides an interesting insight on OOD classification. Specifically, when training on object datasets like CIFAR-10 and CIFAR-100, neural models assign high likelihood also to SVHN data. What happens when using the proposed algorithm? Does it overcome the weaknesses of neural approaches or does it preserve the same properties? 2. Considering that deep networks partition the input space with a number of regions that grows exponentially with network depth [2,3], how do you choose the number of clusters in the algorithm, to ensure high regression fidelity (in terms of accuracy and in-distribution calibration) with the base model? 3. How reliable is the analysis in the finite sample regime and in high dimensions? 4. It is not clear why the proposed approach is a generative model (Eq. (7) is never used. Indeed, the algorithm only relied on Eq. (15)). Can you please elaborate on that? 5. In Figure 2 (row 3, column 1) and in Figure 3 (column 1), the base model achieves higher accuracy with a larger number of samples. This seems counterintuitive. 
Do you have an explanation for this phenomenon? 6. What is the impact of pruning on the performance? **Minor questions:** 1. In Eq. (6), what is the purpose of the bias and why should the bias vanish for infinite number of samples? 2. In Eq. (18), the computation of weights is based on the number of equivalent paths. Is there an intuitive explanation for preferring this solution over the one which simply counts the number of activated/disabled neurons in the network? **References** - [1] Do Deep Generative Models Know What They Don’t Know? ICLR 2019 - [2] On the Number of Linear Regions of Deep Neural Networks. NeurIPS 2014 - [3] A Spline Theory of Deep Networks. ICML 2018 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The work is limited in the analysis (input dimensionality, class of neural network models). Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **[1] provides an interesting insight on OOD classification. Specifically, when training on object datasets like CIFAR-10 and CIFAR-100, neural models assign high likelihood also to SVHN data. What happens when using the proposed algorithm? Does it overcome the weaknesses of neural approaches or does it preserve the same properties?** We have conducted additional vision experiments following this suggestion by the reviewer. - **Considering that deep networks partition the input space with a number of regions that grows exponentially with network depth [2,3], how do you choose the number of clusters in the algorithm, to ensure high regression fidelity (in terms of accuracy and in-distribution calibration) with the base model?** Please note Equations 5 and 15, Line 114, where we mention that we only consider the polytopes which are populated by the training data. Therefore, the number of polytopes considered in our approach is upper bounded by the number of training data. The proposed model loses regression fidelity in terms of accuracy for sparsely populated polytopes in high dimensional space. However, as shown in the additional experiment in the attached PDF, the loss in accuracy can be avoided using Geodesic distance in Equation 4. - **How reliable is the analysis in the finite sample regime and in high dimensions?** We have addressed this concern in the global response. - **It is not clear why the proposed approach is a generative model (Eq. (7) is never used. Indeed, the algorithm only relied on Eq. (15)). Can you please elaborate on that?** We apologize for over-emphasizing the generative aspect in the abstract, which may have distracted the reviewers from the main point of the paper, which is `calibration’. We will revise the abstract in the final version without mentioning the generative aspect of the proposed approach. However, Equation 15 is an approximation of Equation 5. 
After describing Equation 15, we need to use the approximation in Equations 6, 7 and 8, respectively. We have fixed this in our revised draft. To sample from the estimated density, one can pick a kernel with probability $\frac{n_{ry}}{n_r}$ and sample from the corresponding Gaussian distribution. We discuss the possibility of sampling from the proposed model in the discussion. - **In Figure 2 (row 3, column 1) and in Figure 3 (column 1), the base model achieves higher accuracy with a larger number of samples. This seems counterintuitive. Do you have an explanation for this phenomenon?** The loss in accuracy in Figure 2 (row 3, column 1) can be explained using Figure 1 (row 4, column 3). The posteriors learned by KGF are noisy because of the axis-aligned splits used in the parent random forest at each node. This phenomenon has been mitigated by KGN (Figure 1, row 6, column 3) as the parent deep-net can implement non-linear decision boundaries. The loss of accuracy in Figure 3 (column 1) is due to the presence of high dimensional datasets in OpenML, as explained in the Discussion section. However, we have pursued additional experiments using Geodesic distance which further explain the loss of accuracy in Figure 3 (column 1). - **What is the impact of pruning on the performance?** The impact of pruning has been discussed in the Discussion section, and a potential solution using Geodesic distance in Equation 4 was proposed, which we have pursued in the additional experiments attached in the global response. - **In Eq. (6), what is the purpose of the bias and why should the bias vanish for infinite number of samples?** Theorem 1 is derived using the estimated density in Equation 5 (mentioned in the statement) which has no bias term. For Theorem 1 to be true for Equation 6, the bias term in Equation 6 should vanish for an infinite number of samples. As shown in Appendix A.2 (proof of Theorem 2), the bias term is necessary for Theorem 2 to be true. 
We will clarify this further in the final draft. - **In Eq. (18), the computation of weights is based on the number of equivalent paths. Is there an intuitive explanation for preferring this solution over the one which simply counts the number of activated/disabled neurons in the network?** In Section 7.3 of [1], the authors proposed a similarity measure from each layer of the network based on counting the number of activated/disabled neurons in the network. However, it is unclear how to combine the similarity from different layers into a single score while considering the order of the layers. This is because the lower layers encode base features and the higher layers gradually encode more complex features. One should consider this hierarchy of information from the layers of the network while measuring the similarity score. Our proposed approach inherently considers the order of activations from different layers, resulting in less loss of information. The effectiveness of the proposed similarity measure is further illustrated in the new additional high dimensional experiments, where the model preserves the accuracy of the parent model when Geodesic distance is used as the similarity measure. Moreover, as illustrated in Algorithms 2 and 3 of the appendix, the similarity measure calculation in deep-nets is roughly analogous to that of random forests. [1] A Spline Theory of Deep Networks. ICML 2018. --- Rebuttal Comment 1.1: Title: Thank you for the Answer Comment: Thank you for the clarifications and the additional experiments. I went through the whole rebuttal and I share the same feeling as other reviewers, namely that major effort is required by the authors to make the paper complete, especially in terms of experiments. Specifically, the additional experiments provided by the authors with multi-variate Gaussians highlight the fact that the original proposed solution did not deal well with the curse of dimensionality. 
The authors subsequently proposed a modification to it by replacing the Euclidean distance with a geodesic one inspired by the weights computed in the kernel generative framework. While I appreciate this new solution and that it suggests better scaling to higher dimensions, a careful analysis identifying its limitations is currently missing. Additionally, the experiments provided in Figure 5 of the attached PDF focus on an unconventional setting (from the continual learning literature, where the discriminative model is trained on binary classification tasks). I suggest the authors properly run the evaluation in the multi-class setting following the same methodology used in [1] (as highlighted also by reviewer fq3V), [2] and [3]. This would indeed resolve the issue of missing baselines and also properly address my weakness 2. Overall, I think that the proposed solution is interesting and has the potential to make a good contribution to the literature on ID, OOD calibration. However, in its current form the work is not complete. For this reason, I update my score to 4. [1] Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem CVPR 2019 [2] Do Deep Generative Models Know What They Don’t Know? ICLR 2019 [3] Your Classifier is Secretly an Energy Based Model and You Should Treat it Like One. ICLR 2020
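As a side note on the sampling procedure described earlier in this thread (pick a Gaussian kernel with probability given by its share of training points, then draw from that kernel's Gaussian), a minimal numpy sketch is given below. The function name, interface, and count-proportional weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_kernel_mixture(counts, means, covs, n_samples):
    """Draw samples from a mixture of Gaussian kernels.

    `counts[r]` is the number of training points assigned to kernel r
    (for a class-conditional density, the per-class count n_ry). Kernel r
    is chosen with probability counts[r] / sum(counts), then a point is
    drawn from N(means[r], covs[r]).
    """
    weights = np.asarray(counts, dtype=float)
    weights /= weights.sum()
    # Choose a kernel index per sample, then sample the matching Gaussian.
    choices = rng.choice(len(weights), size=n_samples, p=weights)
    return np.stack([
        rng.multivariate_normal(means[r], covs[r]) for r in choices
    ])
```

This two-stage draw (categorical over kernels, then Gaussian) is the standard ancestral sampling scheme for any mixture model.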
Summary: This paper tackles the ID and OOD problem by proposing Kernel Generative Forest and Kernel Generative Network for estimating the similarity $w_{rs}$ and, in turn, approximating the class-conditional density. The main contributions include: 1) theoretical results on the convergence of the approximated class-conditional density under certain conditions; 2) the similarity measure with KGF and KGN between the polytopes. Strengths: This paper proposes a novel idea of estimating ID and OOD by approximating the class-conditional density. For that, the similarities between the polytopes of the input space into which data falls are estimated. Both theoretical and empirical results show that the proposed model can successfully tell when test data is OOD while preserving the classification accuracy. The paper is overall well written. This paper has the potential of contributing to the community. Weaknesses: The proposed method is highly related to [4], but in the experimental part there is no comparison to any of the recent methods such as [4] (except for the "parent algorithms" RF and DN). At least MMC is also reported in [4]. The readers would also benefit from a detailed discussion of the similarities and differences with [4]. The background and the proposed method are well illustrated, but the presentation of the experimental results is less clear, e.g. 1) In Fig 1 the simulated yellow points overwrite the blue points and therefore do not match the true posteriors. 2) The "Difference" in Fig 3 is hard to understand, as are the legends of "wins". What is the reason for not plotting it like Fig 2? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What is $A_l$ in line 200 on page 6? - In Theorem 1, it requires the hypercubes to be of the same size. I am not sure if my understanding is correct or not; how can one partition e.g. $R^1$ into 3 hypercubes $(-\infty, a), [a,b), [b, \infty)$ of the same size? 
- What is the "median performance" of classification error on the simulated datasets? Isn't the classification error calculated from the whole test set at once? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper discussed the limitations of the adopted Euclidean distance measure. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **The proposed method is highly related to [4], but in the experimental part there is no comparison to any of the recent methods such as [4] (except for the "parent algorithms" RF and DN). At least MMC is also reported in [4]. The readers will also benefit if a detailed discussion of similarity and difference between [4] is provided.** We have addressed this concern in the global response. - **In fig 1 the simulated yellow points overwrite the blue points and therefore do not match the true posteriors.** We apologize for being unclear. We will clarify the simulation more in the revised draft. Note that the first row in Figure 1 represents the samples which are randomly sampled according to the class conditional posterior in the second row. Please note that the posteriors are 0.5 at the junction of two class boundaries in the second row, meaning that samples from both classes are equally likely to be sampled in those regions. This is why the yellow points and the blue points overlap in the first row. - **The "Difference" in fig 3 is hard to understand, as well as the legends of "wins". What is the reason not plotting like Fig 2?** We are sorry Figure 3 is unclear; we tried to summarize results from 46 different experiments. To be clearer, we will include a few examples in Figure 3 organized like Figure 2. Note that Figure 2 does not have a single panel that summarizes across all simulations. We chose to make one for the real data, rather than cherry-picking results, or relegating it to the appendix. The summary statistic we chose was the difference of medians, because averaging across datasets did not make much sense to us. We will also explore other options for the final draft, and include all 46 experiments, depicted as those in Figure 2, in the Appendix for greater clarity. - **What is $A_l$ in line 200 on page 6?** We apologize for this typo. We have removed $A_l$. 
- **In Theorem 1, it requires the hypercubes to be of the same size. I am not sure if my understanding is correct or not, how can one partition e.g. the R1 into 3 hypercubes (−∞,a),[a,b),[b,∞) of same size?** Please note that there is no upper bound on the total number of polytopes in the statement, i.e., $\mathcal{P} = \{Q_1, Q_2, \cdots\}$. We can divide $R^1$ into an infinite number of hypercubes of the same size. As shown in the derivation of Theorem 1 in the appendix, we only need to consider the polytopes which are populated by the training samples while deriving the theoretical results. We have included the ‘infinite number of hypercubes’ term in the theorem statement to emphasize this point. - **what is "median performance" of classification error on the simulated datasets? Isn't the classification error calculated from the whole test set at once?** The simulation experiments were repeated 45 times for different training sample sizes and 1000 test samples. The median of the performance over the 45 runs is reported in Figure 2. We will elaborate on this in the caption for the final submission. --- Rebuttal Comment 1.1: Comment: Many thanks for the clarification. My point on Figure 1 is that the authors plot the blue samples first and then the yellow samples overwrite the blue ones. Visually, the numbers of samples from each class are not equal. I still don't get the "wins" label in Figure 3. If I am not mistaken, the classification performance of the proposed model drops a lot compared to the base model. Theorem 1 needs improvement since it is confusing as it is now. Thanks for the additional comparison. However, from Fig.5, it is not clear how they perform with the metrics of Classification Error, Hellinger Dist, and MMC w.r.t. sample size and distance, which are shown in Fig.2. I am also a bit confused with the MMC on e.g. noise, why the proposed model outperforms most of the cases but fails when training on task 1? 
And for the baselines, I don't think the model would give higher MMC on noise than test sets. Could you provide some insights into why this is happening? --- Reply to Comment 1.1.1: Title: Thanks for the thoughtful feedback! Comment: We really appreciate the helpful feedback from reviewer fq3V. In what follows we try to address the concerns: - **My point on Figure 1 is that the authors plot the blue samples first and then the yellow samples overwrite the blue ones. Visually, the number of samples from each class are not equal.** We apologize that we did not understand the query completely. We will use a lower alpha for the plots so that the samples are more transparent and sample size equality is more obvious. - **I still don't get the "wins" label in Figure 3. If I am not mistaken, the classification performance of the proposed model drops a lot compared to the base model.** We are sorry that Figure 3 is still unclear. We understand the “wins” legend may be confusing. We will take the legend out and explain the plot further in the caption. We plotted (error RF - error KGF) along the Y-axis against different sample sizes along the X-axis. Any point above the dashed line at 0 indicates error RF is greater than error KGF. Hence KGF performs better if the plotted curve stays above the dashed line. - **Theorem 1 needs improvement since it is confusing as it is now.** We are sorry that Theorem 1 seems confusing. Theorem 1 considers a partition rule that partitions a continuous feature space $R^d$ into an infinite number of hypercubes, each having the same size $h_n$. Assuming specific conditions are met, the Theorem demonstrates that the estimated density obtained through Equation 5 converges pointwise to the actual density as the sample size grows. Note that we need a partition rule on $R^d$ that yields an infinite number of hypercubes so that they are of the same size. 
However, among the infinite number of hypercubes, Equation 5 uses only those hypercubes which are populated by the training data. We will add the above text to the draft so that the theorem is clearer. - **However, from Fig.5, it is not clear how they perform with the metrics of Classification Error, Hellinger Dist, and MMC w.r.t. sample size and distance, which are shown in Fig.2.** We did not have enough time to run the vision experiment for different sample sizes as we did for Figure 3. Hence, we showed the vision results for the maximum training sample size available to us. We are currently running the above experiments and will add them in the final draft. However, we cannot calculate the Hellinger distance unless we know the true posterior distribution, as we do in the simulation datasets. We will calculate ECE instead. - **I am also a bit confused with the MMC on e.g. noise, why the proposed model outperforms most of the cases but fails when training on task 1?** Please note that Figure 5 presents two different approaches. One approach employs the Euclidean distance as described in Equation 4, while the other utilizes the geodesic distance. We agree with the reviewer's observation that the approach using Euclidean distance occasionally encounters issues when trained on Task 1. This decline in performance is notably alleviated when the proposed method incorporates the geodesic distance instead. From the vision experiment, it can be concluded that the Euclidean distance is not as effective as the geodesic distance in detecting the nearest Gaussian kernel in high dimensional feature space. - **And for the baselines, I don't think the model would give higher MMC on noise than test sets. Could you provide some insights why this is happening?** We really thank the reviewer for pointing this out. 
We checked the experiment code again and found that we sampled the noise samples from a Uniform distribution over $[0,255]^{w \times h \times c}$ (w=width, h=height, c=channel of the images). Unfortunately, we did not normalize the noise samples by 255. We apologize for testing on the noise samples without normalization. We reran the baseline algorithm with the corrected code. As the seeds were fixed for the experiments, nothing other than the noise row in the heatmap changed for the baseline algorithm. The MMC scores for the noise samples are: Task 1: 0.83, Task 2: 0.71, Task 3: 0.74, Task 4: 0.81, Task 5: 0.86. As predicted by the reviewer, the MMC for ACET on noise is lower than that of the test sets, but they are still higher than those of KGN-Geodesic. We will correct the plot in the revised draft.
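The correction described above (uniform noise images must be divided by 255 before evaluation) and the MMC metric discussed throughout this thread can be illustrated with a short sketch. The function names and shapes are illustrative assumptions, not the authors' experiment code.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_noise_batch(n, w, h, c):
    """Uniform noise images drawn over [0, 255]^(w x h x c), then
    normalized to [0, 1]; forgetting the division by 255 was the bug
    the rebuttal describes."""
    noise = rng.uniform(0.0, 255.0, size=(n, c, h, w))
    return noise / 255.0

def mmc(probs):
    """Mean maximum confidence: the average over samples of the maximum
    softmax probability, the OOD metric reported in the heatmaps."""
    return float(np.mean(np.max(probs, axis=1)))
```

Evaluating `mmc` on a model's softmax outputs for such normalized noise batches, versus its outputs on the in-distribution test set, reproduces the kind of comparison the reviewer raised.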
Summary: The paper proposes to improve OOD detection for deep discriminative models by replacing the affine function over the polytopes with a Gaussian kernel, leading to a method called kernel generative networks. An estimation method is developed for the proposed model, and some theoretical results are given on asymptotic convergence to the true distribution and on OOD behavior. Results based on simulations show that the proposed method can estimate the distribution better than the parent algorithms and has benefits in terms of OOD detection or calibration in some cases. Strengths: ### originality The method seems to be novel. ### quality The proposed method is intuitive and sound. The work includes useful theoretical results for the properties of the proposed method. ### clarity The abstract downplays the main focus of the paper, or main benefit of the proposed method, which is to improve OOD detection/calibration. This confused me when reading it for the first time. ### significance The proposed method is simple to understand and could potentially be a standard algorithm in popular ML libraries. Weaknesses: ### quality The results/experiments of the paper require more work. While the paper says the proposed method "results in better in- and out-of-distribution calibration", the results in Figure 3 show contradicting or mixed results in different cases. As this is the main claim of the paper, it requires an in-depth discussion of why the results are mixed and when we can expect the proposed method to outperform the parent algorithm. As improved OOD detection is part of the main claim, the paper should compare against existing unsupervised methods like [1]. [1] Zhang, Mingtian, Andi Zhang, Tim Z. Xiao, Yitong Sun, and Steven McDonagh. "Out-of-distribution detection with class ratio estimation." arXiv preprint arXiv:2206.03955 (2022). Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What's the computational cost of the proposed method compared to the parent algorithm? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Some limitations in terms of the use of Euclidean distance are mentioned, followed by a proposal to fix it by using geodesic distance. It feels to me that the paper should actually explore this considering the mixed results from the experiments section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **While the paper says the proposed method "results in better in- and out-of-distribution calibration", the results in Figure 3 show contradictory or mixed results in different cases.** We apologize for not being clear while describing the results in Figure 3. We will clarify the results in Figure 3 further in the final draft. Note that OpenML comprises many high dimensional datasets. We have included additional experiments on a high dimensional simulation and vision datasets using geodesic distance, which explains the loss of accuracy in Figure 3. However, as mentioned in Figure 1 of [1], shallow networks are well-calibrated for in-distribution. In our OpenML experiments, we used a relatively small network, which can explain why KGN maintains similar in-distribution calibration to that of its parent algorithm at lower sample sizes. Conducting experiments with an overparameterized network would result in a large activation pattern, which would slow down our algorithm with its current implementation. However, the number of nodes can be reduced significantly by pooling at each layer as proposed in [2]. To keep the content of the draft precise and concise, we will pursue overparameterized networks in future work. We will add the above description in the Discussion section. - **As improved OOD detection is part of the main claim, the paper should compare against existing unsupervised methods.** We have addressed this concern in the global response. - **What's the computational cost of the proposed method compared to the parent algorithm?** We have addressed this concern in the global response. [1] Guo, Chuan, et al. "On calibration of modern neural networks." International conference on machine learning. PMLR, 2017. [2] Olber, Bartłomiej, et al. "Detection of out-of-distribution samples using binary neuron activation patterns." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 
--- Rebuttal Comment 1.1: Comment: Thanks for the response and extra experiments. However, my original concerns about mixed results still hold. The authors have suggested doing some of the experiments as future work, and the new experiments on high-dimensional data use some tricks that are not adequately discussed in the paper. Although I believe the idea in the paper is interesting, I don't think it's ready for publication in its current form. Therefore I keep my score as 4.
Summary: The paper proposes a new method for confidence calibration in discriminative deep ReLU networks and random forests based on approximating the class-conditional density with Gaussian kernels. Strengths: - The proposed method is conceptually simple and fairly novel, and it does not require retraining the parent learner to work. - Math and derivation are laid out clearly. - The limitations of the work are adequately discussed. Weaknesses: - **Writing has room for improvement.** The writing can sometimes be unnecessarily long and convoluted. For example on lines 35-36: "However, one can adversarially manipulate an OOD sample where the model is less confident to find another OOD sample where the model is overconfident" and on lines 43-46: "The general idea for the generative group is to get likelihoods for a particular sample out of the generative models for both ID and OOD to do likelihood ratio test or control the likelihood for training distribution far away from the training data to detect OOD samples by thresholding." I would suggest the authors break down these long sentences into shorter ones that are easier for the readers to parse. - **Some potential problems are left unaddressed.** It is unclear to me how the proposed method would be able to overcome the curse of dimensionality, as the number of polytopes can scale exponentially with the number of neurons. I also find the claim that the proposed method converts discriminative networks into generative networks misleading. Although the paper does provide an expression for the class conditional density $\hat{f}_y(x)$, it seems highly nontrivial to sample from this unnormalized density. I wish the authors would clarify this point. - **No baselines in the experiments.** As the authors noted themselves, the experimental analyses in this paper are limited to comparisons between the original networks and the proposed kernel generative versions. 
Due to the lack of baselines, it is unknown whether the achieved improvements are significant or not, especially when they are obtained at the cost of classification accuracy, as is evident in Figure 3. Overall, I think that the method proposed in the paper is potentially interesting, but more experiments need to be done to demonstrate its practical effectiveness. I think the appeal of this work is really hindered by the lack of baseline comparisons and the limited analyses of the results, as well as some confusing/potentially misleading statements. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can you clarify what is meant by "convert[ing] deep discriminative networks to kernel generative networks" in the abstract? - How would the proposed method scale with dimensionality? - What is the additional computational cost of estimating the parameters of the Gaussian kernels? How does this scale with model size/sample size? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors addressed the limitation that the Euclidean metric they used might not be the most suitable. Also, the authors are aware of the lack of comparison to other benchmark methods and commented that they will include it in future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: - **Writing has room for improvement.** We will thoroughly go over the text with an editor to ensure every sentence is clear and concise. - **It is unclear to me how the proposed method would be able to overcome the curse of dimensionality, as the number of polytopes can scale exponentially with the number of neurons.** Our method will not overcome the curse of dimensionality, but it can partially mitigate it. Because we only consider the polytopes populated by the training data, the number of polytopes is upper bounded by the training sample size. This renders the effective dimensionality always smaller than the training sample size. We will modify the results section to clarify this point, using Figure 4 in the PDF to illustrate it. - **Although the paper does provide an expression for the class conditional density $\hat{f}_y(x)$, it seems highly nontrivial to sample from this unnormalized density.** We apologize for emphasizing the generative model so heavily in the abstract. We will remove the generative aspect in the final version. That said, we are estimating normalized densities, so we could sample from them. We will clarify this point in the discussion. - **No baselines in the experiments.** We have addressed this concern in the global response. - **Can you clarify what is meant by "convert[ing] deep discriminative networks to kernel generative networks" in the abstract?** We will remove this sentence from the abstract, as it is not demonstrated in the paper. We will discuss the possibility of generating samples in the discussion. - **How would the proposed method scale with dimensionality?** We have addressed this concern in the global response. - **What is the additional computational cost of estimating the parameters of the Gaussian kernels? How does this scale with model size/sample size?** We have addressed this concern in the global response. 
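To make the polytope-counting argument above concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of the core idea: group training samples by the ReLU activation pattern they induce, so that only populated polytopes are kept (their number is bounded by the sample size), and place one Gaussian kernel per class and polytope. The toy one-layer network, the fixed bandwidth `h`, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single-hidden-layer ReLU net; each distinct on/off pattern
# of the hidden units corresponds to one polytope of the input space.
W, b = rng.standard_normal((2, 8)), rng.standard_normal(8)
X = rng.standard_normal((300, 2))          # toy training inputs
y = rng.integers(0, 2, size=300)           # toy binary labels

patterns = (X @ W + b > 0)                 # activation pattern per sample
_, poly_id = np.unique(patterns, axis=0, return_inverse=True)
n_poly = poly_id.max() + 1
assert n_poly <= len(X)                    # only populated polytopes are kept

def class_density(x, label, h=0.5):
    """Class-conditional density as a mixture of one isotropic Gaussian
    kernel per populated polytope, weighted by polytope occupancy."""
    dens, total = 0.0, 0
    for r in range(n_poly):
        mask = (poly_id == r) & (y == label)
        if not mask.any():
            continue
        mu = X[mask].mean(axis=0)          # kernel center for this polytope
        w = mask.sum()
        dens += w * np.exp(-np.sum((x - mu) ** 2) / (2 * h**2))
        total += w
    norm = (2 * np.pi * h**2) ** (X.shape[1] / 2)
    return dens / (total * norm)
```

Under this sketch, a far-away (OOD) query lands far from every kernel center and receives near-zero density, which is the mechanism behind the claimed OOD calibration.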
--- Rebuttal Comment 1.1: Comment: I thank the authors for their response and I would like to say that I really appreciate their honesty. Unfortunately since the claim for "converting discriminative networks to generative networks" is largely vacuous and not a main result/focus of the paper, I stand by my judgement that this paper might have potential but in its current state is marginally below the bar for acceptance.
Rebuttal 1: Rebuttal: We extend our sincere gratitude to all the reviewers for their helpful suggestions and feedback, which we have incorporated to further improve our work. In particular, we pursued the suggestion from reviewer Quey to use geodesic distance and demonstrated its effectiveness with additional experiments. We have also run an experiment to demonstrate the effect of scaling input dimensionality on the kernel generative algorithms. These results are summarized in the attached PDF. We address some shared concerns of the reviewers below. - **Abstract**: We apologize for emphasizing the generative aspect in the abstract, which may have distracted the reviewers from the main point of the paper, which is `calibration’. We will revise the abstract in the final version without mentioning the generative aspect of the proposed approach. - **Scaling of Accuracy with Dimensions**: As we have described in the discussion section, increasing the input dimensionality may have a detrimental effect on accuracy if we use Euclidean distance to find the nearest polytope. Inspired by the suggestion from reviewer Quey, we have used geodesic distance in a simulation experiment using the Trunk simulation data described in [1]. The binary-class simulation is done using 2 multivariate Gaussians assigned to 2 different classes, with their means increasingly close to each other in higher dimensions. Thus, higher dimensions carry increasingly less discriminative information. We have already discussed in the draft how to measure similarity between two samples $x_1 \in Q_r$ and $x_2 \in Q_s$ using $w_{rs}$. We have used $(1 - w_{rs})$ as the geodesic distance measure in Equation 4 (see [2] for a similar approach). The results from this experiment are shown in the attached PDF. The experiment demonstrates that KGN-Geodesic and KGF-Geodesic scale similarly to their parent algorithms, overcoming the scaling problems of their Euclidean counterparts. 
Moreover, following the suggestion by reviewer Up6z, we have conducted an experiment on CIFAR-10 which shows similar results to that of the simulation experiment. For simplicity, we construct 5 different binary classification subtasks (e.g. Cats vs Dog classification) from CIFAR-10 akin to [3]. We collectively call these subtasks CIFAR-10 2X5. Five LeNet-5 models along with five KGN models were trained on each of these tasks. We have reported the mean max confidence (MMC) of the models in a heatmap along with their corresponding accuracies. Note that discriminating between the CIFAR-10 subtasks is much harder than distinguishing between CIFAR-10 and SVHN, as the images are semantically similar. In the vision experiments, it is evident that KGN-Geodesic not only maintains the accuracy of the parent model but also effectively distinguishes between ID and OOD samples. Conversely, other models fall significantly short in achieving this level of distinction. - **No Baselines in the Experiments**: Following the suggestion by reviewer fq3V, we have compared our proposed method with ACET [4] which is an unsupervised SOTA OOD calibration method. The comparison is performed over the CIFAR-10 2X5 subtasks. Our experiment demonstrates that ACET fails to maintain OOD calibration over all the feature space even though it maintains nearly the same accuracy to the parent model. We will provide a detailed discussion about these experiments in our final draft. - **Additional Computational Time**: The additional computational cost is dominated by the cost of calculating the adjacency matrix $W_{rs}$ which is $O(mn^2)$. Here m is the total number of nodes for ReLU-nets or total number of leaves for random forests. The number of nodes can be reduced significantly by pooling at each layer as proposed in [5] which we will pursue in future. However, we used all the nodes without pooling. 
One important point to note here is that the adjacency matrix computation can be easily parallelized which has been implemented in the kdcnn.py source code in the supplementary materials. [1] Trunk, Gerard V. "A problem of dimensionality: A simple example." IEEE Transactions on pattern analysis and machine intelligence 3 (1979): 306-307. [2] Madhyastha, Meghana, et al. "Geodesic forests." Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020. [3] Zenke, Friedemann, Ben Poole, and Surya Ganguli. "Continual learning through synaptic intelligence." International conference on machine learning. PMLR, 2017. [4] Matthias Hein, Maksym Andriushchenko, and Julian Bitterwolf. Why relu networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 41–50, 2019. [5] Olber, Bartłomiej, et al. "Detection of out-of-distribution samples using binary neuron activation patterns." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. Pdf: /pdf/d3eb9a02461ece80058aef2b6ef40c7aaf230c30.pdf
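As an illustration of how a distance of the form $(1 - w_{rs})$ could be derived from polytope similarity, here is a small sketch. The paper's exact definition of $w_{rs}$ may differ; the fraction of ReLU units whose on/off state agrees between two polytopes is used here purely as a plausible stand-in, and the tiny sizes (5 polytopes, 20 units) are illustrative. Computing all pairwise entries over $m$ units and $n$ polytopes is the $O(mn^2)$ cost mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical binary activation patterns: one row per populated polytope,
# one column per ReLU unit (m units in total).
patterns = rng.integers(0, 2, size=(5, 20))

# Stand-in similarity w_rs: fraction of units whose on/off state agrees
# between polytopes r and s; the geodesic-style distance is then 1 - w_rs.
w = (patterns[:, None, :] == patterns[None, :, :]).mean(axis=2)
dist = 1.0 - w

assert np.allclose(np.diag(dist), 0.0)  # each polytope is at distance 0 from itself
assert np.allclose(dist, dist.T)        # the distance matrix is symmetric
```

Because every pair $(r, s)$ is independent, this computation parallelizes trivially across pairs, consistent with the parallelized implementation mentioned for `kdcnn.py`.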
NeurIPS_2023_submissions_huggingface
2023
MIMONets: Multiple-Input-Multiple-Output Neural Networks Exploiting Computation in Superposition
Accept (poster)
Summary: This study introduces Multiple-Input-Multiple-Output Neural Networks (MIMONets) that can process multiple inputs simultaneously, reducing computational cost. Two types of MIMONets, MIMOConv for CNNs and MIMOFormer for Transformers, are presented. MIMOConv can handle multiple image inputs with minimal accuracy loss, while MIMOFormer effectively calculates attention scores for two concurrent inputs. These models offer a dynamic balance between accuracy and processing speed, using a fixed set of parameters. Strengths: MIMONets significantly improve the processing speed by handling multiple inputs simultaneously, reducing the computational cost per input. They can be applied to various neural network architectures, including CNNs and Transformers. Weaknesses: 1. The concept of superposition can lead to interference between inputs, which may affect the model's accuracy. 2. The integration of variable binding mechanisms and transformations for holistic processing may increase the complexity of the model, potentially making it harder to implement and understand. 3. While the speed of processing is improved, there is a noted drop in accuracy when handling multiple inputs, which may not be suitable for applications requiring very high precision. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Can the MIMONets approach be generalized to other types of neural network architectures beyond CNNs and Transformers? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank you for your time and feedback. However, it is unclear how the sparse set of stated weaknesses led to a reject decision. In particular, weakness 2 applies to most innovations that are yet to be established. Weaknesses 1 and 3 coincide, are thoroughly addressed in the paper (for instance, Section 5.1 and Appendix E), and do not apply to the developed central use case of dynamic inference. We have responded to your remarks in more detail below and would appreciate an increased rating or further discussion. >“[Weakness 1] The concept of superposition can lead to interference between inputs, which may affect the model's accuracy.” Although this is true, it is rather a limitation of compute-in-superposition than a weakness of the paper. We bring up the issue of interference at multiple points and in fact come up with mitigation techniques such as isometry regularization and high hidden dimension. Specifically, the ablation study in Section 5.1 (with additional experiments and insights in Appendix E) explores such mitigation techniques to suppress the emergence of interference. Finally, to avoid a trade-off between accuracy and speedup we developed the concept of dynamic inference. In particular, we showed that one can enable a model to compute more quickly (via superposition) while still retaining the usual accuracy in the slow mode. >“[Weakness 3] While the speed of processing is improved, there is a noted drop in accuracy when handling multiple inputs, which may not be suitable for applications requiring very high precision.” Although a drop in accuracy is discernible, our method clearly improves on the state of the art Murahari, Vishvak, et al. “DataMUX: Data multiplexing for neural networks”, which was awarded second place in the 2022 Bell Lab Prize. 
Indeed, our comparison on MNIST, two synthetic language tasks, and a subtask of LRA (see Figure R1 and Table R1) shows the following: for high superpositions (16x) our CNN method outperforms theirs 80.4% to 52.9% on MNIST (the only vision dataset they report on) while being computationally cheaper, our Transformer (2x) outperforms theirs on ListOps 38.08% to 30.54%, and our method does not fail on synthetic language benchmarks which require faithful attention (scoring 96.52% and 99.40%) while DataMUX does (scoring 20.04% and 6.06%). Finally, as is quantified in Table 1 of the paper for dynamic inference, our method enables a single model with fixed parameters to be run at different accuracy-throughput operating points. In particular, at normal speed (N=1), the model performs as accurately as the baseline. This is a unique case where one obtains an essentially free lunch: high accuracy and high throughput are guaranteed and can be balanced at will. Since there is no viable alternative to obtain such instantaneous switching between accuracy-throughput operating points within a fixed set of model parameters, and since we outperform the state of the art, we do not see the observed drop in accuracy as a dealbreaker. >“[Weakness 2] The integration of variable binding mechanisms and transformations for holistic processing may increase the complexity of the model, potentially making it harder to implement and understand.” In terms of computational complexity, the integration of variable binding mechanisms via binding and unbinding operations is inconsequential, amounting to 0.008%--0.031% of the total MACs for MIMOConv and 0.06%--0.14% for MIMOFormer (see Table R2 and Table R3 in the pdf). Regarding understandability of variable binding, in fact we chose to go with some of the most well-defined binding/unbinding operations (see Table A1 in the paper) of vector-symbolic architectures [8--10]. 
Vector-symbolic architectures propose frameworks for constructing symbolic data structures through key-value binding of distributed representations. The construction of this data structure is transparent thanks to the use of explicit binding/unbinding operations with well-defined properties. These properties allow us to compose neural representations on-the-fly as opposed to learning how to compose them from scratch. While it is less well explored to process the resulting data structure using **nonlinear** neural transformations, we believe the choice of well-established binding/unbinding operations leads to a more transparent architecture by design. >“[Question] Can the MIMONets approach be generalized to other types of neural network architectures beyond CNNs and Transformers?” CNNs and Transformers are the most used DNN architectures. Furthermore, in Table 2 of the paper we report both on Transformers with superposition applied to MLPs (att.+MLP) and without MLPs in superposition (att.). Also, CNNs are essentially constrained MLPs with weight-sharing and limited connectivity. Since unconstrained MLPs do not require the locality principles of CNNs, binding mechanisms in much higher-dimensional space can be employed, making the task easier by decreasing interference via the Blessing of Dimensionality. To further convince the reviewer of the wide applicability of our method, we demonstrate in Table R1 of the pdf that our proposed method of superposition for attention is not restricted to FAVOR+ from the Performer, but instead is widely applicable to other linear transformers such as DPFP (Deterministic Parameter-Free Projection) from I. Schlag et al. “Linear transformers are secretly fast weight programmers". To summarise, we have shown our method to work for CNNs, MLPs, and Transformers. 
Although we do not provide empirical results on other structures such as RNNs and Graph Neural Networks, the theoretical arguments are general and explored in more detail in the Appendix (e.g., A.2, A.3). --- Rebuttal Comment 1.1: Comment: As the author addressed most of the concerns, I will increase the score to 5. --- Reply to Comment 1.1.1: Comment: We are glad that we could address your concerns and thank you for increasing your score.
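The near-orthogonality ("Blessing of Dimensionality") argument behind Hadamard binding that runs through these rebuttals can be checked numerically in a few lines. This is a hedged toy sketch with an arbitrary dimension, not the paper's code: two inputs are bound with Rademacher (±1) keys, superposed, and unbound, and the interference from the other channel is confirmed to be nearly orthogonal to the recovered input.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4096  # hypothetical hidden dimensionality

# Rademacher (+/-1) binding keys; for sign vectors, Hadamard unbinding is
# the same elementwise product, since a * a = 1 holds componentwise.
a1 = rng.choice([-1.0, 1.0], size=d)
a2 = rng.choice([-1.0, 1.0], size=d)
x1 = rng.standard_normal(d)
x2 = rng.standard_normal(d)

s = a1 * x1 + a2 * x2    # two inputs in superposition

recovered = a1 * s       # = x1 + (a1 * a2) * x2
noise = recovered - x1   # interference from the other channel

# The interference is nearly orthogonal to x1: |cosine| is on the
# order of 1/sqrt(d), so it perturbs the recovered input only slightly.
cosine = noise @ x1 / (np.linalg.norm(noise) * np.linalg.norm(x1))
assert abs(cosine) < 0.1
```

Larger hidden dimensions shrink the cosine further, which matches the rebuttals' remark that the number of usable superposition channels depends mostly on the size of the hidden dimension.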
Summary: The main content of the article is about MIMONets, which are multiple-input-multiple-output neural networks that exploit computation superposition. By using fixed-width distributed representations in vector-symbolic architectures, MIMONets can represent a variable number of inputs in a data structure and process them holistically with nonlinear neural transformations. This leads to a significant reduction in computational burden per input and offers a dynamic trade-off between accuracy and throughput. The article presents two instances of MIMONets (MIMOConv and MIMOFormer), which apply the concept of computation in superposition to convolutional neural network (CNN) and Transformer architectures, respectively. Empirical evaluations show that MIMONets achieve significant speedups while maintaining high accuracy. Strengths: Strengths: 1. The method cleverly combines multiple inputs into a single sample for inference, reducing the computational cost. This is highly meaningful for practical applications. 2. The method achieves promising results in both CNN-based and Transformer-based structures, indicating its versatility and applicability. Weaknesses: Weaknesses: 1. The experimental datasets and network applications in the study are relatively limited. It would be beneficial to apply the method to more datasets, such as ImageNet, and explore its effectiveness on a wider range of networks. 2. When the number of stacked samples is 4, a noticeable performance drop is observed. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. In lines 86 and 87, why is it stated that n and $x^{(1)}$ are orthogonal? 2. I don't fully understand how Dynamic Inference is performed. 3. Are unbinding keys shared among all data or does each sample have a corresponding unbinding key? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I believe the limitations of the method mainly manifest in two aspects: 1. Performance significantly decreases when a large number of samples are stacked. 2. It remains uncertain whether similar performance can be maintained when applying the method to a wider range of structures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >“The experimental datasets and network applications in the study are relatively limited. It would be beneficial to apply the method to more datasets...” We value the reviewer's proposal to conduct additional experiments on other tasks. We addressed this by adding results on MNIST (see Figure R1 of the pdf) and two synthetic language tasks (“associative recall” and “induction head” reported on in Table R1). These tasks have been found to be challenging for language models without attention, such as S4 (Fu, et al. “Hungry Hungry Hippos: Towards Language Modeling with State Space Models”), but are easily solved by our method with accuracy 96.52% and 99.40% respectively. This is a strong indicator of the fidelity of our attention in superposition. Unfortunately, due to licensing issues, we were not able to use some datasets such as ImageNet. We also want to stress that this paper is partly of theoretical nature with many insights on the qualitative behavior of error-terms together with quantitative bounds (see Appendix). The mathematical bounds are especially powerful for large scale models and show vanishing interference in these cases. Given the successful results on smaller benchmarks we plan to extrapolate and experimentally validate on large language models in future. >“When the number of stacked samples is 4, a noticeable performance drop is observed.” Although a drop in accuracy is discernible, our method clearly improves on the state of the art DataMUX; see our comparison on MNIST, two synthetic language tasks, and a subtask of LRA (in Figure R1 and Table R1). 
Indeed, for high superpositions (N=16) our CNN outperforms theirs 80.4% to 52.9% on MNIST (their only vision dataset) while being computationally cheaper, our Transformer (N=2) outperforms theirs on ListOps 38.08% to 30.54%, and our method does not fail on the challenging synthetic language benchmarks, which require faithful attention (scoring 96.52% and 99.40%) while theirs does (scoring 20.04% and 6.06%). This is because DataMUX blurs attention scores by sharing them between superpositions. Finally, as is quantified in Table 1, for dynamic inference, our method enables **a single model** with fixed parameters to be run at different accuracy-throughput configurations: at normal speed (N=1) the model performs as accurately as the baseline. >“In lines 86 and 87, why is it stated that n and x(1) are orthogonal?” In the case of Hadamard binding $a^{(1)} \oslash a^{(1)} \odot x^{(1)} = x^{(1)}$ and hence the random noise vector $n = a^{(1)} \oslash a^{(2)} \odot x^{(2)}$ is a random vector unaffected by $x^{(1)}$. As a high-dimensional random vector, it is almost orthogonal to a fixed vector $x^{(1)}$ with high probability. This “Blessing of Dimensionality” is explored in more depth for vectors of Rademachers in Appendix A.2. >“I don't fully understand how Dynamic Inference is performed.” Thanks for the feedback. With it at the center of our motivation, we will add further explanation as follows: To illustrate the idea of dynamic inference, suppose only two superposition channels are used with binding keys $a^{(1)}, a^{(2)}$ and unbinding keys $\tilde{a}^{(1)}, \tilde{a}^{(2)}$. We already know how the model performs standard computation in superposition (see (1) - (4) in paper). Let us thus examine how a network with the same parameters can instead be used as an ensemble method with higher accuracy but lower throughput. 
A superposition is established of twice the same input $x^{(1)}$: $$s = a^{(1)} \odot x^{(1)} + a^{(2)} \odot x^{(1)} $$ After applying the deep neural network $f_\theta$ to the superposition, we may unbind as $$ \tilde{a}^{(1)} \oslash f_\theta(s) \approx \tilde{a}^{(1)} \oslash f_\theta\left(a^{(1)} \odot x^{(1)}\right) + \tilde{a}^{(1)} \oslash f_\theta\left(a^{(2)} \odot x^{(1)}\right) \approx f_\theta\left(x^{(1)}\right) + \tilde{a}^{(1)} \oslash f_\theta\left(a^{(2)} \odot x^{(1)}\right). $$ and $$\tilde{a}^{(2)} \oslash f_\theta(s) \approx \tilde{a}^{(2)} \oslash f_\theta\left(a^{(1)} \odot x^{(1)}\right) + \tilde{a}^{(2)} \oslash f_\theta\left(a^{(2)} \odot x^{(1)}\right) \approx \tilde{a}^{(2)} \oslash f_\theta\left(a^{(1)} \odot x^{(1)}\right) + f_\theta\left(x^{(1)}\right). $$ After averaging the two expressions, we get $$ \frac{1}{2}\left( \tilde{a}^{(1)} \oslash f_\theta(s) + \tilde{a}^{(2)} \oslash f_\theta(s) \right) \approx f_\theta\left(x^{(1)}\right) + n$$ where $n$ is a random noise vector and $f_\theta\left(x^{(1)}\right)$ is approximated as an average of two predictions. Owing to the introduction of stochasticity by the binding and unbinding process these predictions are decorrelated, i.e., each superposition channel is processed to some degree differently. >“Are unbinding keys shared among all data or does each sample have a corresponding unbinding key?” The unbinding keys are independent of data and depend only on the index of the superposition channel. >“[Limitations] Performance significantly decreases when a large number of samples are stacked.” As described above, the performance decrease of MIMOConv is significantly less pronounced than that of the SOTA. But it is true that there is a limit to the number of superposition channels that can be employed, mostly depending on the size of the hidden dimension. 
>“[Limitations] It remains uncertain whether similar performance can be maintained when applying the method to a wider range of structures.” CNNs and Transformers are the most used DNN architectures. MLPs are used in superposition within our Transformers. To further address the reviewer's concerns, in Table R1 we demonstrate that our method for attention is not restricted to FAVOR+, but instead is widely applicable to other linear transformers such as DPFP from Schlag et al. “Linear transformers are secretly fast weight programmers". Finally, the theoretical arguments are even more general and explored in more detail in the Appendix (e.g., A.2, A.3). --- Rebuttal Comment 1.1: Title: reply to author's feedback Comment: Thanks Reply. I recognize your work, but I still have some doubts and suggestions. 1. Thank you very much for explaining dynamic inference in detail. I want to confirm again: Does Dynamic Inference refer to a single model that can infer 1 to N images? 2. It is better to verify the results on a more convincing dataset. I can understand that this article may be more analytical and theoretical. But if there is a lack of more convincing verification, it is like a lack of soul. --- Reply to Comment 1.1.1: Comment: >”Thanks Reply. I recognize your work, but I still have some doubts and suggestions.” Dear reviewer. We are glad that you recognize our work. We address your remaining question and doubt/suggestion. >”Thank you very much for explaining dynamic inference in detail. I want to confirm again: Does Dynamic Inference refer to a single model that can infer 1 to N images?” Yes, that’s correct. Dynamic Inference refers to a single model that can infer 1 to N images (at once by putting them in superposition) at varying degrees of accuracy. >”It is better to verify the results on a more convincing dataset. I can understand that this article may be more analytical and theoretical. 
But if there is a lack of more convincing verification, it is like a lack of soul.” Despite the emphasis on theoretical analysis, our method has been shown to work on various meaningful datasets. State-of-the-art CNNs, such as our baseline, are still challenged by CIFAR100. Furthermore, Long Range Arena is one of the most extensively reported dataset collections, covering a wide range of tasks such as understanding long mathematical expressions, classifying and compressing (long) natural text documents, classifying natural images, and using visual-spatial reasoning to determine the path-connectedness of points. Following your request, we now provide additional results on the street view house number (SVHN) dataset. Despite the limited time (3 days) and limited hyperparameter tuning, MIMOConv achieves a high accuracy of 97.17% (N=1), and can maintain the performance with larger superpositions (97.05% and 96.84% for N=2 and N=4, respectively).
Summary: This paper proposes a novel method named multiple-input-multiple-output (MIMO) neural networks, which aims to achieve simultaneous inference for several inputs by mixing them into one input. To this end, the authors devise a method to encode the inputs so that the network's output can be decoded into separate outputs. The proposed method is evaluated on both convolutional and attention models on the CIFAR datasets and the LRA benchmark. Strengths: The proposed method to mix several inputs is novel and interesting. Also, the authors conducted a simple analysis and several experiments. Weaknesses: The method is only verified on some small datasets like CIFAR. Also, the results in Table 1 seem to give a significant accuracy drop for multiple-input cases, compared to using a single input. Also, it would be better if the author could provide more results on other tasks, like object detection or segmentation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could the author provide more explanation on how to apply the proposed method in practice? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not provide limitations. I think the practical application of the proposed method can be limited. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >“The method is only verified on some small datasets like CIFAR […] it would be better if the author could provide more results on other tasks […]” We appreciate the reviewer's suggestion to conduct additional experiments on other tasks. We addressed this by adding results on MNIST (see Figure R1 of the pdf) and two synthetic language tasks (“associative recall” and “induction head”, reported in Table R1) in addition to the benchmarks present in the paper (CIFAR10/CIFAR100 and LRA). The mentioned synthetic language tasks have been found to be challenging for language models without attention, such as S4 (see Y. Fu, Dao, et al. “Hungry Hungry Hippos: Towards Language Modeling with State Space Models”), but are easily solved by our method (in superposition) with an accuracy of 96.52% and 99.40%, respectively. This is a strong indicator of the fidelity of our attention in superposition. It is also important to note that this paper is partly of theoretical nature, with many insights on the qualitative behavior of error terms together with quantitative bounds; see the extensive Appendix. The mathematical bounds are especially powerful for large-scale models and show vanishing interference in these cases. Given the successful results on smaller benchmarks, we plan to extrapolate and experimentally validate on large language models in the future. >“the results in Table 1 seem to give a significant accuracy drop for multiple input cases, compared to using a single input” Although a drop in accuracy is discernible, our method substantially improves on the state of the art (Murahari, Vishvak, et al. “DataMUX: Data multiplexing for neural networks”); see our comparison on MNIST, two synthetic language tasks, and a subtask of LRA in Figure R1 and Table R1. 
In fact, in terms of accuracy, for high superpositions (N=16) our CNN method outperforms theirs 80.4% to 52.9% on MNIST (the only vision dataset they report on) while being computationally cheaper; our Transformer (N=2) outperforms theirs on ListOps 38.08% to 30.54%, and our method does not fail on the challenging synthetic language benchmarks which require faithful attention (scoring 96.52% and 99.40%) while theirs does (scoring 20.04% and 6.06%). This is because DataMUX blurs attention scores by sharing them between superposed sentences. Finally, as is quantified for dynamic inference in Table 1 of the paper submission, our method enables **a single model** with fixed parameters to be run at different accuracy-throughput operating points. In particular, at normal speed (N=1) the model performs as accurately as the baseline. This is a unique case where one obtains an essentially free lunch: high accuracy and high throughput are guaranteed and can be balanced at will. >“Could the author provide more explanation on how to apply the proposed method in practice?” Excellent request; due to space limitations we did not explore the practical motivation to a great extent. However, the following will be added to the next revision: Think of large language models, which cost enormous amounts of money to train and run, require real-time response, and whose usage fluctuates heavily over time. By training such a model for dynamic inference, a provider could ensure to serve all its customers no matter the demand, albeit at the cost of a (hopefully) unnoticeable drop in performance. Alternatively, an autonomous system on a tight memory and power budget might need a higher accuracy in critical situations. Owing to memory constraints, having multiple models (with different energy consumption) in memory might not be feasible and incurs additional data-transfer costs due to switching. MIMONet provides a framework in which these otherwise incompatible demands can be fulfilled. 
>“The authors did not provide limitations. I think the practical application of the proposed method can be limited.” We agree and will add a more dedicated limitations section. It will read as follows: - MIMONets make use of the Blessing of Dimensionality: with high probability, exponentially many (in dimension D) vectors are almost orthogonal. Although the components of MIMONet are made near isometric through regularization, a certain number of (hidden) dimensions is still necessary. This naturally limits MIMONets to large (oftentimes over-parametrized) models or models employing low-rank decompositions. - The number of inputs that can be superposed without incurring heavy losses in accuracy is limited for a fixed neural network due to increasingly strong interference between the superposition channels. - The proposed superposition-capable attention mechanism converges to faithful attention (without interference between channels) as the embedding dimension increases, but at the price of only a speedup of $N$ when using $N^2$ superposition channels. Being built on linearized attention such as FAVOR+, it further inherits all of their benefits (linear scaling) and drawbacks (limited parallelization and increased memory accesses for autoregressive training; see Section 3.1 in Hua, Dai, Liu, et al. “Transformer Quality in Linear Time”). On the other hand, trivial superposition would yield a speedup of $N^2$ instead, but at the cost of blurring the attention scores, with each token-token score summarizing attention in all superposition channels at once. Such models employing blurry attention are limited to applications where imprecise “summarizing” information suffices. Regarding the last point, we demonstrate in Table R1 of the pdf that our proposed method of superposition for attention is indeed not restricted to FAVOR+ from the Performer, but instead widely applicable to other linear transformers such as DPFP (Deterministic Parameter-Free Projection) from I. 
Schlag et al. “Linear transformers are secretly fast weight programmers". --- Rebuttal Comment 1.1: Title: Thanks for the authors' response Comment: Dear Authors, Thanks for the response and discussion. I would prefer to keep my score based on it. Best.
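The Blessing of Dimensionality invoked in the limitations list above can be checked numerically. The following is our own illustrative sketch (not from the paper): independently sampled high-dimensional Rademacher vectors are nearly orthogonal with overwhelming probability, with pairwise cosines concentrating at $O(1/\sqrt{D})$.

```python
import numpy as np

rng = np.random.default_rng(0)
D, n = 4096, 200  # dimension and number of random vectors

# Rademacher vectors: i.i.d. +/-1 entries, normalized to unit length
V = rng.choice([-1.0, 1.0], size=(n, D)) / np.sqrt(D)

# Pairwise cosine similarities = off-diagonal entries of the Gram matrix
G = V @ V.T
off_diag = G[~np.eye(n, dtype=bool)]

# Concentration: all ~20k pairwise cosines are O(1/sqrt(D)), far from +/-1
assert np.abs(off_diag).max() < 6 / np.sqrt(D)
```

The `6 / sqrt(D)` threshold is a loose concentration bound chosen for the demo; by Hoeffding's inequality the chance of any pair exceeding it is negligible at this scale.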
Summary: UPDATE: scores updated based on rebuttal. This paper proposes a method (MIMONets) for multiplexing multiple independent samples in superposition in such a way that one can train neural networks to simultaneously process those samples in training and inference. The method is adapted both for CNNs and Transformers, and there is empirical evaluation for both that shows the viability of the method. Strengths: The paper is well written and easily understandable, and the method is relatively well grounded in foundations of previous works and new analysis. Weaknesses: The authors claim that this is the first time this has been done, but they fail to mention some related work such as Murahari, Vishvak, et al. "DataMUX: Data multiplexing for neural networks." Advances in Neural Information Processing Systems 35 (2022): 17515-17527. Please review that work and discuss similarities and differences, and if possible conduct an experimental comparison. The DataMUX paper claims an impressive 40-sample multiplexing for the transformers, while MIMONet uses just 4. At least, the CNN part of the MIMONet work seems to give better empirical results, though. One of the strongest motivators for the MIMONets is the ability to “dynamically” scale computation vs. accuracy at inference run-time using the same trained weights by changing the way input and output are processed, and by creating in-network ensembles if more resources are available. The authors should center the motivation more in this area unless they have a strong argument for the “static” case. Regarding the static case, the results are less convincing. For example, since there is a drop in accuracy with multiplexing 2-4 samples compared to 1, the comparison should be done with a smaller baseline with the same accuracy, and the authors should include full FLOPS figures for the models, including the bind/unbind operation. 
Also, for the static case, some comparison methods include pruning, quantization-aware training, and compression. These should be compared in terms of intOPS/FLOPS and power consumption estimates using models of the same accuracy. MIMOConv For the MIMOConv model, the authors should include the mathematical description of bind/unbind including all dimensions and indexes. Table 1 does not clearly state the FLOPS / sample for each of the rows. Table 1 does not discuss the FLOPS of a baseline model that reaches the same accuracy as, e.g., N=4 MIMOConv models. MIMOFormer The authors should add some more information on the Transformer model; for example, in Figure 3 it seems every layer does bind/unbind. This seems different from the CNN models, so the authors should describe this in more detail and give more background on these choices. What is the complexity addition in FLOPS from these bind/unbind operations? Table 2 (MIMOFormer results) does not make clear what the computational complexity (per sample) is for each of the rows. For example, do the MIMOFormer models have higher complexity in FLOPS because of the bind/unbind operations? I assume that MIMOFormer N=4 is almost 4x more efficient per sample than N=1, but it would be great to report full model FLOPS/sample for each of the rows. Table 2 seems to hint that the +MLP version performs worse (especially for N=4). But do I understand correctly that it also has fewer FLOPS, since the -MLP version repeats the same MLP for each of the superimposed samples? This should be described. Would the +MLP version perform better if the MLP was bigger? Table 2 is missing a comparison to a non-MIMO Performer baseline that would have the same FLOPS complexity per sample. In Table 2 there seem to be some tasks, such as retrieval, where N=4 seems to make the performance drop dramatically. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: See above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Comparison for DataMUX missing, some apples-to-oranges comparisons in the experimental section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >“[the authors] fail to mention some related work […] Please review that work and discuss” Thank you for pointing out the DataMUX paper. Given its importance, we discuss key differences and compare against it empirically on additional benchmarks (see pdf) in the global response. In short, DataMUX does not design a dedicated binding method for CNNs, and their Transformers not only introduce additional coarse-grained “sentence summary” tokens but also compute fuzzy attention by sharing attention scores over sentences. Empirically, this is visible in their failure on the synthetic language benchmarks “associative recall” and “induction head”, which require faithful attention (scoring 20.04% and 6.06%), while our method succeeds (scoring 96.52% and 99.40%). Moreover, our Transformer with N=2 outperforms DataMUX with N=2 on ListOps, 38.08% vs. 30.54%. Also, on high superpositions (N=16) our CNN method outperforms theirs (80.4% vs. 52.9% on MNIST) while being computationally cheaper. We will include these findings in a revision. >“[the strongest motivator for the MIMONets is the ability to “dynamically” scale computation vs. accuracy at inference run-time […] the authors should center the motivation more in this area” Thanks for the feedback. We tried to highlight dynamic inference as our strongest advantage and most significant innovation, but we will improve the wording and put further emphasis on it. >“Regarding the static case, there is less convincing results” We agree that dynamic inference is more convincing than static, as we add the capability of high-speed inference without sacrificing accuracy at normal speed. The rationale for including static results is to show the very slim performance drop of each operating point relative to the static models. We will make that clearer. Even so, we outperform state-of-the-art alternatives, as mentioned above. 
>“[the static case should be compared to pruning, quantization aware training, and compression] in terms of intOPS/FLOPS…” We do not claim to outperform other throughput-increasing approaches like model downsizing, quantization, and pruning. However, in our opinion, we do not have to, owing to their lack of support for dynamic accuracy-throughput selection. Furthermore, our approach is orthogonal to pruning, quantization, compression, and low-rank matrix decompositions. Each of them could be applied on top of our MIMO approach. However, for reasons of transparency, we include more detailed MACs in Tables R2 and R3 and Figure R1. >“For the MIMOConv model, the authors should include the mathematical description of bind/unbind including all dimensions and indexes.” Thank you for the suggestion; we will add the following: PWHRR is given by (with image tensors $x^{(k)} \in \mathbb{R}^{D \times W \times H}$ and binding key $a^{(k)} \in \mathbb{R}^D$) $$(a^{(k)} \odot x^{(k)})_{:,w,h} = a^{(k)} * x^{(k)}_{:,w,h}$$ where $*$ is the usual circular convolution and $k$ indexes the superposition channel. Unbinding is performed by a linear layer (with hidden tensor $h \in \mathbb{R}^{D \times W \times H}$ and unbinding key $\tilde{a}^{(k)} \in \mathbb{R}^{D \times D}$) $$(\tilde{a}^{(k)} \oslash h)_{:,w,h} = \tilde{a}^{(k)} \cdot h_{:,w,h}$$ where $\cdot$ is the usual matrix multiplication. Unbinding is applied after the global pooling to reduce MACs. >“In Figure 3 it seems every layer does bind/unbind. This seems different from the CNN models, so the authors should describe this in more detail and give more background on these choices” Yes, this is indeed different from MIMOConv. In our proposed attention mechanism, we lay out the superposition channels along a 2D grid. We construct two separate superpositions along different axes, which is the only configuration such that attention scores are not shared between channels, i.e., remain accurate. 
Since the output of our attention layer is a single superposition along one of the two axes, we need to dismantle it for the next layer. Sections 4.1 and 4.2 discuss these choices in greater detail. >“What is the complexity addition in FLOPS from these bind/unbind operations?” The relative complexity overhead in MACs is very low (0.06% to 0.14% depending on the configuration, see Table R3 in the pdf). >“Table 2 is not fully clear what the computational complexity (per sample) is for each of the rows.” Thank you for the feedback. We added these results in Table R3. >“I assume that MIMOFormer N=4 is almost 4x more efficient per sample than N=1, but it would be great to report full model FLOPS/sample for each of the rows” This is the case. When examining Table R3 and neglecting the cost of K/Q/V-projections, N=4 att.+MLP is exactly 3.98 times faster than the Performer. Not superposing before the K/Q/V-projection is a clear oversight. As is apparent from equation (18) in the paper, one could instead superpose before projecting without incurring losses. Future work could address this. >“[does +MLP have fewer FLOPs than -MLP, given it has lower accuracy?]” Yes, the -MLP version repeats the same MLP for each superposed sample. The additional data in Table R3 should clarify this. >“Would the +MLP version perform better if the MLP was bigger?” Nice point. The performance gap between +MLP and -MLP is significantly bigger in tasks with a low embedding/hidden dimension (e.g., Retrieval, Image, Pathfinder). This is also supported by our theoretical analysis, which indicates vanishing interference as the network increases in size. It also motivates extrapolation and experimental validation on large language models in the future. >“In Table 2 there seem to be some tasks, such as retrieval, where N=4 seems to make the performance drop dramatically. 
Could the authors discuss these?” Those are issues with training stability, as is apparent in the large standard deviation reported in this configuration. We could opt to discard outliers or report the median instead. --- Rebuttal Comment 1.1: Title: Comment Comment: Thank you for your rebuttal! One further comment. You say "Furthermore, our approach is orthogonal to pruning, quantization, compression, and low-rank matrix decompositions. Each of them could be applied on top of our MIMO approach." I think this is too strong a statement without proof. Can you explain why you believe the gains from MIMO and, e.g., quantization and compression are fully additive? Intuitively it feels like quantization and compression would make it harder to do the MIMO approach with good performance. Or, put another way, there might be high dependency between these approaches and combining them might not give fully additive gains. --- Reply to Comment 1.1.1: Comment: >”Thank you for your rebuttal! One further comment. You say "Furthermore, our approach is orthogonal to pruning, quantization, compression, and low-rank matrix decompositions. Each of them could be applied on top of our MIMO approach." >I think this is too strong a statement without proof. Can you explain why you believe the gains from MIMO and, e.g., quantization and compression are fully additive? Intuitively it feels like quantization and compression would make it harder to do the MIMO approach with good performance. Or, put another way, there might be high dependency between these approaches and combining them might not give fully additive gains.” Thank you for your comment. 
While we cannot state that the gains of MIMO and other methods are completely additive without experimental validation, we have qualitative insights as to why these methods do not compete for the same resources and consequently would synergize to some extent: - The Blessing of Dimensionality (see Appendix A.2) gives, in terms of dimensionality, an exponentially decreasing probability of interference for superpositions, even for (2-bit quantized) Rademachers. The extent to which these superpositions can be kept intact as linear layers act on them depends on the conditioning of the matrix (ideally nearly-isometric) not on the fidelity of its entries. As such we suspect that MIMOConv can be combined with quantization, pruning, etc. - Regarding MIMOFormer, we can give quantitative insights. As is evident from Theorem 3 in Appendix D, the error bounds have no dependence on the precision of projection weights, but depend only on the embedding dimensionality, the size of keys and queries, and the angles between them. Consequently, quantization, pruning, etc. are not in competition with our approach and can be easily combined. Naturally, when combining different methods not only the gains but also the errors add up. However, with diminishing returns of each method we believe the combination of several to be most effective, especially given that our method is not competing with alternatives for the same resources of a model. If there are further questions or comments we are happy to discuss them. --- Rebuttal Comment 1.2: Title: Comment Comment: Thank you. You write "Those are issues with training stability, as is apparent in the large standard deviation reported in this configuration. We could opt to discard outliers or report the median instead.". It would be much better to understand the source of unstability and fix that instead. Any thoughts? --- Reply to Comment 1.2.1: Comment: >“Thank you. 
You write "Those are issues with training stability, as is apparent in the large standard deviation reported in this configuration. We could opt to discard outliers or report the median instead.". It would be much better to understand the source of unstability and fix that instead. Any thoughts?” Actually, we have to clarify: it is an issue of unreliable training rather than instability. The loss reaches a plateau with some random seeds, after which the model yields no meaningful predictions (i.e., random chance). Following your remarks, we ran additional experiments and found a workaround. To improve training in the high-superposition regime, we implemented a curriculum training procedure where the number of superpositions is reduced to N’=N/2 during a warmup phase (1/6th of the training steps). Afterwards, the number of superpositions is increased to the original value (N). This curriculum procedure improved the average accuracy of MIMOFormer (N=4, att.) from 60.99% to 74.38%, and reduced the standard deviation from 9.06% to 0.74%. We will add curriculum training results on the complete LRA in the final version of the paper. Thank you for raising this point. We believe that curriculum learning is a valuable addition to our MIMONets.
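The curriculum procedure described in this reply (train with N/2 superposition channels for the first sixth of the steps, then switch to the full N) amounts to a simple schedule. The sketch below is our own illustration; the function name and the `warmup_frac` default are assumptions, not from the paper:

```python
def superposition_schedule(step, total_steps, n_channels, warmup_frac=1 / 6):
    """Curriculum over the number of superposed channels:
    run with N/2 channels during a warmup phase, then the full N."""
    if step < warmup_frac * total_steps:
        return max(1, n_channels // 2)
    return n_channels

# e.g., for N=4 over 60k steps: the first 10k steps use N'=2, the rest N=4
assert superposition_schedule(0, 60_000, 4) == 2
assert superposition_schedule(9_999, 60_000, 4) == 2
assert superposition_schedule(10_000, 60_000, 4) == 4
```

At each training step the data loader would then bind `superposition_schedule(step, ...)` samples into one input instead of a fixed N.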
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful feedback. We are encouraged that (b2eJ) found our method well-written and well-grounded in previous works. We are pleased that (b2eJ, A964) appreciated our new theoretical analysis and experimental evaluation. We are glad that (zbz3, 3r5d) agree with the versatility of our approach and that (b2eJ, zbz3) assess our results as promising, particularly for increasing throughput, as noted by (zbz3, 3r5d). We appreciate that (b2eJ, zbz3) share our assessment that new applications are enabled by our work. In the following, we provide answers to the reviewers’ comments, which will be reflected in a revised version of the paper. Regarding applications, we would like to remind the reviewers of the central innovation termed “*dynamic inference*”, where one can select on-the-fly an operating point of a given accuracy & throughput. As such, our method is not directly comparable with “*static*” approaches that opt for a fixed performance point, like model downsizing, quantization-aware training, and pruning. Furthermore, these other methods can be added on top. To address questions by (b2eJ, A964, zbz3), we will add several clarifying paragraphs to the paper. Also, as (b2eJ) requested, we now include two tables (R2, R3) which indicate the complexity of sublayers in MIMOConv and MIMOFormer, respectively. Following the suggestion of (A964, zbz3), we conducted experiments on additional datasets: MNIST and two synthetic language tasks. Finally, at the request of (A964), we now include a dedicated limitations list. We particularly thank (b2eJ) for pointing out the NeurIPS 2022 paper Murahari, Vishvak, et al. “DataMUX: Data multiplexing for neural networks”, awarded second place in the 2022 Bell Lab Prize. Shortly after submission, we also discovered this work in the context of a patentability search. 
While there are some similarities between their methods and ours (e.g., both use Hadamard binding and unbind before the final readout layer), there are fundamental differences that distinguish them qualitatively and quantitatively. These are discussed in the following two points. 1\) As (b2eJ) puts it: “the CNN part of the MIMONet work seems to give better empirical results”. To strengthen this point, we conducted a direct comparison on the MNIST benchmark for which DataMUX was optimized, and report the findings in Figure R1 of the pdf. Even with a trivial downsizing for fair comparison from a 28-layer very-wide (10x) MIMOConv to a 10-layer narrow (1x) MIMOConv, we scale much better to high superposition channels (N) than DataMUX does. Indeed, our model shows an accuracy of 80.4% against their 52.9% in the case of N=16 superposition channels (the highest number of channels reported by DataMUX for vision tasks), despite being computationally cheaper (0.47 MMAC/s vs. 0.65 MMAC/s). We attribute the improved performance to a set of innovations which we reiterate here: MIMOConv applies *position-wise binding*, thus retaining the locality property present in natural images and vital for CNNs, whereas, as discussed by Murahari, Vishvak, et al., their primary binding does not. As a workaround, they proposed binding via two-layer CNNs, each outputting 8 feature maps. The resulting (pixel-wise) superposition in a low-dimensional space (8-D) leads to high interference. In addition to requiring an expensive binding mechanism, this makes the first layer of the model 8 times as expensive, no matter the number of superpositions. We are able to circumvent this issue by applying the first layer of the CNN *before* the pixel-wise binding, increasing the dimensionality of each pixel in an easy-to-understand manner. Another contribution is the use of *isometric neural networks* to further reduce interference during the processing of superposed images. 
2\) Regarding Transformers, new experiments on LRA show that we outperform the Transformer variant of DataMUX on ListOps (38.08% vs. 30.54% accuracy) using models of similar size. Contrary to us, DataMUX blurs attention scores by sharing them between superposed sentences, and the introduction of additional global “sentence summary” tokens (w/o superposition) limits their approach to instances where imprecise “summarizing” information suffices. Notably, none of the tasks (token-level and sentence-level classification) they chose to report on requires attention layers at all; this is also discussed in M. Hassid et al. “How Much Does Attention Actually Attend”. As our experiments confirm (see Table R1), on more nuanced tasks in NLP like “associative recall” and “induction head”, their method drops to 20.04% and 6.06% for N=2, while ours, at a score of 96.52% and 99.40% respectively, succeeds. Despite our investing significant effort in the training of DataMUX, it cannot perform on these synthetic tasks. This is in line with the findings of Y. Fu, Dao, et al. “Hungry Hungry Hippos: Towards Language Modeling with State Space Models”, which identify the lack of attention as the reason that the Structured State Space Sequence (S4) model is able to completely outperform the state of the art on LRA, but is not feasible for large language models. In contrast to DataMUX, our work approximates true attention, and our theoretical derivations *show convergence to actual dot-product attention* as the hidden dimension increases, giving us an even stronger case for applicability to large language models (for instance, GPT-3 uses embedding dimension 12,288, far exceeding the maximum of 512 we report on). Finally, note that Linearized Transformers were able to decrease the complexity of dot-product attention from $O(L^2)$ to $O(L)$. Our MIMO-style superposition is compatible with them and thus reduces the complexity from $O(L)$ further to $O(L/N)$. 
It achieves this while retaining the property that, in the limit of large embedding size, it converges *precisely* to quadratic attention. We again thank the reviewers for their comments and look forward to entering into a more detailed discussion. Pdf: /pdf/5aff6bdffd024b07f31958f71245f7ea3eaa938f.pdf
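The key-based binding in superposition discussed throughout this rebuttal can be sketched numerically. The following is our own illustration (not the authors' code): binding by circular convolution with random unit-magnitude (in the Fourier domain) keys superposes several vectors into one representation, and unbinding by circular correlation recovers each channel up to crosstalk that shrinks as the dimension D grows. The choice of such "unitary" keys is an assumption for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 8192, 4  # embedding dimension, number of superposed inputs

def random_unitary_key(D, rng):
    # Key with unit Fourier magnitudes, so its circular-convolution
    # inverse is exactly its correlation (DC/Nyquist phases fixed to 0
    # to keep the key real-valued).
    phases = rng.uniform(-np.pi, np.pi, D // 2 + 1)
    phases[0] = phases[-1] = 0.0
    return np.fft.irfft(np.exp(1j * phases), n=D)

def bind(a, x):    # circular convolution via FFT
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(x), n=D)

def unbind(a, s):  # circular correlation (inverse for unitary keys)
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(s), n=D)

xs = rng.standard_normal((N, D))
keys = np.array([random_unitary_key(D, rng) for _ in range(N)])
s = sum(bind(a, x) for a, x in zip(keys, xs))  # one superposed vector

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

for k in range(N):
    est = unbind(keys[k], s)
    # each channel is recovered despite crosstalk from the other N-1
    assert max(range(N), key=lambda j: cos(est, xs[j])) == k
```

The recovered vector equals the target plus N-1 pseudo-random rotations of the other channels, so its cosine with the target is roughly 1/sqrt(N) while its cosine with any other channel is O(1/sqrt(D)); this gap is what a downstream readout (or the paper's learned unbinding) exploits.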
NeurIPS_2023_submissions_huggingface
2023
On the Properties of Kullback-Leibler Divergence Between Multivariate Gaussian Distributions
Accept (poster)
Summary: This paper explores and proves some properties of the KL divergence between multivariate Gaussian distributions. One of the motivations is that, as a statistical distance, the KL divergence does not satisfy the properties of a metric, that is, symmetry and the triangle inequality. To address these issues, this paper proposes relaxed versions of these properties. To be specific, it proves a lower bound (resp. upper bound) for the reverse KL divergence given a lower bound (resp. upper bound) on the forward KL divergence, and a summation upper bound on two bounded KL divergences. Finally, the proposed techniques are applied to anomaly detection with flow-based models and to reinforcement learning. Strengths: (1) This paper proves a lower bound (resp. upper bound) for the reverse KL divergence given a lower bound (resp. upper bound) on the forward KL divergence, and a summation upper bound on two bounded KL divergences. (2) The theoretical results can be applied to some applications in deep learning and reinforcement learning. (3) This paper is well-written and easy to understand. Weaknesses: (1) Theorem 1 and Theorem 3 hold when two conditions are satisfied. For example, for the mean, it requires $\mu_1 = \mu_2$, which is too strong in practice. (2) Since the KL divergence has a wide range of applications, the two applications shown in this paper are kind of limited and not convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The KL divergence is widely used in machine learning and statistics, etc. Can the theoretical results in this paper be used for some other tasks in machine learning? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See Weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments. We address your concerns as follows. **C1**: Theorem 1 and Theorem 3 hold when two conditions are satisfied. For example, for the mean, it requires $\mu_1=\mu_2$, which is too strong in practice. **A1**: **These strong conditions are advantages rather than disadvantages when applying our theorems**. In Theorem 1 and Theorem 3, we give the strong conditions under which the supremum/infimum can be attained. We can benefit from these strong conditions in applications. As we discussed in Section 5.2, the approximate symmetry of the KL divergence between Gaussians brings the following conveniences. (1) Minimizing one of the forward and reverse KL divergences also bounds the other. (2) We can exchange the forward and reverse KL divergences for small $\epsilon$. For example, when applying the approximate symmetry of the KL divergence between Gaussians (Theorem 1), we know the forward KL divergence is small ($\leq \epsilon$). We want to guarantee that the reverse KL divergence is also small, such that bounding the forward KL divergence also bounds the reverse KL divergence. Theorem 1 states that the supremum of the reverse KL divergence is $\epsilon + 2\epsilon^{1.5}+O(\epsilon^2)$, which is a small bound. Since the conditions needed to attain the supremum are strong, it is hard to reach the supremum in practice. This implies that when these strong conditions do not hold, the reverse KL divergence is even smaller than the supremum. That is just what we want in applications. In other words, the supremum describes the *worst case* in applications. The stronger these conditions are, the harder it is to meet the worst case. To summarize, Theorem 1 has the following two meanings. (1) The supremum of the reverse KL divergence, $\epsilon + 2\epsilon^{1.5}+O(\epsilon^2)$, is small. This tells us that the worst case itself is acceptable in applications. 
(2) The strong conditions tell us the worst case barely happens in practice, so the reverse KL divergence is usually smaller than the supremum. This is just what we want in applications. Therefore, the strong conditions needed to attain the supremum are an advantage of our theorem rather than a disadvantage. We will add more discussion to explain this point in the revision. **C2**: Since the KL divergence has a wide range of applications, the *two* applications shown in this paper are kind of limited and not convincing. Can the theoretical results in this paper be used to some other tasks in machine learning? **A2**: **Yes!** We have discussed **four** (not two) applications ranging from deep anomaly detection to reinforcement learning in our paper. In the *common rebuttal for all reviewers*, we summarize the four existing applications and discuss one new important application in sample complexity research. Please see the *common rebuttal* for details. These five applications have demonstrated the usefulness of our theory. Our theory may have other potential applications. Thanks again for your valuable comments. --- Rebuttal Comment 1.1: Comment: Thank you for your clear explanations! I will update my score.
Summary: Kullback-Leibler (KL) divergence is an important measure of distance between probability distributions with uses in statistics, information theory and many other fields. However, it is not a proper distance measure, since it is not symmetric and does not satisfy the triangle inequality in general. The authors consider the KL divergence between multivariate Gaussian distributions and show that a relaxed notion of symmetry and triangle inequality holds under certain conditions. Specifically, they formulate an upper bound on KL(N2,N1) when KL(N1,N2) < epsilon, and show that it cannot be much greater than epsilon. Similarly, they give a lower bound on KL(N2,N1) when KL(N1,N2) > M. Finally, they upper bound KL(N1,N3) when KL(N1,N2) < epsilon1 and KL(N2,N3) < epsilon2. They conclude by discussing several applications of the results in deep learning and reinforcement learning. Strengths: The disadvantages of KL divergence as far as symmetry and triangle inequality go are well known, so finding conditions in which even a relaxed version of these properties hold is interesting and potentially useful. Weaknesses: Firstly, due to the continuity of the KL divergence around epsilon=0 (N1=N2), the results are not too surprising. The proofs are technical, lengthy and somewhat repetitive. Lemma G.5 in particular is not so digestible for readers. Secondly, the structure of the paper is unorthodox: usually you would have the Related Work section right after the Introduction instead of before Conclusions; that would also be a better place to start mentioning the applications for which your results might be relevant. The section called Lemmas and Notations has no lemmas. The Applications section should be called Discussion. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Can you please rearrange the structure of the paper to be more in line with convention? - Can Lemma G.5 be made more edible, or could a more informative overview be given? 
-- The authors have thoroughly addressed these points in the rebuttal. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: There is no potential negative societal impact of the work Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments. We address your concerns as follows. **C1**: due to the continuity of the KL divergence around epsilon=0, the results are not too surprising. **A1**: The task of this work is to (1) *quantify* the approximate symmetry and (2) *find* a relaxed triangle inequality. Such issues have hindered researchers for a long time. See the application section and the *common rebuttal for all reviewers* for details. **C2**: Can you please rearrange the structure of the paper to be more in line with convention? **A2**: We will move the related work section after the Introduction. **C3**: Can Lemma G.5 be made more edible, or could a more informative overview be given? **A3**: Proving Lemma G.5 is hard. Appendix G gives a proof sketch. Here we give a more detailed proof sketch. In the LHS (resp. RHS) of Inequality G.94, $\varepsilon_{x,2}$ and $\varepsilon_{y,2}$ lie in the second (resp. first) term. We will use Inequality G.94 to move $\varepsilon_{x,2}, \varepsilon_{y,2}$ to the first term such that $\varepsilon_{x,1}, \varepsilon_{x,2}$ and $\varepsilon_{y,1},\varepsilon_{y,2}$ are allocated in only one dimension (see Equation H.178). We construct the function $S(\theta_x, \theta_y)=f(w_2(\varepsilon_{x,1}+\theta_x\varepsilon_{x,2})w_2(\varepsilon_{y,1}+\theta_y\varepsilon_{y,2}))+f(w_2(\varepsilon_{x,2}-\theta_x\varepsilon_{x,2})w_2(\varepsilon_{y,2}-\theta_y\varepsilon_{y,2}))$ for $-\frac{\varepsilon_{x,1}}{\varepsilon_{x,2}}\leq \theta_x\leq 1,-\frac{\varepsilon_{y,1}}{\varepsilon_{y,2}}\leq \theta_y\leq 1 $ (see Equation G.112). When $\theta_x=\theta_y=0$, $S(\theta_x, \theta_y)=S(0,0)$ equals the LHS of Inequality G.94. Recall that $w_2(0)=1$ and $f(1)=1$. So when $\theta_x=\theta_y=1$, $S(\theta_x, \theta_y)=S(1,1)$ equals the RHS of Inequality G.94.
When increasing $\theta_x$ and $\theta_y$ from 0 to 1, $\varepsilon_{x,2}, \varepsilon_{y,2}$ are gradually moved to the first term and the LHS gradually approaches the RHS of Inequality G.94. $\theta_x$ and $\theta_y$ control how $\varepsilon_{x,2}$ and $\varepsilon_{y,2}$ are allocated between the two terms. We refer to $(\theta_x, \theta_y)$ and the corresponding pairs $(\varepsilon_{x,1}+\theta_x\varepsilon_{x,2},\ \varepsilon_{x,2}-\theta_x\varepsilon_{x,2})$ and $(\varepsilon_{y,1}+\theta_y\varepsilon_{y,2},\ \varepsilon_{y,2}-\theta_y\varepsilon_{y,2})$ interchangeably as *allocations*. When $\theta_x=\theta_y=1$, we call it an *extreme allocation*. To prove Inequality G.94, it suffices to show $S(0,0)\leq S(1,1)$. However, it is hard to prove $S(0,0)\leq S(1,1)$ directly due to the complexity brought by the Lambert $W$ function. We treat the problem as an optimization problem where $S(\theta_x, \theta_y)$ is the objective function. We use an analytical, variational version of coordinate ascent to solve the optimization problem. In the beginning, we start from a well-chosen point (see Equation G.113) arbitrarily close to the point $(\theta_x=0, \theta_y=0)$. In each iteration, we fix one of $\theta_x$ and $\theta_y$ and let the other vary. The goal is to maximize the objective function $S(\theta_x, \theta_y)$. In this way, we construct an infinite sequence of allocations and finally reach the supremum. The proof mainly consists of the following four aspects, which are much harder than a simple coordinate ascent algorithm. **Aspect 1**: We start the optimization from a well-chosen point close to $(\theta_x=0, \theta_y=0)$. In each iteration, once we fix one of $\theta_x$ and $\theta_y$ and let the other vary, we prove there exists one and only one supremum.
For example, once we fix $\theta_x=\theta_{x,0}$, where $\theta_{x,0}>0$ can be arbitrarily small, we prove there exists one and only one $-\frac{\varepsilon_{y,1}}{\varepsilon_{y,2}}< \theta_{y,1}<1$ that maximizes $S(\theta_{x,0},\theta_y)$ (see Proposition G.2). In the next step, we fix $\theta_y=\theta_{y,1}$ and find $\theta_{x,2}$ to increase $S(\theta_x, \theta_{y,1})$ further. In this way, we find an infinite sequence of allocations $(\theta_{x,0}, \theta_{y,0}), (\theta_{x,0}, \theta_{y,1}), (\theta_{x,2}, \theta_{y,1}), (\theta_{x,2}, \theta_{y,3}),\dots$. In the end, we show that over these iterations, the allocations corresponding to these local maxima become more and more *extreme*. **Aspect 2**: However, the condition describing a local maximum (e.g., $\theta_{y,1}$) is complicated. For example, Equation G.119 describes the condition that $\theta_{y,1}$ should satisfy. We cannot solve the equation analytically, so we turn to analyzing the condition to study the properties of these local maxima. We use a crucial transformation (in Equations G.120~G.124) to obtain the key Equations G.124, G.138, and G.142, which express the properties of these local maxima (corresponding to allocations) implicitly. Equations G.124, G.138, and G.142 also characterize the relation between local maxima obtained in neighboring iterations. **Aspect 3**: Prove that the sequential allocations constructed in the above iterations become more and more *extreme*. There are two problems we need to tackle. First, how to measure the *extremeness* of an allocation. We find formulae (see Equations G.148 and G.149) that measure an allocation's extremeness indirectly. Second, how to compare the extremeness of two allocations. Equations G.124, G.138, and G.142, which characterize the properties of the local maxima obtained in Aspect 2, are also used to prove that these sequential allocations become more and more extreme.
**Aspect 4**: Prove that the limit of the allocation sequence is $(\theta_x=1,\theta_y=1)$, which maximizes the objective function. We hope you enjoy our analysis. **Q4**: The section called Lemmas and Notations has no lemmas. The Applications section should be called Discussion. **A4**: The section ‘Lemmas and Notations’ introduces the notation and Lemma B.1. We will add subcaptions to improve consistency. We will also add discussion and revise the name of the Applications section. Thanks. --- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: I have read the authors' response and consider my concerns addressed. The paper grading was updated accordingly.
Summary: This paper investigates the properties of KL divergence between Gaussian distributions. The main theoretical contributions include two main theorems. The first one gives the supremum of reverse KL divergence between Gaussians when the forward KL divergence is bounded. The conditions when the supremum is attained are also identified. The second theorem gives the relaxed triangle inequality of KL divergence between Gaussians. Based on these two main theorems, this paper also derives several corollaries, including the local approximations and a lower bound of reverse KL divergence. It is also notable that the bounds are dimension-free. Finally, this paper discusses several applications of the theoretical results in OOD detection with flow-based generative models and safe/robust reinforcement learning. Overall, the research questions studied in this paper have not been answered before. The theoretical contributions of this paper are novel and solid. The proofs are carefully written and correct. Notably, the proof of Theorem 4 is rather technical. The theorems presented in this paper can be applied in various contexts involving KL divergence and Gaussian distributions. Strengths: 1. The problems studied in this paper are novel and interesting. This paper answers these research problems for the first time. 2. The proofs, which are based on the Lambert W function, are technical. 3. The theoretical results can be applied to various problems, including anomaly detection and reinforcement learning. These results also have other potential applications. Weaknesses: 1. It is possible to make some equations tighter by introducing notations earlier. For example, notations in Equations (G.146)-(G.150) can be introduced earlier to make Equations (G.128)-(G.144) more concise. 2. The derivations in Equation (E.54) and (J.193) are over-detailed. These two equations can be shortened. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weaknesses. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed social impacts and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments. We explain your concerns as follows. C1: It is possible to make some equations tighter by introducing notations earlier. For example, notations in Equations (G.146)-(G.150) can be introduced earlier to make Equations (G.128)-(G.144) more concise. A1: We will introduce these notations earlier to make Equations (G.128)-(G.144) more concise in the revision. We have two choices on where to introduce these notations (Equations (G.146)-(G.150)). The first choice is to introduce them earlier. This would make the proof more concise but harder to understand. The second choice is to retain the details of Equations (G.128)-(G.144) and introduce notations (i.e., Equations (G.146)-(G.150)) later. This would make the proof more detailed but a little longer. Note that these two choices do not affect the correctness of the proof. C2: The derivations in Equation (E.54) and (J.193) are over-detailed. These two equations can be shortened. A2: Thanks for this suggestion. In the revision, we will make the derivations in Equation (E.54) and (J.193) more concise. Thanks again for your valuable comments. --- Rebuttal Comment 1.1: Comment: I have read the response and appreciate the contributions of this paper. The theoretical contributions of this paper are novel and solid. The proofs are carefully written and correct. I keep my accept recommendation.
Summary: In this paper, the authors look at the KL divergence between two multivariate Gaussian distributions. The KL divergence is an important distance function between two distributions. However, it lacks certain nice properties that other metric distance functions such as the variation distance satisfy: namely, symmetry and the triangle inequality. This paper shows that, nevertheless, the KL divergence satisfies an approximate version of these two important properties. Specifically, if one of the KL divergences is small then the reverse KL divergence will also be small. Similarly, if two pairs of distributions have small KL divergences between them, then the remaining pair will also have a small KL divergence in between. The results are derived by posing this as an optimization problem that optimizes the unknown KL divergence subject to the constraint that the known KL divergences are small. Then certain relevant functions are analyzed to derive an upper bound for the above optimization problems. Finally, the authors argue that such approximate symmetry and approximate triangle inequality appear in several important practical applications. In fact, they mention one such problem involving deep neural networks that led them to study this question. One more application of this result is that learning a multivariate Gaussian in either KL gives a similar learning result for the reverse KL. So far algorithms have been derived separately for the two directions, see [arXiv:1710.05209] and [arXiv:2107.10450] for more. I have not carefully checked the mathematical details. Strengths: The paper works on a fundamental mathematical problem of proving that the KL divergence between multivariate Gaussians is almost a metric near 0, and gives a nice solution. This paper is very nicely written and a pleasure to read. This is really a beautiful paper. Weaknesses: None. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments, especially for suggesting one more application of our theorems. Please see the *common rebuttal for all reviewers* for a discussion of this application. The comments contain no weaknesses, questions, or limitations for us to address. Thanks again for your valuable comments!
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Here we address one concern from Reviewer (8dDm) on the applications of our theory. We think Reviewer (rMwu) may also be interested in this point, so we put the answer in the common rebuttal. Reviewer (rMwu) raises no weaknesses or questions. We thank Reviewer (rMwu) for her/his suggestion of one more important application of Theorem 1 (approximate symmetry of KL divergence between Gaussians) to sample complexity research. Reviewer (8dDm) raises one question as follows. “Since the KL divergence has a wide range of applications, the *two* applications shown in this paper are kind of limited and not convincing. Can the theoretical results in this paper be used to some other tasks in machine learning?” **A**: As pointed out by the reviewers, our theory can be applied to various problems ranging from deep learning to reinforcement learning to sample complexity research. **Firstly**, we have discussed **four** applications in Section 5 and Appendix L. They are: (1) *The Motivating Application on OOD Detection Using Flow-Based Models*. Both the approximate symmetry (Theorem 1) and the relaxed triangle inequality (Theorem 5) are applied in the motivating application. Please see Section 5.1 and the manuscript (anonymous version) “Kullback-Leibler Divergence-Based Out-of-Distribution Detection with Flow-Based Generative Models” in the supplementary materials. (2) *Providing a Theoretical Guarantee for the Continuous Gaussian Policy in the AWAC Method [arXiv:2006.09359]*. Nair et al. provide a theoretical guarantee for discrete policy distributions. The approximate symmetry (Theorem 1) can extend their guarantee to continuous Gaussian policies. Please see Appendix L.1. (3) *Bringing New Insights to an Existing Reinforcement Learning Algorithm*. In [arXiv:1806.06920, ICLR 2018], Abdolmaleki et al. propose the MPO algorithm for reinforcement learning.
They use Expectation-Maximization (EM) to solve control problems and use constraints on KL terms in both the E- and M-steps. Theorem 1 can eliminate such a difference for continuous Gaussian policies. Please see Appendix L.1. (4) *Extending a One-Step Safety Guarantee to Multiple Steps in Reinforcement Learning*. In [arXiv:2201.11927, ICML 2022], Liu et al. propose an Expectation-Maximization-style approach for learning safe policies in reinforcement learning. Our relaxed triangle inequality extends their one-step robustness guarantee to multiple steps. Please see Appendix L.2 and [arXiv:2201.11927, ICML 2022] for details. **Secondly**, as Reviewer (rMwu) pointed out, Theorem 1 (approximate symmetry of KL divergence) has one more important application. Theorem 1 can be used to **extend existing theoretical results in sample complexity research**. In “Ashtiani et al., Near-optimal Sample Complexity Bounds for Robust Learning of Gaussian Mixtures via Compression Schemes, Journal of the ACM, 2020, arXiv:1710.05209”, the authors propose a compression-based learning method and establish an optimal lower bound on the sample complexity of learning Gaussian mixtures. For a fixed target Gaussian mixture distribution $P$, the learning method receives a sample set and outputs a distribution $Q$ satisfying $KL(Q||P)\leq \epsilon$. See page 26, the inequality below Equation (17) in [arXiv:1710.05209, JACM 2020], where the KL divergence is used to bound the total variation distance. Their conclusion applies to a single Gaussian when the number of mixture components is 1. One open problem proposed in their paper is: what is the sample complexity of learning Gaussian mixtures with a guarantee in the reverse KL divergence, $KL(P||Q)\leq \epsilon$ (see page 35, last paragraph, in [arXiv:1710.05209, JACM 2020])? Our Theorem 1 on the approximate symmetry can extend the existing theory and answer this open problem in the single-Gaussian case.
According to Theorem 1, when $KL(Q||P)\leq \epsilon$, we have $KL(P||Q) \leq \epsilon+2\epsilon^{1.5}+O(\epsilon^2)$. The supremum equals $O(\epsilon)$ when $\epsilon<1$. This implies that the bounds on the forward and reverse KL divergences have the same order. Therefore, the optimal sample complexity for learning a single Gaussian is the same when the reverse KL divergence is used as the guarantee. Similarly, in “Bhattacharyya et al., Learning Sparse Fixed-Structure Gaussian Bayesian Networks, AISTATS 2022, arXiv:2107.10450”, the authors propose a learning method for sparse fixed-structure Gaussian Bayesian networks, which can be treated as a representation of multidimensional Gaussian distributions (see Koller's book: Probabilistic Graphical Models: Principles and Techniques). They prove the sample complexity of their method for learning a Gaussian distribution with the guarantee $KL(P||Q)\leq \epsilon$, where $P$ is the target Gaussian distribution from which samples are drawn and $Q$ is the learned Gaussian distribution. Specifically, on page 9 of [arXiv:2107.10450, AISTATS 2022], the authors note that their theoretical result uses the reverse KL divergence while the JACM paper discussed above uses the forward KL divergence. Again, Theorem 1 can extend their conclusion to the forward KL divergence and eliminate the difference between forward and reverse KL divergence. Just as Reviewer (rMwu) says, "Learning a multivariate Gaussian in either KL gives a similar learning result for the reverse KL. So far algorithms have been derived separately for the two directions". Theorem 1 can eliminate the difference between forward and reverse KL divergence in this scenario. In other words, we answer the open problem proposed in [arXiv:1710.05209, JACM 2020] in the single-Gaussian case. We can see that the asymmetry of KL divergence between Gaussians has hindered researchers for a long time.
Note that Theorem 1 allows us to exchange forward and reverse KL divergences in a derivation on demand, which brings further convenience. We plan to explore this direction in the future. Since the KL divergence and Gaussians are widely applied, our theory may have more potential applications. Thanks.
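As a quick numerical illustration of the approximate symmetry discussed above, the sketch below (our own, not the paper's proof machinery) uses the standard closed-form KL divergence between multivariate Gaussians and compares the forward and reverse divergences of two nearby Gaussians against the Theorem 1-style bound $\epsilon + 2\epsilon^{1.5}$ quoted in the rebuttal (dropping the $O(\epsilon^2)$ term):

```python
import numpy as np

def kl_gauss(mu1, S1, mu2, S2):
    """Closed-form KL(N1 || N2) for multivariate Gaussians N1=(mu1,S1), N2=(mu2,S2)."""
    d = len(mu1)
    S2inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(S2inv @ S1) + diff @ S2inv @ diff
                  - d + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

# Two nearby 2-D Gaussians (arbitrary illustrative parameters)
mu1, S1 = np.zeros(2), np.eye(2)
mu2, S2 = np.array([0.1, 0.0]), np.diag([1.05, 0.95])

fwd = kl_gauss(mu1, S1, mu2, S2)   # treat this as epsilon
rev = kl_gauss(mu2, S2, mu1, S1)
bound = fwd + 2 * fwd**1.5         # Theorem 1-style bound, O(eps^2) term dropped
print(fwd, rev, bound)
assert rev <= bound  # holds for this pair; a sanity check on one instance, not a proof
```

For this pair the reverse divergence indeed stays within order $\epsilon$ of the forward one, matching the "same order" claim used in the sample-complexity discussion.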
NeurIPS_2023_submissions_huggingface
2023
Summary: In this paper, the authors prove the following interesting mathematical properties of the Kullback-Leibler (KL) divergence between multivariate Gaussian distributions. Although the KL divergence is not a proper distance (in the sense that it is not symmetric) and does not satisfy the triangle inequality in general: 1. if $KL(N_2||N_1) \leq \epsilon$ then it can be shown that the supremum of $KL(N_1||N_2)$ can be upper bounded by an explicit function of $\epsilon$ that is of order $\epsilon$ for small $\epsilon$, so that the KL is approximately symmetric in the Gaussian case when the distributions are close; 2. an infimum of $KL(N_1||N_2)$ is also derived for $KL(N_2||N_1) \geq M$; and 3. for three Gaussians $N_1, N_2, N_3$, one has an upper bound on $KL(N_1||N_3)$ that verifies the triangle inequality up to a factor of three, again when the three Gaussians are close. The authors discuss the basic proof ideas and some possible applications in Section 5. Strengths: This paper focuses on fundamental theoretical properties of the Kullback-Leibler divergence between multivariate Gaussian distributions that, to the best of my knowledge, are novel and have wide applications in ML. I've not checked the detailed proofs, but the proof sketch looks compelling. The proof idea is very interesting and may be of independent interest. Weaknesses: The paper is in good shape; I do not have specific concerns to raise. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. The authors mention that they propose a unified OOD detection algorithm KLODS, but no detail about KLODS is given; it would be great if the authors could elaborate more on this. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: This paper is primarily of theoretical nature, and I do not see any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your careful and valuable comments. We address your concerns as follows. C1: The authors mention that they propose a unified OOD detection algorithm KLODS, but no detail about KLODS is given; it would be great if the authors could elaborate more on this. A1: We will add more details about the OOD detection algorithm in the revision. The OOD detection work motivating the theoretical research in this submission is elaborated in another manuscript of ours, titled “Kullback-Leibler Divergence-Based Out-of-Distribution Detection with Flow-Based Generative Models”. Currently, this manuscript is under review at another journal. We have also appended an anonymous version in the supplementary material. Please see the appended manuscript for details. Thanks again for your valuable comments. --- Rebuttal Comment 1.1: Comment: Thanks for the comments, I have read the rebuttal.
Linear Time Algorithms for k-means with Multi-Swap Local Search
Accept (poster)
Summary: This paper studies local-search algorithms for k-means clustering. The goal is to obtain a local-search algorithm which (1) gives a constant-factor approximation and (2) runs in time linear in the dataset size. In the past literature, one can distinguish essentially two types of local-search algorithms for this problem. Single-swap local search has a simple swap procedure but can only guarantee approximation ratios of at least around 500. Multi-swap local search uses a richer swap structure to give better guarantees (the current best is a (9+\epsilon)-approximation) but is often impractical, since finding a good swap often requires enumerating over all subsets of some candidate set, which makes the algorithm slower than linear time (in the dataset size). This paper tries to reconcile these two directions by giving a new multi-swap local-search algorithm which gives an approximation factor of 50+\epsilon and runs in essentially linear time. One of the main ideas in this work is to use sampling techniques in the spirit of the famous k-means++ algorithm and only then try to swap current centers with the sampled candidates. Because the set of sampled candidates is much smaller than the actual dataset, the running time is significantly improved. They also give experiments that show that their algorithm performs well in practice. Strengths: In my opinion, the strengths of the paper are as follows. 1) Interesting result and techniques. Not being an expert, it seems that this is the first time that the sampling technique is used in the context of multi-swap local search. 2) The algorithm seems to perform well in practice. The experimental section is quite thorough. Weaknesses: A minor weakness in my opinion is that there is still a significant gap between the lower bound of 9 for local search and the proven upper bounds.
Typos: line 88: benefit line 105: much *more* line 145: I guess the OPT value is missing in the inequality Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The authors actually propose 2 algorithms, first MLS, for which they prove theoretical guarantees, and a second, MLSP, which they say is "more practical", but I do not see anywhere in the paper that the same approximation guarantee of 50 holds for MLSP as well. In the experimental section, MLS seems to perform well, but it is unclear to me if this is the same MLS as the one that was analyzed in Section 3 (line 336 the authors say they use the sampling method of MLSP for the MLS algorithm). Does this change the theoretical guarantee of MLS? It would be great if the authors could clarify these points. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1: The authors actually propose 2 algorithms, first MLS for which they prove theoretical guarantees. And a second MLSP which they say is "more practical" but I do not see anywhere in the paper that the same approximation guarantee of 50 holds for MLSP as well. In the experiment sections, MLS seems to perform well, but it is unclear to me if this is the same MLS as the one that was analyzed in section 3 (line 336 the authors say they use the sampling method of MLSP for the MLS algorithm). Does this change the theoretical guarantee of MLS? It would be great of the authors could clarify these points** Response: We thank the reviewer for raising this important question. The proposed MLSP algorithm is actually a heuristic algorithm for achieving better practical performance in experiments. The main heuristic strategies for MLSP are: 1) random sampling for accelerating the clustering cost updating process; 2) recombination for finding potentially better solutions. As a result, the theoretical bound $50(1+\frac{1}{t}) + \epsilon$ might not be applicable to MLSP. The proposed theoretical guarantee for the MLS algorithm is based on the standard worst-case analysis. However, in practical scenarios, heuristic strategies can help to improve both the clustering quality and the runtime of the proposed algorithm. For the MLS algorithm used in the experiments, as stated in lines 336-337, the sampling strategy is also used to accelerate the clustering cost updating process. Thus, the bound of $50(1+\frac{1}{t}) + \epsilon$ might not hold in this case. In the experiments, it can be seen from Table 2 that the running times of the MLS and MLSP algorithms can be accelerated by using heuristic strategies such that they achieve better performance compared with other local search methods, and MLSP performs much better than MLS.
How to give a theoretical bound on the approximation guarantee for the MLSP algorithm is an interesting problem that deserves further study. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response and clarification. After looking at other reviews and corresponding rebuttals, my initial assessment remains. I think this is an interesting paper.
Summary: The authors study the well-known k-means problem: Given a set of points in Euclidean space, compute k centers such that the sum of squared distances between points and their closest center is minimized. A constant-factor approximation to this problem is achieved by local search: Start with arbitrary k centers and improve the solution by choosing a point which is currently not a center and replacing a center in the current solution by this point. If we exchange at most t centers instead of one in the local search method, Kanungo et al. showed that the algorithm computes a (3+2/t)^2-approximation. Lattanzi and Sohler proposed a combination of k-means++ sampling and local search (LS++): Start with the k centers computed by the k-means++ solution and improve it via local search. In every step of local search, the new center is sampled proportionally to its current cost in the solution. Lattanzi and Sohler showed that their algorithm computes a 509-approximation after O(k*log(log(k))) local search steps. The authors extend the algorithm of Lattanzi and Sohler (MLS): Instead of sampling only one point, sample t points S simultaneously and search for a subset T of S such that replacing |T| points in the current solution by T reduces the cost significantly. The main result is the following: After O(k^O(t)*log(epsilon^{-1}*log(k))) local search steps the algorithm returns a (50(1+1/t)+epsilon)-approximation in expectation. Each local search step takes time O(n*d*k^O(t)), where n is the number of points and d is the dimension of the Euclidean space. In the experimental part of the paper the authors implemented the MLS algorithm for t=2 and proposed a second algorithm (MLSP): 1) Use k-means++ for an initial solution 2) For R rounds do the following in each round: Apply the MLS algorithm and check whether the current solution C is the best observed so far. 
If yes, again apply MLS to improve it; if not, exchange some (or all) centers from C according to a procedure developed by the authors and apply MLS with the new center set. So MLSP restarts MLS with a new set of centers if the local optimum which MLS computes is not good. This restart is performed R times. The authors compare the MLS and MLSP algorithms to three algorithms, LS++, Lloyd and FLS, on several datasets (50,000 to 11,000,000 points) for k=10. In most cases the MLSP algorithm computes the best solution but has a significantly higher run-time than MLS, Lloyd and LS++. Strengths: The extension of LS++ to an algorithm which samples more than one center is natural, and the resulting improvement in the approximation factor is significant. Even though some parts of the proof are similar to the approaches of Lattanzi and Sohler and of Kanungo et al., the proof involves some non-trivial steps and is original in my opinion. The experiments show that the algorithm MLSP outperforms other algorithms with respect to the quality of the solution, even in the setting where all algorithms run for a fixed amount of time. This may be due to the restart of the local search procedure when it runs into a bad local minimum, so it seems to be a good heuristic to implement in any local search procedure. Weaknesses: Especially the technical part of the paper is very hard to read, which could have been prevented since it is mostly not due to the difficulty of the proofs but because of the complex notation and sometimes imprecise statements. While the result in the main theorem seems plausible, I am not completely sure about the correctness of the proof. It would be nice to have experiments for k>10, since the choice of k<=10 does not seem to fit the considered data sizes of 50,000 to 10,000,000 points. The run-time of MLS and MLSP rises faster with respect to k than the run-time of LS++, so it would be nice to see how their algorithms perform for larger values of k. 
Furthermore, the authors outline that the MLS algorithm is relatively fast, especially when compared to its simpler version LS++, where only one center is exchanged in every step, which was confusing at first. However, the speed-up in the run-time is achieved by an approximate computation of the clustering cost of a solution, which in my opinion could have also been implemented for LS++ for a fair comparison to MLS. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: L 69: Should this be n^t instead of n*k^t? If not, please explain. (appears also in L 76, Table 1) L 88: Missing O(...) in the run-time L 123: I feel that you should state Lemma 2 as a theorem L 136-138: You should mention that this only holds when the cost of the current solution is larger than ((50+1/t)+epsilon)*OPT. L 138: we can get an approximation -> we get an approximation L 145: In the lower bound on the cost of C, the cost of an optimal solution is missing. L 167: Maybe just introduce a map \phi which assigns every center in C* the nearest center in C? Then you do not need the notation s_c and you can denote the set of optimal centers captured by a center c in C by \phi^{-1}(c). I feel that the notation should be simplified in a lot of places throughout the paper. L 172: find a set \sigma_h of unused lonely centers -> let \sigma_h be an arbitrary set of unused lonely centers L 173: in the definition of A: don't we have c_h^* in \phi(s_c_h^*) already by definition, as c_h^* is captured by s_c_h^*? L 176-177: Please specify how you construct type 2 matched swap pairs. At first it seems that every unused lonely center from C should be in at most one type 2 matched swap pair, but this is not possible. I think what you later need is that every unused lonely center and every c_h^* with |\phi(s_c_h^*)|>t should form a type 2 matched swap pair? 
L 179-182: For the union of clusters with centers in V you use X(V), while for the union of clusters with centers in Q you use Z(J(Q)); this is very confusing. I suggest simplifying the notation (maybe just directly write the union of clusters instead of introducing a new notation) and also not switching between center sets, cluster sets and unions of cluster sets so much in the whole paper when it is avoidable. L 212-222: maybe it is better to state this as a lemma L 219-220: missing two " ) " in this chain of inequalities; also please put this into an align environment, as it is hard to read if you include it directly in the text L 227: cost of those optimal clusters -> cost of optimal clusters L 230: |Q|>1 : Isn't this automatically the case for sets in H_2? L 235: c_h^* can find clustering centers close to it -> c_h^* is close to a center from C L242: you use H_1, H_2 etc. as sets of centers, now H_2^* is some set of clusters, this is a little bit confusing L267: Shouldn't the probability Omega(1/k) also depend on t because of the definition in Line 231? L268-269: I believe that this should be independence of events, not the union bound L281: comma at beginning of line L281: Could you explain why we add Q_S'' and not Q_T to H_G? Algorithm 3 Line 11: Could you explain how exactly this swapping works? Also, why not simply use Lloyd here? Appendix: L428: In the following sequence of equations, first inequality: remind the reader why s_{o_p} is not in V L446: rename one of the (c_h^*,c_h^m) in the argmin equation L448: \mu is previously used as the centroid of a set, maybe find another letter L449: In the following sequence of equations, first inequality: should \mu(L) be \mu(M_3)? 
If not, please define \mu(L) L452: Please explain this upper bound L460: please state the bound for the relaxed triangle inequality as a lemma somewhere L463: In the following sequence of equations: Z(Q_T) -> Z(J(Q_T)), Z(Q'_S) -> Z(J(Q'_S)), Z(Q) -> Z(J(Q)) L474-476: It is not clear if you mean the failure probability of not sampling at least one point correctly or all points correctly L484: In the following inequality: shouldn't "t" be "|v(W)|" in the exponent? L488: In the following inequalities, last inequality: \Delta(P,C) -> 300\Delta(P,C) L496: "unique lonely center" is misleading, as one could think that l(c_h^*) is specific to P_h^* L515: \lambda t^{-k} -> \lambda k^{-t} L520-528: this paragraph equals L512-520 -> remove Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n.a. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Q1: Regarding $n^t$ instead of $nk^t$ in L69** Response: The swap size $O(nk^t)$ used in L69 is actually $O((nk)^t)$, which is a typo. Sorry for the confusion. **Q2: Typos and misleading statements** Response: We thank the reviewer for pointing these out, and we apologize for being unable to respond to each typo and misleading statement individually due to the character limitations in the rebuttal. In our revised version, we will carefully correct all the typos and misleading statements in the paper. **Q3: Maybe just introduce a map $\phi$ which assigns every center in $C^{*}$ the nearest center in $C$?** Response: Thanks for the great suggestion. To make the notation simpler and easier to follow, in our revised version, we will introduce a mapping function $\phi$ to map every center in $C^*$ to the nearest center in $C$, and use $\phi^{-1}(c)$ to denote the set of centers captured by $c$. **Q4: Regarding the definition of $A$ in L173** Response: Thanks for pointing this out. In the construction of $A$, $c_h^*$ already belongs to $\Psi(s_{c_h^*})$, and $\Psi(s_{c_h^*}) \cup \{c_h^*\}$ is indeed $\Psi(s_{c_h^*})$. **Q5: L176-177, please specify how you construct type 2 matched swap pairs** Response: We are sorry for the confusion caused by the statements on the construction of type 2 matched swap pairs. In L176, every unused lonely center should participate in the construction of type 2 swap pairs with every $s_{c_h^*}$ satisfying the condition $|\Psi(s_{c_h^*})|>t$. In our revised version, we will make the statement clearer. **Q6: Regarding the notations in L179-182** Response: We thank the reviewer for the kind suggestions. In the revised version, we will make the notation for the union of clusters with centers in $V$ and $Q$ consistent. **Q7: L230, $|Q|>1$. 
Isn't this automatically the case for sets in $H_2$** Response: $|Q|>1$ is implicit in the definition of $H_2$. We will remove it in the revised version. **Q8: Regarding the notations in L242** Response: We are sorry for the confusion caused by the notations. In our revised version, we will make the notations consistent. **Q9: Regarding the probability bound in L267** Response: We thank the reviewer for pointing this out. Since the swap size $t$ is usually a constant and could be much smaller than $k$, we omitted the dependence on $t$ in the probability lower bound. In the revised version, we will add $t$ to the lower bound. **Q10: L281: Could you explain why we add $Q_S''$ and not $Q_T$ to $H_G$** Response: Thanks for raising this question. $H_G$ is defined as the collection of optimal clusters whose clustering cost is relatively small with respect to $C$. Recall that $Q_T = Q_S'' \cup Q_L$. For each $P_h^* \in Q_L$, with good probability, data points close to $c_h^*$ can be sampled in the $m(P_h^*)$-th iteration of the independent $t$ sampling iterations. Thus, $Q_S''$ is added to $H_G$ while $Q_L$ is not. **Q11: Regarding L11 in Algorithm 3** Response: We thank the reviewer for raising this important question. In L11 of Algorithm 3, the swapping works as follows. For each center $c_h \in C$, the algorithm finds the 50 data points in $P$ nearest to it (denoted as the set $N(c_h)$). For each point $x \in N(c_h)$, a swap pair $(c_h, x)$ is constructed. If one of the constructed swap pairs induces a clustering cost reduction, the algorithm executes the swap. In L11 of Algorithm 3, the nearest neighbor search and the swap pair construction are repeated until the current clustering reaches convergence. The reason that Lloyd's method is not used in L11 is to prevent the algorithm from falling into a poor local optimum too early. 
Once Lloyd's method is used to adjust the centers, a local optimum is obtained, which reduces the possibility of finding potentially better solutions. **Q12: Regarding the notations in L449** Response: The $\mu(L)$ term that appears in the first inequality actually refers to $L$, which is a typo. Sorry for the confusion. **Q13: L452: Please explain this upper bound** Response: We thank the reviewer for raising this question. In the following, we give a brief explanation of how to obtain the upper bound in L452, which will appear in the revised version. Let $L1 = \cup_{c_h^* \in L}s_{c_h^*}$. By the definition of $\mu(M_3)$ and $L$, for each $s_{c_h^*} \in L1$, we can find a set $z(s_{c_h^*}) \subseteq \mu(M_3)$ of size $|\Psi(s_{c_h^*})| - 1$ such that $z(a) \cap z(b) = \emptyset$ for any $a, b \in L1$. For each $s_{c_h^*} \in L1$, since $|\Psi(s_{c_h^*})| > t$, we have $|\Psi(s_{c_h^*})|/|z(s_{c_h^*})| \le 1 + 1/t$. Summing over the centers in $L$, we have $|L| = \sum_{c_h^* \in L}1 = \sum_{s_{c_h^*} \in L1} |\Psi(s_{c_h^*})| \le \sum_{s_{c_h^*} \in L1} |z(s_{c_h^*})|(1+1/t) \le |\mu(M_3)|(1+1/t)$. **Q14: Regarding the relaxed triangle inequality in L460** Response: We thank the reviewer for the nice suggestion. In our revised version, we will include a lemma stating the relaxed triangle inequality in the preliminary section. **Q15: Regarding the failure probability in L474-476** Response: We thank the reviewer for pointing this out. To make the statement clearer, we will rewrite it as "The probability of failing to sample a point close to each of the optimal clustering centers". **Q16: L484 about $|\nu(W)|$ in the exponent** Response: We thank the reviewer for pointing this out. The reason why $t$ is in the exponent instead of $|\nu(W)|$ is the following relation between $t$ and $|\nu(W)|$. By the definition of $\nu(W)$, we have $\nu(W) = \lbrace P_h^*: P_h^* \in Q''_S \rbrace$ and $Q''_S$ is a subset of $Q$. Note that $Q \in H_2$. 
By the definition of $H_2$, we have $|Q| \le t$. Thus, $|\nu(W)| \le t$ and $t$ can be used to replace $|\nu(W)|$ in the exponent of the probability bound. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response.
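The nearest-neighbor swap refinement described in the response to Q11 (build swap pairs from each center's nearest data points and execute improving swaps until convergence) can be sketched roughly as follows; this is an illustrative simplification for 2-D points, not the authors' code, and the helper names are hypothetical:

```python
def cost(points, centers):
    """k-means cost for 2-D points: squared distance to the nearest center."""
    return sum(
        min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
        for px, py in points
    )

def nn_swap_refine(points, centers, m=50):
    """Repeatedly try swapping each center with one of its m nearest data
    points; keep any swap that strictly reduces the cost, until none does."""
    centers = list(centers)
    improved = True
    while improved:
        improved = False
        for i in range(len(centers)):
            cx, cy = centers[i]
            # the m data points nearest to this center are the swap candidates
            nearest = sorted(points, key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)[:m]
            base = cost(points, centers)
            for p in nearest:
                trial = centers[:i] + [p] + centers[i + 1:]
                if cost(points, trial) < base:
                    centers, base, improved = trial, cost(points, trial), True
    return centers
```

Because each accepted swap strictly decreases the cost over a finite set of configurations, the loop terminates; restricting candidates to the m nearest points is what keeps each pass cheap compared to trying all n points.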
Summary: Local search is a well-studied technique for clustering problems. In the k-means problem, the simple swap heuristic states that we start with an initially chosen set of k centers and then, at each local search step, check if swapping an existing center with a new one leads to a decrease in cost. It is known that this gives a constant factor approximation algorithm (with the constant being 25). The $t$-swap heuristic generalises this local search algorithm by swapping a subset of $t$ centers at each step. It is clear that the running time of such an algorithm will be exponential in $t$, though the approximation factor converges to 9 as $t$ increases. A related line of work on the k-means problem deals with Lloyd's algorithm. In this algorithm, we choose an initial set of k seed centers and then run a different local search algorithm on it. A lot of work has happened on how to choose these initial centers. The k-means++ heuristic says that we choose the initial centers as follows: the first center is picked uniformly at random. After having picked $i$ initial centers, a point is chosen as the $(i+1)^{th}$ one with probability proportional to the square of its distance from the $i$ centers chosen already. It is known that if we choose a set of $k$ centers in this manner, then it can lead to an $O(\log k)$-approximation algorithm for the k-means objective (and this bound is tight). Subsequent works have focussed on whether we can improve this initial set in a small amount of time. For example, reference [10] showed that if we run the single swap heuristic on the $k$ centers chosen by this random sampling procedure for about $O(k)$ steps, then we get a 509-approximation algorithm. This paper is an extension of this last result, giving a fast implementation of the $t$-swap heuristic on an initial set of centers chosen using the random sampling procedure described above. 
Each local search step chooses a set of $t$ new centers using a random sampling procedure and checks if it can replace an existing set of $t$ centers in the current solution. The paper shows that the running time now becomes $O(k^{O(t)} nd)$, and hence is linear in $n$ for fixed $k,t$. They also show that the resulting set of centers has an approximation ratio of about 50. The analysis has the same structure as that of [10], though analyzing $t$-swap is much more tricky and requires new insights. Strengths: 1. Usually $t$-swap heuristics are expensive to implement; this paper gives a non-trivial implementation which is efficient in practice. 2. The analysis of $t$-swap requires new ideas. Weaknesses: 1. The improvement in experiments is only marginal (in most cases less than 5%). 2. The usual local search heuristic (with the single swap strategy), i.e., reference [9] in Table 1, has running time $O(nkd \log \Delta)$ and an approximation ratio of 25. This running time almost matches the running time of the result in this paper (up to a $\log \Delta$ factor), and has a better approximation ratio. So it is not clear if the theoretical contribution is significant. I also don't see a comparison with [9] (with the single swap heuristic) in the implementation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Please compare with [9] in terms of theoretical and experimental results. 2. It may be worth explaining what new ideas are needed in the analysis as compared to [10]. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No negative impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1: Please compare with [9] in terms of theoretical and experimental results** Response: We thank the reviewer for raising this question. We are sorry for the confusion caused by the typo in the time complexity of the algorithm in [9]. The term $O(n^{t}k^{t}d\log\Delta)$ in Table 1 should be $O(n^{t+1}k^{t}d\log\Delta)$. Our proposed MLS algorithm can achieve a $(50(1+\frac{1}{t})+\epsilon)$-approximation in time $O(ndk^{2t+1}\log(\epsilon^{-1}\log k))$. It can be seen that even with a single swap strategy ($t=1$), the algorithm in [9] (denoted as the LS algorithm for short) has at least quadratic running time in the data size, while the running time of our proposed MLS algorithm has a linear dependence on the data size. For the LS algorithm, it is not surprising to get a better ratio with quadratic running time. For experimental performance, as shown in [8] (the FLS algorithm), even with the single-swap strategy, the LS algorithm can hardly handle datasets with size over 10,000 due to its quadratic time complexity. Thus, we did not include the results of the LS algorithm for comparison in our experiments. In the following table (see Table 1), we give additional results comparing our MLS and MLSP algorithms with the LS algorithm on the datasets used in our experiments with size smaller than 10,000. To ensure a fair comparison, we maintain a consistent number of 400 local search iterations, as employed in our previous experiments. It can be seen that our proposed MLSP algorithm outperforms the LS algorithm in terms of both clustering cost and running time. On average, the clustering cost is reduced by 1.15\% compared with LS using our MLSP algorithm, while the running time of our MLSP algorithm is more than 1000 times faster than that of the LS algorithm. 
**Table 1: Comparison results of LS, MLS and MLSP on datasets with size smaller than 10,000** |Iris(150*4)|Best Cost|Average with Std|Time(s)|Abs_FL(720*21)|Best Cost|Average with Std|Time(s)| |:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:| |LS|29.79|30.001$\pm$0.23|300.61|LS|**1.0786E+06**|1.0873E+06$\pm$7.8E+03|1513.83| |MLSP|**29.74**|**29.761$\pm$0.03**|0.43|MLSP|**1.0786E+06**|**1.0786E+06$\pm$0**|0.91| |MLS|29.93|30.214$\pm$0.19|**0.17**|MLS|**1.0789E+06**|1.0970E+06$\pm$1.1E+04|**0.47**| |**SEEDS(210*7)**||||**TR(980*10)**|||| |LS|214.951|218.9401$\pm$5.79|436.54|LS|**762.16**|767.88$\pm$4.1|2049| |MLSP|**214.523**|**215.079$\pm$0.49**|0.63|MLSP|**762.16**|**764.71$\pm$1.9**|0.94| |MLS|214.954|219.294$\pm$2.82|**0.46**|MLS|785.01|805.89$\pm$11.5|0.56| |**GLASS(214*9)**||||**SGC(1000*21)**|||| |LS|**251.859**|252.9835$\pm$0.91|424.28|LS|1.1741E+08|1.2026E+08$\pm$3.5E+06|2146.91| |MLSP|**251.859**|**251.994$\pm$0.54**|0.52|MLSP|**1.1734E+08**|**1.1752E+08$\pm$5.1E+04**|9.39| |MLS|253.294|254.001$\pm$0.55|**0.46**|MLS|1.1735E+08|1.1846E+08$\pm$9.6E+04|**0.62**| |**BM(249*6)**||||**HEMI(1955*7)**|||| |LS|**375974**|378169$\pm$2739.33|528.74|LS|2.7073E+06|2.7346E+06$\pm$5.7E+05|3713.27| |MLSP|**375974**|**376276$\pm$265.69**|0.52|MLSP|**2.7070E+06**|**2.7123E+06$\pm$6.7E+03**|1.08| |MLS|378649|384982$\pm$4131.91|**0.37**|MLS|2.7292E+06|2.7886E+06$\pm$6.9E+04|**0.59**| |**UK(258*5)**||||**pr2392(2392*2)**|||| |LS|**29.268**|29.602$\pm$0.34|538.55|LS|**5.3578E+09**|5.4745E+09$\pm$9.0E+08|4958.77| |MLSP|**29.268**|**29.286$\pm$0.01**|0.61|MLSP|**5.3578E+09**|**5.3668E+09$\pm$2.3E+07**|1.16| |MLS|29.411|30.004$\pm$0.27|**0.39**|MLS|5.3629E+09|5.4203E+09$\pm$6.2E+07|**0.58**| |**HF(299*12)**||||**TRR(5456*24)**|||| |LS|**6.9604E+10**|7.2964E+10$\pm$4.1E+09|634.03|LS|**1.3796E+05**|1.3834E+05$\pm$566.8|12646| |MLSP|**6.9604E+10**|**6.9604E+10$\pm$0**|0.71|MLSP|**1.3796E+05**|**1.3829E+05$\pm$253.1**|2.99| 
|MLS|**6.9604E+10**|7.0148E+10$\pm$4.3E+08|**0.4**|MLS|1.4591E+05|1.4939E+05$\pm$2.0E+03|**0.64**| |**WHO(440*8)**||||**AC(7195*22)**|||| |LS|**3.3631E+10**|3.3764E+10$\pm$1.7E+08|913.23|LS|**1163.7**|1169.19$\pm$8.42|18036.1| |MLSP|**3.3631E+10**|**3.3676E+10$\pm$4.6E+07**|0.89|MLSP|**1163.7**|**1168.2$\pm$9.01**|1.51| |MLS|3.4324E+10|3.5212E+10$\pm$5.6E+08|**0.4**|MLS|1165.5|1181.7$\pm$12.8|**0.64**| |**HCV(572*12)**||||**rds_cnt(10000*4)**|||| |LS|1.1312E+06|1.1458E+06$\pm$4.4E+03|1311.96|LS|1.6104E+06|1.6364E+06$\pm$2.3E+04|20544.7| |MLSP|**1.1311E+06**|**1.1410E+06$\pm$2.2E+04**|0.9|MLSP|**1.6099E+06**|1.6105E+06$\pm$7.2E+02|0.88| |MLS|1.1505E+06|1.2135E+06$\pm$4.2E+04|**0.49**|MLS|1.6146E+06|1.6520E+06$\pm$2.5E+05|**0.42**| **Question 2: It may be worth explaining what new ideas are needed in the analysis as compared to [10]** Response: Thanks for the great suggestion. For the LS++ method with the single-swap strategy in [10], its theoretical guarantee relies heavily on the one-to-one matched swap pairs constructed. Directly applying the ideas in [10] to the multi-swap analysis faces the following challenges: (1) the theoretical bounds on the clustering cost after multi-swaps cannot be established; (2) for multi-swap local search, a successful swap requires that data points close to each center in a subset of optimal clustering centers be sampled simultaneously. However, the LS++ method can only guarantee that a data point close to a single optimal clustering center is sampled with high probability. To establish the bounds on the clustering cost after multi-swaps, we extend the notion of swap pairs to swap sets and propose a new consecutive sampling method to construct candidate centers for swaps. To analyze the success probability in each local search step, we propose new structures that divide optimal clusters into different groups for establishing a lower bound on the sampling success probability. 
In summary, new definitions for swap pairs and new structures for analyzing the success probability lower bounds are needed in the analysis as compared to [10].
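To make the multi-swap idea discussed above concrete, a naive sketch of one local search step is given below: sample t candidate points with probability proportional to their current cost, then brute-force all swaps of up to t candidates against up to t current centers. This is only an illustration under simplifying assumptions (2-D points, exhaustive subset search), not the MLS algorithm as analyzed in the paper:

```python
import itertools
import random

def cost(points, centers):
    """k-means cost for 2-D points."""
    return sum(
        min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
        for px, py in points
    )

def multi_swap_step(points, centers, t, seed=0):
    """One illustrative multi-swap step: cost-proportional sampling of t
    candidates, then exhaustive search over swap subsets of size 1..t."""
    rng = random.Random(seed)
    # sampling weight of a point = its current contribution to the cost
    weights = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
               for px, py in points]
    sampled = rng.choices(points, weights=weights, k=t)
    best, best_cost = list(centers), cost(points, centers)
    for r in range(1, t + 1):
        for ins in itertools.combinations(sampled, r):
            for out in itertools.combinations(range(len(centers)), r):
                trial = [c for i, c in enumerate(centers) if i not in out] + list(ins)
                trial_cost = cost(points, trial)
                if trial_cost < best_cost:
                    best, best_cost = trial, trial_cost
    return best
```

Since the step only accepts a swap set that lowers the cost, the returned solution is never worse than the input; a real implementation would of course replace the brute-force subset search and exact cost evaluations with the faster machinery the paper develops.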
Summary: This paper proposes a multi-swap local search algorithm for the k-means problem with linear running time in the data size, which also achieves a better approximation ratio compared with other local search algorithms that adopt the single-swap strategy. To benefit more from such an algorithm when handling large-scale datasets, the authors further propose a sampling-based method to accelerate the updating of the clustering cost during the swaps. The extensive empirical experiments validate the good numerical performance of the proposed algorithms. Strengths: This paper studies the classic k-means clustering problem and gives the first linear-time local search algorithm in the literature that adopts the multi-swap strategy. Such an algorithm enjoys advantages from both worlds: a smaller approximation ratio and linear run-time in the data size. The organization of this paper is clear and well-structured, the mathematical proof is very detailed and easy to follow, and the numerical experiments are also quite extensive. Weaknesses: For the Lloyd algorithm, according to your experiments it is able to save up to 99% of the run time (0.07 vs 5.12), while only having around 1% higher average cost. Therefore, the merit of the work presented in this paper does not seem too impressive from the practical point of view. One related question is, considering the huge amount of time saved by the Lloyd algorithm, have the authors tried running the Lloyd algorithm multiple times (if the Lloyd algorithm takes 0.07s for each initialization, then you can run 5.12/0.07 instances of the Lloyd algorithm) with different initializations and then simply taking the best solution? How does that compare with MLS when they are set to have the same run time? 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 11-12, "which improves the current best result 509": It might be true for local search methods that take the single-swap strategy, but it is unfair to call this the best result, since the current best approximation ratio is 6.357. Line 90, "MLS algorithm": Please write out the full name when mentioning this abbreviation for the first time. Please consider moving the "multi-swap local search algorithm" at line 100 to here. Algorithm 1: "Output: ... of at most k centers". I do not understand why you say "at most". Since in each iteration point p can be selected if and only if point p is not in C, isn't it guaranteed that C has size exactly k? Similar question for Algorithm 2. Line 3, Algorithm 1: $\Delta$ is defined over two sets; here p is a point, so it should be {p}, and {C} should be C. This appears in other places as well, for instance Algorithm 2. In the numerical experiment section, in Table 2, why is MLS able to outperform the single-swap local search methods LS++ and FLS by a big margin? It does not make sense to me, since the MLS algorithm is clearly more complex than LS++ in [10] and has a bigger runtime complexity. Line 345-346, "the average clustering ... nearly match the results of the BB method": As far as I can see, except on one dataset, doesn't the MLSP algorithm always outperform the BB method in terms of average clustering costs? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors did not address the limitations of their work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive rating and the thoughtful comments. In the following we address the concerns. **Question 1: Line 11-12, "which improves the current best result 509": It might be true for local search methods that take the single-swap strategy, but it is unfair to call this the best result, since the current best approximation ratio is 6.357** Response: We thank the reviewer for pointing out this issue. In the revised version of the paper, we will rewrite the statement as "which improves the current best result 509 with a linear running time in the data size." **Question 2: Line 90, "MLS algorithm": Please write out the full name when mentioning this abbreviation for the first time. Please consider moving the "multi-swap local search algorithm" at line 100 to here.** Response: We thank the reviewer for the kind suggestion. In the revised version, we will give the full name of the "MLS algorithm" when mentioning this abbreviation for the first time. **Question 3: Algorithm 1: "Output: ... of at most k centers". I do not understand why you say "at most". Since in each iteration point p can be selected if and only if point p is not in C, isn't it guaranteed that C has size exactly k? Similar question for Algorithm 2** Response: We thank the reviewer for raising this subtle issue. In the standard definition of the $k$-means problem, the goal is to output a set $C \subseteq \mathbb{R}^d$ of size at most $k$ such that the clustering cost is minimized. To be consistent with the literature, we phrase our output requirement as "a set $C \subseteq \mathbb{R}^d$ with size at most $k$". Since local search methods for the $k$-means problem in the literature all return a solution of size exactly $k$, we also follow this tradition and require our algorithms to return a solution of size exactly $k$. 
**Question 4: Line 3, Algorithm 1: $\Delta$ is defined over two sets; here $p$ is a point, so it should be $\lbrace p \rbrace$, and $\lbrace C \rbrace$ should be $C$. This appears in other places as well, for instance Algorithm 2** Response: Thanks for pointing this out. The $\Delta(p,\lbrace C \rbrace)$ used in line 3 of Algorithm 1 should be $\Delta(\lbrace p \rbrace, C)$, which is a typo. Sorry for the confusion. **Question 5: In the numerical experiment section, in Table 2 why is MLS able to outperform the single-swap local search methods LS++ and FLS by a big margin? It does not make sense to me since the MLS algorithm is clearly more complex than LS++ in [10] and it has a bigger runtime complexity** Response: We thank the reviewer for raising this important question. Theoretically, the running time of the MLS algorithm is larger than that of LS++ and FLS. The key point for MLS to outperform LS++ and FLS in the experiments is that the sampling-based method is applied to accelerate the swapping and clustering cost updating processes (as mentioned in line 337). **Question 6: Line 345-346, "the average clustering ... nearly match the results of the BB method": As far as I can see, except on one dataset, doesn't the MLSP algorithm always outperform the BB method in terms of average clustering costs?** Response: Thanks for raising this question. Since the average clustering cost of our MLSP algorithm does not outperform that of the BB method on the dataset RNG\_AGR, for the sake of rigor, we did not state that our proposed MLSP algorithm outperforms the results of the BB method. In the revised version, we will make a clearer statement: "the average clustering costs outperform the results of the BB method except on the dataset RNG\_AGR".
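For context, the $\Delta(\lbrace p \rbrace, C)$-proportional sampling discussed in Question 4 is the standard D^2-sampling used for k-means++ seeding; a minimal sketch for 2-D points (hypothetical names, not the paper's Algorithm 1) might look like:

```python
import random

def kmeans_pp_seed(points, k, seed=0):
    """D^2-sampling (k-means++ seeding): the first center is uniform; each
    subsequent center is a point sampled with probability proportional to
    its squared distance to the nearest already-chosen center."""
    rng = random.Random(seed)
    centers = [rng.choice(points)]
    while len(centers) < k:
        d2 = [min((px - cx) ** 2 + (py - cy) ** 2 for cx, cy in centers)
              for px, py in points]
        if sum(d2) == 0:  # every point coincides with a chosen center; fall back to uniform
            centers.append(rng.choice(points))
        else:
            centers.append(rng.choices(points, weights=d2, k=1)[0])
    return centers
```

Note that a point already chosen as a center has weight zero, so duplicates can only arise in the degenerate all-covered case; this matches the reviewer's observation that the output normally has exactly k centers.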
Rebuttal 1: Rebuttal: We thank the reviewers for their positive feedback and valuable suggestions, and we highly appreciate the effort the reviewers put into providing in-depth reviews that helped us improve our work. In the following, we address the reviewers' comments in detail with separate responses.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Model and Feature Diversity for Bayesian Neural Networks in Mutual Learning
Accept (poster)
Summary: This paper proposes a mutual learning approach to learn a pair of Bayesian Neural Networks (BNNs). The posterior of each BNN is approximated by variational inference using a Gaussian distribution with a diagonal covariance matrix. To make the BNNs learn different perspectives of the data, the authors propose to increase diversity in parameter space and intermediate feature space by adding an estimate of the distance between the parameter distributions and fused feature distributions of the two BNN models to the objective function. Empirically, the proposed method outperforms existing mutual learning methods and the vanilla BNN model in terms of accuracy, negative log-likelihood, and expected calibration error. An ablation study is also provided to investigate the usefulness of each component. Strengths: The paper is well written and easy to follow. Increasing the diversity of the parameter distributions and intermediate feature distributions of peer BNN models to boost performance is an interesting idea. Experiments and a detailed ablation study demonstrate the effectiveness of the proposed method. Weaknesses: 1. It is mentioned in the abstract and introduction that a BNN trained with variational inference may underperform a deterministic model or a BNN obtained by MCMC, yet the baselines only involve BNN models trained with (DML) or without (vanilla) mutual learning. Would the proposed method close the gap to some extent? Data augmentation and the optimizer may all affect performance, so it would still be helpful to include deterministic model results under the same training setup. I would expect the BNN model to outperform the deterministic model at least in NLL and ECE, and with the 50-sample ensemble, it could outperform it in accuracy. 2. Continuing from the last point, for MCMC methods (e.g., in line 81 of the paper), I agree that traditional MCMC methods (e.g., Metropolis-Hastings) may not be feasible for large models, and memory storage can be an issue for MCMC methods.
But I don't think the stochastic gradient MCMC cited in line 81 would require prohibitive computational cost; it behaves like adding noise at each step of standard SGD training. 3. The code is not provided, which may hurt the reproducibility of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. To my knowledge, it is not very clear whether a variational distribution (e.g., a Gaussian with diagonal covariance matrix) can approximate the true posterior very well. Can the authors comment a bit on this, e.g., how would different choices of variational family affect the model? 2. In line 264 and line 6 of Algorithm 1, it is mentioned that one BNN model is initialized with a trained model and this leads to better results empirically. Can the authors discuss more on why this happens? It is a bit weird to me, as it seems from the implementation details that the pre-trained model and the model trained from scratch use the same optimizer and learning rate schedule. 3. It seems $\alpha$, $\beta$ are set to 1, 2 for CIFAR and 1, 1 for ImageNet. These two parameters control the strength of the proposed penalty on the model; can the authors comment a bit more on how sensitive the model is to those parameters? It can help to illustrate how diversity helps model performance. 4. As mentioned in line 268, results are the average of 3 trials. I think it would be better to include the standard deviation as well to boost the significance of the results. 5. In Figure A.3 in the supplementary material, there looks to be a sharp increase in the KL divergence between the fused feature distributions at around 30 epochs, but the penalty on features is only added for the last 100 epochs; can the authors explain more on this? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses 1. BNN model with variational inference may underperform deterministic model or BNN obtained by MCMC, the baseline only involves BNN model trained with (DML) or without (vanilla) mutual learning. Would the proposed method close the gap to some extent? I would expect the BNN model to outperform deterministic model at least in NLL and ECE, and with the 50 ensemble, it can outperform the accuracy. **Answer:** Due to limited space, we cannot add tables here. We compare our method with deterministic models and also with MCMC-based BNNs. For MCMC-based BNNs, we use Stochastic Gradient Langevin Dynamics (SGLD). The results are presented in Table 1 in the attached pdf file, which can be found at the end of our general response. The experimental results show that our proposed method outperforms the deterministic model and SGLD [36]. 2. I don't think the stochastic gradient MCMC cited in line 81 would require prohibitive computational cost, it behaves like adding noise at each step of standard SGD training. **Answer:** We agree with the Reviewer that SGLD [3] and Stochastic Gradient Hamiltonian Monte Carlo (SGHMC) would not require prohibitive computational cost. We will revise the references. We also would like to note that, compared to stochastic-gradient-based MCMC methods such as SGLD and SGHMC, Variational Inference (VI)-based BNNs are more efficient in terms of memory storage. In MCMC-based BNNs, we need to store model samples, while in VI-based BNNs, we can explicitly estimate the posterior. 3. The code is not provided, so it may hurt the reproducibility of the paper? **Answer:** We will release the code for the reproducibility of the paper. ### Questions 1. To my knowledge, it is not very clear whether a variational distribution (e.g., a Gaussian with diagonal covariance matrix) can approximate the true posterior very well. Can the authors comment a bit on this, e.g.,
how would different choices of variational family affect the model? **Answer:** The accuracy of the approximation made in VI mostly depends on how expressive the variational family is: the more expressive, the better the approximation. However, in practice, one needs to balance the accuracy of the approximation against the associated computation and storage complexity. Among the simplest variational families are the Gaussian distribution with a diagonal covariance matrix [1] and the Bernoulli distribution [A]. Despite being modest in terms of posterior estimation, they are widely used in the literature due to their efficiency (e.g., closed-form KL divergence) and scalability (i.e., the number of parameters scales linearly), which is suitable for large-scale models such as deep neural networks. Alternatively, one can select a family with higher complexity and expressiveness, such as Gaussian mixture models, to approximate the true posterior more accurately. Although such a model could approximate the true posterior precisely, it significantly increases the number of learnable parameters, requiring more computation and storage, and hence making it less applicable to deep neural networks. [A] Yarin Gal and Zoubin Ghahramani, Bayesian Convolutional Neural Networks with Bernoulli Approximate Variational Inference, ICLR Workshops, 2016. 2. In line 264 and line 6 of Algorithm 1, it is mentioned that one BNN model is initialized with a trained model and this leads to better results empirically. Can the authors discuss more on why this happens? It is a bit weird to me, as it seems from the implementation details that the pre-trained model and the model from scratch are trained with the same optimizer and learning rate schedule. **Answer:** One of the benefits of using the pretrained model as initialization for the mean of a BNN is that the pretrained model has already been well trained; using it for BNN training leads to better training convergence.
In addition, as our aim is to make the model distributions and feature distributions of the two peer networks diverse, having one BNN initialized with a pretrained model and one BNN initialized from scratch at the beginning of training ensures diversity in model parameter distributions and feature distributions. The above benefits lead to better results. 3. It seems $\alpha$, $\beta$ are set to 1, 2 for CIFAR and 1, 1 for ImageNet. These two parameters control the strength of the proposed penalty on the model; can the authors comment a bit more on how sensitive the model is to those parameters? It can help to illustrate how diversity helps model performance. **Answer:** Please refer to the general response. 4. As mentioned in line 268, results are the average of 3 trials. I think it would be better to include the standard deviation as well to boost the significance of the results. **Answer:** We present in Table 2 in the attached pdf file the ACC, NLL, and ECE with the standard deviation of pairs of networks on the CIFAR100 dataset. 5. In Figure A.3 in the supplementary material, there looks to be a sharp increase in the KL divergence between the fused feature distributions at around 30 epochs, but the penalty on features is only added for the last 100 epochs; can the authors explain more on this? **Answer:** We would like to clarify that for Figure A.3 in the supplementary material, we train BNNs with only $L_{\text{logits}}$ and $L_{\text{diverse feat}}$, i.e., we do not add the model diversity term. This corresponds to setting 'c' in Table 4 in the main paper. Our aim is to show that when diversity is encouraged in the feature space, the resulting distance between fused feature distributions is greater than the distance observed when training with traditional mutual learning. We also would like to note that Figure A.2 corresponds to training BNNs with only $L_{\text{logits}}$ and $L_{\text{diverse param}}$, which corresponds to setting 'b' in Table 4 in the main paper.
--- Rebuttal Comment 1.1: Comment: I appreciate the authors for detailed response. I will keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 5v8y, Thank the Reviewer for the feedback. We greatly appreciate the time and effort the Reviewer dedicated to considering our paper and our response.
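The closed-form KL divergence that makes the diagonal-Gaussian variational family cheap, as discussed in the rebuttal above, can be sketched as follows (a minimal sketch against a standard-normal prior; the function name and the choice of prior are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def kl_diag_gaussian_to_std_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).

    No sampling is needed, and the cost scales linearly in the number of
    parameters -- the efficiency/scalability point made in the answer.
    """
    var = np.exp(log_var)
    return 0.5 * np.sum(var + mu**2 - 1.0 - log_var)

# Sanity check: the KL of a standard normal to itself is zero.
print(kl_diag_gaussian_to_std_normal(np.zeros(4), np.zeros(4)))  # → 0.0
```

A more expressive family (e.g., a Gaussian mixture) would have no such closed form and would require Monte Carlo estimates of this term, which is the accuracy-versus-cost trade-off the answer describes.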
Summary: The paper proposes a method that combines deep mutual learning with BNNs to diversify the weight distributions of the BNN networks in a pair or ensemble, to improve performance. Strengths: 1. AFAIK this is the first work combining mutual learning with BNNs, so the authors can claim this point. 2. The paper is in general written clearly and easy to follow. 3. Experiments are adequate, with ablation studies on each individual feature's impact on diversity. Weaknesses: 1. Some design choices are found to be "empirically" working well without much discussion or hypothesis. 2. It would be interesting to see how the model performs on o.o.d. test data, especially uncertainty performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Line 178-179: The authors said adding the D(...) term will rapidly increase this term and impact training. Wouldn't putting a smaller scaling factor on this term fix this issue? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses 1. Some design choices are found to be "empirically" working well without too much discussion or hypothesis. **Answer:** Please refer to the general response for the ablation studies on hyper parameters T, $\alpha$, and $\beta$. 2. Would be interesting to see how the model performs for o.o.d test data, especially uncertainty performance. **Answer:** We thank the Reviewer for the suggestion. To address this, we trained BNN models on the first 5 classes of the CIFAR10 dataset, referred to as CIFAR5. The data from the remaining 5 classes are considered as OOD data. To estimate the uncertainty of predictions on OOD datasets, we compute the entropy of the average prediction from 50 sampled models (for each BNN), called OOD Entropy. We present the top-1 classification accuracy, evaluated on the first 5 classes, as well as the uncertainty estimation on the corresponding OOD data. The comparisons between our proposed method, the DNN model, the vanilla BNN, and the BNN with DML are presented in the following table. The results show that, when trained on CIFAR5, our method outperforms the others in terms of accuracy and uncertainty on OOD data. DNN yields the lowest uncertainty on OOD data, while our proposed method yields the highest uncertainty on OOD data. Table: Top-1 classification accuracy on CIFAR5 dataset and entropy on corresponding OOD data. *Bayesian neural networks are initialized with the mean value from the pre-trained deterministic model. We report the mean and standard deviation over 3 runs. 
| Method | Model | ACC (%) | OOD Entropy | |---------|-------------|----------------------------|--------------------------------| | DNN | ResNet20 | 89.46 ± 0.08 | 0.209 ± 0.028 | | Vanilla BNN | ResNet20 | 84.27 ± 0.42 | 0.244 ± 0.030 | | | ResNet20* | 90.35 ± 0.45 | 0.261 ± 0.015 | | DML | ResNet20 | 86.74 ± 0.32 | 0.296 ± 0.005 | | | ResNet20* | 91.06 ± 0.32 | 0.325 ± 0.010 | | Ours | ResNet20 | 86.89 ± 0.12 | 0.308 ± 0.019 | | | ResNet20* | **91.36 ± 0.25** | **0.358 ± 0.022** | ### Questions 1. Line 178-179: The authors said adding the D(...) term will rapidly increase this term and impact training. Wouldn't putting a smaller scaling factor on this term fix this issue? **Answer:** In our preliminary experiments, we tried adding a scaling factor to control $D(q(w; \theta_1), q(w; \theta_2))$; however, we found that it is not an effective solution. The reason is that D(.,.) does not have an upper bound, which makes the scaling factor hard to tune. The value of the D(.,.) term could potentially increase indefinitely during the learning process if we directly maximize it. This could lead to gradient explosion, and the model would not converge. The proposed loss $L_{diverse param}$ (eq. 12) has the advantage that D(.,.) increases gradually until it reaches a saturation point. These properties make optimization more stable during training. --- Rebuttal Comment 1.1: Comment: Thanks for the additional results. I will keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer fyNb, We thank the Reviewer for the feedback. We greatly appreciate the time and effort the Reviewer dedicated to considering our paper and our response. --- Rebuttal Comment 1.2: Comment: Thanks authors for the reply. I have decided to keep my score of weak accept.
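The saturating behaviour of $L_{diverse param}$ described in the answer above can be illustrated with a minimal sketch (the softplus(-D) form is from the discussion; the function names and sample distance values are illustrative):

```python
import math

def softplus(x):
    # Numerically stable log(1 + exp(x)).
    return math.log1p(math.exp(-abs(x))) + max(x, 0.0)

def diverse_param_loss(d):
    """softplus(-D): decreases as the distance D grows, but is bounded
    below by 0, so maximizing the distance saturates instead of letting
    the objective (and its gradients) grow without bound, unlike -D."""
    return softplus(-d)

# The loss shrinks smoothly toward 0 as the distance D increases.
for d in [0.0, 1.0, 5.0, 50.0]:
    print(round(diverse_param_loss(d), 6))
```

Since the gradient of softplus(-d) with respect to d has magnitude sigmoid(-d) ≤ 1 and decays to zero for large d, the update cannot explode the way directly maximizing an unbounded D(.,.) can.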
Summary: The paper addresses the challenge of improving the performance of Bayesian Neural Networks (BNNs) by leveraging the concept of mutual learning. BNNs provide a means for quantifying uncertainty in predictions through probability distributions of model parameters. However, BNNs often fall short in performance compared to their deterministic counterparts. The authors propose a novel approach that employs deep mutual learning to enhance the capabilities of BNNs. Strengths: 1. Innovative Approach: The paper introduces a novel method that combines deep mutual learning with Bayesian Neural Networks. By promoting diversity in both network parameter distributions and feature distributions, the proposed approach enables peer networks to acquire distinct features, capturing different characteristics of the input data. This innovative technique enhances the effectiveness of mutual learning in BNNs. 2. Detailed algorithm description: The paper provides a thorough and detailed description of the proposed algorithm for improving the performance of Bayesian Neural Networks (BNNs) through deep mutual learning. 3. Comprehensive Experiments: The authors conduct extensive experiments to evaluate the proposed approach thoroughly. The experimental results are statistically sound and demonstrate significant improvements in classification accuracy, negative log-likelihood, and expected calibration error compared to traditional mutual learning methods for BNNs. Weaknesses: 1. Limited variety in experimental validation: One weakness of the paper is that the proposed approach and its effectiveness are only verified through experiments conducted on Residual Neural Networks (ResNets). It would have been beneficial to include experiments on a diverse set of network architectures to demonstrate the approach's effectiveness across different model types and complexities. 2.
Lack of detailed explanation for temperature, α, and β: One weakness of the paper is the limited explanation provided for the temperature parameter (T), α, and β, which are crucial components of the proposed approach. These parameters play a significant role in controlling the diversity of network parameter distributions and feature distributions, but their specific effects and optimal values are not thoroughly discussed. 3. Weakness in the conclusion: The current conclusion merely restates the experimental results and does not highlight the broader implications of the proposed approach or its potential impact on the field. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: The author should supplement more experiments to prove its effectiveness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The author should supplement more experiments to prove its effectiveness and strengthen the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses 1. Limited variety in experimental validation: One weakness of the paper is that the proposed approach and its effectiveness are only verified through experiments conducted on Residual Neural Networks (ResNets). It would have been beneficial to include experiments on a diverse set of network architectures to demonstrate the approach's effectiveness across different model types and complexities. **Answer:** In addition to the evaluations with ResNet architectures in the main paper, here we validate our proposed method on AlexNet. The following table shows the results for ACC, NLL, and ECE. For BNNs initialized with the mean value from the pre-trained deterministic model, our approach consistently outperforms the others. Our method outperforms DML by 0.48\% and DNN by 0.92\% in terms of top-1 accuracy on the CIFAR-100 dataset. Table: Top-1 classification accuracy, NLL, and ECE on the CIFAR100 dataset. DNN means the deterministic model. *Bayesian neural networks are initialized with the mean value from the pre-trained deterministic model. | | | ACC↑ | | | | NLL↓ | | | | ECE↓ | | | |----------|---------------|-----------|-----------|-----------|---------------|-----------|-----------|-----------|---------------|-----------|-----------|-----------| | | Vanilla | DNN | DML | Ours | Vanilla | DNN | DML | Ours | Vanilla | DNN | DML | Ours | | AlexNet | 50.47 | 52.40 | 51.82 | 52.10 | 2.325 | 2.141 | 2.255 | 2.230 | 0.187 | 0.142 | 0.172 | 0.165 | | AlexNet* | 52.23 | - | 52.84 | 53.32 | 2.105 | - | 1.921 | 1.904 | 0.169 | - | 0.099 | 0.089 | 2. Lack of detailed explanation for temperature, $\alpha$, and $\beta$: One weakness of the paper is the limited explanation provided for the temperature parameter (T), $\alpha$, and $\beta$, which are crucial components of the proposed approach.
These parameters play a significant role in controlling the diversity of network parameter distributions and feature distributions, but their specific effects and optimal values are not thoroughly discussed. **Answer:** Please refer to the general response. 3. Weakness in the conclusion: The current conclusion merely restates the experimental results and does not highlight the broader implications of the proposed approach or its potential impact on the field. **Answer:** Our work is the first to explore the potential of deep mutual learning in the context of BNNs. More importantly, we are also the first to investigate the usefulness of model parameter diversity in mutual learning. We expect that our work will broaden the topic of mutual learning and inspire further research in which the model parameter space is taken into account, in addition to the traditional approaches that only consider the feature space. In addition, our work can be considered a baseline for DML-BNNs, and we expect it to inspire further research investigating the usefulness of mutual learning in the classical field of BNNs. We will update our conclusion to reflect the above. ### Questions 1. The author should supplement more experiments to prove its effectiveness. **Answer:** We present above the experiments with the AlexNet architecture. We also compare our work with the work [4] when [4] is applied to BNNs. We also compare our approach with deterministic models (i.e., single deterministic models and size-2 deep ensemble models). The results are presented in the tables in our responses to Reviewer xPiJ and Reviewer tB5V. --- Rebuttal Comment 1.1: Comment: I'm grateful for your response that tackled my concern. It's satisfying to witness its efficacy confirmed on more widely used deep network architectures. Consequently, I've opted to retain my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer phoh, We thank the Reviewer for the feedback.
We greatly appreciate the time and effort the Reviewer dedicated to considering our paper and our response.
Summary: The paper focuses on improving the accuracy of BNNs by promoting diversity in both parameter space and feature space while training two peer BNNs with mutual learning between them. More specifically, they train two variational BNNs with a mean-field Gaussian variational loss for each along with a KL divergence term between the (temperature-scaled) predictive distributions of the two models, a Wasserstein distance term between the corresponding approximate posterior distributions across the two models (added as a softplus(-distance) term), and a KL divergence term between corresponding feature distributions. On the latter term, instead of directly maximizing the distance between corresponding feature distributions, they instead do so on "fused feature distributions". To do so, they use learned cross-attention to fuse the features from multiple feature levels in a model (two at a time). Then, they use the KL divergence between the distributions of the fused feature distributions of the two peer networks. To derive the distributions, they use the conditional probability density defined as $p_{i|j} = \frac{K(F'_i, F'_j)}{\sum_{k=1, k \neq i}^n K(F'_k, F'_j)}$, where $K(F'_a, F'_b)$ is a kernel function between two fused feature representations. Given those conditional probs, they compute a KL divergence term. Similar to the parameter space diversity term, they add this term to the loss as softplus(-divergence). The paper claims to be the first to propose maximizing the distance between feature distributions to promote diversity. In terms of experiments, the paper includes results for ResNet models on CIFAR-10/100 and ImageNet, measuring accuracy, NLL, and ECE as metrics, and comparing different approaches. Strengths: The paper does a great job of precisely articulating the modeling approach, and discussing the relevant background info.
More specifically, the proposed approach of adding terms to promote diversity in parameter and feature space is clear and would be easy to reimplement. Weaknesses: My main concern is with the experiment section. More specifically, a few key details are unclear in the text, and importantly a deterministic baseline is missing that I believe should be present given the framing of the paper and relevant literature. Please see the Questions below. Given updates, I believe the paper would be great and I would gladly update my rating. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Main: - In the experiments, a few details are currently unclear. The following points are on Table 1, but generalize to all three tables. Please clarify these details here and in the paper. - Consider the ResNet20 section of Table 1. Is my understanding correct that the "ResNet20" results are for a pair of BNNs trained from scratch, while the "ResNet20*" results are for a pair of BNNs trained with the approximate posterior means set to the values from a deterministic model? - Is it correct that all results (all three metrics across all three approaches) are computed after averaging the predicted probs from the pair of models? - For the experiments, a deterministic baseline is missing. Given the intro that discusses how BNNs can lag behind deterministic models in acc (though not always), the experiments lack such a comparison. It would be helpful to understand how the proposed approach compares to deterministic baselines, specifically a single deterministic model and a size-2 deep ensemble. Could you add this as a baseline? I would consider this to be a blocker for the paper given the framing and relevant literature. Other: - The KL divergence term is scaled by the square of the temperature -- why? - How did you choose the values for temp, alpha, and beta? They differ between CIFAR-10/100 and ImageNet. Did you ablate values?
Minor comments: - updating lines 17 & 22 of Alg 1 could be helpful for readability Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: No limitations are included. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
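The kernel-based conditional probabilities $p_{i|j}$ described in the review summary above can be sketched as follows (a hedged sketch: the Gaussian kernel, the SNE-style self-exclusion in the normalizer, and all function names are assumptions for illustration, not the paper's exact construction):

```python
import numpy as np

def conditional_probs(F, gamma=1.0):
    """SNE-style conditional probabilities p_{i|j} over a batch of fused
    feature vectors F (n x d), using a Gaussian kernel. The self term is
    excluded from the normalizer, following the usual SNE convention
    (the paper's exact indexing may differ)."""
    sq = ((F[:, None, :] - F[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-gamma * sq)
    np.fill_diagonal(K, 0.0)                             # drop self-similarity
    return K / K.sum(axis=0, keepdims=True)              # each column j sums to 1

def feature_kl(F1, F2):
    """KL between the two peer networks' conditional distributions over
    their fused features, averaged over the conditioning index j."""
    P, Q = conditional_probs(F1), conditional_probs(F2)
    eps = 1e-12
    return np.mean(np.sum(P * np.log((P + eps) / (Q + eps)), axis=0))

rng = np.random.default_rng(0)
F1, F2 = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
print(feature_kl(F1, F1))  # identical features → KL ≈ 0
print(feature_kl(F1, F2) > 0)
```

In the paper's loss this divergence is then wrapped as softplus(-divergence), the same bounding device used for the parameter-space term.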
Rebuttal 1: Rebuttal: ## Weaknesses 1. A few key details are unclear in the text, and importantly a deterministic baseline is missing. **Answer:** We acknowledge the reviewer's comment. Accordingly, we present the results of the deterministic baselines in the following table. The results indicate that our BNN models initialized with the mean value from the pre-trained deterministic models outperform the deterministic models on all metrics (ACC, NLL, ECE). Table: Comparative results of our proposed method with deterministic models and a size-2 deterministic deep ensemble in terms of top-1 classification accuracy, NLL, and ECE on the CIFAR-100 dataset. *Bayesian neural networks are initialized with the mean value from the pre-trained deterministic model. DNN means the deterministic model (ResNet20 or ResNet32). Size-2 DNN means a deep ensemble of 2 deterministic models. For a size-2 deterministic deep ensemble, we separately train two deterministic models with the same architecture but different initializations. After training, we ensemble the 2 models by averaging the probability outputs. | | | ACC↑ | | | | NLL↓ | | | | ECE↓ | | | |----------------|----------|------------------|----------|----------|----------|------------------|----------|----------|----------|------------------|----------|----------| | | DNN | size-2 DNN | DML | Ours | DNN | size-2 DNN | DML | Ours | DNN | size-2 DNN | DML | Ours | | ResNet20 | 69.13 | 71.91 | 67.27 | 68.32 | 1.106 | 0.979 | 1.174 | 1.101 | 0.065 | 0.058 | 0.057 | 0.041 | | ResNet20* | - | - | 69.61 | 70.45 | - | - | 1.073 | 1.043 | - | - | 0.047 | 0.038 | | ResNet32 | 71.36 | 74.10 | 68.59 | 70.53 | 1.074 | 0.924 | 1.169 | 1.029 | 0.080 | 0.061 | 0.087 | 0.043 | | ResNet32* | - | - | 71.45 | 72.14 | - | - | 1.012 | 0.975 | - | - | 0.064 | 0.040 | ## Questions 1. Consider the ResNet20 section of Table 1.
Is my understanding correct that the "ResNet20" results are for a pair of BNNs trained from scratch, while the "ResNet20*" results are for a pair of BNNs trained with the approximate posterior means set to the values from a deterministic model? **Answer:** As outlined in lines 262-265, we employ a pair of BNN models; one model is initialized from scratch, referred to as "ResNet20", while the other is initialized with the mean $\mu$ from the pretrained deterministic model, denoted "ResNet20*". Thus, in Table 1, the results associated with "ResNet20" and "ResNet20*" are obtained by training a pair of BNNs, one initialized from scratch (ResNet20) and the other (ResNet20*) initialized with the mean $\mu$ from the pretrained deterministic model. 2. Is it correct that all results (all three metrics across all three approaches) are computed after averaging the predicted probs from the pair of models? **Answer:** The average results in Tables 1, 2, and 3 in the main paper represent only the average of the results from model 1 and model 2 in a pair of models. This will be clarified further in the paper. 3. It would be helpful to understand how the proposed approach compares to deterministic baselines, specifically a single deterministic model and a size-2 deep ensemble. **Answer:** For the comparative results between the proposed approach and the deterministic model and a size-2 deterministic deep ensemble, please refer to the table above. The results show that although our method's accuracy and NLL are lower than those of the size-2 deterministic deep ensemble, our method surpasses the deep ensemble in terms of ECE. 4. The KL divergence term is scaled by the square of the temperature -- why? **Answer:** We adopt the distillation method using soft logits proposed by Hinton et al. [13]. The hyperparameter $T$ controls the smoothness level of the prediction distribution. As $T$ increases, the prediction distribution becomes smoother.
It's worth noting that the magnitudes of the gradients, which derive from the soft targets, are inversely proportional to $T^2$. Therefore, we scale the KL divergence term by $T^2$. 5. How did you choose the temperature parameter (T) values, $\alpha$, and $\beta$? They differ between CIFAR-10/100 and ImageNet. Did you ablate values? **Answer:** Please refer to the general response. 6. Updating lines 17 and 22 of Alg 1 could be helpful for readability **Answer:** We thank the Reviewer for the suggestion. We have updated line 17 as follows: compute the total loss for B1: $\mathcal{L}^{B1} = \mathcal{L}_{\text{logits}}^{B1} + \alpha \mathcal{L}_{\text{diverse param}} + \beta \mathcal{L}_{\text{diverse feat}}^{B1}$ and line 22 as follows: compute the total loss for B2: $\mathcal{L}^{B2} = \mathcal{L}_{\text{logits}}^{B2} + \alpha \mathcal{L}_{\text{diverse param}} + \beta \mathcal{L}_{\text{diverse feat}}^{B2}$ --- Rebuttal Comment 1.1: Comment: Dear Reviewer tB5V, We hope the Reviewer has had time to look at our rebuttal. Could the Reviewer please share with us the Reviewer’s feedback on it? We sincerely appreciate the time and effort the Reviewer has dedicated to evaluating our paper and our response. --- Rebuttal Comment 1.2: Title: Rebuttal to tB5V Comment: Dear tB5V Could you look at the rebuttal and check if the authors addressed your concerns? Deadline is in just 2 days. Best regards, your AC
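The $T^2$-scaled soft-target KL discussed in the answer above (following Hinton et al.'s distillation) can be sketched as follows; the function names and sample logits are illustrative:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-softened, numerically stable softmax.
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_kl(logits_student, logits_teacher, T=3.0):
    """Temperature-softened KL(teacher || student), scaled by T^2.

    Softening by T shrinks the gradients through the logits by roughly
    1/T^2, so multiplying by T^2 keeps the soft-target term on the same
    scale as the hard-label loss regardless of the chosen T."""
    p = softmax(logits_teacher, T)
    q = softmax(logits_student, T)
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return (T ** 2) * kl.mean()

rng = np.random.default_rng(1)
s, t = rng.normal(size=(4, 10)), rng.normal(size=(4, 10))
print(distill_kl(s, s))        # identical logits → 0.0
print(distill_kl(s, t) > 0.0)  # → True
```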
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for the constructive feedback. ## General response to the choice of hyperparameters Regarding hyperparameters $T$, $\alpha$, $\beta$: For the temperature $T$, we follow the seminal works [13] and [4]. This parameter $T$ controls the smoothness of the prediction distribution. As the value of $T$ increases, the prediction distribution becomes smoother. The hyperparameters $\alpha$ and $\beta$ control the impact of the diversity of network parameter distributions and network feature distributions on the learning of a pair of peer networks. We present here the ablation studies on the choice of $T$, $\alpha$, $\beta$ on CIFAR100. We use "*" to denote that the Bayesian neural networks are initialized with the mean value from the pre-trained deterministic model. For the ablation study on parameter $T$, we vary the value of $T$ from 1 to 5 and fix $\alpha=1$ and $\beta=2$. The results are shown in the following table. The results show that the best value of $T$ is 3. | T | 1 | 2 | 3 | 4 | 5 | |---------|-------|-------|-------|-------|-------| | ResNet20| 66.6 | 67.66 | 68.32 | 67.95 | 67.93 | | ResNet20* | 69.62 | 70.14 | 70.45 | 70.12 | 69.97 | For the ablation study on parameter $\alpha$, we vary $\alpha$ when promoting diversity in parameter space, and set $T=3$ and $\beta=0$. The results are shown in the following table. The results show that setting $\alpha = 1$ achieves better performance than the other tested values of $\alpha$. | $\alpha$ | 0.1 | 1 | 2 | 5 | 10 | |--------------|-------|-------|-------|-------|-------| | ResNet20 | 67.25 | 67.78 | 67.18 | 67.45 | 67.29 | | ResNet20* | 69.79 | 70.22 | 69.92 | 69.61 | 69.48 | For the ablation study on parameter $\beta$, we vary $\beta$ when promoting diversity in feature space, and set $T=3$ and $\alpha=0$. The results are shown in the following table. 
The results show that setting $\beta = 2$ achieves better performance than the other tested values of $\beta$. | $\beta$ | 0.1 | 1 | 2 | 5 | 10 | |--------------|-------|-------|-------|-------|-------| | ResNet20 | 66.95 | 67.17 | 67.57 | 67.14 | 67.09 | | ResNet20* | 69.84 | 69.91 | 70.04 | 69.85 | 69.90 | In addition, from our experiments, we found that the accuracies differ only slightly when $\alpha$ and $\beta$ take values in {1, 2}. Pdf: /pdf/eba360e5499fb3ce4867714db38ee5b3a0deabb4.pdf
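The one-at-a-time ablation protocol above (sweep one hyperparameter while the others stay fixed, keep the best-scoring value) can be sketched as follows. This is a hypothetical sketch with names of our own; `evaluate` stands in for training a pair of BNNs and measuring accuracy.

```python
def best_setting(candidates, evaluate):
    # Return the candidate value with the highest score.
    return max(candidates, key=evaluate)

# Toy scores mirroring the ResNet20 row of the T-ablation table
# (with alpha = 1 and beta = 2 held fixed).
toy_acc = {1: 66.60, 2: 67.66, 3: 68.32, 4: 67.95, 5: 67.93}
best_T = best_setting([1, 2, 3, 4, 5], lambda T: toy_acc[T])
print(best_T)  # 3
```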
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper presents a novel method for enhancing the performance of Bayesian Neural Networks (BNNs) by employing deep mutual learning. The proposed approach aims to enhance the diversity of both network parameter distributions and feature distributions, encouraging individual networks to capture unique characteristics of the input data. The effectiveness of the proposed method is demonstrated on several datasets, including CIFAR10, CIFAR100, and ImageNet. Strengths: The proposed method improves performance and uncertainty estimation while reducing the expected calibration error (ECE). The technical approach is novel, as the method introduces mutual learning in the context of BNNs and is the first to propose maximizing the distance between feature distributions and parameter distributions. The paper includes large-scale experiments (ImageNet) and ablation studies to demonstrate the effectiveness of each technical contribution introduced in this paper. Weaknesses: The previous studies mentioned in the paper utilize alignments on feature maps [4] or predictions [38], rather than diversifying them. In contrast, the proposed method diversifies both feature distributions and parameter distributions, which is the opposite approach to the previous works. Interestingly, both alignment-based and diversification methods improve performance over vanilla BNNs, as indicated in Tables 1, 2, 3, and 5. However, the paper does not explicitly explain the reasons behind the performance improvements resulting from these contrasting approaches. Given the observed contradicting results in the experiments, where the alignment-based method (DML [38]) also enhances the performance of BNNs, an important question arises: could combining alignment-based methods with parameter diversification further improve BNN performance? Alternatively, is it necessary to diversify both feature and parameter distributions to achieve significant improvements? 
In the experiment section, the proposed method is only compared with [38] and not with [4]. Hyperparameters used for the CIFAR experiments and the ImageNet experiments are different. However, the paper does not describe details regarding the hyperparameter tuning or determination. Technical Quality: 3 good Clarity: 3 good Questions for Authors: A 3-block ResNet is used for the CIFAR experiments while a 4-block ResNet is used for the ImageNet experiments. Why are different forms of ResNets used for different datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Limitations are shortly addressed in the supplementary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Weaknesses 1. The previous studies mentioned in the paper utilize alignments on feature maps [4] or predictions [38], rather than diversifying them. In contrast, the proposed method diversifies both feature distributions and parameter distributions, which is the opposite approach to the previous works. Interestingly, both alignment-based and diversification methods improve performance over vanilla BNNs, as indicated in Tables 1, 2, 3, and 5. However, the paper does not explicitly explain the reasons behind the performance improvements resulting from these contrasting approaches. **Answer:** Our method does not contradict the original DML work [38]. In fact, both ours and [4] are built on top of [38]. Specifically, given a pair of peer networks, DML [38] encourages each network to learn from and teach the other by matching the prediction distributions of the two networks through minimizing the KL distance. Following [38], both ours and [4] include this KL loss term in each network's loss. Ours and [4] are two different approaches for enhancing mutual learning. The idea of [4] is that, in addition to each network learning useful features for the task on its own, [4] also encourages each network to learn the feature distribution of its peer network through an adversarial training strategy. By doing this, they encourage both networks to learn features that generalize better. In other words, they encourage the two networks to learn features that work well for both. Different from [4], our approach encourages each network to learn a different set of features that are good for the task. In other words, our approach encourages each peer network to capture different characteristics of the input. It is worth noting that our approach encourages diversity in intermediate features. For the last layer, we have the soft logit alignment similar to DML to encourage each network to learn from and teach the other. 
To summarize, ours and [4] are two different approaches to enhance mutual learning. [4] aims to learn generalized features, while ours aims to make each network learn different characteristics of the input. Combining our idea with [4] may further improve mutual learning; however, this is not the focus of our paper. It is also worth noting that the authors of [4] do not focus on Bayesian Neural Networks. In addition, their implementation is not publicly available. This makes it difficult for us to compare to [4] in the context of BNNs. In an effort to compare with [4], we reimplemented the approach of [4] in the BNN context. To make a fair comparison, both ours and [4] use ResNet20 as the backbone. For the discriminator of [4], we follow the description in their paper. The comparisons between ours and [4] on CIFAR100 are in the following table. The results show that both ours and [4] improve the performance of BNNs. However, our approach achieves better performance than [4] in terms of ACC, NLL, and ECE. Table: Comparative results of our proposed method with deterministic models and [4] in terms of Top-1 classification accuracy, NLL, and ECE on the CIFAR-100 dataset. *Bayesian neural networks are initialized with the mean value from the pre-trained deterministic model. DNN means the deterministic model. | | ACC↑ | | | | NLL↓ | | | | ECE↓ | | | | |-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------|-------------| | | DNN | DML | [4] | Ours | DNN | DML | [4] | Ours | DNN | DML | [4] | Ours | | ResNet20 | 69.13 | 67.27 | 68.18 | 68.32 | 1.106 | 1.174 | 1.154 | 1.101 | 0.065 | 0.057 | 0.047 | 0.041 | | ResNet20* | - | 69.61 | 70.26 | 70.45 | - | 1.073 | 1.039 | 1.043 | - | 0.047 | 0.045 | 0.038 | 2. The paper does not describe details regarding the hyperparameter tuning or determination. **Answer:** Please refer to the general response. ## Questions 1. 
A 3-block ResNet is used for the CIFAR experiments while a 4-block ResNet is used for the ImageNet experiments. Why are different forms of ResNets used for different datasets? **Answer:** Regarding the model architectures when evaluating on CIFAR100 and ImageNet, we follow the previous works in knowledge distillation [4, 12, 13, 24, 38, 39]. Typically, for large datasets like ImageNet with an input size of $224\times 224$, people often use ResNet models with 4 blocks such as ResNet18. Meanwhile, for CIFAR-10 and CIFAR-100, which have a smaller input size, i.e., $32 \times 32$, people usually utilize ResNet models with 3 blocks such as ResNet20, ResNet32, and ResNet56. These models have far fewer channels per convolutional layer and a reduced number of blocks (3 instead of 4). It is worth noting that although the ResNet20, ResNet32, and ResNet56 models (3 blocks) are deeper than ResNet18 (4 blocks), they have significantly fewer channels per layer compared to ResNet18. For example, the last conv. layer of ResNet20 has 64 output channels, while the last conv. layer of ResNet18 has 512 output channels. --- Rebuttal Comment 1.1: Comment: Dear Reviewer xPiJ, We hope the Reviewer has had time to look at our rebuttal. Could the Reviewer please share with us the Reviewer’s feedback on it? We sincerely appreciate the time and effort the Reviewer has dedicated to evaluating our paper and our response. --- Rebuttal Comment 1.2: Title: Thank you for the response. Comment: Thank you for clarifying my questions. Based on the reviews and responses, I raise my score to above borderline. --- Reply to Comment 1.2.1: Comment: Dear Reviewer xPiJ, We thank the Reviewer for increasing the score. We greatly appreciate the time and effort the Reviewer dedicated to considering our paper and our response.
Bypassing the Simulator: Near-Optimal Adversarial Linear Contextual Bandits
Accept (poster)
Summary: The paper addresses the problem of adversarial linear contextual bandits, where loss vectors are selected adversarially and the context for each round is drawn from a fixed distribution; traditional approaches require access to a simulator for generating free i.i.d. contexts or achieve a sub-optimal regret no better than $\widetilde O(T^{5/6})$. The authors significantly improve on these methods by achieving a regret of $\widetilde O(\sqrt{T})$ without the need for a simulator, while maintaining computational efficiency when the action set per round is small. The results affirmatively answer an open question about the existence of a polynomial-time algorithm with $\mathrm{poly}(d)\sqrt{T}$ regret. The approach allows for the case where the loss is linear up to an additive misspecification error, showing a near-optimal dependence on the magnitude of this error. The paper also presents a computationally inefficient algorithm that provides an improved $\widetilde O(d\sqrt{T})$ regret without a simulator, expanding on the EXP4 algorithm. Strengths: The paper presents an algorithm that innovatively instantiates an individual Follow-The-Regularized-Leader (FTRL) algorithm on each action set. This development has wide-ranging implications, promising more efficient and effective optimization algorithms in the face of adversarial environments. One of the major strengths of the paper lies in the construction of loss estimators and feature covariance matrix estimators, especially in challenging situations where the learner has no knowledge about the context distribution and there is no simulator available. The paper manages to achieve an improved bound on the bias. The techniques used surpass those used in previous studies [DLWZ23, SKM23], leading to a significant improvement in the algorithm's performance and making it more effective in practice. The introduction of feature centralization is also a notable strength. 
By centralizing the features by $\hat x_t$, an estimate of the mean features under the current policy, the bias $y_t - \hat y_t$ appears in a nice form that can be compensated by a bonus term. This approach provides an elegant solution to handling bias in the estimator. The paper successfully handles the issue of strong dependence between the policy and the empirical context distribution, $\hat{D}_t$, which typically rules out canonical concentration bounds. The paper addresses the need for high efficiency in reusing the context samples, which is a challenging problem often not adequately handled by existing methods. Weaknesses: A notable weakness of the paper is the absence of empirical or numerical experiments to validate the theoretical results. Without practical implementation and testing of the proposed estimators on different datasets, it becomes difficult to assess their efficacy and performance. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How might these estimators handle high-dimensional data? Are there potential ways to improve their performance in such scenarios in future work? Could the presented estimators be generalized to other types of bandit problems, such as non-linear contextual bandit problems? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed the limitations as I could tell. ================================================== Post-Rebuttal: I appreciate the response from the authors, and it addresses most of my concerns. Though I still believe the empirical experiments and the comparisons to other existing methods should be an important part of the work, this is still a strong theoretical paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable feedback. We agree that empirical validation is important to prove the efficacy of the algorithm. It will be important future work to experimentally compare our algorithm to other existing linear contextual bandit algorithms. $\textbf{Q1:}$ How might these estimators handle high-dimensional data? Are there potential ways to improve their performance in such scenarios in future work? $\textbf{Reply:}$ We do not have any constraint on the dimension $d$, so our algorithm can also be applied when $d$ is large. In general, a $\text{poly}(d)$ regret is inevitable. In some special cases, a $\text{poly}(d)$ regret can be avoided. For example, for the case where the number of actions and the size of the policy set are small, the algorithm in [SLKS16] can be used to achieve a regret of $\tilde{\mathcal{O}}\left((\log|\Pi|)^{\frac{1}{3}}(|\mathcal{A}|T)^{\frac{2}{3}}\right)$, where $|\mathcal{A}|$ and $|\Pi|$ are the sizes of the action set and the policy set, respectively. Their algorithm relies on simulators though, and it would be an interesting direction to try to remove it. [SLKS16] Vasilis Syrgkanis, Haipeng Luo, Akshay Krishnamurthy, and Robert E Schapire. Improved regret bounds for oracle-based adversarial contextual bandits. Advances in Neural Information Processing Systems, 29, 2016. $\textbf{Q2:}$ Could the presented estimators be generalized to other types of bandit problems, such as non-linear contextual bandit problems? $\textbf{Reply:}$ The current algorithmic framework, including the estimator and the regularizer designs, is heavily tailored for the linear case. To generalize the problem to the non-linear case, we notice that even for the generalized linear bandit *without context*, there is no known computationally efficient algorithm that handles adversarial losses and has a regret bound of $\text{poly}(d)\times o(T)$. 
The difficulty lies exactly in the construction of an unbiased low-dimensional loss estimator that can be shared among actions. The study of an efficient algorithm for particular non-linear bandit problems (e.g., generalized linear bandits) with adversarial losses is indeed an interesting direction.
Summary: The paper studies an adversarial linear bandit problem. At each round, the adversary selects a hidden loss vector $y_t$, the environment 'stochastically generates' an action set $A_t$, and the learner chooses an action in the action set. The learner is able to observe the noisy loss $\langle y_t, a_t \rangle$. Strengths: Update after rebuttal. This paper proposes an optimal algorithm for the adversarial contextual linear bandit problem. The algorithm consists of OMD with the logdet barrier as well as the lifting technique. Weaknesses: Update after rebuttal. Can the dependence on $d$ be improved? Technical Quality: 3 good Clarity: 3 good Questions for Authors: First, I did not fully understand the problem setting studied in this paper and would need some clarifications. 1. In this work context vectors = actions; I have never seen this formulation in previous literature. The action set $A_t$ is generated from the distribution $D$. What exactly is distribution $D$? Is it defined over the power set of $B_2^d$? What assumption do you make on the size of $A_t$? 2. The formulation does not align with previous work (e.g. NO20). In NO20, they assume the action is chosen by an adversary for each arm in $[K]$ (the action for arm $a$ is $\theta_{t,a}$), and the context $x_t$ is drawn from a (known) distribution and revealed to the learner. The learner chooses one of the arms and observes the noisy loss centered at $x_t^\top \theta_{t, A_t}$. Regret is compared to the best policy, which is a mapping from context to action. This current work assumes the loss vector is chosen adversarially and the action set is drawn 'stochastically' (which I don't completely understand; see the point above). Regret is compared to the best policy, which is a mapping from the action set to a distribution over the action set. Given the different formulations, I do not believe the claims in this work, stating they improve over prior work, are accurate. 
Based on my understanding of this work, if the action set $A_t$ were constant each round, then this is the bandit linear optimization problem, as in (Competing in the Dark, Abernethy et al.). Hence this paper really seems to be solving a bandit linear optimization problem where the action set changes each round, and not the 'adversarial linear contextual bandit' problem as introduced in the intro section. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not address limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for raising concerns about the clarity of the setting. We will clarify them in the new version. $\textbf{Q1:}$ In this work context vectors = actions, I have never seen this formulation in previous literature. The action set $A_t$ is generated from the distribution $D$. What exactly is the distribution $D$? Is it defined over the power set of $B_2^d$? What assumption do you make on the size of $A_t$? $\textbf{Reply:}$ Condensing the context into a time-varying action set is common in the literature of contextual linear bandits. See page 239 of [LS20], page 1 of [HYF23], or other previous works [CLRS11, LWCZ21] that also take this view. Furthermore, the alternative linear contextual bandit setting of [NO20] reduces to our setting as well (see the response for Q2). $D$ is a distribution over subsets of $B_2^d$. In other words, $D$ is a distribution over the power set of $B_2^d$. We only assume $A_t \subset B_2^d$ without any additional assumption on its size, so it can be infinite. The reviewer can also refer to [HYF23] for more characterization on $D$, which is the same as ours. [LS20] Tor Lattimore and Csaba Szepesvári. Bandit algorithms. Cambridge University Press, 2020. [HYF23] Osama Hanna, Lin Yang, Christina Fragouli. Contexts can be Cheap: Solving Stochastic Contextual Bandits with Linear Bandit Algorithms. COLT 2023. [CLRS11] Wei Chu, Lihong Li, Lev Reyzin, Robert Schapire. Contextual Bandits with Linear Payoff Functions. AISTAT 2011. [LWCZ21] Yingkai Li, Yining Wang, Xi Chen, Yuan Zhou. Tight Regret Bounds for Infinite-armed Linear Contextual Bandits. AISTAT 2021. $\textbf{Q2:}$ The formulation does not align with previous work (e.g. NO20). I do not believe the claims in this work, stating they improve over prior work, are accurate. $\textbf{Reply: }$ In fact, the setting of [NO20] is a special case of our setting. 
[NO20] assume that at every round $t$, the environment picks an adversarial loss vector $\theta_{t,a}$ for any action $a \in [K]$, and a random context $X_t\in\mathbb{R}^d$ is drawn from a fixed distribution. The learner chooses an action $a_t\in[K]$ based on $X_t$ and the final observed loss is $\ell_t = \langle X_t, \theta_{t,a_t} \rangle$. To convert this to our formulation, define $\theta_t = \left[\theta_{t,1}, \cdots, \theta_{t,K}\right] \in \mathbb{R}^{d \times K}$. Then we have $\ell_t = \langle X_t, \theta_{t,a_t} \rangle = X_t^\top \theta_t e_{a_t} = \text{Tr}(e_{a_t}X_t^\top \theta_t) = \langle X_te_{a_t}^\top, \theta_t \rangle = \langle \text{Vec}(X_te_{a_t}^\top), \text{Vec}(\theta_t) \rangle$ where $\text{Vec}(X)$ is the column vector generated by stacking the columns of matrix $X$, and $e_i$ is the $i$-th standard basis vector. Then by defining the action set $A_t = \{\text{Vec}(X_te_{i}^\top) : i \in [K]\}$ and the adversarial loss vector $y_t = \text{Vec}(\theta_t)$, their setting becomes a special case of our setting with dimension $dK$. Our formulation is more flexible than that in [NO20] because in their formulation, the number of parameters always scales with the number of actions (i.e., $d\times K$), while in our formulation, the number of parameters is always $d$. Thus, our formulation is able to capture problems with a large number of actions but small intrinsic dimension, while theirs cannot. Based on the discussion above, if we apply our algorithm to their problem, our $d$ becomes $dK$, and the regret bound is $\tilde{\mathcal{O}}\left(d^2K^2\sqrt{T}\right)$, which has worse $d$ and $K$ dependences than [NO20], but removes an inverse smallest eigenvalue of the covariance matrix in their bound. However, we emphasize that our goal is not to improve the regret bound in [NO20], but to remove strong assumptions made in previous work on the knowledge of the context distribution. 
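The vectorization identity in the reduction above can be verified numerically. The following sketch uses arbitrary made-up values (dimensions, seed, and helper names are ours) to check that $\langle X_t, \theta_{t,a}\rangle = \langle \text{Vec}(X_t e_a^\top), \text{Vec}(\theta_t)\rangle$ with column-stacking $\text{Vec}$:

```python
import random

def vec(M):
    # Column-stacking Vec: stack the columns of a matrix (list of rows).
    rows, cols = len(M), len(M[0])
    return [M[i][j] for j in range(cols) for i in range(rows)]

random.seed(0)
d, K, a = 4, 3, 1
X = [random.gauss(0, 1) for _ in range(d)]                         # context X_t
theta = [[random.gauss(0, 1) for _ in range(K)] for _ in range(d)]  # theta_t in R^{d x K}

# Left-hand side: <X_t, theta_{t,a}> (column a of theta_t)
lhs = sum(X[i] * theta[i][a] for i in range(d))

# Right-hand side: <Vec(X_t e_a^T), Vec(theta_t)> in dimension dK
outer = [[X[i] * (1.0 if j == a else 0.0) for j in range(K)]        # X_t e_a^T
         for i in range(d)]
rhs = sum(u * v for u, v in zip(vec(outer), vec(theta)))

print(abs(lhs - rhs) < 1e-12)  # True
```

The rank-one matrix $X_t e_a^\top$ is zero outside column $a$, so the $dK$-dimensional inner product collapses to the original $d$-dimensional one, which is exactly why the setting of [NO20] embeds into dimension $dK$.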
Actually, as discussed in the introduction, [DLWZ23] has already strictly improved [NO20] (simultaneously improving the regret bound, considering a more general setting, and weakening assumptions), and got a near-optimal bound $\tilde{\mathcal{O}}\left(\sqrt{dT\log K}\right)$ (or $\tilde{O}(\sqrt{dKT\log K})$ in the formulation of [NO20]) with simulators. Therefore, in Table 1, we only compare our results with [DLWZ23]. [DLWZ23] Yan Dai, Haipeng Luo, Chen-Yu Wei, and Julian Zimmert. Refined regret for adversarial MDPs with linear function approximation. arXiv preprint arXiv:2301.12942, 2023. --- Rebuttal Comment 1.1: Comment: We thank the reviewer for the time and effort spent on reviewing our paper. As the discussion phase is about to end, we want to ensure our responses have adequately addressed your inquiries. We look forward to your feedback. --- Rebuttal Comment 1.2: Comment: Thanks for the response, it addressed my questions. I was initially confused since there are different formulations for 'linear bandit' vs 'linear contextual bandit', but now it is clear. I have adjusted my score. --- Reply to Comment 1.2.1: Comment: We thank the reviewer for the update and the positive evaluation. For the new question in the updated review, we answer it as below. $\textbf{Q3:}$ Can the dependence on $d$ be improved? $\textbf{Reply:}$ It is possible that the additional $d$ dependence can be removed, but it requires new techniques. The additional $d$ dependence comes from a union bound over $(1/\epsilon)^{O(d^2)}$ policies when dealing with the covariance matrix estimation error, since the policy used by the algorithm is parameterized by $O(d^2)$ parameters. Due to the strong dependency between the contexts received in previous rounds and the current policy, we do not believe that such a union bound can be avoided. We suspect that it might be possible to improve the bound to $\tilde{O}(\sqrt{d^3T})$. 
Recall that the reason we have $(1/\epsilon)^{O(d^2)}$ is that we lift the problem to $(d+1)^2$ dimensions and the bonus term is a $(d+1)\times(d+1)$ matrix. However, there are other ways of imposing a bonus in *linear bandits* (without contexts) that allow the bonus term to be in $(d+1)$ dimensions ([LLWZ20] and [ZL22]), which potentially only requires a union bound over $(1/\epsilon)^{O(d)}$ policies. To achieve the optimal $\tilde{O}(d\sqrt{T})$ bound, we might need a very different algorithm design that does not rely on concentration arguments. [LLWZ20] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang. Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs. NeurIPS 2020. [ZL22] Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits. COLT 2022.
Summary: This paper presents a near-optimal computationally efficient simulator-free algorithm in the contextual bandit setting with i.i.d. contexts and adversarial reward functions. Most of the past work on this setting either requires a simulator that allows them to draw a large number of contexts from a distribution, or is computationally inefficient. Existing algorithms that do not require a simulator and are computationally efficient only achieve a regret of $O(T^{5/6})$. This work greatly improves the dependence on $T$, achieving a regret of $O(d^2 \sqrt{T})$, while still being computationally efficient without a simulator. Their algorithm is based on FTRL with a log-determinant barrier. They also present an algorithm that improves the dependence on $d$, achieving a regret of $O(d \sqrt{T})$, but it is only computationally efficient for a finite policy class. Strengths: I think this paper makes a solid theoretical contribution to the field. The improvement from $O(T^{5/6})$ to $O(\sqrt{T})$ is substantial, and simulator-free computationally efficient algorithms are important. The authors exploit the algorithmic ideas of ZL22 and the ghost sample trick of NO20, along with novel analysis, to provide an improved regret bound. I am slightly concerned about the originality of the algorithmic approach - but this is made up for with the improved analysis and improved regret bound. The paper was clearly well written and the contributions clearly stated. I enjoyed how the authors decomposed the regret and the analysis of the estimator in 3.2-3.4. Weaknesses: The main weakness is that the paper was challenging to read in places, especially for readers not already familiar with the ideas of ZL22. There were other places where a bit of extra detail could have helped the exposition greatly. I have tried to remark on these in the list below. Remarks/Comments 1. At the bottom of page 5, you mention the loss in the “original space”, can you explain precisely what that loss is? 
For readers not familiar with ZL22, Line 2 in the algorithm may be opaque - in particular, the sequence $H_s^{-1}$ is chosen to control the bias. Moving a bit of the discussion from 3.4/3.5 up - or even simply remarking that it will be explained later - may help. 2. The discussion on lines 188-189 is similar to 193-194. You can drop one. 3. In equation 2, you use a $\hat{\Sigma}$ - however the point of this equation seems to be that $\Sigma$ is known! Maybe remove the empirical hat? 4. Is it possible to elaborate a bit more in the paper how your methods “surpass those in [DLWZ23,SKM23]” as described in line 246? 5. $\gamma_t$ should be defined in the algorithm for readability Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. To analyze the first term $\|(\hat{\Sigma}_t - H_t) y_t\|_{\hat{\Sigma}_t^{-1}}$ in line 242, did you need to use any kind of matrix concentration? If so, did you need control of the minimal eigenvalue of $H_t$? If not, how were you able to avoid it? 2. The linear EXP4 algorithm looks very similar to the original EXP4 algorithm. Could you provide more explanation of why we cannot directly use other computationally inefficient benchmark algorithms to get the same regret, i.e. why it’s necessary to present and prove linear EXP4? 3. The computationally efficient algorithm, Algorithm 1, achieves an $O(d^2\sqrt{T})$ regret while having a squared dependence on $d$; while the algorithm that achieves $O(d\sqrt{T})$ regret is optimal but computationally inefficient. Do you believe Algorithm 1 could achieve an $O(d\sqrt{T})$ regret? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for providing suggestions on improving the readability of the paper. We will incorporate them in the final version. $\textbf{Q1:}$ To control the term $||(\hat{\Sigma}_t - H_t) y_t||\_{\hat{\Sigma}_t^{-1}}$, did you need to use any kind of matrix concentration? If so, did you need control of the minimal eigenvalue of $H_t$? If not, how were you able to avoid it? $\textbf{Reply: }$ We elaborate on the main idea of our proof here and more detail can be found in the proof of Lemma 14 in the full version of our paper. To control this term, we do not use any off-the-shelf matrix concentration inequality. Instead, we expand $\text{Tr}\left((\hat{\Sigma}_t - H_t)^\top\hat{\Sigma}_t^{-1}(\hat{\Sigma}_t - H_t)\right)$ by its definition and use the standard Bernstein’s inequality for scalars to handle it. More details are below. First, we show $\text{Tr}\left((\hat{\Sigma}\_t - H_t)^\top\hat{\Sigma}\_t^{-1}(\hat{\Sigma}\_t - H_t)\right) \leq 8\text{Tr}\left((\hat{\Sigma}\_t - H_t)^\top(H_t + \beta_t I)^{-1}(\hat{\Sigma}\_t - H_t)\right)$ where $\beta_t=\Theta(\text{poly}(d)/t)$ by some simple concentration inequality. Below, for simplicity, we assume that $H_t$ is a diagonal matrix (otherwise, we can diagonalize it as in the formal proof of Lemma 14). Recall that $\hat{\Sigma}\_t$ is equal to $\beta_t I$ plus a covariance matrix estimated from the empirical context distribution. Therefore, we have $\hat{\Sigma}\_t - H_t = \Delta_t + \beta_t I$, where $\Delta_t$ is the covariance matrix estimation error related to the difference between the empirical and the true context distribution at time $t$. 
By direct expansion and the assumption that $H_t$ is diagonal, we have $\text{Tr}\left((\hat{\Sigma}\_t - H_t)^\top(H_t + \beta_t I)^{-1}(\hat{\Sigma}\_t - H_t)\right)=\sum_{i=1}^d \left(\frac{(\Delta_t(i,i) + \beta_t)^2}{H_t(i) + \beta_t} + \sum_{j \neq i} \frac{\Delta_t(i,j)^2}{H_t(i) + \beta_t}\right)\leq 2\sum_{i=1}^d \sum_{j=1}^d \frac{\Delta_t(i,j)^2}{H_t(i)+\beta_t} + 2d\beta_t$, where $\Delta_t(i,j)$ is the $(i,j)$-th entry of $\Delta_t$ and $H_t(i)$ is the $(i,i)$-th entry of $H_t$. Finally, we argue that with high probability $\sum_{j=1}^d \Delta_t(i,j)^2\leq \text{poly}(d)\times\tilde{O}(\frac{H_t(i)}{t} + \frac{1}{t^2})$, which, when combined with the argument above, gives the desired bound. The key calculations for this inequality are in Lines 610-620. To establish this inequality, we use the fact that $\Delta_t(i,j)$ is an average of a martingale difference sequence, so we can apply Bernstein's inequality to relate it to the variance of the individual terms. On the other hand, $H_t(i)$ is related to the variance of the individual terms in this martingale difference sequence. The inequality follows by combining these two components. At a high level, the key reason we are able to avoid the minimal eigenvalue of $H_t$ is that in those directions where $H_t + \beta_t I$ has small eigenvalues, the error term $\hat{\Sigma}_t- H_t=\Delta_t + \beta_t I$ is also small. $\textbf{Q2:}$ The linear EXP4 algorithm looks very similar to the original EXP4 algorithm. Could you provide more explanation of why we cannot directly use other computationally inefficient benchmark algorithms to get the same regret, i.e., why it’s necessary to present and prove linear EXP4? $\textbf{Reply:}$ We want to emphasize that we do not claim linear EXP4 as an important contribution of this paper, as it is indeed a fairly straightforward adaptation of the classical EXP4.
Vanilla EXP4 and all benchmark algorithms we are aware of would suffer regret of $\sqrt{KT\ln(|\Pi|)}$, where $|\Pi|=\Theta(T^d)$ is the size of a $1/T$-net of policies and $K$ is the maximal number of actions. Since we want to handle the setting $K\gg d$, we have to present a modification that utilizes the linear feedback structure of the actions. $\textbf{Q3:}$ Algorithm 1 achieves a $O(d^2\sqrt{T})$ regret. Do you believe Algorithm 1 could achieve a $O(d\sqrt{T})$ regret? $\textbf{Reply: }$ We are unsure whether the additional $d$ dependence can be removed. The additional $d$ dependence comes from a union bound over $(1/\epsilon)^{O(d^2)}$ policies when dealing with the covariance matrix estimation error, since the policy used by the algorithm is parameterized by $O(d^2)$ parameters. Due to the strong dependency between the contexts received in previous rounds and the current policy, we do not believe that such a union bound can be avoided. We suspect that it might be possible to improve the bound to $\tilde{O}(\sqrt{d^3T})$. Recall that the reason we have $(1/\epsilon)^{O(d^2)}$ policies is that we lift the problem to $(d+1)^2$ dimensions and the bonus term is a $(d+1)\times(d+1)$ matrix. However, there are other ways of imposing a bonus in $\textit{linear bandits}$ that allow the bonus term to be $(d+1)$-dimensional ([LLWZ20] and [ZL22]), which potentially only requires a union bound over $(1/\epsilon)^{O(d)}$ policies. To achieve the optimal $\tilde{O}(d\sqrt{T})$ bound, we might need a very different algorithm design that does not rely on concentration arguments. [DLWZ23] Yan Dai, Haipeng Luo, Chen-Yu Wei, and Julian Zimmert. Refined regret for adversarial MDPs with linear function approximation. arXiv preprint arXiv:2301.12942, 2023. [LLWZ20] Chung-Wei Lee, Haipeng Luo, Chen-Yu Wei, Mengxiao Zhang. Bias no more: high-probability data-dependent regret bounds for adversarial bandits and MDPs. NeurIPS 2020.
[ZL22] Julian Zimmert, Tor Lattimore. Return of the bias: Almost minimax optimal high probability bounds for adversarial linear bandits. COLT 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response and for walking me through the minimum eigenvalue concerns. I will leave my score where it is.
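The diagonal-case trace expansion in the Q1 reply above can be sanity-checked numerically. The sketch below is purely illustrative (arbitrary dimension, random symmetric error matrix $\Delta_t$, diagonal $H_t$; not the paper's actual quantities): it verifies both the entry-wise expansion of $\text{Tr}\left((\hat{\Sigma}_t - H_t)^\top(H_t + \beta_t I)^{-1}(\hat{\Sigma}_t - H_t)\right)$ and the stated upper bound $2\sum_{i,j}\Delta_t(i,j)^2/(H_t(i)+\beta_t) + 2d\beta_t$, which hold deterministically for any such matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
d, beta = 5, 0.1                                   # illustrative dimension and regularizer
h = rng.uniform(0.5, 2.0, size=d)                  # eigenvalues of the diagonal H_t
H = np.diag(h)
Delta = rng.normal(size=(d, d))
Delta = (Delta + Delta.T) / 2                      # symmetric covariance estimation error
A = Delta + beta * np.eye(d)                       # plays the role of Sigma_hat_t - H_t

# Left-hand side: Tr(A^T (H + beta I)^{-1} A)
lhs = np.trace(A.T @ np.linalg.inv(H + beta * np.eye(d)) @ A)

# Entry-wise expansion from the rebuttal (valid since H is diagonal)
expansion = sum(
    (Delta[i, i] + beta) ** 2 / (h[i] + beta)
    + sum(Delta[i, j] ** 2 / (h[i] + beta) for j in range(d) if j != i)
    for i in range(d)
)

# Claimed upper bound: 2 * sum_{i,j} Delta(i,j)^2 / (h_i + beta) + 2 d beta
bound = 2 * (Delta ** 2 / (h[:, None] + beta)).sum() + 2 * d * beta

assert np.isclose(lhs, expansion)                  # the expansion is an identity
assert expansion <= bound                          # the inequality holds
```

The bound step uses $(\Delta_t(i,i)+\beta_t)^2 \le 2\Delta_t(i,i)^2 + 2\beta_t^2$ and $\beta_t^2/(H_t(i)+\beta_t) \le \beta_t$, so it holds for any symmetric $\Delta_t$, not just the high-probability event in the proof.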
Summary: This paper studies contextual bandits with i.i.d. contexts and adversarial linear loss functions without simulators and proposes a follow-the-regularized-leader with log-determinant barrier (Logdet-FTRL) algorithm with a carefully designed covariance matrix estimator that is computationally efficient given a small context set and achieves order-optimal (in T) regret \tilde{O}(d^2 \sqrt{T}). Previous algorithms achieving \sqrt{T} regret either require knowledge of the context distribution or a simulator from which the learner can learn the context distribution by drawing free i.i.d. contexts. The proposed algorithm estimates the key covariance matrix via previously observed contexts with centralization and a ridge regularizer. The setting contains stochastic sleeping bandits as a special case, and the proposed algorithm answers the open problem in stochastic sleeping bandits affirmatively. A computationally inefficient algorithm that achieves regret with an improved d factor is also proposed. Strengths: The analysis is novel and the contribution is significant to linear adversarial bandits with i.i.d. contexts without a simulator. Multiple related scenarios and extensions are discussed. I went over the proofs of the main regret analysis in App. D and several technical lemmas. The proofs are sound to me. Weaknesses: I do not see particular weaknesses of the current version. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Minor typos: Line 164, has been taken The second equality between line 796 and line 797, U^{\Ac_0} instead of \bar{U}^{\Ac_0}. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the support and valuable feedback. We have changed the typos you mentioned in our new version.
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Sample-Conditioned Hypothesis Stability Sharpens Information-Theoretic Generalization Bounds
Accept (poster)
Summary: Information-theoretic generalization bounds offer a new approach in generalization theory by providing complexity measures that depend on the data distribution and learning algorithm itself. In recent years, it has been observed that information-theoretic generalization bounds are not compatible with the classical approaches to prove generalization such as VC theory or uniform stability. In this paper the authors propose several new information-theoretic quantities to study generalization. The proposed bounds are interesting as they are compatible with several notions of uniform stability. Strengths: I think the paper addresses an important limitation of information-theoretic generalization, that is, incompatibility with the uniform stability framework. They also provide various examples to show the expressiveness of their bounds. Weaknesses: 1– The presentation of the paper needs significant improvement. Specifically, the part on defining the additional structure is extremely vague; there are many variables that are not clearly defined. For instance, what are tilde{W}_{i,0} and tilde{W}_{i,1}? 2– Compared to the CMI bound, the intuition behind the results is not clear. For instance, CMI is closely related to membership inference. 3– Some of the results in the paper seem redundant. For instance, Theorem 3.1 has been proved in work by Bu et al. under the name of individual sample bounds. 4– The major drawback of the bounds in the paper is the following: Uniform stability bounds for randomized algorithms are very important. For instance, in the seminal work by Hardt et al., the fact that the algorithm is randomized is important. The other example is the algorithm by Feldman and Dagan for learning VC classes with stable algorithms. However, to me all the bounds in the paper rely on a very strong notion of stability which only holds for “deterministic” algorithms. It is not clear what the roadblock is for extending the results to randomized algorithms.
Feldman, Dagan. PAC learning with stable and private predictions 5– The main limitation of uniform stability bounds is that they consider the “worst-case” data distribution. However, a learning algorithm may be more stable for “easier” distributions, while all the bounds in the paper already depend on the worst-case stability parameter. I think this leads to suboptimal bounds. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1- Weakly uniform stability versus strong uniform stability: Consider the seminal paper of Hardt et al., in which they derive a stability bound for SGD under the notion of weakly uniform stability. Why do the authors only consider the strong notion of stability? 2- The text after Corollary 3.1 seems unclear to me. If the algorithm is deterministic then the mutual information term will blow up. It is not clear to me how this may result in improvement. 3- Theorem 3.3: Let's assume that the algorithm is deterministic and also permutation invariant in the sense that the order of the training points does not affect the output. Then, this result states that gen.gap <= gamma_1 ( 1 + mutual information term). It shows that even if the mutual information term goes to zero, the convergence rate is determined by gamma_1. 4- For Example 1, what is the exact generalization error? It should be of order O(1/n); however, the obtained bound using the proposed approach has an order of 1/sqrt(n). Typos: Lemma A.5. Bernstein. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The main limitations are explained in Items 4 and 5 of the weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive comments. Our responses follow. >- The presentation of the paper needs significant improvement. Specifically the part on defining the additional structure is extremely vague. For instance, there are many variables that are not clearly defined. For instance, what is the tilde{W}_{i,0} and tilde{W}_{i,1}. **Response.** We apologize for lacking clarity in places and will make our best effort in the revision. Specifically, $\widetilde{W}\_{i,0}$ and $\widetilde{W}\_{i,1}$ denote the entries in the $i$-th row of the matrix $\widetilde{W}$ in its first and second columns, respectively. >- Compared to CMI bound, the intuition behind the results is not clear. For instance CMI is closely related to membership inference. **Response.** We have tried to explain the intuition of $I(\widehat{Z}_i;U_i|\widetilde{W}_i)$ in Lines 212-217, and here we restate: the new CMI terms in our paper are also closely related to membership inference. Specifically, $I(\widehat{Z}_i;U_i|\widetilde{W}_i)$ measures the ability to decide whether an instance $\widehat{Z}_i$ contributes to the training of $\widetilde{W}^+$ or to the training of $\widetilde{W}^+_i$ when we know it contributes to only one of them. >- Some of the results in the paper seem redundant. For instance, theorem 3.1 has been proved by work by Bu et al .... **Response.** Agreeably, our Theorem 3.1 has a very similar form to the bound of Bu et al. However, the two bounds differ significantly due to the replacement of the sub-gaussian variance proxy with the stability parameter. This difference is critical since the bound of Bu et al. has been shown to be inadequate in explaining the learnability of certain SCO problems [1], as illustrated by Example 1 (please refer to Section E for an explanation) in our paper. However, our Theorem 3.1 overcomes this limitation. For further comparisons between the two bounds, please see Lines 191-197 in our paper.
>- The major drawback of the bounds in the paper is the following: Uniform stability bounds for randomized algorithms are very important. .... However, to me all the bounds in the paper rely on a very strong notion of stability which only holds for “deterministic algorithms.... >- weakly uniform stability versus strong uniform stability: .... Why in the paper do authors only consider the strong notion of stability? **Response.** On the one hand, our four SCH stability notions in different bounds relax the strong notion of uniform stability, taking into account the randomness of the algorithm, particularly for $\gamma_2$, $\gamma_3$, and $\gamma_4$. On the other hand, even bounds based on the strong uniform stability notion, such as the $\beta_2$-based bounds, remain applicable to randomized algorithms. This is because the MI and CMI terms in these bounds are algorithm-dependent, capturing the randomness in the algorithm. Notably, $\beta_2$-uniform stability is still a weaker assumption than the bounded loss assumption. One important reason we apply the strong uniform stability notion here is that information-theoretic bounds are sometimes vacuous in the deterministic setting, so incorporating $\beta_2$ can significantly sharpen the information-theoretic bounds in this setting, as shown in the examples in this paper. Moreover, it's worth noting that replacing $\beta_2$ with the weak notion $\beta_1$ in our bounds may not be feasible. >- The main limitation of uniform stability bounds is that it considers the “worst-case” data distribution .... **Response.** We note that the $\gamma_2$-SCH-B stability and the $\gamma_4$-SCH-D stability are distribution-dependent notions. They are not defined from the worst-case data distribution.
Additionally, if the loss function is upper-bounded by $C$, as commonly employed in proving the optimal rate of the stability-based bound, all the worst-case stability parameters can be replaced by $C$, allowing us to focus solely on the distribution-dependent notions. Remarkably, we achieve the known fastest rate of the second-moment generalization error bound, as demonstrated in Theorem 4.4. >- text after corollary 3.1 seems unclear to me.... **Response.** For deterministic algorithms, the individual mutual information term $I(W;Z_i)$ may not necessarily blow up. Simple examples illustrating this can be found in Examples A and B in Bu et al.'s work. In fact, the primary motivation behind Bu et al.'s proposal of the individual bound is to mitigate the blow-up problem of the full-sample mutual information $I(W;S)$ in the deterministic setting. >- Theorem 3.3: Let's assume that the algorithm is deterministic and also permutation invariant .... **Response.** We completely agree. While Theorem 3.3 exhibits a faster decay rate compared to Theorem 3.1 due to the removal of the square-root function, it still does not surpass $\gamma_1$ in terms of rate. We must note that our work also demonstrates the enhancement of the stability-based framework through information-theoretic analysis, as highlighted in Example 2. >- For Example 1, what is the exact generalization error? It should be of order O(1/n)... **Response.** The exact generalization error in Example 1 decays with the rate of $\mathcal{O}(1/\sqrt{n})$, as given in [1, Theorem 17]. In fact, for GD with nonsmooth convex loss, such as in Example 1, [2, Theorem 3.2] gives a tight generalization lower bound: $\Omega(\eta \sqrt{T}+\frac{\eta T}{n})$. By substituting $\eta=\frac{1}{n\sqrt{n}}$ and $T=n^2$ from Example 1, we can deduce the lower bound of $\Omega(1/\sqrt{n})$ as well.
Combining the upper bound obtained in our paper, we conclude that the exact generalization error has the order of $\mathcal{O}(1/\sqrt{n})$. [1] Mahdi Haghifam, et. al. Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization. ALT 2023. [2] Raef Bassily, et al. Stability of stochastic gradient descent on nonsmooth convex losses. NeurIPS 2020. --- Rebuttal Comment 1.1: Title: Response after rebuttals Comment: I would like to thank the authors for their response. I still have questions regarding Example 1. A linear function is indeed a smooth function. So, what you have as the response is not correct. I think it is easy to find the exact dependence of the generalization bound on $n$. --- Reply to Comment 1.1.1: Comment: Thanks for your reply! >- I still have questions regarding Example 1. Linear function is indeed a smooth function. So, what you have as the response is not correct. Thanks for pointing this out; you are right, the loss in Example 1 is indeed smooth. Yet, our previous response still holds because [2, Theorem 3.2] is proved without using the additional smoothness condition of the loss, so $\mathcal{E}\_\mu(\mathcal{A})\in \Omega(1/\sqrt{n})$ holds true. While it is completely possible that [2, Theorem 3.2] is not tight for the smooth loss case, our generalization upper bound is $\mathcal{O}(1/\sqrt{n})$. Combining them together guarantees $\mathcal{E}\_\mu(\mathcal{A})\in\mathcal{O}(1/\sqrt{n})$.
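The rate substitution in the exchange above is a small piece of symbolic arithmetic that can be checked directly. The sketch below (purely illustrative, using sympy) confirms that plugging $\eta = \frac{1}{n\sqrt{n}}$ and $T = n^2$ from Example 1 into the lower-bound rate $\eta\sqrt{T} + \frac{\eta T}{n}$ yields exactly $\frac{2}{\sqrt{n}}$, i.e. $\Theta(1/\sqrt{n})$:

```python
import sympy as sp

n = sp.symbols('n', positive=True)
eta = 1 / (n * sp.sqrt(n))            # step size from Example 1
T = n ** 2                            # number of GD iterations from Example 1

# Lower-bound rate from [2, Theorem 3.2]: eta*sqrt(T) + eta*T/n
rate = eta * sp.sqrt(T) + eta * T / n

# Both terms reduce to n^{-1/2}, so the rate is exactly 2/sqrt(n)
assert sp.simplify(rate - 2 / sp.sqrt(n)) == 0
```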
Summary: The paper proposes several new stability assumptions named sample-conditioned hypothesis (SCH) stability. Based on these notions, the authors present new IOMI and CMI bounds to address the limitations of existing information-theoretic bounds in the context of stochastic convex optimization (SCO) problems. Strengths: - The paper adopts different assumptions for bounding the CGF, which can improve existing information-theoretic bounds, especially in the context of SCO. - The connection between Bernstein and the proposed SCH-B stability is interesting. - The paper is generally well-written. Weaknesses: - It seems to me the most significant contribution is to introduce uniform stability for bounding the CGF, which can solve the counter-example proposed in [1]. However, it's hard to estimate the order of the stability parameter without any knowledge of the distribution. Especially in practice, such parameters do not affect the algorithm design. - The effectiveness of the proposed SCH stabilities is unclear. It's mentioned that all the stability parameters $\gamma_{1:4}$ are smaller than $\beta_2$. From Theorem 4.3, we have $\|\mathcal{E}\_{\mu}(A)\| \leq \frac{\beta_2}{n} \sum_{i=1}^n I(\hat{Z_i};U_i|\tilde{W_i}) + 0.72 \beta_2$. Similar to the discussion in Example 2, the first term can vanish. However, the second term still exists since $\beta_2$ is a constant. To prove the effectiveness of the SCH stability bounds, one needs to identify the order of $\gamma_4^2$. - Example 1 requires the data and parameter dimension $d=2n^2$ to ***grow with $n$*** for proving the $\Omega(1)$ lower bound of the information quantities. Does this really make sense? Even though the uniform stability is much tighter in this case, I do not think this counter-example is representative enough. Could you conceive another counter-example that is more convincing?
- The SCH-B and SCH-D stability are distribution dependent, the corresponding stability parameter cannot be estimated, and these notions can be confused with other distribution assumptions. - The SCH-A and SCH-C stability are distribution-free but related to the distribution (or set) of the output model parameters. It’s unclear to me how to identify the stability parameter effectively for different algorithms. >[1] Mahdi Haghifam, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund, Daniel M Roy, and Gintare Karolina Dziugaite. Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization. In International Conference on Algorithmic Learning Theory, pages 663–706. PMLR, 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the above section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you sincerely for your comments on our paper. Our responses follow. >- It seems to me the most significant contribution ... **Response.** Accurately estimating the order of the stability parameter is challenging in all stability-based bounds. However, it is still possible to bound the parameter, without access to the underlying distribution, and obtain insights to guide the design of learning algorithms. As shown in Section F.2, without knowledge of the data distribution, it is possible to explicitly bound the stability parameter, and the bound inspires Tikhonov regularization, which finds smoother solutions for better generalization. In addition to bounding the stability parameter, one may also have the option of estimating the parameter via influence functions [1]. By employing such estimation, the appearance of the Hessian matrix of the solution can also offer valuable insights for algorithm design. We would like to re-emphasize that our primary focus is to demonstrate the existence of information-theoretic bounds capable of explaining the learnability of problems with certain stability properties, such as the SCO problems discussed in Section 5, and resolving the limitations shown in [2]. While it is not our primary goal, we agree that providing new algorithmic insights is indeed an important dimension for studying generalization. Notably, novel generalization bounds may lead to the discovery of new algorithmic properties that improve generalization when the bound is specialized to a certain problem or algorithm; for example, the bounds of [3] specialized to SGD lead to new insights in understanding and improving SGD in [4]. It would be interesting to explore this aspect for the bounds developed in this paper. >- The effectiveness of .... **Response.** If both $\beta_2$ and $\gamma_4$ are constants, then the algorithm is neither $\beta_2$-uniformly stable nor $\gamma_4$-SCH-D stable.
In this case, Theorem 4.3 is not of any interest for algorithms that are not stable in these senses. Now suppose only $\beta_2$ is a constant, i.e., the algorithm is not $\beta_2$-uniformly stable. In this case, there is still the possibility that the algorithm is $\gamma_4$-SCH-D stable and the second term vanishes at least at the speed of $\gamma_4$. Then we agree that identifying the order of $\gamma_4$ is needed for the bound to be useful. Similar to other distribution-dependent stability notions, determining the order of $\gamma^2_4$ is challenging. Consequently, uniform stability remains the most extensively studied and well-understood concept in the learning theory community. However, we would like to note that our SCH parameters offer an extension of the bounds' applicability, such as enabling connections with the Bernstein condition. >- Example 1 requires the data ... **Response.** Example 1 was constructed in [2, Theorem 17]. The objective of this exercise is to create a "bad" learning problem for which information-theoretic bounds demonstrate certain limitations. Such an exercise does not require the created "bad problem" to be representative of reality. In this paper, we show that even for such "bad problems", the bounds presented in this paper overcome these limitations. Regarding the choice of $d=2n^2$, we remark that the dimension $d$ is usually related to the model capacity, and many generalization bounds have the rate of $\mathcal{O}((\frac{d}{n})^{\alpha})$ (for some $\alpha>0$). Thus, we do care about how changes in $d$ affect generalization. An important feature of gradient descent for SCO problems is that the sample complexity is dimension-independent. Thus, for any generalization bounds that explicitly depend on the model dimension, there may always exist a setting where they don't diminish, e.g., $d=2n^2$ or $d=3T2^n/4$ for the MI/CMI bounds in [2].
In fact, this setting does make sense because we hope the generalization bounds can explain the learnability of a class of CLB problems here instead of only a specific setting with a fixed $d$. If $d$ is treated as a constant, then there would be no such limitations since the dimension-dependent property of MI/CMI bounds would not be considered. Thus, our key message here is that when $d=2n^2$ in this CLB problem, the exact generalization error of gradient descent does vanish as $n$ increases, but the previous information-theoretic bounds are unable to establish the learnability of this learning problem. In this sense, this example stands as a representative case. Additionally, here we focus on the regime of overparameterization where $d>n$, which holds true for most deep neural networks. Also, allowing the model dimension to grow with $n$ aligns with practical scenarios. For example, achieving good performance on CIFAR-10 and ImageNet often necessitates employing a larger model for the latter. >- The SCH-B and .... >- The SCH-A and .... **Response.** We acknowledge that these SCH stability parameters are hard to estimate in general, which we also discussed in Section G. But this limitation, which also exists in many other bounds, does not negate the potential usefulness of these bounds, as we already discussed above. In addition, as we emphasized before, uniform stability is still the only notion that has been widely studied and well understood in the learning theory community. Further development of the practical usage of stability itself may eventually also help to overcome the estimation difficulty of our bounds. [1] Pang Wei Koh, et. al. Understanding black-box predictions via influence functions. ICML 2017. [2] Mahdi Haghifam, et. al. Limitations of information-theoretic generalization bounds for gradient descent methods in stochastic convex optimization. ALT 2023. [3] Aolin Xu, et. al.
Information-theoretic analysis of generalization capability of learning algorithms. NeurIPS 2017. [4] Ziqiao Wang, et. al. On the generalization of models trained with SGD: Information-theoretic bounds and implications. ICLR 2022. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your clarifications. I will take these into consideration and engage in the discussion process. --- Rebuttal Comment 1.2: Title: Follow-up question Comment: Thanks for introducing the new examples. I have the following question that needs more clarification. ### Example 1 It seems to me the proposed method does not solve the counter-example. The original discussion in Line 280 did not consider the dimension. In fact, $\|w_t - w_t^i\|\leq \|\eta t (\hat{\mu} - \hat{\mu}^i)\| = \frac{\eta t}{n} \|z_i - z_i^\prime\| \leq \mathcal{O}(\frac{\eta t \sqrt{d}}{n})$. When $d=2n^2$, $\beta_2 \leq \mathcal{O}(\frac{\eta T \sqrt{d}}{n}) \in \mathcal{O}(\sqrt{n})$. --- Reply to Comment 1.2.1: Comment: Thanks for the question. Please notice that $Z$ is a one-hot vector in Example 1. Thus, $\frac{\eta t}{n}\|z_i-z_i'\|\leq\frac{\eta t}{n}(\|z_i\|+\|z_i'\|)=\frac{2\eta t}{n}=\mathcal{O}(\frac{1}{\sqrt{n}})$. In addition, since $w_t$ is restricted to a unit ball, the distance $\|w_t-w_t^i\|$ will not explode. I hope this addresses the reviewer's concern.
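The one-hot observation in the reply above can be verified directly. This illustrative sketch (with an arbitrary choice of $n$; the variable names are ours, not the paper's) checks that for one-hot data $\|z_i - z_i'\| \le 2$ regardless of the dimension $d = 2n^2$, so the stability term $\frac{\eta t}{n}\|z_i - z_i'\|$ at $t = T$ is $\mathcal{O}(1/\sqrt{n})$ rather than the $\mathcal{O}(\sqrt{n})$ suggested by the generic $\sqrt{d}$ bound in the comment:

```python
import numpy as np

n = 16
d = 2 * n ** 2                          # dimension grows with n as in Example 1
eta, T = 1 / (n * np.sqrt(n)), n ** 2   # step size and iteration count from Example 1

z, z_prime = np.zeros(d), np.zeros(d)
z[3], z_prime[7] = 1.0, 1.0             # two arbitrary distinct one-hot data points

diff_norm = np.linalg.norm(z - z_prime)
assert diff_norm <= 2.0                 # independent of d, unlike the generic sqrt(d) bound

# (eta*t/n) * ||z_i - z_i'|| at t = T; for distinct one-hot vectors this is sqrt(2)/sqrt(n)
stability = (eta * T / n) * diff_norm
assert np.isclose(stability, np.sqrt(2) / np.sqrt(n))
```

Note that for two distinct one-hot vectors the difference norm is exactly $\sqrt{2}$, slightly tighter than the triangle-inequality bound of $2$ used in the reply; either way the rate is $\mathcal{O}(1/\sqrt{n})$.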
Summary: This paper studies how to develop information-theoretic generalization error bounds under the assumption that the algorithm is uniformly stable under a certain loss. Typically, these kinds of bounds are based on properties of the loss with respect to the data (e.g. subgaussian or bounded), while in this case they look at a property of both the loss and the algorithm, namely that it is uniformly stable. Considering these two together allows the bounds to decrease at faster rates, as the stability parameter can decrease with the number of samples while the boundedness or subgaussianity parameter does not. In particular, they extend the individual mutual information bounds from [9] to this assumption. They also consider a matrix of $n \times 2$ hypotheses, where the first column is the output hypotheses, and each row in the second column is the hypothesis outputted by the algorithm when the $i$-th sample is swapped by another sample that is i.i.d. In a similar fashion to [58], they manage to develop conditional mutual information bounds, where they condition on this set of hypotheses. Intuitively, these bounds state the ability to determine if a sample was used to generate the real hypothesis or the hypothesis where the $i$-th sample was swapped. Then, they develop individual conditional mutual information bounds similar to [49] under this setup. As an application, they showcase that while most of the current information-theoretic bounds fail in the example from [21], the presented bounds can succeed as they are stable with a stability parameter that decreases with the number of samples. Also, they make the case that for very stable algorithms (with constant in $\Omega(1/\sqrt{n})$) their bound improves upon [58]'s. Finally, they connect their results with the Bernstein condition and further adapt the bounds from [63] to their setup.
Strengths: The main strength of the paper is the link between stability and the current individual mutual information bounds, which in turn develops into mutual information bounds for stable algorithms with an explicit appearance of the stability parameter. This is good because we know the stability parameter of certain algorithms, and this mutual information can be bounded in a good number of settings. Similarly, the connection with the Bernstein condition is also important, as we also know that a good number of algorithms under certain losses respect this condition, and therefore these bounds can be applied. Finally, as in [63], the paper showcases how a simple trick like Donsker-Varadhan together with a careful choice of the function in the supremum can lead to good bounds. Weaknesses: While the results are interesting, I fail to understand exactly how results other than those in Section 3 are useful. Yes, they can achieve the rates desired in Example 1, but this comes basically because the individual CMI is <= 1 and one can directly have the bound expected generalization error <= $\beta$, which is already known from the uniform stability literature. * A suggestion for improvement is to state that directly as a stronger result. Since the individual CMI <= 1, one may recover the bounds from the uniform stability literature. Then, every result that we know from there is also subsumed into this setup. This way, one can see more clearly that Example 1 works immediately from uniform stability, and many other results as well. I believe that this way you may be able to comment on larger sets of problems. I think that they definitely have potential, but this potential is not explained in the paper. The main reason I mention this is that the previous bounds were dealing with a KL divergence between terms that were "easily tractable" like the posterior $P\_{W|S}$ (or $P\_{W|Z\_i}$) and a prior $Q\_{W}$.
However, the terms appearing in these bounds seem very difficult to treat and characterize. * Another suggestion would be to find an example where this mutual information can be calculated, so the reader can have an idea of why this is interesting beyond the fact that it can be reduced to the known bound from uniform stability, as I mentioned above. An interesting example would include a setup where neither uniform stability nor "classical" information-theoretic bounds can achieve the desired expected generalization error rate, but combined with the presented bounds can. Some results and statements are included but are loosely explained. * Theorems 3.3 and 4.3 are examples of this. When can this be useful? Could you provide us with a particular example where we can see how we are benefiting from including $\gamma\_2^2 / \gamma_1$ or $\Lambda(\tilde{W}\_i)$? * In Remark 2.2, what do you mean by "it is expected that $\gamma\_4$ is larger than $\gamma\_2$ due to the independence of $Z'$ in Eq. (7)"? A similar claim is made in lines 217-218. When making these kinds of remarks and statements, it is important to better justify the reasoning behind them. * The whole set of definitions in Definition 2.1 is not very clear to me. It would be nice to have a larger, more comprehensive commentary on what these definitions are and why they are included / why they are useful. Small corrections in the literature, not very important: * In line 31, it should be "decay at a faster rate, e.g. $\mathcal{O}(1/n)$". * In line 33, "demonstrating tightness in non-convex learning cases such as deep learning" may be misleading. Some people may interpret this to mean that they achieve the correct rate and therefore are tight. An alternative phrasing could be "demonstrating a good characterization in some instances of non-convex settings (e.g. deep learning)". * In line 198, $R$-conditioned information-theoretic bounds also appeared in [49]. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Could you clarify further the definitions in Definition 2.1? Also, could you expand on the reasoning behind why some parameters are larger/smaller than others? * Could you give some examples where Theorems 3.3 and 4.3 improve upon the preceding theorems? * Could you try to find examples where neither uniform stability nor "classical" information-theoretic bounds can achieve the desired expected generalization error rate, but combined with the presented bounds can? * Could you find examples where the bounds with the conditioning can be employed to gain an understanding of some problem beyond the fact that it is stable? I am happy to increase my score if these questions are satisfactorily answered. I like the paper, but it seems a little immature at the moment. Once these things are addressed, it could become a better paper that ticks many boxes and deals with important questions. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Some limitations are included throughout the text and in the Appendices. Some others, like the ones included in the weaknesses, are not. Regarding potential negative societal impact, as the work is of a theoretical, fundamental nature, I do not foresee any issue with not addressing it. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your constructive comments, and we appreciate your positive feedback on our paper. Our responses follow. >- Could you clarify further the definitions in Definition 2.1? Also, could you expand on the reasoning of why some parameters are larger/smaller than the others? **Response.** Due to the character constraints in the separate rebuttal, we have made the decision to relocate the response to this question to the global response. Kindly refer to the global response provided above; we sincerely apologize for any inconvenience caused. >- Could you give some examples where Theorems 3.3 and 4.3 improve upon the preceding theorems? **Response.** Suppose the loss function is bounded by $C$. If an algorithm $\mathcal{A}$ is $\gamma_2$-SCH-B stable but not $\gamma_1$-SCH-A stable (note that if $\mathcal{A}$ is $\gamma_1$-SCH-A stable, it is also $\gamma_1$-SCH-B stable since $\gamma_2\leq\gamma_1$), we replace $\gamma_1$ by $C$ in Theorem 3.1 and Theorem 3.3. Since $I(\widetilde{W}^+;\widetilde{Z}^+\_{i})$ decays faster than $\sqrt{I(\widetilde{W}^+;\widetilde{Z}^+\_{i})}$, as long as $\gamma_2^2$ also decays faster than $\sqrt{I(\widetilde{W}^+;\widetilde{Z}^+\_{i})}$, Theorem 3.3 is tighter than Theorem 3.1. Similar arguments also apply to comparing Theorem 4.1 and Theorem 4.3 when $\mathcal{A}$ is $\gamma_4$-SCH-D stable but not $\gamma_3$-SCH-C stable. In addition, the $\gamma_2$-SCH-B stability condition in Theorem 3.3 gives us a chance to connect to the Bernstein condition, as shown in Corollary 6.1. Thus, if the Bernstein condition is satisfied but $\mathcal{A}$ is not $\gamma_1$-SCH-A stable, Theorem 3.3 will be tighter. As a simple concrete example: Let $\mathcal{W}$ be finite, i.e. $|\mathcal{W}|=K$. To simplify the setting, assume $L_\mu(w^*)=0$ and let $\mathcal{A}$ be an interpolating algorithm. 
In this case, for Theorem 3.1, we have $\mathcal{E}\_\mu(\mathcal{A})\leq\frac{\sqrt{2}C}{n}\sum_{i=1}^n\sqrt{I(W;Z_i)}\leq\sqrt{2}C\sqrt{\frac{I(W;S)}{n}}\leq\mathcal{O}(\log{K}/\sqrt{n})$. For Theorem 3.3 or Corollary 6.1, we have $\mathcal{E}\_\mu(\mathcal{A})\leq\mathcal{O}(\sum_{i=1}^n\frac{I(W;Z_i)}{n})\leq\mathcal{O}(\frac{I(W;S)}{n})\leq\mathcal{O}(\frac{\log{K}}{n})$. Then clearly, Theorem 3.3 is tighter than Theorem 3.1. We will try to construct more examples in the revision. >- Could you try to find examples where neither uniform stability nor "classical" information-theoretic bounds can achieve the desired expected generalization error rate, but combined with the presented bounds can? **Response.** This is a good question and we did try to find such examples, but this appears difficult. The reason is that if the algorithm satisfies certain stability assumptions, stability-based bounds often achieve the optimal rate, particularly after the work of [1,2]. Specifically, in the context of SCO with SGD/GD, where the loss function is either smooth or nonsmooth, stability-based bounds can attain tight upper and lower bounds [3,4]. However, in the context of nonconvex learning, the challenge lies in obtaining the exact generalization error rate and the decaying rate of both the stability parameter and the MI/CMI term. We note that we have demonstrated that information-theoretic quantities can enhance the stability analysis, as exemplified in Example 2 and Section F.3. Thus, the advantages of our bounds are beyond mere reduction to uniform stability bounds. Moreover, when our interest extends beyond the decaying rate w.r.t. $n$, there is a potential case that our bounds improve both stability-based bounds and the MI/CMI bounds before. For instance, in Figure 3(d) in [5], previous MI/CMI bounds are loose at the early phase of training. 
As the stability parameter usually grows with the iteration number, it remains small at the beginning, and its large value at the end can be mitigated by the MI/CMI term. Consequently, the product of the stability parameter and the MI/CMI term could potentially offer a more accurate reflection of the dynamics of the true generalization error. The remaining challenge is to rigorously characterize SCH parameters for SGD (or SGLD), as previous analyses were mainly limited to characterizing $\beta_1$. >- Could you find examples where the bounds with the conditioning can be employed to gain an understanding of some problem beyond the fact that it is stable? **Response.** We note that CMI bounds have already been widely discussed in the existing literature where the algorithm is not necessarily stable. In Section F.3, we establish a connection between our newly introduced CMI terms and the classical VC theory, building upon previous research. Consequently, for learning problems with finite VC dimension, the CMI terms themselves can serve as a means to demonstrate the learnability of such problems. >- A suggestion for improvement, ... >- Another suggestion would be ... >- Small corrections in the literature, ... Thank you again for providing these valuable suggestions for improving our paper; we will revise our paper according to your comments. Please do let us know if you have further questions. [1] Yegor Klochkov, et al. Stability and Deviation Optimal Risk Bounds with Convergence Rate O(1/n). NeurIPS 2021. [2] Olivier Bousquet, et al. Sharper bounds for uniformly stable algorithms. COLT 2020. [3] Moritz Hardt, et al. Train faster, generalize better: Stability of stochastic gradient descent. ICML 2016. [4] Raef Bassily, et al. Stability of stochastic gradient descent on nonsmooth convex losses. NeurIPS 2020. [5] Ziqiao Wang, et al. Tighter information-theoretic generalization bounds from supersamples. ICML 2023. 
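As a back-of-the-envelope illustration of the rate gap in the finite-$\mathcal{W}$ interpolation example discussed earlier in this rebuttal (a sketch with arbitrary constants, not taken from the paper), the slow-rate bound $\sqrt{2}C\sqrt{\log K/n}$ and the fast-rate bound $\log K/n$ can be compared numerically:

```python
import math

def slow_rate(n, K, C=1.0):
    # Theorem 3.1-style bound: sqrt(2) * C * sqrt(I(W;S)/n), with I(W;S) <= log K
    return math.sqrt(2) * C * math.sqrt(math.log(K) / n)

def fast_rate(n, K):
    # Theorem 3.3 / Corollary 6.1-style bound: O(I(W;S)/n), with I(W;S) <= log K
    return math.log(K) / n

K = 1024  # hypothetical finite hypothesis class size
for n in (10, 100, 1000, 10000):
    print(n, round(slow_rate(n, K), 4), round(fast_rate(n, K), 4))
```

The fast rate is smaller whenever $\log K/n < \sqrt{2}C\sqrt{\log K/n}$, i.e. for all $n > \log K/(2C^2)$, and the ratio between the two shrinks as $\sqrt{\log K/n}$.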
--- Rebuttal Comment 1.1: Title: Answer to rebuttal Comment: Thank you for your rebuttal (both the general one and this one). Let me continue the discussion on some still unclear topics. 1. *Regarding the definitions in Definition 2.1.* \ Thank you for the explanations. Apart from including them in the text, could you give some examples of reasonable generality, where $\gamma\_2 \leq \gamma\_4$ and where $\gamma\_2, \gamma\_3,$ and $\gamma_4$ are smaller or equal to $\beta\_1$? 2. *Regarding the examples where Theorems 3.3 and 4.3 improve upon known results.* * *Regarding the first comparison between Theorems 3.1 and 3.3.* \ It is clear that if the constants decay faster than the terms in Theorem 3.1, then it is tighter. However, when would that happen? Do you have some examples where you can state that this is the case? * *Regarding the second comparison between Theorems 3.1 and 3.3.* \ If $\mathcal{A}$ has a finite hypothesis space $|\mathcal{W}| = K$, it is known with classical information-theoretic techniques that $\mathcal{E}\_\mu \in \mathcal{O}(\frac{\log K}{\sqrt{n}})$. Also, if $\mathcal{A}$ is an interpolating algorithm, we know that $\mathcal{E}\_\mu \in \mathcal{O}(\frac{\log K}{n})$ already from [58, Theorem 5.7] or [24,25]. \ Still, this could be added to the main text to help understand these results. 3. *Regarding situations where the presented bounds improve situations where neither stability nor classical information-theoretic bounds can achieve the desired expected generalization error rate (also asked by the AC).* \ In Example 2, the bound is obtainable with previously known information-theoretic bounds, e.g. those from [21], as mentioned by the authors. I think that finding an example where neither stability nor classical information-theoretic bounds achieve a desired rate, but their combination does, would substantially improve the paper. 4. 
*Regarding examples where the bounds with the conditioning can be employed to gain an understanding of some problem beyond the fact that it is stable.* \ I agree with the authors that this has been studied for other notions of CMI. My question is, what do we gain from using your particular conditioning instead of those in [19,20,23,25,58]? 5. *Regarding the suggestions*. \ Thank you for revising the paper to include the suggestions. Could you specify which examples where these mutual information terms can be calculated you are including in the revised version of the paper? --- Reply to Comment 1.1.1: Comment: Thanks for your reply. Our response follows. >- could you give some examples of reasonable generality, where $\gamma_2\leq\gamma_4$ and where $\gamma_2$, $\gamma_3$ and $\gamma_4$ are smaller or equal to $\beta_1$? For $\gamma_2\leq\gamma_4$, first, we again highlight that $Z'$, which is used in $\gamma_2$, represents an independent testing instance for both $W$ and $W^i$, while $Z_i$, used in $\gamma_4$, is a training instance for $W$ and serves as a testing instance for $W^i$. To compare them, we begin by applying Jensen's inequality in $\gamma_2$; the remaining intuition is motivated by |test_loss - test_loss| $\leq$ |train_loss - test_loss|. As a concrete example, let $\ell$ be the zero-one loss and assume $\mathcal{A}$ is an interpolating algorithm that randomly makes predictions for unseen data. By Jensen's inequality, $\gamma\_2^2\leq\mathbb{E}\_{W,W^i,Z'}{\left[\ell(W,Z')-{\ell(W^i,Z')}\right]^2}=\mathbb{E}\_{W,Z'}\left[\ell(W,Z')\right]-2\mathbb{E}\_{W,W^i,Z'}\left[(\ell(W,Z')\ell(W^i,Z'))\right]+\mathbb{E}\_{W^i,Z'}\left[\ell(W^i,Z')\right]^2$, where we use $\ell^2=\ell$ for the zero-one loss. Since $Z'$ is unseen data for both $W$ and $W^i$, we have $\gamma_2^2\leq\mathbb{E}\_{W^i,Z'}\left[\ell(W^i,Z')\right]^2+\frac{1}{2}-\frac{1}{2}=\mathbb{E}\_{W^i,Z'}\left[\ell(W^i,Z')\right]^2$. 
While in this case $\gamma_4^2=\mathbb{E}\_{W^i,Z_i}{\left[{\ell(W^i,Z_i)}\right]^2}$, so $\gamma_2\leq\gamma_4$. For $\gamma_2,\gamma_3,\gamma_4$ vs. $\beta_1$, please notice that $\gamma\_2\leq\beta\_1$ and $\gamma\_4\leq\beta\_1$ can be rigorously proved by $\mathbb{E}\leq\sup$. Consider a Gaussian location estimation problem: let $\ell(w,z)=||w-z||\_2$ and let $Z\sim\mathcal{N}(0,\sigma^2)$. Note that the ERM solution is the sample mean, $W=\frac{1}{n}\sum_{i=1}^n Z_i$, and the Euclidean distance is 1-Lipschitz. Thus, for $\gamma_2$, we have $\gamma^2\_2\leq \mathbb{E}\_{S,Z'\_i}||W-W^i||^2=\mathbb{E}\_{Z\_i,Z'\_i}\frac{1}{n^2}||Z\_i-Z'\_i||^2=\frac{2\sigma^2}{n^2}$. Notice that $\gamma_3^2$ and $\gamma_4^2$ share the same upper bound in this case. For $\beta_1$, $\sup\_{s\simeq s^i,z}| \ell(w,z)-\ell(w^i,z)|\leq \sup\_{z\_i,z'\_i}\frac{1}{n}||z'\_i-z\_i||\to\infty$, which implies that $\beta\_1\to\infty$. Thus, we conclude that $\gamma\_2,\gamma\_3,\gamma\_4\leq\beta\_1$. We have a sense that the reviewer may still have some confusion regarding our Definition 2.1; please do let us know if any specific definitions remain unclear. >- However, when would that happen? Do you have some examples where you can state that this is the case? The previous Gaussian location estimation problem is such a case. First, $\gamma_2^2/\gamma_1\leq\gamma_2\leq\mathcal{O}{(\frac{1}{n})}$ ($\gamma_2\leq\gamma_1$ was explained in the previous response), namely the second term in Theorem 3.3 decays with $\mathcal{O}{(\frac{1}{n})}$. Additionally, $I(W;Z_i)\leq\mathcal{O}{(\frac{1}{n})}$ as proved in [1, Example 1], so $\frac{1}{n}\sum_{i=1}^n\sqrt{I(W;Z_i)}\leq\mathcal{O}{(\frac{1}{\sqrt{n}})}$. Hence, $\gamma_2$ decays faster than $\frac{1}{n}\sum_{i=1}^n\sqrt{I(W;Z_i)}$. >- Regarding the second comparison. We acknowledge that a similar conclusion is already established, but this specific example validates that Theorem 3.3 can result in a tighter bound compared to Theorem 3.1. 
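As an illustrative sanity check (our own sketch, with arbitrary $n$, $\sigma$, and trial count), the key step in the Gaussian location example, $\mathbb{E}\_{S,Z'\_i}||W-W^i||^2=\frac{2\sigma^2}{n^2}$ for the sample-mean estimator, can be verified by Monte Carlo:

```python
import random

def estimate_neighbor_gap(n=20, sigma=1.0, trials=20000, seed=0):
    """Monte Carlo estimate of E||W - W^i||^2 for the sample-mean estimator,
    where W^i is retrained after swapping Z_1 for an independent copy Z_1'."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        z = [rng.gauss(0.0, sigma) for _ in range(n)]
        z_prime = rng.gauss(0.0, sigma)         # replacement for Z_1
        w = sum(z) / n                          # W  = sample mean of S
        w_i = (sum(z) - z[0] + z_prime) / n     # W^1 = sample mean of S^1
        acc += (w - w_i) ** 2
    return acc / trials

n, sigma = 20, 1.0
est = estimate_neighbor_gap(n, sigma)
theory = 2 * sigma**2 / n**2   # = E[(Z_1 - Z_1')^2] / n^2
print(est, theory)
```

The empirical average lands close to $2\sigma^2/n^2$, consistent with the $\mathcal{O}(1/n)$ decay of $\gamma_2$ claimed above.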
>- Regarding situations where the presented bounds improve situations where neither stability nor classical information-theoretic bounds can achieve the desired expected generalization error rate. We provide such an example; please refer to the response to the AC. >- Regarding examples where the bounds with the conditioning can be employed to gain an understanding of some problem beyond the fact that it is stable. Regarding comparing our novel CMI notions with the classical CMI notions, we do provide a brief discussion in Section G and acknowledge that this might not be a straightforward task. In our attempt, we treat the classical CMI notions as the forward channel ($P_{W|\widetilde{Z}}$) while addressing our CMI notions in Section 4 as the backward channel ($P_{\hat{Z}|\widetilde{W}}$). However, we currently cannot offer any new insights in this regard. At present, as done in this paper, we focus on demonstrating that our new CMI notions preserve the favorable properties of the original CMI notions, including boundedness, being upper-bounded by the unconditional individual bound (Theorem 4.2), establishing a connection with VC theory (Theorem F.1), and exhibiting $f$-CMI, e-CMI, and ld-CMI counterparts (Theorem 6.1). >- Could you specify which examples where these mutual information terms can be calculated you are including in the revised version of the paper? We believe that Example 3, as presented in our response to the AC, demonstrates that our bounds are not merely a reduction of the known bound derived from uniform stability. [1] Rodríguez Gálvez, et al. Tighter expected generalization error bounds via Wasserstein distance. NeurIPS 2021.
Summary: This work improves information-theoretic generalization gap bounds by doing more careful derivations. As a result, the derived bounds get multiplicative factors that capture some notions of hypothesis stability, while the existing bounds usually have multiplicative constant factors that depend on the loss function and the data distribution. For example, the bound of Bu et al. [9], which states that $\text{expected gen. gap} \le \frac{1}{n}\sum_i \sqrt{2 I(W;Z_i)}$, with $W$ being the output of the learning algorithm on the dataset $(Z_1,\ldots,Z_n)$, gets improved by a multiplicative stability term $\gamma_1 \le 1$ that captures how much (in the worst case) the loss on an example $z'$ can change when one replaces a single training example. Similar improvements are derived for conditional mutual information (CMI) bounds, both with standard CMI terms and a novel CMI term. In the random-subsample setting of Steinke and Zakynthinou [58], this novel CMI term measures the conditional mutual information between the $i$-th example that contributed to the training and the selection variable of the $i$-th pair, given two hypotheses: one corresponding to training with the first example of the $i$-th pair, the other corresponding to training with the second example, keeping everything else the same. The authors show that in some known cases when existing information-theoretic bounds fail to vanish with $n$, adding the stability factor results in a vanishing bound. They also show that there are cases when the stability term alone does not vanish, but when the mutual information part is also considered, the resulting bound vanishes. Strengths: **Strength #1: Significance & Originality.** Information-theoretic generalization bounds have gained a lot of attention recently, partly because they are algorithm- and distribution-dependent and some variants of them are nonvacuous in practical deep learning settings. 
Recently, Haghifam et al [21] uncovered some limitations of information-theoretic bounds, constructing cases when the existing bounds do not even vanish with $n$. This submission addresses these limitations, and is therefore a significant contribution. While the derivations mostly follow common techniques, the resulting unification of stability-based and information-theoretic bounds is a novel (to my best knowledge) and significant contribution. **Strength #2: Quality.** The submission is technically sound. I have checked the proofs. The related work is cited adequately. **Strength #3: Clarity & Presentation.** This work is well-written and presented. Weaknesses: **Weakness #1: Limitations [minor].** As the authors acknowledge, the proposed stability terms are hard to evaluate or estimate in practical deep learning settings. Therefore, the extent of improvements outside of the considered synthetic settings is unclear. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Line 153: "are identically but not independent distributed" --> "are identically distributed but not independent". - Line 169: It should be $\gamma_2 \le \gamma_1$ and $\gamma_4 \le \gamma_3$. - Line 219: there might be no uniform convergence if the conditional mutual information terms vanish with different rates for different $\tilde{w}_i$ such that their supremum does not vanish. - Equation (14): $W_i$ and $\bar{W}_i$ need to be defined in the Lemma statement. - Theorem 4.4: Is the assumption of $\mathcal{A}$ being symmetric with respect to $S$ necessary? - Line 280: Are $\hat{\mu}$ and $\hat{\mu}^i$ treated like constants here? - Lines 713-720: If $E_0$ denotes the event of two elements of $\tilde{Z}$ sharing the same non-zero coordinate, then we are interested in upper bounding $p(E_0)$ (in contrast to what is written in line 717). 
On line 720, "the first inequality holds because when event $E_0$ occurs, one can determine the value of $U_i$ completely", it should be when $E_0$ *does not* occur. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your positive comments. Our responses follow. >- Line 219: there might be no uniform convergence if the conditional mutual information terms vanish with different rates for different $\tilde{w}_i$ such that their supremum does not vanish. **Response.** We agree that if the CMI with the worst $\tilde{w}_i$ does not vanish, the current bound cannot be considered a uniform convergence bound. We will provide clarification in the revised version. >- Theorem 4.4: Is the assumption of $\mathcal{A}$ being symmetric with respect to $S$ necessary? **Response.** It is necessary in our proof of Theorem 4.4, otherwise we may not be able to obtain the inequalities in Line 678. >- Line 280: Are $\hat{\mu}$ and $\hat{\mu}^i$ treated like constants here? **Response.** In our main text, yes, we let $\hat{\mu}$ and $\hat{\mu}^i$ be fixed. However, it is worth noting that all these developments still apply when $\hat{\mu}$ and $\hat{\mu}^i$ are treated as random variables, namely $||\eta t\hat{\mu}-\eta t\hat{\mu}^i||=\eta t||\frac{1}{n}\sum_{j=1}^n Z_j-\frac{1}{n}(\sum_{j\neq i} Z_j+Z'_i)||=\frac{\eta t}{n}||Z_i-Z'_i||\leq\mathcal{O}({\eta t}/{n})$. >- Lines 713-720: If $E_0$ denotes the event of two elements of $\widetilde{Z}$ sharing the same non-zero coordinate, then we are interested in upper bounding $p(E_0)$ (in contrast to what is written in line 717). On line 720, "the first inequality holds because when event $E_0$ occurs, one can determine the value of $U_i$ completely", it should be when $E_0$ does not occur. **Response.** Thank you very much for your careful reading and pointing out these. There indeed exists a typo, $E_0$ should be the event that **no** pair of instances in $\widetilde{Z}$ share the same non-zero coordinate. We apologize for this oversight, and we have corrected this in the revision. >- Line 153:...; Line 169:...;Equation (14):.... 
**Response.** Thanks for the suggestions, we have revised our paper according to your comments. --- Rebuttal Comment 1.1: Title: Reviewer response Comment: Thank you for the clarifications.
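The replacement identity used in the response to the Line 280 question, $||\eta t\hat{\mu}-\eta t\hat{\mu}^i||=\frac{\eta t}{n}||Z_i-Z'_i||$ with $\hat{\mu}$ the empirical mean, is exact and easy to check numerically. The following is an illustrative sketch with arbitrary dimensions and hyperparameters (not the paper's setup):

```python
import math
import random

def l2_dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

rng = random.Random(1)
n, d, eta, t = 16, 3, 0.1, 50          # arbitrary sample size, dim, step size, steps
Z = [[rng.gauss(0, 1) for _ in range(d)] for _ in range(n)]
Z1_prime = [rng.gauss(0, 1) for _ in range(d)]   # replacement for Z_1

mu_hat = [sum(col) / n for col in zip(*Z)]        # empirical mean of S
Z_swapped = [Z1_prime] + Z[1:]
mu_hat_i = [sum(col) / n for col in zip(*Z_swapped)]  # empirical mean of S^1

lhs = l2_dist([eta * t * x for x in mu_hat], [eta * t * x for x in mu_hat_i])
rhs = (eta * t / n) * l2_dist(Z[0], Z1_prime)
print(lhs, rhs)
```

Because $\hat{\mu}$ and $\hat{\mu}^i$ differ only in the swapped coordinate contribution $(Z_1 - Z'_1)/n$, the two sides agree up to floating-point error, giving the $\mathcal{O}(\eta t/n)$ stability rate stated in the response.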
Rebuttal 1: Rebuttal: # To all reviewers, particularly to Reviewer s7qd: >- Reviewer s7qd has pointed out that our Definition 2.1 lacks clarity. We would like to address this concern by offering the following commentary: We first note that the reason we introduce SCH stabilities in Definition 2.1 is that solely using $\beta_2$ (as mentioned in Line 132) in our bounds might be too loose for the randomized setting (though it is still weaker than the bounded loss assumption), as it considers the supremum over all sources of randomness. By incorporating SCH stabilities, we aim to demonstrate that theoretically, we can achieve significantly tighter stability parameters. The basic setup is as follows. Assume a random sample $S$ gives rise to $W$. For each $Z_i\in S$, we construct $S^i$ by replacing $Z_i$ with another independently drawn instance; call the training result $W^i$ the neighbor of $W$. In (a), $\gamma_1$-SCH-A stability measures the difference between the loss of $w$ and the expected loss of its neighbor $W^i$ at the worst $z$ and the worst possible $w$. In (b), $\gamma_2$-SCH-B stability measures the square of this difference, not in the worst case but in an average case, where the average is over an independently drawn $Z'$ for the loss evaluation, the training sample, and the algorithm randomness. Since "average is smaller than worst", $\gamma_2\leq\gamma_1$ can be rigorously proved. In (c), we consider the difference between the loss of $W$ and the loss of its neighbor when evaluated at the worst possible $z_i$ that, when included in $S$, gives rise to $W$. The expected value of this difference is $\gamma_3$-SCH-C stability. In (d), $\gamma_4$-SCH-D stability measures the expected squared difference between the loss of $W$ and the loss of its neighbor when evaluated at $Z_i$ (a member of $S$). For a similar "average is smaller than worst" reason, one expects that $\gamma_4\leq\gamma_3$. However, this result cannot be rigorously proved. 
We will revise this in Remark 2.2. Note that this relationship has never been used in this work. We expect that $\gamma_2$, $\gamma_3$, and $\gamma_4$ are all smaller than $\beta_1$. This is because in $\beta_1$, we consider the worst evaluated instance, whereas in the other cases, we take the expectation over all instances. In Lines 217-218, we also expect that ${\mathbb{E}\_{\widetilde{W}_i}{\Delta_1(\widetilde{W}_i)^2}}\leq\beta^2_1$; similarly, this is because $\beta_1$-stability holds for all possible $s$ and $s^i$, namely it holds for all the $(w,w^i)$ pairs (that share the same randomness), while in ${\mathbb{E}\_{\widetilde{W}_i}{\Delta_1(\widetilde{W}_i)^2}}$, we take the expectation over these pairs. We expect $\gamma_2\leq\gamma_4$ for the following reason: first, by Jensen's inequality, we have $\mathbb{E}\_{S,R,Z'}{\left[\ell(W,Z')-\mathbb{E}\_{W^i|W}{\ell(W^i,Z')}\right]^2}\leq\mathbb{E}\_{W,W^i,Z'}{\left[\ell(W,Z')-{\ell(W^i,Z')}\right]^2}$; then, since $Z'$ is independent of both $W$ and $W^i$, $Z'$ can be regarded as a testing data point for both $W$ and $W^i$, so we could expect that the expectation of $\ell(W,Z')-{\ell(W^i,Z')}$ is small. In contrast, in $\mathbb{E}\_{S,Z'_i,R}{\left[\ell(W,Z_i)-\ell(W^i,Z_i)\right]^2}$, $Z_i$ is a training data point for obtaining $W$, so $\ell(W,Z_i)$ could be small in general, while $Z_i$ is a testing point for $W^i$. Therefore, it is reasonable to expect that $\mathbb{E}\_{W,W^i,Z'}{\left[\ell(W,Z')-{\ell(W^i,Z')}\right]^2}\leq\mathbb{E}\_{S,Z'_i,R}{\left[\ell(W,Z_i)-\ell(W^i,Z_i)\right]^2}$, namely $\gamma_2\leq\gamma_4$. We will include these explanations in the revision.
NeurIPS_2023_submissions_huggingface
2023
Explore In-Context Learning for 3D Point Cloud Understanding
Accept (spotlight)
Summary: This paper proposes an in-context learning method for multi-task 3D shape analysis. It handles several tasks such as denoising, part segmentation, reconstruction and registration with a single pretrained masked point model (MPM). The authors also claim previous MPM methods introduce information leakage during pretraining. To solve this, this paper proposes a JS module, which helps the model learn the inherent association between input and target and streamlines the learning process. Strengths: 1. In-context learning for point clouds is a new and interesting topic. 2. The unified modeling of different 3D shape analysis tasks makes sense. 3. The performance of the proposed method is satisfactory. Weaknesses: 1. The writing of this paper is not clear. (3.1) The authors should add more description of the task definition, such as which kind of model should be pretrained, whether it is fixed during inference, etc. (3.3) It is claimed that previous methods will bring information leakage. However, the explanation is confusing. The authors should clearly show this in Figure 2. Moreover, as a core technical contribution of this paper, the JS module should be clearly demonstrated. It is hard for me to understand why the same FPS index is used. 2. The JS Module seems to be straightforward (and confusing). As a simple technique, more insight should be explained. I will consider improving my rating if my concerns are properly addressed. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: see above Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: see weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1**: > >(1) **Task definition** > >(2) **Information leakage** > >(3) **JS module** **A1**: &emsp; We are sorry that our paper confused you, and thank you very much for your suggestions. We provide more detailed explanations: &emsp; **1. More details of the task definition in Section 3.1.** &emsp; **During training**, motivated by 2D in-context learning [Bar et al., 2022; Wang et al., 2022], we provide two pairs of point clouds as input to the model and train the model with an MPM-style framework, which is to reconstruct the masked parts of the target point clouds belonging to the two pairs. **During testing**, we provide the prompt pair and the query pair, which is comprised of a query point cloud and a mask, as input to the model. Then the model reconstructs the mask just as in training, producing the output corresponding to the query point cloud. &emsp; Besides, our Point-In-Context is trained from scratch, with **no pre-training or additional dataset**. As for the compared methods, the multi-task models are trained on all tasks jointly, too, but they are composed of a pre-trained encoder and multi-task heads. The task-specific models are trained on each task individually. &emsp; Finally, **the model is fixed during inference**, with no further learning. &emsp; **2. Elaboration on the information leakage problem mentioned in Section 3.3.** &emsp; Previous methods based on the MPM pre-training framework are first pre-trained on the ShapeNet dataset with Masked Point Modeling. The pre-trained encoder is then combined with task-specific heads to perform fine-tuning on downstream tasks, such as classification, few-shot learning, or segmentation. &emsp; During pre-training, the model will embed the positions of **all sampled local center points** into the extracted point cloud features, even if **they belong to the center points of the patches to be reconstructed (which have been masked out and are invisible)**. 
This is possible under their pre-training scheme, but it causes information leakage under our setting, which **is not allowed**. In other words, our model shouldn't see the masked center points, because we train only once for multiple tasks with no further fine-tuning. &emsp; To address your confusion, we have **improved Figure 2(a)** of the main text and put it in **the PDF of the global rebuttal**. Please see the improved figure demonstrating information leakage for more details. &emsp; **3. More explanations and insights for the JS module.** &emsp; In order to solve the above problems, we replace the position embedding that leads to information leakage with a fixed sin-cos encoding, that is, we do not use the local center points for position embedding. However, we found that the performance of the model dropped drastically, and it even failed to converge, as shown in the following table.

Caption: The performance of PIC-Sep and PIC-Cat on denoising (CD) and part segmentation (mIOU).

| Model | Denoising CD$\downarrow$ | Part Seg. mIOU$\uparrow$ |
|--------------|:---------:|:---------:|
| PIC-Cat w/o JS | 29.3 | 17.03 |
| PIC-Cat w/ JS | **5.3** | **78.95** |
| PIC-Sep w/o JS | 36.3 | 23.72 |
| PIC-Sep w/ JS | **7.6** | **74.95** |

&emsp; Unlike images, the points of a point cloud have no clear order structure (the **unordered property**), so position information is severely missing between the input point cloud and the target point cloud, leaving the model unable to learn the mapping between them. This unordered property is **an unexplored challenge** for introducing in-context learning to 3D point clouds. &emsp; Therefore, in order to make up for the lack of position information, we **align the indices of input sample pairs for each task during training**. 
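To make the index-alignment idea concrete, here is a minimal sketch (our own toy implementation with hypothetical shapes, not the paper's actual code): a single FPS pass is run, and the **same** index array is reused for both the input and the target point clouds, so the patch/token sequences stay aligned:

```python
import numpy as np

def farthest_point_sample(points, k, start=0):
    """Greedy FPS: returns indices of k centers. Deterministic given `start`."""
    n = points.shape[0]
    idx = [start]
    dist = np.full(n, np.inf)
    for _ in range(k - 1):
        # Update each point's distance to the nearest already-chosen center.
        dist = np.minimum(dist, np.linalg.norm(points - points[idx[-1]], axis=1))
        idx.append(int(np.argmax(dist)))
    return np.array(idx)

# Hypothetical Joint Sampling: FPS is run once and the SAME index is applied
# to both clouds, keeping patch/token sequences aligned point-for-point.
rng = np.random.default_rng(0)
target = rng.standard_normal((128, 3))
noise_input = target + 0.02 * rng.standard_normal((128, 3))  # e.g. a denoising pair

centers = farthest_point_sample(noise_input, k=16)
input_patches = noise_input[centers]
target_patches = target[centers]   # aligned with input_patches by construction
print(input_patches.shape, target_patches.shape)
```

In this sketch the alignment works because the input cloud is generated point-for-point from the target cloud, which matches the dataset-generation process described in this rebuttal.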
During the **generation of our dataset**, the input point cloud of all tasks is generated from the target point cloud, and different operations are performed for different tasks. Taking the reconstruction task as an example, a sparse input point cloud is obtained by discarding a certain percentage of points in the target point cloud. We zero the XYZ coordinates of the discarded points, so the shape of the input point cloud remains consistent with the target point cloud. **This is why we can make the patch/token sequences of both the input point cloud and the target point cloud well-aligned by using the same FPS index**. &emsp; For the denoising task, we add Gaussian noise to the points at different levels, so the shape of the input point cloud and the target point cloud are the same and aligned. For the registration task, we multiply the target point cloud with a rotation matrix to get the input point cloud. For the part segmentation task, we convert each point's part label to a point in XYZ space, and points representing the same part are clustered together. All in all, the input and target point clouds are consistent in shape as well as indices of each point for all tasks. &emsp; Then, we propose a **simple but effective** method, the **Joint Sampling module**, so that each patch/token of the input point cloud and the target point cloud is well-aligned. &emsp; **During inference**, for the query point cloud, its target is masked out, so **no alignment is required**, we only need to provide the aligned task prompt pair which is randomly sampled in the training set. During our exploration, we tested various positional embeddings, learnable matrices, and sampling methods, but none met our expectations. [Bar et al., 2022] Visual prompting via image inpainting. In NeurIPS. [Wang et al., 2022] Images speak in images: A generalist painter for in-context visual learning. In CVPR. >**Q2: The JS Module seems to be straight-forward (and confusing). 
As a simple technique, more insight should be explained.** **A2**: &emsp; Please see the above explanations. --- Rebuttal Comment 1.1: Title: New Comment to Reviewer uxmh Comment: Dear reviewer uxmh, &emsp; Our work is **the first** to explore in-context learning for 3D point cloud understanding. We **unify the output space of the four tasks** into the 3D coordinate space, and apply the MPM framework for model training, realizing **one training, one model, and processing multiple tasks**. Extensive experiment results demonstrate the methodology design's **superiority**, which indicates that in-context learning in 3D is **non-trivial**. Besides, our Point-In-Context shows excellent generalization capability. As the reviewer Uvps says, our work may **bridge the connection between 3D point clouds and 2D images in-context learning**, as we hope it will. &emsp; We would like to have an in-depth discussion with you. Looking forward to your reply. &emsp; Yours sincerely, &emsp; Submission4623 Authors
Summary: This paper introduces a novel framework, named Point-In-Context, designed explicitly for in-context learning on 3D point clouds. The authors conduct extensive experiments to validate the versatility and adaptability of the proposed methods in handling a wide range of tasks. Strengths: This paper explores an interesting topic -- how to do in-context learning for 3D understanding; it demonstrates a reasonable way to achieve it, addresses some issues like the "position information leakage", and shows some positive results. Weaknesses: The reviewer is concerned about how well this framework can generalize, its general usefulness, and its scope, since to some extent the performance may depend on the "prompt". In practice, it might be hard to choose a proper prompt, the performance may not be stable, and it involves some extra tuning effort compared to using a model directly trained for that task. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Is the Transformer backbone trained from scratch or initialized from something? Has this backbone been pre-trained on some 3D data before, e.g., Point-BERT or Point-MAE? Is there any leakage between the train dataset and the test dataset? 2. Have you tried training the model on only a subset of the tasks to see if there is any emergent ability? For example, if you only train it for reconstruction and registration, does it have some zero-shot emergent ability for denoising and part segmentation? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: refer to the weakness part Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1: In reality, it might be hard to choose a proper prompt for it and the performance may not be stable, and it kind of involves some extra tuning effort compared to using the model which is directly trained for that task.** **A1**: &emsp; The prompt can indeed affect the performance of our PIC, but we regard better prompts as **a bonus rather than a requirement**. Moreover, our PIC shows **great generalization ability**, which is needed in actual applications. &emsp; Usually, the prompt plays a very important role in in-context learning, and high-quality prompts can improve the performance of the model. In Table 1 of the main text, the main results of this paper, the prompts we use are randomly selected from the same task in the training set. It can be seen that **the performance with these random prompts is already remarkable**, surpassing all multi-task models on the four tasks. However, we have a higher ceiling: when possible, **a good prompt can greatly improve the performance** (see Table 3(a) of the main text and Table 5(a) of the appendix). &emsp; Besides, we demonstrate that **our PIC has great generalization ability**. In practical scenarios, the 3D domain suffers from a serious lack of data (as reviewer GuqQ noted), and the data encountered in actual applications do not necessarily have corresponding training data, so model generalization is all the more necessary. As shown in Figure 5(a) of the main text, when our model generalizes to another dataset (ModelNet40), it can outperform the single-task model trained under the same environment, which shows that **our model can perform well under limited training data in practical applications**. We also demonstrate that our PIC can generalize to other related tasks (see A3 below). &emsp; In conclusion, we regard **the possibility of choosing a better prompt as an advantage of the proposed method, providing the chance of achieving more reliable performance**.
We do not need a “proper” prompt to maintain our performance; a random prompt from the same task is sufficient. Moreover, **our generalization ability makes PIC more practical in reality compared to task-specific models**. >**Q2: Is the Transformer backbone trained from scratch or is it initialized from something? Has this backbone been pre-trained on some 3D data before? Is there any leakage between the training dataset and the testing dataset?** **A2**: &emsp; Our model is trained from scratch: **no additional data is used for pre-training**, **no pre-trained model parameters are used**, and all parameters are updated only on our training set. In addition, our training set and test set are split according to the ShapeNet [Chang et al., 2015] and ShapeNetPart [Yi et al., 2016] datasets, **without any leakage**. Our code is reproducible and provided in the supplementary material, and the dataset will be made public in the future. [Chang et al., 2015] Shapenet: An information-rich 3d model repository. In arXiv. [Yi et al., 2016] A scalable active framework for region annotation in 3d shape collections. In TOG. >**Q3: Have you tried only training the model for part of the tasks, not all of them, to see if there is any emergent ability? For example, if you only train it for reconstruction and registration, does it have some zero-shot emergent ability for denoising and part segmentation?** **A3**: Caption: The performance of PIC-Sep and PIC-Cat on unseen tasks. **Bold represents tasks included in the training set**. For instance, the first row of PIC-Cat denotes that it is trained on reconstruction and registration, and is tested on denoising and part segmentation. | Models | Reconstruction CD$\downarrow$ | Denoising CD$\downarrow$ | Registration CD$\downarrow$ | Part Seg.
mIOU$\uparrow$ | |-------------- |:--------------:|:---------:|:------------:|:---------:| | PIC-Cat (Original) | **4.3** | **5.3** | **14.1** | **78.95** | | PIC-Cat (new) | **3.9** | 19.5 | **3.9** | 2.98 | | PIC-Cat (new) | **4.1** | 21.0 | **18.8** | **79.51** | | - | - | - | - | - | | PIC-Sep (Original) | **4.7** | **7.6** | **10.3** | **74.95** | | PIC-Sep (new) | **2.4** | 25.3 | **3.9** | 3.41 | | PIC-Sep (new) | **6.7** | 22.9 | **11.0** | **80.46** | &emsp; Thanks for your advice. We conduct experiments that train our PIC on a subset of the tasks and test its emergent ability on the remaining tasks. As the above table shows, we train our PIC on **reconstruction & registration & part segmentation**, and evaluate it on the **denoising** task, which is unseen during training. Despite the formidable challenge, PIC demonstrates **great emergent capability on the denoising task**. &emsp; As for training on reconstruction & registration only and testing on denoising & part segmentation, we do not think this setting is meaningful, because **the output space of part segmentation is quite different from the other three tasks, making it hard for PIC to adapt to such a disparate task**. Nevertheless, we conducted the experiment. As expected, PIC shows excellent generalization capability on the denoising task but performs poorly on the part segmentation task due to the extremely different form of the output space. &emsp; It is worth noting that our PIC trained on our dataset **can also generalize to another dataset (ModelNet40)**, and its performance surpasses the task-specific model trained on our dataset, as shown in **Figure 5(a) of the main text**. --- Rebuttal Comment 1.1: Comment: Thanks for the response; to some extent it addresses my concern, and I will keep my positive score.
Summary: This work targets in-context learning for 3D point cloud data. Similar to 2D in-context learning, the authors first define and construct an in-context learning 3D dataset covering reconstruction, denoising, registration, and part segmentation tasks. To avoid information leakage during masked point modeling, a joint sampling module that samples correspondingly from inputs and outputs is proposed. Extensive experiments have been conducted, and interesting results have been obtained. Strengths: - This work targets in-context learning for 3D point clouds, which is a very relevant but underexplored problem. - The setup of in-context learning in 3D and the curation of the in-context learning dataset are helpful to the community. - Extensive experiments have been conducted, where results demonstrate the superiority of the methodology design, which also indicates that in-context learning in 3D is non-trivial. - Code and dataset are promised to be released. Good. Weaknesses: - For 2D or NLP, in-context learning is generally free-form, targeting various tasks. However, for 3D, due to a serious lack of data, it only shows very limited applications. It seems not practical in real-world deployments. - The novelty and technical contribution of the method part are somewhat limited. The joint sampling module is relatively straightforward. The model and loss function are the same as in previous works. However, the contribution of setting up the 3D in-context learning baseline is good. - Missing citations or comparisons: For 3D MIM methods, important recent cross-modal representation/prompt learning methods should be discussed or compared [Dong et al., 2023; Qi et al., 2023]; for the loss function, Chamfer Distance should be cited [Fan et al., 2017]; for in-context learning, some works are missing [Sun et al., 2023; Balažević et al., 2023]. - The compared methods Point-BERT and Point-MAE are a bit old and are not SOTA.
I wonder how they compare to other cross-modal 3D MIM methods like ACT [Dong et al., 2023], I2P-MAE [Zhang et al., 2023], and ReCon [Qi et al., 2023]. I think it is important to conduct solid comparisons against more advanced methods. - Minor suggestion: There are not many formulations, but the current formulation is not neat. Please improve the presentation quality, including the formulations. For example, plain text should not be italic in equations. Besides, the writing is overall okay but could be improved. [Dong et al., 2023] Autoencoders as Cross-Modal Teachers: Can Pretrained 2D Image Transformers Help 3D Representation Learning? In ICLR. [Qi et al., 2023] Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining. In ICML. [Fan et al., 2017] A point set generation network for 3d object reconstruction from a single image. In CVPR. [Sun et al., 2023] Exploring Effective Factors for Improving Visual In-Context Learning. arXiv preprint. [Balažević et al., 2023] Towards In-context Scene Understanding. arXiv preprint. [Zhang et al., 2023] Learning 3d representations from 2d pre-trained models via image-to-point masked autoencoders. In CVPR. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Besides the questions and concerns listed before, I have the following questions: - Are all tasks jointly trained, or is there one model per task? - What if 3D in-context learning involves other modalities, for example, images or languages? - In ACT [Dong et al., 2023], I notice that the authors report the reconstruction CD-$\ell_2$ on ShapeNet as 2.110, which is significantly lower (better) than the best reconstruction result in this paper (4.3). Can the authors explain this? I am looking forward to the authors' response, and I am happy to raise my score if my concerns are solved. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed the limitations, and I think it is somewhat okay for this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Due to the word limit, we omit the questions; the answer order is consistent with the questions above. **A1**: &emsp; A serious lack of data is a **common problem for 3D tasks**. With the advent of 3D sensors such as LiDAR and Kinect, 3D point clouds have gained increasing popularity and are widely used in robotics, autonomous driving, object detection, etc. &emsp; First, our work is **the first to explore the application of in-context learning in 3D point clouds and goes one step further than previous methods**. &emsp; Secondly, our PIC can **easily be generalized to related datasets (Figure 5(a) of the main text) and tasks (A3 to reviewer kyHc)**. We strive to tap the multi-task potential of the model as much as possible under the condition of limited data (see **A1 to reviewer kyHc**) and provide a new idea and direction for future work on 3D point clouds. Such capability is urgently needed for a deep learning model given the serious scarcity of 3D point cloud data. &emsp; In conclusion, the proposed method **helps address the data-scarcity problem**. **A2**: &emsp; Our work is **the first to explore in-context learning for 3D point cloud understanding**. We unify the output space of the four tasks into the 3D coordinate space and apply the MPM framework for model training, realizing one training, one model, and processing multiple tasks. &emsp; It is **a challenge to train multiple tasks jointly**. Previous methods cannot achieve an ideal balance among multiple tasks, resulting in poor multi-task performance, as shown in Table 1 of the main text. Our PIC shows a great ability to deal with multiple tasks jointly under an MPM-style training scheme. &emsp; However, it is infeasible to simply adopt the MPM framework in our work due to **information leakage**: the model would know the coordinates of the center points of the patches to be reconstructed in advance, which **is not allowed**.
It is an open problem that has never been explored. &emsp; Therefore, in order to make up for the lack of position information, we propose a simple but effective method, the **Joint Sampling module**, so that each patch/token of the input point cloud and the target point cloud is well-aligned. **During our exploration**, we tested various positional embeddings, learnable matrices, and sampling methods, but none met our expectations. &emsp; As for the model structure, we refine some architectural details for our different input forms. But what we want to express most is that **we are the first work to introduce in-context learning into 3D point clouds, laying the foundation for subsequent research**. **A3**: &emsp; Thank you for your valuable suggestions. We will cite these articles in the final version and add comparisons with ACT, I2P-MAE, and ReCon to the main experimental results. **A4**: &emsp; Thank you for your constructive suggestions, which make our work more complete and solid. We conduct experiments on other cross-modal methods like ACT, I2P-MAE, and ReCon. In the implementation, we use a pre-trained encoder and combine it with different task heads for simultaneous training on the four tasks. As shown in the table of **the PDF of the global rebuttal**, our PICs outperform them on all four tasks. Such results demonstrate that our PICs show an excellent ability to handle multiple tasks. **We will include these results in the final version**. **A5**: &emsp; Thanks for your valuable advice. Beyond the typo you mentioned, we will carefully check all formulas and polish the writing in the final version to improve readability. **A6**: &emsp; Our Point-In-Context is trained on all tasks jointly. As for the compared methods, the multi-task models are trained on all tasks jointly, too. The task-specific models are trained on each task individually.
The joint training demonstrates that the proposed method is capable of learning various contexts, with the tasks bringing mutual benefits. **A7**: &emsp; It's an exciting idea, and we consider it feasible to involve the other two modalities. Our work is the first to explore in-context learning for the 3D point cloud modality. Similar to ICL in language and 2D images, our model is based on the transformer architecture, giving us a chance to involve the other two modalities. &emsp; The main challenge is that few datasets provide all three modalities with proper alignment, and how to **align the features** of the three modalities in the same feature space remains a largely open problem. There are already several works exploring the multimodal understanding of 3D scenes (PLA [1], OpenScene [2], etc.), and future work will focus on exploring in-context learning in 3D point cloud scene segmentation. Please see **A2 of reviewer Uvps** for more details. [1] PLA: Language-Driven Open-Vocabulary 3D Scene Understanding. CVPR2023. [2] OpenScene: 3D Scene Understanding with Open Vocabularies. CVPR2023. **A8**: &emsp; The reconstruction result of 2.110 is from an ablation experiment in the ACT paper, and we did not find more details about this experiment there. Therefore, we speculate that there are two reasons: &emsp; **1. The settings of the experiments are different.** We establish five levels for input point clouds, which contain 512, 256, 128, 64, and 32 points respectively for our reconstruction task. The difficulty of our reconstruction task cannot be measured against and compared with the reconstruction task in ACT directly. &emsp; **2. About the pre-processing of experimental data.** In order to unify the data across different tasks and different datasets, we normalize the data, resulting in the point clouds of the testing set in our reconstruction dataset being **larger** than the original data (see **Table R2 in the PDF of the global rebuttal**).
So our CD values may intuitively appear higher than those of ACT; however, our results cannot be compared with the results of the ACT paper directly. --- Rebuttal Comment 1.1: Title: Post Rebuttal Comment Comment: I thank the authors for the detailed response! My concerns are largely solved. Thus, I raise the score to Weak Accept. Minor question: I have checked the part segmentation results again, and I found that the results are lower than commonly reported numbers on ShapeNetPart. For example, PointNet has only 77.45 mIoU, which is lower than the common result of 80.39 mIoU. The same applies to other methods like DGCNN, ACT, etc. Is the result tested on the same ShapeNetPart? Or is it a newly constructed dataset split? --- Reply to Comment 1.1.1: Title: The response to the reviewer's question. Comment: &emsp; Thank you for your nice comment! &emsp; Our testing set of part segmentation is **different from** ShapeNetPart [Yi et al., 2016]. Ours is a newly constructed dataset, which is **larger** and **more difficult** than ShapeNetPart. &emsp; As described in Section 3.2 of the main text, to augment the sample size of the part segmentation task, we conduct several random operations on the point clouds of the ShapeNetPart dataset, including point cloud perturbation, rotation, and scaling. Therefore, the testing set of our part segmentation task is about 4 times larger and more diverse than that of ShapeNetPart, and it is more difficult than ShapeNetPart, which contains regular point clouds. This is why our results are lower than the commonly reported numbers on ShapeNetPart. >[Yi et al., 2016] A scalable active framework for region annotation in 3d shape collections. In TOG.
Summary: Inspired by in-context learning in NLP and 2D vision tasks, this paper aims to explore in-context learning in the 3D point cloud domain. The authors present Point-In-Context, a 3D masked point modeling framework. Meanwhile, to handle the data leakage issue, the authors also present a simple solution, named joint sampling. Then, extensive experiments are carried out on a modified ModelNet40 dataset. The performance of the two proposed baselines is good compared to other task-specific baselines. Strengths: 1. The proposed Point-In-Context is the first work that explores the in-context ability in point cloud understanding, which is novel and interesting to me. The authors show its effectiveness on four different tasks: reconstruction, registration, denoising, and part segmentation. 2. The overall writing and motivation are good, clear, and easy to follow. 3. The proposed joint sampling can effectively solve the data leakage problem. 4. The experimental results are good. The ablation studies are extensive. The authors re-benchmark several representative works, including SOTA single-task and multi-task models. The analysis of visual point cloud examples is good and convincing. 5. This work may bridge the connection between 3D point cloud and 2D image in-context learning. Weaknesses: 1. Are there any other solutions to replace joint sampling to handle the data leakage problem? 2. The proposed approach is verified effective on simple shapes (ModelNet40). Its ability to extend to large-scale scenes, including indoor point cloud segmentation, is unknown. 3. What are the results of two combined prompts: denoising and part segmentation jointly? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please refer to the *Weaknesses. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >**Q1: Are there any other solutions to replace joint sampling to handle the data leakage problem?** **A1**: &emsp; When we adopt the original MPM pre-training framework for our task, we find that the model utilizes, for position embedding, center point coordinates that should have been masked. This inadvertent inclusion results in unintended information leakage. &emsp; This problem is **not explored in previous work and is an open question**. To address this predicament, we introduce the **Joint Sampling module**, designed to align the positional attributes of patches/tokens between input and target point clouds. This alignment compensates for the loss of vital positional information, enabling the model to understand the mapping correlation inherent in the input-target pairs. &emsp; During our exploration, we tested various positional embeddings, learnable matrices, and sampling methods, but none met our expectations. In the end, our **simple yet highly effective** Joint Sampling module provided an effective solution to this challenge. In future work, we will consider **enhancing** the JS module's structure by incorporating a learnable matrix. This will strengthen the linkage between the input point clouds and the target point clouds, thereby advancing overall performance and efficacy. >**Q2: The ability to extend to large-scale scenes including indoor point segmentation is unknown.** **A2**: &emsp; Segmenting scene-level point clouds presents a formidable challenge: scene-level 3D point cloud datasets, such as ScanNet [Dai et al., 2017], Matterport3D [Chang et al., 2017], and S3DIS [Armeni et al., 2016], comprise an extensive number of points and contain intricate object compositions. &emsp; Solving these problems requires a more fine-grained design and more computing resources. However, we think that **this is feasible**.
Following PointNet and related works, we can divide the 3D scene into blocks using a non-overlapping sliding window of 1m x 1m on the xy plane. By processing these blocks individually and subsequently merging their results, we can derive the final segmentation result for a whole 3D scene. Naturally, if an ample supply of computational resources is available, a comprehensive analysis of the entire scene can yield a deeper understanding of both the 3D scenes and their constituent objects. &emsp; This is a direction worth exploring in the future: applying in-context learning to 3D scene-level point cloud segmentation to achieve the unification of semantic segmentation, instance segmentation, and panoptic segmentation. We would like to explore more possibilities of in-context learning in 3D point clouds, not only involving object-level point clouds. [Dai et al., 2017] Scannet: Richly-annotated 3d reconstructions of indoor scenes. In CVPR. [Chang et al., 2017] Matterport3d: Learning from rgb-d data in indoor environments. In 3DV. [Armeni et al., 2016] 3d semantic parsing of large-scale indoor spaces. In CVPR. >**Q3: What are the results of two combined prompts: denoising and part segmentation jointly?** **A3**: Caption: Performance (mIOU) of PIC-Sep and PIC-Cat on part segmentation. The given prompts contain various levels of noise ranging from 100 to 500 noisy points (1024 points per sample). | Models | Original | Level=1 | Level=2 | Level=3 | Level=4 | Level=5 | Average | |:-------:|:--------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:| | PIC-Sep | 74.95 | 75.13 | 75.02 | 74.94 | 74.84 | 74.68 | 74.92 | | PIC-Cat | 78.95 | 77.73 | 77.33 | 77.15 | 77.04 | 76.95 | 77.24 | &emsp; We use the pre-trained PIC-Sep and PIC-Cat from Table 1 of the main text to test the combined task, where different levels of noise are added to the prompts during testing.
As the above table shows, **our PIC-Sep and PIC-Cat can naturally generalize to noisy prompts without re-training**. For PIC-Sep, adding noise has little effect on its performance. For PIC-Cat, its mIOU drops by about 1.2-1.7 across the various noise levels. PIC shows great robustness. We **visualize** the part segmentation results of PIC-Sep and present the figure in **the PDF of the global rebuttal**. &emsp; Besides, we also notice that **the choice of prompt impacts the performance**. When we select the prompt for PIC according to the minimum Chamfer Distance between the query point cloud and the prompt (CD-aware), we find that the performance (mIOU) increases, as shown in the following table. Caption: A higher-quality prompt can improve the performance of PIC. | Models | Random | CD-aware | |:---------:|:------:|:--------:| | PIC-Cat | 78.95 | **80.49** $\uparrow$ | | PIC-Sep | 74.95 | **78.46**$\uparrow$ | --- Rebuttal Comment 1.1: Title: Response to Author Rebuttal Comment: Thanks for the response, which addressed my concerns; I will keep my positive score.
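For illustration, the CD-aware prompt selection described in A3 above can be sketched as follows (a minimal sketch with hypothetical names and a brute-force Chamfer Distance; not the authors' actual implementation): the candidate prompt whose input point cloud is closest to the query under Chamfer Distance is chosen.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between point sets a (N, 3) and b (M, 3):
    mean nearest-neighbour squared distance, averaged in both directions."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def select_prompt(query_pc, candidate_prompts):
    """CD-aware selection: return the (input, target) prompt pair whose
    input cloud has the smallest Chamfer Distance to the query cloud."""
    dists = [chamfer_distance(query_pc, inp) for inp, _ in candidate_prompts]
    return candidate_prompts[int(np.argmin(dists))]
```

In practice the O(NM) pairwise computation would typically be replaced by a KD-tree query or a batched GPU kernel.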
Rebuttal 1: Rebuttal: &emsp; We would like to thank the four reviewers for their suggestions, which make our paper more solid. &emsp; We are grateful that the reviewers acknowledge our work. Here are some excerpts: &emsp; **1.** As reviewers **Uvps** and **GuqQ** said, our work is the first to explore in-context learning for 3D point cloud understanding, a very relevant but underexplored problem. Besides, the setup of in-context learning in 3D and the curation of the in-context learning dataset are helpful to the community. &emsp; **2.** As reviewer **uxmh** said, we unify the output space of the four tasks into the 3D coordinate space and apply the MPM framework for model training, realizing one training, one model, and processing multiple tasks. &emsp; **3.** As reviewers **kyHc** and **GuqQ** said, we design our Point-In-Context for in-context learning in 3D point clouds and address the "information leakage" problem via the Joint Sampling module. This indicates that in-context learning in 3D is non-trivial. &emsp; As suggested by the four reviewers, we conduct some additional experiments on our Point-In-Context and demonstrate further promising abilities. &emsp; **1.** In the response to reviewer **Uvps**, we use the pre-trained PIC-Sep and PIC-Cat from Table 1 of the main text to test the combined task, where different levels of noise are added to the prompts during testing on the part segmentation task. The results show that our PIC-Sep and PIC-Cat **can naturally generalize to noisy prompts without re-training**. &emsp; **2.** In the response to reviewer **GuqQ**, we conduct experiments on **other cross-modal SOTA MPM methods** like ACT, I2P-MAE, and ReCon. These experimental results will be included in the final version. &emsp; **3.** In the response to reviewer **kyHc**, we conduct experiments that train our PIC on a subset of the tasks and test its emergent ability on the remaining tasks.
Despite the formidable challenge, PIC demonstrates **great emergent capability** on the task unseen during training. &emsp; **4.** In the response to reviewer **uxmh**, we explain more details about **task definitions, information leakage, and insights into our technical** methods. &emsp; In conclusion, our work is **the first to explore in-context learning for 3D point cloud understanding**. We **unify the output space of the four tasks** into the 3D coordinate space and apply the MPM framework for model training, realizing **one training, one model, and processing multiple tasks**. Extensive experimental results demonstrate the **superiority of the methodology design**, which also indicates that **in-context learning in 3D is non-trivial**. Besides, our Point-In-Context shows **great generalization capability**. &emsp; We include two figures and two tables in **the PDF** and reference them in the rebuttals above. - Figure R1: Visualization of PIC-Sep on part segmentation. The given prompts contain various levels of noise ranging from 100 to 500 noisy points (1024 points per sample). - Figure R2: The improved demonstration of information leakage in previous MPM works. - Table R1: The additional results of the cross-modal methods: ACT, I2P-MAE, and ReCon. - Table R2: The numerical statistics of the target point clouds of the ShapeNet testing set and our reconstruction testing set. Pdf: /pdf/b248d1325484dceae8853c054f0427612b440768.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Multi-Fidelity Multi-Armed Bandits Revisited
Accept (poster)
Summary: The authors consider the problem of multi-fidelity multi-armed bandits under fixed-confidence BAI and regret-minimization objectives. The best-arm identification algorithm is based on the lower-upper confidence bound framework and the associated cost complexity. A novel definition of regret is introduced which captures the fact that fidelities influence the accuracy of the observed rewards. Lower and upper bounds are provided for both problem cases. Strengths: This paper provides a novel outlook on the multi-fidelity MAB problem introduced by Kandasamy et al. The BAI algorithm is an index-based algorithm with a new UCB-based procedure to determine the optimal fidelity to be sampled. Weaknesses: - Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing positive feedback on this work.
Summary: This paper studies the multi-fidelity multi-armed bandits problem, where each arm has different fidelities and observation accuracies. There are many existing works on this problem. This paper studies both best arm identification with fixed confidence and regret minimization, with some novel theoretical results for both problems. Strengths: 1. This paper studies an important problem, and the overall presentation is good, so it's an easy-to-follow paper. 2. This paper claims to be the first work to study the BAI task under the MF-MAB model. 3. It's great to see much application-oriented discussion after each theoretical analysis. 4. The tightness of the theoretical results is checked; see Remarks 3.6, 4.4, 4.5. 5. Compared with Kandasamy et al. [19], the cost complexity definition in this paper makes more sense to me. Weaknesses: 1. Although this paper claims to be the first work to study the BAI task under the MF-MAB model, I do have a question on the comparison to Kandasamy et al. [18] as discussed in lines 76-82. Yes, simple regret minimization in a continuous domain is equivalent to BAI in a discrete domain, but after discretization of the continuous domain, how does the algorithm in this paper compare to Kandasamy et al. [18]? 2. In Remark 3.8, I don't think $\mu_1$ and $\mu_2$ can be easily obtained in real-world applications. Yes, perfect classification means $\mu=1.0$, but that's because of the output range. For the 2nd-best arm, its $\mu$ is hard to get. 3. A minor comment: I do like application-oriented discussion, but why does it appear after each technical part? I think first being motivated by applications, stating the problem, and then presenting the results makes more sense. 4. A minor suggestion: instead of denoting the best arm as arm 1 and the 2nd-best arm as arm 2, $a_1$ and $a_2$ look better. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. In line 225, it's good to compare Explore-A vs.
Explore-B, but what is the criterion for selecting the one that performs best? 2. There are some technical terms that need more explanation. (2.1) In line 109, why can the error upper bounds be revealed to the learning agent? Could you give a real-world example and explain what they are in this example? (2.2) In line 131, are $\nu^{(m)}_k$ different for different algorithms? If yes, why can they appear in the lower bound? (2.3) In line 172, $\beta$ seems similar to the one in the GP-UCB paper. Could you explain why $\beta$ is defined that way? Pointing to somewhere in the proof also works. (2.4) In line 182, how is "half" chosen? I guess any constant between 0 and 1 works, right? If a constant C is used instead of half, how does it affect the theoretical results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Limitations are discussed in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate the reviewer for these instructive comments. Below, we answer the weaknesses and questions point by point, following the reviewer's numerical labels. **Weaknesses** > *Although this paper claims to be the first work to study the BAI task under the MF-MAB model, I do have a question on comparison to Kandasamy et al. [18] ...* 1. Under the MF-MAB model, we claim to be the first to study best arm identification *with fixed confidence*, which is different from best arm identification *with fixed budget* (the objective of Kandasamy et al. [18] after domain discretization). Fixed confidence means: given a confidence parameter $\delta$, find the best arm at as small a cost as possible, so the *cost complexity* is the minimization objective; fixed budget means: given a fixed cost budget, find the best arm with as high a confidence as possible, so the *failure probability* is the minimization objective. With two different minimization objectives, the two settings and their algorithms are incomparable. > *In Remark 3.8, I don't think $\mu_1$ and $\mu_2$ can be easily obtained in real-world...* 2. We note that running our algorithm does not require the true means of the top two arms. An upper bound on $\mu_1^{(M)}$ and a lower bound on $\mu_2^{(M)}$ are enough for the algorithm to work properly (see Line 187). In the classification scenario mentioned by the reviewer, the classification accuracy of the model under the default hyperparameter setting can be considered a safe (and reasonably tight) lower bound for the 2nd-best arm. With additional application-based information, e.g., in the hyperparameter optimization example discussed in Remark 3.8, one can obtain better bounds for these two reward means. 3. We highly appreciate this suggestion.
We will move Section 3.4 on application discussion to the beginning of Section 3 to better motivate the multi-fidelity best arm identification task. 4. We appreciate the reviewer for this suggestion. In the final version of this paper, we will update the notation for arm indices and means accordingly. **Questions** > *1. In line 225, it's good to compare Explore-A vs. Explore-B, but what is the criterion for selecting the one that performs best?* - Theoretically, regarding when we prefer $\texttt{Explore-A}$ over $\texttt{Explore-B}$: informally, if $\tilde{G}$ is small ($\tilde{G}=O(M\tilde{H})$), $\texttt{Explore-A}$ is preferred; otherwise, if $\tilde{G}$ is large, $\texttt{Explore-B}$ is preferred. - Practically, we can run both $\texttt{Explore-A}$ and $\texttt{-B}$ in parallel for each arm $k$. If $\texttt{Explore-A}$ starts to converge on a fidelity, i.e., a large proportion of the pulls of this arm are at this fidelity, we stop $\texttt{Explore-B}$ and continue with $\texttt{Explore-A}$. Otherwise, if $\texttt{Explore-B}$ commits to a fidelity first, we stop $\texttt{Explore-A}$ and follow $\texttt{Explore-B}$ in committing to this fidelity. With this heuristic approach, we can expect the empirical cost to be close to the smaller of the costs of running either $\texttt{Explore-A}$ or $\texttt{-B}$ alone. > *2. There are some technical terms that need more explanation...* - 2.1 Due to the limited space in the rebuttal, we kindly refer the reviewer to [the first reply to review rkrs](https://openreview.net/forum?id=oi45JlpSOT&noteId=SfykMGfX3s), where we elaborate on why the error upper bounds can be revealed in hyperparameter optimization. - 2.2 We note that the distribution $\nu^{(m)}_k$ is the reward distribution of pulling arm $k$ at fidelity $m$. It depends only on the multi-fidelity multi-armed bandit model parameters, not on any specific algorithm.
- 2.3 Yes, $\beta$ is the confidence radius for constructing the upper confidence bound (UCB), and there is a corresponding term in GP-UCB. Recall $$\beta(n,t,\delta) = \sqrt{\frac{\log(Lt^4/\delta)}{n}},$$ where $n$ is the number of observations, $t$ is the current time slot, $\delta$ is the confidence parameter, and $L\,(>0)$ is a constant factor. With Hoeffding's inequality, one usually defines the confidence radius as $\sqrt{\frac{\log(1/\delta)}{n}}$ to guarantee that the true mean $\mu$ lies inside the interval $(\hat{\mu} - \sqrt{\frac{\log(1/\delta)}{n}}, \hat{\mu} + \sqrt{\frac{\log(1/\delta)}{n}})$ with confidence $1-\delta$. The reason we replace $1/\delta$ with $Lt^4/\delta$, i.e., increase the confidence to $1 - \frac{\delta}{Lt^4}$, is that the proof uses the confidence intervals many times, e.g., over many time slots and for all arms and fidelities, and thus each of these intervals needs a higher confidence so that their union confidence is high enough for our fixed-confidence theoretical guarantee. - 2.4 Yes, the choice of "half" is just for simplicity of presentation, and any constant in $(0,1)$ would work. If a constant $C\in (0,1)$ is used, the factor $3$ on the right-hand side of the condition in Line 10 of $\texttt{Explore-B}$ becomes $\frac{1+C}{1-C}$, and an additional $(C^{-2} + (1-C)^{-2})$ multiplier appears in the cost complexity upper bound of $\texttt{Explore-B}$ in Eq. (8). In this multiplier, the $C^{-2}$ term arises because a small $C$ implies that $\texttt{Explore-B}$ may commit to a very inefficient suboptimal fidelity and thus suffer a high cost, while the $(1-C)^{-2}$ term reflects that if $C$ is close to $1$, it becomes very difficult for $\texttt{Explore-B}$ to find a good fidelity to commit to, and thus $\texttt{Explore-B}$ has to spend a large cost on exploration.
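As a quick numerical illustration of the confidence radius in point 2.3, here is a minimal Python sketch; the constant $L$ and all inputs are illustrative placeholders, not values from the paper:

```python
import math

def beta(n, t, delta, L=4.0):
    """Confidence radius sqrt(log(L*t^4/delta) / n) discussed above.

    Shrinking each interval's failure probability from delta to
    delta/(L*t^4) lets a union bound over many time slots, arms,
    and fidelities keep the overall failure probability small.
    """
    return math.sqrt(math.log(L * t**4 / delta) / n)

# The radius shrinks in the observation count n and grows only
# logarithmically in the time slot t.
print(round(beta(n=10, t=100, delta=0.05), 3))    # wide interval
print(round(beta(n=1000, t=100, delta=0.05), 3))  # exactly 10x narrower
```

Note the 10x narrowing when $n$ grows by 100x, reflecting the $1/\sqrt{n}$ rate.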
--- Rebuttal Comment 1.1: Title: Thanks for rebuttal Comment: The authors' rebuttal addresses most of my concerns, and I really encourage the authors to polish the paper's presentation by incorporating our discussion from the review and rebuttal into the next version. I'm happy to increase my rating from 6 to 7.
Summary: The paper studies the multi-fidelity multi-armed bandit problem, where each arm can be pulled at different fidelities, providing a better or worse estimate of the true mean at a different cost. The paper studies best arm identification with fixed confidence, provides a lower bound on the cost complexity, and an algorithm with cost complexity upper bounds. In addition, the paper proposes a new regret definition which finds multiple applications. Upper and lower bounds on the proposed regret are proved for the instance-dependent and instance-independent cases. Strengths: - The paper studies a practical problem that has various applications. - It is well-written and easy to follow. - The proposed definition of regret is natural to study and has multiple applications. - The paper provides upper and lower bounds that are tight, in some cases, for both best arm identification and regret minimization. Weaknesses: My main concerns are the following: - The proposed algorithm for BAI requires the estimates $\tilde{\mu}_1^{(M)}, \tilde{\mu}_2^{(M)}$, which is not a realistic assumption. Even though the authors argue that this is possible in some applications, it is still unrealistic for many other applications. These assumptions make the problem theoretically and practically less interesting. - The cost upper bound for BAI depends on $\tilde{m}^\star_k$, which depends on $\tilde{\mu}_1^{(M)}, \tilde{\mu}_2^{(M)}$, which are not guaranteed to be tight estimates. It also has extra terms that are not matched in the lower bound. - The problem-dependent regret upper bound is not tight in most cases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Can the proposed regret be upper bounded by $\lambda^{(M)}$ times the simple regret considered in the literature? How does the proposed upper bound compare to this? - Please see Weaknesses.
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > - *The proposed algorithm for BAI requires the estimates $\tilde{\mu}\_1\^{(M)}, \tilde{\mu}\_2\^{(M)}$ which is not a realistic assumption. ...* > - *The cost upper bound for BAI depends on $\tilde{m}\_k\^{\star}$ , which depend on $\tilde{\mu}\_1\^{(M)}, \tilde{\mu}\_2\^{(M)}$ that are not guaranteed to be tight estimates. It also has extra terms that are not matched in the lower bound.* We thank the reviewer for pointing this out. We acknowledge that when the reward mean bounds for the top two arms are loose, the cost complexity would have extra terms. We note that, practically, there are applications, e.g., the hyperparameter optimization example in Remark 3.8, that provide tight bounds for the top two arms. > *The problem-dependent regret upper bound is not tight in most cases.* We acknowledge that the problem-dependent regret upper and lower bounds only match in the class of instances mentioned in Remark 4.5, where we also illustrate that this class of instances, though not general, is common in practice. On the other hand, we note that our algorithm's problem-independent regret upper bound matches the lower bound up to logarithmic factors, and often in the bandits literature, if an algorithm is good in the problem-independent case, it is slightly worse in the problem-dependent case. Therefore, since our algorithm's problem-independent bound is tight, it is expected that its problem-dependent upper bound is not very tight. > *Can the proposed regret be upper bounded by $\lambda^{(M)}$ times simple regret considered in the literature? How does the proposed upper bound compare to this?* Recall that the simple regret in the literature refers to the single reward mean difference between the optimal arm and the final output arm of the concerned algorithm, while our regret is the cumulative reward mean difference between the optimal arm and the arms pulled by the concerned algorithm over all decision rounds.
Hence, with different base units, $\lambda^{(M)}$ times their simple regret is incomparable to our regret upper bound. Although one can compare our regret upper bound with the simple regret in prior works, e.g., Kandasamy et al. [19], by multiplying their simple regret upper bound by the time horizon $T$, this is rather unfair, because our regret upper bound counts the cost incurred while the learning was not yet accurate at the beginning, whereas $T$ times the simple regret enjoys the final accuracy over all decision rounds. - [19] Kirthevasan Kandasamy, Gautam Dasarathy, Barnabas Poczos, and Jeff Schneider. The multi-fidelity multi-armed bandit. Advances in Neural Information Processing Systems, 29, 2016. --- Rebuttal 2: Comment: This is a gentle reminder for the reviewer that the author-reviewer discussion will end in less than 15 hours. In case you have any further questions, please feel free to ask; we are happy to answer them.
Summary: This paper studies the problem of multi-fidelity multi-armed bandits (MF-MAB), where each arm can be pulled at different fidelities with different rewards and costs. The main contributions of this paper include deriving the cost complexity lower bound for best arm identification with fixed confidence and a new definition of regret for regret minimization. Strengths: The theoretical results on the cost complexity lower bound are interesting, and the proof of each theorem is comprehensive. Weaknesses: - In Algorithm 1, the best arm and second-best arm are selected according to the upper confidence bound (UCB). - Why use the UCB, given that one arm's UCB being larger than another's does not necessarily mean that the first arm is better? How about using the LCB? - Since the UCB and LCB both depend on the confidence radius, is it possible that the algorithm will come to an early decision and pick an arm that is not well explored? Here, I am assuming that if an arm is not well explored, its confidence radius will be large and thus its UCB will also be large. - It would be better to have some toy examples/experiments to show the effectiveness of the theoretical results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: As in the Weaknesses section Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, the authors addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Why use the UCB given that the UCB of an arm being larger than that of another arm does not necessarily mean that the first arm is better? How about using the LCB?* We believe this question is about why we use UCB indices in Line 4 of Algorithm 1 (the LUCB framework) to pick the top two arms, and why we do not use LCB indices. We note that arms with high uncertainty (e.g., lacking exploration) have a large confidence radius (see $\beta(n,t,\delta)$ in Line 172) and thus a large UCB. As the algorithm is looking for the top two arms, picking arms with high UCB directs the algorithm to explore these under-explored arms, so as to balance the exploration of all arms. In contrast, the LCB would steer this exploration away from under-explored arms: the less an arm is explored, the lower its LCB, and, by picking arms with high LCB, there would be less chance to explore the under-explored arms. > *Since the UCB and LCB both depend on the confidence radius, is it possible that the algorithm will come to an early decision and pick an arm that is not well explored? Here, I am assuming that if an arm is not well explored, its confidence radius will be large and thus its UCB will also be large.* As long as the interval formed by the UCB and LCB of an estimate contains the true reward mean (which holds with high confidence), the algorithm picks the right optimal arm for sure, even when the confidence radius is large. This is because, when the condition in Line 3 of the LUCB algorithm does not hold, i.e., the LCB of arm $\ell_t$ (the arm with the highest UCB index) is greater than the second-largest UCB index, we have $$\mu\_{\ell_t} \overset{(a)}> \text{LCB}\_{\ell_t} \overset{(b)}> \text{UCB}\_{k} \overset{(c)}> \mu_k, \text{ for any arm }k\neq \ell_t,$$ where inequalities (a) and (c) hold because the confidence intervals contain the true reward means, and inequality (b) holds because the condition in Line 3 of Algorithm 1 does not hold.
That is, the arm $\ell_t$ is indeed the best arm, and the algorithm can identify it as the best arm with high confidence. > *It would be better to have some toy examples/experiments to show the effectiveness of the theoretical results?* We are running more experiments, in particular comparing to the baseline LUCB at the highest fidelity, and we hope the experiments will conclude in a few days. Once they finish, we will post the results as a reply. For now, please refer to Figure 1 for an empirical comparison of our two proposed exploration policies. --- Rebuttal 2: Title: New experiment updated in general comment Comment: This is a gentle reminder to the reviewer that we added [a new simulation in the general comments](https://openreview.net/forum?id=oi45JlpSOT&noteId=qMzrbHt8FZ). It compares our two exploration policies to a baseline and illustrates when our exploration policies (utilizing low fidelity) outperform the baseline (not utilizing low fidelity).
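The LUCB selection-and-stopping argument discussed in this rebuttal can be sketched in a few lines of Python; the arm statistics and the Hoeffding-style radius below are illustrative placeholders, not the paper's exact Algorithm 1:

```python
import math

def radius(n, t, delta, L=4.0):
    # Hoeffding-style confidence radius, as in the rebuttal above.
    return math.sqrt(math.log(L * t**4 / delta) / n)

def lucb_stop(means, counts, t, delta):
    """Return the identified best arm if the LUCB-style stopping rule fires.

    Stops when the LCB of the arm with the highest UCB exceeds the largest
    UCB among all other arms; then, with high confidence,
    mu_best > LCB_best > UCB_k > mu_k for every other arm k.
    """
    ucb = [m + radius(n, t, delta) for m, n in zip(means, counts)]
    lcb = [m - radius(n, t, delta) for m, n in zip(means, counts)]
    leader = max(range(len(means)), key=lambda k: ucb[k])
    rival_ucb = max(u for k, u in enumerate(ucb) if k != leader)
    return leader if lcb[leader] > rival_ucb else None

# Well-separated, well-explored arms: the rule fires and returns arm 0.
print(lucb_stop([0.9, 0.2, 0.1], [5000, 5000, 5000], t=15000, delta=0.05))  # → 0
```

With few observations the intervals overlap and the sketch returns `None`, i.e., no early commitment to an under-explored arm, which is the point made in the second answer above.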
null
NeurIPS_2023_submissions_huggingface
2,023
Summary: In this work, the authors revisit multi-fidelity multi-armed bandits and provide upper and lower bounds on the cost complexity for the best arm identification objective with fixed confidence, using the lower-upper confidence bound framework. Further, this work introduces three procedures for finding the optimal fidelity, with comparable upper bound results given for the first and second procedures. Finally, the regret minimization objective is discussed briefly, introducing a new definition of regret which differs from the definition introduced in prior work. Strengths: The paper has made a significant effort to keep the problem framework and analysis clear. The proposed method is novel for the best arm identification objective. Further, the detailed theoretical analysis of the LUCB framework for both the best arm identification and regret minimization objectives, and the comparable upper and lower bounds on the cost complexity and regret, respectively, are theoretically sound and well presented. Weaknesses: This work assumes the reward distributions have bounded support, which may not always be the case. Further, knowledge of upper and lower bounds on the 1st and 2nd arms, respectively, is assumed, and when these bounds are too loose, the upper bounds on the cost complexity are not comparable to the lower bounds. Assumption 3.3, on which all the results depend, may not always hold. The paper is missing an experimental evaluation of the proposed framework and a comparison with existing methods. Lastly, the paper is sometimes hard to follow when the 1st and 2nd arms are mentioned, as it is unclear when the ground truth is being referred to and when the indexing of arms is. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is Assumption 3.3 expected to hold in general? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *This work assumes the reward distributions are on bounded support which may not be the case always.* Assuming the reward distribution is bounded is common and standard in the bandits literature [auer2002finite]. With known approaches in bandits, this assumption can easily be extended to the more general sub-Gaussian case [lattimore2020bandit, §5.3], and, with some effort, it can be relaxed to heavy-tailed distributions [bubeck2013bandits]. - auer2002finite: Peter Auer, Nicolo Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem. Machine Learning, 47:235–256, 2002. - lattimore2020bandit: Tor Lattimore and Csaba Szepesvári. Bandit Algorithms. Cambridge University Press, 2020. - bubeck2013bandits: Sébastien Bubeck, Nicolo Cesa-Bianchi, and Gábor Lugosi. Bandits with heavy tail. IEEE Transactions on Information Theory, 59(11):7711–7717, 2013. > *Further the knowledge of upper and lower bounds of 1st and 2nd arm respectively is assumed ...* We acknowledge that when the reward mean bounds for the top two arms are loose, the cost complexity bound is also loose. We note that, practically, there are applications, e.g., the hyperparameter optimization example in Remark 3.8, that provide tight reward mean bounds for the top two arms. > *The assumption 3.3 on which all the results are dependent may not hold always.* First, we kindly remind the reviewer that only Theorem 3.4 relies on Assumption 3.3. All other theoretical results, e.g., the cost complexity lower bound and the regret minimization bounds, do not need this assumption. More importantly, we note that Assumption 3.3 can be removed by slightly changing the definition of the value $c$ in the proof of Theorem 3.4 in Appendix C.2 to $$c = \frac{\mu_1 + \max_k (\mu_k^{(m_k^*)} + \zeta^{(m_k^*)})}{2}.$$ With this new $c$, inequality (a) in Step 2 and inequality (a) in Step 3 (where Assumption 3.3 was used) go through without the assumption.
> *The paper is missing the experimental evaluation of the proposed framework and comparison with the existing methods.* We thank the reviewer for this suggestion. We are running more experiments, in particular comparing to the baseline LUCB at the highest fidelity, and we hope the experiments will conclude in a few days. Once they finish, we will post the results as a reply. For now, please refer to Figure 1 for an empirical comparison of our two proposed exploration policies. > *Lastly, the paper is sometimes hard to follow when the 1st and 2nd arms are mentioned, as it is unclear when the ground truth is being referred to and when the indexing of arms is.* We thank the reviewer for pointing out this confusion. As another reviewer (zHKa) suggested, in the final version of this paper we will use $a_1, a_2$ and $\tilde{a}_1,\tilde{a}_2$ to refer to the ground-truth and estimated top arm indices, respectively, and use $\mu_1^{(m)}, \mu_2^{(m)}$ and $\tilde{\mu}_1^{(m)},\tilde{\mu}_2^{(m)}$ explicitly for the reward means. > *Is the assumption 3.3 expected to hold in general?* Please refer to our reply above: Assumption 3.3 can be removed by slightly changing our proof. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response. The reviewer has no further questions.
Summary: This paper studies the multi-fidelity multi-armed bandit setting, where each arm is associated with a per-fidelity cost and an observed reward. In this setting, when the learner pulls an arm $k \in \mathcal{K}:=\{1, \ldots, K\}$ at fidelity $m \in \mathcal{M}:=\{1, \ldots, M\}$, it pays a cost of $\lambda^{(m)}$ and observes a reward $X_k^{(m)}$ whose mean $\mu_k^{(m)}$ is not too far from the arm's true reward mean $\mu_k$. This paper studies both the best arm identification and regret minimization settings. For the best arm setting, they propose an LUCB algorithm where the key novelty/challenge (different from standard MAB) is deciding the fidelities for exploring the critical arms. This fidelity selection faces an accuracy-cost trade-off, i.e., higher fidelity (accuracy) at a higher cost, or lower fidelity (accuracy) at a lower cost. Hence, this trade-off is different from the common exploration-exploitation trade-off in classic bandits. To address this trade-off in fidelity selection, they propose a UCB-type policy, called upon by the LUCB algorithm, that finds the optimal fidelity $m_k^*$ in (3) for each arm $k$ (EXPLORE-A), and an explore-then-commit policy stopping at a good fidelity that is at least half as good as the optimal one (EXPLORE-B). In Theorem 3.4 they show that the sample complexity upper bound matches the lower bound up to constant factors. They also study the regret minimization setting. Strengths: 1) Best arm identification in the multi-fidelity multi-armed bandit setting is a novel and interesting problem. 2) The proposed solution seems novel and is theoretically analyzed. 3) The sample complexity upper bound matches the lower bound under certain assumptions. Weaknesses: 1) You assume that the reward distribution and the mean $\mu_k^{(m)}$ of each arm $k$ at fidelity $m$ are unknown, while the costs $\lambda^{(m)}$'s and the error upper bounds $\zeta^{(m)}$'s are known to the learning agent.
This setting assumption seems very contrived. Is this a standard assumption? Can you point out some references? Can you give some motivating examples where this setting assumption makes sense? 2) The writing needs to improve substantially. There are too many results packed into the paper, without in-depth discussion of any specific result. For example, one of the main contributions is the regret bound; however, the regret bound theorem is moved entirely to Appendix D. The regret minimization algorithm is also not presented in the main paper but in Appendix D. The novelty of the arm elimination algorithm is not clear to me. Is the regret bound of the regret minimization algorithm tight? 3) This is mainly a theory paper, yet the technical novelty of your approach is not clear to me. What are the technical challenges in proving Theorem 3.4? I think this needs a detailed discussion. For example, the [Kandasamy et al.](https://papers.nips.cc/paper_files/paper/2016/file/2ba596643cbbbc20318224181fa46b28-Paper.pdf) paper tackles the multi-fidelity setting for regret minimization and [Kaufmann et al.](http://proceedings.mlr.press/v30/Kaufmann13.pdf) studies the LUCB algorithm for a single fidelity. My guess is that the proof of Theorem 3.4 must use these papers. Can the authors briefly discuss the proof technique? 4) There are no experiments in the main paper. Yet, multi-fidelity papers do substantial experiments on their settings (see [Kandasamy et al.](https://papers.nips.cc/paper_files/paper/2016/file/2ba596643cbbbc20318224181fa46b28-Paper.pdf)). What are the baselines in your experiments? 5) There is a summation over $m\in\mathcal{M}$ in the Explore-B sample complexity bound, which is not present in the Explore-A bound. It is said that when $\tilde{G}=O(M \tilde{H})$, the cost complexity upper bound of EXPLORE-A is less than that of EXPLORE-B. However, if $\tilde{G}=O(M \tilde{H})$, then the EXPLORE-A bound is far away from the lower bound. Can you clarify this?
Also, what are the implications of $\tilde{G}=O(M \tilde{H})$, and which settings have this property? In short, when do we prefer EXPLORE-A over EXPLORE-B? Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: See the Weaknesses section. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: See the Weaknesses section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for spending time reviewing this paper. Below, we answer the weaknesses point by point in numerical order. 1. Yes, this is the standard setting of multi-fidelity multi-armed bandits proposed by Kandasamy et al. [18, 20]. **On the known error upper bound** We present hyperparameter optimization as a motivating example in Section 3.4, where arms correspond to different groups of hyperparameters and fidelities correspond to different amounts of training resources for testing a group of hyperparameters. There are benchmarks for multi-fidelity hyperparameter optimization, including `HPOBench` and `YAHPO Gym`, covering the typically used fidelity dimensions, including the number of epochs, the training data sample size, etc. Thanks to these benchmarks, it is convenient to know $\zeta^{(m)}$ under different fidelities $m$ for the commonly used fidelity dimensions. We notice that the application motivation is mentioned a bit late. In the final version, we will move the application discussion forward to the model section to better motivate the model. 2. **Discussion of regret results** We acknowledge that the results on regret minimization are presented briefly. This is because we need to balance the presentation of two branches of contributions (best arm identification and regret minimization). We give four remarks (Remarks 4.3-4.6) in the main paper discussing the theoretical results on regret minimization. We present the real-world implications of the two-stage arm elimination algorithm in Remark 4.3, the regret bound tightness in Remarks 4.4 and 4.5, and the theoretical relation to partial monitoring in Remark 4.6. **Novelty of the elimination algorithm** One novelty of the multi-fidelity elimination algorithm is its two-stage exploration-exploitation mechanism: always conduct exploration at the highest fidelity and exploitation at the lowest fidelity.
This algorithmic design is based on the special new regret definition in Eq., which reflects real-world applications, e.g., advertisement distribution and production management (see Remarks 4.1 and 4.3 for details). **Tight regret bound** In Remark 4.4, we show that our problem-independent regret upper and lower bounds match up to logarithmic factors. In Remark 4.5, we show that our problem-dependent regret upper bound matches our problem-dependent lower bound in a class of instances common in practice. 3. **Technical novelty of Theorem 3.4** Our best arm identification algorithm consists of two parts: (a) the LUCB framework in Algorithm 1 and (b) the fidelity selection procedures in Algorithm 2. While the proof approach related to the LUCB framework is similar to Kalyanakrishnan et al. [19], the remaining proof, related to the fidelity selection procedures $\texttt{Explore-A}$ and $\texttt{Explore-B}$, is very different from Kandasamy et al. [18, 20], who explore the fidelities one by one, from low fidelity to high fidelity. While this gradual elevation of the fidelity is intuitive, it fails to capture that there is an optimal fidelity reflecting the best tradeoff between accuracy and cost, and one should try to converge towards this optimal fidelity. Instead, in our paper, we first present the lower bound result, which provides us with the above insight on the optimal fidelity, and then we design two new exploration procedures that converge directly towards the optimal fidelity. Next, we fix an arm $k$ and illustrate the technical challenges of analyzing both procedures. **$\texttt{Explore-A}$** utilizes a UCB-type policy to find the optimal fidelity $m_k^*$ --- the most effective fidelity for distinguishing this arm $k$ from the optimal arm, where a fidelity plays the role of an "arm" in MAB. The key step is to show that each suboptimal fidelity for arm $k$ is selected at most $O(\log\log t)$ times, where $t$ is the time slot index. 
This $\log \log t$ term, corresponding to the second term of Eq., is novel and rare in bandit sample complexity upper bounds. This result leverages two crucial findings: first, arm $k$ is pulled at most $O(\log t)$ times across all fidelities, as ensured by the LUCB framework. Second, $\texttt{Explore-A}$ guarantees that each suboptimal fidelity is chosen a logarithmic number of times *relative* to the total number of arm pulls by LUCB (see Lemma C.2). **$\texttt{Explore-B}$** aims to find a good fidelity that is at least half as good as the optimal fidelity. To achieve that, one key technical challenge is to devise a commitment condition (Line 10 in Algorithm 2) and to prove (1) how much cost suffices to make sure the condition holds and (2) that, when this condition holds, $\texttt{Explore-B}$ finds a good fidelity with high probability (see Lemma C.3). 4. **On experiments.** We are running more experiments, in particular comparing to the baseline LUCB on the highest fidelity, and we hope the experiments will conclude in a few days. Once they finish, we will post the results as a reply. For now, please refer to Figure 1 for an empirical comparison of our two proposed exploration policies. 5. Notice that $\tilde{G}=O(M\tilde{H})$ is an upper bound for $\tilde{G}$, meaning $\tilde{G}$ is not large. That is, the second term of the cost complexity upper bound of $\texttt{Explore-A}$, which contains $\tilde{G}$, is not large. Therefore, when $\tilde{G}=O(M\tilde{H})$, the cost complexity upper bound of $\texttt{Explore-A}$ is *not* far from the lower bound: the first term matches the lower bound, and the second term matches up to a factor of $M$. As for when $\texttt{Explore-A}$ is preferred over $\texttt{Explore-B}$, we kindly refer the reviewer to [the reply to review zHKa's first *Question*](https://openreview.net/forum?id=oi45JlpSOT&noteId=urNfm5ayDK). 
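The rebuttal above describes $\texttt{Explore-A}$ as a UCB-type policy over fidelities, with fidelities playing the role of arms in a standard MAB. As a rough generic illustration only (not the paper's exact procedure: the index form, the constant `c`, and the `gap_estimates` proxy below are assumptions for the sketch), a UCB1-style selection over $M$ fidelities looks like:

```python
import math

def ucb_select_fidelity(gap_estimates, pull_counts, t, c=2.0):
    """Generic UCB1-style selection over fidelities (fidelities act as 'arms').

    gap_estimates[m]: empirical proxy for how well fidelity m separates this
                      arm from the optimal arm (an assumption for illustration).
    pull_counts[m]:   number of times fidelity m was used for this arm so far.
    t:                current time slot index.
    """
    best_m, best_index = None, -float("inf")
    for m, (g, n) in enumerate(zip(gap_estimates, pull_counts)):
        if n == 0:
            return m  # try every fidelity at least once
        index = g + math.sqrt(c * math.log(t) / n)  # optimistic UCB index
        if index > best_index:
            best_m, best_index = m, index
    return best_m
```

Under such a rule, a suboptimal fidelity's confidence bonus shrinks as its pull count grows, which is the mechanism behind the bounded number of suboptimal-fidelity selections discussed above.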
--- Rebuttal Comment 1.1: Title: Further clarification questions Comment: I thank the authors for their response. I have a few clarification questions which will help me understand the paper: - Thank you for clarifying the motivation of the paper. - I still feel that the presentation of the paper is unsatisfactory, where half the results, including the regret algorithm and regret theorem, are in the appendix. Regarding the novelty of the regret results, I have a few queries. - I am concerned about the regret definition in eq (9). There are two issues. Why is the definition only discounting $\mu_1$ by $\lambda^{(1)}$ and not $\mu_{I_t}$ by also $\lambda^{(I_t)}$? Why the $N$ is required and what happens if you simply sum over $t=1$ to $\Lambda$? - The algorithmic novelty of MF-MAB in Appendix D is still not clear to me. It seems just a kernelized version of UCB-Improved of Auer et al. (2010): pull each arm in the active set an equal number of times at the highest fidelity and then eliminate sub-optimal arms that are below the gap $2^{-p + 1}$. - The factor $\epsilon$ is not described in the algorithm description in Appendix D.2 (from lines 617-622). Is it a problem-dependent parameter, a lower bound on the gaps? I see that it is showing up in Theorem D.3 as $1/\epsilon^2$. Does this mean that you need to know a lower bound on the gaps for running MF-MAB? - These queries again bring me to one of the key points I am raising. Algorithm design is one of the key contributions of this paper. You *cannot* put the algorithm in the appendix, hide away key design choices, and discuss the results only in remarks. - Experiments: No further experiments are provided by the authors. Reference: Auer, Peter. "UCB revisited: Improved regret bounds for the stochastic multi-armed bandit problem." 2010. --- Reply to Comment 1.1.1: Comment: We highly appreciate the reviewer for spending time evaluating this work. > I am concerned about the regret definition in eq (9). There are two issues. 
- > Why is the definition only discounting $\mu_1$ by $\lambda^{(1)}$ and not $\mu_{I_t}$ by also $\lambda^{(I_t)}$? **Answer:** We are *not* discounting rewards by fidelity. No matter which fidelity is leveraged to pull the arm, the obtained observations are counted as rewards without any scaling. Please refer to Remark 4.1 for the application motivation for not discounting rewards by fidelity. Since there is no reward scaling due to different fidelities, in the second term $\mathbb{E}\left[ \sum_{t=1}^N \mu_{I_t}^{(M)} \right]$ of Eq. (9), the $\mu_{I_t}^{(M)}$ is not discounted. The first term $\frac{\Lambda}{\lambda^{(1)}}\mu_1$ of Eq. (9) signifies that the optimal strategy for maximizing cumulative rewards is to consistently pull the optimal arm (with mean $\mu_1$) at the lowest-cost fidelity $\lambda^{(1)}$, where the expected number of pulls is $\frac{\Lambda}{\lambda^{(1)}}$; the $\lambda^{(1)}$ here is not discounting the reward mean $\mu_1^{(M)}$. - > Why the $N$ is required and what happens if you simply sum over $t=1$ to $\Lambda$? **Answer:** $N$ represents the overall count of decision rounds (time slots) of the algorithm under consideration, and $\Lambda$ stands for the total budget allocated to the algorithm. Since the second term in the regret definition (e.g., Eq. (9)) is the accumulation of rewards across all time slots, one needs to sum from $t=1$ to $t=N$. As $t$ is the index of time slots and has different units from the budget $\Lambda$, it is inappropriate to sum over $t=1$ to $t=\Lambda$. > The algorithmic novelty of MF-MAB in Appendix D is still not clear to me. It seems just a kernelized version of UCB-Improved of Auer et al. (2010). Pull each arm in the active set an equal number of times at the highest fidelity and then eliminate sub-optimal arms that are below the gap $2^{-p + 1}$. 
**Answer:** The primary innovation in the algorithm's design lies in its two-stage approach: a high-fidelity exploration phase (Lines 3-7 in Algorithm 4) followed by a low-fidelity exploitation phase (Line 8 in Algorithm 4). As acknowledged in the paper, the design of this algorithm, particularly its exploration stage, bears resemblance to the work of Auer et al. (2010). However, our paper extends beyond this algorithmic design and centers on demonstrating that the algorithm attains tight regret upper bounds (both problem-dependent and problem-independent) for the novel regret definition in Eq. (9) for the MF-MAB model. This novel MF-MAB regret definition is suitable for modeling many practical real-world scenarios, e.g., advertisement distribution as described in the paper and appreciated by the other reviewers. The two-stage algorithm provides a practical strategy for such scenarios, and its tight regret guarantees support the viability of adopting such a design in practical settings. > The factor $\epsilon$ is not described in the algorithm description in Appendix D.2 (from lines 617-622). Is it a problem-dependent parameter, a lower bound on the gaps? I see that it is showing up in Theorem D.3 as $1/\epsilon^2$. Does this mean that you need to know a lower bound on the gaps for running MF-MAB? **Answer:** No, $\epsilon$ is not gap-dependent, and one can run the algorithm with any $\epsilon > 0$. The regret upper bound in Theorem D.3 (Eq. (21)) holds for any $\epsilon > 0$. Theorem D.4 shows that setting $\epsilon = (K\log \Lambda / \Lambda)^{1/3}$ guarantees the problem-independent (worst-case) regret upper bound for any MF-MAB instance. > These queries again bring me to one of the key points I am raising. Algorithm design is one of the key contributions of this paper. You cannot put the algorithm in the appendix, hide away key design choices, and discuss the results only in remarks. 
**Answer:** We thank the reviewer for suggesting a better presentation of this paper. Although we agree that the algorithm design (Algorithm 4) is a contribution to this paper, it is only one of the key contributions (among other algorithm designs and theoretical results). Especially as this is a theoretical paper, it is also very important to present the theoretical results. We indeed made hard decisions on what to keep in the paper in order to achieve a balance. As the camera-ready version allows an additional page, we promise to move the algorithm design and the detailed theorems in Appendix D into the main paper in the final version. --- Reply to Comment 1.1.2: Title: New experiment updated in general comment Comment: This is a gentle reminder to the reviewer that we added [a new simulation in the general comments](https://openreview.net/forum?id=oi45JlpSOT&noteId=qMzrbHt8FZ). It compares our two exploration policies to one baseline and illustrates when our exploration policies (utilizing low fidelity) outperform the baseline (not utilizing low fidelity).
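For reference, assembling the two terms quoted earlier in this thread, the regret notion under discussion (Eq. (9) in the paper) has the form below. This is only a reconstruction from the replies above; the paper's exact normalisation may differ:

```latex
% Regret of Eq. (9), reconstructed from the two terms discussed in this thread:
% benchmark term: always pull the optimal arm (mean \mu_1) at the cheapest
% fidelity cost \lambda^{(1)}, giving \Lambda/\lambda^{(1)} expected pulls;
% payoff term: the undiscounted rewards accumulated over the N decision rounds.
\mathrm{Reg}(\Lambda) \;=\; \frac{\Lambda}{\lambda^{(1)}}\,\mu_1
\;-\; \mathbb{E}\!\left[\sum_{t=1}^{N} \mu_{I_t}^{(M)}\right]
```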
Successor-Predecessor Intrinsic Exploration
Accept (poster)
Summary: This paper is about exploration in Reinforcement Learning. Many existing works do not leverage retrospective information when computing intrinsic rewards. To this end, the paper introduces a method called SPIE where the intrinsic reward is calculated by combining successor and predecessor representations to account for prospective and retrospective information, respectively. There are different variants for discrete and continuous state spaces. For discrete state spaces, the paper presents results on grid worlds of varying difficulty to test the efficacy of agents at covering the state space and at adapting when rewarding states change (in a continual setting). For continuous state spaces, comparisons on hard-exploration tasks in the Atari benchmark are presented, where gains in performance are observed on a few tasks. Strengths: 1. Exploration with SR has been studied in prior works. The idea of incorporating PR information in the intrinsic reward is interesting. 2. The experiments for discrete state spaces are thorough and show that the gains are coming from adding retrospective information with both fixed and learned SR. Weaknesses: 1. The motivation is not very clear. The writing switches between learning a continual exploration policy and exploration for singleton tasks. More comments on this are in Questions below. 2. The paper misses some citations and baselines. There should be discussions of learning exploratory policies that can continually explore and of more recent baselines on the same. Comparisons should also be done on non-stationary continuous state space environments. 3. The results on the Atari benchmark are not very convincing, and more insights/tasks are needed for evaluation. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. There are methods focusing on using the retrospective structure of the transition sequences, such as NovelD [1]. 
Furthermore, some recent methods have also relied on episodic novelty in addition to a global intrinsic reward to improve exploration, e.g., NGU [2] and E3B [3]. How does the proposed method compare with them? 2. Between lines 29-33, it is mentioned that the existing methods (as they don't use retrospective information) may fail to visit bottleneck states. It is not clear why this will happen. Can this be shown experimentally on the grid world example? Also, it is possible that the intrinsic reward for visiting the bottleneck state is small; still, the agent will receive high rewards on going to the other side of the cluster, resulting in a higher Q-value and encouraging the agent to visit those states. 3. Lines 130-134 talk about continual exploration but focus on single-task settings. The NovelD [1] paper talks about the Asymptotic Inconsistency (page 4) of the exploration bonus, so the agent can learn to exploit and converge to an optimal policy for the given task. For singleton tasks, why is continual exploration important? Furthermore, the paper does not discuss methods focusing on continual exploration or compare with them. 4. Lines 153-159 show that the proposed agent has cyclic behavior and mention that this is advantageous for a non-stationary reward structure. However, the experiments are mostly designed for singleton tasks. Several methods have studied adaptation in continual tasks [4] or procedurally generated tasks [5,6,7]. How does the proposed method compare with them? 5. Was the proposed method also evaluated on how much coverage the agent can achieve in a single episode of finite length, to validate whether the method is suitable for non-stationary tasks? 6. Line 126 mentions that SR-based exploration is not ideal as the asymptotic behavior is uniform across states due to the norm becoming a fixed constant. Won't the reward based on SF+PF face the same issue, as the norm will be constant upon convergence? 
This would not help with continual exploration as discussed for the discrete case and may lead to different behaviors than the reward defined for the discrete state space scenario. 7. Experiments on Atari: There is no reasoning provided on why the method does better on Montezuma’s Revenge and Private Eye, and does poorly on Freeway and Venture. [5] also compared on Gravitar and Solaris, it would be interesting to see comparisons on these 2 environments too. Furthermore, more recent works should be included in the baselines like SPR [8]. 8. Minor typo: Line 127: were -> where ### References: [1] Zhang, Tianjun, et al. "Noveld: A simple yet effective exploration criterion." Advances in Neural Information Processing Systems 34 (2021): 25217-25230.\ [2] Badia, Adrià Puigdomènech, et al. "Never give up: Learning directed exploration strategies." arXiv preprint arXiv:2002.06038 (2020). [3] Henaff, Mikael, et al. "Exploration via Elliptical Episodic Bonuses." arXiv preprint arXiv:2210.05805 (2022).\ [4] Lehnert, Lucas, Stefanie Tellex, and Michael L. Littman. "Advantages and limitations of using successor features for transfer in reinforcement learning." arXiv preprint arXiv:1708.00102 (2017).\ [5] Zha, Daochen, et al. "Rank the episodes: A simple approach for exploration in procedurally-generated environments." arXiv preprint arXiv:2101.08152 (2021).\ [6] Raileanu, Roberta, and Tim Rocktäschel. "Ride: Rewarding impact-driven exploration for procedurally-generated environments." arXiv preprint arXiv:2002.12292 (2020).\ [7] Wang, Kaixin, et al. "Revisiting intrinsic reward for exploration in procedurally generated environments." The Eleventh International Conference on Learning Representations. 2023.\ [8] Schwarzer, Max, et al. "Data-efficient reinforcement learning with self-predictive representations." arXiv preprint arXiv:2007.05929 (2020). Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Some limitations are discussed in the paper. The discussion can include some potential directions on extending the method to non-stationary tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their instructive comments. Please find below our replies to the raised concerns/questions. - We thank the reviewer for pointing us to the related literature on constructing intrinsic rewards based on episodic information and retrospective transitions. SPIE is similar to NovelD in that both methods construct the intrinsic reward from the difference of novelty measures at the next state and the current state. However, the difference between SPIE and all three mentioned methods is that the SPIE intrinsic reward is based solely on a global intrinsic bonus, in contrast to the three methods, which all leverage a (partially) episodic intrinsic bonus. We will include these discussions of the relationships and differences with these methods, and citations to the mentioned papers, in the related works section in the camera-ready revision upon acceptance. - Existing intrinsic exploration methods can nevertheless visit bottleneck states. However, consider uncertainty-based methods (e.g., RND and ICM): upon multiple visits to the bottleneck states, the associated novelty (uncertainty) measure decreases, and in the long run the agent develops an intrinsic motivation against visiting these states. In contrast, the fundamental principle of SPIE is the dynamic balancing between the motivation to visit bottleneck states and the motivation to explore uncertain states. Hence, the implicit motivation towards bottleneck states, which makes for sensible exploratory transitions, is sustained irrespective of the agent's perceived knowledge (uncertainty) of the bottleneck states. - Continual exploration with SPIE is potentially useful even when the task structure (e.g., its connectivity pattern) does not change. 
For instance, consider an ethologically plausible setting where an animal forages randomly in a clustered environment, where the rewards are food that is depleted upon consumption and new rewards appear in a different location (with a potential temporal lag). In this case, continual exploration is necessary for fast adaptation to the non-stationary reward location (as demonstrated in Figure 3c-d). However, we note that such "cycling" exploratory behaviour and the associated ethological plausibility arise as one of the benefits of SPIE, but are not our main focus in the current paper. We hence do not further study their utility under the continual exploration setting here, but it is indeed an interesting direction for future investigation which we will pursue. - Similar to our argument in the previous point, continual exploration is not the main focus of the paper. Figure 3c-d and the associated discussions aim to demonstrate that SPIE has the potential to exhibit ethologically plausible continual exploration. We do not present SPIE as a state-of-the-art method for solving continual RL tasks or procedurally generated tasks, hence we do not provide the corresponding empirical evaluations in the paper. However, we do think this is an important and interesting direction to pursue. - We thank the reviewer for the suggestion. The average size of the set of unique states visited in a finite-length episode is reported in the rebuttal letter. - We note that the argument with respect to asymptotic uniformity only applies to the discrete setting. As the reviewer correctly points out, the SF and PF would converge asymptotically and would not lead to continual exploration. However, neither our motivation nor the focus of our empirical evaluation is on promoting SPIE as a method for continual exploration or examining its utility in that respect. 
In all our empirical evaluations under continuous state spaces, we only focus on single-task settings, hence the asymptotic convergence is beneficial per the asymptotic inconsistency argument from the NovelD paper. - We argue that DQN-SF-PF does not perform poorly on Freeway and Venture; rather, it performs comparably with the strongest baselines on these two games. It is hard to probe the exact reason underlying the performance of agents in Atari games, but we can provide intuitive explanations. One possible reason is that Montezuma's Revenge and Private Eye contain more key transition states (bottleneck states) between different "rooms", hence the "bottleneck-seeking" exploratory behaviour is beneficial in such games. We have now included the evaluation results of DQN-SF-PF (and other baselines) on two more hard-exploration Atari games, Gravitar and Solaris. Empirical results indicate that DQN-SF-PF outperforms DQN-SR on both tasks. We do not think SPR is a good baseline to compare against, since its motivation does not stem from designing a stronger exploration agent but rather from learning better state representations for efficient RL. Moreover, SPR contains a more complicated neural architecture and representation learning module than DQN-SF-PF, which would lead to an unfair comparison. The main aim of our experimental studies is to demonstrate that SPIE is an easy-to-plug-in exploration strategy that enables the same agent to achieve more efficient exploration, rather than to show that it beats all state-of-the-art models across all benchmarks. - In line 127, by "were the SR matrix to be known...", we mean "assuming the SR matrix is known...". We do not think it is a typo; however, we do apologise for the confusion. - We thank the reviewer for pointing out additional points for discussing the limits and future work of SPIE, which we will include in the discussion chapter in the camera-ready revision upon acceptance. 
We wish to thank the reviewer again for their thoughtful comments, and we hope the above replies address all concerns/questions raised by the reviewer. If so, we hope the reviewer could adjust their score accordingly. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. 1) The authors clarified that continual exploration is just an application, but in the written paper it has been used to motivate the proposed method a couple of times. I do feel there should be experiments on this, as there are claims made in the paper (Introduction and Motivation). 2) The results on the number of states visited show that the proposed method does better. However, it is unclear how many states there are in the environment. The plot should show the number of visited states as a fraction of the total states, which will show how far the algorithm is from optimal performance. 3) The discussion in the general response on the "reciprocal form of the SPIE objective in continuous settings" is important and should be in the paper. Also, I feel there should be an ablation to show the gap in performance. As for discrete state space scenarios, the method was motivated by the drawbacks of using the reciprocal form. 4) Regarding the response on bottleneck states, is it possible to show it with an experiment? Overall, the paper needs more work in terms of writing and motivating the ideas, and experiments to support the points made in the Introduction and Methods sections. --- Reply to Comment 1.1.1: Title: Thank you for engaging in the further discussion and for the additional questions. Comment: We thank the reviewer for reading through our responses and for their additional comments. Below we address the additional questions. 
- As we have stated in the responses to the reviewer and in the general response, continual exploration is a side benefit of the model which we found interesting, but it is not the main focus of the current paper. We apologise for the confusion, and we will make sure to update the paper so that the motivation is more coherent with the experiments in the paper (which we have already done, but cannot show to the reviewer at the current instance since we are not allowed to upload a revised version of the paper during rebuttal this year). Here we hope to emphasise again that the main focus of the paper is to propose SPIE as an intrinsic exploration framework for more efficient exploration, not for continual exploration (which is a surprising side benefit that we wish to pursue in future work). - There are 100, 91, 77, and 400 states in the OF-small, Cluster-simple, Cluster-hard, and OF-large environments, respectively. We set the maximum episode length to 200 across all the environments. We hence observe that SARSA-SRR achieves near-optimal exploration performance across the 4 environments. We will reformat the figures by plotting the number of unique visited states as a fraction of the total number of states in the camera-ready revision upon acceptance. - We will include the discussion of the choice of the reciprocal form of the SPIE objective in continuous settings in the camera-ready revision upon acceptance. We thank the reviewer for suggesting the additional ablation study on verifying the SPIE objective. However, given the large computational costs of hyperparameter tuning for the alternative model (difference in norms between the SF and PF), we are unable to deliver the ablation results at the current instance. We will include this ablation study in the updated version of the paper. - Yes, we can show the directed intrinsic motivation towards the bottleneck states. 
We have videos showing the dynamics of the agent with the associated value functions, which we will upload as part of the supplementary materials in the updated version. In the meantime, we have created an official comment for the AC containing an anonymised link to this video (but we are afraid that we cannot share it here given the regulations this year). Moreover, as suggested by Reviewer dj2L, we will also show the value function / intrinsic reward heatmaps at different checkpoints through learning to justify the dynamic balancing between exploring bottleneck states and less visited states. We thank the reviewer again for engaging in further discussion and for the additional questions. We hope the reviewer will adjust their score accordingly if the above responses have clarified the additional questions raised by the reviewer. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion.
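Earlier in this thread, SPIE's relation to NovelD was summarised as both methods constructing the intrinsic reward from a difference of novelty measures at the next state and the current state. A minimal count-based sketch of that shared form follows; the $1/\sqrt{\text{count}}$ novelty and the `scale` factor are illustrative assumptions, not either paper's exact choice:

```python
import math

def count_novelty(counts, s):
    """Simple count-based novelty: rarely visited states look more novel."""
    return 1.0 / math.sqrt(counts.get(s, 0) + 1)

def difference_of_novelty_bonus(counts, s, s_next, scale=0.5):
    """Intrinsic reward built from the difference of novelty measures at the
    next state and the current state (the shared structure noted above).
    The bonus is clipped at zero so only novelty-increasing transitions pay."""
    return max(count_novelty(counts, s_next) - scale * count_novelty(counts, s), 0.0)
```

A transition into a frequently visited state earns no bonus under this form, while a transition from a well-known state into a rarely visited one is rewarded.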
Summary: This paper introduces a new intrinsic reward (termed SPIE) for encouraging exploration in reinforcement learning. Unlike existing methods that mainly focus on prospective information, SPIE also incorporates retrospective information into the intrinsic reward. SPIE combines the successor representation (SR) and the predecessor representation (PR). The effectiveness of SPIE is validated in experiments on discrete grid worlds, continuous MountainCar, and 4 Atari games. Strengths: - The idea of combining both prospective and retrospective information in exploration is interesting. - This paper is in general clear and well-written. Weaknesses: - Beyond the performance gain, there is a lack of in-depth discussion of the disadvantages of solely using prospective information and the benefits of adding retrospective information. - Line 105: The authors mention that retrospective information incorporates the connectivity of the state space. I am not sure this can be considered an advantage specific to retrospective information. In my understanding, the prospective SR is also able to capture the connectivity between two states. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Apart from the points raised above, I have some questions and comments: - Figure (a): Can the authors comment on why SARSA-SR performs much worse than random walk (SARSA)? Maybe it is limited to such tabular cases. Usually, random exploration is not a strong baseline in high-dimensional domains. - Line 40: What are "some of the problems"? - Line 195: How does it perform if we define $r_\textrm{SR-R}$ in a similar way as in discrete MDPs? - What if we define the intrinsic reward as $r(s,a,s')=\lVert\hat{M}[s,:]\rVert_1 - \hat{M}[s,s']$? The intuition is to encourage transitions that are less reachable from $s$. How does it perform compared to SPIE? - Table 1 is wider than the text width. - Figure 4(c): Please include the standard deviations of the results in the figure. 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations of the proposed method are barely discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewers for their instructive comments. Please find below our replies to the raised concerns/questions. - The retrospective information can be leveraged to identify essential predictors of reward states; hence, exploration using retrospective information allows the agent to traverse the predictors (in our case, the bottleneck states) frequently, regardless of the immediate information gain. In addition to demonstrating empirical performance on benchmarks, we also conducted extensive experimental studies in grid worlds to demonstrate the utility of the SPIE framework. Specifically, we show that SPIE leads to improved exploration efficiency and state coverage in the complete absence of extrinsic reward. We additionally show that SPIE yields cycling exploratory behaviour, i.e., traversing the bottleneck state to reach other clusters upon reaching sufficient coverage of the current cluster, enabling the agent to adapt well to non-stationary environments. We show that the utility of the discrete SPIE intrinsic reward, $r_{\text{SR-R}}$, does not arise solely from its negative nature (and hence optimistic initialisation). We also show that the advantage of SPIE lies in its formulation combining prospective and retrospective information, by demonstrating that $r_{\text{SR-R}}$ with a fixed SR also yields strong exploration efficiency. If there is any additional analysis the reviewer thinks would be useful for demonstrating the utilities of SPIE, we are happy to conduct such analysis and engage in further discussion. - The reviewer is correct that the prospective information (i.e., the SR) also contains connectivity information. 
However, the retrospective information provides the marginal accessibility of the target state ($s'$) from all other states, hence providing a better sense of the local connectivity pattern of the target state, which can be useful for, e.g., distinguishing states in the centre of the environment from bottleneck states. - One possible explanation for the poor performance of SARSA-SR is its equivalence to count-based exploration (Machado et al., 2018). Count-based exploration algorithms perform poorly in larger state spaces with sparse rewards, such as grid worlds. Such agents preferentially explore a local region for an extended time before moving to a novel sub-region, hence leading to poor exploration efficiency. - Existing intrinsic exploration methods exhibit a number of problems. For instance, a notable problem is undirected exploration, whereas SPIE provides explicit intrinsic motivation towards bottleneck states. Additionally, as the agent continues exploring the state space, the intrinsic bonus decreases to 0 asymptotically, hence providing no bonus to facilitate sustained exploration, which leads to slow responses to non-stationarity. Another issue associated with existing intrinsic exploration methods is the non-stationary reward structure imposed by the additive intrinsic bonus. Here we show that SPIE yields a sensible exploration bonus even with a fixed SR (PR), hence resolving the non-stationary intrinsic reward issue whilst maintaining efficient exploration. - We find that defining the SPIE objective in continuous state spaces as the difference in norms of the SF and PF leads to difficult hyperparameter tuning, and we do not have the computational resources to run a comprehensive grid search. The reciprocal form yields robust performance over a range of $\beta$ values (0.03-0.06). We hence choose to instantiate DQN-SF-PF with equation 16. 
- Using $r(s, a, s') = ||\hat{M}[s, :]||_{1} - \hat{M}[s, s']$ yields intrinsic exploration based solely on state uncertainty, but does not provide any targeted exploration motivation. However, a key principle underlying SPIE is dynamically balancing exploration under uncertainty and the implicit motivation towards bottleneck states. We implemented the agent with the suggested intrinsic bonus on both RiverSwim and SixArms. The empirical performance of the resulting agent on the two tasks is $95,961\pm181,216$ and $562,346\pm 1,749,455$, respectively, which is significantly lower than the performance of SARSA-SRR on the two tasks ($2,547,156\pm 479,655$ and $2,199,291\pm1,024,726$, respectively). - We thank the reviewer for pointing out the formatting issue; we have now reduced the width of Table 1. - We thank the reviewer for pointing out the missing standard error in Figure 4c, which we now include in the rebuttal letter. Given some further hyperparameter tuning, the resulting agent with $r_{\text{SF-PF}}$ now yields a significantly higher average completion probability than the agent with $r_{\text{SF}}$ compared to Figure 4c in the current main paper, further justifying the utility of SPIE. - The limitations of SPIE mainly lie in its theoretical underpinning. In the paper we empirically verified a number of benefits brought by SPIE. However, it is hard to analytically derive the asymptotic properties of SPIE. For instance, is there a fixed point to the non-Markovian reward decision process problem in the complete absence of extrinsic reward? Further work is required in this direction. Moreover, our definition of the SPIE objective in the discrete setting coincides with the successor contingency definition [Gallistel et al., 2014], and the SC quantity is shown to contain useful information about the essential predictors of reward states. Our analysis in the current paper does not arise from a causal perspective, which is a promising avenue for future investigation. 
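As a self-contained illustration (not the authors' implementation; the toy SR matrix below is made up), the reviewer-suggested uncertainty-only bonus $r(s, a, s') = ||\hat{M}[s, :]||_{1} - \hat{M}[s, s']$ can be sketched as:

```python
import numpy as np

# Toy estimated successor-representation matrix M_hat for 3 states (made up);
# row M_hat[s] holds the expected discounted future occupancies starting from s.
M_hat = np.array([
    [2.0, 0.5, 0.1],
    [0.4, 2.0, 0.6],
    [0.1, 0.5, 2.0],
])

def uncertainty_bonus(M, s, s_next):
    """Reviewer-suggested bonus ||M[s, :]||_1 - M[s, s']: large when s' is
    rarely reached from s relative to s's other successors."""
    return np.abs(M[s]).sum() - M[s, s_next]

rare = uncertainty_bonus(M_hat, 0, 2)    # rarely reached successor -> 2.5
common = uncertainty_bonus(M_hat, 0, 0)  # frequently reached successor -> 0.6
```

As the bonus depends only on how surprising the transition into s' is from s, it carries no directed pull towards any particular (e.g., bottleneck) state, which is the distinction the reply above draws.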
We wish to thank the reviewer again for their thoughtful comments, and we hope the above replies address all concerns/questions raised by the reviewer. If so, we hope the reviewer could adjust their score accordingly. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion. --- Rebuttal Comment 1.1: Title: Feedback Comment: Thank the authors for the response. I still have some questions and would appreciate the authors' feedback: - Can the authors visualize the intrinsic reward heatmap for environments in Fig1? For experiments in Fig1(b), as the SR is learned online, the authors can plot intrinsic reward heatmaps at several checkpoints. For experiments in Fig1(c), a single plot should suffice as SR is fixed. - In Line 128, the authors mention that exploring with $r_\text{SR}$ would regress back to random exploration if the SR matrix were known a priori. Then why does SARSA-SR perform worse than random exploration in Fig1(c)? --- Reply to Comment 1.1.1: Title: Thank you for engaging in the further discussion and for the additional questions. Comment: We thank the reviewer for reading through our responses and for their additional comments. Below we address the additional questions. - We have visualised the intrinsic reward as heatmaps for different models. We are afraid that we are unable to upload the new visualisation at the current stage of the rebuttal, hence we will verbally describe the intrinsic reward heatmaps. We describe the dynamics of the intrinsic reward heatmaps for the 20x20 Cluster-simple environment. For the online learning case, at the early phase of exploration when the agent initially explores the environment, the SR matrix is updated over the visited states, hence the intrinsic rewards become negative over the visited states and remain zero for unvisited states, facilitating exploration towards unvisited states. 
As the learning continues, i.e., upon sufficient coverage of the environment and learning of the SR matrix, the resulting intrinsic rewards exhibit dynamic balancing between moving towards the bottleneck states and exploring the remaining states in the current cluster. The dynamic balancing depends on the recent trajectory taken by the agent and the amount of time the agent has spent in the current cluster since the last visitation. As the agent spends more time in the current cluster, the intrinsic rewards towards the bottleneck states become greater. For the fixed-SR case, the intrinsic rewards are stationary over the training process. We observe that the intrinsic rewards in the direction of the bottleneck state are higher for states closer to the bottleneck state, and are more evenly spread out over the action space for states towards the centre of each cluster. The intrinsic reward heatmaps indicate that SPIE induces a generic intrinsic motivation towards the bottleneck states, balanced by the motivation towards states of high uncertainty (less visited states), hence yielding more efficient exploration. We apologise for not being able to show the plots right now, but we will include the heatmaps in the camera-ready revision upon acceptance. - When the SR matrix is known and fixed a priori, SARSA-SR would regress back to random exploration only upon reaching a converged value function. Throughout our analysis, we have assumed the Q-values are initialised to 0 for all state-action pairs. Therefore, the first action (regardless of the action taken) yields a positive reward (1/20 when $\gamma = 0.95$), leading to a positive Q-value for the corresponding state-action pair. 
This process goes on and the agent would follow the same path repeatedly by acting greedily with respect to the Q-values, with the only exceptions being states the agent has visited more than once over a single trajectory (due to randomness of the Q-value initialisation or epsilon-greedy). Asymptotically, the action value function converges and the behaviour of the SARSA-SR agent is identical to that of a purely random exploration agent. We thank the reviewer again for engaging in further discussion and for the additional questions. We hope the reviewer will adjust their score accordingly if the above responses have clarified the additional questions raised by the reviewer. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion.
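The argument above can be checked with a minimal simulation: with zero-initialised Q-values and a constant positive (intrinsic) reward, a purely greedy agent locks onto its first path. The 5-state chain, step size, and reward value below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# 5-state deterministic chain; actions: 0 = left, 1 = right.
# Illustrative assumption: the fixed-SR intrinsic reward is a constant 1/20.
n_states, gamma, alpha, r_const = 5, 0.95, 0.5, 1 / 20
Q = np.zeros((n_states, 2))  # zero initialisation; argmax breaks ties towards action 0

def step(s, a):
    return max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)

def run_episode(T=10):
    """Act purely greedily; update Q towards the constant positive reward."""
    s, path = 2, []
    for _ in range(T):
        a = int(np.argmax(Q[s]))
        s2 = step(s, a)
        Q[s, a] += alpha * (r_const + gamma * Q[s2].max() - Q[s, a])
        path.append((s, a))
        s = s2
    return path

first, second = run_episode(), run_episode()  # the greedy agent repeats its first path
```

Because the first action taken at each state receives a positive update while untried actions stay at 0, the greedy policy reproduces the same trajectory episode after episode, matching the behaviour described in the reply.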
Summary: This paper proposes Successor-Predecessor Intrinsic Exploration (SPIE), an exploration framework that uses the successor representation (SR) and predecessor representation (PR) to formulate intrinsic motivation for exploration. Amongst the variations, two specific designs of intrinsic rewards are highlighted: SR-R and SF-PF. While the former directly uses the successor and predecessor representations to calculate the intrinsic reward in the tabular case, the latter utilizes successor and predecessor features for a smooth extension to continuous state spaces. The experiments show that the highlighted reward designs display promising results. Strengths: The paper proposes a unique and novel approach to the exploration problem in RL. Furthermore, leveraging the successor representation opens broad future directions of application, as the successor representation is taken to be helpful from the viewpoint of generalizability in RL. Weaknesses: The presentation needs some finishing touches, and variables and logic are left unexplained in places. Also, the experiments seem to be done at relatively small scales, whereas the algorithm is not restricted to small scales. For example, training curves of more Atari tasks, trained for a longer period (since hard-exploration tasks can often take more experience), may show a clearer pattern of learning. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Some questions. - For SR-R, since $\mathbf{M}[s,s'] \le \mathbf{M}[:,s']$, the intrinsic reward is likely to be negative, which can possibly discourage the agent from proceeding to a longer episode length when a terminal state exists and can be discovered. Could this be a problem in applying SR-R to such tasks? - I did not fully follow how the learned $\phi$ preserves the property $r = \phi \cdot \mathbf{w}$. Also, what are the conditions for $\phi$ and $\mu$ to be shared? 
(which I suspect is the case based on Figure 2, since $\mu$ is not clearly defined or described in the paper) - The score for RND seems far off from what I would expect based on previous works, even accounting for the number of training frames. Could the authors provide more details on the RND implementation? Also, the score is missing for Freeway, whereas in the text, it reports 4 tasks (line 297). Minor remarks. - The sentence before Eqn. 10 and the equation itself have a period at the end, which should be commas. Similar punctuation errors can be found in other equations. - The table captions could be more specific. - Tables would look nicer if resized to fit the text width. - Several notations are not clearly explained, e.g., $\hat q^\pi$, $\mu$ Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations are stated in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their instructive comments. Please find below our replies to the raised concerns/questions. - Thanks for pointing out the missing explanations of variables and logic in the current manuscript. In responding to other reviewers' comments, we have modified our manuscript for clearer demonstration and explanation. For instance, we now make clear that the main focus of the paper is on proposing a stronger intrinsic exploration algorithm rather than continual exploration in non-stationary environments. We now give a more elaborate explanation of why SARSA-SR behaves poorly in pure exploration. Moreover, we provide more details in the description of the model and associated implementations. We hope these modifications have made the paper clearer and more rigorous. If there is any additional confusion remaining, could the reviewer provide specific pointers to places where we have such omissions? - Through the experimental studies, instead of only showing that SPIE yields higher performance on different tasks, we aim to maximally demonstrate the exploratory behaviours of SPIE, hence justifying our extensive evaluations on (relatively) small-scale tasks such as grid worlds. We show the final performance (at 100M training steps) instead of the training curves since we do not have the resources to run all selected baseline algorithms on all presented Atari games; we hence report the asymptotic performance from the existing literature (e.g., from Machado et al., 2018), which prohibits us from showing the full training curves. However, we do wish to note that it is common in the literature to only report the final performance instead of the full training curve in practice, and we therefore do not think this negatively affects faithful validation of the model. - As the reviewer correctly points out, the $r_{\text{SR-R}}$ intrinsic reward is negative. 
However, in tasks with terminal states (usually denoting reward states), we believe it is usually preferable to reach the target state as quickly as possible. The SARSA-SRR agent yields stronger exploration efficiency for faster detection of the goal states, which could then facilitate further exploration and learning for finding shortest paths towards the goal states. - The successor feature generalises the SR to continuous state spaces, and requires a notion of state representation for constructing a similar cumulative future occupancy measure. Hence the SF is defined as the expected discounted cumulative sum of future state representations. The motivation for leveraging a state representation from which the reward vector can be recovered as a linear transformation is that we wish the SF to yield a similar decomposition of the value function into the dot product between the SR/SF and the reward vector. $\mu$ and $\phi$ are the same vector, as we wish to construct the SF and the PF with the same state representation. We apologise for the missing notation. We now define the PF using the same feature vector as the SF, $\vec\xi^{\pi}(s) = \vec\mu(s_{t+1}) + \gamma\mathbb{E}\left[\vec\xi^{\pi}(s_{t})|s_{t+1}=s, a_{t}=a\right]$, in the updated manuscript. - The evaluation of the RND agent on Freeway was missing since we did not have the computational resources to finish all evaluations. We have provided the RND performance on Freeway in the updated manuscript (28.2 points, $\sigma^2 = 0.2$). The main reason behind the difference between our evaluation of RND and existing evaluations reported in the literature is that the reward scheme is set differently in our implementation compared to the original implementation from Burda et al., 2018. 
Specifically, we report the canonical cumulative episodic reward, whereas the original implementation bins the rewards to {+1, 0, -1} by sign, which results in larger episodic returns (see, e.g., https://github.com/openai/random-network-distillation/blob/f75c0f1efa473d5109d487062fd8ed49ddce6634/atari_wrappers.py#L220). We have now included the evaluation results of DQN-SF-PF (and other baselines) on two more hard-exploration Atari games, Gravitar and Solaris. Empirical results indicate that DQN-SF-PF outperforms DQN-SR on both tasks. - We thank the reviewer for pointing out the punctuation errors, which we have corrected in the updated manuscript and will be included in the camera-ready revision upon acceptance. - We have expanded the descriptions in the table captions for clarity. We have also reduced the width of the table such that it now fits the text width. These modifications will be included in the camera-ready revision upon acceptance. Thanks for the suggestions. - We thank the reviewer for pointing out the unclear notation in the paper, and we have now included the associated explanations in the updated manuscript. We wish to thank the reviewer again for their thoughtful comments, and we hope the above replies address all concerns/questions raised by the reviewer. If so, we hope the reviewer could adjust their score accordingly. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion. --- Rebuttal Comment 1.1: Title: Any additional questions? Comment: We thank the reviewer again for their review. As the end of the discussion period is approaching, we kindly ask the reviewer to engage in further discussion if there is any additional question/concern remaining, and we hope the reviewer could adjust their score accordingly if all raised concerns are addressed. We look forward to your further responses!
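As an aside on the predecessor-feature recursion defined in the reply above ($\vec\xi^{\pi}(s) = \vec\mu(s_{t+1}) + \gamma\mathbb{E}[\vec\xi^{\pi}(s_{t})|s_{t+1}=s]$), a minimal tabular TD(0) sketch could look as follows; the environment size, feature map $\mu$, and step size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

n_states, d, gamma, alpha = 4, 3, 0.95, 0.1
rng = np.random.default_rng(0)
mu = rng.normal(size=(n_states, d))  # per-state feature vectors (illustrative)
xi = np.zeros((n_states, d))         # predecessor features, one row per state

def pf_td_update(s_t, s_next):
    """TD(0) step towards mu(s_{t+1}) + gamma * xi(s_t): the PF of a state
    accumulates the discounted features of the states that lead into it."""
    target = mu[s_next] + gamma * xi[s_t]
    xi[s_next] += alpha * (target - xi[s_next])

# Apply the update along a short observed trajectory 0 -> 1 -> 2 -> 3 -> 2 -> 1.
for s_t, s_next in zip([0, 1, 2, 3, 2], [1, 2, 3, 2, 1]):
    pf_td_update(s_t, s_next)
```

Note the update targets the *successor* state's row, which is what makes the quantity retrospective: a state's PF summarises where trajectories arriving at it came from.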
Summary: The paper proposes to use both the successor feature of s and the predecessor feature of s' as an intrinsic reward to improve RL exploration. Specifically, in the tabular case, the proposed intrinsic reward encourages the agent to visit states that are infrequently visited from other states, using the successor representation, which measures visitation frequency of s' from s and from other states. The method then replaces the visitation frequency from all states to s' with the predecessor representation of s'. Finally, to extend to continuous state spaces, the method approximates successor and predecessor representations with successor and predecessor features. Strengths: 1. Intrinsic reward is an important way to improve RL exploration and thus sample efficiency, and the proposed method is well motivated (to visit bottleneck states more frequently) and, to the best of my knowledge, is novel. 2. The experiments are well designed, ranging from tabular to gridworld to Atari, to show the effectiveness of each variation of the method: (1) using the successor representation alone, (2) replacing the row sum of the successor representation with the predecessor representation, and (3) replacing representations with features. 3. The background on successor representations is written clearly, preparing readers well for the proposed method. Weaknesses: 1. The method section is well structured, but it can be improved in the following places: * It's clear that the successor representation means the visitation frequency from s to s', and using Eq 8 follows the proposed motivation of encouraging visitation to s' that is rare from states other than s. Then, for continuous state spaces, the authors replace the successor representation with the successor feature. But why would the successor feature represent the visitation frequency? After the replacement, does the intrinsic reward still follow the same motivation? I feel this replacement is not well explained/motivated. 
* For the successor/predecessor feature, different feature columns may have different value ranges (scales). How do you account for that to avoid certain feature columns dominating the column sum? * Any possible explanation for the form of the reciprocal of norms in Eq. 16? What other forms did you try that underperformed? * Why is reconstruction needed? It's unclear how much the reconstruction contributes to representation learning and exploration. * When the authors mention the "row/column sum" of some symbol, the shape of the referred symbol (including which is row and which is column) is usually not introduced and is thus confusing, including but not limited to L192. * Figure 1: why does SARSA-SR perform so badly? * (minor) unintroduced symbols and typos: L165 "diffusion" (you can probably remove "diffusion"), Eq 14 $\mu$, L181 N's shape missing "|", Fig 2 the lines are mixed together and thus a bit hard to read. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: Beyond the questions above: * What do the agent trajectories look like? Do they follow the motivation of visiting the bottleneck state? * Eq 8 reminds me of the motivation in the "diversity is all you need" paper, where different trajectories should visit different parts of the state space. Are the two papers related? If so, how? Adding this discussion would be helpful. * Is the reconstruction necessary? An ablation study would be helpful. * What are the confidence intervals like in Fig 4 (c)? * Why is RND missing in the Table 2 Freeway row? Any explanation for the worse performance of the proposed method on Private Eye? Confidence: 4: You are confident in your assessment, but it is not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: The only limitation is the lack of motivation for replacing the successor/predecessor representations with their corresponding features for continuous state spaces. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their instructive comments. Please find below our replies to the raised concerns/questions. - The generalisation to continuous state spaces means that we can no longer enumerate all states for the computation and learning of the SR. Hence we adopt a natural extension of the SR to the continuous setting, the successor features. The SF also exhibits a similar predictive encoding to the SR in discrete settings, hence it can also represent the prospective transition information in the continuous setting (and similarly the retrospective information for the predecessor features). Empirically, we find the generalisation to SF works well with our models and previous models (e.g., Machado et al., 2020) using either linear or non-linear function approximation, hence justifying our choice. - In our implementation of SF-PF-DQN, both the SF and the PF are based on the intermediate features ($\phi_{t}$ in Figure 2), which are L2-normalised to avoid the scaling issue. This is done with F.normalize(phi, p=2, dim=-1) (see line 105 in atari/models/sr_pr_dqn.py in the provided code). - The initial proposal was to use the difference of norms of the SF and PF, as is done for the SR in discrete settings. However, we found it hard to set the scaling factor for the SPIE intrinsic reward in this case. We hence adopt the reciprocal form presented in Machado et al., 2018 (last equation on page 5), which we find to yield robust performance over a range of $\beta_{spie}$ values (0.03-0.06). We note that the fundamental principle of SPIE is the combination of prospective and retrospective information for exploration, admitting different instantiations (including those shown in equations 8 and 16). 
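For concreteness, the L2-normalisation step mentioned above (mirroring `F.normalize(phi, p=2, dim=-1)`) can be reproduced in NumPy; the feature batch below is made up for illustration.

```python
import numpy as np

def l2_normalize(phi, eps=1e-12):
    """Row-wise L2 normalisation, matching F.normalize(phi, p=2, dim=-1):
    each feature vector lands on the unit sphere, so no single feature
    dimension can dominate the norms used downstream."""
    norm = np.linalg.norm(phi, axis=-1, keepdims=True)
    return phi / np.maximum(norm, eps)

phi = np.array([[3.0, 4.0], [0.0, 5.0]])  # made-up feature batch
out = l2_normalize(phi)                    # rows [0.6, 0.8] and [0.0, 1.0]
```

Because every row ends up with unit norm, the scale of individual feature dimensions cannot skew the SF/PF norms entering the intrinsic reward, which is the concern the reply addresses.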
- In our implementation of DQN-SF-PF, following the relevant literature [Oh et al., 2015; Machado et al., 2020], we include an additional sub-module in the neural architecture for predicting action-dependent future observations, which is trained by minimising the predictive reconstruction error. The purpose of including this sub-module is purely to learn better latent representations underlying the visual observations. We validate the utility of this predictive reconstruction auxiliary supervision by performing an ablation study with an alternative version of DQN-SF-PF that removes the visual reconstruction sub-module. Testing on Montezuma's Revenge, the resulting model achieves $551.5$ points (averaged over $5$ random seeds, $\sigma^2=618.4$). We observe a significant decrease from standard DQN-SF-PF, indicating the importance of stronger representation learning given the predictive reconstruction auxiliary task. Moreover, given the reported performance of $398.5$ points ($\sigma^2=230.1$) for DQN-SF in the absence of the predictive reconstruction auxiliary task from Machado et al., 2020, we observe that the SPIE objective still yields improved performance over exploration with the SF alone, justifying the utility of SPIE irrespective of the specific neural architecture we choose. - We will include the shapes of the referred row/column sums and corresponding features, and keep the notation consistent in the updated manuscript. Thanks for pointing it out. - As mentioned in the analysis in Section 3, one possible explanation for the poor performance of SARSA-SR is its equivalence to count-based exploration (Machado et al., 2018). Count-based exploration algorithms perform poorly in larger state spaces such as grid worlds (in contrast to the 6 and 7 states of RiverSwim and SixArms), such that the agent preferentially explores a local region for an extended time before moving to a novel sub-region, hence leading to poor exploration efficiency. 
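To illustrate the claimed equivalence between SR-based bonuses and count-based exploration: under TD learning, the $\ell_1$-norm of a state's SR row grows with its visit count, so a reciprocal-norm bonus decays for frequently visited states. The sketch below is illustrative (toy environment, made-up visit pattern), not the paper's implementation.

```python
import numpy as np

n_states, gamma, alpha = 3, 0.95, 0.1
M = np.zeros((n_states, n_states))  # TD-learned successor representation

def sr_td_update(s, s_next):
    """Standard SR TD(0) update: M[s] += alpha * (e_s + gamma * M[s'] - M[s])."""
    M[s] += alpha * (np.eye(n_states)[s] + gamma * M[s_next] - M[s])

# Visit state 0 often and state 2 only once: state 2's SR row norm stays small,
# so a reciprocal-norm bonus ~ 1 / ||M[s]||_1 remains large for state 2.
for _ in range(50):
    sr_td_update(0, 1)
sr_td_update(2, 1)

bonus = 1.0 / np.maximum(np.abs(M).sum(axis=1), 1e-8)
```

The rarely visited state retains a much larger bonus than the frequently visited one, mimicking a count-based bonus and hence inheriting its tendency to dwell in local regions of large state spaces.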
- We thank the reviewer for pointing out the typos, which we have fixed in the updated manuscript. - We have a sample video containing a sampled trajectory of SARSA-SRR in the grid world. As is also discussed in the main paper, the SARSA-SRR agent tends to balance the motivations of visiting bottleneck states and exploring uncertain states; hence the typical behaviour is that the agent spends some time exploring the current cluster and, upon sufficient coverage, moves directly to the bottleneck state to explore other clusters. In the two-cluster environment, we indeed observe such behaviour, where the agent spends the majority of its time exploring one of the clusters and transitions into the bottleneck state upon reaching sufficient coverage. - The relationship between SPIE and DIAYN lies in their connections to the empowerment objective, which facilitates diverse exploration trajectories. We thank the reviewer for pointing out the connection, and we have now included the associated discussion in the related work section. - Yes, the reconstruction is necessary empirically; see the fourth point above. - We have now included the new Figure 4c with standard errors in the rebuttal letter. - We have now included the Freeway evaluation (28.2 points with $\sigma^2=0.2$, see Table 1 in the rebuttal letter). One possible reason for the relatively poor performance on Private Eye across all agents apart from vanilla DQN is that the background of the observation is frequently changing (unlike in other games such as Venture and Montezuma's Revenge), hence imposing large intrinsic exploration signals much more often compared to other tasks, such that the agent spends the majority of its time exploring instead of focusing on maximising the reward. This is coherent with the observation that DQN without any intrinsic exploration performs best on Private Eye whereas the agents with intrinsic exploration perform poorly. 
We wish to thank the reviewer again for their thoughtful comments, and we hope the above replies address all concerns/questions raised by the reviewer. If so, we hope the reviewer could adjust their score accordingly. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response, especially the ablation of using reconstruction or not! I increase my score to 7 and encourage the authors to add the extra results in the next version of the paper. --- Reply to Comment 1.1.1: Title: Thank you for engaging in further discussion and for raising the score. Comment: We are glad to see that our responses addressed the concerns raised by the reviewer and we thank the reviewer for increasing their score. We will make sure to include all the new analysis in the camera-ready revision upon acceptance.
Rebuttal 1: Rebuttal: We thank all reviewers for their instructive comments towards making the paper clearer and more rigorous. There are a number of questions/concerns shared by multiple reviewers, to which we provide summarised responses below. We have also attached a rebuttal letter containing additional experimental results requested by the reviewers. - Main motivation of the paper and the experimental studies: The main focus of the paper is to propose the SPIE intrinsic exploration framework, which combines prospective and retrospective information for more efficient and targeted exploration. Whilst we show, as an additional side benefit, that SPIE yields ethologically plausible "cycling" exploratory behaviour, this is not the main focus nor the motivation of SPIE. We provide preliminary experimental studies illustrating the existence of such "cycling" behaviour and discuss its potential application to continual exploration for dealing with non-stationary environments. However, it is by no means our intention to claim or promote SPIE as a state-of-the-art method for dealing with non-stationary continual RL problems. Moreover, we wish to clarify that the aim of our empirical evaluations is to demonstrate the improved exploration (and hence learning) efficiency given the dynamic balancing of exploring uncertain states and transitioning into bottleneck states admitted by SPIE. Our experiments comprehensively validate the utility of the SPIE framework in tasks with discrete state spaces using tabular value estimates, and in tasks with continuous state spaces using linear and non-linear (deep RL) function approximation. All presented experiments demonstrate that SPIE can be efficiently implemented, with minimal modification to the original agent, whilst yielding significantly better performance. The goal of the paper is to show that retrospective information could and should be utilised for stronger exploration. 
The conceptual idea can be instantiated with any existing RL agent, and produce state-of-the-art results if needed. - The choice of the reciprocal form of the SPIE objective in continuous settings: We generalise SPIE to tasks with continuous state spaces by replacing the SR/PR with the SF/PF. The initial proposal was to use the difference of norms of the SF and PF, as is done for the SR in discrete settings. However, we found it hard to set the scaling factor for the SPIE intrinsic reward in this case. We hence adopt the reciprocal form presented in Machado et al., 2018 (last equation on page 5), which we find to yield robust performance over a range of $\beta_{spie}$ values (0.03-0.06). We note that the fundamental principle of SPIE is the combination of prospective and retrospective information for exploration, admitting different instantiations (including those shown in equations 8 and 16 in the main paper). - Poor performance of SARSA-SR on pure exploration in grid worlds: One possible explanation for the poor performance of SARSA-SR is its equivalence to count-based exploration (Machado et al., 2020). Count-based exploration algorithms perform poorly in larger state spaces such as grid worlds (in contrast to the 6 and 7 states of RiverSwim and SixArms), such that the agent preferentially explores a local region for an extended time before moving to a novel sub-region, hence leading to poor exploration efficiency. - More evaluations in deep RL experiments: We have now provided evaluation results of DQN-SF-PF and baseline agents on all 6 hard-exploration Atari games (Bellemare et al., 2013), shown in Table 2 in the attached rebuttal letter. DQN-SF-PF outperforms the baseline agents on 4 of the games and yields comparable performance on the remaining 2. - Missing standard error in the linear RL experiment: We thank the reviewers for pointing out the missing standard errors in the linear RL experiment (Figure 4c). 
We have now provided the full evaluations with standard errors in Figure 1a in the rebuttal letter. We note that the reported results for $r_{\text{SF-PF}}$ in this case are from a different set of hyperparameters. We performed a comprehensive hyperparameter search, and the resulting agent with $r_{\text{SF-PF}}$ now yields a significantly higher average completion probability than the agent with $r_{\text{SF}}$ compared to Figure 4c in the current main paper, further justifying the utility of SPIE. - Typos and formatting issues: We thank the reviewers for pointing out various typos, and specifically the width issue with Table 1 in the current main paper. We have now corrected all raised typos and fixed the formatting issues. We wish to thank all the reviewers again for their thoughtful comments, and we hope the above replies address all concerns/questions raised by the reviewers. If so, we hope the reviewers could adjust their scores accordingly. If there is any remaining confusion/concern, please let us know as soon as possible and we are happy to engage in further discussion. Pdf: /pdf/a0b1522c402b52bcf00600d2362abcc8bc2bd7af.pdf
NeurIPS_2023_submissions_huggingface
2023
Disentangling Cognitive Diagnosis with Limited Exercise Labels
Accept (poster)
Summary: The disentanglement-based cognitive diagnosis (DisenCD) model proposed in this paper addresses the cognitive diagnosis challenge of limited exercise labels by using students' historical exercise records to model their proficiency, exercise difficulty, and exercise label distribution. The model introduces novel modules to disentangle factors related to knowledge concepts and align them with the limited labels available. Experiments on three real datasets demonstrate the effectiveness of the model. Overall, this paper presents a promising approach to cognitive diagnosis in intelligent education. However, the description of the two modules is too vague, and some experimental details are not provided in the article. Strengths: Practical Approach: DisenCD provides an effective approach for cognitive diagnosis under limited exercise labels, representing one of the few attempts to focus on this problem. Extensive experiments on three real-world datasets demonstrate the effectiveness of the model. Group-based disentanglement and limited-labeled alignment modules: The DisenCD model introduces two novel modules, group-based disentanglement and limited-labeled alignment, which disentangle factors relevant to knowledge concepts and align them with the available limited labels. These modules help to overcome the problem of limited exercise labels and improve the accuracy of cognitive diagnosis. Interpretable: The DisenCD model achieves the best interpretability in all scenarios, especially in the 10% and 20% Q-matrix (label-scarce exercises) scenarios, where it significantly outperforms interpretable baselines in terms of Degree-of-Agreement (DOA). This demonstrates the superior interpretability of the model. Weaknesses: Limited Generalizability: Although the effectiveness of the DisenCD model is demonstrated on three real-world datasets, the model's generalizability to other datasets or samples still needs to be established. 
Further studies are, therefore, needed to establish the generalizability of DisenCD to more extensive and diverse datasets. Lack of Explicit Discussion on Limitations: Although the article describes the proposed model, its training procedures, and experimental results, the limitations of the DisenCD model should be explicitly discussed. This may limit the reader's ability to interpret the results and implications of the model. A more explicit discussion of the model's limitations would have made the study more comprehensive. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you please provide more details on the pre-filling algorithm used for the missing Q-matrix in the experiments? 2. Could you explain further the two novel modules, group-based disentanglement and limited-labeled alignment, that DisenCD uses to disentangle relevant knowledge concepts and align them with the available limited labels? 3. Can you elaborate on the hyperparameter settings during the experiment? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No, the authors did not provide any discussions for limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
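The Degree-of-Agreement (DOA) metric cited in this review's strengths checks whether students diagnosed as more proficient on a concept also tend to answer that concept's exercises correctly more often. A minimal numpy sketch of one common pairwise formulation follows; the function name, input layout, and the exact handling of unattempted exercises are illustrative assumptions, not details taken from this paper:

```python
import numpy as np

def doa_for_concept(prof, scores, answered):
    """Degree of Agreement for a single knowledge concept.

    prof:     (n_students,) diagnosed proficiency on this concept
    scores:   (n_students, n_exercises) 1 = correct, 0 = wrong
    answered: (n_students, n_exercises) 1 = the student attempted the exercise
    All exercises are assumed to cover this concept.
    """
    n = len(prof)
    agree, total = 0, 0
    for a in range(n):
        for b in range(n):
            if prof[a] <= prof[b]:
                continue  # only ordered pairs: a diagnosed stronger than b
            both = (answered[a] == 1) & (answered[b] == 1)
            diff = both & (scores[a] != scores[b])  # pairs with a strict outcome gap
            total += diff.sum()
            agree += (diff & (scores[a] > scores[b])).sum()
    return agree / total if total else 0.0
```

A DOA of 1.0 means every diagnosed proficiency ordering is consistent with the observed response ordering; values near 0.5 indicate the diagnosis carries little ranking information.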
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments, efforts, and time. We respond to each of your questions and concerns one-by-one as follows: ### Weaknesses **W1-(1): Limited Generalizability: Although the effectiveness of the DisenCD model is demonstrated on three real-world datasets, the model's generalizability to other datasets or samples still needs to be established. Further studies are, therefore, needed to establish the generalizability of DisenCD to more extensive and diverse datasets.** **A1-(1)**: Thank you for your suggestion. We focus on proposing a cognitive diagnosis model based on a tree structure of knowledge concept relationships. Therefore, we have adopted the most popular datasets in this field, which include the tree structure of knowledge concept relationships. In fact, these three datasets are the most commonly used and representative datasets in cognitive diagnosis, to the best of our knowledge. In the future, we would be happy to validate the generalizability of DisenCD on more datasets if we come across them. **W1-(2): Lack of Explicit Discussion on Limitations: Although the article describes the proposed model, its training procedures, and experimental results, the limitations of the DisenCD model should be explicitly discussed. This may limit the reader's ability to interpret the results and implications of the model. A more explicit discussion of the model's limitations would have made the study more comprehensive.** **A1-(2)**: Thank you. Due to the page limitation, we placed the limitations of DisenCD and an outlook on future work in Section B of the supplementary material. We are sorry for not indicating the location of the limitation description in the manuscript. We will add it accordingly. ### Questions **Q1: Can you please provide more details on the pre-filling algorithm used for the missing Q-matrix in the experiments?** **A1**: Thank you.
Due to the page limitation, we placed this part in Section C.3 of the supplementary material, in which we provide a detailed pseudo-code algorithm description on filling in missing knowledge concepts. Besides, we also compare DisenCD with the pre-filling algorithm in Section A.1 of the supplementary material. **Q2: Could you explain further the two novel modules, group-based disentanglement and limited-labeled alignment, that DisenCD uses to disentangle relevant knowledge concepts and align them with the available limited labels?** **A2**: Group-based disentanglement module: The fundamental assumption of disentangled representation learning is that each latent variable is independent. However, in cognitive diagnosis, each latent variable corresponds to a knowledge concept, and there are relationships between knowledge concepts. For example, certain knowledge concepts are assessed together, and one must master certain knowledge concepts before moving on to the next one. Therefore, to apply disentangled representation learning in cognitive diagnosis, we attempt to find independent relationships between knowledge concepts and group them accordingly. Knowledge concepts within a group have high correlation, while knowledge concepts between groups have low correlation, thus constraining the independence of knowledge concepts between groups. To achieve this, we incorporate knowledge concept tree structure information, where we consider higher-level knowledge concepts to have coarser granularity and higher independence among them compared to lower-level knowledge concepts. Alignment module: In the few-labeled exercise scenario, we assume that there are a large number of unlabeled exercises and a small number of labeled exercises. For the labeled exercises, we directly align dimensions with knowledge concepts in the exercise relevance representation using the supervised information from the labeled exercises.
As for the unlabeled exercises, considering the sparse nature of knowledge concept annotations in the exercises, we constrain the sparsity of the unlabeled relevance. Here, we divide the exercise relevance representation into three parts: 1) Those most likely to correspond to true knowledge concepts; 2) Those serving as candidate knowledge concepts; and 3) Those with a very low probability of being knowledge concepts. We use a margin loss to encourage the first part to be as close to 1 as possible, allowing the model to adaptively infer missing knowledge concepts. **Q3: Can you elaborate on the hyperparameter settings during the experiment?** **A3**: Thank you. Due to the page limitation, we placed the hyperparameter settings in Section C.5 of the supplementary materials in our original submission. We will try our best to reorganize the layout to add some important information regarding the settings into the main text. Thank you again for your suggestion. --- Rebuttal Comment 1.1: Comment: I have read the rebuttal. --- Reply to Comment 1.1.1: Title: We are willing to discuss at any time. Comment: We really appreciate your constructive review and your precious time. If you have any further questions or suggestions, we are very happy to discuss with you.
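The alignment for unlabeled exercises described in A2 above (push the dimensions most likely to be true concepts toward 1 with a margin loss, keep the unlikely ones sparse) can be sketched as follows. The rank-based three-way split, the hinge form of the margin loss, and all names here are illustrative assumptions from a reading of the rebuttal, not the paper's exact formulation:

```python
import numpy as np

def alignment_losses(relevance, n_likely, n_candidate, margin=0.1):
    """Illustrative alignment terms for ONE unlabeled exercise.

    relevance:   (n_concepts,) predicted relevance in [0, 1]
    n_likely:    dims most likely to be true knowledge concepts
    n_candidate: dims kept as free candidate concepts (no penalty)
    The remaining dims are treated as unlikely concepts and penalized
    toward 0 with an L2 term.
    """
    order = np.argsort(relevance)[::-1]              # rank dims by relevance
    likely = order[:n_likely]
    unlikely = order[n_likely + n_candidate:]
    # margin loss: push likely concepts' relevance above 1 - margin
    margin_loss = np.maximum(0.0, (1.0 - margin) - relevance[likely]).sum()
    # sparsity (L2) on the dims unlikely to be knowledge concepts
    l2_loss = (relevance[unlikely] ** 2).sum()
    return margin_loss, l2_loss
```

The middle "candidate" band receives no penalty, which is one way the model could be left free to adaptively infer missing concepts, as the rebuttal describes.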
Summary: This paper presents an algorithm for cognitive diagnosis (CD) in limited data scenarios. CD aims at labeling questions with knowledge concepts. As the CD process requires expert tagging, labelling questions with knowledge concepts is time intensive. This paper addresses the labelling task through three factors: student proficiency, exercise difficulty, and exercise relevance. Strengths: 1. Knowledge concept (KC) labeling in the limited data scenario 2. Disentangled Representation Learning for interpretation 3. Addresses correlated and independent KCs via a tree-like hierarchy Weaknesses: 1. The authors utilized the student interaction distribution to derive exercise relevance, and knowledge concepts from exercise relevance. One limitation of utilizing student interactions is the cold-start problem for newly introduced exercises or a small number of student responses. A more robust way to model exercise relations is to consider the question text and student responses. See Pandey et al. [1] for reference. 2. The assumptions of inter-group and correlated KCs may be too strong. References. 1. Pandey S, Srivastava J. RKT: relation-aware self-attention for knowledge tracing. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management 2020 Oct 19 (pp. 1205-1214). Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Did the author validate (a subset of) the discovered KCs with domain experts? Did the author evaluate how their model performs with small student responses to questions in V2? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes, addressed in the supplementary material.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments, efforts, and time. We respond to each of your questions and concerns one-by-one as follows: ### Weaknesses **W1: The authors utilized student interaction distribution to derive exercise relevance and knowledge concepts from exercise relevance. One limitation of utilizing student interaction is the cold-start problem of newly introduced exercises or a small number of student responses. A more robust way to model exercise relation is by considering the question text and student responses.** **A1**: Thanks for your suggestions. Our algorithm focuses on inferring the exercise relevance from student responses when there is a lack of textual information. Therefore, it does rely on the quantity of interactions associated with the exercise. In our future work, we will consider exploring the exercise relevance from both response records and additional exercise information (e.g., text, graph, and video). **W2: The assumptions of inter-group and correlated KCs may be too strong.** **A2**: Thanks for your question. We believe that the independence among high-level concepts (coarse-grained) is higher than low-level concepts (fine-grained). This viewpoint is inspired by practical applications, and the analysis is summarized as follows. To better clarify the argument, we provide the partial knowledge concept tree of NIPS2020EC dataset in Figure 1 in our rebuttal global pdf file. Firstly, the course chapter hierarchy is a typical example of concepts with a tree structure. Intuitively, the associations among concepts at the high level of the hierarchy are weaker compared to the associations between concepts at the low level. For example, if we adopt the 2nd level concepts (i.e., algebra, data and statistics, ...) to group the last level concepts, the independence among groups is higher than the groups determined by the 3rd level concepts (i.e., inequalities, formula, data collection, ...). 
Moreover, there is a higher likelihood that concepts within the same chapter are simultaneously assessed in the same exercise. For instance, the concepts whose parent concept is inequalities would have a higher probability of occurring in the same exercise. However, the solving linear inequalities concept, whose parent concept is inequalities, and the tally charts concept, whose parent concept is data collection, would have a lower probability of occurring in the same exercise. ### Questions **Q1: Did the author validate (a subset of) the discovered KCs with domain experts? Did the author evaluate how their model performs with small student responses to questions in V2?** **A1**: Thank you for your question. - Validate the discovered KCs with domain experts Yes, we conducted experiments to validate the discovered knowledge concepts with domain experts. The results are presented in Appendix A of the supplementary materials, in which we compare DisenCD with the pre-filling algorithm. - Evaluate performance with small student responses We agree that the performance of the proposed algorithm relies on the quantity of interactions associated with the exercise. DisenCD may not perform well on exercises with few student responses. As you mentioned in weakness 1, we would consider incorporating multimodal information to make up for this shortcoming. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. I do not have further queries at this point. --- Reply to Comment 1.1.1: Comment: Thank you so much for your response and valuable suggestions! We sincerely appreciate your time and efforts once again.
Summary: This paper focuses on performing cognitive diagnosis with limited exercise labels. To address the enormous cost of labeling exercises, in this paper, the authors proposed Disentanglement based Cognitive Diagnosis (DisenCD). Specifically, they first used students' practice records to model student proficiency, exercise difficulty, and exercise label distribution; then, group-based disentanglement and limited-labeled alignment modules were proposed to disentangle the factors relevant to concepts and align them with the real limited labels. At the same time, a tree-like structure of concepts was proposed for group-based disentanglement. Strengths: The idea presented in this paper is novel and holds value in addressing the problem of limited exercise labels, as annotating them with domain experts incurs a substantial cost. Meanwhile, in the supplementary materials, it is highly significant that the authors perform the Nemenyi Test on the experimental results. Weaknesses: 1. Figure 1: The radar chart has some mistakes. For the diagnosis of knowledge concept k2, we can find that the proficiency of student u1 is higher than that of u2, but the answer logs given by the authors show that student u1 answered v1 correctly, answered v2 incorrectly, and student u2 answered v1 correctly. 2. In line 28, the word 'The' is written incorrectly. 3. In line 70: MIRT has been enhanced based on IRT and can provide an assessment of subjects or students from multiple perspectives. 4. In the experimental section: (1) It is clear that the model proposed by the authors did not achieve the best results in all scenarios. (2) Insufficient preparation of the experimental parts, i.e., lack of ablation experiments to verify the effectiveness of each part. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1.
In Section 2.1, the authors believed that the TextCNN method proposed by NCDM+ contained errors in extracting knowledge concepts from the semantic information of the exercise text. Could the authors provide verification of this claim? Alternatively, could the authors compare their method with the TextCNN approach to assess its effectiveness in extracting knowledge concepts? 2. In Section 4.1, initially, the authors explained that the interaction matrix X contains significant information, including student proficiency. However, it is later mentioned that the response records of a student conceal his proficiency regarding the knowledge concepts. This raises the question of why such a contradiction exists. 3. To the best of my knowledge, most cognitive diagnosis (CD) works [1][2][3] typically exclude students with fewer than 15 answer logs to ensure the plausibility of the results. However, I am curious as to why the authors chose to employ three different criteria for removing students in the three selected datasets. 4. Regarding the 10% Q matrix or the 20% Q matrix, what criteria do the authors employ to retain 10% or 20% of the original Q matrix? [1] Wang F, Liu Q, Chen E, et al. Neural cognitive diagnosis for intelligent education systems[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 2020, 34(04): 6153-6161. [2] Gao W, Liu Q, Huang Z, et al. RCD: Relation map driven cognitive diagnosis for intelligent education systems[C]//Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2021: 501-510. [3] Ma H, Li M, Wu L, et al. Knowledge-Sensed Cognitive Diagnosis for Intelligent Education Platforms[C]//Proceedings of the 31st ACM International Conference on Information & Knowledge Management. 2022: 1451-1460. Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. Currently, in the field of cognitive diagnosis, many datasets lack a tree structure between knowledge concepts. Therefore, the method proposed in this paper may not be suitable for the majority of datasets, e.g., the ASSISTments2009 dataset [1] and the ASSISTments2012 dataset [2]. 2. As can be seen from Table 2 in the experimental part and Table 2 in the supplementary materials, the method proposed in this paper is not well suited for datasets [1][2] with a large number of knowledge concepts. This would greatly limit the applicability of the method. [1] https://sites.google.com/site/assistmentsdata/home/assistment-2009-2010-data/skill-builder-data-2009-2010 [2] https://sites.google.com/site/assistmentsdata/datasets/2012-13-school-data-with-affect Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments, efforts, and time. We respond to each of your questions and concerns one-by-one as follows: ### Weaknesses **W1 and W2: The radar chart in Figure 1 has some mistakes..., 'The', ...** **A1 and A2**: Very sorry for the mistake and typos. In fact, the proficiency on concept k2 of student u1 should be lower than that of u2. We will update Figure 1 and correct the typos in the manuscript accordingly. **W3: In line 70: MIRT has been enhanced based on IRT and can provide an assessment of subjects or students from multiple perspectives.** **A3**: Thank you. We agree with you that MIRT extends the student representation from a scalar to multiple dimensions. However, in MIRT, each dimension of the multidimensional vector representing students does not correspond to a specific knowledge concept. We will rewrite this part to avoid confusion. **W4-(1): It is clear that the model proposed by the authors did not achieve the best results in all scenarios.** **A4-(1)**: Thank you for your comment. We compared the results of the model in terms of prediction metrics (i.e., AUC, ACC, and RMSE) and the interpretability metric DOA, as shown in Table 2 of the manuscript. Only in the 10% Q-matrix scenario of the Junyi dataset does our model achieve prediction metrics merely comparable to the state-of-the-art model KaNCD. Moreover, for the interpretability metric DOA, the proposed model significantly outperforms the best baseline, the KaNCD model. Therefore, the experimental results demonstrate the effectiveness of the proposed model. **W4-(2)-Brief: Insufficient preparation of experimental parts.** **A4-(2):** Thanks for your suggestion. We have presented the results of the ablation experiments in Figure 3 of the original submission. Due to the page limit, we demonstrate this by hyperparameter analysis. Specifically, the disentanglement module is influenced by the hyperparameter $\beta$.
When $\beta$ is equal to 0, it means DisenCD without disentanglement. The margin loss and L2 loss in the alignment module are influenced by $\lambda_1$ and $\lambda_2$, respectively. When $\lambda_1=0$ and $\lambda_2=0$, it means DisenCD without the margin loss and without the L2 loss, respectively. We present the ablation experiments in Table 3 in our rebuttal global pdf file. The experimental results validate the effectiveness of the disentanglement and alignment modules. We will include the separate ablation experiment results in the supplementary materials. ### Questions **Q1-Brief: Provide verification of the claim about NCDM+.... Compare DisenCD with the TextCNN approach…** **A1**: Thanks for your question and sorry for the confusion. Extracting knowledge concepts from the exercise text is a good idea for dealing with the few-labeled exercise scenario. We additionally focus on the more challenging scenario of missing exercise text and infer knowledge concepts from student interaction records. According to your valuable suggestion, we will rewrite this part to avoid misunderstanding. **Q2-Brief: There exists a contradiction in the statement…** **A2**: Sorry for the confusion due to the typo. In line 154, the word “…conceal..” should be “reflect”. We will correct this error. **Q3-Brief: The choice of filtering threshold for removing students.** **A3**: Thanks for your professional question. We conducted a survey of related literature on student filtering methods, including setting a certain filtering threshold (e.g., 15 [1], 50 [2]) or selecting a certain number of students (e.g., 1000 [3], 10000 [4]). These methods have slight deviations in threshold settings due to different dataset sizes. Therefore, we have also designed different filtering thresholds according to the different dataset sizes. The student scales of the three datasets we used, from small to large, are Matmat, NIPS2020, and Junyi, respectively.
Therefore, we set the filtering thresholds as 15, 30, and 50 in ascending order based on the student scale. Additionally, we conducted an experiment where we set the filtering threshold to 15 for the NIPS2020EC dataset. The experimental results are shown in Table 1 of our global rebuttal pdf file. DisenCD outperforms the baselines, demonstrating the effectiveness of the DisenCD model. **Q4-Brief: The choice of the 10% Q matrix or the 20% Q matrix.** **A4**: Thanks for your question. We randomly selected two representative cases where there are 10% and 20% labeled exercises. We can also adopt other ratios. ### Limitations **L1-Brief: DisenCD requiring the knowledge concept tree may not be suitable for the majority of datasets.** **A1**: Thanks for your valuable comment. Some publicly available datasets in the field of cognitive diagnosis do not include tree structure information. For such datasets, we can also constrain independence among all dimensions. More importantly, we believe that obtaining and annotating tree-structured knowledge concepts is relatively easy due to the following reasons: 1) Tree-structured knowledge concepts are typically reflected in the structural divisions of course chapters, and when designing a smart education system, chapter divisions are inevitably involved, making this data readily available. 2) Compared to cognitive diagnostic models that utilize prerequisite relationships among knowledge concepts, tree-structured knowledge concept information is easier to annotate and obtain. **L2-Brief: DisenCD is not well suited for datasets with a large number of knowledge concepts.** **A2**: Thanks for your insightful comment. We agree that our model may not be well suited for datasets with a large number of knowledge concepts. This is due to the Beta-TCVAE that DisenCD relies on for disentanglement, which has a high computational complexity when calculating total correlation.
In the future, we will further explore the complexity issue of the model to alleviate this limitation. --- Rebuttal Comment 1.1: Comment: I have read the author's response and the comments from other reviewers. Most of my concerns have been addressed. I have revised my score accordingly. Moreover, regarding question 4, can you conduct an experiment evaluating different missing ratios of the Q matrix? Based on your response, I will determine the final score. --- Reply to Comment 1.1.1: Title: Experiments on other different missing ratios of the Q-matrix Comment: We appreciate your positive comment and precious time. Regarding question 4, we conducted experiments evaluating different missing ratios of the Q-matrix on the NIPS2020EC dataset. As IRT, MIRT, and PMF are not affected by the ratio of the Q-matrix, we only compared the interpretable models. In Table 2 of our submitted paper, we illustrated the results under the 100%, 20%, and 10% settings. Here we additionally add experiments on the 50%, 30%, and 5% Q-matrix scenarios, which are illustrated in Table 1, Table 2, and Table 3, respectively. The experimental results demonstrate that our DisenCD outperforms all the interpretable baseline models in these few-labeled scenarios. Besides, as the preserved ratio of the Q-matrix decreases (from 50% to 5%), our DisenCD demonstrates a greater improvement (from 0.86% to 1.43%) compared to the second-ranked model in terms of DOA. This shows the advantage of DisenCD in label-scarce scenarios. Thank you for your suggestion; we will add the experimental results of the other scenarios to the supplementary material. Table 1. Comparison in 50% Q-matrix scenario on NIPS2020EC dataset.
| Model | AUC↑ | ACC↑ | RMSE↓ | DOA↑ |
| ----------------- | --------- | --------- | --------- | --------- |
| DINA | 0.766 | 0.692 | 0.456 | 0.807 |
| NCDM | 0.799 | 0.739 | 0.415 | 0.802 |
| $\beta$-TCVAE | 0.794 | 0.746 | 0.416 | 0.808 |
| KSCD | 0.800 | 0.744 | 0.417 | 0.800 |
| KaNCD | 0.807 | 0.754 | 0.411 | 0.810 |
| **DisenCD(ours)** | **0.809** | **0.759** | **0.407** | **0.817** |

Table 2. Comparison in 30% Q-matrix scenario on NIPS2020EC dataset.

| Model | AUC↑ | ACC↑ | RMSE↓ | DOA↑ |
| ----------------- | --------- | --------- | --------- | --------- |
| DINA | 0.766 | 0.685 | 0.455 | 0.769 |
| NCDM | 0.801 | 0.743 | 0.414 | 0.765 |
| $\beta$-TCVAE | 0.795 | 0.748 | 0.412 | 0.790 |
| KSCD | 0.803 | 0.750 | 0.413 | 0.791 |
| KaNCD | 0.809 | 0.759 | 0.406 | 0.792 |
| **DisenCD(ours)** | **0.811** | **0.760** | **0.405** | **0.800** |

Table 3. Comparison in 5% Q-matrix scenario on NIPS2020EC dataset.

| Model | AUC↑ | ACC↑ | RMSE↓ | DOA↑ |
| ----------------- | --------- | --------- | --------- | --------- |
| DINA | 0.765 | 0.675 | 0.454 | 0.649 |
| NCDM | 0.800 | 0.742 | 0.414 | 0.644 |
| $\beta$-TCVAE | 0.801 | 0.747 | 0.415 | 0.770 |
| KSCD | 0.805 | 0.752 | 0.411 | 0.769 |
| KaNCD | 0.812 | 0.761 | 0.404 | 0.752 |
| **DisenCD(ours)** | **0.814** | **0.763** | **0.403** | **0.781** |
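The ablation strategy described in A4-(2) above works by zeroing loss weights: $\beta$ removes the disentanglement (total-correlation) term, $\lambda_1$ the margin loss, and $\lambda_2$ the L2 loss of the alignment module. A hedged sketch of how such a weighted objective composes (the additive form and the term names are assumptions; the paper's exact objective may differ):

```python
def total_loss(pred_loss, tc_loss, margin_loss, l2_loss,
               beta=1.0, lam1=1.0, lam2=1.0):
    """Weighted training objective as suggested by the ablation:
    beta scales the disentanglement (total-correlation) term,
    lam1 the margin loss and lam2 the L2 loss of the alignment module.
    Setting a weight to 0 removes that module's contribution,
    which is exactly how the ablation variants are obtained."""
    return pred_loss + beta * tc_loss + lam1 * margin_loss + lam2 * l2_loss
```

With `beta=0` the model trains without disentanglement; with `lam1=lam2=0` it trains without the alignment terms, matching the variants compared in the ablation table.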
Summary: This paper introduces an innovative approach called DisenCD, aimed at enhancing the performance of cognitive diagnosis when exercise labels are limited. The proposed method incorporates two newly developed modules: the group-based disentanglement module and the limited-labeled alignment module. These modules effectively identify relevant concept factors and align them with existing labels. By leveraging limited exercise labels, the DisenCD method maximizes their potential for improved cognitive diagnosis. Strengths: 1. The paper addresses an intriguing and relevant problem: cognitive diagnosis with limited exercise labels, which has practical implications. 2. The paper effectively utilizes a tree structure to disentangle concept-related factors and align them with existing labels, providing a valuable solution to the aforementioned problem. 3. The extensive experiments conducted in this paper convincingly showcase the effectiveness of the proposed method, further strengthening its validity. Weaknesses: 1. Although DisenCD aims to enhance interpretability in cognitive diagnosis, the model itself may lack interpretability. The inclusion of disentanglement and alignment modules could introduce additional complexity, making it challenging to comprehend the model's decision-making process. 2. In Section 4.2, the authors claim that there is reduced independence among knowledge concept groups in high-level tree nodes. However, the absence of necessary experiments or references weakens the support for this argument. 3. Section 5.1 lacks the necessary details regarding the implementation, which leaves gaps in understanding the practical aspects of applying the proposed method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Since the tree structure of knowledge concepts is very important for the proposed method, the details of the tree structure in each dataset should be provided. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. DisenCD is a complex model that involves multiple modules and hyperparameters. This complexity may make it difficult to implement and tune, especially for practitioners who are not familiar with the underlying techniques. 2. DisenCD may not be scalable to large datasets due to its complexity. This may limit its applicability in real-world scenarios where large-scale cognitive diagnosis is required. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments, efforts, and time. We respond to each of your questions and concerns one-by-one as follows: ### Weaknesses **W1-Brief: Although DisenCD aims to enhance interpretability in cognitive diagnosis, the model itself may lack interpretability.** **A1**: Thanks for your comment. In cognitive diagnosis, "interpretability" means that each dimension of the learned student proficiency and exercise difficulty/relevance representations should correspond to a specific knowledge concept. Thus, they can be used to analyze student proficiency. However, this becomes difficult when there are few labeled exercises, which is the focus of our paper. We design the disentanglement and alignment modules to construct DisenCD, such that each dimension corresponds to a specific concept, and the variation of each dimension can control the prediction results of the corresponding concept. As such, the requirements of cognitive diagnosis can be satisfied in the few-labeled exercise scenario. This is the reason why we claimed that our proposed DisenCD is able to enhance interpretability in cognitive diagnosis. Moreover, we can understand the model's decision-making process by drawing analogies to the encoder and decoder processes in VAE-based methods. The decision process of the model involves two main stages: 1) In the encoder stage, the disentanglement module enforces the independence of the joint distribution of latent variables within each group of representations, resulting in student proficiency, exercise difficulty, and exercise relevance. The alignment module then ensures that each dimension of the representation aligns with a specific knowledge concept, giving it practical meaning. 2) In the decoder stage, prediction scores are generated based on the aforementioned three representations, similar to existing prediction methods in cognitive diagnosis.
We will further clarify the model's decision-making process in the manuscript. **W2: In Section 4.2, the authors claim that there is reduced independence among knowledge concept groups in high-level tree nodes. However, the absence of necessary experiments or references weakens the support for this argument.** **A2**: Thanks for your question. We believe that the independence among high-level concepts (coarse-grained) is higher than among low-level concepts (fine-grained). This viewpoint is inspired by practical applications, and the analysis is as follows. To better clarify the argument, we provide the partial knowledge concept tree of the NIPS2020EC dataset in Figure 1 in the rebuttal global pdf file. Firstly, the course chapter hierarchy is a typical example of concepts with a tree structure. Intuitively, the associations among concepts at the high level of the hierarchy are weaker compared to the associations between concepts at the low level. For example, if we adopt the 2nd level concepts (i.e., algebra, data and statistics, ...) to group the last level concepts, the independence among groups is higher than for the groups determined by the 3rd level concepts (i.e., inequalities, formula, data collection, ...). Moreover, there is a higher likelihood that concepts within the same chapter are simultaneously assessed in the same exercise. For instance, the concepts whose parent concept is inequalities would have a higher probability of occurring in the same exercise. However, the solving linear inequalities concept, whose parent concept is inequalities, and the tally charts concept, whose parent concept is data collection, would have a lower probability of occurring in the same exercise. Based on your valuable suggestions, in the revised version, we would add the group-based structure to make it clearer. **W3-Brief: Section 5.1 lacks the necessary details regarding the implementation.** **A3**: Thank you.
We presented the implementation details in the supplementary materials, specifically in Sections C.4 and C.5. Besides, we also provide the code, including baselines, in the supplementary material zip file. Based on your valuable suggestion, we will add a link to the code in the main body to facilitate easy reimplementation. ### Questions **Q1-Brief: The details of the tree structure in each dataset should be provided.** **A1**: Thank you for your valuable comment. According to your suggestion, we add the partial tree structure for the NIPS2020EC dataset, as shown in Figure 1 of the rebuttal global PDF file. Displaying the tree structure on these datasets helps readers gain an intuitive understanding of the operation process of DisenCD. We will update the relevant content for all datasets in the supplementary materials accordingly. ### Limitations **L1 & L2 - Brief: DisenCD is a complex model that involves multiple modules and hyperparameters. This complexity may make it difficult to implement and tune. DisenCD may not be scalable to large datasets due to its complexity.** **Answer**: Thanks for your comment. We agree that DisenCD involves multiple modules and hyperparameters and introduces additional time complexity. In this paper, we aim to design a cognitive diagnosis model for the scenario with numerous unlabeled exercises, which significantly reduces the cost of expert annotation. Therefore, we believe that the increased complexity is acceptable compared to the huge expert annotation cost. As the task scenario becomes more complex and challenging, it is inevitable that the model's complexity increases and the number of hyperparameters grows. Besides, we have provided the code, including baselines, in the supplementary material zip file. If the paper is fortunate enough to be accepted, we promise to release our project to facilitate reproducible research. In the future, we will explore methods with lower complexity in this scenario.
This is a promising direction, and we very much appreciate your suggestion. --- Rebuttal Comment 1.1: Comment: The inclusion of the disentangled explanation in the supplementary material is commendable. I recommend that the author incorporate this section into the main body of the revised paper. While I am inclined to give a higher score based on this addition, I believe it's crucial to consider the feedback from other reviewers as well. I have no further questions. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response. According to your suggestion, we will incorporate the disentangled explanation into the main body of the revised paper. We sincerely appreciate your professional review once again.
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers' thoughtful comments, efforts, and time. In the current global response, we have included an **attached rebuttal global file**. We will be **referencing some figures or tables when replying to each reviewer**. Thanks again to the reviewers for their efforts. The references cited in the rebuttal are listed as follows: [1] Fei Wang, Qi Liu, Enhong Chen, Zhenya Huang, Yuying Chen, Yu Yin, Zai Huang, and Shijin Wang. Neural cognitive diagnosis for intelligent education systems. In AAAI, pages 6153–6161, 2020. [2] Xinping Wang, Caidie Huang, Jinfang Cai, and Liangyu Chen. Using knowledge concept aggregation towards accurate cognitive diagnosis. In CIKM, pages 2010–2019, 2021. [3] Haiping Ma, Manwei Li, Le Wu, Haifeng Zhang, Yunbo Cao, Xingyi Zhang, and Xuemin Zhao. Knowledge-sensed cognitive diagnosis for intelligent education platforms. In CIKM, pages 1451–1460, 2022. [4] Weibo Gao, Qi Liu, Zhenya Huang, Yu Yin, Haoyang Bi, Mu-Chun Wang, Jianhui Ma, Shijin Wang, and Yu Su. Rcd: Relation map driven cognitive diagnosis for intelligent education systems. In SIGIR, pages 501–510, 2021. Pdf: /pdf/0e6809b43430e8f9c76709b42fee2ad4088231ed.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a method to perform cognitive diagnosis. The method leverages notions of disentangled representation learning to achieve better interpretability while avoiding sacrificing performance in predicting the students' answers. Experiments were conducted on three popular datasets and quantitative empirical evidence appears to support the above claim (no loss of predictive performance; improved interpretability). Strengths: The idea of leveraging disentangled representation learning in the context of cognitive diagnosis seems new. The execution of this idea is interesting, as the authors made a few changes to solve the technical challenges of applying disentangled representation learning to modeling students' skills and predicting their answer correctness. The experiments cover three popular datasets and seem comprehensive, including numerical results comparing multiple baselines and various analyses. Weaknesses: My main concern is on the notion of "interpretability". After reading the paper, I fail to understand what "interpretability" means in the authors' context. For disentangled representation learning in the vision domain, interpretability mostly means disentangling factors of variation, such that when you perform generation, you can manipulate the encoded latent vector (e.g., change a specific dimension of the vector that corresponds to one variation such as shape, size, viewpoint, etc.) to control the generation to be of a particular shape, size, viewpoint, and so on. The authors seem to apply a similar notion of "interpretability" to the latent student knowledge/skill vector. However, I do not see any evidence or discussion on how such interpretability is manifested, e.g., what are the "disentangled factors" that the model learns? What do these factors mean and how do they connect to the knowledge concepts in the Q matrix? Does the model learn different disentanglements for different students?
I believe an illustrative example would be extremely useful. Reporting only the DOA metric is not very convincing to me as a measure of "interpretability". Because the paper makes a big claim on "interpretability" and the proposed methodology revolves around it, I am unsatisfied with the empirical evidence and analyses around it. I am open to raising my score if the authors can provide more in-depth discussion and definition around interpretability in their context, as well as illustrative examples to demonstrate the interpretability that their model learns. ---- post-rebuttal update: authors answered most of my questions and I have updated my scores accordingly. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What do the authors mean by interpretability? Does it mean disentangling factors of variation on the students' latent vector? If so, what is being disentangled? Do the different dimensions of the students' latent vector mean anything, or correspond to one or a few concepts defined in the Q matrix? In line 228, what's the rationale to use L2 loss? From my understanding, this loss aims to encourage sparsity, so why not use an L1 loss? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments, efforts, and time. We respond to each of your questions and concerns one-by-one as follows: ### Questions **Q1 & Weakness - Brief: Confusion on interpretability and DOA.** **A1**: Thanks for your question. In cognitive diagnosis, different dimensions of the students' latent vector are required to correspond to specific knowledge concepts. In the few-labeled exercise scenario, it is hard to meet the interpretability requirement of cognitive diagnosis. Therefore, we propose the novel DisenCD to enhance this type of interpretability through Disentangled Representation Learning (DRL). In DisenCD, interpretability also means disentangling factors of variation. As shown in Figure 2 of the rebuttal global PDF, we randomly selected three concepts and observed that, when increasing the value of the dimension corresponding to a concept, all students' predicted ratings on the exercises containing that concept increase. To better illustrate the interpretability of our DisenCD, we will elaborate from two perspectives: the data generation process and the variation of disentangled factors. We will draw an analogy for DRL between the vision domain and cognitive diagnosis, and answer your questions during the process. 1. Vision Domain (1) The data generation process We can view the entire image dataset as evolving from a series of generative factors (such as color, shape, size, viewpoint, etc.). A single image is determined by the specific states of all the generative factors. (2) The variation of disentangled factors Given a disentangled representation, changing the value of one dimension, such as color, will result in a change in the color of the image only.
2. Cognitive Diagnosis (1) The data generation process We can view the entire set of response records as evolving from three groups of generative factors: student proficiency factors on each knowledge concept ($\mu_u$), exercise difficulty factors on each knowledge concept ($\mu_v^d$), and exercise relevance factors on each knowledge concept ($\mu_v^r$). For a response record, the result of the student's response to exercise $v$ depends on the following factors: the knowledge concepts contained in exercise $v$ (corresponding to exercise relevance factors), the exercise difficulty on each knowledge concept contained in exercise $v$ (corresponding to exercise difficulty factors), and the student's proficiency on each knowledge concept contained in exercise $v$ (corresponding to student proficiency factors). (2) The variation of disentangled factors - The variation of disentangled factors in the student proficiency representation. Given a student proficiency representation, each dimension in the representation corresponds to the student's proficiency level on a specific knowledge concept. The higher the value of a dimension, the higher the proficiency level on that knowledge concept. When we only increase the value of a single dimension in the student representation (improving the student's proficiency on a specific knowledge concept), while keeping the exercise difficulty representation and relevance representation unchanged, the student's probability of correctly answering questions related to that knowledge concept will increase, while the probability of answering questions unrelated to that knowledge concept will remain unchanged. In Figure 2 of the rebuttal global PDF file, we present a case where we only increase a single dimension in the student representation, and it shows that the student's probability of correctly answering questions related to that knowledge concept increases.
- The variations of disentangled factors in the exercise difficulty representation and the exercise relevance representation exhibit similar phenomena. As for the DOA metric, the interpretability of cognitive diagnosis primarily aims to reflect the student's proficiency in various knowledge concepts when providing prediction results. Therefore, diagnostic models should provide reasonable diagnostic reports based on the student's historical response data. To meet this requirement, the DOA metric was proposed to measure the degree of agreement between the diagnostic report (i.e., the student proficiency representation) and the students' historical response data. In other words, the DOA metric is currently the most commonly used metric to measure interpretability in cognitive diagnosis [1,2,4]. Perhaps this type of metric is deficient. We are also eager for more researchers to participate in cognitive diagnosis and come up with more interpretability metrics. Thanks for your comment sincerely. **Q2: In line 228, what's the rationale to use L2 loss? From my understanding, this loss aims to encourage sparsity, so why not use an L1 loss?** **A2**: Thank you for your question. In our preliminary attempt, we first directly tried to use the L1 loss for numerous unlabeled exercises, and we found that it does not show competitive performance and is harder to bring to convergence in the training process. We speculate a possible reason is as follows: with the L1 loss, after model initialization, it is very hard to revise incorrectly labeled exercises. In contrast, the L2 loss is sensitive to outliers, and would incur a larger loss when the corresponding knowledge is incorrectly predicted. We show the experimental results of the L1 and L2 losses in Table 2 in the global PDF file. Besides, to ensure sparsity, we use a margin-based loss as shown in the first part of Eq.(7).
In the margin-based loss, after sorting each predicted exercise encoding in descending order, we encourage the top-ranked concepts to have a large margin compared to the least-likely concepts, i.e., the (d1+d2+1)-th through the K-th largest values. As shown in Figure 3(h), d1+d2 is set to a small value. Therefore, the margin-based loss can encourage the predicted concepts to be sparse. Sorry for the confusion again; we will explain Eq.(7) more clearly in the revised version. --- Rebuttal Comment 1.1: Title: on interpretability Comment: Thanks for the authors' response. One more question re interpretability: each dimension of the learnt student proficiency factor z corresponds to a knowledge concept. Is this correspondence (a dimension of z --> a specific knowledge concept) pre-determined? Or is it learnt in an unsupervised manner similar to β-TCVAE? If the latter is the case, will the correspondence between a dimension of z and a specific knowledge concept change across different optimization runs, causing an identifiability issue? Apologies if answers to the above questions are obvious from the paper but I guess it does not hurt to reiterate. Thanks! --- Reply to Comment 1.1.1: Comment: Thank you very much for the professional question! The question you raised regarding the correspondence between the proficiency factor and the knowledge concept is crucial to the interpretability of DisenCD. We are sorry for not clarifying this in the Section 4.4 decoder module. The correspondence on the student proficiency representation is not pre-determined, nor is it learned in an unsupervised manner like β-TCVAE. Our solution is detailed as follows. The proposed semi-supervised disentangled method DisenCD comprises three representations, namely student proficiency, exercise difficulty, and exercise relevance. They all align with real knowledge concepts.
First, the semi-supervised signal from the few-labeled Q-matrix keeps the correspondence on the exercise relevance representation, which is detailed in the limited-labeled alignment module (Section 4.3). Then, the correspondence in the exercise relevance representation ensures correspondence in both the student proficiency and exercise difficulty representations through the decoder module, which is similar to existing CD works [1,2,3,4]. Specifically, as shown in equation (8), the subtraction operation $(\textbf z_{u_i} - \textbf z_{v_j}^d)$ is element-wise. The inner product operation $\otimes$ is also element-wise. The element-wise operations in the decoder module propagate the correspondence on the exercise relevance representation to both the student proficiency representation and the exercise difficulty representation. We are sorry for the confusion once again and will clarify this in Section 4.4. Thank you!
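To make the element-wise propagation described above concrete, here is a minimal, hypothetical sketch of an Eq.(8)-style decoder (function and variable names are ours, not the authors' implementation). It illustrates the interpretability claim: perturbing one proficiency dimension only changes the prediction for exercises whose relevance is nonzero on that dimension.

```python
import numpy as np

def predict(z_u, z_v_d, z_v_r):
    # Eq.(8)-style interaction: (proficiency - difficulty), gated element-wise
    # by per-concept relevance, then aggregated into a correctness probability.
    interaction = (z_u - z_v_d) * z_v_r  # both operations are element-wise
    return 1.0 / (1.0 + np.exp(-interaction.sum()))

z_u   = np.array([0.2, 0.5, 0.1, 0.9])  # student proficiency per concept
z_v_d = np.array([0.3, 0.4, 0.6, 0.2])  # exercise difficulty per concept
z_v_r = np.array([0.0, 1.0, 0.0, 0.0])  # this exercise only covers concept 1

p0 = predict(z_u, z_v_d, z_v_r)

z_up = z_u.copy(); z_up[1] += 0.5       # raise proficiency on the relevant concept
z_un = z_u.copy(); z_un[0] += 0.5       # raise proficiency on an irrelevant concept

assert predict(z_up, z_v_d, z_v_r) > p0             # prediction rises
assert np.isclose(predict(z_un, z_v_d, z_v_r), p0)  # prediction unchanged
```

Because every operation before the final sum is element-wise, supervision that ties a dimension of the relevance vector to a concept ties the same dimension of the proficiency and difficulty vectors, which is the correspondence mechanism the reply describes.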
Learning Efficient Coding of Natural Images with Maximum Manifold Capacity Representations
Accept (poster)
Summary: This paper develops a form that facilitates direct optimization, uses it to learn Maximum Manifold Capacity Representations (MMCRs), and demonstrates that these are competitive with state-of-the-art results on standard self-supervised learning (SSL) recognition benchmarks. Empirical analyses reveal important differences between MMCRs and the representations learned by other SSL frameworks. Strengths: Inspired by manifold capacity theory, the proposed self-supervised learning algorithm is efficient to learn and requires neither large batch size nor large embedding dimension. This work also provides very interesting empirical results which are well aligned with the proposed theorem. Besides, the paper is well-presented. Weaknesses: Since I am not in this area, the things mentioned in this paper are very convincing to me. I cannot point out a very specific weakness. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: NA Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive evaluation of our work. Please let us know if any questions arise during the discussion period and we will be happy to clarify.
Summary: This paper proposes a new SSL objective to directly optimize manifold capacity (the number of categories represented in a linearly separable fashion). It is found that the resulting representations can support linear classification well. It is also shown that the representations are somewhat sample efficient (reasonable performance in 1% and 10% settings, Food, Flowers, etc). It is also found that the representations are slightly better predictors of neuronal representations in V2 compared to baselines. Strengths: 1. The description of the objective function and simplifying assumptions are clear 2. The evaluations (e.g. Tables 1, 2) and analyses (e.g. Figs. 4, 5) are clear. 3. The eigenspectrum alpha being close to 1 is an interesting finding. Weaknesses: [Limited evaluations, objective too specific?] There is some novelty in the specific objective function used, but evaluation is limited. For example, by more directly optimizing for linear separability, has there been a loss in generality of representations, e.g. on segmentation or depth estimation? The detection results in the appendix appear to be consistent with MMCR being more specialized for linear classification. It is also unclear how well the objective works when the data are more complex (e.g. COCO)? ImageNet normally contains few objects and the correct category is very often an object in the center. Moreover, the architecture explored is also limited to ResNet-50 and is not competitive with more recent SSL methods (e.g. RELICv2, NNCLR). Another important limitation is that evaluations are for SSL models only trained for 100 epochs. Different SSL methods converge to good representations at different speeds, so limiting evaluation to 100 epochs does not give us an adequate understanding of various methods - this limitation applies to comparisons on ML benchmarks as well as neuronal data. [Purpose of comparing to neural data unclear.]
MMCR has slightly better predictive power in V2, and the eigenspectrum alpha is close to 1. These are interesting empirical observations, but the discussion does not appear to really engage with these. In the discussion (l316) it is claimed that future work should go beyond accounting for current data. It is not clear to me how the current work provides an account of current data? What are the precise claims/hypotheses being made and how do the experiments support it? What does it entail for our understanding of the brain that was not previously known? Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Can the authors please make their discussion on relevance to neuroscience more specific? 2. Can the authors please expand on their evaluations along any of the directions listed in weaknesses? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review of our work, our responses to the listed weaknesses are below: - *[Limited evaluations, objective too specific?]*: We agree that evaluating on more tasks can strengthen the assessment of a learned representation. We have run additional evaluations of baseline models trained in the same setting (100 epochs), which show that MMCR performs comparably to other SOTA methods on object detection and outperforms the supervised baseline. These evaluations are on the VOC dataset using the same procedure described in the appendix of our original submission: | | mAP | AP50 | AP75 | |-------------|-------|--------|--------| | Supervised | 53.5 | 81.3 | 58.8 | | MMCR | 54.6 | 81.9 | 60.6 | | SimCLR | 54.7 | 81.3 | 60.2 | | SwAV+multicrop | 54.4 | 81.6 | 61.0 | | BYOL | 56.0 | 82.3 | 62.0 | | Barlow Twins | 53.1 | 80.9 | 57.7 | | MoCo v2* (200 epochs) | 57.0 | 82.4 | 64.0 | Results for MoCo are reported from the official paper repository. - *"Another important limitation is that evaluations are for SSL models only trained for 100 epochs..."*: We chose this setting because implementations and models are readily available using ResNet-50 with 100 epochs of pretraining, allowing us to compare to many existing methods directly while operating with a limited compute budget. We do agree that it would be valuable to demonstrate that our method’s strong performance is not limited to 100 epoch pretraining, and have now launched experiments using 1000 epochs and 2 views. This experiment is still running at the time of posting but we will include results in the updated paper. It is worth noting that NNCLR does report results for linear evaluation using 100 epochs of pretraining, where they achieve 69.4% accuracy, slightly underperforming MMCR with two views. 
- *[Purpose of comparing to neural data unclear.]*: The results show that MMCR is competitive in explaining physiological data from area V2, but the BrainScore evaluation does not provide a means of assessing/interpreting the detailed nature of the fits. Our table indicates that, although different objective functions produce representations with very different spectral properties (as evidenced by the spread in participation ratio and alpha decay coefficients), there is much less spread in their neural predictivity. --- Rebuttal Comment 1.1: Title: Still Confused. Results do not seem favorable to MMCR. Comment: Dear Authors, Thanks for your response. - "The results show that MMCR is competitive in _explaining_ physiological data from area V2". I'm confused by this. How does MMCR _explain_ the physiological data in V2? Sure, it has predictive power. But it's not clear to me how this _explains_ anything. Overall, I'm still confused by the comparison to neural data and find myself going back to the original question I had: what do we know now about the brain that we didn't before? - "..although different objective functions produce representations with very different spectral properties (as evidenced by the spread in participation ratio and alpha decay coefficients), there is much less spread in their neural predictivity." This makes me even more confused, because you are suggesting that the specific (MMCR or SimCLR) SSL objective does not matter very much. - VOC results. Thanks for reporting these numbers. These seem to confirm that any benefits of MMCR over baseline similar SSL objectives are restricted to the linear classification setting. - Results from longer training and different architectures could potentially strengthen the paper. But even if these numbers are not available, please include stronger baselines in the paper (e.g. NNCLR).
--- Reply to Comment 1.1.1: Comment: We agree that the evaluations suggest MMCR is strongest in the linear classification setting, though our method is also competitive with baselines in terms of detection. We will also include NNCLR as an additional baseline as per your suggestion. We feel that the introduction of a new objective function that is competitive with several baselines across a variety of tasks and produces a representation with measurably different properties warrants inclusion in the literature.
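As a side note for readers, the eigenspectrum decay coefficient alpha discussed in this thread can be estimated with a short sketch like the following: our own hypothetical helper (not the paper's code), fitting a power law lambda_n ~ n^(-alpha) to the eigenspectrum of the feature covariance in log-log space.

```python
import numpy as np

def spectral_decay_alpha(features):
    # features: (n_samples, dim) array. Fit the covariance eigenspectrum
    # lambda_n ~ n^{-alpha} in log-log space and return the estimated alpha.
    X = features - features.mean(axis=0)
    eig = np.linalg.eigvalsh(np.cov(X.T))[::-1]  # eigenvalues, descending
    eig = eig[eig > 1e-12]                       # drop numerically-zero modes
    n = np.arange(1, len(eig) + 1)
    slope, _ = np.polyfit(np.log(n), np.log(eig), 1)
    return -slope

# Synthetic features with a 1/n covariance spectrum should give alpha near 1,
# the regime the review highlights as an interesting finding.
rng = np.random.default_rng(0)
lam = 1.0 / np.arange(1, 41)
X = rng.standard_normal((5000, 40)) * np.sqrt(lam)
alpha = spectral_decay_alpha(X)
```

The least-squares fit over all modes is a simplification; published analyses typically fit only an intermediate range of the spectrum, so treat this as illustrative rather than a faithful reproduction of the paper's measurement.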
Summary: This paper proposes an alternative way (i.e., maximizing manifold capacity) to learn useful representations in a self-supervised manner. In particular, the first term maximizes the separability of the centroid manifold (i.e., negative samples), while the second term aims to minimize the nuclear norm among positive samples. Strengths: This paper introduces manifold capacity into self-supervised learning (SSL). This idea is somewhat novel and provides an alternative way to avoid the collapse issue in SSL. Extensive experiments conducted on nearly 7 datasets demonstrate that the proposed method achieves comparable performance to existing state-of-the-art (SOTA) methods. Furthermore, empirical analysis of Maximum Manifold Capacity Representations (MMCRs) reveals distinct characteristics compared to existing approaches. Weaknesses: However, my main doubts/concerns regarding the paper are the following: - As shown in Lines 138-141, the first term aims to maximize the separability of negative samples, while the second term aims to minimize the nuclear norm among positive samples. The motivation behind the proposed loss is similar to [1], which pursues "the alignment of features from positive pairs and the uniformity of the induced distribution of the normalized features on the hypersphere." It would be better to provide further discussion of the relation and differences with respect to [1]. - The observation in Figure 6 is interesting.
However, this result raises the question of whether implicitly minimizing object manifold compression is truly necessary, as the second term with the nuclear norm does not impact the mean manifold nuclear norm. Therefore, conducting an ablation study between these two terms on several downstream tasks is essential to further investigate their effects. - The proposed MMCR method shares strong connections with nuclear norm-based approaches [2,3,4], especially in relation to [2]. However, the current study lacks either theoretical or empirical comparisons between MMCR and these nuclear norm-based methods. - In Figure 2, the Mean Field Manifold Capacity Analysis shows that existing SOTA methods with different objectives all explicitly learn representations with large radii but low dimensionality. These results may call into question the necessity of the proposed method. - In Figure 5, the centroid similarity distributions of MMCR for both the same class and distinct classes appear to be generally smaller than those of existing methods (Barlow Twins and SimCLR). Therefore, this comparison may be somewhat unfair. To ensure a fair comparison, it would be better to normalize the centroid similarities (e.g., by computing z-scores) for each method and then compare the normalized distributions across different methods. Minor: - The experimental setting described in Section 3.2 is unclear, which could potentially result in unfair comparisons between MMCR and other state-of-the-art (SOTA) methods. If MMCR employs multi-crop augmentation while Barlow Twins and SimCLR only use two views, the comparison would be unfair in Figures 2 and 5. - The authors might unintentionally have modified the default LaTeX template, which results in incorrect formatting of the citations. In particular, (1) -> [1] or (1;2) -> [1,2]. - Incorrect formatting of theorems, lemmas, and proofs is detrimental to this paper since it makes the paper appear informal. - Almost all equations lack commas or full stops at the end.
- Please standardize the usage of either "Fig. x" or "Figure x" for consistency. - Figure 2: "see E" -> "see appendix E". Table 2: "see L" -> "see appendix L". [1] T. Wang, et al., "Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere", ICML 2020. [2] Y. Wang, et al., "A Low Rank Promoting Prior for Unsupervised Contrastive Learning", TPAMI 2022. [3] O. Hénaff, et al., "The Local Low-dimensionality of Natural Images", ICLR 2015. [4] J. Lezama, et al., "OLÉ: Orthogonal Low-rank Embedding, A Plug and Play Geometric Loss for Deep Learning", CVPR 2018. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: The questions correspond to the above main concerns: - What is the relation or difference between the proposed objective and [1]? - Please add more theoretical or empirical analyses on the necessity of implicitly minimizing object manifold compression. - Please add more theoretical or empirical comparisons between MMCR and the nuclear norm-based methods [2,3,4]. - Please add more discussion about the results in Figure 2. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: Yes, the authors have addressed the societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review of our submission. Below we respond to each of the listed weaknesses: - *"As shown in Line 138-141..."*: Many self-supervised learning methods share these core motivations, and [1] in particular demonstrates that the logarithm of the average pairwise Gaussian potential is optimized by a uniform distribution on the hypersphere. The unique aspect of our method is that it doesn’t rely on pairwise comparisons in order to achieve the desired properties in the global representation, but rather optimizes a population level metric directly. We will make this point more explicit in the revision. - *"The observation in Figure 6 is interesting..."*: We feel that Figure 6 does not imply the necessity of implicit manifold compression, but rather demonstrates the effectiveness of doing so. This is an important interpretational difference, as using explicit compression substantially increases the computational complexity of evaluating the objective. In practice we observe that including this term in the loss significantly increases the run time, and on CIFAR experiments we did not observe any benefit in terms of the quality of the learned representation. We will emphasize this practical advantage of implicit compression in the revised paper. - *"The proposed MMCR method shares strong connections"*: Our formulation is related to [2, 3, 4] in the use of nuclear norm (and we’ve cited all of them for this). However, each of them differ significantly enough from our approach that a more detailed comparison did not seem warranted. Specifically: - [3] is primarily focused on learning a set of linear filters in an autoencoder architecture. The substantial difference in experimental setting prevents a useful empirical comparison, but we will include a more thorough discussion of this paper in our revised related works section. - [2, 4] employ low rank (nuclear norm) regularizers to supplement their objective functions. 
In contrast, our objective directly maximizes rank. In addition, [4] differs in that it employs supervised training. We do think a more direct comparison with [2] could be interesting, i.e. we could compare the spectra of augmentation manifolds and the global dimensionality of the learned representations. A fair comparison will require matching the pre-training settings (specifically the amount of pretraining; LORAC trains for 200 epochs), and compute limitations have prevented us from completing these experiments during the rebuttal period. We will however report preliminary results during the upcoming discussion period and include such analyses in the final paper. - *"In Figure 2, the Mean Field Manifold Capacity Analysis shows..."*: For methods that perform strongly in terms of linear classification, capacity analysis provides further insight as to whether this is accomplished with class manifolds with low dimensionality or low radius. Figure 2 demonstrates that MMCR yields high capacity by reducing dimensionality at the expense of increasing radius, relative to SOTA methods. Your comment helped us to realize that the formatting of Figure 2 obscures this point, since the differences only emerge at the tail end of the ResNet hierarchy. In the revision, we’ll include a table of values taken from the final representation layer, and move the figure to the appendix. - *"In Figure 5, the centroid similarity distributions..."*: We would appreciate it if the reviewer could clarify this concern. The distribution of similarities has a smaller mean than the alternatives, which is the message we were trying to communicate. If the distributions for each method were z-scored, they would differ only in higher order (shape related) properties, which could be interesting but is not central to the argument being made. 
- *"The experimental setting described in Section 3.2..."*: All the model evaluations use the same number of augmentations - we will clarify this point in the revised paper. It is true that during *pre-training*, SimCLR and Barlow Twins use 2-view augmentations. We think this is fair, since these were the procedures used in their respective original implementations (in addition, it is not clear how Barlow Twins would be adapted to the multi-view setting). We’ll clarify these points in the revised text. - *[Minor]*: Thank you for bringing these formatting issues to our attention, these will be corrected in the revised text. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer eLMY Comment: Thanks for your response. The comments from the authors solve most of my concerns, and I have updated the score. Clarification to "In Figure 5, the centroid similarity distributions...": In Figure 5, the experimental settings for the three methods differ significantly, which will result in different distributions of centroid similarity. Therefore, making a direct comparison of these distributions might not provide an apples-to-apples comparison, which will weaken the claim.
Summary: The efficient coding hypothesis suggests sensory systems maximize mutual information between their inputs and the environment. A recent adaptation, "manifold capacity", calculates the number of linearly separable object categories, but is computationally intensive. The authors simplify this measure to directly optimize Maximum Manifold Capacity Representations (MMCRs). They show MMCRs are competitive with top results on self-supervised learning (SSL) recognition benchmarks. MMCRs differ from other SSL frameworks and may enhance class separability through manifold compression. On neural predictivity benchmarks, MMCRs prove competitive as models of the ventral stream, a part of the brain involved in object recognition. Strengths: - The paper presents a novel method on SSL leveraging Manifold Capacity theory. The contribution is clear and precise. - The paper connects SSL with Manifold Capacity theory via an effective theoretical formulation for the SSL objective. - The paper empirically validates its claims by running large-scale experiments and demonstrates competitive performance. - The method mitigates the large batch / large dimension training bottleneck of previous SSL methods. - The paper also evaluates on neural data to show that its method can explain the neural data better. Weaknesses: - Although the initial results look very promising, the authors didn’t include an error bar in Table-1. It would be beneficial to see the error bar to further eliminate the effect of chance. - The paper posits that sensory systems maximize mutual information between their representations and the environment, based on the efficient coding hypothesis. However, it might lack a discussion on the biological plausibility of MMCRs. How feasible is it for biological sensory systems to implement MMCRs? How does this model account for the biological constraints mentioned in the efficient coding hypothesis? 
- [Minor] In the legend of Figure 2, the description indicates the top row shows the radius, but the actual graph shows dimensionality. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - An illustrative figure that intuitively demonstrates the singularity of the covariance matrix in terms of the objective function would be beneficial to the reader's understanding. - In the introduction, it might be better to highlight the contribution to SSL more clearly. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive review of our paper. We respond to each of the listed weaknesses below: - *"Although the initial results look very promising..."*: We agree that including a measure of uncertainty would help strengthen the paper. For ImageNet experiments, repeated trainings of the linear classifier yield a standard deviation of ~0.05, and for the smaller datasets we have observed slightly more variability (~0.1). We will update the table with these confidence intervals in the revised paper. - *"The paper posits that sensory systems..."*: Thank you for this thoughtful question. One of the key biological constraints is that not all of the information carried by a sensory stream can be faithfully encoded with limited resources. This constraint is applied in MMCR by encouraging invariance to augmentation variability. The learned representations are efficient in that they compress this source of variability while maximizing the discriminability over the images in the dataset. Moreover, we speculate that the MMCR objective is better suited to biological implementation because it does not require a large number of pairwise comparisons to evaluate the objective. Instead the MMCR objective is a function of the summary statistics (singular values) of a global representation of the image manifold. Whether and how this could be optimized using neural circuits is of interest, but well beyond the scope of the current paper. - *"In the legend of Figure 2..."*: Thank you for bringing this to our attention, we will revise the caption in the updated paper.
Rebuttal 1: Rebuttal: We would like to thank the reviewers for their careful consideration of our submission. Their feedback has helped to identify several simple ways to improve our paper. Below we summarize some key revisions to be made in case our work is accepted: - The framing/introduction of our objective will be clarified. In particular we will change Equation (2) to only include the first term, as we immediately drop the second term in the following experiments. - Relatedly, we will better motivate the use of “implicit” compression by noting the practical advantage (in terms of runtime). - We will include a more thorough evaluation of object detection performance on the Pascal VOC dataset in the main text, to demonstrate that though our method is inspired by a theoretical analysis of linear classification, the learned representations perform well on more complex computer vision tasks. - We will better characterize the contribution of our neural data experiments, which show that different SSL methods trained in matched settings produce representations with diverse spectral/geometric properties but surprisingly similar neural predictivity. Pdf: /pdf/e917f220042ca4d1e9ca66952fc1b6a07dbba537.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors convert recent results in manifold perceptron capacity calculations to a practical self-supervised learning algorithm. They then (1) provide one theoretical result, (2) show their method is predictive of primate visual cortex recordings, and (3) characterize some empirical properties of their method versus other methods. Strengths: - The conversion of manifold perceptron capacity to a practically implementable SSL algorithm is novel - This paper is quite thorough. It contains a little bit of everything: a new algorithm, one theoretical result for the algorithm, comparison with biological data, empirical study of how the algorithm differs from other SSL algorithms, eigenspectrum decay. Its experimental results are particularly thorough. Weaknesses: Ordered by location in the text, rather than prioritization: - Section 2.1 either (1) makes a subtle but important leap that I'm not sure I followed or (2) uses confusing pluralization. Lines 93 to 103 state we consider P manifolds (plural) and then discuss three quantities: (1) the manifold radius (singular), (2) the manifold dimension (singular), (3) the correlation between the manifold centroids (plural). Additionally, a subscript $M$ appears and I do not know what this $M$ refers to. I'm not trying to nitpick pluralization; rather, I'm confused whether each of the P manifolds has its own radius and its own dimension, or whether all P manifolds collectively have 1 radius and 1 dimension. If the former, how are each of the P manifolds' radii and dimensions combined to determine the threshold capacity of the manifold population? If the latter, what does linear separability mean for a monolithic grouping of the P manifolds? Or must all $P$ manifolds have the same radius and dimension, but are permitted to possess their own centroids? I would appreciate if the authors could clarify this section. - The move from Section 2.1 to Section 2.2 needs better motivation. 
Section 2.1 introduces the manifold capacity as $\phi(R_M \sqrt{D_M})$ with differentiable, closed-form expressions for both $R_M$ and $D_M$. I am then expecting that we'll be maximizing capacity by minimizing $R_M \sqrt{D_M}$. But instead, we jump to Equation (2), with a small inline connection to $\phi(R_M \sqrt{D_M})$, and are then told that the second term of Equation 2 doesn't matter and so we'll set its prefactor $\lambda$ to 0. I think this section could be greatly improved if the authors emphasize & explain the inline approximation, and better motivate Equation 2. I would recommend either leaving out the second term of Eqn 2 altogether, or alternatively, clarifying why the second term _should_ be mentioned if it is to be immediately eliminated. - I'm unsure whether I should be persuaded by the 2D example of 2 centroids (Equations 3 and 4). I do love attempting to build intuition, and I understand that no closed form expressions exists for singular values of an arbitrary matrix, so I commend the authors, but I'm unsure whether the intuition will hold (a) in higher dimensions and (b) with significantly more data, i.e. more centroids. Even staying in 2D, what will MMCR do if we have 3, 4, 5 etc. centroids? I might've guessed something like the Thomson problem (https://en.wikipedia.org/wiki/Thomson_problem) but that can't be the case because 2 electrons choose antipodal points where MMCR chooses orthogonal points. - Moreover, upon another read, I realized the paragraph "Compression by Maximizing Centroid Nuclear Norm Alone" seems odd compared with Equation 2. The paragraph argues that the nuclear norm of $C$ alone is sufficient because it both (1) incentivizes making each manifold as tightly compressed as possible (i.e. each mean should have maximum norm, bounded above by 1) and (2) incentivizes making centroids pairwise orthogonal. I agree with this - the 2D case is clear and convincing. 
But if those two incentives are what we care about, why not directly use them as our loss? I'm specifically suggesting something like $-\sum_b ||c_b||^2 + \sum_{b, b'} (c_b \cdot c_{b'})^2$? And if so, isn't that almost exactly the loss of TiCo (Zhu et al 2021/2022, https://arxiv.org/abs/2206.10698), specifically equation 6? - With all SSL papers, there are many implementation decisions and hyperparameters that complicate interpreting the results. Line 205-208 caught my eye ("We additionally employ a momentum encoder for ImageNet pre-training..."). While there is nothing wrong with using a momentum encoder, I would like to know what performance MMCR achieves with vs. without the momentum encoder. Could the authors please include this ablation? If I missed it, I apologize. - Please add some measure of uncertainty (e.g. 95% confidence intervals) to Figure 3 (unless such intervals already exist and are too small to be seen, in which case, please note that in the figure caption). - The authors write "Elmoznino and Bonner (23) found that high dimensionality in ANN representations was associated with ability to both predict neural data." and later "Additionally, Elmoznino and Bonner (23) report that high intrinsic dimensionality (as measured with the participation ratio of the representation covariance) is correlated with a representation’s ability to predict neural activity." The claim that participation ratio is highly correlated with neural predictivity was noted concurrently by Schaeffer et al. 2022 "No Free Lunch from Deep Learning in Neuroscience" at an ICML 2022 workshop (albeit in mouse MEC on navigation tasks, rather than primate visual cortex on vision tasks) and published at NeurIPS 2022. Then, soon after, Tuckute et al. 
2022 "Many but not all deep neural network audio models capture brain responses and exhibit hierarchical region correspondence" released a preprint showing the same correlation between participation ratio and neural predictivity in human fMRI recordings of audio (Figure S6 in biorxiv v1; I'm too lazy to find the same plot in the more recent biorxiv v4). From an attribution perspective, I believe all three should be cited since they were all within ~1-3 months of one another. I also think citing all three will strengthen your paper since it points out the same phenomenon appears in multiple modalities (navigation, vision, audition), multiple species (rodent - can't remember mouse or rat, monkey, human), and multiple recording technologies. I realize that many of these "weaknesses" might be my own misunderstandings. If the authors could clarify, or lightly revise the paper to address these above points in a satisfactory manner, I would be happy to increase my score :) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Figure 5: The authors state "the InfoNCE loss employed in Chen et al. (12) benefits when negative pairs are as dissimilar as possible, which is achieved when the two points lie in opposite regions of the same subspace". If this is correct (and I believe it is), we should expect to see negative cosine similarities between distinct classes. But then the right subplot shows that SimCLR has positive cosine similarity (~0.3) between distinct classes. Could the authors please clarify? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very thorough review of our work! Below we address each point listed in the weaknesses: - *"Section 2.1 ..."*: Apologies that this was not expressed more clearly: each of the manifolds is assumed to have the same dimensionality and radius, but each has its own centroid. We will clarify in the revision. - *"The move from section 2.1 to section 2.2..."*: The jump to Equation (2) is a bit abrupt (we left these steps out due to space constraints). The capacity is a monotonic function of the nuclear norm in lines 114-115. We neglected to mention that its functional form is not well suited to gradient descent (evaluation requires numerical integration) but the monotonicity means it is largely irrelevant for the purpose of gradient-based optimization. We included the second term in Eq (2) because we were aiming to formulate a contrastive objective: an invariance term (the second) and a term to prevent representational collapse (the first). It was not initially obvious to us that the collapse prevention term (in tandem with the unit sphere constraint) naturally encourages invariance. We will revise the presentation to communicate the above sentiment “up-front,” and then modify Eq (2) to only include the first term. - *"I'm unsure whether I should be persuaded by the 2D example... "*: Thank you for carefully considering the implications of our proposed objective. We included the 2-D example to build intuition for two properties of the objective: it maximizes the angle between the centroids, and it also encourages augmentation manifold compression (indirectly, by maximizing the norm of centroids). When there are more centroids than dimensions, maximizing the nuclear norm encourages them to form a simplex equiangular tight frame (sETF) - note that this can only be achieved exactly for certain combinations of dimensionality and number of points. For the 2D case, they would be equally distributed around the unit circle. 
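These closed-form claims are easy to sanity-check numerically. The sketch below is our own illustration, not the authors' code (the helper names `nuclear_norm` and `on_circle` are ours): it verifies that for 3 unit-norm centroids in 2D the equiangular (120°) arrangement beats a configuration that duplicates one of two orthogonal centroids, while for 4 centroids the equally spaced arrangement and two duplicated orthogonal pairs tie at $2\sqrt{2}$, so the optimum is no longer unique once centroids outnumber dimensions.

```python
import numpy as np

def nuclear_norm(rows):
    """Sum of singular values of the matrix whose rows are the centroids."""
    return np.linalg.svd(np.asarray(rows), compute_uv=False).sum()

def on_circle(*angles):
    """Unit-norm centroids in 2D at the given angles (radians)."""
    return [[np.cos(a), np.sin(a)] for a in angles]

# 3 centroids in 2D: equal 120-degree spacing attains the maximal
# nuclear norm (2*sqrt(3/2) ~ 2.449); duplicating one of two
# orthogonal centroids does strictly worse (sqrt(2) + 1 ~ 2.414).
etf3 = nuclear_norm(on_circle(0, 2 * np.pi / 3, 4 * np.pi / 3))
dup3 = nuclear_norm(on_circle(0, 0, np.pi / 2))
assert etf3 > dup3

# 4 centroids in 2D: equal spacing and two duplicated orthogonal pairs
# both give a Gram matrix of 2*I, hence the same nuclear norm 2*sqrt(2).
spread4 = nuclear_norm(on_circle(0, np.pi / 2, np.pi, 3 * np.pi / 2))
dup4 = nuclear_norm(on_circle(0, 0, np.pi / 2, np.pi / 2))
assert np.isclose(spread4, dup4)
assert np.isclose(spread4, 2 * np.sqrt(2))
```

Since the rows are unit vectors, the squared singular values always sum to the number of centroids, so the nuclear norm is maximized when the singular values are equal; both 4-centroid configurations achieve that, which is the degeneracy discussed later in this thread.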
- *"Moreover, upon another read..."*: The most important property of our objective is that it incentivizes global high dimensionality without relying on expectations over large numbers of pairwise comparisons (as required by TiCo and many other SSL methods). We will clarify this in the revised text. - *"With all SSL papers..."*: Thanks for the suggestion - we agree this is worth reporting. The momentum encoder conferred a small advantage in terms of downstream classification performance:

| | 2 Views | 4 Views | 8 Views |
|---------------------|---------|---------|-------------|
| With Momentum Encoder | 69.5 | 71.4 | 72.1 |
| Without Momentum Encoder | 68.4 | 70.2 | 71.5 |

- *"Please add some measure of uncertainty..."*: Thank you for pointing out this omission. The figure does indeed have confidence intervals which are too small to be seen (due to the large sample size). We will add this information to the figure caption. - *"The authors write..."*: Thank you for the pointers - we agree that these articles should be cited as well! - *"Figure 5: The authors state"*: We are visualizing the cosine similarity at the output of the encoder network, which is the learned representation used for downstream tasks. Because the encoder is a ResNet-50, the outputs are rectified and the minimum possible similarity is zero. If we instead look at the outputs of the projector network (where the loss is applied), your intuition is correct: the distribution of centroid similarities for SimCLR for distinct classes peaks below zero (see the figure included in the pdf uploaded along with our general response; we match the setting of Figure 5 except that we instead examine the outputs of the projector for SimCLR). --- Rebuttal Comment 1.1: Comment: Hi! I'll read and respond to your rebuttal later, but I realized just now that a comment was not visible to the authors even though it was meant to be. 
I posted a comment during the review-writing stage, and I thought the authors weren't included under the comment's Readers because we were still in the review-writing stage, but I now see that hasn't changed, and so the authors might not have seen the comment. Consequently, I'm reposting the comment so that the authors can hopefully see it:

===============================

My comment in my review, asking how MMCR behaves in 2D with more than 2 centroids, prompted me to sit down and write a small simulation to understand the answer myself. Here, I'm assuming that all we need to focus on are centroids. The code is short and can be run locally on a personal machine in seconds.

```
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

# Define the nuclear norm as the sum of singular values
def nuclear_norm(matrix):
    u, s, v = torch.svd(matrix)
    return torch.sum(s)

def minimize_nuclear_norm(num_centroids: int) -> np.ndarray:
    # Generate N random vectors on the unit circle
    # Shape: (N, 2)
    centroids = torch.normal(mean=0, std=1, size=(num_centroids, 2))
    centroids = centroids / torch.norm(centroids, dim=1, keepdim=True)
    # Create a PyTorch variable to hold the matrix
    matrix = nn.Parameter(centroids)
    # Define the optimizer.
    optimizer = optim.Adam([matrix], lr=0.005)
    # Optimization loop
    for i in range(2500):
        optimizer.zero_grad()
        loss = -nuclear_norm(matrix)
        loss.backward()
        optimizer.step()
        # Put the row vectors back on the unit circle.
        matrix.data = matrix.data / torch.norm(matrix.data, dim=1, keepdim=True)
        print('Iteration: {}, loss: {}'.format(i, loss.item()))
    # Retrieve the optimized matrix
    matrix = matrix.detach().numpy()
    return matrix

for num_centroids in range(2, 8):
    print('N: {}'.format(num_centroids))
    optimized_matrix = minimize_nuclear_norm(num_centroids=num_centroids)
    plt.close()
    fig, ax = plt.subplots(figsize=(6, 6))
    for i in range(num_centroids):
        ax.plot([0, optimized_matrix[i, 0]], [0, optimized_matrix[i, 1]])
    ax.set_ylim(-1.1, 1.1)
    ax.set_xlim(-1.1, 1.1)
    ax.set_aspect('equal')
    plt.show()
```

My observations:

1. As the authors report, when there are 2 centroids, then indeed, the centroids move to be orthogonal to one another.
2. 3 centroids will occasionally be equally spaced around the unit circle OR will occasionally lie together within 1 quadrant of the unit circle.
3. With more centroids, e.g. 4 or 5, MMCR oftentimes places two or more centroids nearly or directly on top of one another.

I can't figure out how to upload images to OpenReview, so you may need to run the code yourself a few times to see these various outcomes. Obviously, this was just something quick and I have not quantified the behavior of MMCR over many runs, and I may have done something wrong. Have I done something wrong? If not, it appears that once MMCR has more centroids than dimensions, MMCR fails to distinguish a subset of centroids from other centroids. Could the authors comment on or explore this behavior in depth? This seems like a bad property for an SSL method to possess.

Edit: To follow up, suppose you have C centroids in N dimensions, in an optimal configuration, i.e. the nuclear norm is maximized. Suppose you then add one more centroid. Where is the optimal place to put the new centroid? If C>N, then adding the new centroid can't affect the rank, so you still have the same number of singular values to optimize. 
I think the answer is atop an existing centroid, because doing so has zero impact on the nuclear norm and the nuclear norm was previously optimal. Does this make sense or am I mistaken? If so, is stacking centroids one-atop-another desirable behavior? Title: Reposting comment that I somehow accidentally made invisible to the authors --- Reply to Comment 1.1.1: Comment: Thanks for the code and simulations. Our interpretation is that sETF solutions are optimal, but there are other configurations that achieve the same nuclear norm. For example, in the 2D case of 4 centroids, both the equally spaced solution and a duplicated pair of orthogonal vectors achieve the optimal loss. So the solution in this case comes down to initialization. But in spaces of reasonably high dimensionality, we believe these degenerate solutions become quite unlikely. To demonstrate, we examined mean and maximum absolute cosine similarity between optimized centroids, as a function of dimensionality and undercompleteness. We can no longer include the graphs in a PDF, but our code is attached below (it is only a slight modification of your simulation). Note that, for example, for 512-D (the dimensionality used in the paper) and 8x under-completeness, the centroids are all pairwise nearly orthogonal (max similarity of \~0.2 out of 4096 choose 2 pairs). 
```
import numpy as np
import matplotlib.pyplot as plt
import torch
import torch.nn as nn
import torch.optim as optim
from tqdm import tqdm

# Define the nuclear norm as the sum of singular values
def nuclear_norm(matrix):
    u, s, v = torch.svd(matrix)
    return torch.sum(s)

def minimize_nuclear_norm(num_centroids: int, dim: int) -> np.ndarray:
    # Generate N random vectors on the unit circle
    # Shape: (N, D)
    centroids = torch.normal(mean=0, std=1, size=(num_centroids, dim))
    centroids = centroids / torch.norm(centroids, dim=1, keepdim=True)
    # Create a PyTorch variable to hold the matrix
    matrix = nn.Parameter(centroids)
    # Define the optimizer.
    optimizer = optim.Adam([matrix], lr=0.005)
    # Optimization loop
    losses = []
    for i in tqdm(range(2500)):
        optimizer.zero_grad()
        loss = -nuclear_norm(matrix)
        loss.backward()
        optimizer.step()
        # Put the row vectors back on the unit circle.
        matrix.data = matrix.data / torch.norm(matrix.data, dim=1, keepdim=True)
        #print('Iteration: {}, loss: {}'.format(i, loss.item()))
        losses.append(loss.item())
    # Retrieve the optimized matrix
    matrix = matrix.detach().numpy()
    return matrix, losses

def get_pairwise_similarities(matrix):
    # matrix contains unit norm vectors
    sims = matrix @ matrix.T
    # extract the upper triangular part
    return sims[np.triu_indices_from(sims, k=1)]

dims = [2, 4, 8, 16, 32, 64, 128, 256, 512]
centroids_per_dim = [1, 2, 4, 8]

# sweep over dimensionality and number of centroids per dim
mean_abs_sims = []
max_abs_sims = []
for dim in dims:
    for cpd in centroids_per_dim:
        matrix, losses = minimize_nuclear_norm(num_centroids=dim * cpd, dim=dim)
        sims = get_pairwise_similarities(matrix)
        abs_sims = np.abs(sims)
        mean_abs_sims.append(np.mean(abs_sims))
        max_abs_sims.append(np.max(abs_sims))

fig, axs = plt.subplots(1, 2, figsize=(12, 5))
x = centroids_per_dim
for j, dim in enumerate(dims):
    axs[0].plot(x, mean_abs_sims[4 * j:4 * j + 4], label='D={}'.format(dim))
    axs[1].plot(x, max_abs_sims[4 * j:4 * j + 4], label='D={}'.format(dim))
axs[0].legend()
axs[0].set_xlabel('Centroids Per Dimension')
axs[1].set_xlabel('Centroids Per Dimension')
axs[0].set_ylabel('Absolute Cosine Similarity')
axs[1].set_ylabel('Absolute Cosine Similarity')
axs[0].set_title('Mean Absolute Cosine Similarity')
axs[1].set_title('Max Absolute Cosine Similarity')
```

--- Rebuttal Comment 1.2: Title: Reviewer moiT response to Author Rebuttal Comment: Hi all! Thank you for responding to my review! I appreciate you answering my questions. The clarifications make sense and I appreciate your disentangling the effect of the momentum encoder. Whether MMCR is significantly/sufficiently different from other SSL methods (e.g. TiCo, Wang et al. 2020's alignment+uniformity) still isn't exactly clear to me. The statement "The most important property of our objective is that it incentivizes global high dimensionality without relying on expectations over large numbers of pairwise comparisons" seems dubious since I would guess that one could rewrite those other loss functions by reorganizing some sums. For instance, in TiCo, instead of pairwise comparisons, one could first compute means, then perform pairwise dot products of means. 
I haven't thought this through, but I no longer think this detail matters, as I explain below. My thinking about the review process also somewhat changed during ICML. I'm now of the opinion that most reviewers (perhaps myself included) are too high variance to be relied upon, and that perhaps a better process is just for reviewers to decide whether a paper is sufficiently well put together to be shared with others, so that the community can then sort out the paper's importance. With my new view, I think that this paper is definitely worth sharing and so I recommend that it should be accepted. To me, the value of this paper is showing that previous geometric manifold capacity analysis methods can be converted to a practical SSL method, plus many other contributions: a theoretical result for the algorithm, comparison with biological data, empirical study of how the algorithm differs from other SSL algorithms, eigenspectrum decay. I've increased my score. --- Reply to Comment 1.2.1: Comment: We are happy to clarify: - *"It seems odd to say that sETF solutions are optimal..."*: Point taken: we will describe the non-uniqueness of the optima in the revision. - *"Is that true? I would think that dimensionality..."*: Both the dimensionality and the number of centroids are relevant, and you are correct that when the number of centroids is larger than the dimensionality of the space, replicating centroids can be an equally valid solution to the optimization problem. However, the geometry of high dimensional spaces is such that randomly selected directions are nearly orthogonal to each other with high probability. Because of this, as you move into higher dimensional spaces it becomes more likely that gradient descent chooses a solution that consists of approximately orthogonal centroids than one that replicates centroids. 
This stance is supported by the simulations above, where we show that in a 512-D space, optimizing 4096 centroids to maximize the nuclear norm leads to a solution where no pair of centroids is very similar to each other. - *"To make sure I understand,..."*: By undercomplete we meant the case where there are more centroids than dimensions; we apologize for the ambiguity. In terms of a good null distribution for comparison, a reasonable choice would be the uniform distribution on the unit sphere. Randomly sampling 4096 512-D unit norm vectors yields a distribution of (absolute) pairwise similarities with a maximum of ~0.2. From this we can conclude that optimizing the nuclear norm in this setting does not lead to centroid vectors overlapping any more than one would expect under a uniform distribution. Title: Clarifications on Toy Simulations --- Reply to Comment 1.2.2: Comment: Thank you for considering the contributions of our work; we are in complete agreement with your assessment (which is closely aligned with our summary in lines 40-50 of the submission).
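The uniform-on-the-sphere null distribution invoked in this exchange can be checked in a few lines. This is our own sketch, not part of the original thread, assuming only NumPy: it samples 4096 unit vectors in 512-D and reports the maximum absolute pairwise cosine similarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4096 unit-norm vectors drawn uniformly on the 512-D sphere
# (normalizing i.i.d. Gaussian samples gives the uniform distribution).
X = rng.standard_normal((4096, 512))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# All pairwise cosine similarities; zero the diagonal (self-similarity = 1).
S = X @ X.T
np.fill_diagonal(S, 0.0)

# For random directions in high dimensions the maximum absolute pairwise
# similarity is small -- on the order of 0.2 here, matching the ~0.2
# quoted for the optimized centroids.
max_abs_sim = np.abs(S).max()
print(max_abs_sim)
```

The individual similarities concentrate near zero with standard deviation $1/\sqrt{512} \approx 0.044$, so even the maximum over roughly 8.4 million pairs stays near 0.2, which is the comparison the reply relies on.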
Online (Multinomial) Logistic Bandit: Improved Regret and Constant Computation Cost
Accept (spotlight)
Summary: This paper proposes a computationally efficient algorithm for the multinomial logistic bandit with improved regret and computation cost. The algorithm uses online mirror descent and a new approximation to efficiently compute a consistent estimator and construct an optimistic reward. Experimental results show the improved regret and computational cost of the proposed algorithm. Strengths: The contributions are clear, concrete, and well-presented. Improving the regret and computation cost and extending the binary case to the multinomial one is significant. Applying online mirror descent, a novel approximation, and a novel construction of the optimistic reward to reduce the computation cost is novel and significant in contextual bandits under the parametric model. Weaknesses: 1. Discussion and experimental results on whether there is a trade-off to gaining efficiency would be helpful. 2. In the experimental results, while the reduction of the computation cost is significant, the reduction of regret seems minor. More experiments are needed to validate the improvement of the proposed algorithm in terms of regret. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Are there any trade-offs between the computational efficiency and the accuracy of the estimator? 2. Could you explain why the slope of the regret of MLogB increases in Figure 1 (c)? 3. If the number of iterations $T$ increases, can the proposed algorithm find a better policy to reduce the increment of the regret? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. This work is mostly theoretical and potential negative societal impact is unseen.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation of this paper and the insightful comments! In the following, we will address your questions and will improve the paper according to your suggestions. --- **Q1:** Discussion and experimental results on whether there is a trade-off to gaining efficiency would be helpful. **A1:** Thank you for the insightful question. There is indeed a trade-off between computational efficiency and the quality of the estimator. This trade-off can be observed by comparing our update rule $W_{t+1} = \arg\min\_{W\in\mathcal{W}}\langle \nabla\ell_t(W_t),W\rangle + \frac{1}{2}\Vert W-W_t\Vert_{\tilde{H}\_t}^2$ with the rule $W_{t+1} = \arg\min\_{W\in\mathcal{W}}\ell_t(W) + \frac{1}{2}\Vert W-W_t\Vert_{H_t}^2$ used in [10]. The former is known as standard online mirror descent (OMD), while the latter is referred to as implicit OMD, which updates on the original loss function instead of its linearized approximation. In the online learning literature, both algorithms attain the $O(\sqrt{T})$ minimax regret bound. However, implicit OMD has been observed to enjoy superior empirical performance [1] and has been proven to perform better than OMD when $\{\ell_t\}_{t=1}^T$ is stable [2]. The price for the better performance of implicit OMD is increased computational complexity, as it has to solve a convex optimization problem, which requires $O(\log T)$ time complexity. Our experiments (especially the updated ones during the rebuttal phase) further demonstrate this trade-off, as the performance of our algorithm is slightly inferior to ada-OFU-ECOLog proposed by [12], but with improved computational efficiency. In the next version, we will add more discussion on the trade-off between computational efficiency and statistical efficiency. **Ref:** [1] Kulis and Bartlett. Implicit online learning. ICML 2010. [2] Campolongo and Orabona. Temporal variability in implicit online learning. NeurIPS 2020.
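As an illustration of the two update styles contrasted in A1, here is a hedged toy sketch on a binary logistic loss (our own code, not the paper's implementation; the projection onto the constraint set $\mathcal{W}$ is omitted, and a fixed matrix `H` stands in for the preconditioners $\tilde{H}_t$ / $H_t$):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def omd_step(w, x, y, H):
    """Standard (linearized) OMD: a single closed-form step in the H-norm."""
    grad = (sigmoid(x @ w) - y) * x        # gradient of the logistic loss at w
    return w - np.linalg.solve(H, grad)

def implicit_omd_step(w, x, y, H, n_inner=50, lr=0.1):
    """Implicit OMD: minimize the original loss plus the proximal term,
    here via inner gradient descent (hence the extra per-round cost)."""
    v = w.copy()
    for _ in range(n_inner):
        v -= lr * ((sigmoid(x @ v) - y) * x + H @ (v - w))
    return v

d = 5
rng = np.random.default_rng(1)
w0 = np.zeros(d)
x = rng.standard_normal(d)
x /= np.linalg.norm(x)                     # unit-norm arm feature
H = np.eye(d)
w_omd = omd_step(w0, x, 1.0, H)            # one cheap linearized update
w_iomd = implicit_omd_step(w0, x, 1.0, H)  # costlier inner optimization
```

Both updates move the iterate toward the observed positive label, but the implicit step solves an inner optimization every round, which is exactly the extra computation the trade-off discussion above refers to.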
--- **Q2:** Could you explain why the slope of the regret of MLogB increases in Figure 1 (c)? **A2:** We think there are two reasons: the first is that the time horizon $T$ could be somewhat short to reflect sublinear regret. Even in the binary case, the contenders (OFU-MLogB, LogUCB1 and OL2M) appear to exhibit linear regret within $T=1200$. The second reason might be that the results are averaged over 20 trials with randomly selected arm sets. As shown in the additional experiments, the performance of all contenders is highly affected by the shape of the arm set, so the reported averaged result could have a high variance. To better illustrate the performance of our algorithm, we have conducted additional experiments with a longer time horizon $T=6000$ and a fixed arm set. Due to the limited time, these experiments cover only the binary case; we are happy to provide more results for the multinomial case later. Please refer to the global response for more details. Thanks! --- **Q3:** If the number of iterations $T$ increases, can the proposed algorithm find a better policy to reduce the increment of the regret? **A3:** Thanks for the suggestion. We have conducted additional experiments with a longer $T$ to test our method. Please kindly refer to the global rebuttal for more details. Thanks! --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed and helpful responses and additional experiment results. I will change my score from 6 to 7 accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful comments and for updating the score! In the next version, we will update the paper based on your suggestions and the discussion in the rebuttal phase. Specifically, we will add a discussion on the trade-off between statistical efficiency and computational efficiency in Section 3. We will also reorganize the experimental section to present the comparison using fixed arm sets with different random seeds and a longer time horizon.
Summary: This paper considers Multinomial Logistic bandits (the extension of the usual logistic bandit model to a setting where there are more than two possible outcomes - e.g. in advertising one picks an action in response to a context and may observe 'no click'/'click'/'save for later'/etc instead of just 'no click'/'click'). This problem has been previously considered in work such as Amani and Thrampoulidis (2021) where an optimistic algorithm with near optimal dependence on T (up to logarithmic factors) has been derived, but the focus of this paper is on algorithms with improved computational efficiency, and optimal dependence on the parameter $\kappa$. Dependence on this parameter $\kappa$ has been the focus of a series of recent papers on bandit problems involving the logistic function, as many algorithms with an optimal dependence on $T$ can nonetheless have a poor dependence on $\kappa$ which in turn brings an exponential dependence (in terms of worst-case performance) on other problem parameters. The present paper first improves the MNL-UCB algorithm of Amani and Thrampoulidis by providing a sharper tail-inequality for the MLE under the multinomial model, before providing the main contribution: the OFU-MLogB algorithm, and analysis of its regret and efficiency. The proposed algorithm avoids costly inference by replacing maximum likelihood estimation with online mirror descent, and avoids costly optimisation for the confidence set via an alternative construction of reward. The result is an algorithm with no leading order dependence on $\kappa$ and $O(1)$ computational cost per round. Strengths: This is a welcome contribution for the literature on contextual bandits. Improvements regards to dependence on $\kappa$ have been important steps forward in this area for various logistic bandit problems in recent years and this development is also likely to be impactful and useful. I find the paper to be well researched and written. 
There are a few minor points as discussed below, but I think the paper is generally thoughtfully structured. For instance, I liked how the OFU-MLogB algorithm was motivated in terms of certain concerns about efficiency, and these same headings were used to frame the discussion of the algorithm's structure. As far as I can tell the theoretical results are derived correctly, and the analytical work is sound. I have not had as much time as one would like to be 100% confident in this, so this does impact my confidence score. Weaknesses: I don't have too much to criticise in terms of the originality and significance of the paper: its contribution is an improvement upon flawed but still somewhat practically useful algorithms, so it is not a revolutionary one, but I think it is nonetheless meaningful and appropriate for NeurIPS. I have a few points for clarification regarding the writing and experimental work, and points that I think would benefit from extra detail, which I describe below; if these are adequately addressed I should be able to increase my score following rebuttal. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - What do you see as the utility of the algorithm presented in Section 2.3 over OFU-MLogB? If there is not one particularly, and it is just a way to introduce important concepts and provide a correction to [12] and/or state an improved concentration result, potentially of independent interest, can you make that clearer from the outset? - I think Theorem 2 is slightly informal: I guess that you mean the sequence of arms selected by the algorithm using decision rule (5) achieves this regret, but how is this algorithm initialised? What is its first selection, or selections until such a point as the parameter estimates are well defined? - Which MNL-UCB are you testing in the experiments? The original from [12] or the improved one analysed in Theorem 2? - The confidence bands on the regret presented in the Appendix are very wide.
They suggest less of a statistically significant difference than the plots in the main body. I think you need to move a discussion of this to the main text, and offer some discussion as to whether the ordering of the algorithms remains stable across trials, or whether that is random also. I feel that this experiment probably could have been better designed, and would be yet more satisfied if it was possible to provide a more extensive experiment from which more meaningful conclusions could be drawn. - Minor: at line 167, I don't think you ever really define 'feedback number' as a piece of terminology. I suggest that you should. - Minor: at line 255, should say 'condition as Theorem 3' - Minor: at line 67, it may be worth making clear that MNL-bandits (which people could confuse this model with) specifically will be addressed in Appendix A, it was something I was concerned was missing until I got to the end. - Minor: at line 312: radio -> radius Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Appropriate for the nature of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive evaluation and constructive comments on this paper! In the following, we will address your questions. We will further improve the paper according to your suggestions. --- **Q1**: What do you see as the utility of the algorithm presented in Section 2.3 over OFU-MLogB? **A1**: Thank you for the helpful comments! As you have mentioned in the review, we regard the jointly efficient algorithm OFU-MLogB as the main contribution of this paper. The purpose of Section 2.3 is first to introduce the important concepts of the logistic bandit and then state the improved concentration results that can be potentially useful. In the next version, we will highlight the utility of Section 2.3 at the beginning of the section. Thanks! --- **Q2:** I think Theorem 2 is slightly informal: I guess that you mean the sequence of arms selected by the algorithm using decision rule (5) achieves this regret, but how is this algorithm initialised? What is its first selection, or selections until such a point as the parameter estimates are well defined? **A2:** Thank you for pointing this issue out. For the first round, the learner can randomly select any arm from the decision set $\mathcal{X}$. After that, the decision rule (5) is well-defined. We will provide a more formal description of the algorithm in the revision. --- **Q3:** Which MNL-UCB are you testing in the experiments? The original from [12] or the improved one analysed in Theorem 2? **A3:** We implemented the original algorithm from [12]. As discussed in Q1, the purpose of the algorithm in Section 2.3 is to first introduce important concepts for the MLE-based logistic bandit method and then to bring the issue of the dependence on $K$ to the community's attention. Therefore, we initially did not consider the algorithm in Section 2.3 as a contender to be tested in the experiments.
However, we admit that an empirical comparison between MNL-UCB and the improved version could further help us understand how significant the improvement on $K$ will be. We will take this as future work. --- **Q4:** The confidence bands on the regret presented in the Appendix are very wide... a more extensive experiment from which more meaningful conclusions could be drawn. **A4:** Thank you for the thoughtful comment! During the rebuttal phase, we realized that there could be a better way to organize our experimental results. In the original experiments, we randomly selected the arm set for each set of 20 trials, leading to a high variance in the results. We have provided a new version of our experiments for the binary case, in which we still run the algorithm for 20 trials, but the arm set is fixed. Please kindly refer to the PDF file attached to the global response for more details. --- **Q5:** it may be worth making clear that MNL-bandits (which people could confuse this model with) specifically will be addressed in Appendix A **A5:** Thank you for the suggestion. In the next version, we will include part of the related-work discussion on the MNL setup in the main text. --- **Q6:** other comments on the minor issues: **A6:** We sincerely thank the reviewer for the detailed check. We will revise the paper accordingly. --- Rebuttal Comment 1.1: Title: Response to Rebuttal (increasing score) Comment: Thanks for the systematic response to my questions and suggestions, and your willingness to undertake edits. It was particularly helpful that you were clear about where and how you will make changes - thank you. I will increase my score and confidence score on the basis of this response. --- Reply to Comment 1.1.1: Comment: Thank you for the insightful comments and for updating the score! In the next version, we will update the paper based on your suggestions and the discussion during the rebuttal phase.
Specifically, we will edit Section 2 to ensure that its main purposes are properly highlighted, and add a more detailed algorithmic description for Theorem 2. In the experimental part, we will present the results with a fixed arm set for different random seeds instead of averaging all of them. We will also move some parts of the related work discussion, particularly on MNL, to the main text.
Summary: This paper considers a generalization of the (binary) logistic bandit problem to the multinomial setting, significantly improving the state of the art for that problem and even for the special binary case. For the binary logistic bandit problem, many (earlier) proposed algorithms were not optimal; their regret bounds depended on a potentially large problem-dependent parameter $\kappa$ capturing the nonlinearity of the reward function. Recent works were able to remove the regret dependence on that parameter $\kappa$, though they had high per-round computation (depending on the horizon $T$). The authors propose an algorithm that (up to $\log$ terms) achieves optimal regret bounds even when specialized to the well-studied binary logistic bandit setting, with constant per-round computation. Furthermore, compared to prior work for the more general multinomial setting, they not only obtain better regret (independent of the parameter $\kappa$) but do so with faster per-round computation (constant instead of $O(T)$). Strengths: - For the multinomial setting, the proposed algorithm significantly improves on the state of the art both in terms of regret bounds (esp. removing the dependence on $\kappa$) and computation (from $O(T)$ per-round computation to $O(1)$). In simple experiments, this is backed up with a (slight) reduction in empirical cumulative regret but much faster run-time. - Impressively, the proposed algorithm even improves on the state of the art for the (binary) logistic bandit setting. Among computationally efficient ($O(1)$ per-round complexity) methods it has better regret bounds (also, in simple experiments, significantly lower empirical cumulative regret with competitive run-time). Compared to ada-OFU-ECOLog the results are more nuanced (the proposed method has better regret for small horizons but worse regret for large horizons; it takes half as long for a horizon of $T=1200$).
- The algorithm and regret analyses do build on recent advances in (binary) logistic bandits, but non-trivially so. - Overall I found the writing clear and well-organized. Table 1 and the remarks throughout comparing and contrasting to closely related works were helpful. Weaknesses: #### Major I do not have any major concerns. #### Minor - Appendix A includes discussion on differences between the problem set up of multinomial logistic bandits considered in this paper and the problem of multinomial Logit (MNL) bandits which has been considered in several recent works. This is only a minor concern, but given close connections of both to binary logistic bandit problems, a slightly more detailed discussion of similarities (if any) with OFU based methods and analyses (like with [33] Agrawal et al.) could be helpful. - There is an experimental design based approach for the binary setting (Mason et al. “An Experimental Design Approach for Regret Minimization in Logistic Bandits” in AAAI 2022) that I don’t think is cited. I did not look carefully into it regarding time complexity so do not know whether it would be competitive with (ada)-OFU-ECOLog or not. - For the experiments, the $\sqrt{T}$ dependence does not kick in for the horizon $T=1200$ (i.e. the regret appears linear). For Fig 1(a) ada-OFU-ECOLog is initially worse but its regret is visibly concave and catches up around $t=800$. I would suggest additional experiments with a longer horizon (maybe just with ada-OFU-ECOLog and OFU-MLogB) to examine how long OFU-MLogB exhibits near-linear regret (and consequently how big that regret gap can grow). Just to be clear, I think it is impressive that OFU-MLogB is better for smaller $t$ and $t$ on the scale of $10^4$ or $10^5$ might be unrealistic for applications, but the linear regret is a bit concerning. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: #### Minor editing suggestions - For Table 1, I'd suggest including a comment in the caption about $\kappa$ and $\kappa_*$ (in addition to the later discussion in the main text). - In the problem formulation, explicitly mention whether there are any assumptions about $\mathcal{X}$ (finite, convex, etc.). - I'd suggest re-ordering lemmas in C.1.2 (the proof of Lemma 6 depends on Lemma 7, and the proof of Lemma 7 depends on Lemma 8). - For experiments, does MNL-UCB reduce to LogUCB1? If not, I'd (mildly) suggest including it in Figs 1 (a)-(b) for further reference of how well state-of-the-art multinomial methods perform against methods specifically designed for the binary setting. #### Spelling, grammar - Line 52 argmax subscript '$x \in W$' - line 276 'linearlized' - line 289 'logistc' - line 312 'radio' - Line 953 'trails' - line 537 'on the an' - Line 571 'anlaysis' - there were more I did not bother to list Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: It is fine Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your great appreciation and for bringing the related work to our attention! In the following, we will address your questions. We will further improve the paper according to your suggestions. --- **Q1**: a slightly more detailed discussion of similarities (if any) with OFU based methods and analyses (like with [33] Agrawal et al.) could be helpful. **A1**: Thank you for the constructive comments. We provide a more detailed comparison between the MNL and MLogB problems as follows. - From the problem-setup view, both settings utilize the multinomial logistic regression model to capture the probability of feedback, and both can be seen as generalizations of the binary logistic bandit problem. However, the main difference is in the mechanism that generates the feedback. The MLogB problem considers the case where there could be multiple feedback outcomes for the selected arm, while the MNL problem focuses on situations where the algorithm can submit multiple arms but the feedback is binary. - From the algorithm-design view, the optimism in the face of uncertainty (OFU) principle can be applied to both settings. However, the different problem setups lead to different algorithm-design details and challenges. For instance, when analyzing the concentration property of the parameter estimator, one of the main challenges of MLogB is to handle the multinomial random noise $\epsilon_t\in\mathbb{R}^{K}$, while MNL has to tackle the potentially dependent arm set at each iteration. - The previous work [33] proposed a UCB-type algorithm with an improved $\tilde{O}(\sqrt{T})$ bound for the MNL problem. Apart from the differences in problem setup, the main distinction between our work and [33] is that the previous work still employs an MLE-based algorithm, requiring $O(T)$ time complexity to solve the optimization problem.
We believe extending our algorithm to the MNL setting to achieve the $\tilde{O}(\sqrt{T})$ bound with constant computational cost would be an interesting direction. --- **Q2**: There is an experimental design based approach for the binary setting. **A2:** Many thanks for sharing the paper! We were not aware of it and will add the result to Table 1 for a more comprehensive comparison. In that paper, the experimental-design approach has to solve a min-max optimization problem to select the arm (line 12 of Algorithm 2), and it still seems unclear how to solve this min-max optimization problem efficiently. --- **Q3**: I would suggest additional experiments with a longer horizon (maybe just with ada-OFU-ECOLog and OFU-MLogB) to examine how long OFU-MLogB exhibits near-linear regret. **A3**: Thanks for the suggestion. We agree that the time horizon is somewhat short in the current experiments. We have conducted additional experiments with a longer $T$ to test our method. Please kindly refer to the global rebuttal for more details. Thanks! --- **Q4**: For experiments, does MNL-UCB reduce to LogUCB1? **A4**: Yes, MNL-UCB can be seen as a counterpart of LogUCB1 for the MLogB problem. We will make this clear in the revision. --- **Q5:** about other editing suggestions and grammar: **A5:** We sincerely thank the reviewer for the detailed check of our paper. We will revise the paper according to your suggestions. Many thanks!
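For readers less familiar with the setting, the multinomial logistic feedback model discussed in A1 above can be sketched as follows (a hedged illustration of the standard parameterization with $K$ non-baseline outcomes and the baseline logit fixed at 0; the paper's exact conventions may differ):

```python
import numpy as np

def mlogb_probs(W, x):
    """Probabilities of the K + 1 feedback outcomes for arm feature x,
    with W in R^{K x d} and the baseline outcome's logit fixed at 0."""
    logits = np.concatenate(([0.0], W @ x))
    logits -= logits.max()            # shift for numerical stability
    p = np.exp(logits)
    return p / p.sum()

rng = np.random.default_rng(0)
K, d = 3, 4
W = rng.standard_normal((K, d))       # unknown parameter matrix
x = rng.standard_normal(d)            # selected arm's feature vector
p = mlogb_probs(W, x)                 # distribution over K + 1 outcomes
feedback = rng.choice(K + 1, p=p)     # one multinomial feedback per round
```

This is the sense in which a single selected arm can produce one of several feedback outcomes, in contrast to the MNL setting where multiple arms are submitted but the observed feedback is binary.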
Summary: This paper addresses multinomial logistic bandits, whose feedback has multiple choices. The paper improves the regret bound in terms of $K$ and reduces the computation cost to constant complexity with respect to $T$. In experiments, the results support that the proposed method is much faster than the prior work. Strengths: **Computational Complexity** This paper reduces the time complexity to $O(K^{3}d^{3})$, which is independent of $T$. The experimental results also support that computational time is dramatically reduced in practice. **Regret Bound** The regret bound of the proposed method is $O(K\sqrt{T})$. This result is a substantial improvement over the best-known regret bound of $O(K^{5/4}\sqrt{\kappa T})$, where $K$ is the number of feedback values and $\kappa$ is a constant that increases exponentially in terms of the diameter of the parameter domain. The proof seems to be correct. Weaknesses: **Lower Bounds** Improving the regret bound with respect to $K$ seems like a minor contribution. In particular, since there is no comparison with a lower bound on the regret, it is unclear if the improvement in the regret bound with respect to $K$ is theoretically significant. **Novelty of Analysis Techniques** Most of the techniques used in this paper are quite similar to those for generalized linear bandits (GLB). In particular, the proof scheme of Theorem 1 is almost the same as for logistic bandits or GLB. Furthermore, in Theorem 1, Lemma 6 plays a crucial role; however, as mentioned in the Appendix, the difference from previous research lies solely in the bound on the norm of the error. The remaining parts of the proof of Theorem 1 heavily depend on Abbasi-Yadkori et al. (2011). It cannot be considered a significant theoretical contribution. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: **Questions** 1. What is the memory complexity of each algorithm?
While this paper reduces the algorithm's time complexity, it may require more memory. Since there is a general trade-off between time complexity and memory complexity, it would be better to discuss the memory complexity of this algorithm (and the others). 2. Does this algorithm match the lower bound? Is there any analysis of the lower bound for this setting? 3. What causes the improvement of the regret bound? A new analysis scheme, or finding better parameters? If I correctly understand the paper, the improvement of the regret bound is just caused by changing the condition on $\epsilon$ from $\| \epsilon \| _{2} \leq \sqrt{K}$ to $\|\epsilon \| _{1} \leq 2$. **Minor comments** Table 1: the column "Constant Cost" is unnecessary since we can check it in the column "Cost Per Round" line 112: $\textnormal{diag}(z)$ -> $\textnormal{diag}(\sigma(z))$ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: Increasing the memory complexity might be a limitation of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
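The condition change the reviewer mentions in Question 3 rests on a simple fact: if the feedback $y$ is one-hot over the outcomes and $p$ is the predicted probability vector, then the noise $\epsilon = y - p$ satisfies $\|\epsilon\|_1 = 2(1 - p_{\text{chosen}}) \le 2$ uniformly in $K$, whereas the generic $\ell_2$ bound grows with $K$. A quick numerical check (our own sketch; we use $K+1$ outcomes purely for illustration, and the paper's exact noise definition may differ):

```python
import numpy as np

# Empirically verify ||y - p||_1 <= 2 for one-hot y and probability vector p.
rng = np.random.default_rng(0)
K = 50
l1_norms = []
for _ in range(1000):
    p = rng.dirichlet(np.ones(K + 1))   # random probability vector
    y = np.zeros(K + 1)
    y[rng.integers(K + 1)] = 1.0        # one-hot feedback
    l1_norms.append(np.abs(y - p).sum())
max_l1 = max(l1_norms)
print(f"max ||eps||_1 over 1000 trials: {max_l1:.4f}")  # never exceeds 2
```

The bound holds with equality in the limit where the realized outcome had vanishing predicted probability, which is why it cannot be improved below 2 but also never grows with the number of outcomes.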
Rebuttal 1: Rebuttal: Thank you for the careful review. In the following, we will first highlight the technical contribution of the paper and then address your questions. If your concerns have been properly addressed, please consider updating your score for this paper. Thanks! --- **Q1:** novelty of analysis techniques **A1:** We would like to note that there are two parts of the paper: - an inefficient algorithm with improved dependence on $K$ (Section 2.3) - an efficient algorithm with further improved dependence on $\kappa$ (Section 3). For the first part of the paper, we agree with your comments. While we believe the improvement on $K$ is worthy of the community's attention, we recognize that the result is somewhat built upon standard analysis from the existing logistic bandit literature, with refined conditions on $\epsilon_t$. The result is not considered a main contribution of this work, which is why we chose not to include it in the Introduction section. The main contribution of this paper lies in **the second part**, where we develop novel efficient algorithms for both the binary and multinomial logistic bandit problems with improved bounds. Developing a computationally efficient algorithm that also maintains statistical efficiency is a non-trivial task. Most previous methods [8,9,12] with improved dependence on $\kappa$ crucially rely on the inefficient MLE estimator. The only jointly efficient algorithm was developed for the binary case and can hardly be extended to the multinomial setting. Additionally, even in the binary case, our method improves the computational efficiency of [10] from $O(\log T)$ to $O(1)$. We have highlighted the technical challenge and our contribution in Remark 2, with more details explained in Section 3.3. In summary, we recognize the reviewer's concern about the novelty of the first part but feel the second part, our main contribution, has been somewhat overlooked. This might be because we have spent slightly more pages than expected on the first part.
In the next version, we will consider your comments and further emphasize the main contribution of the paper. --- **Q2:** What is the memory complexity of each algorithm? **A2:** At every iteration, our method requires maintaining the matrices $H_t,\tilde{H}\_t,\nabla\sigma(W)$, the single-round data point $(x,y)\in \mathbb{R}^d\times\mathcal{Y}$, and the model $W\_{t+1}\in\mathbb{R}^{K\times d}$. Therefore, the storage complexity of our algorithm is $O(K^2d^2)$, a constant in terms of $T$. In the binary case, the storage complexity becomes $O(d^2)$, which is the same as that of the jointly efficient algorithm [10]. However, the situation is different for the MLE-based algorithms [8,9,12]. These methods require storing all historical data to solve the optimization problem, leading to a storage complexity of $O(dt)$ at round $t$. Additionally, an MLE-based algorithm must store the matrix $H_t^{-1}(W)\in\mathbb{R}^{Kd\times Kd}$ to perform the optimistic rule, as dictated by equation (4). Therefore, the total storage complexity of the MLE-based method amounts to $O(dt + K^2d^2)$. In summary, the storage complexity of our algorithm is the same as that of [10] and substantially improves on the MLE-based methods [8,9,12]. Thank you for the comments. We will add more discussion on the storage comparison in the revision. --- **Q3:** Does this algorithm match the lower bound? Is there any analysis for the lower bound of this setting? **A3:** Since the binary logistic bandit problem is a special case of the multinomial logistic bandit, the lower bound established in the binary case also holds for the multinomial case. First, as outlined in Remark 3, the minimax optimal rate for the binary case is $O(\sqrt{T/\kappa_*})$, as established by [9]. Our method achieves this rate up to logarithmic factors. Besides, in Remark 3, we also provide a discussion on the tightness of the bound in terms of $\kappa$ for the multinomial case.
While we admittedly found it challenging to achieve the $O(\sqrt{T/\kappa_*})$ bound in the multinomial case, our algorithm already achieves **the best known result** for the problem. It is an intriguing open question whether the $O(\sqrt{T/\kappa_*})$ regret bound is achievable in the MLogB problem. Furthermore, as mentioned in Remark 1, the optimality of the $O(K)$ dependence is discussed in [12] (the paragraph under Theorem 3), which demonstrates the tightness of our bound. --- **Q4:** What causes the improvement of the regret bound? The new analysis scheme, or finding better parameters? If I understand the paper correctly, the improvement of the regret bound is just caused by changing the condition... **A4:** As discussed in response to Q1, the main contribution of this paper is an **efficient algorithm** with an improved bound. The improvement crucially relies on novel analyses, which we highlight in Remark 2. First, to achieve the improved dependence on $\kappa$ in the MLogB problem, we introduce a novel intermediary prediction that helps us prove a tight confidence set for the efficient online estimator. Second, to reduce the per-round computational complexity of the algorithm from $O(\log T)$ to $O(1)$, we carefully exploit a negative term in the analysis, which eliminates the requirement of learning with the original loss, thus speeding up the algorithm. Thank you for the comment. We will further highlight the relevant parts to make the technical contributions of this paper more accessible. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I am sorry for the late response and thank the authors for their thoughtful replies. After reading the rebuttal, I agree that the improvements in both computational complexity and memory complexity compared to existing methods are noteworthy. I think using online mirror descent updates plays a crucial role. Recognizing sufficient contributions, I have raised my score.
One last concern: it might be necessary to mention in the main paper that the regret improvement is due to a special parameter choice rather than a new analysis technique. This clarification would help future researchers easily understand this aspect. --- Reply to Comment 1.1.1: Comment: Thank you for the helpful comments and for updating the score! In the next version, we will revise the paper based on your suggestions and the discussion in the rebuttal phase. During the rebuttal phase, we realized that the current writing of Section 2 does not accurately reflect its purpose: to first introduce the important concept of logistic bandits and then bring the issue of $K$ to the community's attention. In the next version, we will clarify the purpose of Section 2 at its beginning. We will also revise Remark 1 in Section 2 to clearly convey that the improved $K$ bound for the MLE estimator mainly comes from a better upper bound for the noise, while the analyses are largely based on existing techniques in the logistic bandit literature.
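The discussion above credits the online-mirror-descent-style update for the $O(1)$ per-round cost. As a rough illustration only (not the paper's exact algorithm — the function names, the fixed step size, and the plain single gradient step are all assumptions), a constant-cost online update on the binary logistic loss looks like:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def online_logistic_step(w, x, y, eta):
    """One online gradient step on the binary logistic loss
    l(w) = -y*log(s) - (1-y)*log(1-s), with s = sigmoid(w.x).
    Cost per round is O(d); no historical data is stored."""
    s = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
    grad = [(s - y) * xi for xi in x]  # gradient of l at w
    return [wi - eta * gi for wi, gi in zip(w, grad)]
```

The contrast with an MLE-based update is that the latter re-solves an optimization over all past pairs $(x_s, y_s)$ each round, which is where the $O(dt)$ storage and $O(T)$ per-round cost discussed in A2 come from.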
Rebuttal 1: Rebuttal: We express our heartfelt thanks to all reviewers for their careful review and constructive feedback. After carefully considering the comments from reviewers XVWd, Rks7, 6tgu, and ojs3, we conducted additional experiments to support the effectiveness of our algorithm with a more suitable organization of the results. The results are presented in the attached PDF. **Setup for the original experiments**: In Section 4, we report the average performance of all compared algorithms over 20 trials. As detailed in Appendix E (we apologize; this should have been mentioned in the main text), the arms are randomly sampled in each trial of our experiments, which leads to results with inherently large variance. **Setup for the additional experiments**: During the rebuttal, to more accurately present the empirical performance of the compared algorithms, we reorganized the experiments by **fixing the arm set** for the 20 trials. In the attached PDF, we report the mean and variance of the compared algorithms across these 20 trials, under 6 different configurations of the randomly generated arm set. Additionally, we extended the experiment to a longer time horizon with **$T=6000$**. The results show that OFU-MLogB is comparable to (slightly worse than) the state-of-the-art jointly efficient algorithm for the binary case (ada-OFU-ECOLog), but achieves much better performance than O2LM. Due to time constraints, we were only able to conduct the experiments for the binary case. We are happy to provide additional results for the multinomial case at a later date. Thanks! Pdf: /pdf/5e4c9c56396f0139ccb9dfc79e9b1ed89e8a3511.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper examines multinomial logistic bandits (MLogB), a problem where the learner's action $x_t$ produces feedback $y_t$ with $K+1$ possible outcomes. The probabilities of these outcomes are modeled using a logistic model. In real-world scenarios such as online advertising, customers may provide various types of feedback, e.g., 'buy now', 'add to cart', 'view related item', or simply leave without any click. The MLogB problem assumes that at each time $t$, the learner selects an action $x_t$ from $\mathcal{X}$. Each outcome $k\in[K]$ is associated with a latent parameter $w_*^{(k)}\in \mathbb{R}^d$, and the probability of the outcome follows the logistic model, $P[y_t=k|x_t]=\exp((w_*^{(k)})^\top x_t)/(1+\sum_j \exp((w_*^{(j)})^\top x_t))=\sigma(z)_k$. The expected reward of the learner's actions is then defined as $\sum_t \rho^\top \sigma (W_* x_t)$, where $\rho$ is a known reward vector. The authors first introduce an improved version of the MNL-UCB algorithm [12] with optimal dependence on $K$. With an improved concentration set for OFU in Theorem 1, the MLE-based algorithm achieves $O(Kd \log(T)\sqrt{\kappa ST})$ regret in Theorem 2. In the next section, they present an efficient and improved algorithm (OFU-MLogB). For efficiency, they address the computation cost of the MLE and OFU methods. Instead of using the MLE, they suggest using an online mirror descent algorithm to estimate $W$, incurring $O(1)$ cost per round and achieving a $\kappa$-independent confidence set in Theorem 3 (similar to Theorem 1). For the optimistic reward construction, they suggest a novel optimistic reward that can be solved in constant time per round by introducing bonus terms for exploration in Proposition 1. This method achieves an improved regret bound of $\tilde{O}(Kd\sqrt{T})$ in Theorem 4 and constant computation cost, compared to the $\tilde{O}(K^{5/6}\sqrt{\kappa T})$ regret bound and $O(T)$ computation cost in [12].
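As a hedged sketch of the multinomial logistic model described in the summary above (function names are illustrative; the paper's exact notation may differ, and the remaining probability mass is assigned to the "no-outcome" class $y=0$):

```python
import math

def mlogit_probs(W, x):
    """P[y=k | x] = exp(w_k . x) / (1 + sum_j exp(w_j . x)) for k = 1..K.
    W is a list of K weight vectors; the leftover mass 1/denominator
    corresponds to the baseline outcome y = 0."""
    logits = [sum(wk[i] * x[i] for i in range(len(x))) for wk in W]
    denom = 1.0 + sum(math.exp(z) for z in logits)
    return [math.exp(z) / denom for z in logits]

def expected_reward(W, x, rho):
    """Per-round expected reward rho^T sigma(W x) of playing arm x."""
    return sum(r * p for r, p in zip(rho, mlogit_probs(W, x)))
```

With $K=1$ this reduces to the binary logistic (sigmoid) model, which is why the binary bandit is a special case of MLogB.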
In the binary setting where $K=1$, the proposed algorithm achieves a minimax regret bound of $O(\sqrt{T/\kappa_*})$ with an optimistic rule similar to [9,10] while maintaining a lower computation cost. In the analysis for Theorem 3, they leverage negative terms from the efficient update of mirror descent and a novel intermediary prediction construction designed for the multiclass logistic loss. Lastly, they provide experimental results for both binary and multi-class settings. In the binary case, the proposed algorithm demonstrates lower computational overhead while delivering comparable performance to the state-of-the-art algorithm, ada-OFU-ECOLog [10]. For the multinomial setting, the proposed algorithm exhibits improved computational efficiency compared to MNL-UCB [12], while achieving similar performance in terms of regret. Strengths: 1. In the binary case, the proposed algorithm attains the minimax optimal guarantee while reducing the computation cost per round from $O(\log T)$ to $O(1)$. 2. In the multinomial case, the proposed algorithm achieves a regret bound of $\tilde{O}(K\sqrt{T})$ with constant computation cost, surpassing the performance of the best-known algorithm, which achieves a regret bound of $\tilde{O}(K^{5/4}\sqrt{\kappa T})$ with $O(T)$ computation cost per round. 3. The authors present empirical evidence demonstrating the improved computation cost of the proposed algorithm. Weaknesses: 1. Algorithm 1 requires the computation of a matrix inverse and a projection, resulting in computational costs of $O(d^2K^3)$ and $O(K^3d^3)$ per round. 2. In the binary case, it appears that achieving optimal regret in Algorithm 1 necessitates the use of a linear optimistic rule (as stated in Corollary 1), rather than relying on an efficient optimistic reward with a bonus term. It is not clear why a linear model works (details are in Question 1). 3.
In the experimental evaluation, concerning the binary case, the proposed algorithm exhibits regret that increases linearly, while the previously suggested algorithm, ada-OFU-ECOLog, demonstrates sublinear regret. Regarding the multinomial case, although the proposed algorithm achieves a theoretically superior regret bound in terms of $K$ and $\kappa$, it shows performance similar to MNL-UCB. This observation does not provide a clear validation of the theoretical analysis. 4. In the multinomial case, there might exist a gap between the attained regret bound of $\sqrt{T}$ and a regret lower bound. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. In Corollary 1, what is the reason the optimistic rule follows a linear model, $\arg\max_w w^\top x$, rather than $\arg\max_w \rho^\top \sigma(w^\top x)$? 2. Is the computation cost for the matrix inverse and projection of the proposed algorithm, which is $O(K^3d^3)$, commonly observed in previous literature like [10]? Otherwise, it appears that the improvement in computation cost mainly applies to scenarios with small values of $K$ and $d$. 3. In the conducted experiments, why was the time horizon set to $T=1200$, which is relatively small compared to the larger time horizons typically used in the bandit literature? 4. Furthermore, in experimental results (a), what could explain the observation that the proposed algorithm exhibits linearly increasing regret, while ada-OFU-ECOLog demonstrates sublinear regret? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: 1. The details regarding the method mentioned in Corollary 1 may not be clear (Question 1). 2.
The experimental results do not seem to provide clear evidence of sublinear regret for the proposed algorithm due to linearly increasing patterns and the small $T$ (Questions 3, 4). I am open to revising my evaluation if these concerns are addressed. Minor comments: 1. Lines 99, 112: should it be changed from $diag(z)$ to $diag(\sigma(z))$? 2. Lines 333, 336: $O(\log 1)$ → $O(1)$; $O(T/\kappa^*)$ → $O(\sqrt{T/\kappa^*})$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We address your questions below and will enhance the paper based on your valuable suggestions. --- **Q1:** What is the reason the optimistic rule follows a linear model? **A1:** We are grateful to the reviewer for highlighting the ambiguity in our presentation. We now clarify the equivalence between $\arg\max w^\top x$ and $\arg\max \rho^\top\sigma(w^\top x)$ in the binary case. Specifically, as discussed in lines 96-97, the binary logistic bandit is a special case of MLogB with $K=1$ and $\rho = 1$. In this case, the reward function is exactly $r(x) = \sigma(w^\top x)$, where $\sigma:\mathbb{R}\rightarrow\mathbb{R}$ is a one-dimensional function. Since $\sigma$ is monotonically increasing, we can solve $\arg\max \rho^\top\sigma(w^\top x)$ via $\arg\max w^\top x$. We will make the equivalence of the two formulations clear in the revision. --- **Q2:** Is the computation cost for the matrix inverse and projection of the proposed algorithm, which is $O(K^3d^3)$, commonly observed in previous literature like [10]? **A2:** Thank you for the insightful comment. As demonstrated in the proof of [10, Proposition 8], previous literature also needs to maintain the **matrix inverse** and perform $O(\log t)$ **projected gradient descent** (PGD) steps at each iteration to solve their optimization problem. In contrast, the main computational advantage of our algorithm lies in provably reducing the required PGD steps from $O(\log t)$ to $O(1)$. Specifically, both [10] and our method share the same matrix-inverse cost of $O(d^2)$ in the binary case. For each of the $O(\log t)$ PGD steps, [10] employs a distinct method to project the unconstrained solution onto an ellipsoid [10, Lemma 13], resulting in a computational cost of $O(d^2\log t)$.
While this projection operation saves a factor of $d$, it incurs an additional $O(\log t)$ factor compared to our $O(d^3)$ cost. In large-dimension scenarios, where $d = \Omega(\log T)$, we can also utilize the same projection method as [10] to improve the dependence on the dimension. We will provide a more detailed discussion of the computational cost of projection in the revision. Thanks! --- **Q3:** The experimental results do not seem to provide clear evidence of regret for the proposed algorithm due to linearly increasing patterns and small $T$ (Questions 3 and 4). **A3:** Thank you for the suggestion. During the rebuttal period, we reran the experiments for the binary case with a longer time horizon $T$ and presented the results in a more suitable way. Please kindly refer to the global response for more details. Thanks! --- Rebuttal Comment 1.1: Title: Thank you for your detailed response Comment: Thank you for your detailed response. Most of my concerns are resolved, so I raised my score from 5 to 6. Based on your reply, it seems better to clarify the tradeoff between $d$ and $\log t$ in computation cost and provide further clarification regarding the comparison of the dependency on $K$ in the final version. --- Reply to Comment 1.1.1: Comment: Thank you for the thoughtful questions and for updating the score! In the next version, we will revise the paper according to your suggestions and the discussion in the rebuttal phase. Specifically, we will add a discussion of the time-complexity tradeoff between $d$ and $\log t$ when performing the projection step. We will also clarify the comparison regarding the dependency on $K$ and reorganize the experiment section to present the comparison on fixed arm sets with different random seeds and longer time horizons.
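The monotonicity argument in A1 can be checked with a small illustrative sketch (the code below is not from the paper; it only demonstrates that, over any finite arm set, the linear and sigmoid objectives pick the same arm because the sigmoid is strictly increasing):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def best_arm_linear(w, arms):
    """argmax_x w . x over a finite arm set."""
    return max(arms, key=lambda x: sum(wi * xi for wi, xi in zip(w, x)))

def best_arm_logistic(w, arms):
    """argmax_x sigmoid(w . x); identical choice by monotonicity."""
    return max(arms, key=lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x))))
```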
Demystifying Structural Disparity in Graph Neural Networks: Can One Size Fit All?
Accept (poster)
Summary: This paper investigates the effectiveness of GNNs on nodes with different structural patterns in real-world graphs and proposes a new method to identify the reasons for performance disparities. The authors found that GNNs tend to perform well on homophilic nodes within homophilic graphs, but struggle with the opposite node set, and they provide insights into the performance disparities by analyzing aggregated feature distance and homophily ratio difference between training and testing nodes. Strengths: This paper provides both empirical and theoretical evidence to support its findings. It conducts rigorous analyses and derives a non-i.i.d. PAC-Bayesian generalization bound for GNNs, which adds credibility to the research. The authors substantiate their findings with controllable synthetic experiments and real-world datasets, further strengthening the quality of the paper. Weaknesses: 1. Writing needs to improve. 2. Some explanation needs more elaboration. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Line 49-50, “while heterophilic graphs exhibit an opposite phenomenon with homophilic nodes in the majority and heterophilic ones in the minority.” Check this definition of heterophilic graphs. 2. Line 55-56, “ Such differences may lead to performance disparity between nodes in majority and minority patterns.” If you randomly sample training and test data, the distribution of the majority and minority patterns should be the same for training and test nodes. Thus, the disparity shown in this example is not valid in practice. 3. “Consequently, the performance disparity can be overwhelmed by such gap which renders the effect from structural patterns.” I don’t understand this sentence. Please elaborate the reason why you use GLNN here. 4. Are the observations in Figure 3 also hold on other benchmark datasets? 5. Line 170, why both homophilic and heterophilic patterns have p>q? 
In your definition, do nodes from the same subgroup but different classes have the same $(p,q)$? 6. Line 179-181, check the English. 7. "This proposition suggests that aggregation results in a distance gap between different patterns within the same class." I cannot see how this proposition suggests the distance gap. Please elaborate on it. 8. In Lemma 1, are nodes u, v from the same class? Why do you want to examine the "discrepancy between nodes with the same aggregated feature but different structural patterns"? How does this relate to the proposed claim about majority and minority nodes? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Q2:** Such differences lead to performance disparity between nodes in majority and minority patterns. If we randomly sample training and test data, the distribution of majority and minority patterns is the same for training and test nodes. Thus, the disparity shown in the example is not valid in practice. **R:** The focus of this paper is structural disparity, not distribution shift; these are not the same concept. The disparity shown in the example is valid in practice because **there are always both homophilic and heterophilic nodes in the same graph**. The only difference from the toy example is that homophilic and heterophilic patterns may not be so extreme: there are some heterophilic edges among homophilic nodes, and vice versa. Detailed explanation: Structural disparity means that homophilic and heterophilic patterns exist simultaneously in a single node set. Such disparity consistently exists in all graphs, as shown in Figures 2 and 13. If we randomly sample training and test data, **both the training and test sets will have homophilic and heterophilic patterns, indicating structural disparity** happens on both the training and test sets. **The example shows the two patterns together in the same graph; it is not that the training set is in a homophilic pattern while the test set is in a heterophilic pattern.** Empirical evidence in Figure 2 shows that structural disparity leads to performance disparity, where **the experiments follow a random split setting without distribution shift.** The reason for the performance disparity is that GNNs tend to learn better on the training-majority pattern with more supervised signals while ignoring the minority ones. GNNs then perform well on test nodes in the majority pattern but not on minority ones. **Q8:** In Lemma 1, are nodes u, v from the same class? Why examine the discrepancy between nodes with the same aggregated feature but different structural patterns? How does this relate to the claim about majority and minority nodes?
**R:** In Lemma 1, we do not necessarily require nodes u, v to be from the same class; instead, we attempt to measure how likely nodes u and v are to be in the same class with respect to structural disparity. We clarify that Section 3.1 aims to answer: how does aggregation affect nodes with different structural patterns? We examine how nodes with structural disparity (homophily ratio difference $|h_i-h_j|$) show different behaviors under aggregation, the key operation in GNNs. Thus, our analysis focuses on node aggregated features under different structural patterns. The explanation has four parts: 1. what behavior differences are and how to measure them; 2. what structural disparity is; 3. an explanation of Lemma 1: structural disparity leads to behavior differences; 4. how the lemma is correlated with the claim about majority and minority nodes. 1. In the case where a behavior difference **does not exist**, nodes u and v with the same aggregated feature $f_u = f_v$ **should be in the same class**. In contrast, a behavior difference corresponds to the case of the same feature but different classes. The probability gap of two nodes sharing the same class, $|P(y=c_1|f_u)-P(y=c_1|f_v)|$, is utilized to measure to what extent a behavior difference happens. A large $|P(y=c_1|f_u)-P(y=c_1|f_v)|$ indicates that the two nodes are more likely to be in different classes. 2. Structural disparity means nodes are in the same set but have different structural patterns. We measure it with $|h_u-h_v|$. 3. Lemma 1 shows that when the structural disparity $|h_u-h_v|$ is large, $|P(y=c_1|f_u)-P(y=c_1|f_v)|$ can be large, indicating that nodes u and v are likely to have different classes, **violating the consistent behavior (with no structural disparity) in which the same aggregated feature should map to the same class.** 4.
Lemma 1 does not directly correlate with majority and minority nodes, but serves as a necessary preliminary step for the claim about them. Notably, in Section 3.1, we only mention the existence of both homophilic and heterophilic patterns, without identifying which pattern is the majority. Lemma 1 only shows the existence of behavior disparity, serving as a preliminary step for the analysis of majority and minority nodes. Once behavior disparity exists across different structural patterns, it is possible for a performance difference to exist between majority and minority patterns. **Q3:** Elaborate on the reason why GLNN is used in Figure 1. **R:** We rephrase with more evidence: comparing an under-trained vanilla MLP with a well-trained GNN leads to an unfair comparison without a rigorous conclusion. The experiment examines the effectiveness of GCN in utilizing different structural patterns. Therefore, we compare GCN and MLP architectures, as GCN utilizes the graph structure during inference while MLP cannot, serving as a structure-agnostic baseline. When GCN surpasses MLP, it indicates that the GNN benefits from structural patterns effectively, and vice versa. Notably, GLNN can be viewed as a better-trained MLP model. GLNN utilizes the same MLP architecture as the vanilla MLP; the only difference is that GLNN is trained in an advanced distillation manner while the vanilla MLP is trained with the cross-entropy loss. The reason why we utilize GLNN rather than only comparing GCN with the vanilla MLP is that the MLP meets an optimization issue during training. Experimental results are shown in Figure 1 (rebuttal PDF).
Such an obstacle leads to a large performance gap (more than 20%) between the under-trained vanilla MLP and the well-trained GCN. **Consequently, the large performance gap induced by training difficulty hides the potential of the MLP architecture.** In contrast, GLNN enjoys a better training process, leading to a clearer comparison between a well-trained GNN and a well-trained MLP (GLNN) architecture with a convincing conclusion. **Q4:** Do the observations in Figure 3 hold on other datasets? **R:** Yes. See additional results in Figure 14, Appendix H. The conclusions also hold for the Cora, CiteSeer, IGB-tiny, twitch-gamers, and Amazon-ratings datasets. **Responses to other questions on explanations and typos are in the global rebuttal:** - Q6, 7 -> Problem 1 - Q5 -> Problem 2 - Q1 -> Problem 3 --- Rebuttal Comment 1.1: Title: A gentle reminder to reviewer itvA Comment: Dear reviewer itvA, Thanks for your review. Your time and effort in evaluating our work are greatly appreciated. Could we kindly know if the responses have addressed your concerns and if further explanations or clarifications are needed? Thanks. --- Rebuttal Comment 1.2: Title: Thanks for your rebuttal Comment: The authors addressed most of my concerns. In general, I find this paper interesting. I will raise my score to 5. --- Reply to Comment 1.2.1: Title: Thanks for your response Comment: Thank you for your feedback and support. We're pleased to hear that our rebuttal has addressed your concerns. If there are any further issues or questions, please inform us, and we'll be glad to address them.
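For concreteness, the node homophily ratio $h_u$ discussed throughout this exchange is the fraction of a node's neighbors sharing its label (the standard definition; the helper below and its data layout are illustrative, not the paper's code):

```python
def node_homophily(adj, labels, u):
    """h_u = fraction of u's neighbors with the same label as u.
    adj maps a node to its neighbor list; labels maps a node to its class.
    h_u > 0.5 is the hard threshold the paper uses to call a node homophilic."""
    nbrs = adj[u]
    if not nbrs:
        return 0.0  # isolated node: convention, no neighbors to compare
    return sum(labels[v] == labels[u] for v in nbrs) / len(nbrs)
```

The theoretical quantities in the rebuttal, such as the structural disparity $|h_u - h_v|$, are then simple differences of these per-node ratios.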
Summary: This paper focuses on the performance disparity between homophilic and heterophilic nodes in node-level classification tasks. It claims that although GNNs have good performance on both pure homophilic and pure heterophilic graphs, GNNs cannot perform well when dealing with graphs containing both types of nodes. The paper then analyzes the reasons for the different impacts on majority and minority nodes and illustrates the aggregation function's contribution to this. It delves into the theoretical analysis, deriving a non-i.i.d. PAC-Bayesian generalization bound based on the subgroup generalization bound of a deterministic classifier. The theoretical analysis indicates that test nodes with larger aggregated feature distances and homophily ratio differences from training nodes experience performance degradation. The practical implications of the findings are demonstrated on real-world datasets. Strengths: - The paper provides sufficient real-world experiments to show how the homophily ratio and the feature distance influence the GNN's aggregation and the actual performance on both homophilic and heterophilic datasets. This could serve as a foundation for future GNN architecture design. - The paper performs a theoretical analysis that reveals potential reasons for the performance disparity. The findings are supported by experimental results. - The paper is well structured with good clarity in presentation. Weaknesses: - The paper identifies the problem of performance disparity but does not provide a solution to address it. - The classification of nodes into homophilic and heterophilic classes using a hard threshold is a simple and idealized setting. This setting may fail to capture the complexity and nuances that may exist in real-world scenarios. Do the conclusions/findings still hold if we directly use the continuous homophily ratio rather than thresholding it? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weaknesses section.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1:** The paper identifies the problem of performance disparity but does not provide a solution to address it. **R:** Thanks for the great question regarding the contribution of this paper. We first provide a more comprehensive understanding of the motivation and contribution of our paper and then discuss potential solutions. We clarify the contributions of our paper as follows: 1. it points out the overlooked fact that homophilic and heterophilic patterns coexist across graphs; 2. it offers a new local perspective to evaluate GNN performance; and 3. it gains insights into which nodes GNNs work well on, and why. Overall, our paper does not focus on a specific model design but **provides a new landscape for understanding existing GNNs, paving the way for future GNN design.** More specifically, our work can: 1. bring insights to previous research progress — we show how recently developed deeper GNNs improve over vanilla GNNs; 2. provide an outlook for future research to solve the performance disparity across different GNN architectures, clearly pointing out the remaining room for improvement in GNN design; 3. give insights into different graph applications, including graph OOD, robustness, and fairness. We believe our work is technically sound with both empirical and theoretical insights, helping to understand and push the graph domain further. Accordingly, we will **add a future work section with potential solutions and a preliminary solution to mitigate performance disparity**, leaving inspiration for future work. Potential solutions are as follows: 1. combine MLP and GNN in an adaptive approach, since MLPs can achieve better performance on minority nodes and GNNs on majority nodes. Ideally, we can adaptively select the MLP for minority nodes and the GNN for majority nodes, using an adaptive gating function to control the proportion of MLP and GNN.
Our findings suggest that the homophilic/heterophilic pattern selection can serve as a guide for learning the gating function. 2. utilize global structural information, as the global pattern is more robust, showing less disparity than the local structural pattern. Empirical evidence can be found in Figure 9: the higher-order homophily ratio differences are smaller than the local structural disparity. We then propose a simple solution to the **performance disparity problem** inspired by the first potential solution. Instead of learning a gating function with homophilic/heterophilic pattern selection as guidance, we use a simple heuristic threshold to identify whether a node is in the majority or minority pattern, then select GCN and GLNN (an MLP-based model) for inference on test nodes in the majority and minority patterns, respectively. A simple smoothness-based metric $r=\frac{1}{|\mathcal{N}_i|}\sum_{j\in \mathcal{N}_i}\|\mathbf{x}_i-\mathbf{x}_j\|_F^2$ is applied. A small r indicates that the center node i is similar to its neighboring nodes, reflecting a homophilic pattern. Our proposed **minority selection** has the following steps: 1. calculate $R=\{r_1,\cdots,r_n\}$ for all test nodes; 2. sort the smoothness list R; 3. if the graph is homophilic, select the largest $\alpha$% of test nodes and use the MLP for inference on them, while the others use GCN; 4. if the graph is heterophilic, select the smallest $\alpha$% of test nodes and use the MLP for inference on them, while the others use GCN. Notably, **our preliminary study is based on the vanilla GCN model**; our minority selection can also be combined with other GNNs. The focus of the proposed method is to mitigate the performance disparity issue while keeping the overall performance. The overall accuracy, WDP, and WSD scores are shown in Table 1 (rebuttal PDF). Overall accuracy is comparable with other models.
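The heuristic steps above can be sketched in a few lines (an illustrative sketch only; function names, the tie-breaking of `sorted`, and the `max(1, ...)` guard are our assumptions, not the authors' implementation):

```python
def smoothness(x_center, x_neighbors):
    """r = (1/|N_i|) * sum_j ||x_i - x_j||^2.
    Small r means the node resembles its neighbors (homophilic pattern)."""
    if not x_neighbors:
        return 0.0
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return sum(sqdist(x_center, xj) for xj in x_neighbors) / len(x_neighbors)

def select_minority(scores, alpha, graph_is_homophilic):
    """Indices of the alpha-fraction of test nodes routed to the MLP
    (presumed minority pattern); all remaining nodes use the GNN.
    On homophilic graphs the minority has large r, so take the largest
    scores; on heterophilic graphs take the smallest."""
    k = max(1, int(alpha * len(scores)))
    order = sorted(range(len(scores)), key=scores.__getitem__,
                   reverse=graph_is_homophilic)
    return set(order[:k])
```

For example, with scores `[0.1, 0.9, 0.5, 0.2]` and `alpha=0.25`, a homophilic graph routes the single highest-smoothness node to the MLP, while a heterophilic graph routes the lowest-smoothness one.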
WDP and WSD evaluate performance disparity and are defined as: $$ WDP=\frac{\sum_{i=1}^D N_i\cdot|A_i-A_{avg}|}{N_{total}} $$ $$ WSD=\sqrt{\frac{1}{N_{total}}\sum_{i=1}^D N_i\cdot (A_i-A_{avg})^2} $$ where D is the number of groups, $N_i$ is the number of nodes in group i, $A_i$ is the accuracy of group i, and $A_{avg}$ is the weighted average accuracy of all groups. **Smaller WSD and WDP indicate smaller performance disparity.** Our proposed method shows much better fairness than the vanilla GCN, and is even better than deeper GNNs on most datasets. Although our study was conducted on a tight schedule and is preliminary, the impressive performance indicates a promising direction; further refined designs could enhance it. **W2:** The classification of nodes into homophilic and heterophilic classes using a hard threshold is a simple and idealized setting. It may fail to capture nuances that may exist in real-world scenarios. Do the conclusions still hold if we directly use the continuous homophily ratio rather than thresholding it? **R:** Thanks for the question on homophilic and heterophilic patterns. We initially classify a node with homophily ratio $h>0.5$ as homophilic and $h<0.5$ as heterophilic. Nonetheless, we want to clarify that the purpose of **using the hard threshold 0.5 is to ease the problem statement** and aid understanding at the beginning of the paper. Notably, **most analyses and conclusions do not revolve around the hard threshold.** For the theoretical analysis in Sections 3.1 and 3.3, we mainly consider the homophily ratio difference $|h_i-h_j|$, where $h_i$ is the continuous homophily ratio. For the empirical analysis, the disparity score $s_u=\|F_u^{(2)}-F_v^{(2)}\|+|h_u^{(2)}-h_v^{(2)}|$ in Section 3.4 also **considers the continuous homophily ratio.** Moreover, we verify the conclusion on multiple datasets with complex and nuanced scenarios, as shown in Figure 13.
Additional experiments shown in Figures 15-21 of Appendix J indicate the validity of our conclusion on the Cora, CiteSeer, Amazon-rating, IGB-tiny, and Twitch-gamers datasets. The above observations indicate that our conclusion successfully extends to multiple datasets with complex and nuanced scenarios. --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: The rebuttal has addressed my concerns. I will keep the current rating. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks for your response. We are glad to know that our rebuttal has addressed all your concerns. Please let us know if any outstanding concerns remain, and if so, we will be happy to respond. Moreover, we still kindly hope you could consider raising the score if feasible.
Summary: This paper provides a rigorous analysis of the effect of structural disparity on the performance of GNNs. The proposed CSBM-S model and the application of PAC-Bayes analysis, among others, show the different effects of aggregation on the performance of nodes with different structural disparity. The analysis further indicates a subgroup generalization bound for GNNs and elucidates the effectiveness of deeper GNNs. Strengths: 1. The problem considered in this paper is extremely important. I agree with the authors that real-world graphs can’t be easily classified as homophilic or heterophilic. Therefore, analyzing how structural disparity influences the performance is essential to GNN design. 2. It is well backed up by theoretical analysis and examples. The authors present several nice analysis methods, including the CSBM-S model and a subgroup generalization bound for GNNs. 3. Extensive experiments make the conclusions presented in this paper convincing. Weaknesses: 1. The conclusions in this paper share the same premise that the aggregation operation is neighborhood averaging, which may not generalize to a broad range of GCN models. 2. The writing could be improved in terms of tone and phrasing. There are also some grammar and spelling errors. Technical Quality: 3 good Clarity: 3 good Questions for Authors: There are two assumptions: that nodes from different subgroups share the same distribution and similar degree distributions. Are the conclusions still valid without the above assumptions, as stated in the supplementary material? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1:** The conclusions in this paper share the same premise that the aggregation operation is neighborhood averaging, which may not generalize to a broad range of GCN models. **R:** Thanks for your great question pointing out the gap between our theoretical analysis and empirical results. We want to first clarify that the reason for utilizing average aggregation is better motivation with the toy model in the introduction section and the theoretical analysis in Sections 3.1 and 3.3, not our ultimate conclusion. Such mean aggregation is also widely adopted for theoretical analysis in much existing GNN literature [1-5]. Although most existing GNNs do not perform exact neighborhood averaging, they are still generally based on weighted neighborhood averaging. Moreover, inspired by the theoretical understanding, we verify the conclusion with comprehensive empirical experiments, which can be found in Section 3.4 and Appendix J. We can see that our theoretical results remain valid qualitatively across different datasets and various architectures. Note that the experimental results include both shallow GNNs, e.g., GCN and GAT, and deeper GNNs, e.g., GCNII and GPRGNN. Therefore, our conclusion that GNNs can perform well on nodes in the majority pattern but not the minority pattern remains valid across architectures empirically. **W2:** The writing could be improved in terms of tone and phrases. There are also some grammar and spelling errors. **R:** We sincerely thank the reviewer for the valuable feedback. We have carefully revised the paper to address these weaknesses and meet the required standards. **Q1:** There are two assumptions: that nodes from different subgroups share the same distribution and similar degree distributions. Are the conclusions still valid without the above assumptions, as stated in the supplementary material? **R:** Thanks for your great question regarding the necessity of the assumptions.
We first need to clarify that the assumptions are not strictly necessary but are employed for the sake of elegant expression. We want to clarify that we only claim that Assumption 2 can be relaxed, not Assumption 1. Assumption 1 states that node features within the same class are sampled from the same Gaussian distribution, regardless of different structural patterns. The reason we adopt Assumption 1 is that our paper mainly focuses on the structural disparity, controlled by $p$ and $q$, not disparity in the original node features. Therefore, we keep the samples from the same distribution. Such an assumption also aligns with real-world graphs, as most original node features show a strong correlation with the class information. Empirical evidence can be found in Tables 10 and 11. We can see that the MLP taking only node features as input also shows certain discriminative ability. Assumption 2 is that nodes follow the same degree distribution with $p^{(1)} + q^{(1)} = p^{(2)} + q^{(2)}$. To get rid of this assumption, we can assume that $p^{(1)} + q^{(1)} = \alpha (p^{(2)} + q^{(2)})$, where $\alpha \in [0, +\infty)$ is a proportionality coefficient. Then the new Lemma 1 is: $$ |\mathbf{P}_1(y_u=c_1|\mathbf{f}_u)-\mathbf{P}_2(y_v=c_1|\mathbf{f}_v)| \le \frac{\alpha \rho^2}{\sqrt{2\pi}\sigma} |h_u - h_v| $$ The original Lemma 1 is $$ |\mathbf{P}_1(y_u=c_1|\mathbf{f}_u)-\mathbf{P}_2(y_v=c_1|\mathbf{f}_v)| \le \frac{\rho^2}{\sqrt{2\pi}\sigma} |h_u - h_v| $$ where $\rho=\left \|\mathbf{u}_1-\mathbf{u}_2\right \|$ is the original feature separability, independent of structure. We can see that the only difference is an additional coefficient $\alpha$ depending on the degree distribution differences. Such a minor difference does not affect our conclusion that nodes with a small homophily ratio difference $|h_u - h_v|$ are likely to share the same class. [1] Yao Ma, Xiaorui Liu, Neil Shah, and Jiliang Tang. Is homophily a necessity for graph neural networks?
The Tenth International Conference on Learning Representations, 2022. [2] Aseem Baranwal, Kimon Fountoulakis, and Aukosh Jagannath. Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization. International Conference on Machine Learning. PMLR, 2021. [3] Aseem Baranwal, Kimon Fountoulakis, and Aukosh Jagannath. Effects of graph convolutions in multi-layer networks. In The Eleventh International Conference on Learning Representations, 2023. [4] Haonan Wang, Jieyu Zhang, Qi Zhu, and Wei Huang. Augmentation-free graph contrastive learning. arXiv preprint arXiv:2204.04874, 2022. [5] Wu, Xinyi, et al. "A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks." The Eleventh International Conference on Learning Representations, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. The rebuttal has addressed most of my concerns and I have raised my score. This work contributes a lot to analyzing the effectiveness of GNNs on different types of graphs from a node-level perspective; I recommend acceptance. --- Reply to Comment 1.1.1: Title: Thanks for your response Comment: Thanks for your response and support. We are glad to know that our rebuttal has addressed your concerns. Please let us know in case there still remain outstanding concerns, and if so, we will be happy to respond.
Summary: This work tries to understand the effectiveness of GNNs w.r.t. different structural disparities within a graph. Previous studies have focused on GNNs' effectiveness on overall graphs, but here the authors try to understand GNNs' effectiveness w.r.t. structural patterns such as homophilic and heterophilic nodes to provide deeper insight into GNN performance. Contributions include understanding why the performance of GNNs is rather good on homophilic nodes in homophilic graphs and heterophilic nodes in heterophilic graphs, but not on the opposite set. Specifically, how aggregated feature distances and homophily ratio differences in mixed graphs impact GNN performance. The authors also present experimental results on how deeper GNNs perform better on minority node subgroups, while also proposing a new data split strategy where majority nodes are selected for train/validation and minority nodes for test. Strengths: GNN performance studies across different structural subgroups are a crucial metric to understand the overall performance of the model. Quantitative understanding of the effect of aggregation and homophily ratio difference w.r.t. train nodes is a significant metric to compare different GNN architectures. This provides new evaluation strategies for future GNN studies. The effect of the new data splitting strategy is again a significant contribution and is useful to evaluate the overall GNN performance. Weaknesses: The work by itself is strong and would have been even stronger if presented with more understanding of why deeper GNNs do comparatively well on minority nodes. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Is there a way to prove deeper GNNs' improved discriminative ability on minority nodes? 2. Are there any additional factors other than aggregation and homophily ratio difference that you thought about? Confidence: 4: You are confident in your assessment, but not absolutely certain.
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1:** The work by itself is strong and would have been even stronger if presented with more understanding of why deeper GNNs do comparatively well on minority nodes. **Q1:** Is there a way to prove deeper GNNs' improved discriminative ability on minority nodes? **R:** Thanks for your great question, which inspires our future work on analyzing the effectiveness of deeper GNNs, especially on proving deeper GNNs' improved discriminative ability on minority nodes. Note that the following content, providing a detailed discussion of deeper GNNs, will be added in the revision. The potential approaches are as follows: 1. To empirically verify the improved discriminative ability on minority nodes, we apply a discriminative analysis with initial results in Figure 1 (rebuttal pdf). The y-axis indicates the discriminative value $r=\sum_{i=1}^K\|\mu_i^{tr}-\mu_i^{mi} \|$, where $\mu_i^{tr}$ and $\mu_i^{mi}$ are the prototypes of class $i$ on train nodes and minority test nodes, respectively. The x-axis indicates the hidden representation at different hops. With more hops of aggregation, the discriminative value decreases, indicating improved discriminative ability. The above discussion and experimental results will be added to the revision. 2. To theoretically verify the improved discriminative ability on minority nodes, we provide a potential approach based on our CSBM-S assumption and the discriminative analysis of [1] in ICLR 2023. [1] aims to theoretically quantify the discriminative ability of deeper GNNs with more aggregations. However, their analysis uses the vanilla CSBM model as the data assumption, denoted as $CSBM(\mu_1, \mu_2, p, q)$, where $\mu_i$ is the feature mean of class $c_i$ with $i \in \{1, 2\}$. The CSBM model presumes that all nodes follow either homophilic patterns with $p>q$ or heterophilic patterns with $p<q$ exclusively.
However, this assumption conflicts with real-world scenarios, as homophilic and heterophilic patterns coexist across different graphs, as shown in Figure 2. In contrast, our proposed CSBM-S model (Definition 1, line 167), $\text{CSBM-S}(\mu_1,\mu_2,(p^{(1)}, q^{(1)}),(p^{(2)},q^{(2)}), \Pr(homo))$, is more practical than the CSBM model. As far as we can see, conducting a similar discriminative analysis as [1] on our proposed CSBM-S model could be a good solution. Notably, the above discussion and experimental results will be added to the future work section in the revision. **Q2:** Are there any additional factors other than aggregation and homophily ratio difference that you thought about? **R:** Thanks for your great question, which inspires our future work on other important factors for performance disparity. Note that the following content, providing a more detailed discussion of the important factors for performance disparity, will be added to the Appendix. We first want to clarify that, in our paper, we find that both the aggregated feature distance and the homophily ratio difference can lead to performance disparity along with aggregation, the key operation in GNNs. A detailed discussion can be found in Section 3, and a comprehensive discussion in the Related Work section (Appendix A). Existing literature [2-5] shows that other structural information, e.g., degree, geodesic distance to the training nodes, and Personalized PageRank score, can also lead to performance disparity. Nonetheless, all those analyses and conclusions are conducted on homophilic graphs, e.g., PubMed and ogbn-arxiv, while ignoring heterophilic graphs, e.g., Chameleon and Squirrel, which also broadly exist in the graph domain. It could be an exciting new research direction to see how the above factors, which focus on the homophilic pattern, behave in the context of both homophilic and heterophilic properties. [1] Wu, Xinyi, et al.
"A Non-Asymptotic Analysis of Oversmoothing in Graph Neural Networks." The Eleventh International Conference on Learning Representations, 2023. [2] Qi Zhu, Natalia Ponomareva, Jiawei Han, and Bryan Perozzi. Shift-robust GNNs: Overcoming the limitations of localized graph training data. Advances in Neural Information Processing Systems, 34:27965–27977, 2021. [3] Jiaqi Ma, Junwei Deng, and Qiaozhu Mei. Subgroup generalization and fairness of graph neural networks. Advances in Neural Information Processing Systems, 34:1048–1061, 2021. [4] Xianfeng Tang, Huaxiu Yao, Yiwei Sun, Yiqi Wang, Jiliang Tang, Charu Aggarwal, Prasenjit Mitra, and Suhang Wang. Investigating and mitigating degree-related biases in graph convolutional networks. In Proceedings of the 29th ACM International Conference on Information and Knowledge Management, pages 1435–1444, 2020. [5] Yushun Dong, Ninghao Liu, Brian Jalaian, and Jundong Li. EDITS: Modeling and mitigating data bias for graph neural networks. In Proceedings of the ACM Web Conference 2022, pages 1259–1269, 2022.
Rebuttal 1: Rebuttal: Thanks to all reviewers for their constructive reviews. The pdf file contains new experimental results for reviewers pgwY and GxBV. With the help of the reviewers, we found some typos and writing issues in our paper. We will correct those mistakes and add more explanations for better understanding in our revision. The detailed problems are as follows. **Problem1:** Grammar problem in Lines 179-181. Elaborate Proposition 1. **R:** The updated version of Proposition 1 is as follows. The aggregated feature mean distance between homophilic and heterophilic node subgroups within class $c_1$ is $\left\|\frac{p^{(1)}\mu_{1}+q^{(1)}\mu_{2}}{p^{(1)}+q^{(1)}}-\frac{p^{(2)}\mu_{1}+q^{(2)}\mu_{2}}{p^{(2)}+q^{(2)}}\right\|>0$, indicating that the aggregated features of the homophilic and heterophilic subgroups come from different feature distributions, with a mean distance larger than 0. There is no such distance before aggregation, since the original node features are drawn from the same distribution, regardless of different structural patterns. The proposition aims to show how nodes with structural disparity $|h_i-h_j|$ behave differently along with aggregation. Typically, the behavior difference is measured by the distance between the feature means of different structural patterns within the same class. **Behavior disparity and feature mean distance do not exist before aggregation**, since nodes in the same class are sampled from the same original feature distribution $\mathcal{N}(\mu_1,\sigma^2)$, regardless of groups, **with the same feature mean**. The feature mean distance is $\|\mu_1-\mu_1\|=0$. After aggregation, the aggregated feature mean of the homophilic group will be $\frac{p^{(1)} \mu_{1}+q^{(1)}\mu_{2}}{p^{(1)}+q^{(1)}}$ while that of the heterophilic group will be $\frac{p^{(2)}\mu_{1}+q^{(2)}\mu_{2}}{p^{(2)}+q^{(2)}}$.
Since $p^{(1)}\ne p^{(2)}$ and $q^{(1)}\ne q^{(2)}$, the aggregated mean feature distance $\left\|\frac{p^{(1)}\mu_{1}+q^{(1)}\mu_{2}}{p^{(1)}+q^{(1)}}-\frac{p^{(2)}\mu_{1}+q^{(2)}\mu_{2}}{p^{(2)}+q^{(2)}}\right\|>0$. It indicates that **there is a gap between the feature means of different node subgroups in the same class**, indicating behavior differences. **Problem2:** Typo in Line 170: heterophilic patterns have $p>q$. **R:** The heterophilic pattern should have $p^{(2)}<q^{(2)}$, as shown in line 163. We will correct it in the revision. Consequently, the CSBM-S model has different $p$ and $q$ values where $p^{(1)}>q^{(1)}$ and $p^{(2)}<q^{(2)}$. **Problem3:** Typo in lines 58-60 on homophilic and heterophilic. **R:** The revision is: while heterophilic graphs exhibit an opposite phenomenon with **heterophilic** nodes in the majority and **homophilic** ones in the minority. Pdf: /pdf/f65a3d219eeda3451b9cb189c602a19094bb71d6.pdf
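The aggregated-mean computation in Proposition 1 can be checked numerically; the parameter values below ($\mu_1,\mu_2$ and the $(p,q)$ pairs) are illustrative, not from the paper.

```python
import numpy as np

# Illustrative CSBM-S parameters: (p, q) with p > q for the homophilic subgroup
# and p < q for the heterophilic subgroup; mu1, mu2 are class feature means.
mu1, mu2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
p1, q1 = 0.8, 0.2
p2, q2 = 0.3, 0.7

# Before aggregation both subgroups of class c1 share the mean mu1 (distance 0).
pre_aggregation_distance = np.linalg.norm(mu1 - mu1)

# After aggregation the subgroup means separate.
agg_homo = (p1 * mu1 + q1 * mu2) / (p1 + q1)
agg_hetero = (p2 * mu1 + q2 * mu2) / (p2 + q2)
post_aggregation_distance = np.linalg.norm(agg_homo - agg_hetero)
```

For these particular values the post-aggregation distance is $\sqrt{0.5}\approx 0.707 > 0$, consistent with the proposition.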
NeurIPS_2023_submissions_huggingface
2023
FedGCN: Convergence-Communication Tradeoffs in Federated Training of Graph Convolutional Networks
Accept (poster)
Summary: The manuscript studies a semi-supervised node classification task using graph convolutional networks (GCNs). Specifically, motivated by the growth of the input graph (data) size, this paper considers a federated learning setting in which training is performed in a distributed manner, by partitioning the underlying graph and assigning each subgraph to a corresponding client. To deal with the communication overhead of the cross-client edges connecting graph nodes, which need to be known for the target task, the paper proposes an algorithm in which the central server first aggregates the information needed for each node in a client and then sends it to the designated node, rather than sending each piece of information multiple times. The manuscript analyzes the tradeoff between the convergence rate of the algorithm and the communication overhead; in addition, a significant reduction in the communication cost with a reasonable accuracy is demonstrated through the experiments. Strengths: + The problem is of sufficient interest since GNNs take large-scale graphs as inputs, which can come with a heavy computational burden and may require huge storage costs. Weaknesses: + It seems like the empirical results demonstrate that the proposed algorithm comes with a marginal gain compared to FedSage+ (Zhang et al. 2021). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: + The reviewer is wondering how the communication cost is measured in the experimental validation (e.g., units). + Is the idea of aggregating cross-client edge information at the central server and sending the aggregated information (instead of sending each piece multiple times) to each client first proposed by this manuscript? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: + The assumption that the cross-client edges of client $k$, denoted by $\mathcal{E}_k^c$, are known to the client $k$ seems impractical due to the privacy issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments! >**For weakness: marginal gain of accuracy compared to FedSage+** We agree that FedSage+ often achieves good accuracies. However, the main contribution of FedGCN is that it requires 100X less communication cost by using a pre-training communication round, without compromising accuracy. FedGCN can also outperform FedSage+ in some settings. Ideally, FedSage+ can have a performance close to that of FedGCN (1-hop), as shown in Table 2. However, as in Fig. 5, FedGCN (2-hop) has 5% better accuracy than FedGCN (1-hop) (the upper bound of FedSage+) as the number of clients increases, which means FedGCN needs more hops of communication in such a cross-device setting to achieve high accuracy. We expect that it will outperform FedSage+ more dramatically in such a setting. >**Question: how is the communication cost measured?** It is calculated as the overall length of the arrays that need to be sent, which is independent of the implementation. E.g., if the client needs to send an array {1, 0, 1, 0} to the server, the communication cost is 4. Compression techniques may make the actual communication cost much lower than the array size. The code is also provided to reproduce the results. In Fig. 4, the communication cost of distributed GCN can be 10^9 for Ogbn-Arxiv, which works out to several GBs of communicated data. For Ogbn-Products (2,449,029 nodes), it is hundreds of GBs. FedGCN requires 100X less communication cost. In practical businesses (e.g., billions of nodes at Amazon and Facebook), the communication cost of distributed methods is prohibitive, with extremely high latency at every training round. FedGCN's single pre-communication round can thus greatly improve performance. >**Question: Is the idea of server aggregation of cross-client edges first proposed by this manuscript?** Yes.
A preliminary version of FedGCN required clients to communicate with each other to aggregate the cross-client edge information, which risked privacy leakage. To resolve the issue, we designed the current server aggregation scheme after realizing that the server can perform such aggregation and that homomorphic encryption can prevent privacy leakage on the server side. >**The assumption that cross-client edges are known to the client seems impractical** The cross-client edges typically exist in both clients, which we believe is common and practical. Intuitively, this is due to the fact that edges are generated when nodes at clients interact with each other. Thus, the interaction record, though not personal node characteristics, is naturally stored at both nodes, i.e., in both clients. We provide some examples below and will use them to clarify this point in the paper's introduction. For example, in Amazon, a graph may represent buying behaviors (edges) that exist between users (nodes) in two countries (clients). Users in one country want to buy items in another country. The records of these transactions between users in different countries (i.e., the cross-client edges) are then stored in both clients. Due to the General Data Protection Regulation (GDPR), however, sensitive user information (node features including name, zip code, gender, birthday, credit card number, email address, etc.) cannot be stored in another country. Including cross-country transactions (cross-client edges) is key for training models that detect international money laundering and fraud. Another example is online social applications like Facebook and LinkedIn. Users in different countries can build connections with each other (e.g., a person in the United States becoming Facebook friends with a person in China). The users in both the U.S. and China would then have a record of this friendship link, while the personal user information cannot be shared across countries.
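To illustrate the server-side aggregation idea discussed above (stripped of the homomorphic encryption layer, with illustrative names and shapes, not the paper's implementation): each client sends, per global node id, the sum of its local neighbors' features, and the server simply adds the contributions and returns the aggregated rows once before training.

```python
import numpy as np

def client_partial_sums(features, cross_edges, n_nodes, dim):
    """Each client computes, for every global node id, the sum of the features
    of its local neighbors of that node (zero rows where it has none).

    `features` maps local node id -> (dim,) feature vector;
    `cross_edges` is a list of (local_node, global_target) pairs."""
    partial = np.zeros((n_nodes, dim))
    for local_node, global_target in cross_edges:
        partial[global_target] += features[local_node]
    return partial

def server_aggregate(partials):
    """Server: element-wise sum of the (in the real system, encrypted) client
    contributions; each client then receives only the rows for its own nodes."""
    return np.sum(partials, axis=0)
```

Because addition is the only operation the server performs, it can run on additively homomorphic ciphertexts, which is what makes the single pre-communication round compatible with encryption.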
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my questions and concerns. --- Reply to Comment 1.1.1: Comment: Dear Reviewer BmP4, Happy to know that we have addressed your questions and concerns!! We will add the discussion of the performance gain in cross-device settings and the above examples of cross-client edges to the paper!
Summary: The authors propose a secure federated protocol over GCNs which are split across federated clients. To use the features of a node's neighbors present in other clients (of which both clients are mutually aware of an edge), the accumulation of neighbor features through the adjacency matrix is encrypted via some public key encryption scheme (such as homomorphic encryption), sent to the server, aggregated, and then sent back to the client, where it is decrypted and used to approximate the classical message passing GCN calculation. Feature accumulations of 0-hop, 1-hop, or 2-hop neighbor information may be used. The approach is benchmarked over several well-known graph datasets and compared against other federated approaches, where the method (either 1-hop or 2-hop) is superior to non-FedGCN strategies. Strengths: 1. The narrative is very nicely presented -- the review of GCNs is nicely condensed and would be understandable for researchers outside of this particular area. 2. The algorithm is astoundingly simple, takes advantage of well-known public encryption schemes, and protects client data features from one another and from the server. 3. The method outperforms other federated algorithms over a wide array of benchmarks. There is apparently no need to accumulate anything beyond 2-hop features. 4. A convergence guarantee is provided with very standard non-convex, non-iid FL assumptions. Weaknesses: 1. The one-neighbor case wasn't adequately addressed. If the graph is heavily partitioned so there are many cross-client edges leading to a single neighbor, dropping the neighbor might not be appropriate. It was suggested differential privacy could be added to the single-neighbor accumulation, but this will certifiably affect convergence, which would have been interesting to study. 2. The need for quantization/rounding via the encryption scheme was a bit rushed -- the decryption might not be perfect (see questions).
Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: 1. If the encryption scheme requires rounding/quantization, this will affect the convergence rate as information transmission is imperfect. This is because the act of encryption-decryption is acting as a compression operator [1]. Many hallmark gradient compression algorithms study the effects of quantizing model parameters/updates on convergence, and it depends on the accuracy of the recovery. Is the encryption/decryption protocol simply assumed to be perfect? If not, can this be incorporated into the convergence guarantee? [1] Stich et al., 2018, "Sparsified SGD with Memory." Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We highly appreciate your opinions, especially "Algorithm is astoundingly simple", which is also our goal in designing the algorithm. >**For weakness 1: the one-neighbor case wasn't adequately addressed.** We agree that there could be cases where the graph is heavily partitioned and nodes have one neighbor in other clients, although such cases are likely unusual in natural federated graphs. It is promising to incorporate differential privacy in this extreme setup to provide an adequate privacy guarantee [1][2]. We are planning to include more discussion in the paper on the tradeoffs between privacy and utility from applying differential privacy. [1] Releasing Graph Neural Networks with Differential Privacy Guarantees. Transactions on Machine Learning Research 2023. [2] Federated learning with differential privacy: Algorithms and performance analysis. Transactions on Information Forensics and Security 2020. >**For weakness 2 and question 1: encryption scheme** As in Appendix G.2, the encrypted data can be exactly recovered and does not have a performance drop. The rounding we refer to here is merely a conversion from using a floating point representation of binary values back to binary numbers. For example, we might have 1.000000000000003 in our CKKS scheme to represent the binary value of 1, where the error is introduced in the floating point representation and CKKS scheme errors, but the magnitude of these errors is usually negligible such that we can easily “round” them to their true values. This conversion is needed because we only use the CKKS scheme (an approximation scheme for real numbers) in our system to avoid extra overheads and system complexity from using different HE schemes for different types of values (for example, using CKKS for real numbers while using BGV for integers or using FHEW for boolean values).
--- Rebuttal Comment 1.1: Title: Further questions on encryption/decryption Comment: Thanks to the authors for their responses. I have further follow-up regarding the encryption/decryption scheme: For the local model parameters (float64), under the CKKS scheme, even if we choose a scaling factor $\Delta$ which covers all the significant digits (let's say $\Delta=10^{16})$, the CKKS error still could distort the least significant bits such that the decryption is not perfect (but still extremely close, which concurs with your empirical observations), which should be accounted for in the analysis. You could assume the decryption is perfect (which is reasonable, for a large enough scaling factor). --- Reply to Comment 1.1.1: Title: Response to further questions on encryption/decryption Comment: Dear Reviewer z8YN, thank you for the follow-up!! Your observation on the value error is correct and in general it has negligible impact on empirical evaluation as you also observed. However, we agree that adequate analysis on encryption/computation/decryption errors from CKKS scaling and other errors from other HE operators and phases can be included regarding the model performance, which will also be very helpful for the readers to understand the mechanism of HE. We plan to discuss it in the main paper (as you mentioned, the CKKS error still could distort the least significant bits such that the decryption is not perfect but still extremely close) and include such experiments in the appendix.
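The rounding argument in the thread above can be illustrated with a toy simulation (this is not the CKKS implementation; we simply inject decryption noise at an illustrative magnitude, e.g. 1 decrypting to ~1.000000000000003): rounding recovers the exact binary payload whenever the approximation error stays far below 0.5.

```python
import numpy as np

rng = np.random.default_rng(0)
binary_values = rng.integers(0, 2, size=1000).astype(float)  # true 0/1 payload

# Simulated CKKS-style approximation error after decryption; the real scheme's
# error depends on the scaling factor, but is similarly far below 0.5.
decrypted = binary_values + rng.normal(scale=1e-12, size=binary_values.shape)

recovered = np.rint(decrypted)  # round back to exact binary values
exact = np.array_equal(recovered, binary_values)
```

This mirrors the point above: for a large enough scaling factor, treating decryption as perfect in the analysis is a reasonable simplification, since the residual error never flips a rounded value.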
Summary: The paper introduces a technique to facilitate the federated learning of graph convolution networks. The key idea is to send aggregated node features to each client before the federated learning. The approach involves transmitting these features to each client in an encrypted manner to ensure privacy. Overall, I find the method proposed and the theoretical analysis interesting. The results look good. Some key details of the method might need to be clarified. I tend to accept this work. Strengths: - The paper proposes an improved framework for learning graph ConvNets in a federated manner. The results demonstrate its effectiveness. - A theoretical convergence analysis is provided, which illustrates a convergence-communication tradeoff. Weaknesses: Technical details --- There are a few technical details that the authors may want to clarify for a better understanding of the proposed technique. In equation (2), the authors claim that only feature aggregations of 1- and 2-hop neighbors of node $i$ are needed to evaluate the 2-layer GCN. It would be helpful if the authors could elaborate more on **why 1- and 2-hop aggregations are sufficient**. Is this an exact or approximate evaluation of the GCN model? Additionally, since GCN incorporates non-linearity in the model, it would be valuable to articulate how 2-hop feature aggregations can be directly used for evaluation. In equation (3), regarding client $k$, it would be beneficial for the authors to clarify whether they compute 1-hop feature aggregations for all nodes $V$ or only for the nodes belonging to client $k$, denoted as $V_k$. In line 222, the authors mention $V_k$, but it seems that it should be $V$ instead. Otherwise, it appears impossible to obtain the cross-edge aggregation for node $i$. Proof sketch --- It might be beneficial to provide a sketch of the proof to demonstrate the idea of how to prove the bounds in Table 1. How is the number of hops involved in the analysis?
Currently, results are given for up to 2 hops; is it possible to obtain results for higher numbers of hops? Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. If the convergence-communication tradeoff is controlled by the number of hops, it appears that achieving a continuous or fine-grained tradeoff control might not be possible. What happens if the convergence achieved with 1-hop aggregations is poor while the communication required for 2-hop aggregations is excessive? How does the proposed technique address such scenarios? 2. Could you provide further details on how the communication costs are computed in Figure 4? 3. The authors use the term "cross-silo" to refer to a small number of clients and "cross-device" to refer to a large number of clients. It would be helpful to define the boundary between these two categories. How many clients are considered the minimum for classifying them as "cross-device"? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your recognition that the method and proofs are interesting! We are very happy to elaborate on more technical details and proof sketches. >**Technical details: why feature aggregations of only 1- and 2-hop neighbors of nodes are sufficient to evaluate the 2-layer GCN?** Based on Equation 1, $\mathbf{h}^{(l+1)}\_i=\phi\left(\sum_{j\in\mathcal{N}\_{i}}\mathbf{A}\_{ij}\mathbf{h}\_j^{(l)}\mathbf{W}^{(l)}\right), $ as mentioned in line 127, for a GCN with L layers, the output for node i will depend on neighbors up to L steps away (i.e., there exists a path of no more than L edges to node i). So feature aggregations of 2-hop neighbors give an exact evaluation for a 2-layer GCN. >**Technical details: how 2-hop feature aggregations can be directly used for evaluation?** The 2-layer GCN computation for node $i$ is $$\mathbf{\hat{y}}\_i=\phi\left(\sum_{j\in\mathcal{N}\_i}\mathbf{A}\_{ij}\phi\left(\sum_{m\in\mathcal{N}\_j}\mathbf{A}\_{jm}\mathbf{x}_m^T \mathbf{W}^{(1)}\_{c(i)}\right)\mathbf{W}^{(2)}\_{c(i)}\right).$$ $\sum_{m\in\mathcal{N}\_j}\mathbf{A}\_{jm}\mathbf{x}\_m^T$ includes both the 1-hop feature aggregation $\sum_{j\in\mathcal{N}\_i} \mathbf{A}\_{ij}\mathbf{x}\_j$ and the 2-hop feature aggregations $\left\\{\sum_{m\in\mathcal{N}\_j}\mathbf{A}\_{jm}\mathbf{x}\_m\right\\}\_{j\in\mathcal{N}\_{i}\setminus\\{i\\}}$. The 2-hop feature aggregations thus appear directly inside $\sum_{m\in\mathcal{N}\_j}\mathbf{A}\_{jm}\mathbf{x}\_m^T$. After this aggregation is completed, the result is passed to the non-linear activation layer $\phi(\cdot)$.
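The exactness claim above can be checked numerically. The following NumPy sketch (the graph, shapes, and weights are illustrative, not the paper's code) verifies that evaluating a node's 2-layer GCN output from pre-communicated 1-hop aggregations of its neighbors matches the centralized computation:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Toy graph: 3 nodes, normalized adjacency A (self-loops included), features X.
rng = np.random.default_rng(0)
A = np.array([[0.5, 0.25, 0.25],
              [0.25, 0.5, 0.25],
              [0.25, 0.25, 0.5]])
X = rng.normal(size=(3, 4))
W1 = rng.normal(size=(4, 8))   # first-layer weights
W2 = rng.normal(size=(8, 2))   # second-layer weights

# Centralized 2-layer GCN output for node i = 0.
H1 = relu(A @ X @ W1)
y_central = relu(A @ H1 @ W2)[0]

# Federated view: the client holding node 0 only receives the pre-communicated
# 1-hop aggregations sum_m A[j, m] * x_m for each neighbor j of node 0 --
# together these cover node 0's 1- and 2-hop neighborhoods.
agg = {j: A[j] @ X for j in (0, 1, 2)}     # 1-hop aggregation per neighbor j
h1 = {j: relu(agg[j] @ W1) for j in agg}   # first-layer hidden state per neighbor
y_fed = relu(sum(A[0, j] * h1[j] for j in agg) @ W2)

assert np.allclose(y_central, y_fed)       # exact, not approximate
```

The assertion holds because the nonlinearity is applied only after each aggregation completes, exactly as described in the rebuttal.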
>**Technical details in equation 3: For client $k$, are 1-hop feature aggregations computed for all nodes $V$ or only for the nodes $V_k$ belonging to client $k$?** As in line 222, each client $k$ sends its encrypted accumulations of local node features, $$\left[\left[ \left\\{\sum_{j\in\mathcal{N}\_i}\mathbb{I}\_k(c(j))\cdot\mathbf{A}\_{ij}\mathbf{x}\_j\right\\}\_{i\in{\mathcal{V}\_k}}\right]\right],$$ to the server based on its local nodes $V_k$. The server then aggregates this information across all clients, yielding the accumulated information for all nodes $V$. After that, the server selects the required 2-hop neighbor feature aggregation for each node $i$ and sends it back to client $k$. >**Sketch of proof to demonstrate the idea of how to prove the bounds in Table 1** Step 1: As in Theorem 5.6, the analysis for general graphs relies on bounding the difference of the information provided by local and global graphs $\|I_{local}-I_{glob}\|$. Step 2: To further quantify such differences, we adopt the Stochastic Block Model (SBM) for analyzing the graph topology. Step 3: As shown in Appendix H.3.2, the information difference can be separated into two components: the difference in the number of nodes (different number of data samples) and the difference in node label distribution (IID or Non-IID). Step 4: By calculating the two components under the SBM, we can then derive the equations (each one has two components) in Table 1.
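Returning to the equation-3 discussion above: the client-side masked accumulation and server-side summation can be sketched as follows (a plaintext stand-in for the homomorphic encryption; for simplicity each client accumulates entries for every node, and all names are illustrative):

```python
import numpy as np

# Plaintext stand-in for the HE-based accumulation: under CKKS the server
# would add ciphertexts, but the arithmetic performed is the same summation.
rng = np.random.default_rng(1)
n, d = 4, 3
A = (rng.random((n, n)) < 0.6).astype(float)   # illustrative adjacency
X = rng.normal(size=(n, d))                    # node features
owner = np.array([0, 0, 1, 1])                 # c(j): which client holds node j

# Each client k accumulates A[i, j] * x_j over its OWN nodes j (the indicator
# 1_k(c(j)) in the equation above), for every node i.
partial = {k: np.zeros((n, d)) for k in (0, 1)}
for k in (0, 1):
    for i in range(n):
        for j in range(n):
            if owner[j] == k:
                partial[k][i] += A[i, j] * X[j]

# The server sums the per-client (encrypted) accumulations, recovering the
# full 1-hop aggregation A @ X without seeing raw features in the clear.
aggregated = partial[0] + partial[1]
assert np.allclose(aggregated, A @ X)
```

Because each client's message already sums over its own nodes, the server only ever sees aggregated (and, in the actual system, encrypted) quantities.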
>**Proof Sketches: How is the number of hops involved in the analysis?** As in Theorem 5.6, Appendix H.1 (1-layer case) and Appendix H.2 (2-layer case), the difference of the information provided by local and global graphs $\|I_{local}-I_{glob}\|$ can be written as $\|K\mathbf{X}_k^T\mathbf{A}_k^T\mathbf{A}_k\mathbf{X}_k-\mathbf{X}^T\mathbf{A}^T\mathbf{A}\mathbf{X}\|$ for the 1-layer case and $\|K\mathbf{X}_k^T\mathbf{A}_k^T\mathbf{A}_k^T\mathbf{A}_k \mathbf{A}_k\mathbf{X}_k-\mathbf{X}^T\mathbf{A}^T\mathbf{A}^T\mathbf{A}\mathbf{A}\mathbf{X}\|$ for the 2-layer case. It can be extended to the L-hop case by expanding $\mathbf{A}^T\mathbf{A}$, though the computation (deriving the analytical form) will be increasingly tedious as the number of hops increases. We already have a cleaner version of the proof, and we will also include the above proof sketch in the paper. >**Question 1: What happens if the convergence achieved with 1-hop aggregations is poor while the communication required for 2-hop aggregations is excessive?** This is an interesting future direction. The most natural way to make the communication tradeoff more granular is to perform neighbor sampling with probability $p$ to reduce the communication overhead [1]. Currently, we are communicating with all neighbors in the graph. We will include this direction as future work. [1] BNS-GCN: Efficient full-graph training of graph convolutional networks with partition-parallelism and random boundary node sampling. >**Question 2: Could you provide further details on how the communication costs are computed in Figure 4?** It is calculated as the total length of the arrays that need to be sent, which is independent of the implementation. E.g., if the client needs to send the array {1, 0, 1, 0} to the server, the communication cost is 4. In this way, the cost is independent of the implementation.
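The cost metric described above is simple enough to state directly in code (the helper name is illustrative):

```python
# Implementation-independent cost metric described above: the cost of a
# transmission is the total number of array entries sent.
def communication_cost(messages):
    """Total length of all arrays sent, e.g. [[1, 0, 1, 0]] costs 4."""
    return sum(len(m) for m in messages)

assert communication_cost([[1, 0, 1, 0]]) == 4          # the example above
assert communication_cost([[1, 2], [3, 4, 5]]) == 5     # multiple messages
```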
Compression techniques may make the actual communication cost lower than the array size, but their efficacy will depend on the specific data to be transmitted, and thus we do not consider them here. The code is also provided for reproducibility. In Fig. 4, the communication cost of distributed GCN can be 10^9 for Ogbn-Arxiv, which works out to several GBs of communicated data. For Ogbn-Products (2,449,029 nodes), it is hundreds of GBs. FedGCN requires 100X lower communication cost. In practical deployments (e.g., graphs with billions of nodes at Amazon and Facebook), the communication cost of distributed methods is prohibitive, with extremely high latency at every training round. FedGCN's single pre-communication round can thus greatly improve performance. >**Question 3: How many clients are considered the minimum for classifying them as "cross-devices"?** We borrowed this terminology from [2], and there is no clear definition in the FL community. Generally speaking, "more than 100 clients" is cross-device, and "less than 100 clients" is cross-silo. [2] Federated Learning Tutorial, NeurIPS 2020 --- Rebuttal Comment 1.1: Title: thank you for the rebuttal Comment: Most of my questions are addressed by the rebuttal. I remain on the positive side for this submission. --- Reply to Comment 1.1.1: Comment: Dear Reviewer d6pK, Happy to know that our rebuttal has resolved your concerns!! We will add the discussed technical details and proof sketches to the paper!
Summary: The paper presents FedGCN, a framework designed for federated training of graph convolutional networks (GCNs) specifically for semi-supervised node classification. The proposed method aims to communicate cross-client neighbor information just once before training initiates, diverging from previous methods that demanded communication in each round. This shift significantly reduces communication overhead and expedites convergence. FedGCN provides the flexibility to choose between 0-, 1-, or 2-hop neighbor communication to strike an optimal balance between overhead and model accuracy. Empirical results highlight FedGCN's effectiveness and minimal communication cost in comparison with previous techniques. Strengths: 1. Reducing the training communication during distributed training of graph neural networks is a crucial problem when dealing with super large graphs. 2. The experiment is thorough, albeit with small-scale graph datasets. 3. The presentation is clear, well-structured, and easy to follow. I personally appreciate the results in Figure 3 and its presentation format. Weaknesses: 1. The motivation is not clear. In the abstract, the motivations "keeping data where it is generated" and "a single connected graph cannot be disjointly partitioned onto multiple clients" are contradictory. We don't need to partition the graph; the graph itself has multiple partitions. If this is the case, where should we store the connection information for two nodes on different partitions since we need to "keep data where it is generated"? The idea of "keeping data where it is generated" also contradicts the proposed "X-hop communication". (Note that I accept the motivation that the graph is so large that it cannot be stored in one machine.) 2. The assumptions are not well grounded, well-explained, or empirically verified. The three assumptions - a) Lipschitz Continuous Gradient, b) Bounded Global Variability, and c) Bounded Gradient - are extremely strict.
Even though this paper claims most of them are standard assumptions of FL, this is far from convincing to me. Since all the assumptions form the upper bound of the main claim (Equation (4)), this should be well addressed and treated. 3. The technical contribution may be limited. If I understand correctly, when rewriting the GCN equation to Equation (2), this paper removes the activation function in the first layer (otherwise, we would have to transform the hidden representation during training). This operation makes the proposal essentially a type of SGC [1]. If this is the case, the technical contribution may be limited. 4. The experiments appear to be quite weak. - a. One of the motivations of this paper is the large size of the graph. However, the graphs used in the experiments are all small (even 'tiny' for Cora, CiteSeer). - b. Since the graphs are small, the communication cost will be negligible. - c. There are some important baselines missing. [2] [1] Simplifying graph convolutional networks, ICML2019. \ [2] Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks, ICLR 2022 Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please address my concerns in **Weaknesses**. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful suggestions! We plan to incorporate them into our manuscript, as we detail below, and we believe that our reply will resolve your concerns. >**For weakness 1: motivation is not clear** “Keeping data where it is generated” refers to typical data constraints in federated settings where the graph data and its node features are naturally stored in the local clients, which preserves the privacy of users. In these settings, the cross-client edges typically exist in both clients. Intuitively, this is due to the fact that edges are generated when nodes at clients interact with each other. Thus, the interaction record, though not personal node characteristics, is naturally stored at both nodes, i.e., in both clients. We provide examples (Amazon and Facebook, see rebuttal to Reviewer BmP4) and will use them to clarify this point in the paper's introduction. The reviewer's comment that "keeping data where it is generated contradicts the proposed X-hop communication" exactly captures the main challenge of our paper: we believe that FedGCN's method of communication allows models to learn with cross-client edges, without revealing sensitive information across clients. "X-hop communication" is used to communicate averaged information with encryption rather than sending sensitive user features directly. In writing "a single connected graph cannot be disjointly partitioned onto multiple clients" we mean to say that the cross-client edges must exist. We agree that the sentence is a bit confusing. We will modify the abstract accordingly. As mentioned in Line 31, the main challenge of applying FL to GCN training involving a single large graph is that cross-client edges exist among clients. >**For weakness 2: the assumptions are not well-grounded, well-explained, or empirically verified** Lipschitz Continuous Gradient, Bounded Global Variability, and Bounded Gradient are standard assumptions for FL analysis [1-6]. 
For example, [6] incorporates them as assumption 1, assumption 2, and Eqn. 14 of Appendix B.2, $$\mathbb{E}[\mathcal{L}(\bar{\theta}^{t+1})]\leq\mathbb{E}[\mathcal{L}(\bar{\theta}^t)]+\mathbb{E}[\langle\nabla\mathcal{L}(\bar{\theta}^t),\bar{\theta}^{t+1}-\bar{\theta}^t\rangle]+\frac{L}{2}\mathbb{E}[\|\bar{\theta}^{t+1}-\bar{\theta}^t\|^2].$$ They are “very standard non-convex, non-iid FL assumptions” (mentioned by reviewer z8YN (strength 4) and reviewer d6pK (strength 2)), and we believe they underpin state-of-the-art FL convergence analysis. That said, we also agree that these non-convex assumptions are still relatively strict, since how to provide FL analysis without such assumptions is still an open problem. We will add discussion of these limitations to our paper. In brief, Bounded Global Variability and Bounded Gradient allow the convergence analysis to accommodate data distribution heterogeneity, a core challenge of FL [1-3]. The bounded gradient assumption in particular holds for certain activation functions, e.g., sigmoid functions, and bounded input features. Lipschitz Continuous Gradient is a technical condition on the shape of the loss function that is standard for non-convex analysis. It in fact relaxes the assumption of (strongly) convex loss functions that were previously common in analyzing FL and stochastic gradient descent [1]. Our theory is also based on [2]’s convergence result for FedAvg. We hope our theory can open the area of theoretical analysis of federated graph learning. >**For weakness 3: the technical contribution may be limited** We do not remove the activation function in the 1st layer. After getting the neighbor feature aggregation, the GCN computation is exactly the same as in centralized training, given the same parameter $W$. Eqn. 2 shows only the aggregation of neighbor features; this aggregation is then passed into the activation function as usual in GCNs.
As mentioned in our conclusion, the paper "Simplifying GCNs" aims to speed up the local computation by simplifying the GCN computation as $A^kXW$, which is an approximation of GCN. FedGCN can incorporate such methods to speed up local training. >**For weakness 4.a,4.b: graphs used in the experiments are all small, communication cost is then negligible** As mentioned in section 6.1, we experiment on Ogbn-ArXiv (169,343 nodes, 1,166,243 edges), and Ogbn-Products (2,449,029 nodes, 61,859,140 edges). We respectfully claim that a graph with 2,449,029 nodes and 61,859,140 edges is not “small”. [6] also uses Ogbn-ArXiv. We note that Ogbn-Products is bigger than all datasets used in [6]. As described in Appendix E.3, experiments are done in a p3d.16xlarge instance with 8 GPUs (32GB memory for each GPU) and 10 g4dn.xlarge instances (16GB GPU memory in each instance). One run of the Ogbn-Products experiment can take 20 min with full-batch GPU graph training. CPU training is impossible in this case. Experiments took two weeks to finish all data points for Ogbn-Products. For communication cost, please refer to the rebuttal to reviewer BmP4, Question 2. >**For weakness 4.c: some important baselines missing** Thank you for mentioning the paper [6]. It proposes distributed graph training with global correction. For the global correction part, the server stores global graph information and node features, which causes serious privacy leakage in our federated setting. Although the paper studies a different setting, we will cite the paper and discuss it accordingly. [1] On the Convergence of FedAvg on Non-IID Data. ICLR 2019. [2] Parallel restarted SGD with faster convergence and less communication. AAAI 2019. [3] Achieving Linear Speedup with Partial Worker Participation in Non-IID FL. ICLR 2021. [4] Sharper convergence guarantees for asynchronous sgd for distributed and federated learning. NeurIPS 2022. [5] FeDXL: Provable FL for Deep X-Risk Optimization. ICML 2023.
[6] Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks, ICLR 2022 --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I appreciate the authors' response. However, I still have the following concerns: 1. Are these assumptions still valid for the graph? For instance, in graph federated learning, the $w$ and $v$ in Assumption 5.3 should be related since models $f(w)$ and $f(v)$ are trained with the connected nodes with high probability. How does this relationship impact the assumption? Additionally, how about Assumption 5.4 and Assumption 5.5? (I remain skeptical that such strict conditions can be assumed even in standard federated learning.) 2. The datasets used in this paper are not real-world graphs that require "keeping data where it is generated." In other words, all the experiments in this paper are conducted on synthetic data. I find it hard to accept a paper that claims to address a real-world problem but only tests its solutions in simulated scenarios. --- Reply to Comment 1.1.1: Comment: Thank you for the new questions! We hope that our previous rebuttal has addressed your prior concerns. We have done our best to answer your new questions below. >**Are these assumptions still valid for the graph?** For Assumption 5.3 (λ-Lipschitz Continuous Gradient), $\|\nabla f_k(w)-\nabla f_k(v)\|\leq\lambda\|w-v\|$, $\forall w,v \in\mathbb{R}^d,$ $w$ and $v$ in the statement of this assumption represent two arbitrary sets of parameter values of the model. In a graph neural network, $w$ represents the concatenation of parameters of each layer, i.e., $[W_1,W_2,...,W_L]$ where $W_l$ is the vectorized weight matrix of the $l$-th layer. For example, $w$ can be the model parameters at the first training iteration, and $v$ can be the model parameters after subsequent training iterations. The assumption is general for arbitrary $w$ and $v$.
In words, it means that changing the model parameters from $w$ to $v$ will not change the gradient of the loss function $f_k$ by more than a constant multiple of the norm $\left\|w-v\right\|$. The correlation between $w$ and $v$ will not affect the bound. Intuitively, one might in fact expect a correlation between $w$ and $v$ to make the Lipschitz property more likely to hold, since the gradient is less likely to change much if the new parameter values $(v)$ are correlated with the old parameter values $(w)$. Assumption 5.4, $\|\nabla f_k(w_t)-\nabla f(w_t)\|\leq\sigma_G$, follows from Assumption 5.5. If the local gradient is bounded, then since the global gradient is the average of the local gradients, it is also bounded. Thus, the difference between local and global gradients will also be bounded. For Assumption 5.5, $\|\nabla f_k(w_t)\|\leq G,$ we agree that the bounded gradient assumption may not always hold. However, it can be shown that this assumption holds for certain activation functions, e.g., sigmoid functions, and bounded input features. >**Why do we adopt such assumptions?** Our analysis is based on that in [2]. To the best of our knowledge, all state-of-the-art FL papers, such as [1-6], make very similar assumptions. Since the main purpose of our paper is not to advance the convergence analysis of FL in general, but rather to show how this analysis applies to federated graph training, we follow these papers' assumptions. We believe that if better FL theory papers emerge that remove one or all assumptions, we can extend our work to this more advanced theory by analyzing the difference between the local gradient and global gradient in the graph setting. We further believe that, while our exact quantitative convergence bounds may not hold in practice given that some of the theoretical assumptions may be violated, the qualitative insights derived from those bounds may still be valuable.
In Figure 5, for example, we empirically validate our qualitative observations on how FedGCN's convergence varies with the number of clients and number of hops. We will further emphasize in the paper that the convergence analysis suggests qualitative insights about FedGCN's performance, even if the exact mathematical expressions do not always hold. [1] On the Convergence of FedAvg on Non-IID Data. ICLR 2019. [2] Parallel restarted SGD with faster convergence and less communication. AAAI 2019. [3] Achieving Linear Speedup with Partial Worker Participation in Non-IID FL. ICLR 2021. [4] Sharper convergence guarantees for asynchronous sgd for distributed and federated learning. NeurIPS 2022. [5] FeDXL: Provable FL for Deep X-Risk Optimization. ICML 2023. [6] Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks, ICLR 2022
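A side note on the discussion of Assumption 5.4 above: assuming the global loss is the average of the $K$ local losses, $f=\frac{1}{K}\sum_{j=1}^{K}f_j$, the claim that Assumption 5.4 follows from Assumption 5.5 is just the triangle inequality: $$\|\nabla f_k(w_t)-\nabla f(w_t)\|=\Big\|\nabla f_k(w_t)-\frac{1}{K}\sum_{j=1}^{K}\nabla f_j(w_t)\Big\|\leq\|\nabla f_k(w_t)\|+\frac{1}{K}\sum_{j=1}^{K}\|\nabla f_j(w_t)\|\leq 2G,$$ so Assumption 5.4 holds with $\sigma_G=2G$.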
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
SQ Lower Bounds for Learning Mixtures of Linear Classifiers
Accept (poster)
Summary: This paper studies the problem of learning mixtures of linear classifiers under Gaussian sampling. The paper provides a statistical query lower bound which demonstrates that known algorithms for the problem in the literature are essentially best possible, even for the special case of learning uniform mixtures. It further establishes the complexity of any SQ algorithm. Strengths: +) Statistical query complexity of learning mixtures of linear classifiers +) Efficient spherical designs to fulfill the required separation assumptions for the results to hold Weaknesses: This paper is primarily a theoretical work. The assumptions, problem setup, and results are only of theoretical interest. I am not sure if the results and technical tools are interesting to the broad machine learning community. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: The paper is well written and easy to follow. There are some notation issues. For example, in the abstract, "y=sign(<v_\ell,x>))" where y should be y_\ell, right? This also appeared in several places in the paper. Plus, one right parenthesis ) should be removed. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
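The model described in this review (labels $y=\mathrm{sign}(\langle v_\ell,x\rangle)$ with Gaussian $x$, queried through an SQ oracle) can be illustrated with a toy Monte Carlo simulation. All names are hypothetical and this is a stand-in for intuition, not the paper's construction:

```python
import numpy as np

# Toy SQ oracle for a uniform mixture of r linear classifiers over N(0, I_n):
# queries f are answered up to an adversarially chosen error of magnitude tau.
rng = np.random.default_rng(0)
n, r = 5, 3
V = rng.normal(size=(r, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit vectors v_1, ..., v_r

def sample(m):
    x = rng.normal(size=(m, n))                 # x ~ N(0, I_n)
    ell = rng.integers(r, size=m)               # uniformly random component
    y = np.sign((V[ell] * x).sum(axis=1))       # y = sign(<v_ell, x>)
    return x, y

def sq_oracle(f, tau, m=200_000):
    x, y = sample(m)
    return np.mean(f(x, y)) + rng.uniform(-tau, tau)   # noise within tau

# Query the label's correlation with the first coordinate; for a unit vector v,
# E[sign(<v, x>) x] = sqrt(2/pi) * v, so the answer is predictable here.
ans = sq_oracle(lambda x, y: y * x[:, 0], tau=0.01)
assert abs(ans - np.sqrt(2 / np.pi) * np.mean(V[:, 0])) < 0.05
```

The lower bound in the paper concerns how many such tolerance-$\tau$ queries any learner needs; this sketch only shows what a single query looks like.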
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and effort. We will address the typos pointed out in the revision. The reviewer’s stated weakness of our work is the fact that it is “primarily a theoretical work”. We respectfully point out that theoretical research in machine learning (“learning theory”) is well within the scope of NeurIPS’23 and explicitly mentioned in the call for papers. We request that our submission be judged on its merits according to the specified criteria in the call for papers. Specifically, the problem that we study (learning mixtures of linear classifiers) is a classical problem in machine learning and understanding its computational complexity is a question of fundamental importance. Our work provides near-optimal SQ lower bounds for this problem, suggesting that known algorithms are essentially best possible. In the process, our work develops novel constructions of spherical designs that are of independent interest. In summary, we believe that our contributions are of significant interest to the theoretical ML community that has a strong presence at NeurIPS. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal! Comment: Thanks to the authors for the rebuttal and for addressing the issues. After reading the rebuttal and other reviewers' comments, I am updating the score to Borderline accept.
Summary: A statistical query (SQ) algorithm is an algorithm that attempts to learn the data distribution $D$ by submitting queries $f$ to an oracle, which responds with $v$ such that $\lvert v - E_{x\sim D} f(x) \rvert$ is small. In this paper, the authors study the problem of finding an SQ lower bound for learning a mixture of linear classifiers. The main ingredients are 1. a result in [DKPZ21], which establishes an SQ lower bound on testing a distribution of $(x,y)\in \mathbb{R}^n\times \{-1,1\}$ where $x$ is normal, $E[y \vert x = z]=g(Uz)$ for some function $g$ with zero low-degree moments and some matrix $U$. 2. the authors' construction of a well-separated _spherical design_, that is, unit vectors $v_1,\ldots,v_r$ such that $g(z) = \frac1{r}\sum_{\ell=1}^r\text{sign} (v_{\ell}^Tz)$ has zero low-degree moments, as required in 1. And the last step is to turn the testing problem into the problem of learning the mixture of linear classifiers. The proof of the main theorem is presented in the main paper, whereas those of several lemmas are postponed to the appendix. Strengths: - The authors have established a new SQ lower bound which matches the algorithmic guarantees in some cases [CDV22]. - The idea of designing and using spherical designs to obtain the lower bound is interesting and can be applied to other learning problems. - The writing is well-organized and it is easy to see the high-level idea of the proof. Weaknesses: My only minor concern is that there is a lack of discussion of the main result. See Questions below for some questions that I have in mind. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. The authors have mentioned that the main result provides "a near-optimal information-computation tradeoff for the problem". Does this mean that the problem of finding an optimal tradeoff is still open? 2. Related to 1., are there any open problems introduced, or related to this work? 3.
The idea of using spherical designs could be used to find SQ lower bounds for other learning problems in which $g$ is an odd function. Are there any other learning problems that the authors have in mind? 4. Can the result be extended to non-normal data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors should include a section on discussion, or limitations of the main results. For example, I have already mentioned the case where the data is non-normal. Also, am I correct in assuming that the result only applies when $r$ is already known? Is there anything we can do when $r$ is not known? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their effort and positive assessment of our work. We start by addressing the reviewer’s concerns in the “Limitations” section below: 1. “Also, am I correct in assuming that the result only applies when $r$ is already known. Is there anything we can do when $r$ is not known?” In our work, we prove a lower bound for known $r$, which automatically establishes the hardness for the (more challenging) setting where $r$ is unknown. On the other hand, we do not know any algorithmic result for unknown $r$. 2. “The authors should include a section on discussion, or limitations of the main results.” After the statement of our main Theorem 1.2, we include a section (line 89 - line 105) to discuss the implications of our result. Our understanding is that a “Limitations” section is not required at NeurIPS’23. If this is not accurate, we will be happy to include one in the revised version. We now proceed by addressing the reviewer’s questions below: 1. The authors have mentioned that the main result provides "a near-optimal information-computation tradeoff for the problem". Does this mean that the problem of finding an optimal tradeoff is still open? Our SQ lower bound qualitatively matches the best known algorithm in the sense that both the upper and the lower bound are of the form $n^{\mathrm{poly}(1/\Delta)\log(r)}$. On the other hand, the degrees of the polynomials in the exponent do not exactly match. Specifically, in our SQ lower bound, the exponent on $(1/\Delta)$ is around $1/10$, which is strictly smaller than the one for the algorithmic result of [CDV22]. 2. Are there any open problems introduced, or related to this work? An interesting research direction is to understand if our techniques can be leveraged to obtain SQ lower bounds for other mixture models (e.g., for other mixtures of experts).
In addition to SQ lower bounds, it would also be interesting to establish reduction-based hardness for such problems, starting, e.g., from cryptographic assumptions. A more concrete open problem is to obtain sharper lower bounds for this particular problem (matching the constant in the exponent as well). 3. The idea of using spherical designs could be used to find SQ lower bound of other learning problems in which $g$ is an odd function. Are there any other learning problems that the authors have in mind? While the focus of our work has been on learning mixtures of linear classifiers, we believe that a similar approach ought to apply for other “mixtures of experts” problems. We leave this as a direction for future work. 4. Can the result be extended to non-normal data? The main point of our work is that we establish a hardness result (SQ lower bound), even for the arguably simplest (and well-studied) setting where the covariates are drawn from the standard Gaussian distribution. This implies similar SQ lower bounds, e.g., when the covariates are drawn from a more general distribution family (e.g., an unknown subgaussian or log-concave distribution) that includes the standard normal. If one wants to establish SQ lower bounds for a non-Gaussian fixed distribution on covariates, a different construction is needed. References: [CDV22] A. Chen, A. De, and A. Vijayaraghavan. Algorithms for learning a mixture of linear classifiers. In International Conference on Algorithmic Learning Theory, pages 205-226. PMLR, 2022. --- Rebuttal Comment 1.1: Title: Response Comment: I am satisfied with the authors' answers. A comment: > Our understanding is that a “Limitations” section is not required at NeurIPS’23. If this is not accurate, we will be happy to include one in the revised version. 
My suggestion is that the authors can add their rebuttal to a "future direction" section; for example: > While the focus of our work has been on learning mixtures of linear classifiers, we believe that a similar approach ought to apply for other “mixtures of experts” problems. We leave this as a direction for future work. > If one wants to establish SQ lower bounds for a non-Gaussian fixed distribution on covariates, a different construction is needed.
Summary: The paper provides statistical query lower bounds for learning mixtures of linear classifiers. In the problem of learning a mixture of linear classifiers, there are $r$ linear classifiers $v_1, \ldots ,v_r \in \mathbb{R}^{n}$. The input feature $x\in \mathbb{R}^{n}$ is drawn from a Gaussian, and the label is $y = \mathsf{sign}(v_\ell^\top x)$, where the index $\ell$ is chosen with probability $w_{\ell}$ (one has $w_1 +\cdots +w_{r} = 1$). Previous work [CDV'22] gives an algorithm of sample complexity $n^{\log(r)/\Delta^2}$ where $\Delta$ is the minimum separation between classifiers. The major contribution of this paper is to give an almost matching lower bound of $n^{\log (r) \cdot \mathsf{poly}(\Delta^{-1})}$, under the SQ model. From a high level, the paper follows the framework of [DKS'17] and reduces the problem to spherical design. The major technical contribution of this paper is to provide a spherical design: a set of vectors on the unit sphere that satisfies (1) non-trivial pairwise distances and (2) zero correlation with any low-degree polynomial. They obtain the spherical design using a topological argument inspired by [BRV'13]. ---------------------- I have read the rebuttal and I would keep my positive evaluation of the paper. Strengths: The paper gives an almost matching SQ lower bound for learning mixtures of linear classifiers; the technique is novel and could have broad applications. Weaknesses: There is no major weakness, though I have a few minor questions (to be specified later) Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) I have some concern with Lemma 3.5. In the statement, it is only required that $f \in L^2(R, N)$, but it seems not true for every such function, right? 
If I understand correctly, the proof only requires $f = \mathrm{sign}$ (i.e., it only needs to work for the sign function), but even for the sign function the claim is sloppy, because it requires $E_{z\sim N_m} [p(z)f(v^\top z)]$ to be scaling invariant in $v$, which seems not true for polynomials? Please clarify this point. (2) Does the algorithm of [CDV'22] fall into the SQ framework? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: no Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
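The generative model this review summarizes — covariates $x \sim \mathcal{N}(0, I_n)$, a component index $\ell$ drawn with probability $w_\ell$, and label $y = \mathsf{sign}(v_\ell^\top x)$ — can be sketched as a small sampler. This is an illustrative sketch only; the dimension, number of components, and weights below are made-up values, not from the paper:

```python
import numpy as np

def sample_mixture(V, w, num_samples, rng):
    """Draw (x, y) pairs from a mixture of linear classifiers:
    x ~ N(0, I_n), component index l ~ w, label y = sign(v_l^T x)."""
    r, n = V.shape
    X = rng.standard_normal((num_samples, n))   # Gaussian covariates
    idx = rng.choice(r, size=num_samples, p=w)  # mixing index per sample
    # row-wise inner products v_l^T x for the chosen components
    y = np.sign(np.einsum("ij,ij->i", X, V[idx]))
    return X, y

rng = np.random.default_rng(0)
n, r = 5, 3                                     # made-up problem sizes
V = rng.standard_normal((r, n))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # unit-norm classifiers
w = np.full(r, 1.0 / r)                         # uniform mixture weights
X, y = sample_mixture(V, w, num_samples=1000, rng=rng)
```
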
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their effort and positive assessment of our work. We respond to the reviewer’s questions below: 1. I have some concern with Lemma 3.5. In the statement, it is only required that $f\in L^2(\mathbb{R},\mathcal{N})$, but it seems not true for every such function, right? If I understand correctly, the proof only requires $f = \mathrm{sign}$ (i.e., it only needs to work for the $\mathrm{sign}$ function), but even for the $\mathrm{sign}$ function the claim is sloppy, because it requires $\mathbf{E}_{\mathbf{z}\sim \mathcal{N}_m}[p(\mathbf{z})f(\mathbf{v}^\intercal\mathbf{z})]$ to be scaling invariant in $\mathbf{v}$, which seems not true for polynomials? Please clarify this point. We thank the reviewer for pointing this out. The statement of Lemma 3.5 requires the assumption that $\mathbf{v}$ is a unit vector (or, more generally, a non-zero vector of fixed $L_2$-norm). Please note that we only invoke this lemma for $\mathbf{v}$ being a unit vector. Without this assumption, the conclusion is not true (as the reviewer pointed out). Under the assumption that $\mathbf{v}$ is a unit vector, Lemma 3.5 holds as stated, i.e., for any function $f$ in the space $L^2(\mathbb{R},\mathcal{N})$ (not only for the $\mathrm{sign}$ function). Looking at the proof of Lemma 3.5 in the appendix, we also want to point out that Claim B.1 requires the assumption that both $\mathbf{U}$ and $\mathbf{V}$ are projection matrices (i.e., $\mathbf{U}\mathbf{U}^{\intercal}=I_{n_1}$, and $\mathbf{V}\mathbf{V}^\intercal=I_{n_2}$), because the equations in line 576 hold only if both $\mathbf{U}$ and $\mathbf{V}$ are projection matrices. 2. Does the algorithm of [CDV22] fall into the SQ framework? The algorithm in [CDV22] can be implemented in the Statistical Query (SQ) model efficiently. In fact, the class of SQ algorithms is rather broad and captures a range of known supervised learning algorithms. 
More broadly speaking, several known algorithmic techniques in machine learning are known to be efficiently implementable using SQ algorithms. These include spectral techniques, moment and tensor methods, local search (e.g., Expectation Maximization), and many others (see, e.g., [FGR+17, FGV17]). References: [CDV22] A. Chen, A. De, and A. Vijayaraghavan. Algorithms for learning a mixture of linear classifiers. In International Conference on Algorithmic Learning Theory, pages 205-226. PMLR, 2022. [FGR+17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound for detecting planted cliques. J. ACM, 64(2):8:1-8:37, 2017. [FGV17] V. Feldman, C. Guzman, and S. S. Vempala. Statistical query algorithms for mean vector estimation and stochastic convex optimization. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pages 1265-1277. SIAM, 2017. --- Rebuttal Comment 1.1: Title: Thanks for clarification. Comment: Thanks for the clarification. Please consider adding the assumptions on $v$ to the theorem statement.
Summary: The authors prove a lower bound for the number of queries needed in the statistical query model for learning a mixture of linear classifiers. The statistical query model essentially makes oracle queries with a polynomial $f(x)$, and the oracle responds with a value $v$ such that $|v-E[f(x)]| \le t$, for some threshold $t$ (the accuracy of the oracle). The main result in the paper is Theorem 1.2, which roughly states that to learn an $n$-dimensional mixture of linear classifiers within TV distance $\epsilon$, any algorithm must use queries with accuracy $1/\mathrm{poly}(n)$ or must make $2^{\mathrm{poly}(n)}$ statistical queries. Strengths: - The lower bounds are claimed to be tight. - The problem of learning a mixture of linear classifiers is simple and elegant and thus important from a learning theory viewpoint. - The technical results in the paper leading to Thm 4.2 synthesize ideas from topology and analysis. From a technical perspective the proof is indeed interesting and informative. Weaknesses: - The authors claim the lower bound "qualitatively match[es]" previous results by Chen et al. 2022. It would be good to reduce the Chen et al. result to the SQ model or vice versa. Otherwise the lower bound loses some of its significance. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: In Theorem 1.2, is it possible to have $\Delta$ not depend upon $r$, i.e., can the dependence $\Delta > 1/r^{10}$ be eliminated? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
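The SQ oracle summarized in this review can be simulated concretely. This is a hedged sketch, not the paper's construction: the oracle here estimates $E[f(x)]$ by Monte Carlo and then snaps the estimate to a grid of width $t$ to model the oracle's allowed slack (so the tolerance guarantee holds only up to Monte Carlo error); all names and numeric values are made up:

```python
import numpy as np

def make_sq_oracle(sampler, t, num_mc=200_000, seed=0):
    """Simulate a statistical query oracle: given a query f, return some
    value v with |v - E[f(x)]| <= t. Here v is a Monte Carlo estimate of
    the expectation, rounded to a grid of width t to model the slack."""
    rng = np.random.default_rng(seed)
    xs = sampler(num_mc, rng)
    def oracle(f):
        est = float(np.mean(f(xs)))
        return t * round(est / t)
    return oracle

# toy query: the mean of the first coordinate under x ~ N(0, I_3), which is 0
oracle = make_sq_oracle(lambda m, rng: rng.standard_normal((m, 3)), t=0.01)
answer = oracle(lambda xs: xs[:, 0])
```
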
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their effort and positive assessment of our work. 1. Regarding the reviewer’s point: “The authors claim the lower bound "qualitatively match[es]" previous results by Chen et al. 2022. It would be good to reduce the Chen et al. result to the SQ model or vice versa. Otherwise the lower bound loses some of its significance.” The algorithm in [CDV22] can be implemented in the Statistical Query (SQ) model efficiently. In fact, the class of SQ algorithms is rather broad and captures a range of known supervised learning algorithms. More broadly speaking, several known algorithmic techniques in machine learning are known to be efficiently implementable using SQ algorithms. These include spectral techniques, moment and tensor methods, local search (e.g., Expectation Maximization), and many others (see, e.g., [FGR+17, FGV17]). We respond to the reviewer’s question below: 1. In Theorem 1.2, is it possible to have $\Delta$ not depend upon $r$, i.e., can the dependence $\Delta \ge r^{-1/10}$ be eliminated? Our lower bound proof requires the assumption that $\Delta\ge r^{-c}$ for some absolute constant $0<c<1$. We note that this parameter setting is arguably the most interesting in practical settings. In our technical proof, the lower bound comes from Proposition 3.2 [DKPZ21], which requires the low dimension $m \ge 2$. In particular, to apply the techniques in our setting, we randomly sample $r$ $\Delta$-separated unit vectors over the $m$-dimensional unit sphere with sufficiently small error guarantees. This is impossible without the dependence on $\Delta$. In addition, by taking $\Delta=r^{-c}$, we do provide a lower bound for small $\Delta$. However, the algorithmic result of [CDV22] has sample and runtime complexity $\min(n^{O(\log r/\Delta^2)},(n/\Delta)^{O(r)})$, which becomes $(n/\Delta)^{O(r)}$ when $\Delta$ is sufficiently small. 
This provides strong evidence that we are not able to obtain near-optimal hardness results without the assumption on $\Delta$. References: [CDV22] A. Chen, A. De, and A. Vijayaraghavan. Algorithms for learning a mixture of linear classifiers. In International Conference on Algorithmic Learning Theory, pages 205-226. PMLR, 2022. [DKPZ21] I. Diakonikolas, D. M. Kane, T. Pittas, and N. Zarifis. The optimality of polynomial regression for agnostic learning under Gaussian marginals in the SQ model. In Conference on Learning Theory, COLT 2021, volume 134 of Proceedings of Machine Learning Research, pages 1552-1584. PMLR, 2021. [FGR+17] V. Feldman, E. Grigorescu, L. Reyzin, S. Vempala, and Y. Xiao. Statistical algorithms and a lower bound for detecting planted cliques. J. ACM, 64(2):8:1-8:37, 2017. [FGV17] V. Feldman, C. Guzman, and S. S. Vempala. Statistical query algorithms for mean vector estimation and stochastic convex optimization. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, pages 1265-1277. SIAM, 2017.
Rebuttal 1: Rebuttal: We thank the reviewers for their time and effort in providing feedback. We are encouraged by the positive comments from reviewers (**Jk98**,**vDTe**,**Azis**) for the following: (i) the importance of the problem we study (learning mixtures of linear classifiers) and the tightness of our SQ lower bound (**Jk98**,**vDTe**,**Azis**), (ii) the novelty and potentially broader applicability of our technique (**Jk98**,**vDTe**,**Azis**), and (iii) well-organized writing (**Azis**). The main contribution of our paper is theoretical. Specifically, we establish a near-optimal Statistical Query (SQ) lower bound for learning uniform mixtures of linear classifiers. Our lower bound applies even for the simplest distributional setting where the covariates are drawn from the standard Gaussian. Our SQ lower bound nearly matches prior algorithms for this problem that can be efficiently implemented in the SQ model (line 96). In the process, we give a new efficient construction of spherical designs that is of independent interest. We will address the individual questions and comments by the reviewers separately.
NeurIPS_2023_submissions_huggingface
2,023
Summary: The authors study the problem of learning mixtures of linear classifiers with Gaussian covariates. Their primary result is a near-optimal SQ lower bound which applies even for the uniform mixture case. Moreover, as a purely mathematical result, they construct an efficient spherical design (under a stronger definition of the structure) with sample complexity within a polylogarithmic factor of the optimum. Strengths: The paper is well motivated and introduced succinctly. The strongest portion of the submission is the first two sections, which nicely introduce the problem, its inherent difficulties, and a discussion of the paper's approach. I thought the technical overview was especially helpful to highlight the problem and challenges in proving the major results. Weaknesses: The paper predominantly lacks in presentation of the mathematical results. Namely, the major theoretical results presented in the main text are somewhat obtuse and hard to parse / verify the theoretical claims. This may be due to my unfamiliarity with the surrounding literature; however, the logic of the proofs in the second half of the paper is difficult to follow. While the technical overview helps to introduce the complexities of the analysis, these final sections are entirely non-intuitive. I would suggest the authors reduce the number of theorem / lemma statements in the main text and instead more carefully choose results (in line with the major points of Section 1.2). Expanding on these results with more fleshed-out proofs (or proof sketches) would make the paper considerably more readable within the short page limit of the conference. Additionally, it is somewhat hard to tell where the current result fits in with the prior works on the topic. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What is the relation of the given problem to that of sparse recovery? 
Does the spherical design technique used to prove your main result extend well beyond simple linear classifiers? How can the novel techniques discussed here be applied more broadly outside of the presented problem instance? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: na Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and effort. We start by addressing the concerns within the review in order: 1. Mathematical Contributions and Presentation. Our main technical contribution is a novel construction of a spherical $t$-design, which leads to a nearly-optimal SQ lower bound for learning uniform mixtures of linear classifiers (see lines 159 and 160). To achieve this, we leverage ideas and results (Theorem 4.3) from the pure mathematics literature [BRV13]. Although the original theorem in [BRV13] is sophisticated and perhaps non-intuitive for a non-expert, we have distilled and simplified the statement so that it can resonate with ML researchers with a theoretical background. In order to help the reader follow the proof idea of our spherical $t$-design construction (Theorem 1.5), we provide high-level explanations in the technical overview section (Section 1.2), and also break down our technical results in Section 4 into several short lemmas (see Lemmas 4.4, 4.5 and Theorem 4.6), followed by intuitive prose and proof sketches. We would welcome additional concrete suggestions by the reviewer. 2. Regarding the reviewer’s point: “Additionally, it is somewhat hard to tell where the current result fits in with the prior works on the topic.” In addition to the mathematical topic of spherical designs, the problem of learning mixtures of linear classifiers is important and fundamental from an ML theory viewpoint. In lines 23 and 24, we clearly state the related algorithmic results for this problem. Previous work [CDV22] gave an algorithm with sample and computational complexity $n^{O(\log r/\Delta^2)}$ for the problem, where $\Delta$ is the minimum separation between linear classifiers. In our work, we provide a nearly optimal SQ lower bound of $n^{\mathrm{poly}(1/\Delta)\log r}$. We now proceed by addressing the reviewer’s questions below: 1. What is the relation of the given problem to that of sparse recovery? 
The problem we study (learning mixtures of linear classifiers) and the associated techniques in our work are orthogonal to the problem of sparse recovery. 2. Does the spherical design technique used to prove your main result extend well beyond simple linear classifiers? Yes. Our technique works for any odd function in the space $L^2(\mathbb{R},\mathcal{N})$, not only for linear classifiers (corresponding to the $\mathrm{sign}$ function). 3. How can the novel techniques discussed here be applied more broadly outside of the presented problem instance? We believe that our new efficient construction of spherical $t$-designs itself is a mathematical contribution of independent interest that could be used to establish SQ lower bounds for other related mixture models (e.g., for mixtures of experts). That said, the focus of our work has been the fundamental problem of learning mixtures of linear classifiers. References: [BRV13] A. Bondarenko, D. Radchenko, and M. Viazovska. Optimal asymptotic bounds for spherical designs. Annals of mathematics, pages 443-452, 2013. [CDV22] A. Chen, A. De, and A. Vijayaraghavan. Algorithms for learning a mixture of linear classifiers. In International Conference on Algorithmic Learning Theory, pages 205-226. PMLR, 2022. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response to my questions and comments. I am happy to increase my score to a 5 but highlight to the AC/PC that this work is somewhat outside of my area of expertise so encourage them to consider the other reviews more heavily than my own.
null
null
null
null
null
null
FGPrompt: Fine-grained Goal Prompting for Image-goal Navigation
Accept (poster)
Summary: The paper proposes FGPrompt, which conditions the goal image embedding on the observation such that the agent can obtain goal-relevant visual cues during image-goal navigation. To fuse the input and goal image embeddings, FGPrompt introduces two strategies: mid fusion by FiLM and early fusion by encoding the concatenation of the input and goal images. The proposed method achieves a new state-of-the-art by large margins on the Gibson dataset. Strengths: - Fusing the input and goal images at early and intermediate stages to capture goal-relevant information is well motivated and sounds sensible. - The proposed method achieves substantial improvements over prior art by large margins, even with fewer parameters. - Good illustrations of the proposed method and qualitative examples help better understand the method and its efficacy. Weaknesses: - The novelty of the proposed method is a bit weak (see Q1). - Early fusion trains a ResNet to jointly encode the input and goal images to capture goal-relevant clues. However, it is not clear what the difference is between a simple CNN-GRU-based policy and the proposed method (see Q2). - It is not trivial to leverage prior knowledge (e.g., pretrained large models) (Q3). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Q1: The paper motivates the necessity of fusing the input and goal images but its methodology is directly adopted from FiLM (Mid Fusion). It is unclear which part is novel in the proposed method and what we can learn from the novel part. - Q2: The best-performing model (Early Fusion) uses a ResNet to jointly encode the input and goal images, resulting in a common CNN-LSTM-based architecture. This seems not surprising as we have already been using neural networks to let them learn how to effectively encode input (in this case, the input and the goal images). Jointly encoding the input and goal images is evidenced by the authors' quantitative analyses but I'm not sure if this is novel. 
- Q3: The best-performing model (Early Fusion) is trained in an end-to-end manner. However, this makes it hard for the model to leverage external knowledge, possibly from large models, as they are not usually trained with the concatenation of images. Can the proposed method leverage such knowledge? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. The paper motivates the necessity of fusing the input and goal images but its methodology is directly adopted from FiLM (Mid Fusion). It is unclear which part is novel in the proposed method and what we can learn from the novel part. A1. We are sorry for the confusion. Nevertheless, **we have to clarify that our method is not directly adopted from FiLM**. We here highlight two main differences between our method and FiLM-based methods [A][B][C]: - **Different implementation details.** Existing FiLM-based methods first encode the conditional input (a textual sentence or an image) to a semantic feature and then map this semantic feature into a single affine transformation factor of shape 1x1xC. In the observation encoder, all activation values across the spatial dimensions are affine-transformed using the same factor. In contrast, we calculate pixel-wise affine transformation factors of shape HxWxC from high-resolution feature maps, and thus activations at different spatial positions can be transformed in a fine-grained manner. - **Different targets**. Existing FiLM-based methods focus on extracting semantic information from the conditional input. However, this semantic information is insufficient for the image navigation task, since the agent relies on detailed clues from the goal image (e.g., texture details and semantic categories of numerous objects) to infer the goal position relative to the current observation. To verify the necessity of the pixel-wise affine transformation factors, we introduce a variant that uses the single affine transformation factor inferred from the semantic features of the goal image. In the table below, the FiLM approach incorporating our pixel-wise affine transformation factors showcases notably enhanced performance. 
| Affine Transformation Factor in FiLM | SR | SPL | |----------------------------|----------|----------| | Single [B] | 32.0 | 24.0 | | Pixel-wise (Ours) | **77.3** | **50.4** | [A] FiLM: Visual Reasoning with a General Conditioning Layer. AAAI 2018. \ [B] BC-Z: zero-shot task generalization with robotic imitation learning. CoRL 2021. \ [C] Using both demonstrations and language instructions to efficiently learn robotic tasks. ICLR 2023. > Q2. The best-performing model (Early Fusion) uses Resnet to jointly encode the input and goal images, resulting in a common CNN-LSTM-based architecture. This seems not surprising as we have already been using neural networks to let them learn how to effectively encode input (in this case, the input and the goal images). Jointly encoding the input and goal images is evidenced by the authors' quantitative analyses but I'm not sure if this is novel. A2. **Our method is fundamentally different from common CNN-LSTM-based architectures [ZER, ZSON, OVRL].** These methods employ two separate CNNs to independently encode the goal image and observation image and then feed the concatenation of these encoded features into an LSTM. In contrast, we pursue the integration of information from both the goal image and observation image during the encoding process by concatenating these images at the pixel level and using a single CNN to encode them. Unlike the prior models where CNNs lack access to information from the goal image while encoding observations, and vice versa, **our approach ensures a more cohesive fusion of these sources of information.** This is critical for the image navigation task because the agent relies on detailed clues from the goal image to infer the goal position relative to the current observation. Experimental results in the table below also demonstrate the superiority of our method compared with the common CNN-LSTM-based architecture. 
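The contrast drawn in A1 above — a single affine factor broadcast over all positions versus pixel-wise factors — can be sketched in a few lines of numpy. This is a minimal illustration with made-up shapes, not the authors' implementation: the 1x1 convolution that produces the factors is modeled as a per-pixel linear map.

```python
import numpy as np

def pixelwise_film(obs_feat, goal_feat, w_gamma, w_beta):
    # pixel-wise factors: a per-pixel linear map of the goal feature map
    # yields gamma, beta of shape (H, W, C), so the modulation of the
    # observation can differ at every spatial position
    gamma = goal_feat @ w_gamma
    beta = goal_feat @ w_beta
    return gamma * obs_feat + beta

def single_factor_film(obs_feat, goal_feat, w_gamma, w_beta):
    # single-factor conditioning: pool the goal feature to one (C,) vector,
    # producing one affine factor broadcast over all spatial positions
    pooled = goal_feat.mean(axis=(0, 1))
    gamma = pooled @ w_gamma
    beta = pooled @ w_beta
    return gamma * obs_feat + beta

rng = np.random.default_rng(0)
H, W, C = 8, 8, 16                               # made-up feature-map sizes
obs = rng.standard_normal((H, W, C))
goal = rng.standard_normal((H, W, C))
w_gamma = rng.standard_normal((C, C)) / np.sqrt(C)
w_beta = rng.standard_normal((C, C)) / np.sqrt(C)

out_pixel = pixelwise_film(obs, goal, w_gamma, w_beta)
out_single = single_factor_film(obs, goal, w_gamma, w_beta)
```
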
Besides, our contributions are also recognized by reviewer 9djz as "*the main takeaway from the paper simple but powerful. **The fact that channel concatenation for current and goal images works so well on ImageNav is something that should be known more generally.***". | Methods | Joint Encoder | SR | SPL | |---------|---------------|----------|----------| | ZER | No | 29.2 | 21.6 | | ZSON | No | 36.9 | 28.0 | | OVRL | No | 54.2 | 27.0 | | **Ours** | **Yes** | **92.3** | **68.5** | > Q3. The best-performing model (Early Fusion) is trained in an end-to-end manner. However, this makes it hard for the model to leverage external knowledge, possibly from large models, as they are not usually trained with the concatenation of images. Can the proposed method leverage such knowledge? A3. **Yes, our proposed method is compatible with external knowledge in pre-trained models.** To verify this, we initialize the visual encoder using pre-trained large models (i.e., CLIP-RN50) except for the first convolution layer. For the first layer, we copy the original weight and duplicate it along the input channel dimension. Then we finetune the model in an end-to-end manner for 50M steps. Experimental results show the advantage of utilizing external knowledge. We believe that the pre-trained model helps the model to learn faster and potentially provides useful experiences for finding objects in images. | Method | External knowledge | SR | SPL | |-----------|---------------------|------|------| | Ours (EF) | without knowledge | 41.1 | 26.8 | | Ours (EF) | with knowledge (CLIP-RN50) | **64.3** | **29.4** | --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: I thank the authors for addressing my concerns with a comprehensive explanation and additional experiments. The thorough response has clarified the issues I raised. I am happy to raise my rating accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your insightful comments and constructive feedback. 
They are invaluable to the improvement of my work.
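The first-convolution adaptation described in A3 of the rebuttal above (copying the pre-trained weight and duplicating it along the input-channel dimension so the layer accepts the 6-channel concatenation of observation and goal images) can be sketched as follows. This is an illustrative sketch: halving the duplicated kernel to keep activation magnitudes roughly unchanged is our assumption, not stated in the rebuttal, and the weight shapes are made-up stand-ins.

```python
import numpy as np

def inflate_first_conv(w3):
    # w3: pre-trained first-conv weight of shape (out_c, 3, k, k).
    # Duplicate along the input-channel axis (axis=1) to get a 6-channel
    # kernel; the division by 2 is our assumption, so that a duplicated
    # input produces activations of roughly the original magnitude.
    return np.concatenate([w3, w3], axis=1) / 2.0

rng = np.random.default_rng(0)
w3 = rng.standard_normal((64, 3, 7, 7))   # e.g. a ResNet conv1 kernel
w6 = inflate_first_conv(w3)

# sanity check: when observation == goal, the inflated layer reproduces
# the original response on a single image patch
patch = rng.standard_normal((3, 7, 7))
orig = np.tensordot(w3, patch, axes=3)
dup = np.tensordot(w6, np.concatenate([patch, patch], axis=0), axes=3)
```
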
Summary: The paper proposes different early to middle fusion mechanisms to improve performance on ImageNav tasks thanks to the availability of higher-resolution information in early to intermediate visual encoder layers. The best proposed method (also the simplest in terms of implementation) just concatenates the target and current images and jointly processes them via early fusion, correspondingly using a stem with 6 input channels. The experiments, using three random seeds, are conclusive about the performance of all proposed methods. Strengths: - Simple design (especially the early fusion proposal) enabling high-resolution spatial reasoning for ImageNav. - Excellent performance in comparison to SOTA methods. - In-depth analysis for the proposed methods. Weaknesses: - I wish the method had also been applied to other task types, e.g. visual rearrangement [1]. Showing good performance in a single task type is a bit limited, even if two variants (panoramic versus limited FOV) are considered. - I understand that Mid Fusion can be seen as the most interesting in terms of ablations and visualizations, but I think it feels somewhat strange that a large portion of the discussion in the paper ends up being about the second-best proposal. [1] Weihs et al., Visual room rearrangement, CVPR 2021. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - The paper states that an image used as a goal is a clearer description than language and shows a wide range of application prospects. Arguably, most of the interaction with robots is likely going to happen through natural language, which is our natural vehicle to share information, so I'd consider adding more context to the introduction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: - They're adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. I wish the method had also been applied to other task types, e.g. visual rearrangement. Showing good performance in a single task type is a bit limited, even if two variants (panoramic versus limited FOV) are considered. A1. Thanks for your valuable suggestion. We conduct experiments on the 1-Phase track of the visual rearrangement task and **find our method useful in this task**. We start from a ResNet18+IL baseline that separately encodes the unshuffled image and the agent's current observation (the walkthrough image) without a fusion mechanism and learns from expert actions. Then we introduce our proposed FGPrompt into the baseline model by fusing the observation with the unshuffled image at an early stage, resulting in one jointly modeled ResNet encoder. **With our FGPrompt, the agent performs much better than the baseline agent**. We believe it helps the agent to locate corresponding or inconsistent objects in the environment. We report the testing metrics on the visual rearrangement 2023 dataset. | Method | Success↑ | FixedStrict↑ | E↓ | |---------|----------|--------------|-----| | ResNet18+IL Baseline [A] | 1.89 | 4.92 | 1.32 | | Ours | **7.68** | **20.1** | **0.88** | Besides, we also found that our method is useful on the instance imagenav task, according to the experimental results in R2Q1. All these results indicate the strong ability of our proposed method to generalize to a variety of embodied tasks. [A] Visual room rearrangement, CVPR 2021. > Q2. I understand that Mid Fusion can be seen as the most interesting in terms of ablations and visualizations, but I think it feels somewhat strange that a large portion of the discussion in the paper ends up being about the second best proposal. A2. Sorry for the confusion. Actually, a trade-off exists between these two methods. - The early-fusion mechanism is an interesting finding in that it performs competitively and has a simpler architecture. 
However, though it performs well in the default setting, early fusion does not generalize well to other scenarios where the goal camera parameters do not match the agent's. - Our delicately designed mid-fusion mechanism performs better in this case, as evidenced by the attached experimental results from the instance imagenav task in R2Q1. These results indicate that a carefully designed mid-fusion scheme with more inductive bias is necessary. We will add more discussion and clarify these findings in the revision. > Q3. The paper states an image used as a goal is a clearer description than language and shows a wide range of application prospects. Arguably, most of the interaction with robots is likely going to happen through natural language, which is our natural vehicle to share information, so I'd consider adding more context to the introduction. A3. Thanks for your comment. We will modify the statement to make it more precise. --- Rebuttal Comment 1.1: Title: Rebuttal acknowledgment Comment: I thank the authors for the time to run additional experiments (also the ones proposed by other reviewers), which in my opinion can raise the value of the already solid paper. Taking into account the solidity of the results and the broader scope of the paper given by the different task types, I am happy to raise my rating. --- Reply to Comment 1.1.1: Comment: Thanks again for your valuable comments!
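The early-fusion scheme discussed in this rebuttal is described as channel concatenation of the goal and observation images at the very beginning of the convolution network. The following is a minimal NumPy sketch of that input construction, not the authors' code; the only points illustrated are the 6-channel stacking and the shape the first convolution layer would need to accept.

```python
import numpy as np

def early_fusion_input(obs_rgb: np.ndarray, goal_rgb: np.ndarray) -> np.ndarray:
    """Stack observation and goal images along the channel axis.

    Both inputs are HxWx3 RGB images; the result is an HxWx6 tensor,
    so the CNN's first layer takes 6 input channels instead of 3.
    """
    assert obs_rgb.shape == goal_rgb.shape
    return np.concatenate([obs_rgb, goal_rgb], axis=-1)

obs = np.zeros((128, 128, 3), dtype=np.float32)
goal = np.ones((128, 128, 3), dtype=np.float32)
fused = early_fusion_input(obs, goal)
print(fused.shape)  # (128, 128, 6)
```

Because the two images share spatial coordinates after stacking, the very first convolution can already compare goal and observation pixels within the same receptive field, which is consistent with the "fine-grained information exchange" the rebuttal emphasizes.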
Summary: This paper introduces FGPrompt (Fine-grained Goal Prompting) for the image-goal navigation task (ImageNav). Existing methods for ImageNav suffer from limitations in capturing detailed goal information and focusing on goal-relevant regions in observation images. FGPrompt tries out three different methods for goal prompting to overcome these limitations, including: 1) keypoint matching, 2) FiLM layers and 3) channel concatenation, with 2) and 3) emerging as strong techniques for goal prompting. Experimental results on benchmark datasets demonstrate significant performance improvements compared to existing methods while using much smaller model sizes. Strengths: 1) I find the main takeaway from the paper simple but powerful. The fact that channel concatenation for current and goal images works so well on ImageNav is something that should be known more generally. 2) I found the paper easy to read and follow. 3) The method transfers really well to other scene datasets compared to prior approaches. Weaknesses: In my opinion, the paper creates a more complicated story around the various fusion techniques than required. The mid-fusion technique is more complicated, requires additional computation, and still performs worse than early fusion. I have two possible suggestions for the authors to make the paper more coherent: 1) Make the mid-level fusion technique a baseline and focus on understanding the early fusion technique further 2) Or, show a scenario/task where the mid-fusion technique might be more useful. I believe that the mid-fusion technique might generalize more on the instance imagenav [1] task, where the goal image is assumed to be coming from a camera with different parameters from the camera on the robot. [1] Krantz, J., Gervet, T., Yadav, K., Wang, A., Paxton, C., Mottaghi, R., ... & Chaplot, D. S. (2023). Navigating to Objects Specified by Images. arXiv preprint arXiv:2304.01192. 
Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: While this is not required to be part of the rebuttal, it would be interesting to see how well the trained policy transfers to the real world given its strong transfer on the other scene datasets. Suggestion: 1) In line 61 the authors claim that they design the early fusion mechanism by concatenating the goal and observation images. I would suggest that the claim be toned down to say that they try various ways of goal and observation concatenation. 2) The claim that OVRL-v2 uses pose information in ImageNav is incorrect. The OVRL-v2 paper states that the pose information is only used for the ObjectNav task as mentioned on Page 4. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 2 fair Contribution: 2 fair Limitations: Since the current FGPrompt technique has the assumption that the current image and goal image will be from cameras with the same parameters (resolution, FOV, etc), I would like to hear from the authors about the limitations of their work where the camera parameters don't match[1]. This is especially important given the community is moving towards these harder tasks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. Mid fusion technique is more complicated, requires additional computation, and still performs worse than early fusion. I believe that the mid-fusion technique might generalize more on the instance imagenav task where the goal image is assumed to be coming from a camera with different parameters to the camera on the robot. A1. Sorry for the confusion. **We do find that the mid-fusion technique generalizes better on harder tasks like instance imagenav**, which is difficult due to inconsistent camera parameters. We evaluate three models, namely the baseline model (separately encoding the goal image and observation image), our early-fusion agent, and our mid-fusion agent, on this task. All these models are trained on the Gibson ImageNav dataset and directly transferred to the HM3D instance imagenav task. In the table below, the baseline model performs poorly in this task with a very low success rate (less than 1%). The agents with our proposed fusion mechanisms both perform better. **We also observed that the mid-fusion variant actually outperforms early fusion in this scenario**, as its delicately designed activation deformation module yields explicit and adaptive guidance from the goal image to the observation encoder. | Method | Success | SPL | |---------------|---------|-----| | Baseline (no fusion) | 0.6 | 0.2 | | Ours (EF) | 3.4 | 0.8 | | Ours (MF) | **9.9** | **2.8** | We also augment the camera height, pitch and HFOV of the goal image in Gibson ImageNav eval episodes to evaluate whether these models can handle the situation where the goal image and observation image are captured by cameras with different parameters. Specifically, we follow the distribution of these parameters in the instance imagenav paper [A], sampling goal camera height $h\sim\mathcal{U}(0.8m,1.5m)$, pitch delta from $\mathcal{U}(-5^{\circ},5^{\circ})$, and HFOV from $\mathcal{U}(60^{\circ},120^{\circ})$. 
In the table below, **we also find that the mid-fusion mechanism performs the best in this scenario**. All these results reveal the effectiveness and robustness of mid-fusion in harder tasks. | Method | Success | SPL | |---------------|----------|----------| | Baseline (no fusion) | 12.1 | 10.5 | | Ours (EF) | 64.6 | 38.5 | | Ours (MF) | **78.1** | **52.7** | [A] Instance-Specific Image Goal Navigation: Training Embodied Agents to Find Object Instances. arXiv 2022. > Q2. Make the mid-level fusion technique a baseline and focus on understanding the early fusion technique further. A2. Thanks for the valuable suggestion. For the mid-fusion technique, as analyzed in Q1, it performs the best on the more complicated and practical instance navigation task. We will show its power on the instance imagenav task in the revision. As for the understanding of the early-fusion technique, we have conducted an analysis of it as an extension of the mid-fusion scheme in Section F of the appendix, by means of an EigenCAM visualization. We will update the manuscript to make it clearer. > Q3. While this is not required to be part of the rebuttal, it would be interesting to see how well the trained policy transfers to the real world given its strong transfer on the other scene datasets. A3. Thanks for your valuable comment. It is an interesting idea to apply our method to a real robot. Since setting up a real robot in a short period is difficult, we leave it as future work. > Q4. The claim that OVRL-v2 uses pose information in ImageNav is incorrect. The OVRL-v2 paper states that the pose information is only used for the ObjectNav task as mentioned on Page 4. A4. Thanks for your correction; we will modify the statement in the manuscript. > Q5. 
Since the current FGPrompt technique has the assumption that the current image and goal image will be from cameras with the same parameters (resolution, FOV, etc), I would like to hear from the authors about the limitations of their work where the camera parameters don't match. A5. We agree with the reviewer that the problem definition of the ImageNav task assumes that the goal images are taken under the same camera setting as the agent, and we actually train the agent on these images. As discussed in Q2, we have two main findings on our method: - As in the experiment in Q2, our FGPrompt outperforms baseline methods without a fusion mechanism by a large margin. It reveals that our FGPrompt still shows potential in solving this harder task compared with baseline methods. - The performance of our method on the instance imagenav task is relatively low compared to the ImageNav task. We speculate that the extremely different perspective of goal images that haven't been seen during training and a longer episode length undermine the performance of our method. This result hints that our method could be further improved when combined with memory-based methods [A] and [B] to achieve more efficient large-scale exploration. We leave this as our future work. [A] Visual graph memory with unsupervised representation for visual navigation. ICCV 2021. \ [B] Topological semantic graph memory for image-goal navigation. CoRL 2023. --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: I thank the authors for the effort they put into writing the rebuttal. I am happy to see how the paper's story has evolved to be more consistent with the rebuttal. I am updating my score and I expect the authors will follow through to add a section on the strengths of the mid-fusion technique in the final manuscript.
Summary: The authors offer a solution to the image-goal navigation task. The solution focuses on granular feature extraction from the goal image early on in the model pipeline, and usage of the goal image to inform which features in the observation the agent should attend to. The paper offers multiple mechanisms to do so: Skip fusion, Mid fusion and Early fusion. Strengths: * Strong results that seem to outdo the SOTA significantly * The method illustration diagram + EigenCAM visuals were well done * Plenty of ablations were explored. Weaknesses: * This isn't necessarily a critique of the contribution of the paper as much as it is of the narrative. The best performing method in the paper (Early Fusion), while having impressive performance, doesn't demonstrate a significantly novel method. (I believe it is channel concatenation of the goal and observation before passing through a fully connected MLP.) As mentioned in the references, image prompting has been used across many other applications as well. It may be worth writing this paper as an establishment of a new baseline of how Image Goal navigation is done, as opposed to a novel contribution. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * It seems like in Fig 1 graph b, works that solve the ObjectNav (as opposed to ImageNav) task (ZSON) are also plotted. Is there a reason for that? * Maybe it's worth specifying that FC -> fully connected MLP for eq (1) * Was there a significant difference observed in training time/resources needed compared to the baseline for each of the methods? * It seems to me like you are attempting to implement a form of attention. Have you thought about using a self-attention module but with the observation as the key and goal as the query as a form of fusion? * It may be worth running fine-tuning experiments on OOD Datasets * Out of curiosity, have you tried evaluating using an agent with a different height of the sensor/camera? 
* I also wonder how big of a role environmental context plays, i.e., what would happen if you trained with the goal images where the background was masked out? This is not a priority, just a curiosity. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. The best performing method in the paper (Early Fusion), while having impressive performance, doesn't demonstrate a significantly novel method. A1. Thanks for your comments. We would still like to point out the novelty and contribution of our early-fusion method. We empower the ImageNav agent with the **fine-grained information exchange ability** through a simple yet powerful channel concatenation technique at the **very beginning** of the convolution network. The contributions are well recognized by reviewer 9djz as "the main takeaway from the paper simple but powerful. The fact that channel concatenation for current and goal images works so well on ImageNav is something that should be known more generally.". > Q2. As mentioned in the references, image prompting has been used across many other applications as well. It may be worth writing this paper as an establishment of a new baseline of how Image Goal navigation is done, as opposed to a novel contribution. A2. Existing image prompting methods [A][B] focus on extracting semantic information from conditional input. However, it is insufficient for the image navigation task, since the agent relies on detailed clues from the goal image (e.g., texture details and semantic categories of numerous objects) to infer the goal position. In contrast, **our method focuses on fine-grained prompting.** We keep the spatial structure to preserve these fine-grained clues in the feature maps during the fusion schemes. To verify the necessity of fine-grained prompting, we introduce a variant that directly prompts the observation encoder using the semantic features of the goal image. In the table below, both our mid-fusion and early-fusion techniques with fine-grained prompting showcase notably enhanced performance. 
|Setting|SR|SPL| |-|-|-| | Semantic Prompting [A][B] | 32.0| 24.0| | Our Fine-grained Prompting (MF) | 77.3| 50.4| | Our Fine-grained Prompting (EF) | **78.9** | **54.7** | [A] BC-Z: zero-shot task generalization with robotic imitation learning. CoRL 2021. \ [B] Using both demonstrations and language instructions to efficiently learn robotic tasks. ICLR 2023. > Q3. It seems like in Fig 1 graph b, works that solve the ObjectNav (as opposed to ImageNav) task (ZSON) are also plotted. A3. Sorry for the confusion. ImageNav results have also been reported in the ZSON paper. To make a wide comparison among all ImageNav methods, we include it in graph b. > Q4. Maybe it's worth specifying that FC -> fully connected MLP for eq (1). A4. We have modified the equation in the revised version. > Q5. Was there a significant difference observed in training time/resources needed compared to the baseline for each of the methods? A5. Yes. Our early-fusion method significantly reduces the training cost in terms of GPU hours. In comparison, the mid-fusion method slightly increases training cost as it introduces an additional FiLM layer with a convolution operator to obtain affine factors. We provide detailed numbers for training a ResNet-50 agent on 1x3090 GPU. | Methods| Frames per second (FPS)↑ | GPU hours↓ | |-|-|-| | Baseline | 67 | 2070| | Ours (MF) | 65 | 2135| | Ours (EF) | **126** | **1101**| > Q6. Have you thought about using a self-attention module? A6. We conduct an experiment that replaces the FiLM module with a self-attention module. Specifically, we project the flattened feature map from the first layer of the goal encoder into the query and the corresponding sequence from the observation encoder into the key and value. Then, we utilize the self-attention operation to merge the goal and observation features. 
Experiment results are shown below: |Methods|SR|SPL| |-|-|-| |Self-attention|13.9|12.2| |Ours| **77.3** | **50.4** | The results indicate the efficiency of the fine-grained conditioned reasoning through the FiLM affine transformation compared to an attention mechanism that is hard to learn. > Q7. It may be worth running fine-tuning experiments on OOD Datasets A7. We fine-tune our FGPrompt agent on HM3D, which contains different scene structures and contents, for 100M steps. **As shown in the table below, fine-tuning on HM3D further improves the SR from 76.1% to 81.9%.** Besides, our FGPrompt agent significantly outperforms the baseline by a large margin, demonstrating the effectiveness of our method. | Methods | Pre-trained | Fine-tuned | SR | SPL | |-|-|-|-|-| | Baseline | Gibson| -| 9.6 | 6.3 | | Baseline | Gibson| HM3D| 20.2 | 17.7 | | Ours| Gibson| -| 76.1 | 49.6 | | Ours| Gibson| HM3D| **81.9** | **54.5** | > Q8. Have you tried evaluating using an agent with a different height of the sensor/camera? A8. Yes. We report the evaluation results under different camera heights in the following table. We found that even when changing the height of the agent's camera during testing, **our method still consistently outperforms the baseline.** | Methods | Training agent height | Eval agent height | SR | SPL | |-|-|-|-|-| | Baseline | 1.25 | 1.25 | 29.2 | 21.6 | | Baseline | 1.25 | 1.5| 12.8 | 11.3 | | Ours | 1.25 | 1.25| 90.7 | 62.1 | | Ours | 1.25 | 1.5| 76.0 | 46.7 | > Q9. I also wonder how big of a role environmental context plays. A9. To investigate the importance of environmental context, we train our FGPrompt agent using background-removed goal images. We set the background pixels (e.g., uncountable objects such as walls and floors) to zero according to the ground truth segmentation map. We leverage the semantic annotations in the HM3D v2 dataset to obtain segmentation maps. 
Numbers are reported as below: |Setting|SR|SPL| |-|-|-| |Ours w/o background|64.4|45.2| |Ours w/ background|**75.9**|**48.2**| From the experimental results, our method suffers a slight degradation when the environmental context is removed from the goal image. These results indicate that the environmental context in the image background provides useful but limited clues. We believe that objects and their arrangement in each room play a critical role in our FGPrompt. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for the detailed response! No further follow-ups from me. --- Reply to Comment 1.1.1: Comment: Thanks for your valuable comments!
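The mid-fusion mechanism discussed in this rebuttal uses a FiLM layer: per-channel affine factors predicted from goal features modulate the observation feature maps. The sketch below is a simplified NumPy illustration, not the authors' implementation; it pools the goal feature into a vector, whereas the paper's "fine-grained" variant keeps spatial structure, and the weight names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def film_fuse(obs_feat: np.ndarray, goal_feat: np.ndarray,
              w_gamma: np.ndarray, w_beta: np.ndarray) -> np.ndarray:
    """FiLM-style fusion: scale and shift observation feature maps
    with per-channel affine factors predicted from a goal feature.

    obs_feat:  (C, H, W) observation feature maps
    goal_feat: (D,)      goal feature vector
    w_gamma, w_beta: (C, D) linear maps producing the affine factors
    """
    gamma = w_gamma @ goal_feat  # (C,) per-channel scale
    beta = w_beta @ goal_feat    # (C,) per-channel shift
    return gamma[:, None, None] * obs_feat + beta[:, None, None]

C, D, H, W = 8, 16, 4, 4
obs_feat = rng.standard_normal((C, H, W))
goal_feat = rng.standard_normal(D)
fused = film_fuse(obs_feat, goal_feat,
                  rng.standard_normal((C, D)), rng.standard_normal((C, D)))
print(fused.shape)  # (8, 4, 4)
```

Because the affine factors are conditioned on the goal, the goal image explicitly and adaptively guides which observation channels are amplified or suppressed, which matches the "explicit and adaptive guidance" argued for the mid-fusion variant above.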
Rebuttal 1: Rebuttal: We sincerely appreciate all reviewers’ time and efforts in reviewing our paper and for the constructive feedback. In addition to the response to specific reviewers, here we would like to 1) thank reviewers for their acknowledgment of our work, 2) summarize our contributions, and 3) highlight the new results added during the rebuttal: 1). We are glad that the reviewers appreciate and recognize our contributions. 1. The proposed method achieved substantial improvements. [o71w, 9djz, mycq, CU6j] 2. The main takeaway from our paper is powerful. [9djz, mycq] 3. The proposed method transfers really well. [9djz] 4. The proposed method is well-motivated. [CU6j] 5. The ablation studies are in-depth with good visualization. [o71w, mycq, CU6j] 6. This paper is well-written and easy to follow. [9djz] 2). We summarize our contributions as follows. - **A novel image goal prompting architecture to solve the ImageNav task.** We propose a fine-grained image goal prompting (FGPrompt) architecture to explicitly exchange fine-grained information between the goal image and observation image, reaching a new SOTA on the ImageNav task and also showing great potential in some other embodied tasks, including the instance image navigation and visual rearrangement tasks. - **A simple early-fusion scheme to boost the ImageNav performance with very few parameters.** We empower the ImageNav agent with the fine-grained information exchange ability through a simple yet powerful channel concatenation technique. This scheme shows an absolute advantage on the ImageNav task even compared to some complex memory graph-based methods. - **A generic mid-fusion scheme to address the mismatch between goal camera and observation camera.** We delicately design a mid-fusion scheme through a novel fine-grained FiLM mapping module to perform more robust information exchange. 
This scheme shows superior performance in more practical scenarios where the goal image possesses different camera parameters from the observation. - **An in-depth analysis of the image prompting schemes.** We illustrate the activation maps using EigenCAM and reveal how the mid-fusion and early-fusion schemes work, bringing new insights for the embodied AI field. 3). In this rebuttal, we have added more supporting results following the reviewers’ suggestions. 1. Comparison results with an attention-based mid-fusion model. [o71w] 2. Finetuning results on the OOD dataset. [o71w] 3. Training efficiency by means of GPU hours compared with baseline. [o71w] 4. Evaluation under different camera settings. [o71w] 5. Ablation results of background context. [o71w] 6. Evaluation on instance image navigation task. [9djz] 7. Evaluation on visual rearrangement task. [mycq] 8. Performance of our method with pre-trained CLIP initialization. [CU6j]
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
TFLEX: Temporal Feature-Logic Embedding Framework for Complex Reasoning over Temporal Knowledge Graph
Accept (poster)
Summary: This paper proposes TFLEX, a framework that reasons over temporal knowledge graphs (TKG). It takes a complex query about either an entity or a timestamp and a set of constraints as input. The query is then converted into a directed acyclic graph (DAG), which is a computation graph that projects the query into an embedding space. The paper also defines operations that happen in the graph, including projection, intersection, complement, and other set/temporal logic operations (e.g. After, Before, and Between). When the query embedding is produced, its distances to all candidate entity/timestamp embeddings are computed. The loss function is designed to minimise the distance between the embedding of the query and that of the correct entity/timestamp and maximise the distance between the query and the incorrect candidates. The framework was evaluated on three standard datasets with modifications. The assessment process involves predefined complex queries on top of each dataset, including 27 types for training and 40 types for testing. Several variants of the proposed framework were compared, as well as a few previous works on query embedding for static knowledge graphs. Strengths: - This paper discusses complex query embedding in temporal knowledge graphs, which is an important yet under-investigated topic. - The paper provides a definition of the problem and some attempts to solve the problem. Weaknesses: Related Works - From my understanding, TKGC is an important subset of the task (Temporal Complex Query) defined in this paper. There are many TKGC works missing in the related work part. For example, [1] and [2]. - The discussion on complex query embedding works is insufficient. The paper should provide a more substantial explanation as to why existing works on static KG cannot be easily adapted for temporal KGs. "They cannot utilize temporal information" is a result, not a reason. 
Method - In line 146 the paper introduces "the relational embedding" without saying how it is defined. Is it a trainable parameter? How is it initialised? - The definition of operators appears arbitrary. In equation (3), $V_q + r + V_t$ adds the embeddings for entities, relations, and timestamps together. What is the semantics of addition here? - Again, in equation (7), something like $q_f^t+\frac{1+q_l^t}{2}$ is in essence adding the entity feature with its truth value. What is the semantics of addition between these two values? - Later in the definition of the distance function (line 199), $q_l^e + ||v_f^e-q_f^e||_1$ is adding a truth value with an L1-norm. What is the semantics of addition here? Experiments: - The method is evaluated on 40 types of generated queries from three datasets for TKGC. The paper did not discuss the reason for choosing these specific 40 types or how complete these types are. Nor has it discussed whether TFLEX can generalise to queries outside of those 40 types. - There is no comparison with TKGC works. Since the TKGC task aims at completing a simpler query, it is naturally a subtask of what is proposed in this work. In other words, a (s, r, ?, t+1) completion task can be converted to a query and TFLEX should be able to handle it. I would expect at least a separate table comparing TFLEX and state-of-the-art TKGC on a completion task. Otherwise, it is difficult to assess how well TFLEX captures temporal information. Miscellaneous: - There are technical details being moved to the appendix due to the page limit. Please note that the page limit is nine content pages, not eight. The current main text is not self-contained enough to provide readers a high-level idea about the benefits of the framework. For example, as I pointed out above, many definitions and designs about the operators lack intuition or rationale, and look like something randomly popping out. 
- The source code provided in the supplementary material and the link in the paper reveal a non-anonymous email "l****9@sysu.edu.cn". While I believe this is an honest mistake, I urge the author to remove this information. --- Reference [1] Chronor: Rotation based temporal knowledge graph embedding, AAAI 2021. [2] Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning, SIGIR 2021. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see my points above. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The paper has a good discussion about its limitations in page 11. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive and detailed feedback. Below please find responses to individual comments/questions: ## Related Works > TKGC works in the related work part. See the global rebuttal. > The discussion on complex query embedding works is insufficient. To begin with, we have many discussions about the static-to-dynamic difficulty. Apart from the related work, we first mention the difficulty in line 29-30 (Introduction Section), then discuss static QEs’ poor results and analyze the reasons in line 230-232 (Experiment Section), and even further explore the right way to promote the static query embeddings to temporal ones in line 670-707 (a full page for exploration in Appendix C). We would like to add the sentences below in the revision to make it more comprehensive. ``` Firstly, static query embeddings (QEs) are built over (s, r, o) triples instead of (s, r, o, t) quartets, thus ignoring the timestamps needed for temporal complex reasoning. The second reason is the ordered nature of timestamps, in contrast to entities, which are unordered; as a result, static QEs are unable to handle Before and After temporal logic. Therefore, it is challenging for static QEs to utilize temporal information in TKGs. ``` ## Method > the relational embedding. Yes, it’s trainable and randomly initialised. Besides, the entity embeddings and timestamp embeddings are also trainable and randomly initialised. > The semantics of addition in equation 3. Equation 3 follows the assumption of translation-based methods: $q_o \approx q_s + r + t$. As a comparison, the static KGE TransE has $o\approx s+r$, and the temporal KGE TTransE has $o \approx s + r + t$. The addition represents a semantic translation starting from the source entity set, following the relation and timestamp conditioning, ending at the target entity set. > What is the semantics of addition between these two values in equation 7? 
The entity feature and its truth value together form a fuzzy interval in the semantic space. Those entities whose entity features are covered by the fuzzy interval are viewed as the answers to the query. > What is the semantics of addition in the distance function (line 199)? The distance function aims to optimize two losses. One is to push the answers to the neighborhood of the query in the embedding space. It corresponds to the L1 distance term between the answer and the query. The other is to reduce the uncertainty of the query (the probability interpretation of the logic part), to make the answers more accurate. We use element-wise addition to combine the two losses. ## Experiments: > The paper did not discuss the reason for choosing these specific 40 types or how complete these types are. Nor has it discussed whether TFLEX can generalise to queries outside of those 40 types. In Appendix B.2, we discuss in detail why we choose these types in the dataset generation section. We mention that it is to keep an experimental setting similar to previous static query embeddings. We also present the comparison of query types between temporal and static ones in Table 5. About generalization, please be aware that TFLEX is **trained on 27 types and evaluated on 40 types**. The extra 13 types contain various unseen query structures that do not exist in the training set. The experimental result **Out-of-data reasoning** (line257-258) shows the generalization ability of TFLEX. Therefore, there is no need to introduce more query types. > There is no comparison with TKGC works. I would expect at least a separate table comparing TFLEX and state-of-the-art TKGC on a completion task. We do not agree that we ignore all TKGC baselines. Actually, we compare to two translation-based TKGC methods (TTransE and HyTE), present the results in Figure 4, and discuss them in the paragraph **Necessity of training on temporal complex queries** (line264-269). 
We choose TTransE and HyTE because our projection operator is also translation-based ($\mathcal{P}_e(\mathbf{V}_q,\mathbf{r},\mathbf{V}_t) \propto \mathbf{V}_q+\mathbf{r}+\mathbf{V}_t$). Such a comparison is fair enough to determine how well complex queries improve the performance of translation-based methods. Introducing other TKGC baselines would lead to an unfair comparison and make it impossible to investigate whether training on temporal complex queries is necessary. Anyway, we present Table 1 in the PDF in the global rebuttal section, which is more complete, comparing TFLEX and SOTA TKGC methods. The table shows that TFLEX is competitive with translation-based methods, but it doesn't outperform SOTA TKGC methods like ChronoR and TuckERT. However, the result doesn't affect the novelty and contribution of this paper. Please note that the projection operator of TFLEX is as simple as TTransE, not further optimized for TKGC tasks only. Upgrading the projection operator to outperform SOTA TKGC methods remains future work. Besides, the previous work GQE [1] did not compare to any KGC methods when it first proposed the complex logical reasoning task against KGC. And none of the following works, including Query2box [2], BetaE [3], and ConE [4], compares to KGC methods. - [1] Embedding Logical Queries on Knowledge Graphs. - [2] Query2box: Reasoning Over Knowledge Graphs In Vector Space Using Box Embeddings. - [3] Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs. - [4] ConE: Cone Embeddings for Multi-Hop Reasoning over Knowledge Graphs. ## Miscellaneous: > The presentation of the content. Thanks for pointing this out. We will move more content to the main body in the revision. > Non-anonymous email in the supplementary material. Sorry for the mistake. The email belongs to a third-party package author. We will remove it. --- Rebuttal 2: Comment: We sincerely appreciate your valuable and constructive comments. 
We eagerly anticipate any additional feedback you may have. If you find our response satisfactory, we hope you will consider raising your rating. Should you have any lingering questions about our paper, we are more than willing to address them and improve the quality of our work.

---

Rebuttal Comment 2.1:

Title: Thank you for the clarification

Comment: I have read the feedback from the author(s) and the other reviews. The comments have addressed most of my concerns. Based on the experimental result in Table 1, the clarification of the semantics of the equations, and the promised revision to improve readability, I would like to raise my rating of the paper. I would like to reiterate to the author(s) that the paper __needs substantial revision to reach publication-level readability__. My concerns regarding the semantics of the equations largely stem from the paper's lack of background and rationale. The paper would benefit greatly if the knowledge shared in the feedback were merged into either the main text or the appendix.

---

Reply to Comment 2.1.1:

Comment: Thank you for your insightful feedback on our paper. We appreciate your dedication to ensuring the quality and clarity of our work. We understand your concerns about the paper's overall readability. Your suggestions to incorporate additional background and rationale into either the main text or the appendix are invaluable, and we will certainly take them into account to provide a more comprehensive understanding of the context. We will also include more about the semantics of addition, the motivation of the operator design, and the knowledge behind the equations in the main body of the paper. We hope these revisions will make our paper publication-ready. Once again, we believe that your insightful suggestions will greatly contribute to refining our paper. We kindly request that you consider raising your rating to acceptance in light of these improvements.
A higher rating would be an encouragement and affirmation of our work in this field. Your guidance has been instrumental in shaping our revisions, and we are grateful for your ongoing support. Thank you once again for your time and consideration.
Summary: The authors introduce an embedding-based method for answering complex, i.e., multi-hop, queries on temporal knowledge graphs. The Temporal Feature-Logic Embedding framework (TFLEX) uses fuzzy logic to model first-order logic operations on the entity and timestamp sets. The queries, with answers being either entities or timestamps, are embedded via four components, including feature and logic vectors for both entities and timestamps. The correct answer to the query is determined as the entity or timestamp whose embedding is closest to the query embedding according to a predefined distance function. Based on existing benchmark datasets, three new datasets were created that include a variety of complex queries. The proposed method TFLEX is compared against state-of-the-art query embedding methods, where TFLEX outperforms all baselines. Ablation studies show the effectiveness of the particular design of the method.

Strengths:
- The paper seems to be the first one to address the multi-hop logical reasoning problem on temporal knowledge graphs, where queries consist of disjunctions of atomic formulas. The authors introduce this new task, which can find application in many domains.
- The proposed method supports FOL operations as well as temporal operations and is able to perform multi-hop reasoning.
- Three new datasets are generated, including 40 kinds of complex queries, which can be used for further benchmark experiments.
- The experiments and ablation studies are extensive and confirm the effectiveness of the proposed method.
- The source code and datasets are available online, which supports reproducibility and further evaluation.

Weaknesses: The main weakness concerns the clarity and preciseness of the technical content, which makes it difficult to follow and understand the methodology, thereby making it challenging to judge the soundness.
Here are some examples:
- Figure 1: The query example "During François Hollande was the president of France, which countries did Xi Jinping visit but Obama did not visit?" is given. The logical formulation states that the answer is an entity (country) so that there exists a timestamp T_1 when Xi visited this country and there exists a timestamp T_2 when Obama did not visit this country. This does not seem to reflect the query correctly. The answer should be a country so that there exists a timestamp T_1 when Xi visited this country and for all T_2, Obama did not visit this country. Also, the picture might be easier to understand if the entities for Hollande (e_1, e_3) and France (e_2, e_4) are represented by one node instead of two.
- Line 87: The fact set should be a subset of the complete graph. The set on the right-hand side of the equation, however, denotes the complete graph (all possible quadruples).
- The definitions for the entity query and the timestamp query are difficult to understand. Maybe it is possible to first introduce atomic formulas, then literals, then conjunctions, and last the disjunctive normal form. Some query examples with corresponding query structures could make it more comprehensible. Why should there be the same number (k) of bound variables V_i and T_i? There is only one relation r in the query definition, but is it not common to have several relations (see Figure 1)? Should "Between" also be one of the possibilities for f?
- The method builds on fuzzy logic, which is important for modeling the operators. A short introduction to fuzzy logic directly in the paper (instead of being in the appendix) would help understanding.
- The MLPs in Equation 3 are different, so indices could be added (as in later cases) to make it clearer.
- Line 160: The Alignment Rule is mentioned and is part of the subsequent equations. Since both formulas (for I_e and I_t) look exactly the same, how are the AND operators calculated differently?
- Line 131: ||q|| should be a query set, so is V_q the embedding of all queries or just one query?
- Existing literature for temporal KG completion does not only include the two groups tensor decomposition and translation. There are also methods based on dynamic (time-dependent) embeddings, logical rules, autoregressive models, Markov processes, … (see [1] for a survey from January 2022).

[1] Temporal Knowledge Graph Completion: A Survey. Borui Cai, Yong Xiang, Longxiang Gao, He Zhang, Yunfeng Li, Jianxin Li. https://arxiv.org/abs/2201.08236

Technical Quality: 2 fair

Clarity: 2 fair

Questions for Authors: The definitions section and method section contain the formulation for the two cases entity query and timestamp query and include a lot of notation, which impedes readability. It is sometimes not clear which equations are needed for which query type. Since both cases are similar in many ways, one suggestion would be to focus on one case, add more details/examples/descriptions, and include the other case in the appendix or mention the differences in a separate section.

Minor comments:
- Figure 1: "Obama" -> "Barack Obama" (to be consistent with the other names)
- Line 21: The statement "results are inevitably incorrect" is rather strong. Depending on the size and content of the KG, results might be correct.
- Table 1: The best results could be marked in bold to make it easier to identify the best method.

Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 2 fair

Presentation: 2 fair

Contribution: 3 good

Limitations: The authors addressed the limitations and corresponding future work adequately in a separate section. Possible broader impact and application areas are also stated in a designated section.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We would like to express our sincere gratitude for the thoughtful feedback. Below, we address the comments and concerns in a comprehensive manner.

> Figure 1: The logical formulation and the representation of entities in the query example.

The example logical formulation is designed to handle the negation correctly by applying it to the entity set (country) rather than the timestamp set (when François Hollande was the president of France). The timestamp set already implicitly quantifies over all timestamps it contains. By doing so, we ensure that we find the countries that meet the specific time constraints for Xi's visit and Obama's non-visit. As for the representation of the entities for Hollande and France, separate nodes allow each node to correspond to a term in the query formulation. We believe that using separate nodes (e_1, e_3, e_2, e_4) enhances clarity and avoids potential confusion in the interpretation of the query.

> The definitions for temporal complex query.

This is a valuable comment. Below, we first revise the definitions and then respond to the comments. If some parts are still unclear, please let us know; we appreciate your feedback.

A Temporal Knowledge Graph (TKG) $G = \{\mathcal{V}, \mathcal{R}, \mathcal{T}, \mathcal{F}\}$ consists of an entity set $\mathcal{V}$, a relation set $\mathcal{R}$, a timestamp set $\mathcal{T}$, and a fact set $\mathcal{F} = \{ (s,r,o,t) \} \subseteq \mathcal{V}\times\mathcal{R}\times\mathcal{V}\times\mathcal{T}$ containing subject-predicate-object-timestamp quartets $(s,r,o,t)$. Without loss of generality, $G$ is a first-order logic knowledge base, where each quartet $(s,r,o,t)$ denotes an atomic formula $r(s, o, t)$, with $r$ a binary predicate and $s, o, t$ its arguments. We focus on Existential Positive First-Order (EPFO) queries [1] over TKGs, namely Temporal Complex Queries $q$, which are categorized into entity queries and timestamp queries.
Formally, the query $q$ consists of a target variable $A$, a non-variable anchor entity set $V_a \subseteq \mathcal{V}$, a non-variable anchor timestamp set $T_a \subseteq \mathcal{T}$, bound variables $V_1,\cdots,V_k$ and $T_1, \cdots, T_l$, logical operations (existential quantification $\exists$, conjunction $\land$, disjunction $\lor$, identity $1$, negation $\lnot$), and extra temporal operations ($\textbf{After}$, $\textbf{Before}$). Inspired by [2,3], the disjunctive normal form (DNF) of query $q$ is defined as:

$$
\begin{aligned}
q[A] = A \;:\; & \exists V_1, \cdots, V_k, T_1, \cdots, T_l : (e_{1}^{1} \land \cdots \land e_{n_1}^{1}) \lor \cdots \lor (e_{1}^{m} \land \cdots \land e_{n_m}^{m})\\
\text{where } & e = f \circ r(V_s, V_o \text{ or } A, g(T)) \text{ or } f \circ r(V_s \text{ or } A, V_o, g(T)) \;\text{ if } q \text{ is an entity query,}\\
& e = f \circ r(V_s, V_o, g(T \text{ or } A)) \;\text{ if } q \text{ is a timestamp query,}\\
\text{with } & V_s, V_o \in V_a \cup \{V_1, \cdots, V_k\},\; T \in T_a \cup \{T_1, \cdots, T_l\},\; r \in \mathcal{R},\; f \in \{1, \lnot\},\; g \in \{\textbf{After}, \textbf{Before}\}
\end{aligned}
$$

In this equation, the DNF is a disjunction of $m$ conjunctions, where $e_{1}^{j} \land \cdots \land e_{n_j}^{j}$ denotes a conjunction of $n_j$ logical atoms, and each $e_i^j$ denotes a logical atom. We omit indices in the definition of $e_i^j$ to keep the formula clean. The goal of answering the query $q$ is to find the answer set $[q]$ of entities (or timestamps) that satisfy the query, such that $A \in [q]$ iff $q[A]$ holds true.

- [1] Efficient query evaluation on probabilistic databases.
- [2] Query2box: Reasoning Over Knowledge Graphs In Vector Space Using Box Embeddings.
- [3] Beta Embeddings for Multi-Hop Logical Reasoning in Knowledge Graphs.

> Should "Between" also be one of the possibilities for f?
We do not include Between in the definition of $f$ for two reasons. First, Between takes *two* inputs, unlike any of the *unary* operations in {1, time negation, After, Before}. Second, Between is not atomic, because `Between(t1, t2) = TimeAnd(After(t1), Before(t2))`.

> A short introduction to fuzzy logic in the main body of the paper.

We will add the following introduction to fuzzy logic and vector logic to the method section.

To cope with logical transformations in the vector space, we introduce vector logic, which is a type of fuzzy logic over a vector space. Fuzzy logic is a generalization of Boolean logic in which the truth value of a logical atom is a real number in $[0, 1]$. In comparison, the truth value in vector logic generalizes this real number to a vector in $[0, 1]^d$ in the semantic space. We denote the logical operations in vector logic as $\textbf{AND}(\land), \textbf{OR}(\lor), \textbf{NOT}(\lnot)$, and so on; each takes one or more vectors and outputs a vector as the answer. For more details about fuzzy logic, please refer to Appendix A.

> The indices for MLPs to make it clearer. Line 160: how are the AND operators calculated differently?

To clarify, the operators do not share parameters, i.e., all MLPs are distinct. We strive to keep the operator design simple, so we propose the same structure, with separate trainable parameters, for all dyadic operators. Thus, $\mathcal{I}_e$ and $\mathcal{I}_t$ have the same form on the logic part, but differ on the feature part.
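To make the points above concrete, here is a minimal sketch of vector-logic operators with non-shared parameters. All names (`time_and`, `between`, `project_entity`, the weights `W_e`/`W_t`) are hypothetical, and the product t-norm and one-layer projections stand in for the learned MLPs; this is an illustration of the ideas, not the paper's implementation.

```python
import numpy as np

def time_and(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # element-wise product t-norm: one common fuzzy conjunction over [0, 1]^d
    # (an assumption; the actual AND in the paper is a learned operator)
    return a * b

def between(after_t1: np.ndarray, before_t2: np.ndarray) -> np.ndarray:
    # Between is not atomic: Between(t1, t2) = TimeAnd(After(t1), Before(t2)),
    # so it composes the two unary operators rather than being a new primitive
    return time_and(after_t1, before_t2)

# Distinct per-operator parameters, mirroring the point that the MLPs
# for entity and timestamp projections do not share weights.
rng = np.random.default_rng(0)
d = 4
W_e = rng.standard_normal((d, d))  # hypothetical stand-in for MLP_0^e
W_t = rng.standard_normal((d, d))  # hypothetical stand-in for MLP_0^t

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def project_entity(v_q, r, v_t):
    # P_e(V_q, r, V_t) = g(MLP_0^e(V_q + r + V_t)): translation-based input,
    # one-layer stand-in for the MLP, sigmoid keeping outputs in [0, 1]
    return sigmoid(W_e @ (v_q + r + v_t))

def project_time(v_q1, r, v_q2):
    # P_t(V_q1, r, V_q2) = g(MLP_0^t(V_q1 + r + V_q2)), with its own weights
    return sigmoid(W_t @ (v_q1 + r + v_q2))
```

In this sketch, `between` is derived rather than primitive, matching the reason it is excluded from the set of unary operations $f$.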
We would revise the notation as follows to make it clearer:

- $\mathcal{P}_e(\mathbf{V}_q,\mathbf{r},\mathbf{V}_t) = g(\textbf{MLP}_0^e(\mathbf{V}_q+\mathbf{r}+\mathbf{V}_t))$
- $\mathcal{P}_t(\mathbf{V}_{q_1},\mathbf{r},\mathbf{V}_{q_2}) = g(\textbf{MLP}_0^t(\mathbf{V}_{q_1}+\mathbf{r}+\mathbf{V}_{q_2}))$
- $\alpha_i = \ldots\textbf{MLP}_1 \rightarrow \alpha^{e,t}_i = \ldots\textbf{MLP}^{e,t}_1$
- $\beta_i = \ldots\textbf{MLP}_2 \rightarrow \beta^{e,t}_i = \ldots\textbf{MLP}^{e,t}_2$

> Existing literature for temporal KG completion: see the global rebuttal.

---

Rebuttal 2:

Comment: I hope this message finds you well. As the deadline for the reviewer-author discussion draws near, we would like to kindly request your input on our rebuttal. We understand your time is valuable, and we greatly appreciate your initial engagement with our paper. Your feedback and insights are pivotal to the improvement of our work. If you could spare a moment to review our responses and consider the adjustments we have made, we would be truly grateful. Your evaluation is crucial to the progression of our research, and your thoughtful assessment will undoubtedly contribute to the overall quality of the paper. Thank you once again for your time and consideration. Please feel free to reach out if you have any questions or require further clarification. We eagerly await your response.

---

Rebuttal Comment 2.1:

Comment: Thanks for clarifying some of the questions. As other reviewers suggested, you should consider more references to related work, including (explainable) temporal reasoning on TKGs.

---

Reply to Comment 2.1.1:

Comment: Thank you for your response! We are delighted to have received your feedback, and your insights are invaluable for enhancing the quality of this paper. Concerning the citations of relevant works, we have added up to 19 additional references on the topic of TKGC in the revised manuscript.
These references can be found in the global rebuttal available at (https://openreview.net/forum?id=oaGdsgB18L&noteId=KOfY3P2mqt), showcasing the distinctions and connections between our paper and existing works on the subject. Furthermore, in response to your recent mention of temporal reasoning on TKGs, we have incorporated an additional paper on few-shot temporal reasoning over TKGs [20]. With these amendments, the section on TKG-related works now reads as follows:

```markdown
The TKGC task aims at inferring new facts in TKGs. Existing TKGC methods can be categorized into (1) tensor decomposition [1,2,3], (2) timestamp-based transformation [4,5,6,7,8], (3) dynamic embedding [9,10,11,12], (4) Markov process models [13,14], (5) autoregressive models [15,16,17], (6) others [18,19,20], and so on. Most of these works are confined to the one-hop link prediction task, also known as one-hop reasoning. Some works [12,15,16,17,19] can perform multi-hop reasoning via a path consisting of connected quartets, but none of them can answer logical queries that involve multiple logical operations (conjunction, negation, and disjunction). In this paper, we focus on the temporal complex query answering task, which is more challenging than the TKGC task.
```

This revision includes representative works up until 2022. Despite this, none of the referenced articles delve into reasoning with complex symbolic logic, as they remain restricted to single-hop and multi-hop reasoning. Therefore, we firmly believe that this paper presents the first multi-hop logical reasoning framework on TKGs whose temporal component encompasses logic. We are confident that our paper makes indispensable contributions to the field of knowledge graphs, encompassing the first formal definition of the task, a novel reasoning dataset, a TKGR framework, and the identification of future directions for this domain.
Your reviews have also consistently acknowledged these contributions. We hope these can serve as justifiable reasons for you to consider a higher rating, which is really important to support our future work. Based on your brief response, it appears that, aside from the section on related works, you are satisfied with the rebuttal provided. Should you have any further inquiries, or if you believe there are other aspects we have overlooked, please do not hesitate to bring them to our attention. We are eager to engage in further communication and discussion with you.

References:
- [1] Tensor Decomposition-Based Temporal Knowledge Graph Embedding.
- [2] Tensor decompositions for temporal knowledge base completion.
- [3] Tucker decomposition-based Temporal Knowledge Graph Completion.
- [4] ChronoR: Rotation Based Temporal Knowledge Graph Embedding.
- [5] Deriving validity time in knowledge graph.
- [6] Leveraging Static Models for Link Prediction in Temporal Knowledge Graphs.
- [7] TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation.
- [8] HyTE: Hyperplane-based Temporally aware Knowledge Graph Embedding.
- [9] Temporal Knowledge Graph Completion Based on Time Series Gaussian Embedding.
- [10] DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion.
- [11] Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs.
- [12] TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion.
- [13] RTFE: A Recursive Temporal Fact Embedding Framework for Temporal Knowledge Graph Completion.
- [14] Learning Dynamic Embeddings for Temporal Knowledge Graphs.
- [15] Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs.
- [16] Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning.
- [17] Learning Neural Ordinary Equations for Forecasting Future Links on Temporal Knowledge Graphs.
- [18] Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs. - [19] Learning to Walk across Time for Interpretable Temporal Knowledge Graph Completion. - [20] Learning to Sample and Aggregate: Few-shot Reasoning over Temporal Knowledge Graphs
Summary: This paper proposes a method to learn on temporal knowledge graphs, using a combination of fuzzy logic with a temporal extension and node embeddings. To test the method, three new datasets were generated. The choices made for the model seem logical and are overall well motivated in the paper (e.g., Figure 2). The final model was tested on multiple datasets and compared to several non-trivial baselines. Furthermore, ablation studies were performed to analyze the effect of several components of the TFLEX model. While this paper has several strong points when it comes to presentation (e.g., Figure 2 and the colors in line 164), there are some lacking areas. Firstly, having too much mathematical notation spread throughout the text itself (e.g., lines 173-180) compromises readability. Secondly, given the novelty of the model, I would have liked to see a detailed diagram representing the model. This paper provides a novel approach for learning on temporal graphs and provides three datasets for this task. Research on temporal graphs is somewhat limited (as seen by the limitations of the baselines), making this a welcome addition to the sub-area of machine learning on graphs.

Strengths: This paper performs an analysis by motivating choices and thoroughly evaluating the model. The figures that are present are very clear.

Weaknesses: Some of the math could have been better separated from the text, making the paper more readable. There is no figure to visually convey the model itself.

Technical Quality: 3 good

Clarity: 2 fair

Questions for Authors: To me, it is not clear why we would want up to n compositions of the identity, negation, before, and after operations (see line 105). In lines 124-125 you seem to assume a query only has one answer. Is this only for training purposes, or is this a limitation of the model?

Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.

Soundness: 3 good

Presentation: 2 fair

Contribution: 3 good

Limitations: The model seems to limit the answer set of queries to just one answer (the embedding closest to the query embedding).

Flag For Ethics Review: ['No ethics review needed.']

Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We appreciate the reviewer's thoughtful assessment of our paper. We would like to address the points raised in the review and provide clarification and additional information to enhance the overall understanding and appreciation of our work.

> To me, it is not clear why we would want up to n compositions of the identity, negation, before and after operations (see line 105).

We also find that this is not necessary and will remove it in the revision. For now, please refer to our response to reviewer NJDL, where we present a revised version of the definition.

> In lines 124-125 you seem to assume a query only has one answer. Is this only for training purposes, or is this a limitation of the model?

To clarify, a query has an answer set rather than a single answer. Each answer in the answer set is called a Temporal Query Answer, and the embedding of each answer should be close to the query embedding.

Once again, we appreciate the opportunity to receive feedback and engage in this scholarly discourse, and we are dedicated to delivering an enhanced version of our paper that aligns with the expectations of the reviewer and the broader academic community.

---

Rebuttal Comment 1.1:

Title: Reviewer please acknowledge having read the rebuttal

Comment: Reviewer SQqW, please let me know if your questions are addressed and whether you would like to revise your score.
Summary: This paper studies the multi-hop logical reasoning problem on temporal knowledge graphs and proposes the first temporal complex query embedding framework, named the Temporal Feature-Logic Embedding framework (TFLEX). First, the authors define the task of multi-hop logical reasoning over TKGs. Second, they design the first multi-hop logical reasoning framework, which utilizes fuzzy logic to compute the logic part and extends fuzzy logic to the timestamp set to support all FOL operations plus extra temporal operations (After, Before, and Between). Finally, they generate three new TKG datasets for the task of multi-hop logical reasoning. Experiments on benchmark datasets demonstrate the efficacy of the proposed framework in dealing with different operations in complex queries.

Strengths:
1. They creatively study the multi-hop logical reasoning problem on TKGs and propose the first framework, TFLEX, to answer temporal complex queries.
2. They propose using fuzzy logic to handle the temporal feature-logic embedding and add extra temporal operations.
3. They provide three new TKG datasets and compare against multiple benchmarks for experimental verification, to better validate the reliability of their framework.
4. They give clear explanations of the definitions and methods used, with reasonable classification and appropriate illustrations.
5. The appendix contains more detailed supplements and explanations that better explain the details of the method and experiments.
6. Open-source code and datasets are available.

Weaknesses:
1. In the section on related work, there is a lack of appropriate discussion of the problems and shortcomings of the cited methods, as well as a comparison with the current work.
2. The appendix provides some overly detailed supplements to the overall concept; for example, the description of fuzzy logic could be more concise.
3. It is not recommended to directly include source code for explanation and display in the appendix.

Technical Quality: 4 excellent

Clarity: 3 good

Questions for Authors:
1. It is recommended that the authors compare the methods of related articles with the improvements in their own article to highlight the efficiency of their method.
2. It is suggested that the authors streamline the appendix and move the subsections "Explaining answers with the framework" and "Experimental analysis" into the main body of the paper to better demonstrate the experiments and principles.
3. It is suggested that pseudocode be used for interpretation, which aids reading and analysis and is more standardized.

Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.

Soundness: 4 excellent

Presentation: 3 good

Contribution: 4 excellent

Limitations: The authors thoroughly analyze the limitations in the article, including insufficient temporal operators, room to improve the time embedding, long query generation time, and weak MRR and Hits@k, and they propose plans for future improvement.

Flag For Ethics Review: ['No ethics review needed.']

Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations.

Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: We appreciate the reviewer's time and effort in evaluating our manuscript. We have carefully considered the provided feedback and would like to address each point of concern:

> Lack of comparison with related work: (see the PDF in the global rebuttal section)

We acknowledge the reviewer's comment regarding the need for more explicit discussion of the problems and shortcomings of the methods cited in the related articles. In the PDF in the global rebuttal section, we include a comprehensive comparison between the methods outlined in the related work and our proposed approach. This will not only highlight the limitations of existing methods but also underscore the advancements offered by our approach.

> Overly detailed supplements in the appendix:

We understand the reviewer's concern about the detailed supplements in the appendix. While we believe that these supplemental explanations contribute to a deeper understanding of the concepts, we will revisit the appendix to ensure that the content remains concise and directly relevant to the main concepts presented in the paper.

> Using pseudocode for interpretation:

We appreciate the suggestion to use pseudocode for better interpretation. In the revised manuscript, we will include pseudocode to present the key algorithms and operations in our TFLEX framework. This will enhance readability and facilitate a standardized understanding of our proposed methodology.

> Streamlining appendix content and simplifying subsections:

We take the reviewer's point about streamlining the appendix content and simplifying certain subsections. To enhance the readability and coherence of the paper, we will integrate the content from the subsections "Explaining answers with the framework" and "Experimental analysis" into the main body of the paper. This adjustment will help readers better understand the experimental setup and the principles underlying our work.
---- In conclusion, we are genuinely thankful for the reviewer's insightful feedback, which has guided us towards refining our manuscript. We are committed to addressing each concern and ensuring that the revised paper comprehensively presents our work while effectively addressing the points raised. Thank you for your time and consideration.
Rebuttal 1:

Rebuttal: Dear Reviewers,

We would like to express our sincere gratitude for your thoughtful and constructive feedback on our submission. We greatly appreciate the time and effort dedicated to evaluating our work, and we are excited to engage in this rebuttal process to address the raised concerns and comments. Below, we respond to the points of greatest concern.

> [@BpBt, @NJDL, @Zrqw] Existing literature for temporal KG completion and comparison with related TKGC works.

We select related TKGC works that we have already read or that are cited by the survey (Temporal Knowledge Graph Completion: A Survey) mentioned by reviewer NJDL. We would like to add the following sentences to the related work section.

```md
The TKGC task aims at inferring new facts in TKGs. Existing TKGC methods can be categorized into (1) tensor decomposition [1,2,3], (2) timestamp-based transformation [4,5,6,7,8], (3) dynamic embedding [9,10,11,12], (4) Markov process models [13,14], (5) autoregressive models [15,16,17], (6) others [18,19], and so on. Most of these works are confined to the one-hop link prediction task. Some works [12,15,16,17,19] can perform multi-hop reasoning via a path consisting of connected quartets, but they cannot answer logical queries that involve multiple logical operations (conjunction, negation, and disjunction). In this paper, we focus on the temporal complex query answering task, which is more challenging than the TKGC task.
```

> [@Zrqw] A separate table comparing TFLEX and state-of-the-art TKGC on a completion task.

We attach the table in the PDF.

> [@BpBt, @NJDL, @Zrqw] Improvement of readability

To improve the readability of the paper, we plan the following revisions:

1. More readable notation in the formulas, with more explanation of the symbols [#our response to NJDL and Zrqw].
2. A leading sentence in the method section.
```md
In this section, we replace the variables in the query formulation with temporal feature-logic embeddings and perform logical operations via neural networks. We first introduce the temporal feature-logic embedding for entities, timestamps, and queries in Section 4.1. Afterwards, we introduce the logical operators in Section 4.2 and how to train the model in Section 4.3.
```

3. A short introduction to fuzzy logic in the method section; please refer to our response to NJDL for details. [#our response to NJDL]
4. A brief introduction to the query types in the dataset setting of the experiment section [#our response to Zrqw].
5. Streamlining the appendix content and simplifying subsections. [#our response to BpBt]

------

In conclusion, we once again thank the reviewers for their valuable insights and feedback. We have carefully considered all the comments and suggestions and have made corresponding revisions to improve the quality and clarity of our work. We believe that our paper contributes significantly to the field by presenting a novel approach for complex logical reasoning over temporal knowledge graphs. We are confident that our revisions address the reviewers' concerns, and we look forward to presenting our findings and engaging in insightful discussions with the community. We thank you all for your time and consideration.

------

References:
- [1] Tensor Decomposition-Based Temporal Knowledge Graph Embedding.
- [2] Tensor decompositions for temporal knowledge base completion.
- [3] Tucker decomposition-based Temporal Knowledge Graph Completion.
- [4] ChronoR: Rotation Based Temporal Knowledge Graph Embedding.
- [5] Deriving validity time in knowledge graph.
- [6] Leveraging Static Models for Link Prediction in Temporal Knowledge Graphs.
- [7] TeRo: A Time-aware Knowledge Graph Embedding via Temporal Rotation.
- [8] HyTE: Hyperplane-based Temporally aware Knowledge Graph Embedding.
- [9] Temporal Knowledge Graph Completion Based on Time Series Gaussian Embedding.
- [10] DyERNIE: Dynamic Evolution of Riemannian Manifold Embeddings for Temporal Knowledge Graph Completion.
- [11] Know-Evolve: Deep Temporal Reasoning for Dynamic Knowledge Graphs.
- [12] TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion.
- [13] RTFE: A Recursive Temporal Fact Embedding Framework for Temporal Knowledge Graph Completion.
- [14] Learning Dynamic Embeddings for Temporal Knowledge Graphs.
- [15] Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs.
- [16] Temporal Knowledge Graph Reasoning Based on Evolutional Representation Learning.
- [17] Learning Neural Ordinary Equations for Forecasting Future Links on Temporal Knowledge Graphs.
- [18] Explainable Subgraph Reasoning for Forecasting on Temporal Knowledge Graphs.
- [19] Learning to Walk across Time for Interpretable Temporal Knowledge Graph Completion.

Pdf: /pdf/40e2143536c37023098322afdb8b93fd5038f55f.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The authors present a new embedding framework called TFLEX to embed complex temporal queries over temporal knowledge graphs and perform multi-hop reasoning with time constraints on TKGs. They present the overall embedding framework using fuzzy logic to model complex logical queries, extending fuzzy logic to include three temporal operators: After, Before and Between. The paper presents new benchmark datasets to evaluate embeddings on TKGs and shows experimental results demonstrating the benefits of the proposed approach. Strengths: The paper presents a new embedding framework and dataset for complex temporal queries, which can benefit the community in extending research in this direction. Weaknesses: It is not clear whether this framework can handle complex queries without temporal constraints on par with other complex query embedding methods. Suppose we ignore the temporal aspect of the current framework and run the standard benchmarks on complex query handling: does it work as well as other methods in the literature? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Can this solve complex query answering equally well on other benchmark tasks, and how does it fare against the prior art? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's constructive feedback on our paper. We are pleased that the reviewer recognizes the merits of our work and acknowledges the contributions we have made to the field. We would like to address the concerns and questions below. > Not sure if this framework can handle complex queries without temporal constraints on par with other complex query embedding methods? suppose if we ignore the temporal aspect of current framework and run the standard benchmarks on complex query handling does it work as well as other methods in the literature. Can this solve complex query answering equally well on other bench mark tasks and how does it fare against the prior art? This is an interesting question, and the answer is yes. Ignoring the time parts of TFLEX yields the degenerate variant FLEX in our experiments. We report the results of FLEX on the standard datasets FB237, FB15k and NELL, compared to well-known QE baselines (GQE, Query2box, BetaE, ConE) in the tables below. From the tables, we observe that FLEX achieves competitive performance compared to the four baselines. This is similar to the situation on temporal complex queries over TKGs. We attribute the improvement to the fuzzy logic operators, which can handle complex queries better than the geometric operators in the QE baselines. Table 1. MRR results for answering queries without negation ($\exists$, $\land$, $\lor$) on FB15k, FB237 and NELL. The best results are in bold. **AVG** denotes average performance.
| **Dataset** | **Model** | **1p** | **2p** | **3p** | **2i** | **3i** | **pi** | **ip** | **2u** | **up** | **AVG** |
| :---------- | :-------- | :----- | :----- | :----- | :----- | :----- | :----- | :----- | :----- | :----- | ------: |
| FB15k | GQE | 53.9 | 15.5 | 11.1 | 40.2 | 52.4 | 27.5 | 19.4 | 22.3 | 11.7 | 28.2 |
| | Q2B | 70.5 | 23.0 | 15.1 | 61.2 | 71.8 | 41.8 | 28.7 | 37.7 | 19.0 | 40.1 |
| | BetaE | 65.1 | 25.7 | 24.7 | 55.8 | 66.5 | 43.9 | 28.1 | 40.1 | 25.2 | 41.6 |
| | ConE | 73.3 | 33.8 | 29.2 | 64.4 | 73.7 | 50.9 | 35.7 | **55.7** | 31.4 | 49.8 |
| | FLEX | **77.1** | **37.4** | **31.6** | **66.4** | **75.2** | **54.2** | **42.4** | 52.9 | **34.3** | **52.4** |
| FB237 | GQE | 35.2 | 7.4 | 5.5 | 23.6 | 35.7 | 16.7 | 10.9 | 8.4 | 5.8 | 16.6 |
| | Q2B | 41.3 | 9.9 | 7.2 | 31.1 | 45.4 | 21.9 | 13.3 | 11.9 | 8.1 | 21.1 |
| | BetaE | 39.0 | 10.9 | 10.0 | 28.8 | 42.5 | 22.4 | 12.6 | 12.4 | 9.7 | 20.9 |
| | ConE | 41.8 | 12.8 | 11.0 | 32.6 | 47.3 | 25.5 | 14.0 | 14.5 | 10.8 | 23.4 |
| | FLEX | **43.6** | **13.1** | **11.1** | **34.9** | **48.4** | **27.4** | **16.1** | **15.4** | **11.1** | **24.6** |
| NELL | GQE | 33.1 | 12.1 | 9.9 | 27.3 | 35.1 | 18.5 | 14.5 | 8.5 | 9.0 | 18.7 |
| | Q2B | 42.7 | 14.5 | 11.7 | 34.7 | 45.8 | 23.2 | 17.4 | 12.0 | 10.7 | 23.6 |
| | BetaE | 53.0 | 13.0 | 11.4 | 37.6 | 47.5 | 24.1 | 14.3 | 12.2 | 8.5 | 24.6 |
| | ConE | 53.1 | 16.1 | 13.9 | 40.0 | **50.8** | 26.3 | 17.5 | 15.3 | 11.3 | 27.2 |
| | FLEX | **57.8** | **16.8** | **14.7** | **40.5** | **50.8** | **27.3** | **19.4** | **15.6** | **11.6** | **28.2** |

Table 2. MRR results for answering queries with negation on FB15k, FB237, and NELL. The best results are in bold. **AVG** denotes average performance.
| **Dataset** | **Model** | **2in** | **3in** | **inp** | **pin** | **pni** | **AVG** |
| :---------- | :-------- | :------ | :------ | :------ | :------ | :------ | ------: |
| FB15k | BetaE | 14.3 | 14.7 | 11.5 | 6.5 | 12.4 | 11.8 |
| | ConE | 17.9 | 18.7 | 12.5 | 9.8 | 15.1 | 14.8 |
| | FLEX | **18.0** | **19.3** | **14.2** | **10.1** | **15.2** | **15.4** |
| FB237 | BetaE | 5.1 | 7.9 | 7.4 | 3.6 | 3.4 | 5.4 |
| | ConE | 5.4 | 8.6 | 7.8 | **4.0** | **3.6** | 5.9 |
| | FLEX | **5.6** | **10.7** | **8.2** | **4.0** | **3.6** | **6.5** |
| NELL | BetaE | 5.1 | 7.8 | 10.0 | 3.1 | 3.5 | 5.9 |
| | ConE | 5.7 | 8.1 | 10.8 | 3.5 | 3.9 | 6.4 |
| | FLEX | **5.8** | **9.1** | **10.9** | **3.6** | **4.1** | **6.7** |
Follow-ups Also Matter: Improving Contextual Bandits via Post-serving Contexts
Accept (spotlight)
Summary: The paper studies a variant of linear contextual bandits where a post-serving context is provided to the learner *after* the learner selects an arm. The reward is linear in the context and the post-serving context (but thus may not be linear in the original context on its own). The post-serving context is specified by a noisy perturbation of a fixed (but unknown) function of the pre-specified context. The paper abstracts away the process of learning the function: the function is assumed to be learnable with sufficiently many pairs of context and noisy post-serving context, and the "complexity" of the function is parameterized by the error of this learning process. The paper designs an algorithm based on LinUCB that creates a confidence set for the post-serving context in addition to the typical confidence set for the reward function parameters. The regret bound has a $O(T^{1-\alpha})$ term that depends on the complexity $\alpha$ of the function and a $O(\sqrt{T})$ term that depends on the dimension. The paper builds on the classical analysis of LinUCB, but requires a novel elliptical potential lemma. The paper further complements the theoretical analysis with simulations on synthetic data and real-world data (MovieLens dataset). Strengths: The paper provides an intriguing model for additional information available to the learner beyond the reward. The idea of a post-serving context is natural and well-motivated. The approach of abstracting away the complexity of learning the function mapping contexts to means of post-serving contexts is also clean and elegant. The LinUCB-based algorithm in the paper is simple and elegant. The paper also provides a detailed discussion of shortcomings of relying on a point estimate of the post-serving contexts (in Section 2.2) and the need to appropriately address uncertainty in phi via a confidence set when computing an upper confidence bound.
The analysis of the algorithm is also technically interesting, with an introduction of the novel elliptical potential lemma. The paper clearly describes how the novel elliptical potential lemma accommodates slower learning rates and allows for noisy contexts, and furthermore highlights how the lemma is tight in several regimes. The paper is very well-written and explains the key ideas clearly. Weaknesses: While the paper focuses solely on upper bounds, it could be interesting to investigate lower bounds. Is the dependence on alpha in the regret bound tight? (See question below.) That being said, this is a fairly minor weakness, since the upper bound analysis is already involved and interesting. Another minor weakness is that in Observation 1, it seems that the result is restricted to linear bandit algorithms (which should probably be clarified in the statement too). This seems to be a bit of an unfair comparison because the model is misspecified, and the linear bandit algorithm was not designed to handle misspecified rewards. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Questions: - Is the dependence on alpha in the regret bound tight? Typos: - “Generlaize” -> “generalize” p.5 - “Sketche” -> “sketch” p.6 - “Sthe” -> “the” p.6 - “Followings” -> “following” p.7 Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes, the authors adequately addressed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. 
Code Of Conduct: Yes
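To make the setting summarized above concrete, here is a minimal simulation sketch of one round of the post-serving bandit interaction. All names, dimensions, noise levels, and the tanh mapping are hypothetical illustrations, not the paper's code: the learner observes $x_t$, pulls an arm, and only then sees the noisy post-serving context $z_t$ that enters the linear reward.

```python
import numpy as np

rng = np.random.default_rng(0)
d_x, d_z, n_arms = 3, 2, 4  # made-up dimensions

# Unknown quantities the learner must estimate (hypothetical instantiation).
theta = {a: rng.normal(size=d_x) for a in range(n_arms)}
beta = {a: rng.normal(size=d_z) for a in range(n_arms)}
A = rng.normal(size=(d_z, d_x))

def phi(x):
    """Fixed but unknown mapping from pre-serving context to mean post-serving context."""
    return np.tanh(A @ x)

def step(a, x):
    """One round: the learner has already seen x and pulled arm a; only now is z revealed."""
    z = phi(x) + 0.1 * rng.normal(size=d_z)              # noisy post-serving context
    r = theta[a] @ x + beta[a] @ z + 0.1 * rng.normal()  # reward is linear in (x, z)
    return z, r

x = rng.normal(size=d_x)  # pre-serving context, observed before the pull
z, r = step(0, x)
```

The key structural point is that `z` is revealed only inside `step`, after the arm index is committed, so it can be used for estimation but not for the current decision.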
Rebuttal 1: Rebuttal: We deeply appreciate the reviewer's thoughtful comments! You can find our responses to each question below. **Re: weakness 1 \& Q1** For $\alpha=1/2$, our bound is tight: it achieves the $\sqrt{T}$ regret (equivalently, a $1/\sqrt{T}$ average per-round rate), which matches the lower bound. However, for other $\alpha$, it is an interesting problem setup which combines the lower bound of offline PAC learning and online learning, and we are currently not sure whether our bound is tight. Future directions could be incorporating more complex ways of estimating the $\phi$ function, such as manifold regression (see, e.g., Yang \& Dunson, 2016), non-parametric methods like k-nearest-neighbors, or estimating complex non-smooth functions (e.g., those in the Holder space $H(\beta, L)$; see the note by Tibshirani \& Wasserman, 2019), which may cover the case of $\alpha \in (0.5, 1]$. Whether our regret bound is tight for such more complex situations is an interesting open direction, not only because it is beyond the scope of the current paper but also because even understanding the convergence rate for estimating the function $\phi$ itself is already a highly non-trivial question and, in fact, an active research area. Yang, Yun, and David B. Dunson. "Bayesian manifold regression." (2016): 876-905. Tibshirani, Ryan and Larry Wasserman. "Nonparametric Regression", Statistical Machine Learning, Spring 2019. **Re: weakness 2** The goal of Observation 1 is to illustrate the issue of misspecified features when directly applying linear bandit algorithms, and it thus motivates the necessity of modeling the functional relationship between the pre-serving and post-serving context. We did not intend to view it as a comparison between standard linear bandit algorithms and our to-be-developed algorithm. If we do not model this functional relationship, the model will be misspecified, and thus a linear regret cannot be avoided. Once again, we appreciate your time and constructive feedback.
If further questions or concerns arise, please don't hesitate to reach out. Thank you! --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response! I am satisfied with the response and am maintaining my score (strong accept).
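Combining the review's summary of the bound with the rebuttal's remark on $\alpha = 1/2$, the shape of the regret discussed above can be summarized as follows (log factors and dimension-dependent constants suppressed; this is a paraphrase of the discussion here, not a verbatim statement from the paper):

```latex
% Regret decomposition: a term from learning \phi^\star at rate \alpha,
% plus a standard LinUCB-style \sqrt{T} term.
\mathrm{Reg}(T) = \tilde{O}\!\left(T^{1-\alpha} + \sqrt{T}\right),
\qquad
\alpha = \tfrac{1}{2} \;\Longrightarrow\; \mathrm{Reg}(T) = \tilde{O}\!\left(\sqrt{T}\right).
```

For $\alpha < 1/2$ the estimation term $T^{1-\alpha}$ dominates, which is why tightness outside $\alpha = 1/2$ is left open in the rebuttal.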
Summary: This paper looks into contextual bandits in the scenario where only partial context is provided to the learner for making decisions. Specifically, this paper proposes an environment setting where the full context consists of pre and post components, which are revealed to the learner before and after she finalizes the action to take. Given that the true reward is tied to the full context, the algorithm should include a post-context inference step to achieve low regret. For the linear case, this paper proposes a low-regret algorithm, poLinUCB, for this pre-post contextual bandit setting. The performance of poLinUCB is proved with theorems and evaluated by simulations, where a smaller regret rate is shown in contrast to other methods/baselines. Strengths: 1. Both the theorems and simulations show in agreement that poLinUCB outperforms the counter-method (Wang et al.) and the baselines mentioned in the paper. 2. The theorem and its consequent sharper regret rate in $T$ compared to Wang et al. are built upon the generalized EPL, which may be considered a new tool of independent interest that takes the uncertainties of context into account. Weaknesses: **Soundness of the problem setup** Though the math content from problem formulation to algorithm to theorems is sound, it is a separate question whether the setup is worth studying in the first place; put another way, does the reward model (under line 107) cover/match any real-world problem? For me, some major drawbacks are: - it is unlikely in reality that z_t will be fully revealed to the learner - the learnability assumption fails when z_t is not fully determined by x_t, which is what usually happens in the hidden-context setting where the hidden part is influenced by multiple factors, including but not limited to the revealed part x_t. - given this setting where z_t is fully predictable from x_t, isn't a kernel/neural bandit already a good solution for this problem?
**Method proposed is pretty similar to an existing one** It is doubtful whether the outperformance is gained because poLinUCB has any essential algorithmic improvement over that of Wang et al., **or whether the performance gain is only a matter of the new problem setting**. Wang et al. is not designed for the specific problem setting brought up by this work. - I can see that the original algorithm by Wang et al. will suffer larger regret under your reward setup, where the true reward has the component $<z_t, \beta_a^*>$. Since the $z_t$'s from previous rounds are accessible to the learner, the most accurate way of estimating $\beta^*$ is certainly to utilize $z_t$ rather than $\hat \phi (x_t)$ in the loss. poLinUCB is more an alteration of an existing method, adapting it to a new problem. - But what if the true reward instead has $<\phi^*(x_t), \beta_a^*>$, which is like a twin problem setup of this paper's? Then maybe Wang et al. would outperform poLinUCB, given that $z_t$ is a noisy sample of $\phi^*(x_t)$ and thus casts extra error into $\hat \beta_a$, while $\hat \phi (x)$ is an ERM estimate, which should have a noise-reduction effect that strengthens as $t$ goes up. That the regret upper bound of Wang et al. is inferior does not necessarily mean the algorithm is less optimal; the performance may be underestimated by an upper bound that is not tight. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - What I'm not sure of is whether the problem setting is of good value to study; hopefully the authors will respond with more evidence that may justify the problem setting. - Is the dependency on $d_u = d_x + d_z$ able to be further optimized? Given that $z_t$ is predictable from $x_t$, the effective dimension of this problem may be smaller than $d_u$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Identified Limitations are listed in Weakness and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
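For background on the generalized EPL mentioned in the strengths above: the classical elliptical potential lemma it extends (in the standard form used in linear bandit analyses, e.g. Abbasi-Yadkori et al., 2011) states that for $V_t = \lambda I + \sum_{s \le t} x_s x_s^\top$ with $\|x_s\|_2 \le L$:

```latex
% Classical elliptical potential lemma (background; not the paper's generalized version).
\sum_{t=1}^{T} \min\!\left\{1,\ \|x_t\|_{V_{t-1}^{-1}}^{2}\right\}
\;\le\; 2\log\frac{\det V_T}{\det V_0}
\;\le\; 2d\log\!\left(1 + \frac{T L^{2}}{d\lambda}\right).
```

The paper's contribution, as described in this review, is a generalization of this lemma that additionally accommodates noisy/estimated contexts and slower learning rates.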
Rebuttal 1: Rebuttal: Thank you to the reviewer for the comprehensive and valuable comments! Below, you'll find our in-depth responses to the questions raised. **Re: weakness 1 & 2** The reviewer is correct that there may be other hidden factors (both pre-serving and post-serving) that affect the reward. In this work, we are trying to capture such post-serving signals as much as we can, by modeling them as observable post-serving contexts. We do not mean to claim that we are able to capture everything post-serving, nor can any previous work. This is also why our reward model – like most standard bandit models – has a noise term, which is designed to capture the remaining unobservable signals. Our experiments show that utilizing those observable post-serving features is already useful for reducing the regret. **Re: weakness 3** To a certain degree, the reviewer's proposal is correct. However, for neural bandits, relying solely on the mapping from the pre-serving context x to the reward r may not be as efficient as our method, which exploits the structure of the reward function, i.e., $r = \theta * x + \beta * \phi(x)$. We observed that LinUCB performs worse than poLinUCB, as LinUCB does not exploit this structure, though it will ultimately achieve no regret. In general, leveraging this intrinsic structure enhances generalization and makes the learning process more efficient. Additionally, on the computational side, the kernel method necessitates the inversion of the covariance matrix, which comes with a computational cost that is cubic in the dimension. In addition, calculating the NTK is also a costly process. For instance, the computation of the convolutional neural tangent kernel (CNTK) involves $\Omega(d^4\cdot n^2)$ operations (Arora et al., 2019), where $d$ is the dimension and $n$ is the number of data points. [Reference] Arora, Sanjeev, et al. "On exact computation with an infinitely wide neural net."
Advances in Neural Information Processing Systems 32 (2019). **Re: weakness 4** We believe poLinUCB has an essential algorithmic improvement over Wang et al. In fact, our first attempt at solving our problem was a natural adaptation of the algorithm of Wang et al. 2016 (LinUCB ($\hat{\phi}$)). Note that LinUCB ($\hat{\phi}$) is not exactly the same as the algorithm of Wang et al. 2016, but a natural adaptation of their idea to our setup (i.e., estimating the post-serving context first, and then using that estimate to estimate the linear parameters), which is why we think it is a suitable algorithm for our setup. We tried to prove that this algorithm has small regret, in which we did not succeed (our later experiments do show that its regret is not good). This was the motivation for us to design the poLinUCB algorithm. **Re: weakness 5\&6** Your understanding is intuitively correct for the first point. Additionally, we ran experiments following the setup you mentioned. Please see Figure (2) in the uploaded PDF (as a response to all reviewers). We observe that the algorithm adapted from Wang et al., 2016 (LinUCB ($\hat{\phi}$)) still performs worse than our method, though the gap seems to become smaller. **Re: Q1** The potential significance of our problem setting lies in the exploration of using pre-serving context to predict post-serving context within the contextual bandits framework. Unlike reward data, which may be sparse or less accessible, the mapping from pre-serving contexts (such as user history) to post-serving contexts (such as feedback) often presents a more abundant and accessible source of information. Our proposed algorithm leverages this relationship, potentially creating a dynamic, adaptive recommendation system that could lead to improved recommendations by predicting user responses.
In summary, our research highlights the potential of leveraging the more accessible and abundant data of predicted post-serving features, aligning with practical needs across various domains, and possibly paving the way toward more sophisticated, responsive recommendation systems. **Re: Q2** Regarding the dependency on $d_u$: if the mapping from x to z is non-linear and without additional assumptions on $\phi$, we do not think the dependency on $d_u$ can be improved. This is because the covariance matrix of $(x, \phi(x))$ can be full rank $d_u$ when $\phi(x)$ is non-linear. We're grateful once again for your time and detailed feedback. Should you have any more questions or concerns, please feel free to contact us. Thank you! --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which addressed all of my questions and cleared most of my concerns, especially the new simulation. Given that this paper proposes an interesting problem setup and provides a new elliptical potential lemma, I'd like to raise my score to 6.
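The point in this rebuttal about the dependency on $d_u$ can be illustrated numerically. This is a hedged sketch with made-up dimensions and a tanh stand-in for $\phi$: with a linear $\phi$ the stacked features $(x, \phi(x))$ are rank-deficient, while a non-linear $\phi$ generically makes them full rank $d_u = d_x + d_z$.

```python
import numpy as np

rng = np.random.default_rng(2)
d_x, d_z, n = 3, 2, 200  # hypothetical dimensions and sample count
A = rng.normal(size=(d_z, d_x))

X = rng.normal(size=(n, d_x))
U_lin = np.hstack([X, X @ A.T])              # z = A x   (linear phi)
U_nonlin = np.hstack([X, np.tanh(X @ A.T)])  # z = tanh(A x)  (non-linear stand-in)

# With a linear phi, the z-columns are linear combinations of the x-columns,
# so the stacked feature matrix stays rank d_x; a non-linear phi generically
# yields full rank d_u = d_x + d_z.
rank_lin = np.linalg.matrix_rank(U_lin)
rank_nonlin = np.linalg.matrix_rank(U_nonlin)
```

This matches the rebuttal's claim that the covariance of $(x, \phi(x))$ can be full rank $d_u$ when $\phi$ is non-linear, so the $d_u$ dependence cannot be improved in general.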
Summary: The paper proposes a novel contextual bandit problem with post-serving contexts and introduces a new algorithm, poLinUCB, that achieves tight regret under standard assumptions. The authors demonstrate the effectiveness of their approach through both synthetic and real-world experiments, showing that poLinUCB consistently outperforms other strategies in leveraging post-serving information. Overall, the paper makes contributions to the field of online learning and contextual bandits. Strengths: - Introduces a new contextual bandit problem with post-serving contexts and proposes a novel algorithm, poLinUCB, that achieves tight regret under standard assumptions. - Demonstrates the effectiveness of the proposed approach through both synthetic and real-world experiments. - Makes significant contributions to the field of online learning and contextual bandits, and has the potential to improve efficiency in a wide range of applications. Weaknesses: - The paper does not seem to be fully finished. There is no conclusion section. - The paper could benefit from a more thorough discussion of the limitations and assumptions of the proposed approach, particularly in the context of real-world applications. - The experimental evaluation is limited. It could be expanded to include more datasets and scenarios. - The paper could provide more details on the implementation and computational complexity of the proposed algorithm. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: - Could the authors provide more details on the choice of hyperparameters and the sensitivity of the proposed algorithm to their values? - Could the authors provide more insights into the practical implications and potential impact of the proposed approach? - How does the proposed algorithm handle noisy or incomplete post-serving context information, and what are the limitations of the approach in such scenarios? Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: See comments above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks to the reviewer for the thoughtful observations and comments! Our detailed responses to the questions can be found below. **Re: weakness 1 \& 2** We apologize for missing the conclusion and limitation section in the main paper. Due to space constraints, we had to move the conclusions and limitations to Appendix A.1 (see supplementary material). We tried to prioritize the more technical contributions in the main body. To our knowledge, if the paper gets accepted, there will be one extra page, and we will move the conclusion and limitation section back to the main paper. A potential limitation of our work is its reliance on the assumption that the function $\phi^\star$ is learnable, a condition that may not always hold. However, we do observe that modeling the function mapping from x to z boosts performance on a real-world dataset, MovieLens. **Re: weakness 3** Thank you for the constructive feedback. We acknowledge the importance of comprehensive experimental evaluations in strengthening the validity of the research findings. Our selection of datasets and scenarios was aimed at presenting an initial exploration of the effectiveness of the proposed methodology in a new problem setup, while balancing feasibility and computational constraints. That being said, we certainly agree that a more extensive set of experiments, including a wider range of datasets and scenarios, may help to provide more insights. It could also validate our method across diverse contexts, help us understand its limits better, and optimize it to function efficiently in varied settings. **Re: weakness 4** We have included the details about the experiments in the experimental section. To promote reproducibility, we will release the code for reproducing all the figures in our paper. **Re: Q1** For the coefficient of the variance term in UCB, we performed a grid search using the values {0.01, 0.1, 1}.
As for the Adam optimizer, we didn't adjust the learning rate, instead choosing the default value which already demonstrates satisfactory performance. **Re: Q2** At its core, our approach aims to enhance the efficiency and effectiveness of decision-making processes in dynamic environments, which is relevant in numerous practical domains. In the realm of e-commerce, for instance, our method could be utilized to enhance product recommendations by better modeling user responses based on their past interactions and newly posted reviews. This can lead to improved customer satisfaction and potentially increased revenue. Similarly, in the field of healthcare, our approach could be used to optimize personalized treatment plans, where the "post-serving context" could be patient responses to previous treatments. This could potentially lead to more effective treatment strategies and improved patient outcomes. In the broader realm of machine learning, our approach introduces a new problem setup on the exploration-exploitation trade-off, by incorporating post-decision information into the learning process. This could stimulate the development of new learning algorithms with improved performance. **Re Q3** Our algorithm can handle the noisy post-serving context information. Our generalized elliptical potential lemma is developed to address the technical challenge in analyzing the regret under noisy post-serving context information. However, it's important to note that significantly incomplete information might affect the effectiveness of the algorithm, as is common for almost all machine learning models. Further research will be required to understand and optimize the performance under such conditions. Once again, we appreciate your time and constructive feedback. If further questions or concerns arise, please don't hesitate to reach out. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments. I have raised my score to 6
Summary: This paper introduces and analyzes the problem of contextual multi-armed bandits with post-serving contexts, where additional reward-relevant information is revealed after the algorithm makes its choice. It divides the traditional context into a pre-serving context, which is known to the algorithm at decision time, and a post-serving context, which is revealed later but whose mean the algorithm can learn to estimate accurately from the pre-serving context. It then shows that the standard elliptical potential lemma needs updating in order to handle this case, and provides an alternative version. The paper then provides a novel algorithm which estimates the post-serving context in order to use the new EPL, and proves its regret rate. Finally, the paper performs experiments validating the performance of the poLinUCB algorithm. Strengths: 1) Makes a novel contribution with a more general version of the elliptical potential lemma, which is well-motivated by their chosen application problem 2) Introduces a novel but useful-seeming problem setting where additional reward-relevant information is revealed after the algorithm's decision 3) Experiments clearly demonstrate the performance benefits of their new algorithm, which nearly matches the performance of LinUCB Weaknesses: 1) The paper doesn't have a conclusion section Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1) Could you spell out an example of the setting in more detail? The paper does a good job of explaining situations where there is a post-serving context, but is not that explicit about what the reward function is in settings where the reward is revealed immediately after the decision yet the post-serving context is clearly helpful for predicting performance 2) Could you say a bit more about why poLinUCB outperforms LinUCB even when $\phi$ is a linear function?
If $z = Ax$, then $r = \theta x + \beta z = \theta x + \beta Ax = (\theta + \beta A)x$, and it is equivalent to a slightly different linear contextual bandits problem. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: The authors did not discuss the limitations of this work. While they would be similar to the limitations of normal bandit problems, adding the post-serving context could plausibly be more impactful than (for instance) a 20% performance improvement on a standard bandit problem. In particular, it seems plausible that the post-serving context could allow one to use more complex reward functions, which could be good or bad. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
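The reviewer's algebra for a linear $\phi$, namely that $z = Ax$ collapses the reward $\theta x + \beta z$ into the single linear model $(\theta + \beta A)x$, can be sanity-checked numerically. Dimensions and values below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(1)
d_x, d_z = 4, 3  # arbitrary dimensions
theta = rng.normal(size=d_x)
beta = rng.normal(size=d_z)
A = rng.normal(size=(d_z, d_x))
x = rng.normal(size=d_x)

# Reward written with the post-serving context z = A x ...
r_two_part = theta @ x + beta @ (A @ x)
# ... collapses into a single linear model in x alone.
r_collapsed = (theta + beta @ A) @ x
```

So in the noiseless linear case the problem is indeed equivalent to an ordinary linear contextual bandit, which is the premise of the reviewer's second question.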
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the insightful comments! Please find our detailed responses to the questions below. **Re: weakness 1 \& limitations** We apologize for missing the conclusion and limitation section in the main paper. Due to space constraints, we had to move the conclusions and limitations to Appendix A.1 (see supplementary material). We tried to prioritize the more technical contributions in the main body. To our knowledge, if the paper gets accepted, there will be one extra page, and we will move the conclusion and limitation section back to the main paper. **Re: Q1** Thanks for the great question. For example, in YouTube recommendation, the pre-serving context $x$ could be a feature vector that summarizes the user's profile, along with their purchase and browsing history. The 'arms' might be the available YouTube videos, and the post-serving feature $z$ could be an embedding extracted from the user's watching behavior, including speeding up or slowing down, liking or not liking, watching time, and review contents (if any). **Re: Q2** Yes, you are correct! If the context-generating function is linear, LinUCB should ultimately achieve no regret as well. However, poLinUCB may have an advantage in the initial stages, as it can utilize the additional information encoded by the (x, z) pairs. Specifically, poLinUCB simultaneously learns the matrix $A$ (if $z = A x$) together with $\theta$ and $\beta$, which leverages the data more efficiently. This advantage is also observed in Figure 1 (in the main paper): when $\phi$ is linear, the regret of LinUCB tends to flatten as the number of time steps increases; however, this pattern isn't observed in the other two cases. Thank you again for your valuable time and consideration. Should you have any additional concerns or questions, kindly let us know. Thank you! --- Rebuttal Comment 1.1: Comment: Thanks for the responses, they were helpful answers to my questions.
I maintain that this is a good paper with a clearly well-grounded extension to a well-studied problem, and the proposed modification of the Elliptical Potential Lemma is of independent interest.
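For context on the Elliptical Potential Lemma referenced here: the standard (non-generalized) version bounds the cumulative "surprise" $\sum_t \min(1, \|x_t\|^2_{V_{t-1}^{-1}})$ by $2 d \log(1 + T/(d\lambda))$ for features with $\|x_t\| \le 1$. A minimal numerical illustration of the standard lemma (the paper's generalization is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
d, T, lam = 4, 2000, 1.0
V = lam * np.eye(d)   # regularized design matrix V_0 = lambda * I
potential = 0.0
for _ in range(T):
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                          # bounded features, ||x|| <= 1
    potential += min(1.0, x @ np.linalg.solve(V, x))  # ||x||^2 in V^{-1} norm
    V += np.outer(x, x)                             # rank-one design update

# Standard EPL bound: sum <= 2 d log(1 + T / (d * lambda))
bound = 2 * d * np.log(1 + T / (d * lam))
assert potential <= bound
```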
Rebuttal 1: Rebuttal: Dear Reviewers, We have added the figures to the accompanying PDF. We hope you find them useful and informative. Best, Authors Pdf: /pdf/626febd9f47231d42cb0800583480407794dd0ec.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This work considers a novel contextual bandit problem with post-serving contexts and designs a new algorithm, poLinUCB. With a generalized version of the Elliptical Potential Lemma (EPL), they provide a tight regret bound under standard assumptions. Empirically, they show that on synthetic and real-world datasets the proposed algorithm outperforms LinUCB with different types of contexts. Strengths: - The paper is well-written and organised. Easy to read and well-defined notation. - The proposed new setting of contextual bandits is valid, interesting and well-motivated. The analysis of why natural attempts fail provides good supporting evidence for the newly proposed algorithm. - A generalised elliptical potential lemma (EPL) is proposed and well-explained. It can be of independent interest. - A regret bound is provided with parameters introduced in the assumptions, with a clear description of how the parameter can be realised for the proposed algorithm. - The generalisations to action-dependent contexts, Linear Stochastic Bandits and Linear Bandits with Feature Mappings are interesting and useful extensions of the proposed setting. - Empirical comparison to related algorithms is provided on both synthetic and real-world datasets. Weaknesses: The regret analysis relies on Assumption 1 -- generalised learnability of phi^ast -- and the regret bound depends on the value of alpha. It is not clear how alpha will change when phi is not a linear function. For example, when alpha approaches 0, the regret bound will approach linear regret. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - Can you provide a detailed discussion of how Assumption 1 will be realised when phi is not a linear function: how will alpha change, and how would the regret bound behave? - Can you test more types of phi functions in the synthetic experiments, to verify the discussion above? - Would the form of the phi function influence the parameter learning in Eq. (3)?
- What if the phi function depends on contexts? E.g., for different users, different types of contexts can be observed before and after serving, and those contexts can have different relationships. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: As mentioned by the authors in the appendix, this work heavily relies on Assumption 1 and the relationship between pre- and post-serving contexts. In practice, those assumptions may not hold. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the thoughtful comments! Please find our detailed responses to the questions below. **Re: weakness 1, Q1 \& Q2** Our regret analysis can accommodate different values of $\alpha$ in Assumption 1, which we view as a strength rather than a weakness. The rate of $1/\sqrt{T}$ (when $\alpha=0.5$) is a commonly observed rate for many classical machine learning algorithms, including linear regression, logistic regression, and SVM with a linear kernel. This rate is rooted in the law of large numbers and the central limit theorem. For smaller $\alpha$ values, the generalization error will converge more slowly than $1/\sqrt{T}$, indicating that the learning problem becomes increasingly difficult. In the extreme case when $\alpha = 0$, the $\phi^\star$ function cannot be learned accurately, thus we will inevitably suffer a linear regret (simply due to model misspecification). For standard linear functions, $\alpha = 0.5$ and our regret bound in such situations is tight w.r.t. $\alpha$. However, we intentionally made Assumption 1 more general in order to accommodate other, much more complex ways of estimating the $\phi$ function, such as manifold regression (see, e.g., Yang \& Dunson, 2016), non-parametric methods like k-nearest-neighbors, or estimators of complex non-smooth functions (e.g., functions in Hölder spaces). For example, when $\phi$ is a function in the Hölder space $H(\beta)$, the learning rate is $T^{-2\beta/(2\beta + d)}$, which is generally slower than $T^{-0.5}$ and depends on $\beta$ as well as the data dimension $d$ (see, e.g., the note of [Tibshirani and Wasserman, 19] below). The reviewer can also find a visual illustration in Figure 1 of our uploaded PDF. Yang, Yun, and David B. Dunson. "Bayesian manifold regression." (2016): 876-905. Tibshirani, Ryan and Larry Wasserman. "Nonparametric Regression", Statistical Machine Learning, Spring 2019 **Re: Q3** The Eq.
(3) is independent of $\phi$. Therefore, $\phi$ will not affect the parameter learning in Eq. (3). **Re: Q4** This is a great question. In this setup, it is equivalent to saying $\phi^x(x) = \theta(x) * x$, i.e., the function $\phi$ depends on $x$. If $\theta(x)$ is an arbitrary function, then the problem is clearly not learnable. If the composite function $\phi^x(x) = \theta(x) * x$ satisfies our learnability Assumption 1, we can view $\phi^x(x)$ as the new $\phi'(x)$, and our theory still applies. We studied a similar problem as one generalization in Section 5.1, where the context-generating function $\phi$ depends on the arm we selected. As a consequence, this introduces an additional $\sqrt{K}$ factor in the first term of the regret bound in Theorem 1. Thank you once again for your time and thoughtful feedback. If there are any further concerns or questions, please kindly let us know. Thank you!
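To illustrate the rebuttal's point about $\alpha$: if the per-round estimation error of $\phi$ decays like $t^{-\alpha}$, accumulating it over $T$ rounds gives regret on the order of $T^{1-\alpha}$, which is linear when $\alpha = 0$ and $\sqrt{T}$-like when $\alpha = 0.5$. A rough numerical sketch of this scaling (an illustration only, not the paper's analysis):

```python
import numpy as np

def cumulative_error(T, alpha):
    # Sum of per-round errors t^{-alpha}, a proxy for the regret contribution
    # of estimating phi at rate t^{-alpha}.
    t = np.arange(1, T + 1)
    return np.sum(t ** (-alpha))

T = 100_000
for alpha in (0.5, 0.25, 0.0):
    total = cumulative_error(T, alpha)
    # Up to constants, the sum grows like T^{1 - alpha}.
    print(f"alpha={alpha}: cumulative ~ {total:.0f}, T^(1-alpha) = {T ** (1 - alpha):.0f}")
```

Running this shows the cumulative error tracking $T^{1-\alpha}$ up to a constant factor, degenerating to linear growth at $\alpha = 0$.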
Learning Cuts via Enumeration Oracles
Accept (poster)
Summary: The paper proposes an efficient algorithm for learning local cuts used in integer programming relaxations. The algorithm uses a variant of the Frank-Wolfe algorithm and employs a stopping criterion for reducing the number of iterations during the application of the Frank-Wolfe algorithm. The paper showcases the effectiveness of the proposed algorithm by applying it to the multidimensional knapsack problem and reports its findings. I have read the authors' rebuttal. Strengths: * The paper contributes to solving the generic IP problem efficiently. * The paper includes experimental results showing the usefulness of the proposed algorithm. Weaknesses: The paper lacks a discussion of the relevance of the studied problem to the NeurIPS community. I am not sure how much the results would appeal to the broad NeurIPS audience. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Please include a discussion of the relevance of the problem to the NeurIPS audience. I appreciate the theoretical work, but I am not sure about its relevance to the broad NeurIPS audience. At least this warrants further discussion and motivation, which are currently lacking from the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 2 fair Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. The paper lacks a discussion of the relevance of the studied problem to the NeurIPS community. I am not sure how much the results would appeal to the broad NeurIPS audience. Please include a discussion of the relevance of the problem to the NeurIPS audience. I appreciate the theoretical work, but I am not sure about its relevance to the broad NeurIPS audience. At least this warrants further discussion and motivation, which are currently lacking from the paper. We understand the concern about the direct relevance of our work to the broader NeurIPS audience. Although the intersection of learning methods and integer problem-solving might not be a mainstream application in Machine Learning, there has been a rising trend of studies in this area, spurred by advancements in learning techniques. In fact, several past NeurIPS publications align with our topic. Some of these include: - Wu, Y., Song, W., Cao, Z., Zhang, J., Gupta, A., & Lin, M. S. (2022). Graph Learning Assisted Multi-Objective Integer Programming. In NeurIPS. - Chmiela, A., Khalil, E., Gleixner, A., Lodi, A., & Pokutta, S. (2021). Learning to Schedule Heuristics in Branch-and-Bound. In NeurIPS. - Wu, Y., Song, W., Cao, Z., & Zhang, J. (2021). Learning large neighborhood search policy for integer programming. In NeurIPS. - Chen, X., & Tian, Y. (2019). Learning to perform local rewriting for combinatorial optimization. In NeurIPS. - He, H., Daume III, H., & Eisner, J. M. (2014). Learning to search in branch and bound algorithms. In NeurIPS. This point is also reinforced by the review of reviewer kHh3 (see "Strengths"). Given this context, we are confident that our interdisciplinary exploration holds value for the community. Such work could stimulate knowledge exchange and potentially motivate machine learning enthusiasts to devise novel applications or enhancements in optimization problems using machine learning techniques.
Acknowledging your feedback, we agree that the paper should offer more clarity on the significance of employing learning methods in integer programming. To this end, we will add a dedicated paragraph in our paper's revised version to elucidate this relationship further. --- Rebuttal Comment 1.1: Title: Response acknowledgment Comment: I thank the authors for their responses. My rating remains the same.
Summary: The authors propose a new method for generating lifted cuts from a smaller projected polytope that replaces the usually-expensive LP/IP-based separation routines with an optimization-based method that uses the Frank-Wolfe algorithm. The authors present theory showing that the proposed separation routine indeed produces strong cuts. They then validate their methods via a case study on multidimensional knapsack problems and variants. Strengths: The proposed method is novel and is a genuinely new and interesting algorithmic development in cutting plane generation. The paper is written clearly and precisely. The need for faster separation routines that do not rely on large LPs (or even MIPs) requiring expensive column generation routines is clear, and the authors make interesting progress towards a solution. Weaknesses: I think the Knapsack case study could’ve used some more details, including comparisons to existing methods of lifting in this domain. E.g., how does the authors’ Frank-Wolfe based method compare to the usual pipelines of lifting for knapsack inequalities – in particular sequential up-lifting/down-lifting for minimal cover inequalities? Also, how does the authors’ separation procedure itself compare to the usual separation routines for cover inequalities (e.g., those in “Lifted Cover Inequalities for 0-1 Integer Programs: Computation” by Gu, Nemhauser, and Savelsbergh)? Does it produce similar cuts? Stronger ones? Weaker ones? Is it faster or slower? Knapsack constraints are one of the main areas where lifting really shines since it can be done fairly efficiently, so some more comparison here would be super interesting. But, I still view this paper as providing a nice contribution without the deeper dive into lifting knapsack cover inequalities. A comparison to the equivalent local cuts implementation that SCIP uses seems important to include, but maybe SCIP doesn’t do anything like this?
Tangentially, to what extent are local cuts actually used by general-purpose MIP solvers? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Methodologically, how does the (high-level) approach differ from lift-and-project? Some brief discussion of the similarities/differences would be nice to include. Do the authors think this approach would scale beyond the kinds of multi-dimensional knapsack/GAP problems studied? Those problems conveniently have efficient lifting methods, but for general MIP lifting can be expensive (requiring solutions to MIPs). (I do understand that lifting is not the focus of this paper.) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Comments on weakness: 1. [...] E.g., how does the authors’ Frank-Wolfe based method compare to the usual pipelines of lifting for knapsack inequalities – in particular sequential up-lifting/down-lifting for minimal cover inequalities? [...] Also, how does the authors’ separation procedure itself compare to the usual separation routines for cover inequalities (e.g., those in “Lifted Cover Inequalities for 0-1 Integer Programs: Computation” by Gu, Nemhauser, and Savelsbergh)? Does it produce similar cuts? Stronger ones? Weaker ones? Is it faster or slower? [...] The mentioned knapsack separation routines are applied in the default settings of SCIP. Since SCIP is considered state-of-the-art among open source solvers, we assume that its internal cut selection mechanism would prefer these cuts if they were helpful. While SCIP is very informative about how many separators were called and how many cuts were effectively applied, we did not analyze this further in the paper, as a discussion here would quickly evolve out of the scope of the paper. However, we would gladly publish all the SCIP logs from the experiments alongside the revised manuscript. 2. A comparison to the equivalent local cuts implementation that SCIP uses seems important to include, but maybe SCIP doesn’t do anything like this? Tangentially, to what extent are local cuts actually used by general-purpose MIP solvers? SCIP currently does not include an implementation of either local cuts method (see, e.g., https://www.scipopt.org/doc/html/group__SEPARATORS.php for a list of cuts available). Hence, our experiments focus on comparing our FW-local cuts to "all that SCIP has to offer", i.e., the default configuration which permits all cuts subject to SCIP's built-in cut selection mechanism. Answers to questions 1. Methodologically, how does the (high-level) approach differ from lift-and-project? Some brief discussion of the similarities/differences would be nice to include.
We appreciate your suggestion and agree that a comparative discussion would be useful. Lift-and-project methods share some high-level ideas with our approach, such as exploring solutions in a different space and then projecting back to the original space. However, their mechanics are significantly different. Lift-and-project methods convert the problem, e.g., into an equivalent quadratic formulation to simulate non-linearity in integer programs, which is then linearized in the lifting phase. Afterward, this extended formulation is projected onto the original variable space, resulting in a strengthened binary MIP formulation. This step is the projection phase. In contrast, our approach maintains the linearity of the program. We work with a trivial projection of the polytope onto a lower-dimensional variable space where the problem can be solved more efficiently, and then the solution is lifted back to the full variable space. We will elaborate on this comparison in the revised manuscript. 2. Do the authors think this approach would scale beyond the kinds of multi-dimensional knapsack/GAP problems studied? Those problems conveniently have efficient lifting methods, but for general MIP lifting can be expensive (requiring solutions to MIPs). (I do understand that lifting is not the focus of this paper.) We recognize that the effectiveness of our approach partly relies on the availability of efficient lifting routines, which might not be the case for all problem classes. However, the final impact of our method on the overall solving process involves a tradeoff between the runtime of the lifting routine and the strength of the produced cuts, both of which can vary with the problem class. Thus, even problem classes with less efficient lifting routines might theoretically benefit from our method.
Many significant and widely applicable problem classes do possess efficient lifting methods and local cuts have been successfully applied to them (as referenced in the "Related Work" section). For some problems, no lifting is required at all (for instance, when the Trivial Lifting Lemma holds, see Section 3.2), such as with the Linear Ordering Problem. We did not conduct thorough experiments on other problem classes, but this is a part of our planned future work. It is challenging to make confident predictions due to the numerous factors involved in evaluating MIP/IP solver performance. --- Rebuttal Comment 1.1: Comment: I actually couldn't find anything about cover cuts/lifting in the SCIP separators link (but maybe I didn't look hard enough). Nonetheless, I thank the authors for their response and maintain my rating for the paper to be accepted. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, here's a discussion in which one of the SCIP developers confirms that cover cuts are separated automatically: http://listserv.zib.de/pipermail/scip/2020-April/003925.html
Summary: This paper presents a method for generating separating cutting planes for integer programming problems that is based on the Frank-Wolfe method for optimizing over polyhedra. It falls roughly within the existing local/Fenchel cut framework: given a point, it identifies the closest point within the feasible region by repeatedly calling a (hopefully fast) optimization oracle, and then uses the resulting point to derive a separating cut. Strengths: Overall I quite like the paper and how it stitches together some disparate, existing ideas from the optimization literature. The paper is generally very well-written. The paper is in an area of clear interest to the Operations Research and Mathematical Optimization community, and the subject matter is consistent with a number of other papers accepted to NeurIPS in the past. The application of Frank-Wolfe in this manner is interesting and non-trivial, as the authors apply some extensions to improve its practical performance (the "FW Gap" characterization being chief among them). Weaknesses: The results in the computational study are slightly underwhelming. A 31% improvement in solving times is nice, but this only considers "solved" instances and the geo-mean solve times here are quite small to begin with (6s for "default"). The provided solve times on all instances (including ones that terminate without proving optimality) show a significantly smaller (though still positive) speedup. Given that MIP solvers such as SCIP tend to be engineered more for hard instances than easy ones, it is difficult to know how much to read into these results without additional detail. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * L19: "However, the by far most interesting case is the one we consider here" (IP vs. MIP). This is a subjective statement that I imagine many researchers in the MIP community would (strongly!) disagree with. I'd suggest softening or removing, as this is not really necessary to justify the restriction to IP for the purposes of the paper.
* L30: "first p indices" seems like it is left over from an earlier draft that considered MIP. * The first paragraph of Section 2 is fairly confusing, largely because you introduce two polyhedra (P and tilde{P}), but really only work with the second. As such, statements about "valid cut"s and "original space" are unclear without further qualification. I'd suggest rewriting it, and maybe adding some discussion about how P and tilde{P} should or do relate to each other: "subproblem" suggests some relationship (maybe containment in a projected space?), but is vague. * L101: You do not assume that tilde{P} is full-dimensional (just P, but see comment above), so there may not exist an interior point. * Figure 1: The figure and caption are a bit confusing: it's not clear what the tilde{P}_I label refers to; indices t and k are used variously in the caption and figure; there is no explicit reference to (c) in the caption, etc. * L151: Is the oracle required to return a solution vertex? If so, you should state that above. If not, does this affect any of the statements to follow? * L317: It is a bit confusing to the reader that you state a percentage solve time improvement in an unqualified manner, when it only applies to a subset of the instances in your test bed (the solved ones). Please be more explicit about this in the text (here, and also earlier in the contributions section). * Section: To bolster the takeaways from the computational study, it would be interesting to have more information about the relative performance on the unsolved instances. For instance: How many instances does each method solve to optimality within the time budget? What do the dual and primal/dual integrals look like for the various methods? How long does "default" SCIP spend in cut separation? A separate suggestion to isolate the dual effect of the new method would be to seed each method with the best known primal solution as a warm start.
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. L19: "However, the by far most interesting case is the one we consider here" (IP vs. MIP). This is a subjective statement that I imagine many researchers in the MIP community would (strongly!) disagree with. I'd suggest softening or removing, as this is not really necessary to justify the restriction to IP for the purposes of the paper. We understand the concern and agree that the phrasing needs refinement. Our objective wasn't to compare IP vs. MIP directly, but rather to highlight that the methods proposed in our paper are more readily applicable to the IP case. We will amend this in the revised manuscript. 2. L30: "first p indices" seems like it is left over from an earlier draft that considered MIP. Indeed, that is an oversight on our part, a remnant from considering the MIP case. We will revise this in the updated manuscript. Thanks! 3. The first paragraph of Section 2 is fairly confusing, largely because you introduce two polyhedron (P and tilde{P}), but really only work with the second. As such, statements about "valid cut"s and "original space" are unclear without further qualification. I'd suggest rewriting it, and maybe adding some discussion about how P and tilde{P} should or do relate to each other: "subproblem" suggests some relationship (maybe containment in a projected space?), but is vague. We appreciate your feedback and will work on providing a clearer relationship between the two polyhedra in the revised manuscript. tilde{P} can be thought of as a projection of P onto a lower-dimensional space. 4. L101: You do not assume that tilde{P} is full-dimensional (just P, but see comment above), so there may not exist an interior point. Thank you for pointing this out. We indeed have to assume that the projection is full-dimensional, which we will do in the revised manuscript. 5. 
Figure 1: The figure and caption are a bit confusing: it's not clear what the tilde{P}_I label refers to; indices t and k are used variously in the caption and figure; there is no explicit reference to (c) in the caption, etc. Thanks for pointing this out! We will definitely fix the figure in the revised version of the manuscript. While the figure itself is correct, the caption does indeed contain several errors; for explanation: - (c) shows the iteration "at convergence", the reference should be in the last sentence of the caption; - the index "t" should actually be "i" from Figure (c). Due to our assumptions, we know that FW converges, so for every iteration k before convergence, there exists an index i > 0 such that convergence/termination occurs at iteration k + i. Lastly, as we are allowed to provide one PDF page worth of figures and tables as part of our "global" rebuttal, we attach the updated figure there for your reference. 6. L151: Is the oracle required to return a solution vertex? If so, you should state that above. If not, does this affect any of the statements to follow? Thanks for raising that point. In our setting and example (MKP), the oracle indeed always returns a vertex, but this is not a requirement for the following statements, especially for the theory of FW (the property is never explicitly used). We will clarify this in the revised version. 7. It is a bit confusing to the reader that you state a percentage solve time improvement in an unqualified manner, when it only applies to a subset of the instances in your test bed (the solved ones). Please be more explicit about this in the text (here, and also earlier in the contributions section). Your point is well-taken, and we apologize for any confusion. We will revise the manuscript to explicitly state what the percentage refers to. 8.
Section: To bolster the takeaways from the computational study, it would be interesting to have more information about the relative performance on the unsolved instances. For instance: How many instances does each method solve to optimality within the time budget? What do the dual and primal/dual integrals look like for the various methods? How long does "default" SCIP spend in cut separation? A separate suggestion to isolate the dual effect of the new method would be to seed each method with the best known primal solution as a warm start. We appreciate your feedback. For all full branch-and-bound run results, the number of instances each method solved within the time limit is listed (see Tables 1 and 3). Additionally, we already exclude the effect of primal heuristics on the solution process by initializing the runs with the optimal solution values. This fact is only mentioned in the Appendix - we will move this point to the main part of the paper and include details on time spent in cut separation in the revised version of the paper. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, The author-reviewer discussion period is closing soon, so could you please go over the authors' rebuttal and respond with a message to the authors? It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers. Best regards, AC
Summary: * Context: This paper deals with the subject of generating cuts for solving Integer Programs (MIP). Rather than using a cut generating algorithm based on a formula (like Gomory cuts) to create hyperplanes separating the solution of the relaxed problem from the (integer) feasible domain, the paper attempts to generate "local cuts". Local cuts are cuts that are obtained by generating a cut in a subproblem with reduced dimension (so that any optimization problems required to generate the cut are easier to solve), and then taking the cut for the subproblem and transforming it into a cut for the actual problem to solve. * Contribution: This paper presents a local cut generating algorithm, based on the Frank-Wolfe / conditional gradient algorithm. Frank-Wolfe is used to find the point in the subproblem polyhedron $\tilde{P}$ that is the closest to the point to be separated $\tilde{x}$, as this point will be likely to be on a facet. The Frank-Wolfe algorithm has the advantage that it only requires linear optimization oracle access over the subproblem to arrive at a solution, which can often be implemented efficiently. The paper also demonstrates that using duality information, the Frank-Wolfe algorithm does not need to be run all the way to convergence and that the strength of the cut generated by intermediate iterates can be evaluated, leading to a speed-up. The impact of using these cuts is evaluated on the multi-dimensional knapsack problem, by including the local cut generating algorithm into the open source SCIP solver. Strengths: The paper presents an algorithmic improvement for solving an important problem, and does it in a clear fashion. It is clearly delineated what exists in the current literature and what constitutes an actual contribution of the work. Explanations and motivations are straightforward to follow. Weaknesses: * From reading the paper, what is not obvious is how to take the contribution of this paper and apply it to a different problem.
After reading the paper, I understand how to apply this to a knapsack problem, but it's not obvious to me how I would do it if I had a more arbitrary MIP (for example one for verifying neural networks), or even a different type of problem solvable by integer programs (let's say a TSP). What would be beneficial would be to have somewhere clearly "These are the requirements that you need for your problem to be solvable using this method". * I think that what is missing is some appropriate baseline in the evaluation. As it is, the contribution of the paper seems to me to be "here is an efficient way to do local cuts", while the only baseline that is compared to is default SCIP. Is the performance difference that is seen due to using local cuts, or due to using the particular local cuts presented in this paper? (I think that Table 3 in the appendix might have some results that are relevant to this, but this would benefit from being moved to the main part of the paper.) Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: * [Section 2] Could you clarify the relation between the polytope $P \subseteq \mathbb{R}^n$ and the polyhedron $\tilde{P} \subseteq \mathbb{R}^k$? Is it always a restriction (a subset of the variables in P, such that an element is in $\tilde{P}$ if there exists an element in $P$ with those values), or could it be something more general (like an affine transformation of $\tilde{P}$), with the only requirement being "if we have a cut in $\tilde{P}$, we can find a cut in $P$"? As it is, it sounds quite abstract. * [Section 5] Downlifting vs. up- and down-lifting is not explained anywhere. Given that downlifting performs significantly worse on all benchmarks, it seems that it might not be worth it to include it in the main version of the paper to avoid confusion. Similarly, CMIR seems to have very little impact, so unless it can be explained, it might make sense to move it to the appendix.
* Would it be possible to include some more information on what the default settings of SCIP look like? Is it a reasonable baseline? Is there an indication of how many cuts (whether local or not) it corresponds to? Typos / small comments: - Line 30, I don't understand what the notion of the first $p$ indices is. Is that a reference to some Mixed Integer Programming case, where only the first $p$ variables need to be integral? - Figure 1 legend, is it supposed to be the $L_2$ projection (rather than $L_t$)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
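The Frank-Wolfe separation idea summarized in the review above (find the point of $\tilde{P}$ closest to $\tilde{x}$ using only a linear minimization oracle, stopping early via the duality gap) can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names and the toy box-shaped polytope are assumptions introduced here.

```python
import numpy as np

def fw_nearest_point(x_target, lmo, n_iters=200, tol=1e-9):
    """Frank-Wolfe sketch: approximately minimize ||x - x_target||^2 over a
    polytope accessed only through a linear minimization oracle `lmo`.
    Returns the final iterate and the last FW (duality) gap, which
    upper-bounds the suboptimality -- this is the kind of duality
    information that allows stopping before full convergence."""
    x = lmo(np.zeros_like(x_target))       # any vertex works as a start
    gap = np.inf
    for t in range(n_iters):
        grad = 2.0 * (x - x_target)        # gradient of the squared distance
        v = lmo(grad)                      # vertex minimizing <grad, v>
        gap = grad @ (x - v)               # FW gap >= f(x) - f(x*)
        if gap <= tol:
            break
        step = 2.0 / (t + 2.0)             # standard open-loop step size
        x = x + step * (v - x)
    return x, gap

# Toy polytope: the unit box [0,1]^n, whose LMO just thresholds the sign.
def box_lmo(c):
    return (c < 0).astype(float)
```

For a point outside the box, the iterate converges to its projection onto the box, and the final FW gap certifies (near-)optimality.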
Rebuttal 1: Rebuttal: Comments on weaknesses: 1. [..] What would be beneficial would be to have somewhere clearly "These are the requirements that you need for your problem to be solvable using this method" [..] As we restrict the scope to IPs for this paper, we will only list the requirements for applying the method to IPs. To be able to apply the method in general, you'll need the following components: a) A projection P -> \tilde{P}. Note that we restrict \tilde{P} to always be a subset of P, which restricts the degrees of freedom. In order to be practical, the selected \tilde{P} should be such that the oracle (see b)) solves linear optimization problems over \tilde{P} efficiently. b) An oracle solving problems over \tilde{P} to optimality (returning a vertex). Note that while there may exist practically efficient specialized methods (such as dynamic programming for MKP), enumeration constitutes a general go-to option, especially in the presence of parallel hardware. c) A lifting method to lift cuts from \tilde{P} up to P. Nevertheless, we think making this list explicit is a great idea. While all components are there, we will make sure that in the revised version, we modify the language accordingly to clarify these conditions. 2. [...] while the only baseline that is compared to is default SCIP. Is the performance difference that is seen due to using local cuts or using the local cuts presented in this paper? [..] In Table 1, we compare the performance of baseline SCIP (aka "default") to variants of SCIP where the only cuts enabled are our FW-Local-Cuts (lc0 if separation only occurs in the root node; lc1 if this is done in the whole tree). As the reviewer correctly notes, the desired comparison is indeed shown in the appendix, but in Table 2: there, we compare _our_ local cuts (FW) with the LP-based local cuts from [42].
Note that both methods result in different cuts, hence one always has to take both the runtime as well as the gap-closed measure into account when comparing them. The third paragraph in Appendix 2 discusses the results. Answers to questions: 1. [Section 2] Could you clarify the relation between the polytope $P$ and the polyhedron $\tilde{P}$? Is it always a restriction (a subset of the variables in $P$, such that an element is in $\tilde{P}$ if there exists an element in $P$ with those values), or could it be something more general (like an affine transformation of $\tilde{P}$?) with the only requirement being "if we have a cut in $\tilde{P}$, we can find a cut in $P$"? As it is, it sounds quite abstract. In the paper, we consistently assume that $\tilde{P}$ is a subset of $P$, with no transformations of any kind. This will be clarified in the revised version. We have not delved into the potential application of affine transformations. While they might be theoretically possible, they would greatly complicate the process of efficiently lifting cuts to the original space. Moreover, we did not discern a distinct advantage in applying such transformations. The restriction to using subsets of variables only has, in our opinion, already enough degrees of freedom (think, e.g., projecting out continuous variables in a MIP) yet enables us to find amenable subproblems quite effectively, as shown, e.g., in the case study for MKP. 2. [Section 5] Downlifting vs. up- and down-lifting is not explained anywhere. Given that downlifting performs significantly worse on all benchmarks, it might not be worth it to include it in the main version of the paper to avoid confusion. Similarly, CMIR seems to have very little impact, so unless it can be explained, it might make sense to move it to the appendix. Lifting procedures are inherently problem-specific.
The lack of detailed explanations for individual lifting procedures for MKP in the paper stems from our reliance on established methods from the existing literature, for which we provide references. We initially included all variants in the primary experiments to align with the customary practices in the MKP literature. However, acknowledging your point, we are inclined to relocate the less effective variants to the appendices in the revised version to enhance clarity. 3. Would it be possible to include some more information on what the default settings of SCIP look like? Is it a reasonable baseline? Is there an indication of how many cuts (whether local or not) it corresponds to? In general, we understand the desire to include a thorough discussion here; nevertheless, we also see the degree of complexity that would be required to correctly reflect this in the paper. For example, the default settings in SCIP enable the use of all implemented types of cuts, but the actual application is subject to an internal cut selection procedure. While SCIP is very informative in this regard, e.g., writing out how many separators were called and how many cuts were effectively applied, a discussion here would quickly evolve out of the scope of the paper. SCIP is considered best-in-class among academic solvers, hence the default settings usually reflect a decent baseline. Finally, we could offer a compromise here: together with the revised version of the paper, we will publish the SCIP log files of all the experiments contained in the paper, which contain this data. 4. Line 30, I don't understand what the notion of the first $p$ indices is. Is that a reference to some Mixed Integer Programming case, where only the first $p$ variables need to be integral? Your observation is accurate. In the context of MIP, the terminology would be applicable, but our focus on the IP case, where every variable is integral, requires a different explanation.
In the IP scenario, LP-relaxations are undertaken as long as the solution $x^*$ has fractional values in at least one of the $n$ indices. We will rectify this in the revised paper. 5. Figure 1 legend, is it supposed to be the $L_2$ projection (rather than $L_t$)? You're right; it should indeed be $L_2$. This oversight will be addressed in our paper's next version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response, notably for pointing me to the appendix. Table 2 there is a bit hard to interpret, given the fact that between LP-generated cuts and FW-generated cuts, there is nothing held constant (which could be either the time until a given gap is closed, or setting a time budget and reporting the gap closed, although I imagine this might be hard to set up in practice). I still found the paper interesting enough as it is, and am happy to maintain my rating and for this paper to be accepted. --- Reply to Comment 1.1.1: Comment: Thank you for your comment and for maintaining the rating! We will try to figure out a way to improve Table 2 in the revised version. At the very least, we will try to report the separation time for the LP-based approach as well, making it easier to compare against the FW case.
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank you all for reviewing our paper. Your insights and suggestions have been incredibly valuable in refining our work and will undoubtedly lead to enhancing the quality and impact of our research. We appreciate your time and dedication to peer review and are grateful for your contributions. Below you will find our answers to each specific question you raised. Additionally, we use the optional PDF page for figures and tables to provide the updated Figure 1, following the incorporation of comments from Reviewer kHh3. Thank you once again for your valuable feedback. Sincerely, Authors Pdf: /pdf/80680305eec179118a36e275aab849b179372ead.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper studies cut generation from operations research. While this is potentially a promising approach, I believe the paper falls out of the scope of the conference. Optimization can be seen as part of the machine learning machinery, but normally NeurIPS papers are expected to have some connection to machine learning, e.g., at the very least, the experiments could have been done on some machine learning problem. It is not common to reject a paper for being out of scope; however, in this case the paper is purely operations research. I believe submitting to an operations research venue would be more appropriate, and it is likely that the impact of the work would be greater in such a community. Strengths: Did not look in detail. Weaknesses: Did not look in detail. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: What is the motivation for submitting this paper to NeurIPS? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: Did not look in detail. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. What is the motivation for submitting this paper to NeurIPS? We recognize that our paper primarily falls under Operations Research, but it's crucial to note a significant distinction. Unlike many conventional cutting plane techniques in this field that hinge on fixed equations and formulas for separating hyperplane derivation, our method aims to learn unknown facets of a polytope from a lower-dimensional version, subsequently generalizing them into cutting planes for the high-dimensional polytope. This learning approach, while not new to Operations Research as a high-level meta-paradigm, is made tangible and practical by our novel use of the Frank-Wolfe algorithm. This innovation paves the way for *general-purpose* cutting planes, learned directly from the problem rather than from pre-established rule sets. We understand that this may not be the most typical application of learning algorithms, yet we respectfully differ with the view that it lacks interest for the community. We believe it could instead stimulate knowledge exchange across fields and potentially inspire machine learning researchers to apply new or improved methods derived from machine learning advancements to optimization problems. Furthermore, several NeurIPS papers from previous years have explored the application of learning methods in solving integer programming problems. A few notable ones are: - Wu, Y., Song, W., Cao, Z., Zhang, J., Gupta, A., & Lin, M. S. (2022). Graph Learning Assisted Multi-Objective Integer Programming. In NeurIPS. - Chmiela, A., Khalil, E., Gleixner, A., Lodi, A., & Pokutta, S. (2021). Learning to Schedule Heuristics in Branch-and-Bound. In NeurIPS. - Wu, Y., Song, W., Cao, Z., & Zhang, J. (2021). Learning large neighborhood search policy for integer programming. In NeurIPS. - Chen, X., & Tian, Y. (2019). Learning to perform local rewriting for combinatorial optimization. In NeurIPS. - He, H., Daume III, H., & Eisner, J. M. (2014).
Learning to search in branch and bound algorithms. In NeurIPS. This point is also reinforced by the review of reviewer kHh3 (see "Strengths"). --- Rebuttal Comment 1.1: Comment: My issue is still that, to me, the paper seems to clearly fall within Operations Research without an obvious connection to machine learning. I agree that combining optimization and (machine) learning is a promising research direction. However, the "learning" component in this work seems to be different from what I would normally expect from papers combining machine learning and optimization. To make this more concrete, consider the third paper you referenced, namely "Learning large neighborhood search policy for integer programming" by Wu et al. The paper is about using deep reinforcement learning (machine learning) to solve integer programs. When reinforcement learning faces integer programs, many technical difficulties arise from the nonlinearity of integer programs, and it is interesting to explore how to learn effective policies given these very difficult challenges. There is a clear connection to machine learning in this case, and many researchers from reinforcement learning and optimization could find interest in the work. Other works dealing with optimization and machine learning might face related difficulties in generating data for learning, selecting appropriate metrics for learning, etc. But in your paper, the "learning" is rather on the OR side, and I do not see this connection to the broader machine learning community. Can you point out specifically which part of the paper could potentially be of interest to machine learning researchers (even in a broad sense)? --- Reply to Comment 1.1.1: Comment: We appreciate the comments and point of view of the reviewer and would like to complement them with ours: Traditionally, cutting planes have been derived from polynomial-time verifiable methods. This limits their expressive power.
We take a different approach by considering subproblems of an NP-hard problem (which theoretically inherit its complexity status), generate cuts and lift them. The process of obtaining a cutting plane that is potentially not verifiable in polynomial time and generalizing it to a higher dimension is what we call learning, as the structural insights from the lower dimension that are encoded in the cutting plane are applied to before-unseen higher-dimensional versions of the problem. This learning process has a clear polyhedral interpretation, and we demonstrate that it is useful for collecting enough knowledge about the problem to speed up the solution process. Moreover, we think that there is a certain overlap between the learning and mathematical optimization communities (or Operations Research, if you like). We see our paper in this area.
Summary: This paper studies integer programming (IP) by proposing an alternative cutting-plane approach that makes use of the Frank-Wolfe (FW) algorithm for the separation sub-routine (i.e., separation of the target from the feasible set). The main idea is to use “local cuts”, which aim at deriving the facets of $P_I$, i.e., the integer hull of the feasible set, $P$, of the relaxed linear program (LP). Facets of $P_I$ are the strongest cuts, but they are unknown and finding them could be computationally expensive. The authors propose a separation procedure which does not need to solve an LP, and instead uses the away-step FW variant to (approximately) solve the separation problem. To demonstrate the performance of their approach, the authors compare their FW-based approach with the default SCIP solver, an optimization framework for solving mixed-integer programs, on the multi-dimensional knapsack problem. They also describe the implementation of the linear minimization oracle (LMO) of the FW algorithm using dynamic programming for the application at hand. Strengths: - The authors provide an alternative to cutting-plane approaches via simplified local cuts. Instead of exactly solving an LP, the authors make use of the FW algorithm and dynamic programming to find facets of the underlying polyhedron in lower dimensions. They use known lifting techniques to obtain the full-dimensional cut. - The stopping criterion for the algorithm is derived via fundamental properties of the FW algorithm and LMOs. Using a fundamental, technical intuition, they improve the performance of the separation routine with an additional stopping criterion. - They compare their approach with SCIP with default settings as a benchmark and report improved run times. Weaknesses: - The fundamental differences and improvements with respect to existing cutting-plane methods for solving the IP problem (specifically MKP, as the authors provide it as an example) are not clarified.
Computational/complexity improvements with respect to LP-based methods are not studied in technical detail. - The method is not comprehensively compared in practice against existing methods for solving IPs, specifically MKPs. The authors provide a comparison between LP-based separation as implemented by [42] and the FW approach in the appendix. The FW approach is faster in run time, but the gap closed is less than that of the LP-based approach. I am not sure whether this is enough to claim a clear advantage for the proposed algorithm. However, we should note that Table 3 shows promising results for the FW algorithm. - Since this paper does not propose a new theory or analysis, I don't find the experimental evaluation comprehensive enough. The authors should have considered other problems, such as TSP, while including existing algorithms in their runs. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - How do you set $\epsilon$ in your algorithm? - Have you compared your algorithm numerically with other proposed methods for solving MKP or other popular IP problems (other than the results in the appendices)? - Have you run experiments for other problem instances? - I am curious about the possible shortcomings of the FW approach. The linear minimization oracle for the FW method uses dynamic programming, which introduces an overhead for the computation of $v_t$. Is there any regime where the FW approach with the proposed LMO becomes computationally expensive such that the run times are comparable to LP-based approaches? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - The authors might need to consider more problems like MKP to validate the performance of their approach.
- I would prefer to see several other methods included in the numerical tests. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. How do you set $\epsilon$ in your algorithm? Our algorithm sets $\epsilon$ to 1e-9, following the SCIP default. We will include this in the revised version of the paper. 2. Have you compared your algorithm numerically with other proposed methods for solving MKP or other popular IP problems (other than the results in the appendices)? Regarding the comparison of our algorithm with other methods, we have chosen to focus on a generic method capable of operating on any IP given a suitable lifting routine. We deemed it most relevant to test this within a general-purpose solver, rather than against specialized heuristic methods. Our choice of SCIP is motivated by its status as the most widely used academic solver of this kind. The comparisons presented in the appendices involve the most recent state-of-the-art competing methods, to the best of our knowledge. 3. Have you run experiments for other problem instances? Concerning the testing of different problem instances, we used the MKP to introduce and illustrate our methodology, given its relevance and popularity among IP problems, thereby providing a robust benchmark. However, we plan to explore other problem classes in future work, with a particular interest in problems with trivial lifting, which we believe align well with our approach. 4. I am curious about the possible shortcomings of the FW approach. The linear minimization oracle for the FW method uses dynamic programming, which introduces an overhead for the computation of $v_t$. Is there any regime where the FW approach with the proposed linear optimization oracle becomes computationally expensive such that the run times are comparable to LP-based approaches? Please note that the oracle _used for the MKP_ uses dynamic programming. The structure of the oracle is very much problem-dependent; e.g., in the case of a TSP, the oracle could enumerate a set of round tours on a subset of cities.
Hence, one has to find a balance between the overhead introduced by calling the oracle and the advantages of the FW procedure in general. As for potential limitations of the FW approach, while it is technically possible to craft cases where FW runtimes exceed those of the LP-based approach, the overall effect on the solving process is more intricate. It depends on the quality of the cuts produced and their interplay with the various components of a modern MIP solver. This interaction encompasses aspects such as the cut filtering implementation, the strength of other obtained cuts, the solver's emphasis on generating cutting planes compared to other techniques, and more. Consequently, the method's true potential can only be gauged empirically over a relevant test set, as demonstrated in our computational studies by comparing out-of-the-box SCIP with SCIP enhanced by our method, as well as comparisons against SCIP enhanced with competing methods. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, The author-reviewer discussion period is closing soon, so could you please go over the authors' rebuttal and respond with a message to the authors? It is important that authors receive a reply to their rebuttals, as they have tried to address comments raised by the reviewers. Best regards, AC --- Rebuttal Comment 1.2: Title: Thank you for your response Comment: I have read the responses by the authors regarding my questions. I understand the reasoning behind the comparisons against SCIP rather than heuristics-based approaches. However, I still think it would complete the picture if the authors compared their method against multiple existing algorithms and considered multiple IP problems to validate their claims with firm, numerical results. Nonetheless, I appreciate the alternative perspective of making use of an FW-based sub-routine in IP solvers.
I think this paper needs further improvements, as indicated in my initial comments, but I will increase my score after the authors' response and reading the comments of the other reviewers. --- Reply to Comment 1.2.1: Comment: Thank you for your comment. We appreciate your perspective, and we will revise the paper for the camera-ready version and broaden the computational results accordingly.
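For concreteness, the dynamic-programming linear minimization oracle the rebuttal alludes to can be sketched for a single-constraint 0/1 knapsack. This is an illustrative one-dimensional sketch, not the authors' MKP implementation; the function name is invented, and in FW-based separation the `costs` vector would play the role of the current gradient direction.

```python
def knapsack_lmo(costs, weights, capacity):
    """Return x in {0,1}^n minimizing sum(costs[i] * x[i])
    subject to sum(weights[i] * x[i]) <= capacity.
    Classic O(n * capacity) knapsack DP with backtracking to recover
    a vertex (an incidence vector) of the feasible set."""
    n = len(costs)
    # dp[i][w]: minimal cost using items 0..i-1 within weight budget w
    dp = [[0.0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        c, wt = costs[i - 1], weights[i - 1]
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]
            if wt <= w and dp[i - 1][w - wt] + c < dp[i][w]:
                dp[i][w] = dp[i - 1][w - wt] + c
    # backtrack: an item was taken exactly where the DP value changed
    x, w = [0] * n, capacity
    for i in range(n, 0, -1):
        if dp[i][w] != dp[i - 1][w]:
            x[i - 1] = 1
            w -= weights[i - 1]
    return x
```

Only items with negative cost are ever selected, since the empty solution has objective 0; this matches an LMO's behavior of returning the vertex minimizing a linear objective.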
Softmax Output Approximation for Activation Memory-Efficient Training of Attention-based Networks
Accept (poster)
Summary: This paper proposes the Softmax Output Approximation algorithm, which approximates the softmax output during the forward pass and reconstructs it during the backward pass. In attention-based models like Transformer, the softmax output activations consume a significant amount of memory. By approximating them, a considerable amount of training memory can be saved. The algorithm successfully achieves memory savings of up to 84% compared to the existing softmax in various tasks. Strengths: 1. The paper is well-written and easy to understand overall. 2. The mathematical analysis of gradients for approximating softmax outputs is reasonable. In particular, **the highly novel aspect of this approach** is that it approximates the distribution by assuming it and then uses sorting, without relying on pruning or quantization techniques. 3. The experimental results demonstrate that the proposed method effectively reduces memory usage during pretraining or fine-tuning on certain datasets while achieving comparable accuracy to the baseline. Weaknesses: **1. Low accuracy on IMDb tasks** : The paper acknowledges the low performance on certain datasets, such as IMDb, and attributes it to the specific characteristics of the task. However, if the proposed softmax approximation struggles with tasks like IMDb, it may be challenging to apply it to other difficult tasks. **2. Approximation time overhead** : The paper claims that the overhead of the approximation is negligible compared to the general training workload. However, considering the potential overhead caused by sorting when dealing with large softmax outputs, it raises doubts about whether the overhead can truly be disregarded. **3. Lack of Comparison** : There are existing algorithms that approximate activations in various ways. For example, ActNN [1] and GACT [2] compress all activations based on their sensitivity, while [3] Mesa compresses all activations in a Transformer-specific manner. 
AAL [4] and DropIT [5] approximate input activations of linear and convolutional layers using auxiliary activations and pruning, respectively. It would be beneficial to compare the proposed approach with these existing methods. [1] ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training [2] GACT: Activation Compressed Training for Generic Network Architectures [3] Mesa: A Memory-saving Training Framework for Transformers [4] Learning with Auxiliary Activation for Memory-Efficient Training [5] DropIT: Dropping Intermediate Tensors for Memory-Efficient DNN Training Technical Quality: 3 good Clarity: 3 good Questions for Authors: I think the idea presented in this paper is highly novel. If the following questions are addressed satisfactorily, **I am willing to raise my score to 5-7**. **1. Low accuracy on IMDb tasks**: It is disappointing to see a significant performance decrease on the IMDb dataset, while the SST-2 dataset shows minimal performance degradation. In my opinion, with appropriate hyperparameter tuning, it should be possible to achieve performance comparable to the baseline. If it is not feasible, it would be helpful to provide a qualitative explanation that softmax output activation may work well for general tasks, and IMDb tasks are an exceptional case. **2. Approximation time overhead**: It would be beneficial to report the increase in training time compared to the baseline. Specifically, it would be valuable to demonstrate that the approximation time overhead remains low even for larger input sizes. **3. Lack of Comparison:** Adding various algorithms [1-5] that approximate activations to the Related Work section would be beneficial. Particularly, for GACT [2] and Mesa [3], which also target Transformer and approximate softmax outputs, it is important to provide specific comparisons with the proposed Softmax Output Approximation algorithm in terms of accuracy, memory usage, and training time.
Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: This paper lacks a discussion on the limitations of the proposed algorithm and possible future work. It would be beneficial to include these aspects in the revised version of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply appreciate the constructive feedback and the clear guidelines provided by the reviewer to make our work better. **Q1** Low accuracy on IMDb tasks > We have conducted a new experiment with the IMDb task and obtained comparable performance to the baseline, i.e., baseline accuracy 0.876 vs. our method accuracy 0.883 ($m=1$), by tuning the hyperparameters as the reviewer suggested. Please take a look at the details of the new experimental result (PDF file) posted in the global rebuttal. The reason we got the low-performance result on the IMDb task in our submission might be some subtleties and difficulties in approximating the exact softmax outputs, which tend to diverge randomly from the (continuous) exponential function. The IMDb task was the one that revealed this phenomenon most in our experiment. Although our method now works well with the IMDb task, as shown in our new experiment, we will keep working on validating the efficacy of our method for other difficult and general tasks and include them in our revised paper. **Q2** Approximation time overhead > As the reviewer correctly pointed out, our method inevitably increases the training time to some extent because it includes additional sorting and softmax output approximation processes. As the reviewer suggested, we have measured the time overhead incurred by our method over different batch sizes, sequence (input data) lengths, and numbers of softmax output values stored in memory, of which the details are included in the PDF file in the global rebuttal. From this measurement experiment, we can observe that 1) it takes almost the same amount of time no matter how many softmax output values are stored in memory given the same input data length, and 2) the increase in time complexity caused by the change in batch size is greater than that caused by the sequence length.
However, it looks like our method takes at least about 1.6$\times$ and up to about 6$\times$ more time when compared to the baseline that does not apply our method. The reason for it, we speculate, is that our method is implemented in relatively slow and naïve Python (PyTorch) rather than a fast CUDA kernel (C++), unlike the original softmax operation, which is highly optimized and run as a CUDA kernel. In more detail, our method simply selects the "to-be-stored softmax output values" after sorting them in the forward pass. And then, during the backward pass, it approximates the non-stored softmax output values and places them into their original positions. When turning this algorithm into actual code, we needed to use many additional matrix operations such as slicing, the scatter function, etc., not only the sorting mechanism, which takes a much longer time than sorting several hundreds or thousands of softmax elements, without implementation optimization (e.g., CUDA kernel optimization). That might explain the increased time complexity of our method compared to the baseline. Since the sorting we use has time complexity of just $n \log_{2} n$, which does not seem to incur significant time overhead compared to the compute-intensive back-propagation (gradient computation) when $n$ is around several thousand, we believe that the time gap between our method and the baseline would be dramatically reduced through implementation optimization of our method, making our method more practical. Thanks again for your careful comment on this matter, and we will keep working on how to optimize our method to decrease the time overhead. **Q3** Lack of Comparison > We appreciate the reviewer providing reference papers that can be of great help to our submission and future research. We will include those papers in our related work and thoroughly discuss them against our method.
Also, we will conduct a comparison experiment in terms of accuracy, memory usage, and training time. Here, we briefly discuss them as follows. > In the case of GACT, to our understanding, unit bits are reduced through quantization with a maximum of 4 bits and a minimum of 2 bits to minimize overlapping and errors of each layer under specific conditions. Thus, when applied to the softmax layer, it could achieve 16x memory reduction at maximum, i.e., from 32 bits (floating point) to 2 bits. And in the case of Mesa, similar to GACT, the gradient is quantized. As a preliminary experiment, we have briefly checked the difference between the gradient matrix obtained from the softmax output approximated by our method and the gradient quantized by Mesa. Although additional experiments seem necessary, as it is unclear how this will affect actual learning, the result of our preliminary experiment shows that the gradient values generated from the same input data do not differ significantly. In other words, the error rates of the gradient matrix created from the existing normal softmax and the gradient matrices from Mesa and our method seem to be similar, meaning that there may be little difference in their model accuracy. > In sum, we speculate our method would have more room to reduce softmax activation memory thanks to its flexible ratio of approximation ($m$ can be changed accordingly given $n$) while achieving similar model accuracy to GACT and Mesa, as the gradient matrix obtained from our method and that from Mesa seem to have similar error relative to the gradient obtained from the original softmax operation. However, we will conduct in-depth experiments with GACT and Mesa and compare them against our method (including training time) to see if our conjecture is correct or not, which will be included in our revised paper. --- Rebuttal Comment 1.1: Title: Response to Reviewer Comment: I sincerely appreciate the author's effort and response. All my queries have been perfectly resolved.
I commend you sincerely for tackling what must have been a very difficult process of tuning the hyperparameters on the IMDb dataset to obtain proper accuracy, as per my request. I believe **the algorithm's approximation method is a highly novel approach that has not been proposed before**, and **it maintains high accuracy while saving memory**. I strongly recommend to the AC that this paper be accepted, and I will raise the score to 7. --- Reply to Comment 1.1.1: Comment: We really appreciate your acknowledging the novelty of our work and your willingness to recommend it for acceptance at NeurIPS 2023. Your support gives us stronger confidence in our work. We are grateful for your suggestion to tune the hyperparameters for the IMDb task, which gives us more confidence that our method could be applied to more difficult and general tasks. We will conduct additional experiments on this and include the results in our revised paper. Also, we will compare our work against GACT and Mesa, which, we expect, will effectively distinguish our method from the existing works on approximating activations.
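The store-and-restore mechanism described in the rebuttal above (keep the $m$ smallest and $m$ largest softmax outputs plus the sort order in the forward pass; approximate the discarded middle and scatter it back in the backward pass) can be sketched in a few lines. This is a hypothetical NumPy illustration, not the authors' PyTorch implementation; in particular, it fills the discarded middle with a simple linear ramp between the stored extremes as a stand-in for the paper's exponential-shape approximation.

```python
import numpy as np

def compress_softmax(s, m):
    """Forward pass: keep only the m smallest and m largest softmax outputs,
    plus the (integer) sort order; the middle n - 2m values are discarded."""
    order = np.argsort(s)                      # ascending positions of s
    vals = s[order]
    return order, vals[:m].copy(), vals[-m:].copy()

def restore_softmax(order, low, high):
    """Backward pass: approximate the discarded middle values (here with a
    linear ramp between the stored extremes) and scatter everything back
    to the original positions via the stored sort order."""
    n, m = order.size, low.size
    mid = np.linspace(low[-1], high[0], n - 2 * m + 2)[1:-1]
    s_hat = np.empty(n)
    s_hat[order] = np.concatenate([low, mid, high])
    return s_hat

z = np.array([2.0, -1.0, 0.5, 0.1, -0.3, 1.2, 0.8, -2.0])
s = np.exp(z) / np.exp(z).sum()                # ordinary softmax output
order, low, high = compress_softmax(s, m=2)    # only 2m values + indices kept
s_hat = restore_softmax(order, low, high)
```

The fancy-indexed assignment `s_hat[order] = ...` plays the role of the scatter operation mentioned in the rebuttal; the stored extremes are reproduced exactly, and only the middle values carry approximation error.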
Summary: This paper tries to reduce the memory footprint while training Transformer models by only keeping a fraction of softmax output and estimate them back during back propagation. Strengths: This is a very interesting and practical topic. With this tech, we can reduce the computation cost significantly while training large models. Weaknesses: I don't see any obvious weakness. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Could you report the training profiling comparisons w/ and w/o the approximation overhead? Could you elaborate more on how this tech works with other Transformer variants like Performer? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We wanted to take a moment to express our heartfelt gratitude for reviewing our paper. Receiving your positive assessment has given us renewed confidence in our work. **Q1** Training profiling > Thanks for a good question. We take the training profiling the reviewer mentioned to cover two aspects (model performance and training time). Please let us know if we have misunderstood your question. > First, profiling in terms of model performance (accuracy) is covered by the experiments presented in our current manuscript. In addition, for the IMDb task, which had a rather low score, we have re-run the experiment, and it now shows performance comparable to the baselines. Our Global rebuttal response includes the details of the new experiment on the IMDb task. > Second, profiling in terms of time is indeed necessary. Basically, we expect the time cost to increase when applying our method, compared to the baseline that does not apply it, as our method needs to perform additional processes such as sorting. Regarding this, we have conducted a new experiment on time overhead and included the result in the Global rebuttal response. We kindly ask the reviewer to check out the Global rebuttal response. **Q2** Transformer variants > Please let us describe the gist of common Transformer variants and the applicability of our method to them. The biggest goal of the Transformer variants is to reduce the $O(n^2)$ time complexity of the matrix-wise calculation ($QK^T$) of the attention mechanism in the inference phase. In addition, some of these methods also streamline the whole computation $softmax(\frac{QK^T}{\sqrt{d}})V$ of the attention mechanism. However, since these are new variants of the attention mechanism, they are often accompanied by particular models tailored to them.
However, our method can be applied to any efficiency method that leaves the softmax of the attention mechanism unchanged. The reason is that the applicability of our method depends only on the existence of a softmax function over the input: if a softmax function exists and takes up activation memory, we can apply our method. Another advantage of our method is that it can be applied to all variants of attention-based models. We believe this has been demonstrated by our experiments on XLNet (0.12 billion parameters), which has a number of parameters similar to GPT-2 small (0.124 billion parameters). --- Rebuttal Comment 1.1: Comment: Thanks authors for the response! My questions have been addressed. --- Reply to Comment 1.1.1: Comment: We are glad that our answer has helped address the reviewer’s concerns and questions. We will include how our method can be applied to other Transformer variants and conduct experiments on it. We appreciate the reviewer’s positive feedback and support for our work.
Summary: The paper proposes an effective scheme for compressing the backpropagation of Softmax, the main idea of which is to only retain the maximum and minimum m values of the output, with the middle part using linear interpolation. Impressively, this approximation scheme not only saves memory but also achieves better results than precise backpropagation in some tasks. Strengths: 1. The memory usage of Softmax backpropagation is effectively reduced through approximation, which can be used in Attention models; 2. There is a strict theoretical analysis of approximation errors; 3. The performance is even better than the accurate Softmax backpropagation in some tasks. Weaknesses: 1. There is no quantitative analysis of the amount of memory that can be saved theoretically; 2. The memory efficiency has not been validated on larger models; 3. The reason why the approximate softmax is better (in some tasks) has not been further analyzed. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: The method in the paper does indeed save the memory required for the backpropagation of Softmax, but I have a question, that is, is the proportion of memory occupied by the Softmax part in the entire backpropagation crucial? Intuitively, the memory occupied by the Softmax part is not too much, but perhaps my intuition is wrong, so I think this part needs further quantitative estimation, which determines the importance of this work (especially in the current large language model scenario). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: I couldn't find any discussion about limitations in the main text and the appendix of the paper. Perhaps the authors should include it in the paper. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable and positive comments regarding our work, and the questions that helped us to refine our work. Here, we have provided a detailed response. **Q1** The proportion of memory occupied by the softmax > Thanks for asking a crucial question about our work. It may seem the proportion of the softmax output is not significant from the perspective of the entire network. However, our analysis and experiment show that it actually takes a large portion of activation memory. For instance, the softmax operation takes up 80% of the attention module itself, and 64.7% and 62.2% of the entire layer activation output in the forward pass of the classic Transformer and BERT model. However, to validate the effectiveness of our method, we will conduct in-depth experiments on large models, as the reviewer suggested. > In principle, the size of the softmax activation memory of an attention-based model is equivalent to [Batch size $\times$ Multi-head $\times$ Sequence length $\times$ Sequence length]. Therefore, for large language models that take longer sequences and/or a larger number of multi-heads, the percentage, as well as the absolute size, of softmax activation memory can be easily increased. > Although we will conduct experiments on larger models, we can take a simple example for an intuitive understanding of how big is the softmax activation output. In the case of the XLNet model, the softmax operation is passed 12 times for each forward execution. By assuming that the batch size is 256, the multi-head is 12, and the sequence length is 100, the total amount of memory taken by the softmax output is [256 $\times$ 12 $\times$ 100 $\times$ 100] $\times$ 12 $\times$ 4 bytes, which is about 1.37GB. We can expect that for a large language model with more encoders, decoders, and multi-heads, softmax activation will take up more portion of activation memory.
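The 1.37GB figure from the rebuttal above can be checked with a few lines of arithmetic, assuming float32 (4-byte) activations as in the example:

```python
# Softmax activation memory per the formula above:
# batch size x attention heads x seq_len x seq_len float32 values,
# accumulated over every softmax execution (12 in XLNet's forward pass).
batch, heads, seq_len, softmax_layers = 256, 12, 100, 12
bytes_per_float = 4
total_bytes = batch * heads * seq_len * seq_len * softmax_layers * bytes_per_float
print(f"{total_bytes / 2**30:.2f} GB")  # -> 1.37 GB
```

Doubling the sequence length quadruples this figure, which is why the quoted percentage grows quickly for long-context models.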
Summary: This paper proposes to approximate softmax to improve memory efficiency for training attention models. During the forward pass, the softmax output is computed first. Only the $m$ highest and $m$ lowest of the $n$ elements are stored, along with the sorted order. The remaining $n-2m$ entries are discarded to reduce the memory footprint. During the backward pass, the discarded entries are approximated using an exponential distribution whose parameters can be estimated from the stored elements. The $m$ highest and $m$ lowest elements are selected so as to minimize the error in the approximated gradient during the backward pass. Experiments on machine translation, text classification, and sentiment analysis showcase the efficacy of the proposed softmax approximation. Strengths: - While the idea of approximating softmax using the top-$m$ entries to reduce the attention memory footprint has been explored before, the use of the $m$ lowest entries and its motivation from gradient error minimization is novel. - The proposed method is simple and easy for the community to reproduce. The method also offers an easy way to plug the approximation into pre-trained models. Weaknesses: - Incorrect claims and/or missing justifications - Lines 286-290: "Efficient attention models do not focus on training memory reduction." This statement is false, as there are many efficient attention mechanisms that explicitly reduce the memory for attention by (a) changing the order of computation with kernelized softmax: Linear [1], Performer [2], RFA [3]; (b) reducing the number of tokens to attend to: sparse [4], Reformer [5], local [6], Longformer [7]; or (c) condensing the sequence: Linformer [8], compressive [9], Perceiver [10]. - Lines 60-61: "the softmax operation takes up 80% of the attention module itself, and 64.7% and 62.2% of the entire layer activation output during the forward pass." It is not clear at what sequence lengths this is claimed. 
- Error bound (Line 133): The assumption $s_i s_j \geq s'_i s'_j$ or $s_i s_j \leq s'_i s'_j$ is missing justification or any discussion. I am not sure how well this is going to hold in practice. - Notations: - Sometimes the paper uses confusing notations. For instance see line 74, $z$ is defined to be a vector of length $n$ with query/key multiplication $z= QK^T$ as an example. Typically $Q$ and $K$ are matrices of size $n \times d$ which would lead $z$ to be a matrix. - Weak experiments: - Baselines: The paper only compares against the scaled dot product attention as the baseline. The following two baselines are particularly close and also target memory reduction. - FLASH Attention [11] exploits lazy softmax computation (normalization can be delayed until the end of the attention) to avoid storing softmax values and directly compute output values leading to memory savings. - Memory-efficient Transformers via Top-k Attention [12]. This is another closely related work that not only targets memory improvements for attention but also improves the memory for feed-forward layers. - Tasks: - While the paper conducts experiments on MT, text classification, and sentiment analysis, it is unclear how the proposed technique would impact more general tasks in NLP or other domains. More real world tasks such as pre-training would help understand this better. Either Long range Arena [13] or more recent Comprehensive Attention Benchmark [14] would also provide more coverage on tasks and help understand better the tradeoffs of softmax approximation. - Time-memory trade off as a function of sequence length. While the paper presents the relative memory improvement for sequence length of $100$, it would be good to see a general trend for memory consumption and time computation overhead as sequence length varies. The overhead on computation time is not discussed anywhere in the paper. 
- Missing references to other works that approximate softmax to improve the computational and memory complexity. Some of these work could also be good candidates as baselines - SMYRF: Efficient Attention using Asymmetric Clustering (NeurIPS 2020) - Fast transformers with clustered attention (NeurIPS 2020) - Nystromformer (AAAI-21) - Sparse attention with learning-to-hash (ICLR 2022) *References*: - [1] Fast Autoregressive Transformers with Linear Attention (ICML 2020) - [2] Rethinking Attention with Performers (ICLR 2021) - [3] Random Feature Attention (ICLR 2021) - [4] Generating Long Sequences with Sparse Transformers (Arxiv 2019) - [5] Reformer: The Efficient Transformer (ICLR 2020) - [6] Generating wikipedia by summarizing long sequences (Arxiv 2018) - [7] Longformer: The Long-Document Transformer (Arxiv 2020) - [8] Linformer: Self-Attention with Linear Complexity (Arxiv 2020) - [9] Luna: Linear Unified Nested Attention (NeurIPS 2021) - [10] Perceiver: General Perception with Iterative Attention (ICML 2021) - [11] Fast and Memory-Efficient Exact Attention with IO-Awareness (NeurIPS 2022) - [12] Memory-efficient Transformers via Top-k Attention (Proceedings of the 2nd Workshop on Simple and Efficient Natural Language Processing 2021) - [13] Long Range Arena : A Benchmark for Efficient Transformers (ICLR 2021) - [14] CAB: Comprehensive Attention Benchmarking on Long Sequence Modeling (ICML 2023) Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Please see the discussion on weakness. In particular on missing justifications/claims, comparison to baselines/LRA tasks, and time-memory trade-off. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 2 fair Presentation: 2 fair Contribution: 3 good Limitations: The discussion on negative societal impact is not required. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Incorrect claims and/or missing justifications **W1** Line 286-290 > Thanks for letting us know about the related works we missed. What we meant was that existing works on efficient Transformers do not explicitly try to save activation memory, unlike our method. Although they can save memory in other respects, we would like to note that our method is unique and orthogonal to them. In any case, we will go over them and revise our paper. In Performer [2] in (a), the row vectors of query and key are mapped from $d$ dimensions to $r$ dimensions through a random feature map $\phi$. Then, attention is calculated by taking the inner product of the mapped $\phi(q_i)$ and $\phi(k_j)$. In this case, the source of memory efficiency differs from our method, because the matrix-wise operation of the attention mechanism is replaced by the efficient softmax. In Longformer [7] in (b), the $O(n^2)$ complexity that prevents existing attention-based models from learning long sequences is addressed using dilated sliding windows, global attention, etc. In this case, the $QK^T$ operation is streamlined, and the attention score using softmax is obtained in the same way as in standard attention. We may be able to apply our method to this process; we will test this later and use it for comparison with Transformer variants. In Linformer [8] in (c), the attention matrix is simply decomposed into a low-rank matrix, which is used to obtain the attention score, so it seems it can be combined with our method depending on whether a softmax is applied to obtain the attention score. Thank you for letting us know about these various perspectives on memory-efficient attention. They may differ from the activation-memory efficiency during training that our method targets, but they will be greatly helpful for our future research. 
**W2** Line 60-61 > It is the ratio of how often the softmax is passed through among the layers that each input traverses during inference in BERT and Transformer-base. It has nothing to do with the sequence length. **W3** Line 133 Error bound > As the reviewer correctly pointed out, the assumption may not always hold. However, we expect that the error bound does not change even if the assumption does not hold. That is because 1) the error bound is the sum of absolute errors, and 2) both the original softmax output ($s_i$) and the approximated output ($s'_i$) are positive numbers between 0 and 1. Thus, the total amount of error given on the left-hand side of Equation 8 should not change regardless of the signs of the individual terms. The reason we explicitly add the assumption is that we hope it helps readers follow the error-bound analysis. Although we believe the assumption does not affect the error bound, we will keep looking into it to see if there is a mistake or something we are missing. We appreciate again your careful and crucial comment on this. **W4** Line 74 Notations > Thank you for your accurate point. That notation is our mistake. $Z$ is a matrix and $\vec{z}$ is a vector: as you said, $Z=QK^T$ with $Z \in \mathbb{R}^{n \times n}$. Here, $n$ is the sequence length, and since softmax is applied to each row $\vec{z_i} \in \mathbb{R}^{1\times n}$ to obtain the attention score, $\vec{z}$ is expressed as $\vec{z} \in \mathbb{R}^n$. We will fix this notation. ### Weak experiments **W1** Baselines > FlashAttention speeds up the attention mechanism by exploiting the different memory access rates of each memory layer on the GPU. To improve efficiency, it accelerates matmul and softmax, i.e., memory-bound operations in which memory access dominates computation. For this purpose, tiling and recomputation are used. 
Matrix multiplication is made efficient using tiling, and softmax through recomputation. To avoid using the $O(n^2)$ memory required to recompute the softmax, the input is reconstructed after saving the softmax normalization statistics. In this case, our method is less applicable, as the softmax output is neither stored nor operated on. If possible, we think we could increase storage efficiency by not storing the input and normalization statistics and applying our method instead. **W2** Tasks > Our experiments were evaluated on representative NLP tasks. For text classification and sentiment analysis, a pre-trained model was fine-tuned, and for MT, full training was performed, which implicitly includes the pre-training stage. As the reviewer pointed out, we were unable to experiment with a wider range of tasks due to time and resource limitations. Nevertheless, since our method can restore the original softmax outputs using only a part of the softmax output, it is expected to provide similar performance unless the data modality or type changes significantly. We will continue to conduct verification experiments on various tasks, including the many interesting works the reviewer introduced, to see whether there is any deviation from our expectations, and revise our manuscript accordingly. > For Memory-efficient Transformers via Top-k Attention [12], a sparse attention method similar to Longformer [7] is borrowed from "Generating Long Sequences with Sparse Transformers" for softmax efficiency. We will evaluate the applicability of our method and compare against it. We will also check the benchmarks and methods presented in the paper as much as we can. **W3** Time-memory trade off > We have attached a PDF file to the Global rebuttal response, which provides new experimental results on memory consumption and computation time. 
We kindly ask the reviewer to see the Global response. --- Rebuttal Comment 1.1: Title: Thank you for clarification Comment: I would like to thank the authors for taking the time to address the questions. However, I still have unresolved questions and concerns. **Lines 286-290** > What we meant was that existing works on efficient Transformers do not explicitly try to save activation memory, unlike our method. Although they can save memory in other respects, we would like to note that our method is unique and orthogonal to them. I would encourage the authors to include a discussion on this in the revised version. **Regarding Lines 60-61:** > It is the ratio of how often the softmax is passed through among the layers that each input traverses during inference in BERT and Transformer-base. It has nothing to do with the sequence length. Thank you for clarifying. Could the authors elaborate on the details of this ratio? Specifically, is the pairwise dot product $QK^T$ part of the softmax operation, i.e., in the ratio's numerator? The attention module exhibits quadratic complexity in the sequence length, so depending on the ratio's specifics, it may or may not depend on sequence length. **About Line 133 and Error Bound:** > Thus, the total amount of error given on the left-hand side of Equation 8 should not change regardless of the signs of the individual terms. The reason we explicitly add the assumption is that we hope it helps readers follow the error-bound analysis. Although we believe the assumption does not affect the error bound, we will keep looking into it to see if there is a mistake or something we are missing. I understand the rationale behind adding the assumption for clarity in the error-bound analysis. However, while the total error on the left side of Equation 8 remains unchanged, without this assumption, moving summations inside becomes problematic (on the right side). 
I'm not convinced by the authors' belief that this assumption doesn't impact the error bound. My own attempt at the derivation, without this assumption, is as follows: For any $i$, let $S_1$ be the set of indices $j$ where $s_is_j > s'_is'_j$ and $S_2$ be the complementary set, so $S_1 \cup S_2 = \{1,2,\cdots, n\}$. WLOG assume $s_i s_i > s'_i s'_i$ (i.e., $i \in S_1$). $ \sum_{j=1, j \neq i}^n \left|s_i s_j-s_i^{\prime} s_j^{\prime}\right| $ $ = \sum_{j \in S_1, j \neq i} (s_i s_j-s_i^{\prime} s_j^{\prime}) - \sum_{j \in S_2} (s_i s_j-s_i^{\prime} s_j^{\prime}) $ $ = \sum_{j=1, j \neq i}^n (s_i s_j-s_i^{\prime} s_j^{\prime}) - 2 \sum_{j \in S_2} (s_i s_j-s_i^{\prime} s_j^{\prime}) $ $ = \left(s_i (1-s_i) -s_i^{\prime} (1-s_i^{\prime})\right) - 2 \sum_{j \in S_2} (s_i s_j-s_i^{\prime} s_j^{\prime}) $ $ \leq \left|s_i (1-s_i) -s_i^{\prime} (1-s_i^{\prime})\right| + \delta $ where $\delta = 2 \sum_{j \in S_2} (s_i^{\prime} s_j^{\prime} - s_i s_j) > 0$. Kindly clarify if I made a mistake or misunderstood something. While the empirical evidence presented is commendable, I urge the authors to either provide a proof devoid of this assumption or acknowledge its potential limitations in real-world scenarios. **Baselines:** The paper seems to overlook comparisons against pivotal baselines. In particular, the "Long Range Arena" is a well-known benchmark employed by numerous efficient attention mechanisms. A comparison on this benchmark would undoubtedly be beneficial. **Time-Memory Tradeoff:** The authors' depiction of the time-memory tradeoff in the graphs is appreciated. However, a 3-6x overhead for even moderate sequence lengths, such as 400, is a significant overhead that would severely limit the use of the proposed method. **Final Remarks:** I carefully reviewed the other feedback given the significant difference between my rating and the others. 
Noting the novelty of the proposed method for approximating softmax, I have decided to adjust my score to 4. While the proposed method is novel, the paper currently falls short in terms of rigor. Specifically, baseline comparisons, justification of the bounds, and practicality concerns due to overheads prevent me from giving a higher rating. --- Reply to Comment 1.1.1: Comment: **Lines 286-290:** > Thanks for your comment. Your advice has allowed us to re-evaluate the strengths and weaknesses of our method against other methods. We will describe the differences between our method and the other baseline methods. **Regarding Lines 60-61:** > The ratio calculation on lines 60-61 is simple but needs to be more specific. Our method does not consider the complexity of an attention module, which usually grows quadratically with the sequence length. Instead, the ratio takes into account only the number of layers, without considering the size or dimensions of the inputs and outputs of each layer. For example, when the Transformer's encoder has two embedding layers and one multi-head attention layer, the multi-head attention layer accounts for 33.3% (1/3). Also, $QK^T$ is regarded as a single operation because it is the matrix multiplication of $Q$ and $K$. In this way, our ratio indicates, in a layer-wise manner, how much the softmax in the attention operation is used in an attention-based model. We will clarify the way we calculate the ratio in our revised paper by describing its details. **About Line 133 and Error Bound:** > We appreciate your effort in deriving the error bound. At a glance, your derivation seems correct, and we missed this case in our submission. We will double-check our analysis in light of your derivation and provide the correct error bound in our revised paper for the case where the assumption does not hold. 
**Baselines:** > We concur with your opinion that this paper needs more comparisons against the many pivotal works you mentioned, especially 'Long Range Arena'. We will include thorough comparison discussions, as well as experiments, as much as we can in our revised paper in order to clarify the differences and similarities between our method and existing works. **Time-Memory Tradeoff:** > As mentioned in the memory trade-off analysis, the implementation of our method is not optimized in terms of execution time, since it is written in naïve PyTorch (Python), not as a CUDA kernel (C++), unlike the original softmax. Currently, we are working to reduce the time overhead by implementing our method as a CUDA kernel. Since the theoretical time overhead is just $O(n \log_{2} n)$, which is required for sorting, we believe that the 3-6x time overhead will be dramatically decreased once the code optimization is done, with some marginal time variation depending on the sequence length. **Final Remarks:** > Your thorough and pragmatic review has allowed us to better examine our method's practicality and limitations against other works. We will put effort into revising our work based on your constructive comments, especially on baseline comparisons, the error-bound analysis, and time overhead reduction.
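The telescoping step in the reviewer's derivation above, $\sum_{j \neq i}(s_i s_j - s'_i s'_j) = s_i(1-s_i) - s'_i(1-s'_i)$, relies only on both the exact and approximated softmax outputs summing to one. This can be checked numerically; the snippet below is our own illustration (not from the paper or reviews), modeling $s'$ as a softmax of perturbed logits so that it remains normalized:

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=8)
s = np.exp(z) / np.exp(z).sum()           # exact softmax output
zp = z + rng.normal(scale=0.3, size=8)    # perturbed logits as a stand-in
sp = np.exp(zp) / np.exp(zp).sum()        # approximated output s', sums to 1

i = 0
signed = sum(s[i] * s[j] - sp[i] * sp[j] for j in range(8) if j != i)
absolute = sum(abs(s[i] * s[j] - sp[i] * sp[j]) for j in range(8) if j != i)

# Telescoping: since sum_j s_j = sum_j s'_j = 1,
#   sum_{j != i} (s_i s_j - s'_i s'_j) = s_i(1 - s_i) - s'_i(1 - s'_i).
assert np.isclose(signed, s[i] * (1 - s[i]) - sp[i] * (1 - sp[i]))
# The absolute sum dominates the signed sum; the gap is the reviewer's delta.
assert absolute >= abs(signed) - 1e-12
```

The second assertion reflects the reviewer's point: whenever the sign assumption fails for some terms, the absolute-error sum exceeds the telescoped expression by the positive $\delta$ term, so the original bound needs that extra term.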
Rebuttal 1: Rebuttal: We want to extend our heartfelt thanks to all the reviewers for taking the time to review our research paper from diverse angles and offering constructive critiques. Your valuable insights have greatly enriched the quality of our research. We have conducted the new experiments and re-runs requested by the reviewers and included the results in the attached PDF file. We kindly ask the reviewers to check out the attached PDF file. **Experiments on IMDb and WMT (Table 1 and Figure 1)** >***Re-experiment with the IMDb task*** > > The result on the left side of Table 1 is the re-evaluation of the IMDb task described in the paper. All experimental parameters are the same as in the paper. The sequence length is limited to 100 and the batch size is 128. The result using normal softmax (baseline) is 0.876. With our method, the values of the highest 1 and lowest 1 softmax elements out of 100 are stored in memory ($m=1$), achieving an accuracy of 0.883, slightly higher than the baseline, while using only 1.9% of the softmax activation memory. This variability seems to arise from the subtle difficulty of approximating outputs when the softmax inputs (from unstructured sequence data) deviate randomly from the exponential function, which we will study further. > ***New experiment on the WMT-14 De-En task*** > > The right side of Table 1 is the new experimental result on machine translation using WMT-14 De-En data for the Transformer-base model. The WMT-14 dataset has about 4 million training samples and 3,000 validation and test samples. We experimented on WMT-14 De-En using around 20,000 training samples due to the tight time limit of the rebuttal. The result shows the perplexity on the test dataset for each $m$. The result using normal softmax (baseline) is 1.004. 
With our method, the same perplexity (1.004) is obtained by storing the values of the highest 2 and lowest 2 softmax outputs out of 100 elements. Figure 1 shows the learning performance trajectory for various $m$ over training epochs; our method draws almost the same trajectory as the baseline (normal). **Experiments on time and memory complexity (Figure 2 and Figure 3)** > ***Time and memory evaluation according to sequence length and batch size at a single softmax layer*** > > Figure 2(a) shows the activation memory usage and time cost of the original softmax function and of the softmax function with our method applied, as described in the caption. It can be observed that the time cost changes with the sequence length regardless of the number of softmax elements stored in memory. We can also observe that the time increases rapidly, from a minimum of 4$\times$ to about 30$\times$, due to inefficient code implementation of the sorting, cutting, and rearranging of the matrix required by the proposed softmax output approximation. This is because we implemented our algorithm in a non-optimal way (Python, not a CUDA kernel), unlike the original softmax function, which is highly optimized with a CUDA kernel. We will continue to improve this, and we plan to implement it with a highly optimized CUDA kernel as the final goal. > ***Evaluation of the effect of batch size and sequence length on the turnaround time of our method*** > > In Figure 2(b), we present the factors that increase the time cost when our method is applied. The x-axis is batch size, and each line represents a sequence length. As we can see, the batch size, which scales the number of softmax rows of length $n$, has a more significant effect on the inference speed than the sequence length, which scales the output quadratically ($n^2$). Based on this experimental result, we will optimize our code and make it faster. 
> ***Time complexity evaluation of the entire model*** >> In Figure 3, we evaluate the time spent by all the layers of the Transformer-base model, unlike Figure 2, which measures the time spent at a single softmax layer. In this experiment, three batch sizes of 32, 64, and 128 are used, divided into two groups: blue bars (original softmax; baseline) and green bars (our method). For $n=100$, our method increases the time complexity by a factor of about 1.7$\times$ on average. For $n=200$, the complexity increases by about 5$\times$ on average and by about 6.3$\times$ at most, which is smaller than the overhead measured at a single softmax layer. A linear trend with batch size and sequence length is also observed. Please note that these results may vary depending on the state of the GPU or CPU. However, as mentioned, we will improve our implementation of the proposed softmax output approximation method and re-evaluate the time overhead with optimized code that executes our algorithm more efficiently while maintaining the significant memory savings. Pdf: /pdf/19805f6c8b1fc054670ec3f1c4c5068bf1d430d0.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper is about approximating the softmax function by storing only a fraction of the entire softmax output in memory. The authors argue that by applying this approximation, they were able to reduce the memory usage of softmax activations by up to 84%. The motivation starts from the observation that the softmax activation module accounts for a huge share of memory consumption in widely used Transformer-based models. Instead of storing all softmax outputs, the work suggests storing only the $m$ highest and $m$ lowest output values. An error analysis justifies the optimality of storing only the highest and lowest values. Using this stored fraction of values, the rest are approximated under the assumption that they follow a modified exponential distribution. Experiments empirically show that the proposed method reduces memory usage while keeping performance degradation negligible. Strengths: - The proposed method is easy to apply. - Experiments are done across multiple domains and tasks. - Reducing the overhead of softmax has a high impact on further research Weaknesses: - It's not clear how large or small the MAEs of ~0.05 (output error) or ~0.0003 (gradient error) are. Rather than mean absolute error, relative error would be a more appropriate way to present the significance of the error introduced by the approximation. - A performance and memory usage comparison with other efficient softmax methods (e.g. Longformer, Performer) would be useful. (typo) line 25: GTP-2 → GPT-2 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In equation 5, the inverse of softmax to compute $z_i$ (i.e. the input) from $s_i$ (i.e. the output) depends on $\sum_{j=1}^n e^{z_j}$. Can you elaborate more on how the $z_i$ are recovered from the $s_i$? Does that imply that we don't need to care about $\sum_{j=1}^n e^{z_j}$, since it is the partition function and the softmax output is invariant to translation of the inputs?
- In equation 8, the first and the second term seem to be equivalent only when $s_i s_j \ge s_i' s_j'$ or $s_i s_j \le s_i' s_j'$ for all $i \neq j$. If the assumption does not hold, how does the error bound change? - In Table 1, as far as I understood, $m$ indicates the number of values to be kept among the highest and lowest values; in that case $2m$ values will be stored. But for $n=100$, it is written that memory usage is 50%; doesn't $m=50$ mean we store all output values? - One of the old beliefs about the softmax or attention mechanism is that a lower rank may hurt performance. Is the proposed approximation method free from lowering the softmax's rank? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I believe the limitations are addressed well in the checklist. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
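For context on the $s_i s_j$ products the reviewer asks about in Equation 8: they are the off-diagonal entries of the standard softmax Jacobian, $\partial s_i/\partial z_j = s_i(\delta_{ij} - s_j)$. A minimal sketch of the standard softmax backward pass (general background, not the paper's approximated variant), verifiable against finite differences:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

def softmax_vjp(s, g):
    # dL/dz_j = sum_i g_i * s_i * (delta_ij - s_j) = s_j * (g_j - <g, s>)
    return s * (g - np.dot(g, s))
```

Any error introduced into the stored/approximated outputs $s'_i$ propagates into the gradient through exactly these $s_i(\delta_{ij} - s_j)$ terms, which is what the bound in Equation 8 accounts for.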
Rebuttal 1: Rebuttal: We appreciate the time and effort you have dedicated to providing insightful feedback on our paper, including the typo check. **Q1** Inverse of softmax >We understand that Equation 5 may be confusing. In Equation 5, we can restore $z_i'$ (i.e., the approximated softmax input) if we have $s_i'$ (i.e., the approximated softmax output) and $\ln(\sum^n_{j=1} e^{z_j})$. In the forward execution, when the input goes through the softmax function, the intact values of the selected softmax output elements are stored in memory along with $\ln(\sum^n_{j=1} e^{z_j})$. Then, during the backward pass, the input $z_i'$ is restored from Equation 5 using $s_i'$ and the stored $\ln(\sum^n_{j=1} e^{z_j})$. **Q2** Error bound >As the reviewer correctly pointed out, the assumption may not always hold. However, we expect that the error bound does not change even if the assumption does not hold. That is because 1) the error bound is a sum of absolute errors and 2) both the original softmax output ($s_i$) and the approximated output ($s'_i$) are positive numbers between 0 and 1. Thus, the total error on the left-hand side of Equation 8 should not change regardless of the signs of the individual terms. The reason we explicitly add the assumption is that we hope it helps readers follow the error-bound analysis. Although we believe the assumption does not affect the error bound, we will keep looking into it to see if there is a mistake or something we are missing. We appreciate again your careful and crucial comment on this. **Q3** $m=50$ > Yes, as the reviewer mentioned, all the elements are stored in memory when $n=100$ and $m=50$, since $2m$ elements are stored in memory. **Q4** Lower rank > We appreciate the reviewer letting us consider an important aspect of the softmax operation in the proposed method.
We know about the low-rank performance degradation of the attention mechanism and softmax. Fortunately, the proposed method does not lower the rank of the softmax. It may seem to do so because the proposed approximation mechanism saves only $m$ selected softmax outputs in memory during the forward pass. However, the forward execution utilizes the full rank of the original softmax, exactly as in a normal feedforward run, and only afterwards are the $m$ selected softmax outputs stored in memory for later use in the backward pass. During the backward pass, the non-stored softmax output elements are restored to fully reconstruct the original softmax, with exactly the same rank, for gradient computation. Hence, the performance degradation caused by a lower softmax rank does not occur in the proposed method. **Gradient error** > We appreciate the reviewer’s advice regarding the readability and understandability of our manuscript. We agree that the current MAE metric may not help readers get a feel for the relative amount of error. So, as suggested, we have converted the absolute gradient-matrix error to the relative percentage error, i.e., [Epoch: 1 / RE: 7.70%], [Epoch: 8 / RE: 13.33%], [Epoch: 15 / RE: 8.28%], [Epoch: 20 / RE: 12.74%]. We will include the relative percentage error in our revised manuscript. In addition, guided by this improved indicator, we will try to further reduce the approximation error of the gradient matrix in future studies. Thanks again for your careful suggestion. We truly appreciate the time and effort you invested in reviewing our work and providing constructive feedback. Your contributions have had a positive impact on the quality of our research, and we are sincerely thankful for your support.
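To make the Q1 store/restore cycle concrete, here is a rough NumPy sketch (our own illustration for this discussion, not the authors' implementation): the forward pass keeps the $m$ highest and $m$ lowest softmax outputs plus $\ln\sum_j e^{z_j}$, and the backward pass recovers inputs via Equation 5, $z_i = \ln s_i + \ln\sum_j e^{z_j}$. The interpolation of the non-stored outputs is elided; stored entries are recovered exactly:

```python
import numpy as np

def forward_and_save(z, m):
    lse = np.log(np.sum(np.exp(z)))  # ln(sum_j e^{z_j}), stored once
    s = np.exp(z - lse)              # softmax output
    order = np.argsort(s)
    keep = np.concatenate([order[:m], order[-m:]])  # m lowest + m highest
    saved = {"idx": keep, "val": s[keep], "lse": lse, "n": len(z)}
    return s, saved

def restore_inputs(saved):
    s_approx = np.full(saved["n"], np.nan)  # non-stored entries would be
    s_approx[saved["idx"]] = saved["val"]   # interpolated here (elided)
    # Eq. 5: z_i = ln s_i + ln(sum_j e^{z_j})
    return np.log(s_approx) + saved["lse"]
```

Only $2m$ output values and one log-partition scalar are kept per softmax row, which is where the memory saving comes from.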
Summary: This paper targets improving the memory efficiency of attention networks by reducing the activation storage of the softmax output. The authors propose to store only the m highest and m lowest softmax output values, together with some auxiliary variables, and infer the missing part during backpropagation through interpolation. Results on several NLP tasks are encouraging. Strengths: 1. The idea is easy to follow and implement 2. The model obtains good performance and memory reduction on a set of tasks. Weaknesses: 1. Important baselines are missing. 2. The authors only report memory consumption but don't show the effect on training time or the impact of larger model sizes. 3. Translation benchmarks should be improved. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * There are many approaches also aiming to improve the efficiency of attention, such as FlashAttention and checkpointing. As a reader, I expect to see how your method compares with them and whether they could be complementary to deliver further improvements. * Apart from memory consumption, please provide the changes in training and inference time. * The authors only experiment with Transformer-base models, but the objective of memory reduction is to enable larger models. The authors should explore how their approach performs as the model size increases substantially. * For translation, the WMT tasks are a more convincing benchmark. Examples in Multi30K follow simple patterns which artificially make the translation task trivial. Please update your experiments with WMT benchmarks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I didn't see particular limitations of their approach: the proposed method seems generally applicable to all softmax-based attention models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable and positive comments regarding our work, and for the questions that helped us refine it. **Q1** Comparison of our method with FlashAttention and checkpointing > FlashAttention speeds up the attention mechanism by exploiting the different access speeds of the memory hierarchy on the GPU. It accelerates matmul and softmax, i.e., memory-bound operations in which memory access dominates computation, using tiling and recomputation: matrix multiplication is made efficient with tiling, and softmax with recomputation. To avoid the $O(n^2)$ memory that recomputing the softmax would require, the input is reconstructed after saving the softmax normalization statistics. Our method is less applicable in this setting, since the softmax output is neither stored nor operated on. If possible, we think we could increase storage efficiency by not storing the input and normalization statistics and applying our method instead; we will confirm and evaluate this possibility in further research. In contrast, we would like to mention the merits of our method: for any attention-based model, including a large language model that prioritizes performance over efficiency, our method can be used whenever a softmax is present for attention scoring, regardless of the model structure. Also, FlashAttention is a hardware-dependent method, whereas our method achieves efficiency purely algorithmically in software, so the two differ in nature. Of course, FlashAttention is already sufficiently efficient; FlashAttention v2, written at the CUDA-kernel level, is evidence of that.
For a comparison at the same level, our method would also have to be implemented and evaluated at the CUDA-kernel level. To this end, we will continue to study this matter and revise our manuscript. > Checkpointing avoids storing the inputs and outputs of some layers needed in the backward pass; whenever a gradient is required, it re-runs the forward computation from the nearest checkpointed state to the corresponding layer. This increases time complexity due to the extra inference but reduces activation memory. A similarity to our method is the additional process of recovering unsaved values when the gradient is needed. A difference is that checkpointing stores values selectively depending on the model and situation, whereas our method can be used with any softmax function, so it places no restrictions on the model. Furthermore, if some values could be restored to the checkpointed state, similarly to our method, the efficiency of checkpointing would increase. **Q2** About time complexity > We have conducted a new experiment on the training time of our method; the result can be found in the Global rebuttal response. Since our current code is written in slow, naïve PyTorch/Python, we would like to point out that it is not fair to compare our method directly with "torch.nn.functional.softmax", which is optimized with a CUDA kernel. Given this, we observe that the training time increases compared to the baseline; the detailed numbers are included in the Global rebuttal response. We will optimize the implementation of our algorithm through continuous code optimization, especially with a CUDA kernel, which we expect to bring a large speed-up. **Q3** Experiments with large models > We appreciate the reviewer’s constructive suggestion on the need for experiments on large language models.
Although we experimented with XLNet (0.12 billion parameters), similar in size to GPT-2 small (0.124 billion parameters), which showed competitive learning performance, we agree that additional experiments are needed to see whether our method works effectively for much larger language models. We will conduct in-depth experiments on larger models as far as our resources allow and will include the results in our revised manuscript. Before presenting those results, we can offer a hint on how our method should behave when applied to larger models. An important aspect of our method is how many, and which, softmax output values should be stored in memory so that the softmax approximation does not affect the training of a model. In general, for a larger model that must learn longer and presumably harder data (e.g., longer sentences), the number of softmax output values stored in memory will also grow if the ratio of stored values is maintained. Thus, although the data gets longer and possibly harder to learn, we can expect the softmax output values to be approximated effectively with marginal error, as more values are stored in memory in proportion to the data length. However, we will run experiments to check whether this surmise is correct, and we ask the reviewer to bear with us until the results are ready. **Q4** WMT task experiment > Thank you for suggesting an experiment with WMT. As requested, we have conducted a new experiment on "WMT2014 German-English" to validate the reliability of our method. The result shows that our method achieves the same model performance (perplexity) on the WMT task while using up to 7.75 times less activation memory. The details of the experiment results are provided in the PDF file in the Global rebuttal response.
We kindly ask the reviewer to take a look at it.
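The recompute-instead-of-store trade-off that the checkpointing discussion in Q1 above describes can be sketched in a few lines; this is our own illustrative toy (a single tanh layer, not the paper's method or PyTorch's `torch.utils.checkpoint`), showing that keeping only the layer input and recomputing the activation in the backward pass still yields the exact gradient:

```python
import numpy as np

class CheckpointedTanh:
    """Checkpointing sketch for one layer: keep the layer *input*,
    drop the activation, and recompute it during the backward pass."""
    def __init__(self, W):
        self.W = W

    def forward(self, x):
        self.x = x                  # store the input only
        return np.tanh(self.W @ x)  # the activation is NOT kept

    def backward(self, grad_out):
        a = np.tanh(self.W @ self.x)  # recompute the activation on demand
        return self.W.T @ (grad_out * (1.0 - a * a))
```

The extra forward evaluation in `backward` is exactly the time-for-memory trade mentioned in the rebuttal.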
null
null
null
null
Neural Harmonics: Bridging Spectral Embedding and Matrix Completion in Self-Supervised Learning
Accept (poster)
Summary: In this paper, the authors provide theoretical analysis on convergence and downstream performance of self-supervised representation learning (SSL) approaches using tools from low-rank matrix completion. In particular, (i) they relate an eigenproblem objective to SSL methods, (ii) find that SSL methods perform a conjunction of Laplacian embedding and low-rank matrix completion, (iii) relate SSL augmentations to the number of observed entries required in matrix completion and (iv) provide some experimentation around incoherence as it relates to downstream performance. Strengths: - Writing and theoretical exposition is clear - Conceptually the trace maximization formulation is elegant. Specifically, the authors provide a broad framework through this line of reasoning to relate commonly used SSL objectives. - The results on incoherence are well-aligned to empirical findings on projection head vs backbone representations, and provide a potential explanation for this phenomenon Weaknesses: - Additional experimentation on performance across the trace maximization approach for small-scale datasets would be helpful - Further experimentation for the incoherence results in Figure 2 would be helpful, for instance how does incoherence compare across different SSL methods (i.e. SimCLR, BarlowTwins, etc) Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Details on the experimentation run for CIFAR-10 would be helpful. Only a few short paragraphs are presented which claim equivalent performance to VICReg - Are there any ways to make use of distributed low-rank matrix completion approaches with theoretical guarantees? (i.e. Mackey, JMLR 2015) Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: - It is unclear how the proposed trace maximization scales in runtime with $N$ and the number of view augmentations - Some of the assumptions only hold for the self-supervised case and not the supervised contrastive learning case (i.e. CLIP) such as positive anchor backbone representations being close to one another. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback! We would like to start the response by addressing the weaknesses pointed out by the reviewer. We first report additional experiments and details on the performance across small-scale datasets (CIFAR-10, CIFAR-100, and ImageNet-100), followed by additional experimentation for the incoherence results. All results are summarized in the attached PDF for convenience. Hopefully, the following answers the concerns raised in the questions and limitations sections of the review. We finish by discussing the distributed low-rank matrix completion approach. We have conducted additional experiments for the trace formulation on CIFAR-10, CIFAR-100, and ImageNet-100 along with a comparison to VICReg, one of the commonly used SSL methods. The setup is equivalent to the one given in the supplementary materials; we reiterate it here. We use the ResNet-18 backbone with a 3-layer projector with the following dimensions: 2048-2048-2048. The number of training epochs is 1000 for CIFAR and 400 for ImageNet-100. The batch size is 256 for CIFAR-10 and CIFAR-100, and 512 for ImageNet-100. We use the LARS optimizer: learning rate = 0.3, weight decay = 1e-4, and momentum = 0.9. The learning rate schedule is cosine annealing with a linear 10-epoch warmup. These hyperparameters are typical for VICReg training (thus favouring VICReg) and we do not adjust them to train our trace maximization formulation. While VICReg has 3 special hyperparameters that weigh individual loss terms, our trace formulation only requires specifying the parameter $t$ of the heat kernel, which controls the mass spread across views in a positive pair/set. We fix $t=2$ across all experiments. We train each model on a single Nvidia V100 GPU with 16 GB of memory. On CIFAR-100, the VICReg runtime is 6h 7m and the trace formulation takes 5h 57m (the timing difference is similar for the other datasets). The trace formulation does not incur additional runtime overhead.
Since the approximate heat kernel matrix used in the trace maximization objective is sparse (essentially comprised of $m+1$ unique entries, where $m$ is the number of augmentations used) and can be stored and multiplied efficiently, the method scales similarly to other SimCLR-like methods with increasing batch size and number of augmentations. The major bottleneck for such methods is usually data preprocessing as augmentations are typically CPU-intensive. Below, we report the linear evaluation results on the trained models — mean accuracy across 5 and 4 independently learned models (top-1 / top-5) for CIFAR-10 and CIFAR-100, respectively, and 1 run for ImageNet-100 (we hope to add more training runs as they should be finished before the final discussion period ends). The evaluation protocol is standard (please see details in Appendix 8.2). | | CIFAR-10 | CIFAR-100 | ImageNet-100 | | ----------- | ----------- | ----------- | ----------- | | VICReg | 91.15 / 99.64 | 66.76 / 89.39 | 79.28 / 94.64 | | Ours | 91.19 / 99.67 | 67.35 / 89.91 | 78.36 / 94.3 | These results are comparable to state-of-the-art even without any tuning for more favourable hyperparameters. Please see the attached PDF for Figure 1 and its caption reporting results on the incoherence across three commonly used SSL methods: SimCLR, BarlowTwins, and VICReg. Regarding the question about distributed low-rank matrix completion, indeed, there is an immediate connection between distributed low-rank MC and the stochastic gradient descent in SSL in the sense that they both perform a random sampling of the submatrices. While the distributed MC performs optimal approximations for all of the sampled submatrices simultaneously and adopts their averaging as a final step, in SSL typically one processes random batches sequentially. 
While the goal of the former (distributed MC) is to produce the matrix reconstruction or its low-rank factors, SSL aims to learn a parameterized function that computes a set of eigenfunctions at a given point. We could adopt the distributed approach in SSL, since the underlying approximation step is differentiable, as we still need to facilitate a learnable mapping. This review was particularly helpful in enhancing the submission, and we hope that our response comprehensively addresses the concerns the reviewer had regarding our work.
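As a small illustration of the trace-maximization objective discussed in this thread (our own sketch, not the paper's training code): for a fixed symmetric kernel matrix $K$, maximizing $\mathrm{tr}(Y^\top K Y)$ subject to $Y^\top Y = I$ is solved exactly by the top-$k$ eigenvectors of $K$, which is the eigenproblem a parameterized SSL encoder would approximate:

```python
import numpy as np

def trace_max_embedding(K, k):
    # argmax_Y tr(Y^T K Y) s.t. Y^T Y = I  ->  top-k eigenvectors of K
    w, V = np.linalg.eigh(K)              # eigenvalues in ascending order
    return V[:, ::-1][:, :k], w[::-1][:k]  # top-k eigenvectors and eigenvalues
```

By the Ky Fan theorem, the achieved objective value equals the sum of the top-$k$ eigenvalues, and no other orthonormal $Y$ can exceed it.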
Summary: Self-supervised learning methods can effectively leverage limited signals to converge towards meaningful representations, but how is it made possible? This paper tries to give a response. This paper establishes a connection between SSL and a matrix completion problem by showing that these are Lagrangian duals of each other. This further implies that optimizing the SSL objective simultaneously entails reconstructing the kernel matrix. This leads to some theoretical findings, including: - The trace maximization formulation entails several popular SSL methods: SimCLR, BarlowTwins, VICReg. - A less incoherent matrix is easier for matrix recovery, which explains why typical SSL methods rely on the representations (the incoherence is low) rather than the embeddings (the incoherence is high). Based on the theoretical insights, this paper proposes a trace maximization objective for SSL (eq. 4). Following are some findings from experiments: - The trace maximization objective is on par with existing SSL methods. - There is a negative correlation trend between the incoherence and the number of layers in the projection head. - The experiment findings support the hypothesis in proposition 4.3 — that incoherence indeed plays an important role in explaining the use of the backbone outputs. Strengths: - This paper proposes a novel, matrix completion formulation for SSL that can entail several popular SSL methods. - The analysis of the matrix completion formulation provides insights about various parts of SSL. - Empirically, the trace maximization objective leads to performance on par with other approaches. Weaknesses: I have some minor points about the usage of terminology — please refer to the comment section below. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Figure 2: What is RQ loss? - I’m not familiar with the SSL literature (I’m working on NLP), so the roles of the representation vs embedding appear different from what I thought.
Apparently, the representation is directly acquired from the backbone layers and is closer to the input. The embedding is further away from the input. In NLP, however, the embedding is the one that is closer to the input. But once I get my head around this discrepancy in terminology, I see that many of the intuitions and findings of this paper make sense. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: I do not see negative potential societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their perceptive feedback. The review offers a very well-structured and observant summary of the submission. We would like to add an additional comment about the incoherence and the role of the projection head. We think there might be a typo in the following --- *''A less incoherent matrix is easier for matrix recovery, which explains why typical SSL methods rely on the representations (the incoherence is low) rather than the embeddings (the incoherence is high)''*, but to be on the safe side, we would like to elaborate more. Higher incoherence intuitively means that information is *not* stored in the few important entries. Randomly sampling an incoherent matrix for a few of its entries will reveal more information than sampling a coherent one, which renders low-rank incoherent matrices possible to complete. Since common SSL methods train representations such that they recover spectral decomposition, the embeddings inherit this presumably high incoherence. But our proposition is that higher incoherence of representations makes them hard for downstream tasks. We posit that a projection head might play a buffer role for representations to be less incoherent and thus easier for downstream tasks. Next, we answer the reviewer's questions: - The RQ loss is a naming artifact; it denotes the trace maximization formulation. We apologize for this typo! We will change the notation accordingly. - Indeed, the terminology in NLP and SSL diverges in this regard, causing confusion. We tried to compensate for that by reiterating the meaning of the used terminology in the submission as often as possible. We want to thank the reviewer for their patience and persistence! --- Rebuttal Comment 1.1: Comment: Thank you for the clarification and answers. I am keeping the original score.
Summary: This paper aims to provide a theoretical understanding of the recent successes of self-supervised learning methods by leveraging tools like Laplacian-based dimensionality reduction methods and low-rank matrix completion. The authors introduce an eigen-problem objective for spectral embeddings from graphs, which is used to interpret modern self-supervised learning methods. Strengths: 1. Using an eigenproblem objective for spectral embeddings derived from graph augmentations, the authors explain the workings of contemporary self-supervised learning methods. This offers a fresh lens to understand self-supervised representation learning. 2. The paper further shows that self-supervised learning techniques can concurrently execute Laplacian-based nonlinear dimensionality reduction and low-rank matrix completion. This dual functionality further explains the success of self-supervised learning methods. Weaknesses: The approach presented in the paper is significantly dependent on the incoherence between the outputs of the backbone and the projection head. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could this approach help to understand the inner working of Large Language models (LLMs)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The paper needs more numerical experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback! First, we would like to address the concern about the dependence on the incoherence between the backbone and the projection head. We would first like to clarify that the underlying assumption for learning useful representations is that the similarity/kernel matrix of the data is somehow aligned with the downstream task. For example, if the measure of similarity between objects is ‘orthogonal’ to the downstream task, then we cannot hope to extract useful information into the representations. However, there is another aspect of whether we can produce useful representations. In this work, we argue that modern SSL methods perform spectral embedding and low-rank matrix completion. Successful matrix completion relies on the high incoherence of the matrix we want to complete. As all the methods considered in the submission can be seen as performing spectral decomposition, the produced representations inherit the incoherence of that matrix. We argue that this incoherence phenomenon could explain why the projection head is not used in the downstream tasks. All in all, we could summarize the above by saying that the problem itself, and thus all considered SSL methods, assumes and relies on the underlying similarity/kernel matrix being incoherent and thus recoverable. We would also like to note that we conducted more experiments with our trace maximization formulation and present the results in the PDF attached to the global response. Regarding the question about the inner workings of large language models, we are enthusiastic to explore whether this approach could offer insights into the representations learned by LLMs, as one may find many similarities in the sense that many NLP approaches are based on pretext tasks. However, at present we cannot meaningfully elaborate further on this matter. --- Rebuttal Comment 1.1: Title: Comments after rebuttal Comment: Thank you for your response.
I am keeping my overall recommendation at 6.
Summary: The authors observe that self-supervised learning (SSL) attracts growing attention and that, by now, numerous corresponding loss functions have been proposed. They systemize these from the point of view of Laplace operators (on Riemannian manifolds) and low-rank matrix approximation. Indeed, for SimCLR, BarlowTwins, and VICReg they show that these learn eigenfunctions of a Laplacian. They also demonstrate that models trained w.r.t. related trace maximization objectives reach performances that are on par with those resulting from modern SSL techniques. Strengths: This paper is a blast from the past in the best possible sense. More rigorous theoretical underpinnings of recent self-supervised learning models are still very much lacking and this paper makes considerable progress in this regard and shows a connection to low-rank matrix completion tasks. This contribution is theoretically rigorous and technically sound. It also shows that “well known” trace maximization objectives lead to models whose performance is on par with that resulting from modern SSL techniques. Weaknesses: A minor point of criticism is that several interesting experimental findings are deferred to the supplementary material. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Given the scope of the paper (bridging a gap between modern learning techniques and classically "well known" models), I can't see any practical limitations. Neither are there any concerns regarding negative societal impact. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their kind and perceptive feedback. We sincerely appreciate the reviewer’s high opinion of the strengths of this submission. We hope that additional experimentation results (attached as PDF, summarized in the global response) will only help reinforce this position. We will find a way to incorporate all interesting experimental findings in the main body. --- Rebuttal Comment 1.1: Comment: Thanks for your response; I am keeping my score.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time, effort, and considerate reviews! While we will address each review individually, in this global response, we would like to summarize the main pieces of those individual responses. First of all, we would like to present more experimental results concerning the performance of the trace objective and the incoherence tested on common methods (SimCLR, BarlowTwins, and VICReg). We have compiled them into a PDF for your convenience. There, Table 1 reports the top-1 and top-5 downstream accuracy (mean and standard deviation across 3-5 runs) of our proposed formulation (Ours) and VICReg across three datasets: CIFAR-10, CIFAR-100, and ImageNet-100. In short, the performance of the proposed objective is comparable to the state-of-the-art. Meanwhile, Figure 1 reports incoherence vs accuracy results for SimCLR, BarlowTwins, and VICReg on the same architecture and training/testing configuration. All methods reveal similar behaviour — backbone outputs are less incoherent and have better downstream performance, while projection head outputs have high incoherence and worse downstream performance. We caution, however, that incoherence should not be used to compare methods/models since it is not indicative of the information load in the representations/embeddings, which would typically be measured by metrics based on matrix rank. While we do not necessarily strive to overtake the state-of-the-art (we do not optimize the hyperparameters in our favour), we would like to emphasize the importance of establishing the connection of augmentation-based self-supervised methods to spectral embedding methods and low-rank matrix completion, which our submission aims to provide. This connection also hints at the reason why one typically disposes of the projection head once the model is trained and used in a downstream task. 
We posit that incoherence of the matrix that one tries to recover is behind it and find empirical support for this phenomenon. The reviews were instrumental in enhancing the presentation, shaping new experiments, and finding additional insights. We hope that our response provided better clarity on the submission, resolved any confusing aspects, and provided convincing reasons for a potential score increase. Pdf: /pdf/17d636517e8547d07f9cb7feed5d892bc3f63a4c.pdf
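The incoherence notion invoked in the rebuttals above is the standard one from low-rank matrix completion. As a hedged illustration only (this is not code from the submission; the matrix sizes and the coherence definition via row norms of a singular-subspace basis are our own choices), the following sketch contrasts an incoherent random low-rank matrix, which is easy to recover from few entries, with a "spiky" one whose energy sits on a few rows and which attains the maximal coherence:

```python
import numpy as np

def coherence(U):
    """Coherence of a subspace with orthonormal basis U (n x r):
    mu(U) = (n / r) * max_i ||U[i, :]||^2.  Ranges from 1
    (maximally incoherent) to n / r (maximally coherent)."""
    n, r = U.shape
    return (n / r) * np.max(np.sum(U**2, axis=1))

rng = np.random.default_rng(0)
n, r = 200, 5

# Incoherent case: column space of a random Gaussian rank-r matrix.
M_inc = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
U_inc = np.linalg.svd(M_inc, full_matrices=False)[0][:, :r]
mu_inc = coherence(U_inc)

# Coherent case: all energy concentrated on the first r rows.
M_coh = np.zeros((n, n))
M_coh[:r, :] = rng.standard_normal((r, n))
U_coh = np.linalg.svd(M_coh, full_matrices=False)[0][:, :r]
mu_coh = coherence(U_coh)

print(mu_inc, mu_coh)  # the spiky matrix hits the maximal coherence n / r
```

The point made in the global response can be read off this quantity: entries of the coherent matrix outside the first few rows carry no information about it, so entrywise sampling cannot recover it, while the incoherent matrix spreads information across all entries.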
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Analyzing the Sample Complexity of Self-Supervised Image Reconstruction Methods
Accept (poster)
Summary: The paper presents a theoretical analysis of the sample complexity of the problem of learning a linear denoiser with a self-supervised learning loss, and verifies the bounds on a series of experiments with linear denoising. The paper also presents an empirical study of the gap between self-supervised learning and supervised learning in the context of image reconstruction with neural networks (for denoising and compressive MRI problems). Strengths: - A theoretical bound illustrates the role of sample complexity in the setting of self-supervised learning for image denoising. A bound scaling as 1/N where N is the dataset size is presented. - The gap between supervised and self-supervised learning is evaluated for various imaging tasks, showing that self-supervised losses which are unbiased estimators of the supervised loss can achieve a performance on par with supervised learning for large sample sizes. Weaknesses: There is little link between the theoretical analysis of linear denoisers and the empirical results on non-linear denoising and reconstruction with deep networks. It is not clear whether the theory developed in the linear case can explain the non-linear setting. Moreover, some assumptions in the main theorem seem unrealistic in the context of deep learning, e.g. networks are trained on multiple epochs, not on a single pass as required by the theorem. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Does the dimension of the signal set play a significant role in Theorem 1? While the dimension d appears in eq. 5, it doesn't seem to strongly impact the final bound, which is somewhat surprising. Why doesn't the paper analyze learning with a SURE-based loss? This should also be an unbiased estimator of the supervised loss for the denoising case. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The paper discusses the limitations related to the choice of a specific neural network. However, I think it would be good to include a discussion of the limitations of using a linear denoiser analysis to understand the dynamics of learning highly non-linear denoisers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for the feedback. In the following we address the weaknesses and questions pointed out by the reviewer. - **Weakness and limitations; on the connection of theory and practice, and our theory pertaining to a linear estimator:** We think our theory and empirical results for the linear estimator and the empirical results for the neural networks are well connected in that the qualitative behavior found for a linear estimator theoretically is also observed empirically for neural networks. In particular, our theoretical results for denoising predict that the gap between noise2noise and supervised training depends on the noise variance $\sigma_e^2$ of the training targets and closes as the number of training examples $N$ increases, and that's exactly what we find empirically for neural networks as well. *Regarding the theory pertaining to a linear estimator:* In our theory section, we focus on a linear estimator as for the linear estimator we can make a precise statement, while for a complicated neural network we cannot. However, our result also applies to a more general setup, in particular to any estimator for which the loss function is strongly smooth and has a bounded stochastic gradient as formalized in the following Theorem, which we added along with proof and discussion to the appendix: > **Theorem 2.** *Consider the estimate $\mathbf{\theta} _ N$ obtained by running the SGM for $N$ iterations on the training set $\mathcal{D}$ with a decaying stepsize $\alpha_k = \frac{1}{c+k}$, where $c$ is a constant. Assume that the loss $h(\mathbf{\theta}) = ||f_{\mathbf{\theta}}(\mathbf{y}) - \mathbf{y}'||_2^2$ for any pair $(\mathbf{y},\mathbf{y}')$ is $Q$-strongly smooth and that the stochastic gradient is $(M,B)$-bounded. 
Then the expected generalization error, where the expectation is over the random training set $\mathcal{D}$, obeys* $$ \mathbb{E} [R(\mathbf{\theta} _ N) ] \leq R(\mathbf{\theta}^*) + \frac{Q}{2} \frac{1}{N-2} \frac{1}{m^2} (2M^2 e_0 + B^2). $$ > The Theorem follows from the definition of strong smoothness and Lemma 1 in Appendix B and its interpretation is equivalent to the interpretation of Theorem 1 for the linear case: We get a rate of $1/N$ and the term associated with maximizing the self-supervised loss becomes larger in the noise variance $\sigma_e^2$ on the training targets since a larger noise variance requires a larger parameter $B$. - **Weakness part 2, regarding the assumption in Theorem 1 to consider a single pass of the stochastic gradient method:** With a single pass over the training set we already get an optimal risk bound (up to constants), and thus there seems little value in analyzing multiple passes. Analyzing a single pass of SGD over a training set is a standard technique in the analysis of SGD and is widely accepted, for two reasons: Often it is sufficient to get optimal bounds up to numerical constants (as in our setup), and moreover performing multiple passes significantly complicates the analysis since then we can't leverage the independence of the samples as efficiently. - **Question part 1, the role of the signal dimension d in Theorem 1:** Good question, the signal dimension doesn't play a significant role in the theorem, since we scale the signal energy to be one in expectation, irrespective of the signal dimension. The other energies are also scaled to be independent of the signal and ambient dimension. The signal dimension does, however, play a role in that it determines how much noise is filtered out (i.e., the factor $d \sigma_z^2/n$); this is the standard factor that we expect from subspace denoising. - **Question part 2, why do we not consider a SURE-based loss?** That is an excellent question. 
The SURE loss gives an unbiased estimate of the loss under additional assumptions, i.e., Gaussianity of the noise. We consider a class of self-supervised noise2noise-like losses that is significantly more general in that it applies beyond Gaussian denoising to for example real-world camera denoising (see the pdf attached to our global response) and in that it generalizes to compressive sensing as discussed in the paper. We are interested in this more general class as it is much more widely applicable. We hope that our clarifications above address the reviewer's concerns and would appreciate it if the reviewer would consider raising their score. We are also happy to discuss further, thanks again for your comments. --- Rebuttal Comment 1.1: Comment: Many thanks for answering my questions. I have raised my score accordingly.
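The property underlying this whole exchange, that the noise2noise loss is an unbiased estimator of the supervised loss up to a constant offset that grows with the target-noise variance, can be checked numerically. The sketch below is ours, under our own assumptions (Gaussian signals and noise, an arbitrary fixed linear map standing in for the denoiser), not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 20, 200_000           # signal dimension, Monte Carlo samples
sigma_z, sigma_e = 0.5, 0.7  # input-noise / target-noise levels

W = 0.8 * np.eye(n)          # some fixed linear "denoiser" f(y) = W y

x = rng.standard_normal((N, n))                  # clean signals
y = x + sigma_z * rng.standard_normal((N, n))    # noisy inputs
y2 = x + sigma_e * rng.standard_normal((N, n))   # independent noisy targets

pred = y @ W.T
sup_loss = np.mean(np.sum((pred - x) ** 2, axis=1))   # supervised loss
n2n_loss = np.mean(np.sum((pred - y2) ** 2, axis=1))  # noise2noise loss

# Since the target noise e is zero-mean and independent of y,
# E[noise2noise] = E[supervised] + n * sigma_e^2: a constant offset,
# so both losses share the same minimizer over W.
print(sup_loss, n2n_loss, n2n_loss - sup_loss, n * sigma_e**2)
```

The constant offset `n * sigma_e**2` does not change which denoiser is optimal, which is why training on pairs of noisy images can match supervised training; the finite-sample gap the paper studies comes from estimating this noisier loss from N examples.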
Summary: The work investigates the cost of self-supervised training by characterizing its sample complexity. Strengths: 1. The paper is based on the given theory and carries out corresponding empirical research on self-supervised denoising and accelerated MRI. 2. The paper shows that a model trained with such self-supervised training is as good as the same model trained in a supervised fashion, but self-supervised training requires more examples than supervised training. 3. The paper shows that the performance gap between self-supervised and supervised training vanishes as a function of the training examples, at a problem-dependent rate, as predicted by the theory. Weaknesses: 1. The main concern is that the theoretical approach of this paper seems to be similar to [1], just extending from supervised to self-supervised settings. The corresponding contribution of the theoretical approach should be further elucidated. 2. In the results reported by some previous self-supervised denoising works (e.g., Neighbor2Neighbor), Noise2Noise generally performed the same as supervised methods. But in this work, even with a lot of training data, there is still a gap between the two; what is the reason for this? 3. The paper is based on the setting of simple Gaussian noise. I would like to ask if the authors have done corresponding research or experiments on real-world RGB noise. Is this work still applicable to real-world situations? [1] Scaling laws for deep learning based image reconstruction. ICLR 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please see the weaknesses. I am willing to improve the score if the concerns are addressed well. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations have been discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. In the following we address the weaknesses in the order as pointed out by the reviewer. - **Weakness 1, 'the theoretical approach of this paper seems to be similar to [1]':** We would like to point out that the theoretical approach of this paper is different from that of [1] in that both the setup and the proof technique are substantially different. Our work analyzes the self-supervised noise2noise loss, while [1] considers a standard supervised loss. Also, [1] considers an early-stopped estimator while we do not. The proof technique used in [1] is different from ours. Ours is based on a convergence analysis of the stochastic gradient method, while [1] is based on analyzing the iterates of gradient descent. - **Weakness 2, noise2noise performing equivalently to supervised training in other works:** Yes, in some publications a network trained with a noise2noise loss performs as well as a network trained with a supervised loss. However, this is misleading as it typically pertains to an unrealistic setup where the noise is resampled. Specifically, the original noise2noise paper [2] re-samples the noise on the noisy training targets in every training epoch, which requires the original image. Follow-up work like neighbor2neighbor [3] or noisier2noise [4] stuck to this approach. However, this is an unrealistic setup, since if we are given the ground truth images we can and should just train in a supervised manner. In our paper we consider the more realistic setup where we are given only two noisy realizations of an image. An important contribution of our work is to show that for this setup a gap between supervised and noise2noise training exists, which closes as the number of examples becomes large. - **Weakness 3, noise2noise for real-world image denoising beyond Gaussian noise:** That is an excellent point. Noise2noise like training for denoising is applicable beyond Gaussian noise. 
The condition formulated in Proposition 1 only requires the noise on the training targets to be uncorrelated with the residual. There are a variety of situations in practice where noise2noise like training is applicable. To demonstrate this point, we conducted additional experiments during the rebuttal period on real-world camera image denoising to empirically determine the performance gap between models trained in a self-supervised noise2noise and a supervised manner as a function of the number of training examples. Our results on denoising the raw images in the Smartphone Image Denoising Dataset (SIDD) [6] show that an initial gap of 2.6dB in PSNR for only 100 training patches reduces to 0.2dB for 100k patches. See the attached pdf for the full results that we'll include in the paper. There are other real-world settings where our results are applicable, e.g., to real fluorescence microscopy image noise; see [5]. We hope that our clarifications above address the reviewer's concerns. In particular we hope the reviewer's main concern, i.e., that the theory is similar to [1], has been addressed, and the secondary concern has been addressed with the new experiments on real-world denoising. If this is the case, we would appreciate it if the reviewer would consider raising their score. We are very happy to discuss further, thanks again for your comments. [2] Lehtinen et al. Noise2Noise: Learning Image Restoration without Clean Data. ICML 2018. [3] Huang et al. Neighbor2Neighbor: Self-Supervised Denoising From Single Noisy Images. CVPR 2021. [4] Moran et al. Noisier2Noise: Learning to Denoise From Unpaired Noisy Data. CVPR 2020. [5] Zhang et al. A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images. CVPR 2019. [6] Abdelhamed et al. A High-Quality Denoising Dataset for Smartphone Cameras. CVPR 2018. --- Rebuttal Comment 1.1: Title: Checking in Comment: Thanks a lot again for your review and feedback. 
We hope we have addressed your concerns. Please let us know if you have any remaining concerns and questions.
Summary: The paper is a study of the sample complexity for image reconstruction in two types of methods, self-supervised and supervised. The authors study the risk bounds for the case of self-supervised methods. They then evaluate the convergence rates in numerical and empirical experiments for two problems, denoising and compressive sensing. The authors conclude that the convergence rates are similar in the self-supervised and supervised scenarios. However, the self-supervised approach requires more iterations to reach a similar performance. Strengths: 1) The authors theoretically study and find specific risk bounds for using the self-supervised method. 2) The authors study the sample complexity empirically by considering a range of numbers of parameters for the network and a range of training set sizes. Weaknesses: 1) For many of the empirical experiments, the authors only report the best out of multiple runs. Illustrating the mean and some measure of variance among multiple runs gives a more complete picture than just the best case. Otherwise, the readers would wonder how reliable it is to do a single run of the self-supervised approach compared to the supervised approach. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1) In appendix 1, in the first line, I think the sign of the last term, $e$, should be negative, based on the given definition for $y'$. This results in changing the sign of a few terms in the following lines. But the conclusion still holds. 2) Since there is an assumption that the noise distributions for training and inference are the same in the case of compressed sensing, is it fair to call it self-supervised? 3) As the authors also point to in their limitations segment, the experiments are limited to U-net like designs for the architectures. However, they mention they do not expect using different designs would change the qualitative results. 
Could they elaborate on the intuition behind this expectation? - Typo: - Line 511: missing reference Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The title and the claims in the paper point to image reconstruction in general. However, the experiments are only limited to denoising and compressed sensing. The behavior might be very different for some other image reconstruction tasks such as image inpainting or super-resolution. This limitation should be made more prominent in the claims. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and their positive evaluation of our work. In the following we address the weakness and questions pointed out by the reviewer. - **Weakness 1, reporting only the best runs:** For all empirical results on denoising and compressive sensing presented in Figures 2, 4, and 5, the figures show all conducted runs, not only the best ones. The figures show that as the number of training examples increases the variance of the runs decreases, which is why for the largest training set sizes we only conduct a single run. Instead of drawing the performance curves in Figures 2, 4, and 5 based on the best performing models, we could also use the mean performance, which would shift all curves a little bit downwards but would not affect our findings. - **Question 1:** Many thanks for pointing out the typo; we fixed it. - **Question 2, is it fair to call the compressive sensing setup self-supervised:** Yes, we believe it is fair to call the accelerated MRI compressive sensing setup self-supervised since we do not have access to fully-sampled data during training. Specifically, during training, we only assume that we have access to 1/3 of all possible measurements, which we then split into a network input with an undersampling factor 1/4 and a corresponding training target containing the remaining measurements plus some overlap. At inference, we assume access to only 1/4 of all possible measurements. In contrast, a supervised method assumes access to fully-sampled data at training time. - **Question 3, why we expect the choice of network architecture not to affect our qualitative results:** Our result that the performance of a model trained in a noise2noise self-supervised way approaches the performance of a model trained in a supervised way is based on how well two different loss functions approximate the risk, and thus the qualitative findings do not depend on the explicit network architecture. 
This can be also seen from Proposition 1 and 2 in which we formulate the conditions under which this result holds for denoising and compressive sensing and which do not depend on the particular choice of the network $f _ {\mathbf{\theta}}$. Motivated by your question we added a reference to the Propositions in the limitations segment to make this point more clear. - **Limitations, regarding the generality of our claims:** As pointed out by the reviewer, our results are for a class of self-supervised methods based on constructing unbiased estimates of the gradients of the supervised loss and pertain to denoising and accelerated MRI. There are other problems where such self-supervised losses can be constructed, for example for CT imaging and for some cryo-EM setups, and for those setups the qualitative results from our paper also apply. However, we fully agree that the results and statements in our paper pertain to denoising and accelerated MRI. Also, there are many problems where two independently obtained measurements cannot be easily obtained, and then the class of methods discussed in this paper does not apply. We made both points more clear in the limitations section as suggested. Thanks! Thanks again for your review, and please let us know if you have any other questions. --- Rebuttal Comment 1.1: Title: Response to the authors Comment: Thank you for your detailed response to the reviewers' comments on your paper.
null
null
Rebuttal 1: Rebuttal: Dear reviewers, Attached is a pdf containing experimental results on real-world camera image denoising as discussed in our response to reviewer 8mky (weakness 3), who asked if our results hold for real-world noise beyond the Gaussian setup studied so far in our paper. The results demonstrate how the performance of models trained in a noise2noise self-supervised way approaches the performance of models trained in a supervised way as a function of the number of training examples also for real-world noise analogously to our previous results for Gaussian denoising. We hope that you find the additional material helpful and are happy to discuss further. Pdf: /pdf/c486eb7bfe773c23b07f5fa4e25ab24eed1f3374.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Activity Grammars for Temporal Action Segmentation
Accept (poster)
Summary: This paper proposes a grammar-based activity segmentation method. The authors proposed a grammar induction algorithm, as well as an improvement over the existing activity grammar parser, the Generalized Earley Parser (GEP). The proposed model shows improvements over prior works on both grammar induction and action segmentation. Strengths: - The exploration of marrying neural-symbolic representations is an essential aspect of research, especially given the limited application of neural-symbolic methods for real-world problems. - The proposed grammar induction method does improve existing prior works and could potentially be beneficial for future research. Weaknesses: - The experiments on activity segmentation are mainly compared with baselines that are not state-of-the-art. Though the ablations prove the improvement in grammar induction and parsing, the overall performance can not be justified by the current experiments (i.e., might need better results to justify the motivation of grammar-based activity understanding methods). - Another concern is the design of grammar. Given recent advances in language modeling and unsupervised grammar induction, the motivation of symbolic methods is not very clear or not shown by the experiments. - The overall formulation and algorithm design for the BEP follows from the GEP parser. - The notation used in this paper is error-prone and needs to be clearer, especially given that the grammar-based method largely depends on these notations. (e.g. L.146, should it be $a_i^{M}$). Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: See weakness section Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have properly addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer rBwB for their meaningful comments. ### **[Comparison with SOTA]** Following the reviewer's suggestion, we compare ours with the state-of-the-art methods in Table R3, where ours shows comparable or superior performance. Note that most of the refinement methods [A3, 9, A5, 5, A6, 15, 41] typically fine-tune their underlying temporal action segmentation models, whereas our method does not require such fine-tuning and can thus be applied to any black-box temporal action segmentation model. The Breakfast and 50 Salads benchmarks are the two most standard benchmarks for action segmentation. We will add more experiments to the final manuscript using another benchmark. ### **[Motivation of our approach]** Similar to other neuro-symbolic approaches [A7], our grammar-based method facilitates an explainable output using its grammar structure, improving the generalization performance to unseen action sequences, which we have demonstrated in Tables 2-3, Figure 4, Table S1, and Figure S3. Furthermore, combining a symbolic approach with neural networks can also increase robustness and reliability [A8]. In our case, the induced grammar is used to improve the noisy output of a temporal action segmentation network. To better see the effect, we conduct experiments in Table R2 using the ASFormer that performs multiple refinement stages in decoding [8, 42]; as shown in the first compartment of Table R2, the lower decoding stages show lower performance. The second compartment of Table R2 shows the results of applying our method to each stage of the decoder in ASFormer. It reveals that our grammar-based refinement brings more significant improvement when the initial action segmentation is from lower stages, i.e., less accurate. ### **[Effectiveness of BEP over GEP]** While both BEP and GEP are based on the Earley parser, they exhibit differences in their search algorithms. 
In detail, BEP and GEP have distinct orders for exploring production rules, leading to differences in the search space, as we applied pruning techniques to BEP. For better understanding, we explain how the depth-based priority works during parsing through the example in Table S1 and Section B.4 of the supplementary material. In addition, the motivation of BEP stems from the fact that GEP was unable to parse the CFG generated through KARI within a reasonable time (lines 175-177). To address the challenge, we have introduced additional pruning techniques and confirmed that the search algorithm proposed in BEP was more effective (Table 4), prompting us to adopt it as the preferred solution. ### **[Clarification on KARI]** Thanks for the correction. $\boldsymbol{a}^\mathrm{M_i}$ should be revised to $\boldsymbol{a}_i^\mathrm{M}$ in line 146. We will try our best to clarify notations and explanations in the final version of our paper. --- Rebuttal Comment 1.1: Comment: ### **[ Additional comparison with SOTA - Applying our method to DTL [41] ]** In addition to Table R3 of the pdf attached to the global response, we apply our method to DTL [41] on 50 Salads. Since its pre-trained weights are not yet released despite our request, we attempted to reproduce DTL incorporated with MSTCN [8] and ASFormer [42] based on the original paper and its official code repository [A9]. The reproduced results of DTL are summarized in Table R4. For both Table R4-(a) and R4-(b), the first, second, and third rows report performance of the baseline (either MSTCN or ASFormer), that of the combination of DTL and the baseline, and that of applying ours to the combination. The tables demonstrate that our method substantially improved the performance of DTL in all the segmentation metrics (i.e., edit score and F1 scores). 
This suggests that **our grammar refinement method is effective for DTL, regardless of the baseline it is incorporated with**; it indeed offers complementary benefits to various temporal action segmentation models, such as MSTCN, ASFormer, and DTL.

**[Table R4. The performance of applying our method to DTL]**

(a) MSTCN

| model | edit | F1@10 | F1@25 | F1@50 | acc |
|------------------------------|------|-------|-------|-------|-----|
|MSTCN (reprod.) |62.4|69.5|65.3|55.7|75.2|
|MSTCN + DTL [41] (reprod.) |67.3|74.9|72.7|64.7|79.9|
|MSTCN + DTL [41] + ours |68.4 (1.1&uarr;)|76.7 (1.8&uarr;)|74.8 (2.1&uarr;)|65.5 (0.8&uarr;)|78.9 (1.0&darr;)|

(b) ASFormer

| model | edit | F1@10 | F1@25 | F1@50 | acc |
|------------------------------|------|-------|-------|-------|-----|
|ASFormer (reprod.) |76.5|83.8|81.7|74.8|86.1|
|ASFormer + DTL [41] (reprod.) |78.8|85.1|84.2|76.3|86.3|
|ASFormer + DTL [41] + ours |80.2 (1.4&uarr;)|85.9 (0.8&uarr;)|84.8 (0.6&uarr;)|77.6 (1.3&uarr;)|85.4 (0.9&darr;)|

[A9] Ziwei Xu et al. DTL-action-segmentation. https://github.com/ZiweiXU/DTL-action-segmentation, 2022.
Summary: This paper proposes a new grammar induction algorithm, an effective parser, and a grammar evaluation framework. They improve temporal action segmentation by extracting and handling context-free grammars with recursive rules. The assessment presents good generalization and discrimination capabilities of the induced grammar. After reading the rebuttal, my concerns about the second and third points are resolved. For the first point, the authors provide the SOTA table as requested. However, the performance of the proposed method is not convincing. Overall, given that this paper introduces a new framework for temporal action segmentation, I still consider the pros to outweigh the cons and will keep my initial recommendation. Strengths: 1. KARI proposes a new grammar induction algorithm, an effective parser, and a grammar evaluation framework. They improve temporal action segmentation by extracting and handling context-free grammars with recursive rules. The assessment presents good generalization and discrimination capabilities of the induced grammar. 2. KARI outperforms state-of-the-art models on two benchmarks, Breakfast and 50 Salads, for the temporal action segmentation task. 3. The writing is good. Weaknesses: 1. The evaluation of MIF is based on Breakfast and 50 Salads. Can the authors provide results on more benchmarks? The SOTA table is missing in the paper. 2. The method is relatively new, but the details are mostly presented in the form of text. Some illustrations for the method's visual representation are missing. I think this would be very beneficial to readers' understanding and the dissemination of the paper. 3. The first row of MS-TCN results in Table 2 is better than KARI, but the authors did not bold it. Also, the authors should explain why KARI does not work here. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. In the first row of Section 3.1, the citation is missing.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations are not discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer FXdk for the constructive comments.

### **[Comparison with SOTA]**

Following the reviewer's suggestion, we compare our method with the state-of-the-art methods in Table R3, where ours shows comparable or superior performance. Note that most of the refinement methods [A3, 9, A5, 5, A6, 15, 41] typically fine-tune their underlying temporal action segmentation models, whereas our method does not require such fine-tuning and can thus be applied to any black-box temporal action segmentation model. The Breakfast and 50 Salads benchmarks are the two most standard benchmarks for action segmentation. We will add more experiments on another benchmark to the final manuscript.

### **[Grammar illustration]**

Following the suggestion, we created Figure R1 with example sequences and the KARI-induced grammar for better understanding. We also provide the induction example in Section A.3 of the supplementary material. In Figure R1-(a), we give an example sequence $\boldsymbol{a}$ of the 'milk' activity, where 'spoon powder' and 'pour milk' are the key actions, and visualize how the sequence is divided into sub-strings according to the key actions. Figure R1-(b) presents the KARI-induced grammar from $\boldsymbol{a}$.

### **[MS-TCN results in Table 2]**

Let us clarify that KARI works on MS-TCN in Table 2. The first row of each compartment indicates the performance reported in the original paper. We apply the proposed methods to the reproduced models, as the pre-trained model of MS-TCN is not open-sourced. Therefore, we compare the performance of the proposed methods with the reproduced performance in the second row of each compartment, following previous work [1, 14, 41].

--- Rebuttal Comment 1.1: Comment:

### **[ Additional comparison with SOTA - Applying our method to DTL [41] ]**

In addition to Table R3 of the pdf attached to the global response, we apply our method to DTL [41] on 50 Salads.
Since its pre-trained weights have not yet been released despite our request, we reproduced DTL incorporated with MSTCN [8] and ASFormer [42] based on the original paper and its official code repository [A9]. The reproduced results of DTL are summarized in Table R4. For both Table R4-(a) and R4-(b), the first, second, and third rows report the performance of the baseline (either MSTCN or ASFormer), that of the combination of DTL and the baseline, and that of applying ours to the combination, respectively. The tables demonstrate that our method substantially improves the performance of DTL in all the segmentation metrics (i.e., edit score and F1 scores). This suggests that **our grammar refinement method is effective for DTL, regardless of the baseline it is incorporated with**; it indeed offers complementary benefits to various temporal action segmentation models, such as MSTCN, ASFormer, and DTL.

**[Table R4. The performance of applying our method to DTL]**

(a) MSTCN

| model | edit | F1@10 | F1@25 | F1@50 | acc |
|------------------------------|------|-------|-------|-------|-----|
|MSTCN (reprod.) |62.4|69.5|65.3|55.7|75.2|
|MSTCN + DTL [41] (reprod.) |67.3|74.9|72.7|64.7|79.9|
|MSTCN + DTL [41] + ours |68.4 (1.1&uarr;)|76.7 (1.8&uarr;)|74.8 (2.1&uarr;)|65.5 (0.8&uarr;)|78.9 (1.0&darr;)|

(b) ASFormer

| model | edit | F1@10 | F1@25 | F1@50 | acc |
|------------------------------|------|-------|-------|-------|-----|
|ASFormer (reprod.) |76.5|83.8|81.7|74.8|86.1|
|ASFormer + DTL [41] (reprod.) |78.8|85.1|84.2|76.3|86.3|
|ASFormer + DTL [41] + ours |80.2 (1.4&uarr;)|85.9 (0.8&uarr;)|84.8 (0.6&uarr;)|77.6 (1.3&uarr;)|85.4 (0.9&darr;)|

[A9] Ziwei Xu et al. DTL-action-segmentation. https://github.com/ZiweiXU/DTL-action-segmentation, 2022.
Summary: This paper addresses the challenge of temporal action segmentation by introducing an activity grammar to guide neural predictions. The proposed approach involves a grammar induction algorithm (KARI) to extract a powerful context-free grammar from action sequence data. Additionally, an efficient generalized parser (BEP) transforms frame-level probabilities into a reliable sequence of actions based on the induced grammar. Experimental results on benchmark datasets show significant improvements in both performance and interpretability of temporal action segmentation. Strengths: This paper is well written which demonstrates its motivation, methodology and experiments. Especially, the method of this paper is easy to follow and the provided visualization is a plus to understand the proposed algorithm. The idea of introducing the recursive rules is inspiring, which helps to identify the repetitions of actions and action phrases. The proposed grammar evaluation scheme demonstrates the effectiveness of the proposed KARI-induced activity grammar, achieving good recall while maintaining reasonable precision. The experimental results look promising, achieving good performance on various benchmarks. Weaknesses: The benchmark is relatively small, which generates concerns about the scalability of the proposed method. Also, it’s good to know the computational cost of the proposed method to evaluate its scalability. The design of the proposed KARI is not well justified by using ablation study. It will be good to show the effectiveness of each component and its alternatives. Minor: Missing reference for ln 109 and 119. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Is the proposed method limited by the action type? For example, it could only work for well-structured activities. Confidence: 3: You are fairly confident in your assessment. 
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: as is mentioned in previous sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer CeJ9 for their meaningful comments and suggestions.

### **[Scalability and computation cost of the proposed method]**

Following the reviewer's suggestion, we analyze the computational cost of KARI and BEP. Since KARI induces activity grammars from the activity sequences in the dataset, the time consumption is directly affected by the number of sequences. The Breakfast dataset includes 196 unique action sequences from 1,712 video sequences. Figure R3-(a) shows the running time when gradually increasing the dataset size from 40% to 100%, where we can empirically estimate a linear running time with respect to the dataset size; it takes less than 0.1 seconds to induce a grammar from the entire set of activity sequences in our implementation. Figure R3-(b) shows a similar plot when varying the number of sequences from 200 to 1,000. These results indicate that KARI has reasonable scalability and can be applied to large-scale datasets. The computational cost of BEP depends on the complexity of the grammar $G$ generated by the induction algorithm. Regarding BEP, the time complexity of computing the parsing probability for each sequence is $O(T)$ [29], where $T$ denotes the number of frames, resulting in a worst-case time complexity of $O(T|G|)$ for the entire parsing process.

### **[Ablation study on KARI]**

Please refer to the general response for the ablation study on KARI.

### **[Limited to well-structured activities?]**

KARI induces an activity grammar by analyzing action sequences with flexible temporal dependencies across actions; it uses OR rules for temporally equivalent actions and AND rules for dependent actions. It does not require specific activity structures in action sequences, showing impressive generalization to real-world data (Table S2) as well as random synthetic data (Table 1).

### **[Missing reference]**

Thanks for letting us know. We will add citations of [A1] in line 109 and [A2] in line 119.
Summary: This paper presents a grammar induction algorithm that takes as input sequences of frame-level predictions and outputs structured sequences of actions. The advantage of their approach is that it allows recursive rules, which enhances its generalization abilities. Strengths: 1. The method is more flexible than previously proposed grammar induction methods. 2. The results are better compared to baselines. 3. All the steps are explained in detail and most of them are well justified. Weaknesses: 1. The grammar explanation (Section 3.2) is confusing, especially the large number of superscripts and subscripts representing different concepts. For example, what does E^{M(m,n)} mean? Or, in "t \in {b, a, k(m, n)}", what does the "b" represent? Probably simpler notation or a figure would help. I also did not understand what "each sub-string a_i^M starts with a key action and includes all the key actions in K" means. Do you start at a key action and not close the sub-string until all key actions have been found? But then, I assume there will only be one such sub-string in the whole sequence. Is there any overlap between sub-strings? Is it possible that a sequence does not contain any key action? 2. I do not think the synthetic activity grammars are a good setting to judge the quality of the proposed approach. Without delving into the details of the generated random grammars, it is unclear whether the biases introduced in the generation may make them easier to deal with for some kinds of methods than others. Real-world grammars do not have that problem. 3. There are no results showing the importance of the specific contribution. Did the authors identify cases where the inclusion of recursive rules made a difference?
Minor weakness: a few typos and grammar mistakes: "due to the reason" -> "due to this reason" (lines 4, 22), "and also be applied" -> "and can also be applied" (line 29), "we proceeds" (line 137), "has" -> "have" (line 70), "other researches" (line 74), a few empty citations (lines 109, 119), etc. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Would it be possible to combine this approach to perform end-to-end training with the frame-level predictions Y? This is, train the grammar and the temporal action segmentation network simultaneously. - Accuracy does not improve when using grammars (it actually decreases). Why is that the case? I would assume that having contextual grammatical information about future and past actions would improve the accuracy of the current action prediction. Do you think the accuracy would improve with a flexible/good enough grammar, or this is a limitation of using action grammars? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations and broader impact are discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer YXrU for the constructive comments and suggestions. We will revise Section 3.2 for clarity, using simpler notations and more illustrations.

### **[Grammar illustration]**

Following the suggestion, we created Figure R1 with example sequences and the KARI-induced grammar for better understanding. We also provide the induction example in Section A.3 of the supplementary material. In Figure R1-(a), we give an example sequence $\boldsymbol{a}$ of the 'milk' activity, where 'spoon powder' and 'pour milk' are the key actions, and visualize how the sequence is divided into sub-strings according to the key actions. Figure R1-(b) presents the KARI-induced grammar from $\boldsymbol{a}$.

### **[Partitioning strategy for sub-string $\boldsymbol{a}^\mathrm{M}_i$]**

In lines 143-144, the statement, "each sub-string $\boldsymbol{a}_i^\mathrm{M}$ starts with a key action and includes all the key actions in $\mathcal{K}$," indicates that the sub-string $\boldsymbol{a}_i^\mathrm{M}$ should include all of the key actions that occur in the activity. In the example in Figure R1-(a), the middle sequence $\boldsymbol{a}^\mathrm{M}$ is further divided into two distinct sub-strings, denoted as $\boldsymbol{a}^\mathrm{M}_1$ and $\boldsymbol{a}^\mathrm{M}_2$; $\boldsymbol{a}^\mathrm{M}_1$ consists of ['spoon powder', 'stir milk', 'pour milk', 'stir milk'] and $\boldsymbol{a}^\mathrm{M}_2$ comprises ['pour milk', 'spoon powder', 'pour milk']. Each sub-string starts with a key action, extends until all key actions of the activity have been included, and ends before the key action of the next sub-string begins. As this example shows, multiple sub-strings that contain all the key actions, i.e., $\boldsymbol{a}^\mathrm{M}_1$ and $\boldsymbol{a}^\mathrm{M}_2$, can exist in a single sequence $\boldsymbol{a}$.
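To make the partitioning rule above concrete, here is a small illustrative Python sketch. It is our own simplification for this response, not the actual KARI implementation; the function name `split_substrings` and the greedy rule are assumptions based on the description above.

```python
def split_substrings(seq, keys):
    """Greedy partition: each sub-string starts at a key action, extends until
    all key actions have appeared, then absorbs following actions until the
    next key action that can still open another complete sub-string."""
    subs, i, n = [], 0, len(seq)
    while i < n:
        seen, j = set(), i
        # extend until the current sub-string covers every key action
        while j < n and seen != keys:
            if seq[j] in keys:
                seen.add(seq[j])
            j += 1
        # absorb trailing actions until a key action that can open a new
        # sub-string still containing all the key actions
        while j < n and not (seq[j] in keys and keys <= set(seq[j:])):
            j += 1
        subs.append(seq[i:j])
        i = j
    return subs

# Example from Figure R1-(a): two key actions, two complete sub-strings
keys = {'spoon powder', 'pour milk'}
seq = ['spoon powder', 'stir milk', 'pour milk', 'stir milk',
       'pour milk', 'spoon powder', 'pour milk']
print(split_substrings(seq, keys))
# [['spoon powder', 'stir milk', 'pour milk', 'stir milk'],
#  ['pour milk', 'spoon powder', 'pour milk']]
```

On this example the sketch reproduces the split into $\boldsymbol{a}^\mathrm{M}_1$ and $\boldsymbol{a}^\mathrm{M}_2$ described above.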
### **[Middle variable $E^{\mathrm{M}(m,n)}$]**

In line 156, $E^{\mathrm{M}(m,n)}$ denotes a variable for the rule that covers an action sub-string between the $n$-th key action and the $(n+1)$-th key action when the $m$-th permutation $\pi_m$ is applied to the key actions. Since the temporal order between key actions can vary, we introduce the concept of key action permutations $\Pi^\mathrm{M}$ (line 146). In the example of Figure R1 with the two key actions 'spoon powder' and 'pour milk', there exist two possible permutations: $\pi_1=$ ['spoon powder', 'pour milk'] and $\pi_2=$ ['pour milk', 'spoon powder'], where $\Pi^\mathrm{M}=\{\pi_1, \pi_2\}$. The rule of $E^\mathrm{M(1,1)}$ thus has 'stir milk' as its body; this action occurs between 'spoon powder' and 'pour milk', i.e., the first and the second action of $\pi_1$.

### **[Action sequences without key actions]**

It is possible to extract a grammar from sequences without any key actions by simply putting the entire action sequence $\boldsymbol{a}$ into $\boldsymbol{a}^\mathrm{L}$.

### **[Evaluation on synthetic \& real dataset]**

We generated synthetic activity grammars based on the properties of real-world action sequences, which can be observed in existing activity datasets [20, 36]. The procedure for grammar generation can be summarized as follows. We randomly select one to five key actions among 20 terminal elements and randomly determine the number of variables. For production rules, we assign each terminal to variables in a random manner (lines 275-279) and generate rules by randomly composing them with 'OR' and 'AND'. In addition, we also conducted experiments using grammars without any key actions; 50 synthetic grammars are generated and evaluated with the same procedure as in the main paper.
As shown in the table below, the KARI-induced grammar shows strong generalization performance compared to the ADIOS-induced grammars, while its precision is comparable with the others. We also visualize the confusion matrix in Figure R2 for better understanding.

|grammar|precision|recall|
|---|---|---|
|ADIOS-AND|1.00|0.09|
|ADIOS-OR|1.00|0.60|
|KARI (ours)|0.84|1.00|

While the analyses with diverse synthetic grammars demonstrate the efficacy of KARI, we agree with the reviewer's concern that synthetic activity grammars may introduce biases different from those of real-world activity grammars. Note that we therefore also evaluated the grammar induction algorithms on the real-world dataset, as shown in Table S2 of the supplementary material.

### **[Ablation study on KARI]**

Please refer to the general response for the ablation study on KARI.

### **[End-to-end learning with KARI]**

In this paper, the proposed method is applied to pre-trained models without fine-tuning. Since we do not incorporate the grammar into the models during training, end-to-end training is currently unavailable. However, we believe that a learnable version of KARI could be trained jointly with the neural network, as in previous work [25, 26], and we leave this as future work.

### **[Accuracy drop]**

To analyze the slight decrease in accuracy despite significant increases in edit and F1 scores, we conduct a case study. We present the quantitative and qualitative results of a single sample in Figure R4 and the table below:

|model|acc|edit|F1@10|F1@25|F1@50|
|---|---|---|---|---|---|
|ASFormer[42]|91.87|85.71|85.0|85.0|77.5|
|Ours|90.86(1.01$\downarrow$)|100.0(14.29$\uparrow$)|95.78(10.28$\uparrow$)|95.78(10.28$\uparrow$)|87.32(9.82$\uparrow$)|

From the red box drawn in Figure R4, we observe that this discrepancy seems to be caused by adjusting the boundaries between actions based on the segmentation output.
Future improvements could be achieved by adding boundary regression modules [15] or other comparable methods.

### **[Typos and empty citations]**

In line 160, $t\in\{\mathrm{b}, \mathrm{a}, \mathrm{k}(m,n)\}$ should be revised to $t\in\{\mathrm{L}, \mathrm{R}, \mathrm{M}(m,n)\}$, which indicates the set of superscripts of the notations. We will add citations of [A1] in line 109 and [A2] in line 119.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions. We are happy to see that the reviewers have given our work a positive evaluation, noting that "the method is more flexible than previously proposed grammar induction methods (YXrU)," "the idea of introducing the recursive rules is inspiring (CeJ9)," "the assessment presents good generalization and discrimination capabilities (FXdk)," and "the proposed grammar induction method does improve existing prior works and could potentially be beneficial for future research (rBwB)." Nevertheless, the reviewers also raise important points: 1. the explanation of grammar induction can be improved for clarity with illustrations, 2. the grammar induction algorithm requires an ablation study, 3. comparison with the state of the art needs to be added, and 4. the motivation for the proposed method needs to be clarified. Through this rebuttal, we aim to clearly expound the components of KARI and their respective roles, compare the proposed method with the state-of-the-art models, and explain the motivation of our approach. We will revise the manuscript by incorporating the detailed comments from the reviewers. We include Figures R1-R4 and Tables R1-R3 for additional results in the pdf files. In response to the questions posed by reviewers YXrU and CeJ9 regarding the ablation study of KARI, the results are included in this general response.

### **[Ablation study on KARI]**

We conduct an ablation study to evaluate the two main components of KARI, key action extraction and recursive rule generation. The results are shown in Table R1, which demonstrates the effectiveness of both components. In particular, eliminating recursive rules (third row in Table R1) significantly decreases the overall performance. We will add these results to our final manuscript.

### **[References]**

[A1] F. Jelinek et al. Basic methods of probabilistic context free grammars. Springer Berlin Heidelberg, 1992. [A2] D.
Klein and C. D. Manning. A generative constituent-context model for improved grammar induction. ACL 2002. [A3] D. Wang et al. Temporal relational modeling with self-supervision for action segmentation. AAAI 2021. [A4] S. J. Li et al. MS-TCN++: Multi-stage temporal convolutional network for action segmentation. TPAMI 2020. [A5] Z. Wang et al. Boundary-aware cascade networks for temporal action segmentation. ECCV 2020. [A6] D. Singhania et al. Coarse to fine multi-resolution temporal convolutional network. arXiv 2021. [A7] P. Hitzler et al. Neuro-symbolic approaches in artificial intelligence. National Science Review, Volume 9, Issue 6, 2022. [A8] R. Evans and E. Grefenstette. Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research 61 (2018): 1-64. Pdf: /pdf/01d4d937cc7d2645b2f76df46b41e5942708aaac.pdf
NeurIPS_2023_submissions_huggingface
2023
Neural Fields with Hard Constraints of Arbitrary Differential Order
Accept (poster)
Summary: The authors propose a method to enforce hard constraint points on neural fields. Instead of a single black-box coordinate network predicting the field values, this work uses neural networks to learn basis functions which are then combined in a linear transformation. Given enough basis functions, the weights of this linear transformation can be found by solving a system of linear equations. Strengths: The paper is well-written and easy to understand. The authors have included their source code, which seems reasonably well organized. The method itself is simple and guarantees that constraint points are not violated. It outperforms existing methods on the MERL BRDF dataset. The paper includes a varied selection of experiments to validate the method, and some ablation studies have been performed and reported in the appendix. Weaknesses: The paper contributes little in terms of theory. While the derivation is simple enough, it would have been nice to see a theoretical argument for why this method converges faster than unconstrained training, at least for special cases. The constraints seem to be applicable only to single points, which is not clear from the abstract. Consequently, initial states and boundary conditions cannot properly be handled using this method. Experiments 4.1 and 4.4 are synthetic and very simple. Only experiments 4.2 and 4.3 measure real-world performance. The paper introduces six basis functions in Section 3.2, but they are never compared against each other in the experiments. The authors simply employ different basis functions for different experiments. This makes it hard to judge which one to use for any given problem. Comparisons against the state of the art are also sparse. Only the BRDF fitting experiment compares to related work. The paper does not include learning curves for the various experiments. Figure 5 is somewhat related to learning curves, but the quantitative evaluation of the training process is severely lacking.
Training time is not discussed in the paper either. Learning curves against wall-clock time would be appreciated. Minor: * L11: The claim in “Our approaches are demonstrated in a wide range of real-world applications.” seems a bit of a stretch. It’s more like one to two. * L41: Missing citation for the statement about inequality constraints. * L72-78: The review of previous work is missing stream functions for divergence-free fields (e.g. Deep fluids, 2019) and other conservation properties (e.g. Guaranteed Conservation of Momentum for Learning Particle-based Fluid Dynamics, 2022). Hamiltonian and Lagrangian networks have also been used with conserved properties in mind. * The citations in the main text do not have hyperlinks to the references at the end. * The hybrid kernel basis is poorly explained in 3.2. * Footnote 2 is not explained. It is unclear what kind of LaTeX expressions can be specified. * Chapter 5 is more outlook than summary * A.2 and Figure 5: Which experiment does this belong to? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: In Eq. 11, why do you minimize the log L1 loss, not the log L2 loss? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: While some advantages and disadvantages of various basis functions are mentioned, the authors do not provide a limitations section and only lightly touch on general limitations of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback.

## Theoretical analysis

We greatly appreciate this suggestion. Please refer to the general response, Q3.

## Summary of basis functions

Here, we summarize the properties of the various basis functions.

| | Independent basis | Constraint basis | Hypernetwork basis | Dot-product kernel basis | Gaussian kernel basis | Hypernetwork kernel basis | Hybrid kernel basis |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| Learning capacity | | Poor | Poor | Fair | Good | Good | Good |
| Linear independence | | Poor | Fair | Poor | Good | Good (if using Gaussian) | Good |
| Matrix sparsity | Dense | Dense | Dense | Dense | Sparse | Sparse (if using Gaussian) | Sparse |
| Number of model parameters independent of number of constraints | No | Yes | Yes | Yes | Yes | Yes | Yes |
| Controllable sparsity | No | No | No | No | No | No | Yes |
| Higher-order constraints | Yes | Yes | Yes | No | No | Yes | Yes (if using hypernet) |

For problems lacking high-order constraints, like the BRDF experiment, we recommend the Gaussian kernel basis. For problems involving high-order constraints, such as sparse shape reconstruction (lines 239-251), we suggest the hypernetwork kernel basis. For large-scale tasks, such as dense shape reconstruction (lines 252-255), we recommend the hybrid kernel. Please refer to Sec. 3.2 for the relevant explanations, and supplementary A2 and B2 for empirical comparisons.

## Single point constraints

We agree that our method applies to discrete points rather than a continuous set. However, CNF is perfectly suitable for initial and boundary value problems, as demonstrated in Sec. 4.4.
In fact, all the major PDE solvers, such as FDM, FEM, and spectral methods, solve initial and boundary value problems at discrete grid points. CNF can improve the performance of the Kansa method, a type of spectral method for solving general PDEs. Compared to mesh-based solvers such as FDM and FEM, the Kansa method has the advantage of solving a PDE on an irregular grid without meshing, i.e., grouping the points into triangles or quadrilaterals. CNF addresses a major limitation of the Kansa method (please refer to lines 260-266). We plan to perform a more extensive evaluation of CNF in comparison to other PDE solvers in future work.

## Hybrid kernel basis

The hybrid kernel was designed to promote the sparsity of the matrix in Eq. 4:

$\Psi_i\left(\mathbf{x}\right) = \kappa\left(\phi_i\left(\mathbf{x}_i\right), \phi_i\left(\mathbf{x}\right)\right) \kappa_G\left(\mathbf{x}_i ,\mathbf{x}\right)$

The first part, $\kappa\left(\phi_i\left(\mathbf{x}_i\right), \phi_i\left(\mathbf{x}\right)\right)$, can be either the Gaussian kernel basis or the hypernetwork kernel as defined in Sec. 3.2 (depending on the task). We multiply it with a compactly supported function $\kappa_G$ so that the matrix sparsity can be explicitly adjusted. A candidate $\kappa_G$ is a truncated Gaussian kernel such as:

$\kappa_G\left(\mathbf{x}_i, \mathbf{x}\right) = \begin{cases} \exp\left( -\frac{\|\mathbf{x}_i- \mathbf{x}\|^2}{2\sigma^2}\right) & \text{if } \|\mathbf{x}_i- \mathbf{x}\| < 3\sigma,\\ 0 & \text{if } \|\mathbf{x}_i- \mathbf{x}\| \ge 3\sigma. \end{cases}$

Reducing $\sigma$ yields a matrix with more zero entries.

## Comparisons against SOTA

Besides the BRDF fitting experiment, our shape reconstruction experiment also compares with the state-of-the-art work, NKF. It is important to note that, unlike our work, NKF by design cannot model normals as hard constraints. Please refer to supplementary A1 for a theoretical explanation and C2 for an empirical evaluation.
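To illustrate the controllable sparsity of such a truncated kernel, here is a short standalone Python sketch. It is our own example for this response (the function names and the 1D point set are hypothetical, not part of the CNF code).

```python
import numpy as np

def truncated_gaussian(xi, x, sigma):
    """kappa_G: Gaussian inside the 3*sigma support radius, exactly zero outside."""
    d = abs(xi - x)
    return np.exp(-d ** 2 / (2 * sigma ** 2)) if d < 3 * sigma else 0.0

# fraction of zero entries in the kernel matrix over 50 evenly spaced 1D points
pts = np.linspace(0.0, 1.0, 50)

def zero_fraction(sigma):
    K = np.array([[truncated_gaussian(a, b, sigma) for b in pts] for a in pts])
    return float(np.mean(K == 0.0))

# smaller sigma -> smaller support -> sparser matrix
print(zero_fraction(0.2), zero_fraction(0.05))
```

Shrinking $\sigma$ directly increases the fraction of exact zeros, which is the sparsity knob the hybrid kernel exposes.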
## Footnote 2 This highlights our user-friendly interface for defining complex, higher-order linear operators (source code in `diff_utils.py` in supplementary): ``` def compute_op(latex_str, y, x) ``` In the context of the advection operation $\frac{\partial f(x, t)}{\partial t} + \beta(x) \frac{\partial f(x, t)}{\partial x}$, our interface can parse `latex_str` if provided as `f_{x_1} + {beta}f_{x_0}` and compute this operation automatically. The usage of this interface will be further clarified through illustrative examples, which will be provided alongside the code release. ## A.2 and Figure 5 This is a standalone experiment where we minimize the condition number of the matrix $A_f$ in Eq. 4 given various basis functions. This experiment measures the inherent linear independence of basis functions of different designs. ## log L1 loss We follow the prior work NBRDF, which also uses log L1, for a fair comparison. BRDF values have a large variation in scale (0.1-700). To correctly fit both ends, logL1 provides a good balance as reported by the NBRDF paper. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying! I believe the learning curves and table comparing the different basis functions are a valuable addition. **Single point constraints** While it is true that classical methods also use boundary conditions at discrete points, these methods have a clearly defined and interpretable interpolation scheme. Fields defined through coordinate networks effectively use black-box interpolation between the constraint points. Can you set an upper bound on the deviation from the given boundary condition for all boundary points (constraint points and interpolated points)? --- Reply to Comment 1.1.1: Comment: Thank you for the suggestions. We will incorporate the additional reports into the revised paper. 
While classical methods such as FDM may employ interpretable interpolation schemes for continuous set evaluation, this interpolation requires a well-defined meshing of grid points – grouping the points into triangles or quadrilaterals. Note that, for these methods, meshing is required for both solving the PDE at the grid points and evaluating the solution away from the grid points. Meshing on an irregular grid, in the case of our experiment in Section 4.4, proves to be extremely challenging and severely harms the accuracy of these methods. In contrast, our approach, which is based on Kansa, allows for analytical and continuous evaluation of the solution function across the entire domain, without requiring any interpolation or meshing. Our PDE solver also does not use coordinate networks but rather our proposed skewed RBF as the basis function. We only suggest using neural networks when the behavior of the basis functions to be optimized away from the constraint points is highly complex, as in the BRDF and shape-reconstruction experiments. When it comes to solving PDEs, the only priors we need are linear independence and smoothness. Therefore, our proposed skewed RBF, which is essentially another variant of the Gaussian kernel basis but without the neural encoder, is sufficient. We will also highlight this in the summary of basis functions. The objective of our PDE experiment was to demonstrate that we can address a major limitation of Kansa – tuning the hyperparameters of the basis function. There have been studies [1] conducted to establish the error bounds of Kansa. The error estimate of Kansa is largely dependent on the selection of the basis functions and their shape parameters. CNF effectively refines the shape parameters to achieve a reduced error range, as evidenced in our empirical comparison. Note that the RMSEs reported in Section 4.4 and Supplementary D2 were measured away from the grid points. 
We leave the rigorous derivation of the exact error bounds of our approach to future work, where we plan to conduct a more comprehensive evaluation of CNF against other PDE solvers. [1] Kazemi, B.F., Jafari, H. Error estimate of the MQ-RBF collocation method for fractional differential equations with Caputo–Fabrizio derivative. Math Sci 11, 297–305 (2017).
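For intuition, the Kansa-style mesh-free collocation discussed above can be reduced to a toy 1-D demo (our sketch under assumed parameters, not the paper's solver or its skewed RBF): solve $u'' = -\sin(x)$ on irregular points with Gaussian RBF bases whose derivatives are analytic, then evaluate the solution continuously without any meshing.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(x, c, s=0.2):
    """Gaussian RBF basis centered at c; its derivatives are analytic."""
    return np.exp(-(x - c)**2 / (2 * s**2))

def phi_xx(x, c, s=0.2):
    """Analytic second derivative of phi, used in the collocation rows."""
    return phi(x, c, s) * ((x - c)**2 / s**4 - 1.0 / s**2)

# Irregular (jittered) collocation points on [0, pi]; no meshing needed.
n = 25
pts = np.linspace(0.0, np.pi, n)
pts[1:-1] += rng.uniform(-0.04, 0.04, n - 2)

# Collocate u'' = -sin(x) at interior points and u = 0 at the two boundary
# points; the exact solution is u(x) = sin(x).
A = np.array([phi(x, pts) if i in (0, n - 1) else phi_xx(x, pts)
              for i, x in enumerate(pts)])
b = np.array([0.0 if i in (0, n - 1) else -np.sin(x)
              for i, x in enumerate(pts)])
beta = np.linalg.solve(A, b)

# Mesh-free, continuous evaluation of the solution anywhere in the domain.
xs = np.linspace(0.0, np.pi, 200)
u = np.array([phi(x, pts) @ beta for x in xs])
print(np.max(np.abs(u - np.sin(xs))))  # error measured away from the grid points
```

The error of such a scheme depends strongly on the shape parameter `s`, which is exactly the hyperparameter that the rebuttal says CNF learns to refine.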
Summary: A broad range of problems can be formulated as linearly constrained problems, e.g., learning material appearance, interpolatory reconstruction, solving linear PDEs, etc. In order to solve linearly constrained optimization problems, this paper develops a novel hard constraint method that builds upon neural fields and a differentiable linear solver, named constrained neural fields (CNF). **Contribution**: 1. A novel methodology, CNF, is proposed; specifically, linear equality constraints are transformed into a linear system, i.e., eq. (3)-(4); 2. Then both the weights of the neural fields, $\beta_i$, and the learnable parameters, $\theta$, of the neural fields (see eq. (2)) can be learned by gradient descent on the objective function in eq. (1), given that a differentiable linear solver is applied to eq. (3)-(4); 3. The hybrid kernel basis is proposed, which is benchmarked against various basis functions (see section 3.2) and demonstrates advantages such as a stable condition number throughout training (Fig. 5); 4. In experiments, 4 examples from very different backgrounds are solved by CNF with superior performance, showing the potential of CNF as a general learning framework for linearly constrained problems. Strengths: **Originality**: CNF is a novel method for solving linearly constrained problems by implementing neural fields based on a differentiable linear solver. **Quality**: The paper shows that CNF can solve various problems with high performance. **Clarity**: The methodology is clearly presented. Worth mentioning, the analysis of the conditioning of the matrix due to different kernel methods (see Appendix A) is quite convincing about why the authors believe the Gaussian kernel with the hybrid kernel basis (see eq. (9)) is the best choice. **Significance**: This work can be applied to a broad range of linearly constrained problems. Weaknesses: The work relies on differentiable linear solvers, and therefore the most suitable problems for CNF are linearly constrained problems. 
In section 5, the authors discussed that CNF can be applied to nonlinear problems given a differentiable solver. However, such a differentiable solver for nonlinear problems is in general not easy to obtain. Hence, CNF is currently limited to linear problems. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: N.A. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the positive comments and feedback. We acknowledge the challenges posed by nonlinear problems, particularly in terms of convergence and the expensive computational graph of nonlinear solvers. Potentially, the latter could be addressed through the use of implicit layers. We plan to further study it in future work.
Summary: The paper presents a method for integrating hard constraints, represented by a linear operator, into neural field basis functions. This is achieved by learning kernel functions as basis functions at specific constraint points. Through experimentation, the paper provides evidence to show the effectiveness of the proposed method in comparison to unconstrained neural fields across diverse practical tasks. Strengths: Originality: The paper introduces a novel approach by directly applying linear operator constraints to the basis functions constructed with neural field functions and learning a linear representation. Also, the weights are found by applying a solver to a linear system, which is nice and removes some optimization problems when the weights should also be learned. Quality: The results obtained in the paper demonstrate the effectiveness of the proposed method across various tasks. I appreciate the efforts in comparing different common implementations of neural field bases. Clarity: The paper effectively explains the reasoning behind critical implementation choices, such as the selection of basis functions and the choice of kernel. Significance: The proposed approach addresses the significant challenge of the application of explicit hard constraints for neural fields. Weaknesses: - As a reader, I found it challenging to comprehend the training procedure without delving into the code. Therefore, providing a comprehensive explanation would facilitate understanding. This would make it easier to grasp the methodology. - Furthermore, sharing more details about how regularization is applied would provide valuable insights into the approach. Explaining the specific methods used for regularization and their impact on the model's performance would enhance the clarity and comprehensiveness of the paper. 
Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - To enhance clarity, it would be better for the authors to dedicate a separate section to explaining the model training process in detail. This section needs to explicitly highlight which parameters or weights are optimized and specify the optimizer/solver employed for this purpose. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors have addressed the limitations of the work in the Summary section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback. ## Details regarding the training procedure We appreciate your suggestion to dedicate a separate section to explain the training process in detail. Please refer to the general response Q1 for training details. ## Regularization The only regularization we recommend is a term containing the condition number of the matrix in Eq. 4. The condition number is a standard metric in numerical linear algebra that measures the sensitivity of the solution to errors in the input data, such as matrix coefficients or the right-hand side vector. A singular, noninvertible matrix has an infinite condition number. A smaller condition number corresponds to a smaller error in the solution satisfying the hard constraints. The condition number can be added as a regularization term to the main loss. It can be computed as the ratio of the maximal and minimal singular values of the matrix, obtained from its SVD. Smoothness is also a commonly preferred quality of neural fields, depending on the task. Smoothness can be promoted by adding a total variation term. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my concerns. I will maintain my initial rating.
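As a sketch of the regularization described in the rebuttal above (illustrative only; the function names and the weighting are our assumptions, not the paper's implementation), the condition number is the ratio of extreme singular values:

```python
import numpy as np

def condition_number(A):
    """kappa(A) = sigma_max / sigma_min from the SVD; infinite if singular."""
    s = np.linalg.svd(A, compute_uv=False)  # singular values, descending
    return np.inf if s[-1] == 0 else s[0] / s[-1]

def regularized_loss(task_loss, A, weight=1e-4):
    # Illustrative weighting; in an autodiff framework the SVD is
    # differentiable, so this term can be backpropagated to the basis
    # parameters theta that produced A.
    return task_loss + weight * condition_number(A)

print(condition_number(np.eye(4)))                       # 1.0 for the identity
print(condition_number(np.diag([1.0, 1.0, 1.0, 0.0])))   # inf: singular matrix
```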
Summary: The authors look at enforcing hard constraints on neural fields. Here, the problem formulation is to take continuous coordinates as input and predict the solution on these points as output. The neural field is represented as a linear sum of basis functions, and specifically, variants of a neural kernel function. The constraints must be linear operators which are then satisfied via a linear solver. Strengths: - Enforcing constraints more precisely on neural fields could improve prediction performance, and trying to do this via harder constraints seems promising. - The method utilizes flexible representations of neural fields (i.e., neural kernel fields; as well as other representations of the basis functions) to enforce the relevant constraints. Weaknesses: - The evaluation metrics are limited, and it is hard to contextualize these results with respect to other neural field methods. For example, the authors compare to other neural representation methods, but what about comparing to “soft constraint” approaches as well? It would be helpful to know how speed vs accuracy compares when enforcing the constraint in a softer way. - There is no discussion of the speed and training time to implement this hard constraint approach. It seems like it would be expensive to do a linear solve on these matrix systems, greatly hindering efficiency. More details about the linear solver would be helpful (as well as how the authors treat the system as fully differentiable). - One major limitation of this approach is that only linear operators are used as constraints (thereby being able to utilize eqn. 3). The method would be especially practical when the operator is non-linear. - It would be helpful to describe the problem formulation for each problem (inputs/outputs, form of constraint, what is being minimized, etc.), as this is not always clear from the paper. 
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Is the constraint enforced at both training and test time? - What linear solvers are being used here? The authors mention “note that when employing general solvers such as SQP and DC3…” but don’t give details about the solvers themselves. - What is the training time and inference time, and how does it compare to other NN fitting approaches? In particular, these linear solvers could be very expensive. - Can you show an example of using the method when the constraint is a non-linear operator? - In example 4.3, is eqn. 12 (Eikonal equation) being solved in a “soft constraint” way? Where are the hard constraints being enforced? - The format of some of the references needs to be corrected. For example, references 7, 16, 25, 39 do not specify a publication venue. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: - Limitations are not discussed (besides that this approach does not work for non-linear operators), but it seems like this approach would also be difficult to scale up for larger systems because of the expensive linear solves on larger systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. ## Comparison to soft constraint approaches In our experiment on learning material appearance described in Sec. 4.2, we compared CNF with FFN [35] and SIREN [32], two representative soft constraint methods. Our approach surpasses them qualitatively and quantitatively. Complete results are available in the supplementary (Section B3). Furthermore, we conducted extra evaluations by comparing CNF with SIREN in shape-reconstruction and PDE-solver experiments. CNF excels in both cases; please refer to Q2 in the general response for details. For a theoretical argument on CNF's superiority over soft approaches, please see Q3 in the general response. ## Training and inference efficiency, large-scale system Detailed insights into training duration, inference speed, and learning curves are available in the general response (Q2). CNF is efficient in both training (minutes) and inference (seconds). The design of CNF also effectively handles large-scale problems. We developed a hybrid kernel to ensure matrix sparsity (lines 163-170) and a patch-based sparse solver (introduced in lines 181-185 and evaluated in lines 252-255) tailored for this purpose. Importantly, as CNF weights, $\beta$, can be precomputed, inference does not require solving matrix equations. This further enhances its applicability to large-scale problems. For additional inference procedure details, please see the general response (Q1). In the context of sparse solvers, we acknowledge that our current implementation is not completely vectorized for fast GPU execution with batched training points, although the operation of the solver itself is highly efficient (execution within seconds). This limitation primarily arises due to the limited support of sparse matrix operations in popular ML frameworks. However, it is crucial to emphasize that this constraint does not reflect a shortcoming in CNF's design. 
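To illustrate the sparsity argument above with a toy stand-in (our sketch using SciPy's generic sparse solver, not the paper's patch-based solver; the point set and right-hand side are assumptions), a compactly supported kernel yields a constraint matrix that is mostly zeros:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

# A compactly supported (truncated) kernel makes the constraint matrix
# mostly zeros, so a sparse solver can replace the dense LU factorization.
n = 2000
pts = np.linspace(0.0, 1.0, n)
sigma = pts[1] - pts[0]                      # support radius 3*sigma
D = np.abs(pts[:, None] - pts[None, :])
K = np.where(D < 3 * sigma, np.exp(-D**2 / (2 * sigma**2)), 0.0)

A = csr_matrix(K)
print(A.nnz / n**2)                          # tiny fill ratio from compact support

g = np.sin(2 * np.pi * pts)                  # stand-in constraint values
beta = spsolve(A, g)                         # sparse solve instead of dense LU
print(np.max(np.abs(A @ beta - g)))          # hard constraints hold at the points
```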
We hope to attract more attention from the ML community to enhance support for sparse matrix operations within the frameworks to resolve this implementation constraint. We will include this discussion in the revised paper. ## Details about the linear solver For typical cases, we solve the matrix equation through LU decomposition with partial pivoting and row interchanges, which is fully differentiable if the matrix is full rank. Please see general response Q1 for details on how the linear solver is integrated into training. For large systems, we use the hybrid kernel basis to ensure matrix sparsity (lines 163-170) and a patch-based sparse solver (lines 181-185) for solving large sparse matrix systems. ## Practicality of solvers for linear operator constraints vs. nonlinear operator constraints The linear operator constraints cover a wide range of real-world problems, such as interpolating fixed points, fitting exact differentials, and solving linear ODEs and PDEs. Our method performs best among all prior approaches on the same linear problems. Most importantly, a general solution to nonlinear problems is typically not the ideal solver for linear problems, which often require special treatment to improve efficiency. For example, the simplex method as a solver for linear programming typically takes fewer iterations and less memory compared to a general solver. Therefore, linear and nonlinear problems are two separate problems requiring different solutions. A nonlinear solver cannot be a universal solution for both scenarios. Hence, we focus on linear problems, and leave nonlinear problems as separate future work. CNF can be extended to non-linear operator constraints if step 1 of the training process (please refer to the general response Q1) is replaced with a nonlinear solver to solve a nonlinear version of Eq. 3. 
However, such a nonlinear solver would not be an ideal solution for linear operator constraints, as its convergence and efficiency are both difficult to analyse and promote. ## Problem formulation of each experiment We are happy to provide the detailed mathematical formulation of each experiment during the author-reviewer discussion period. Unfortunately, the rebuttal word limit does not allow us to include these details. ## Is the constraint enforced at both training and test time? Yes, please refer to the training and inference details in Q1 of the general response for how the constraint is enforced at both training and test time. ## Discussion of SQP and DC3 Please refer to general response Q3 for details. ## Show an example of using the method when the constraint is a non-linear operator? CNF can be extended to non-linear operator constraints if step 1 of the training process (please refer to the general response) is replaced with a differentiable nonlinear solver to solve a nonlinear version of Eq. 3. Any iterative method that converges, such as Gauss-Newton, can be a candidate solver. However, the challenge lies in the convergence and the costly computational graph of such iterative solutions. The latter can be potentially addressed through the use of implicit layers [12], which we leave as future work. ## Is the Eikonal equation being solved in a “soft constraint” way? Where are the hard constraints being enforced? That is correct: as the Eikonal equation is a non-linear PDE, we choose to introduce its geometric bias into the optimization as a soft constraint via training. In this case, the hard constraints are on the points and the point normals themselves – that is, for all $x$ and $n(x)$ in our point set $P$, $F(x) = 0$ and $\nabla F(x) = n(x)$. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you to the authors for your responses, I appreciate it. 
I have gone through the responses and the paper again, as well as looked at all the related work. A major limitation of this work is that it is currently only used for linear operators, and that means that it cannot tackle a number of more realistic systems at this point. I know that the authors describe that another solver, like Gauss-Newton, can be used, but it is challenging to converge and the computational graph is expensive. Thus, it seems like this approach is limited because it cannot handle a very large number of important use cases. One of the related constraint works, PDE-CL, seems to be able to work on non-linear differential operators. Edit (2 hours after the above comments): The other point I want to make about training and inference speed is that right now, you are only comparing to other neural network approaches. For a number of these problems, you should also be comparing to classical numerical methods. --- Reply to Comment 1.1.1: Comment: Thank you for your comment. ## Nonlinear operator constraints We acknowledge that solving problems with nonlinear operator constraints is not the focus of this paper, as previously discussed in the limitations section. However, within the realm of scientific computing, it is crucial to assess the limitations of different computational methods from a holistic standpoint, as elaborated upon in our rebuttal. For example, consider the problem of matrix decomposition for solving linear systems. The Cholesky decomposition is highly efficient and stable compared to the more general LU decomposition, but it is limited to positive definite matrices. However, it would be unjust to argue that LU is superior to Cholesky due to its broader applicability, or that Cholesky is superior to LU due to its efficiency and stability. Cholesky and LU were tailored for distinct objectives. The objective of our paper was to devise highly efficient and stable methods for addressing linear operator constraint problems. 
Our approach outperforms existing works in such scenarios, as substantiated by both theoretical analysis and empirical evaluations. There are also several facts to highlight: - PDE-CL also focuses on linear problems. Their extension to nonlinear problems is no different from what we have suggested (replacing the linear solver with a non-linear least-squares solver) and shares the same challenges as we described. Their evaluation of nonlinear problems is limited to the Burgers equation. - We have extended the formulation of PDE-CL to a broader context and have shown improvements over their design; please refer to the general response Q3. - As PDE-CL is a concurrent work, a comprehensive empirical comparison with it is infeasible. ## Comparison to classical numerical methods Please refer to “comparison to general approaches” in the general response Q3 for a theoretical analysis of CNF’s superiority compared to classical methods. This applies to almost all types of classical numerical methods for constrained optimization. We will incorporate this analysis in the revised paper. A fair comparison of classical numerical methods would involve solving the constrained optimization problem in the form of the equation in general response Q3. This is computationally infeasible for most classical methods due to 1) the high dimensionality induced by $\theta$ and 2) the nonlinearity of the formulation. Thus, we follow the standard practice and only present training and inference times w.r.t. other neural network baselines.
Rebuttal 1: Rebuttal: We thank the reviewers for the detailed and constructive feedback. Below are our responses to the common questions: # Q1. Training and inference details We offer a thoroughly tested codebase that assists users in modeling challenging constraints using CNF. To ensure comprehensiveness, we will also incorporate the following descriptions and pseudocode blocks in the supplementary. Recall the training objective $ \underset{\theta}{\arg\min}\; \mathcal{L}\left(f_\theta;\theta\right) \quad \text{s.t. } \mathcal{F} \left[f_\theta\right] \left(\mathbf{x}\right) = g\left(\mathbf{x}\right) \;\forall \mathbf{x} \in \mathcal{S}:= \left\lbrace x_i\right\rbrace_{i=1}^I, $ where $ f_\theta\left(\mathbf{x}\right) = \sum_i \beta_i \odot \Psi_i\left(\mathbf{x}\right), $ and $\theta$ indicates the learnable parameters of each basis function $\Psi_i$. The following procedure describes the training process: ```algorithm
Repeat:
  1. Compute 𝜷 by solving Eq. 4
  2. Compute gradient ∂𝐿/∂𝜃 = ∂𝐿/∂𝑓 ∂𝑓/∂𝜃
  3. Update 𝜃 via gradient descent
Until converged
``` Step 1 computes the weights $\mathbf{\beta}$ to ensure that the constraints are always satisfied throughout training. Steps 2 and 3 update the training parameters $\theta$ to optimize the training loss $\mathcal{L}$ under the constraints. $\mathbf{\beta}$ is computed using an LU decomposition with partial pivoting and row interchanges, which is fully differentiable if the matrix in Eq. 4 is full rank. The computation of $\mathbf{\beta}$ constructs a computational graph that tracks $\frac{\partial \mathbf{\beta}}{\partial \theta}$. Any loss function with a valid gradient $\frac{\partial \mathcal{L}}{\partial f}$ can be smoothly integrated into our training process. Next, we have the inference algorithm: ```algorithm
Input: 𝑥
Output: 𝑓_𝜃(𝑥)
If 𝜷 is None:
  Compute 𝜷 by solving Eq. 4
𝑓_𝜃(𝑥) ← Σᵢ 𝜷ᵢ ⊙ Ψᵢ(𝑥)
``` Here, $\mathbf{\beta}$ only needs to be pre-computed once since it does not depend on the evaluation point $\mathbf{x}$. As a result, performing inference with CNF is very efficient and boils down to computing a linear combination of $I$ basis functions, a task that can be vectorized for efficiency. # Q2. Training/inference time, learning curve, and additional evaluation Please refer to the attached PDF for details. CNF is efficient in training (minutes) and inference (seconds). We also compare with SIREN [32] for shape reconstruction and solving PDEs. CNF demonstrates superior performance in all experiments compared to SIREN. # Q3. Theoretical analysis Here, we provide a summary of the theoretical argument that elucidates CNF's superior performance over existing solutions: ### Comparison to general approaches Many general algorithms in scientific computing, including the popular SQP and more recent DC3 [11], adopt a formulation for the constraint optimization problem as: $ \underset{\theta}{\arg\min}\; \mathcal{L}\left(\theta\right) \quad \text{s.t.} \quad h_1 \left(\theta \right) = 0, h_2 \left(\theta \right) = 0, \ldots $ where $\theta$ denotes the learnable parameters. When it comes to the case where $\theta$ represents the weights of a deep neural network, the constraints become extremely high-dimensional and nonlinear, especially when the number of constraints grows. Under this formulation, a constraint is only linear when $h(\theta)$ is a linear function of $\theta$. Therefore, the linearity does not hold when $\theta$ represents the weights of a neural network. In our formulation, we only require the operator $\mathcal{F}$ to be linear, while the neural basis functions $\Psi$ can still be highly nonlinear with respect to $\theta$. Therefore, our formulation reduces the problem's complexity and allows us to explicitly determine and promote the existence of the solution. 
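The differentiability of the linear solve in Q1's step 1 can be made concrete with the adjoint of a linear system, checked against finite differences (a minimal sketch under assumed Gaussian bases and a scalar shape parameter `theta`; the paper instead relies on an autodiff LU solve):

```python
import numpy as np

# Step 1 solves A(theta) beta = g; the gradient of the training loss w.r.t.
# theta flows through that solve via the adjoint formula
#   dL/dtheta = -lam^T (dA/dtheta) beta,  where  A^T lam = dL/dbeta.
c = np.linspace(0.0, 1.0, 8)                 # hypothetical basis centers
xi = np.linspace(0.0, 1.0, 8)                # constraint points
g = np.cos(xi)                               # constraint values

def A_of(theta):
    """Constraint matrix for bases Psi_j(x) = exp(-theta (x - c_j)^2)."""
    return np.exp(-theta * (xi[:, None] - c[None, :])**2)

def loss_and_grad(theta):
    A = A_of(theta)
    beta = np.linalg.solve(A, g)             # hard constraints hold exactly
    L = 0.5 * beta @ beta                    # toy training loss L(beta)
    lam = np.linalg.solve(A.T, beta)         # adjoint solve, rhs = dL/dbeta
    dA = -(xi[:, None] - c[None, :])**2 * A  # dA/dtheta
    return L, -lam @ dA @ beta

theta, eps = 20.0, 1e-6
L0, dL = loss_and_grad(theta)
fd = (loss_and_grad(theta + eps)[0] - loss_and_grad(theta - eps)[0]) / (2 * eps)
print(dL, fd)  # analytic and finite-difference gradients agree
```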
### Comparison to soft constraint approaches While there have been attempts to model constraints by overfitting an NN trained with regression, CNF has several clear advantages over such soft approaches: - CNF satisfies hard constraints without training, while soft approaches may require extensive training. - Despite extensive training, soft constraint approaches may fail to satisfy hard constraints due to inherent limitations in their learning capacity. In contrast, CNF provides a robust guarantee of hard constraint satisfaction within machine precision error, provided the condition number is small. - Another drawback of soft approaches becomes evident when imposing priors. The incorporation of priors often involves introducing an additional term to the loss. Consequently, a tradeoff arises between the constraints and the prior, which is controlled by a hyperparameter. In contrast, CNF offers a clean solution without such a tradeoff. With CNF, the inclusion of priors does not compromise hard constraints, thereby maintaining a harmonious balance among various aspects of the model. ### Generalization of NKF and PDE-CL CNF generalizes the prior works NKF [38] and PDE-CL [25]. NKF employed dot-product kernel bases for 3D reconstruction, while PDE-CL used constraint bases to solve PDEs. However, their bases exhibit poor performance in terms of linear independence and learning capacity (refer to supplementary A2, B2). Additionally, their dense matrices make them unsuitable for large-scale problems. The simple dot-product kernel in NKF also cannot handle higher-order constraints (refer to supplementary A1 for explanations and C2 for evaluation). We introduce several novel variations of basis functions to enhance linear independence, learning capacity, and matrix sparsity, along with strategies to analyze and promote solution existence. 
Our work unifies NKF and PDE-CL and demonstrates that CNF can be applied to general constrained optimization problems. Pdf: /pdf/ef605ff68a3bbd1ae12087bd021421684f97035a.pdf
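The hard-vs-soft distinction in Q3 can be illustrated with a toy interpolation problem (our sketch, not the paper's experiments; the kernel, data, and the L2 weight penalty standing in for a prior are all assumptions): the hard linear solve satisfies the constraints to machine precision, while a penalized fit leaves a residual set by the trade-off hyperparameter.

```python
import numpy as np

# Constraint f(x_i) = y_i with Gaussian bases centered at the data points.
x = np.linspace(0.0, 1.0, 10)
y = np.sin(2 * np.pi * x)
K = np.exp(-(x[:, None] - x[None, :])**2 / 0.02)

# Hard constraints: solve the linear system, as in CNF's step 1.
beta_hard = np.linalg.solve(K, y)
hard_res = np.max(np.abs(K @ beta_hard - y))

# Soft constraints: penalized least squares with a prior weight lam.
lam = 1e-3
beta_soft = np.linalg.solve(K.T @ K + lam * np.eye(10), K.T @ y)
soft_res = np.max(np.abs(K @ beta_soft - y))

print(hard_res)   # near machine precision
print(soft_res)   # visibly nonzero; grows as lam grows
```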
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Particle-based Variational Inference with Generalized Wasserstein Gradient Flow
Accept (poster)
Summary: This paper proposes to compute gradient flows of the KL divergence in the space of probability measures endowed with a generalization of the Wasserstein distance, which uses a more general cost based on Young functions, called Generalized Wasserstein gradient flows (GWGF). The authors provide the forward Euler scheme of such a gradient flow and observe that the choice of the Young function has an impact on the convergence rate of the flow. Thus, besides proposing an algorithm to solve the GWGF with neural networks and analyzing its convergence rate, they also propose an algorithm to adapt the Young function in order to obtain an accelerated convergence rate. Finally, they propose several applications and compare with other gradient-flow-based methods such as SVGD or the preconditioned functional gradient flow. They show that they obtain consistently better results and also illustrate the benefits of the adaptive algorithm. Strengths: The paper is overall pretty clear and the idea to perform gradient flows in this class of OT problems with more general costs is very appealing. - Interesting idea well motivated - Good theoretical analysis - Many applications with comparisons with other methods Weaknesses: - It is not a big weakness, but the class of Young functions investigated seems not very big (as it only experimentally considers $c(x,y)=\|x-y\|_p^p$ with $p\in [1.1,4]$). A test with a less common cost could have been interesting. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The existence of the OT map for the Wasserstein cost is not discussed and is assumed. In [1] (Section 1.3), the result is stated when the cost is of the form $c(x,y)=g(x-y)$ with $g$ strictly convex. In the Young function class, $g$ is only assumed to be convex. So I think I am missing something here. In equation (7), I believe that a $\frac{1}{2h}$ is missing under the Wasserstein cost. In Figure 1, the SVGD is not reported. 
Is it because it is hard to find good hyperparameters which converge for multimodal distributions? I think it would have been nice to add a plot of the evolution of p in the adaptive version of GWG. Equation (13) is referred to as the Euler-Maruyama scheme. In my opinion, this is the forward-Euler discretization and not the Euler-Maruyama scheme, which refers to the discretization of SDEs. [1] Santambrogio, F. (2015). Optimal transport for applied mathematicians. Birkhäuser, NY, 55(58-63), 94. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: The limitations are addressed in appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable questions! We address your comments and questions as below. ### Weaknesses `W1`: The class of Young functions investigated seems not very big. A1: The choice of Young function is quite delicate since it should strike a balance between accelerating convergence and maintaining numerical stability. Apart from the $\\|\cdot\\|_p^p$ and the preconditioned quadratic form proposed in ([1]), we further test other functions like MGF $g(\\cdot)=\exp\left(\\|\cdot\\|_2^2/(2\\sigma^2)\right)-1$ on the 2D-Gaussian mixture model. The results show that this class of Young functions may suffer from numerical instability and poor acceleration performance, and thus is not preferred. For even more general $g$ functions, one interesting future direction is that we can parameterize $g$ as an ICNN [2]. ### Questions `Q1`: The existence of the OT map for the Wasserstein cost is not discussed and is assumed. A1: Thanks for your advice! The general convexity is indeed not sufficient to ensure the existence of the OT map. To be mathematically rigorous, we will add the "strictly convex" requirement in our revision. `Q2`: In equation (7), I believe that a $\\frac{1}{2h}$ is missing under the Wasserstein cost. A2: We beg to differ. Actually, our definition of cost function is $c_h(x,y):= g(\\frac{x-y}{h})h$ in Theorem 1. As a sanity check, if $g(\cdot)=\frac{1}{2}\\|\cdot\\|^2$, then in eq (7), $W\_{c\_h}(\\mu, \\mu\_{kh})=\\frac{1}{2h}W\_2^2(\\mu, \\mu\_{kh})$. This corresponds to the $L\_2-GF$ ([3]). `Q3`: In Figure 1, the SVGD is not reported. It would have been nice to add a plot of the evolution of $p$ in the adaptive version of GWG. A3: We reported the results of SVGD and the plot of the evolution of $p$ in the rebuttal PDF. Although SVGD may be fast at first, it fails to converge to the target distribution in 300 iterations. 
On the other hand, after 50 iterations, the neural network in Ada-GWG can estimate the vector field accurately and exhibits a significant acceleration effect. `Q4`: Equation (13) should be referred to as the forward-Euler discretization. A4: Thanks for your advice; we will correct this mistake in our revision. [1] Dong, Hanze, et al. ‘Particle-Based Variational Inference with Preconditioned Functional Gradient Flow’. arXiv preprint arXiv:2211.13954, 2022. [2] Amos, Brandon, et al. ‘Input Convex Neural Networks’. International Conference on Machine Learning, PMLR, 2017, pp. 146–155. [3] Ambrosio, Luigi, et al. Gradient Flows: In Metric Spaces and in the Space of Probability Measures. Springer Science & Business Media, 2005. --- Rebuttal Comment 1.1: Comment: Thank you for the response, the clarifications and the additional experiments. I am overall convinced by this work and will keep my rating unchanged.
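Since the distinction raised in `Q4` may be unfamiliar, the following toy sketch (not from the paper; the vector field and step size are made up for illustration) contrasts the deterministic forward-Euler particle update of the form in eq (13) with the Euler-Maruyama discretization of an SDE, which adds a Gaussian increment at each step:

```python
import numpy as np

def forward_euler_step(x, v, h):
    # Deterministic particle update of the form used in eq (13): x_{k+1} = x_k + h * v(x_k).
    return x + h * v(x)

def euler_maruyama_step(x, drift, h, rng):
    # Euler-Maruyama discretization of the SDE dX = drift(X) dt + sqrt(2) dB:
    # same drift term, plus a Gaussian increment of scale sqrt(2h).
    return x + h * drift(x) + np.sqrt(2.0 * h) * rng.standard_normal(x.shape)

# Toy illustration with the Ornstein-Uhlenbeck drift v(x) = -x (not the paper's learned field).
rng = np.random.default_rng(0)
x0 = np.ones(5)
x_det = forward_euler_step(x0, lambda x: -x, 0.1)        # deterministic: exactly 0.9 everywhere
x_sde = euler_maruyama_step(x0, lambda x: -x, 0.1, rng)  # stochastic: 0.9 plus noise
```

The two updates share the drift term; only the noise distinguishes them, which is the reviewer's point about the naming of eq (13).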
Summary: The paper's primary focus is the use of a generalized Wasserstein gradient flow of the KL divergence for solving the sampling problem in a particle-based variational inference framework, which they name Generalized Wasserstein Gradient Descent (GWG). Unlike the usual Wasserstein gradient flow of the KL, which corresponds to the Langevin diffusion, the implementation of this generalized gradient flow is more complex, hence the authors propose learning the gradient vector field via a neural network. They further back their proposal with a theoretical analysis which indicates that if the neural network learns the vector fields sufficiently well, guarantees can be obtained in an $L^q$ variant of the Fisher information. Strengths: The paper presents a novel sampling algorithm using generalized Wasserstein gradient flows, which provides an interesting avenue for designing alternative approaches to SVGD. The algorithm is backed by theory. The authors present robust guarantees about the convergence of the proposed method given certain conditions on the learning ability of the neural network. Weaknesses: While the approach presented in the paper is interesting, the technical novelty appears to be limited, as the underlying analysis follows the framework presented by Balasubramanian et al. Assumptions 1 and 2, which concern the neural network's capability to accurately learn vector fields, form a significant portion of the work's theoretical underpinning. However, the realism of these assumptions in practical scenarios is unclear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In the context of existing literature, the reference to Bernton (2018) on line 88 seems to be out of place. Would it not be more accurate to reference the Jordan–Kinderlehrer–Otto (JKO) paper? Lines 170-171 contain an inaccurate claim regarding the dissipativity assumption in Balasubramanian et al.
(the dissipativity assumption they use is much weaker than the log-Sobolev inequality) – could this be corrected? The paper heavily relies on Assumptions 1 and 2 regarding the accuracy of the neural network's learning of the vector fields. Could the authors shed more light on how realistic these assumptions are and discuss any empirical evidence that might support these assumptions? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The fact that the analysis hinges on an assumption for which it is unclear if it holds in practice, namely accurate estimation of the vector field via neural networks, should be addressed more clearly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable questions! We address your comments and questions below. #### Weaknesses `W1`: The technical novelty appears to be limited, as the underlying analysis follows the framework presented by Balasubramanian et al. A1: We indeed borrow some ideas from [1]. However, our analysis is much more involved and exhibits great novelty for analyzing particle-based VI. As described in the beginning of Appendix D, we need to bound both discretization error and estimation error. 1. For the discretization error, the main technical difficulties include how to ensure the smoothness of the particle distribution. Note that we do not even assume the smoothness of the target distribution. 2. For the estimation error, since neural networks can only give an estimation of the optimal vector field, the difficulty lies in how to control the evolution trajectory of the particle distribution with an inaccurate vector field. Most importantly, the two error terms are actually entangled with each other and the bound of one term will depend on the other. Therefore, the analysis of both terms needs to be handled with great care to achieve the SOTA rate. To the best of our knowledge, our result is the first non-asymptotic analysis of particle-based VI with a functional gradient. It is also the first time that one can show theoretically that particle-based VI is able to outperform traditional Langevin Monte Carlo. We believe that our techniques can motivate further theoretical analysis of particle-based VI methods. #### Questions `Q1`: In the context of existing literature, the reference to Bernton (2018) on line 88 seems to be out of place. A1: Thanks for your advice! We will correct the reference in our revision. `Q2`: Lines 170-171 contain an inaccurate claim regarding the dissipativity assumption in Balasubramanian et al. A2: Thanks for your advice! We will correct it in our revision.
The assumption made by Balasubramanian et al. [1] is to control the order of growth of the potential function, which is more general than the common dissipativity assumption. This assumption indeed does not imply the log-Sobolev inequality. However, it is still quite strong, and our analysis does not rely on it. Therefore, our claim still holds that "particle-based methods can outperform the traditional LMC as long as the neural nets can estimate the vector field accurately". `Q3`: Could the authors shed more light on how realistic Assumptions 1 and 2 are and discuss any empirical evidence that might support these assumptions? A3: We discuss Assumptions 1 and 2 from two aspects. 1. **Theoretical justification for Assumptions 1 and 2** Given the current particle distribution $\\mu$, since we estimate the vector field by maximizing eq (14), we can define the training loss $\\mathcal{L}\_{\\text{train}}(v):=\\mathbb{E}\_{\\mu} [\\langle \\nabla\\log\\frac{\\pi}{\\mu}, v\\rangle - g(v)]$. The maximizer is $v^*=\\nabla g^*(\\nabla\\log\\frac{\\pi}{\\mu})$ and the maximum value is $\\mathcal{L}\_{\\text{train}}^*:=\\mathcal{L}\_{\\text{train}}(v^*)<\\infty$. Similar to Lemma D.14, we can show that for any $p>1$ and any arbitrarily small $\\varepsilon\_1>0$, if $g(\\cdot)=\\frac{1}{p}\\|\\cdot\\|\_p^p$, there exists $\\varepsilon\_2:=\\varepsilon\_2(\\varepsilon\_1, p)>0$, such that $$ \\mathbb{E}\_{\\mu} [\\|v-\\nabla g^*(\\nabla\\log\\frac{\\pi}{\\mu})\\|\_p^p] \\leq \\varepsilon\_1 \\mathcal{L}\_{\\text{train}}^* + \\varepsilon\_2 [\\mathcal{L}\_{\\text{train}}^*-\\mathcal{L}\_{\\text{train}}(v)]. $$ This shows that Assumption 1 is realistic as long as we can optimize the training loss function well (Assumption 2 is similar). We also discuss the asymptotic rate of estimating the optimal $v^*$ with finite particles in Appendix C to further justify Assumptions 1 and 2. 2. 
**Empirical evidence** Since the score of the particle distribution is unknown, we cannot compute the objective in Assumption 1 or 2 directly. However, given the great empirical performance of score-based generative models, we believe that the capability of modern neural networks is sufficient to estimate the optimal vector field. [1] Balasubramanian, Krishnakumar et al. “Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo.” Annual Conference Computational Learning Theory (2022). --- Rebuttal Comment 1.1: Comment: Thank you for your response. Regarding smoothness, although you do not assume smoothness of the target, you assume the existence of accurate approximations of the true vector field via vector fields which are smooth, which is morally the same kind of assumption but less transparent. Therefore, I do not think this particularly adds to the novelty of the analysis. In general, I think you are over-claiming. In your rebuttal, you write that this is the first time that particle VI is theoretically shown to outperform Langevin Monte Carlo, but this paper relies on far more assumptions than LMC which cannot be checked. I think the proposed algorithm is interesting and the theory does lend some credence to it, but please avoid misleading comparisons. The analysis of LMC relies on very few, well-established and checkable assumptions, and in particular does not require guarantees for training a neural network (which is practically impossible at present) and infinitely many particles. I will keep my current score. --- Reply to Comment 1.1.1: Comment: Thanks for your suggestions! Our results indeed depend on some assumptions which cannot be checked explicitly, and we acknowledge that our claim may be overly assertive. We will avoid such strong claims in our revision. Furthermore, it should be clarified that removing the smoothness assumption on the target distribution is not the main contribution of this work. 
Instead, our advantage is that we are able to eliminate the additional growth condition in Balasubramanian et al. We hope that our theoretical results can motivate further research on particle-based VI.
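As a numerical companion to the training-loss discussion in this thread, the following sketch (illustrative only; the score difference `s` is a random stand-in, since the true score of the particle distribution is unknown in practice) checks that $v^*=\nabla g^*(s)$ maximizes the pointwise objective $\langle s, v\rangle - g(v)$ when $g(v)=\|v\|_p^p/p$:

```python
import numpy as np

def grad_g_conjugate(s, p):
    """For g(v) = ||v||_p^p / p (separable in the coordinates), the gradient of the
    convex conjugate acts componentwise: (grad g*(s))_i = sign(s_i)|s_i|^{q-1}, q = p/(p-1)."""
    q = p / (p - 1.0)
    return np.sign(s) * np.abs(s) ** (q - 1.0)

def pointwise_objective(v, s, p):
    # Integrand of the training loss at one point: <s, v> - g(v), with s playing
    # the role of the score difference grad log(pi/mu) at that point.
    return float(s @ v - np.sum(np.abs(v) ** p) / p)

rng = np.random.default_rng(1)
s = rng.standard_normal(4)   # hypothetical score difference (a stand-in value)
p = 3.0
v_star = grad_g_conjugate(s, p)
best = pointwise_objective(v_star, s, p)
# By strict concavity in v, random perturbations of v* should never beat it.
perturbed = [pointwise_objective(v_star + 0.1 * rng.standard_normal(4), s, p)
             for _ in range(200)]
```

The stationarity condition $s = \mathrm{sign}(v)|v|^{p-1}$ is solved exactly by `grad_g_conjugate`, since $(q-1)(p-1)=1$.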
Summary: This paper proposes a generalized version of Wasserstein gradient flow for particle-based VI. The authors show that this approach offers strong convergence guarantees and better performance than SVGD and other methods, as evidenced by extensive experiments on both simulated and real data sets. The paper also introduces a novel theoretical convergence guarantee of ParVIs with neural-net-estimated vector fields for generalized Wasserstein gradient flow. Strengths: GWG is an interesting extension of ParVI, which extends the quadratic distance to more general metrics. Both theoretical and empirical justifications suggest the improvement of the proposed algorithm. Weaknesses: The title Particle-based Variational Inference with Generalized Wasserstein Gradient Flow is too close to (Dong et al. 2023). I would suggest using a new title to highlight the improvements. A more intuitive illustration of the method would be helpful. The implementation details and code are missing. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: What is a proper heuristic strategy to choose $p$? Is there any geometric interpretation of the proposed $g$? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The paper lacks some intuitive demonstration of the proposed $g$, which would be very helpful for readers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable questions! We address your comments and questions below. #### Weaknesses `W1`: The title Particle-based Variational Inference with Generalized Wasserstein Gradient Flow is too close to (Dong et al. 2023). A1: Thanks for the suggestion! We will modify our title in our revision. `W2`: More intuitive illustration of the method. A2: As mentioned in Section 3.2, our motivation and intuition for GWG is to accelerate particle-based VI in terms of faster decay of the KL divergence, which is determined by equation (10). We showed in Example 1 that the conventional $L_2$-GF may suffer from a slow decay rate of the KL. In this sense, it is natural to generalize the formulation of the minimizing movement scheme and also the corresponding training method. We will make these points clearer in our revision. `W3`: The implementation details and code are missing. A3: We include all the implementation details in Appendix F of the supplementary material. We are still cleaning up the code and it will be released soon. #### Questions `Q1`: What is a proper heuristic strategy to choose $p$? A1: As claimed in Section 3.2, our primary goal is to achieve faster decay of the KL divergence. Hence the criterion to choose $p$ is to enlarge the magnitude of the derivative of the KL, i.e. $\\mathbb{E}\_{\\mu\_t} \\|\\nabla\\log\\frac{\\pi}{\\mu\_t}\\|\_q^q$, where $q=p/(p-1)$. Informally, $p$ should be small (so that $q$ is large) if $\\|\\nabla\\log\\pi - \\nabla\\log\\mu\_t\\|$ is large, and vice versa. The intuition is to apply a larger penalty term when there is a significant difference between the target score and the particle score. In practice, this heuristic strategy is difficult to carry out. Therefore, based on this criterion, we propose Ada-GWG to choose $p$ automatically by increasing a lower bound of $|\\partial\_t{\\mathrm{KL}}|$. `Q2`: Is there any geometric interpretation of the proposed $g$? 
A2: Each choice of $g$ corresponds to a different Wasserstein metric and Wasserstein space. If $g(\cdot)=g_0(\\|\cdot\\|)$, where $\\|\cdot\\|$ can be any norm in the Euclidean space and $g_0$ satisfies some mild assumptions, we can easily show by the generalized Minkowski inequality ([1]) that $g_0^{-1}(W\_{c\_h}(\\mu,\\nu))$ is a well-defined metric and hence $\mathcal{P}_{c_h}$ defined in Theorem 1 is a Wasserstein space. The general class of functions for $g$ allows us to explore the underlying structure of different probability spaces and further utilize this geometric structure to accelerate convergence. [1] Mulholland, H. P. ‘On Generalizations of Minkowski’s Inequality in the Form of a Triangle Inequality’. Proceedings of The London Mathematical Society, 1949, pp. 294–307. --- Rebuttal Comment 1.1: Comment: Thanks for your response; it helped me understand the paper better. I would suggest the authors add more practical interpretations and justifications in future versions. --- Reply to Comment 1.1.1: Comment: Thanks for your suggestion! We will modify our paper accordingly in our revision.
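The heuristic for choosing $p$ described in `Q1` above can be sketched numerically. The sketch below is illustrative only (the score differences are synthetic stand-ins, since the true particle score is unavailable in practice, and the actual Ada-GWG update rule in the paper may differ); it estimates the criterion $\mathbb{E}_{\mu}\|\nabla\log\frac{\pi}{\mu}\|_q^q$ over a grid of candidate $p$:

```python
import numpy as np

def kl_decay_criterion(score_diff, p):
    """Monte-Carlo estimate of E_mu ||grad log(pi/mu)||_q^q with q = p/(p-1);
    larger values indicate faster KL decay under the g = ||.||_p^p / p flow."""
    q = p / (p - 1.0)
    return float(np.mean(np.sum(np.abs(score_diff) ** q, axis=1)))

# Hypothetical score differences at 2000 particles in 2D, deliberately large
# to mimic a particle cloud far from the target.
rng = np.random.default_rng(2)
score_diff = 3.0 * rng.standard_normal((2000, 2))
candidates = [1.5, 2.0, 3.0, 4.0]
best_p = max(candidates, key=lambda p: kl_decay_criterion(score_diff, p))
# With a large score mismatch, the heuristic favors small p (large q).
```

This matches the rebuttal's informal rule: a large score mismatch pushes $p$ down (so $q$ up), applying a heavier penalty.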
Summary: This paper proposes a novel particle-based variational inference framework based on the generalized Wasserstein gradient flow of the KL divergence, named generalized Wasserstein gradient descent (GWG). A strong convergence guarantee for the proposed algorithm is provided. The authors also provide an adaptive version based on the Wasserstein metric to accelerate convergence. Strengths: - The paper is well-written and clearly organized. - It is quite novel to replace the standard Wasserstein-2 metric by a Wasserstein metric with a cost function defined by a Young function. - Numerical results support the effectiveness of the proposed method. Weaknesses: - The description of the main results (Theorem 2 and Theorem 3) is unclear. It is not specified how the vector-valued function v_k is learned. Is v_k the maximizer of eq (14) with finite samples? - The convergence result only holds for \bar \mu_{Nh}, which is the average over the path instead of the end-iteration solution \mu_{Nh}. - The numerical results do not show a quantitative convergence improvement of the proposed Ada-GWG over other methods like SVGD, L2-GF or PFG. For instance, it would be better to compare these methods in the form of Figure 2, plotting the KL divergence or even the maximum mean discrepancy between the current iterate and the posterior as a function of iteration number or CPU time. Typos: - In Algorithm 1, it should be \hat A(p_k) instead of \hat A(p). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - How is Assumption 3 on the smoothness of neural networks ensured in practice? How can the constants G_p and M_p be estimated given a neural network? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Yes. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your careful review and valuable questions! We address your comments and questions below. ### Weaknesses `W1`: The description of the main results (Theorem 2 and Theorem 3) is unclear. It is not specified how the vector-valued function $v\_k$ is learned. Is $v\_k$ the maximizer of eq (14) with finite samples? A1: Exactly! $v\_k$ is the neural net learned by maximizing equation (14) with finite samples. But here, for the theoretical part, we only consider the infinite-particle limit, which is very common in the theoretical analysis of particle-based VI (like [1] [2]). To bridge this gap, we discuss the asymptotic normality of the neural net estimator in Appendix C of the supplementary material. But in general, analyzing the convergence of particle-based VI with finite particles is very difficult, and we leave this for future work as mentioned in Appendix G. [1] Liu, Qiang. “Stein Variational Gradient Descent as Gradient Flow.” NIPS (2017).\ [2] Salim, Adil et al. “A Convergence Theory for SVGD in the Population Limit under Talagrand's Inequality T1.” International Conference on Machine Learning (2021). `W2`: The convergence result only holds for $\\bar{\\mu}\_{Nh}$, which is the average over the path instead of the end-iteration solution $\\mu\_{Nh}$. A2: (1) In the literature on non-log-concave sampling, it is generally difficult to attain the convergence rate of the end-iteration solution ([1], [2]). If the target distribution satisfies the log-Sobolev inequality, then we are able to get a convergence result for $\\mu\_{Nh}$. Please refer to Appendix D.3 of the supplementary material for further discussion. (2) On the other hand, we can still sample from $\\bar{\\mu}\_{Nh}$ by first sampling $t\\sim \\mathrm{Unif}[0,Nh]$ and then getting a sample from $\\mu\_t$ via $X\_t = X\_{kh} + (t-kh)v\_k(X\_{kh})$ where $k=[\\frac{t}{h}]$. Therefore, we don't think this is a serious drawback. [1] Balasubramanian, Krishnakumar et al. 
“Towards a Theory of Non-Log-Concave Sampling: First-Order Stationarity Guarantees for Langevin Monte Carlo.” Annual Conference Computational Learning Theory (2022). [2] Korba, Anna et al. “A Non-Asymptotic Analysis for Stein Variational Gradient Descent.” ArXiv abs/2006.09797 (2020): n. pag. `W3`: The numerical results do not show a quantitative convergence improvement of the proposed Ada-GWG over other methods like SVGD, L2-GF or PFG. For instance, it would be better to compare these methods in the form of Figure 2, plotting the KL divergence or even the maximum mean discrepancy between the current iterate and the posterior as a function of iteration number or CPU time. A3: Thanks for your advice! We plotted the Test RMSE of the BNN experiments in Appendix F.4, and it shows that even with an improperly selected $p\_0$, Ada-GWG makes significant improvements and demonstrates comparable performance to the optimal choice. We further plot the JS-divergence of the Gaussian mixture experiment in the PDF file of the global response and will add them in our revision. The results also indicate that Ada-GWG can accelerate convergence compared with the baselines. `W4`: In Algorithm 1, it should be \hat A(p\_k) instead of \hat A(p). A4: Thanks for catching the typos; we have corrected them in our revision. ### Questions `Q1`: How is Assumption 3 on the smoothness of neural networks ensured in practice? How can the constants $G\_p$ and $M\_p$ be estimated given a neural network? A1: (1) Since for any $p\_1\geq p\_2>1$ and $x\in \\mathbb{R}^d\\backslash\\{0\\}$, $d^{1/p\_1-1/p\_2}\\leq\\frac{\\|x\\|\_{p\_1}}{\\|x\\|\_{p\_2}}\\leq 1$, the constants $G\_p$ and $M\_p$ differ from $G\_2$ and $M\_2$ by a factor of $d^{|1/2-1/p|}$ at most. Moreover, $G\_2$ is a standard assumption on the smoothness of neural nets in the literature of score matching ([1]). 
And as mentioned in Lines 155-156, $M\_2$ informally corresponds to the Lipschitz constant of the Hessian of $\\log\\pi$, which is also a widely used assumption for analyzing Langevin Monte Carlo ([2]). Therefore, we think Assumption 3 is reasonable. In practice, we can utilize a regularizer (e.g. weight decay) to avoid overfitting and control the smoothness if necessary. (2) To estimate $G\_p$ and $M\_p$, we can compute the gradients of the neural nets at the current particles in each iteration and use the one with the largest gradient norm for estimation. We did so in the experiment of conditioned diffusion. The results are shown in the PDF file of the global response. The magnitudes of $G\_2$ and $M\_2$ remain in a reasonable range, further justifying our Assumption 3. [1] Lee, Holden, et al. ‘Convergence for Score-Based Generative Modeling with Polynomial Complexity’. Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 22870–22882. [2] Mou, Wenlong, et al. ‘Improved Bounds for Discretization of Langevin Diffusions: Near-Optimal Rates without Convexity’. Bernoulli, vol. 28, no. 3, Bernoulli Society for Mathematical Statistics and Probability, 2022, pp. 1577–1601. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I'd like to keep my score. --- Reply to Comment 1.1.1: Comment: Thanks for your response. Please feel free to let us know if you have further questions!
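The path-averaging trick described in `A2` of `W2` (sample $t\sim\mathrm{Unif}[0,Nh]$, then interpolate along the stored flow) can be sketched as follows. The particle history and vector fields here are made-up placeholders, not the paper's trained networks:

```python
import numpy as np

def sample_from_path_average(particle_history, vector_fields, h, rng):
    """Draw one sample from the time-averaged law bar-mu_{Nh}: pick t ~ Unif[0, Nh],
    set k = floor(t/h), pick a stored particle X_{kh}, and return
    X_{kh} + (t - kh) * v_k(X_{kh})."""
    N = len(vector_fields)
    t = rng.uniform(0.0, N * h)
    k = min(int(t // h), N - 1)  # guard against the boundary case t == N*h
    x = particle_history[k][rng.integers(len(particle_history[k]))]
    return x + (t - k * h) * vector_fields[k](x)

# Toy usage with a fabricated history: all particles at the origin, constant
# unit vector field at every step.
rng = np.random.default_rng(3)
history = [np.zeros((10, 1)) for _ in range(5)]
fields = [lambda x: np.ones_like(x) for _ in range(5)]
sample = sample_from_path_average(history, fields, h=0.1, rng=rng)
# Here x = 0 and (t - kh) is in [0, h), so the sample lies in [0, 0.1).
```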
Rebuttal 1: Rebuttal: We thank all reviewers for their constructive feedback, and will modify our paper accordingly in our revision. We address some of the common issues raised by the reviewers below. **Justification for Assumptions 1 and 2** 1. `Theoretical justification` Given the current particle distribution $\\mu$, since we estimate the vector field by maximizing eq (14), we can define the training loss $\\mathcal{L}\_{\\text{train}}(v):=\\mathbb{E}\_{\\mu} [\\langle \\nabla\\log\\frac{\\pi}{\\mu}, v\\rangle - g(v)]$. The maximizer is $v^*=\\nabla g^*(\\nabla\\log\\frac{\\pi}{\\mu})$ and the maximum value is $\\mathcal{L}\_{\\text{train}}^*:=\\mathcal{L}\_{\\text{train}}(v^*)<\\infty$. Similar to Lemma D.14, we can show that for any $p>1$ and any arbitrarily small $\\varepsilon\_1>0$, if $g(\\cdot)=\\frac{1}{p}\\|\\cdot\\|\_p^p$, there exists $\\varepsilon\_2:=\\varepsilon\_2(\\varepsilon\_1, p)>0$, such that $$ \\mathbb{E}\_{\\mu} [\\|v-\\nabla g^*(\\nabla\\log\\frac{\\pi}{\\mu})\\|\_p^p] \\leq \\varepsilon\_1 \\mathcal{L}\_{\\text{train}}^* + \\varepsilon\_2 [\\mathcal{L}\_{\\text{train}}^*-\\mathcal{L}\_{\\text{train}}(v)]. $$ This shows that Assumption 1 is realistic as long as we can optimize the training loss function well (Assumption 2 is similar). We will add this result in our revision. We also discuss the asymptotic rate of estimating the optimal $v^*$ with finite particles in Appendix C to further justify Assumptions 1 and 2. 2. `Empirical evidence` Since the score of the particle distribution is unknown, we cannot compute the objective in Assumption 1 or 2 directly. However, given the great empirical performance of score-based generative models, we believe that the capability of modern neural networks is sufficient to optimize the training loss and estimate the optimal vector field. **Theoretical novelty and highlights** Our analysis is very involved and exhibits great novelty for analyzing particle-based VI. 
As described in the beginning of Appendix D, we need to bound both discretization error and estimation error. 1. For the discretization error, the main technical difficulties include how to ensure the smoothness of the particle distribution. Note that we do not even assume the smoothness of the target distribution. 2. For the estimation error, since neural networks can only give an estimation of the optimal vector field, the difficulty lies in how to control the evolution trajectory of the particle distribution with an inaccurate vector field. Most importantly, the two error terms are entangled with each other and the bound of one term will depend on the other. Therefore, the analysis of both terms needs to be handled with great care to achieve the SOTA rate. To the best of our knowledge, our result is `the first non-asymptotic analysis of particle-based VI with a functional gradient, and also the first to show theoretically that particle-based VI is able to outperform traditional Langevin Monte Carlo`. We believe that our techniques can motivate further theoretical analysis of particle-based VI methods. **Quantitative results of improvement** We plotted the Test RMSE of the BNN experiments in Appendix F.4 (also included in the PDF file of the global response). It shows that even with an improperly selected $p\_0$, Ada-GWG makes significant improvements and demonstrates comparable performance to the optimal choice. This suggests that Ada-GWG is able to push $p\_0$ towards the optimal $p$. We further plot the JS-divergence of the Gaussian mixture experiment in the PDF file and will add them in our revision. The results also indicate that Ada-GWG can accelerate convergence significantly compared with other baselines. We hope our response has adequately addressed the reviewers' questions and concerns, and look forward to reading any additional comments. Pdf: /pdf/9a827e38cfbb87a337790740a8b8b12ad336aa43.pdf
NeurIPS_2023_submissions_huggingface
2023
Unbiased constrained sampling with Self-Concordant Barrier Hamiltonian Monte Carlo
Accept (poster)
Summary: This paper proposes a computational framework for sampling via Hamiltonian Monte Carlo with self-concordant barrier functions, for the purpose of constrained sampling on suitably nice Riemannian manifolds, where the barrier function defines the Riemannian metric. They provide computational and theoretical guarantees on their procedure, which hinges on an "involution checking step", making it more theoretically sound than other papers on this matter; in particular, this makes the procedure unbiased. The authors supplement their theoretical findings with computational experiments on both real-world and synthetic data. Strengths: Strong theoretical contributions relative to prior work --- kudos to the authors for fixing the discrepancies in Kook et al. (2022a) and proposing their solution (note, I didn't read the proofs in this paper either, only skimmed, but the results appear reasonable). I thought the background section, etc., contained all the relevant information, and the related works section also provided ample context. Weaknesses: I think the presentation is a bit suboptimal; namely, the paper is very dense and hard to read in some parts. Otherwise, I really enjoyed this paper. I think the technical contribution appears to be somewhat minimal, though surely it took a lot of work to justify the "involution checking step" --- but again, I am not an expert in this area. I also understand that Kook et al. (2022b) provided some theoretical understanding of the algorithm from the previous paper --- would the authors comment a bit more on that? Do the theoretical results in that paper suffer from the same problems as in this one, or is everything remedied and this paper is concurrent with that one? Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - What is this [x,y] or x : y notation in lines 101-102 (and other places)? I was not able to parse this... - I was also largely confused and unconvinced by the numerical experiments... 
In Table 1 it appears what the authors propose is about "as good" as CRHMC (as in, a 50/50 chance it is better performing). I also cannot understand Fig 2 at all --- what does the y-axis mean, and what is the ground truth value the algorithms are trying to estimate? Is the goal to simply be as close to MALA as possible? Seems odd, but I'm far from an expert. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and their positive evaluation. > What is this [x,y] or x : y notation in lines 101-102 (and other places)? I was not able to parse this... We refer to the general response for more details about our presentation. We hope that these explanations clarify our notation and we are ready to further update our paper if the reviewer has specific concerns regarding our terminology. > I think the technical contributions appears to be somewhat minimal, though surely it took a lot of work to justify the "involution checking step", but again, not an expert in this area. We acknowledge that our algorithmic modification is minimal compared to the algorithm of [1], but we strongly believe that the study of such an implementation is of fundamental interest for the MCMC community. Indeed, in a similar way as [2], our contribution (i) highlights that the **use of well-known implicit schemes for ODE integration** (see [3] for a complete reference) **combined with space constraints leads to asymptotic bias when it comes to their numerical implementation**, and (ii) proposes to solve it in a straightforward manner. More generally, we are convinced that our work is a first step for future work on designing implicit integration schemes for constrained sampling methods. > I also understand that Kook et al. (2022b) provided some theoretical understanding of the algorithm from the previous paper --- would the authors comment a bit more on that? Do the theoretical results in that paper suffer from the same problems as in this one, or is everything remedied and this paper is concurrent with that one? We refer to the general response for a detailed comparison with [4]. In particular, we show that [4] suffers from the same shortcomings as [1] when it comes to the assumption made on the implicit integrator. > I was also largely confused and unconvinced by the numerical experiments... 
In Table 1 it appears what the authors propose is about "as good" as CRHMC (as in, 50/50 chance it is better performing). I also cannot understand Fig 2 at all --- what does the y-axis mean, and what is the ground truth value the algorithms are trying to estimate? Is the goal to simply be as close to MALA as possible? We refer to the general response for more details on the setting of our experiments, including a comparison with [1] on real-world data and a discussion of the setting chosen for synthetic data. In particular, the experiments on real-world data are meant to show that CRHMC [1] and BHMC have roughly the same complexity, while BHMC implements an additional involution checking step (which may have an effect on the autocorrelation of the obtained Markov chain). Moreover, we explain that we take the estimators of $\int_{\mathsf{M}}\langle x, \mu \rangle \mathrm{d}\pi(x)$ given by MALA and IMH as the ground truth for our experiments on synthetic datasets, where $\pi$ refers to the Gaussian target distribution with mean $\mu\in \mathbb{R}^d$. In this setting, we show that the estimator given by BHMC is closer to this ground truth than the one given by CRHMC [1]. [1] Sampling with Riemannian Hamiltonian Monte Carlo in a constrained space. Kook et al., 2022a. [2] Geometric numerical integration. Hairer et al., 2006. [3] Multiple projection Markov Chain Monte Carlo algorithms on submanifolds. Lelièvre et al., 2022. [4] Condition-number-independent convergence rate of Riemannian Hamiltonian Monte Carlo with numerical integrators. Kook et al., 2022b.
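To illustrate what an involution checking step does, here is a schematic sketch (not the authors' constrained RHMC implementation; a leapfrog step on a harmonic potential stands in for the implicit integrator): the proposal is accepted only if re-applying the integrator to the momentum-flipped proposal recovers the initial state, which is what rules out the asymptotic bias discussed above.

```python
import numpy as np

def leapfrog_step(x, p, h=0.1):
    # Reversible integrator for the toy Hamiltonian H(x, p) = (x^2 + p^2) / 2,
    # standing in for the implicit scheme used inside constrained HMC.
    p_half = p - 0.5 * h * x
    x_new = x + h * p_half
    p_new = p_half - 0.5 * h * x_new
    return x_new, p_new

def step_with_involution_check(x, p, step, tol=1e-10):
    """Accept (x', p') = Phi(x, p) only if applying Phi to (x', -p') returns
    (x, -p), i.e. the numerically computed map really is an involution after
    momentum flip; otherwise reject and keep the current state."""
    x_new, p_new = step(x, p)
    x_back, p_back = step(x_new, -p_new)
    ok = np.allclose(x_back, x, atol=tol) and np.allclose(p_back, -p, atol=tol)
    return (x_new, p_new, True) if ok else (x, p, False)

x, p = np.array([1.0]), np.array([0.5])
x1, p1, accepted = step_with_involution_check(x, p, leapfrog_step)
# leapfrog is exactly reversible, so the check passes in this toy case;
# for an implicit scheme solved by fixed-point iteration it can fail.
```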
Summary: The paper proposes a new algorithm for sampling on a constrained space via a Hamiltonian Monte Carlo algorithm, which uses a Riemannian Manifold HMC algorithm, with the Hessian metric given by a self-concordant barrier. This generalizes and improves upon existing work in constrained sampling, while removing a source of bias by adding an “involution checking step”. Strengths: The intuition for the algorithm, including the choice of self-concordant barrier, is well-informed and intuitive, although it has appeared in some form in Kook et al. (2022). Thus, I will discuss many of the strengths and weaknesses in relation to the prior work of Kook et al. (2022). This paper provides two generalizations of Kook et al. Firstly, the primary problem setting in Kook et al. (Eq 11 of that paper) is less general than that in this paper. The proof for this method is also cleaner, and purports to avoid some technical issues present in the proof of Kook et al. (I did not carefully verify this point). Furthermore, it avoids the problems of bias presented in Kook et al., which is carefully justified by a theorem. The method is clearly unbiased when evaluated on the hypercube and simplex, by a margin that is usually many standard deviations. The broad details of the proof seem to be correct and well-written, although I did not check every detail. Weaknesses: No rigorous non-asymptotic theory was presented, although this is also true of Kook et al., likely because of the complexity of these algorithms. It is difficult to draw firm conclusions from the real-world instances in the experiments, since the method does not show improvement in all/a clear majority of cases. The notation in this paper is a bit cumbersome, particularly in the Appendix. Nonetheless this is understandable given the complexity of the algorithm, and the authors did a good job in reducing the notational burden within the main text. 
To summarize, it was difficult to review all the details/proofs in this manuscript, given its length and technical nature. Nonetheless, I feel like it presents an important development within constrained sampling and merits publication. This informs my final score. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: How difficult is it to establish any non-asymptotic convergence results in this context? What would be the expected rates, and would they differ (in the order of dependencies) with Kook et al.? As noted under “weaknesses”, the experiments on “real-world data” do not seem fully convincing. Can the authors elaborate further on why this would be the case, and if there was extensive tuning of the parameters involved? It seems from the Appendix that the parameters of both algorithms were not extensively tuned, which then raises the question of whether this is really a fair comparison. It is curious that a (constrained) Gaussian distribution is chosen for the synthetic data, when Kook et al. consider a uniform distribution instead. Their choice allows for some simple tests of uniformity. Does considering a uniform distribution make any appreciable difference in the output? The discussion in Appendix J is quite useful and I think some more of these details could also be summarized in the main text. It is my personal preference, but I believe text in the paper should not be highlighted. Use boldface or perhaps underlining to emphasize the text in Ll. 192-193 and in Algorithm 1. L. 91 “motivates to” -> “motivates us to” L. 170 “the definition domain” -> “the domain” L. 213 “enables” -> “enables one/enables us” The authors should be careful about the formatting of some equations in the Appendix, since they exceed the column width. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: None beyond those raised in earlier sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments, we appreciate your acknowledgment of the paper’s merits. > No rigorous non-asymptotic theory was presented, although this is also true of Kook et al., likely because of the complexity of these algorithms. > How difficult is it to establish any non-asymptotic convergence results in this context? What would be the expected rates, and would they differ (in the order of dependencies) with Kook et al.? We believe that the non-asymptotic convergence results of [1] could be extended to our framework, i.e., including the involution checking step. However, such a study would be very technical and is outside the scope of the current paper. We conjecture that the convergence rate obtained in [1] would also hold in our case. In particular, we do not expect a change in the dependency of this rate with respect to the hyperparameters of the algorithm. > It is difficult to draw firm conclusions from the real-world instances in the experiments, since the method does not show improvement in all/a clear majority of cases. > It is curious that a (constrained) Gaussian distribution is chosen for the synthetic data, when Kook et al. consider a uniform distribution instead. Their choice allows for some simple tests of uniformity. Does considering a uniform distribution make any appreciable difference in the output? We refer to the general response for more details on the setting of our experiments, including a comparison with [2] on real-world data and a discussion on the setting chosen for synthetic data. In particular, we explain that the experiments on real-world data are meant to show that the involution checking step does not hurt the complexity of BHMC, and justify the choice of a 'non-uniform' target distribution for synthetic datasets. > As noted under “weaknesses”, the experiments on “real-world data” do not seem fully convincing. 
Can the authors elaborate further on why this would be the case, and if there was extensive tuning of the parameters involved? It seems from the Appendix that the parameters of both algorithms were not extensively tuned, which then raises the question of whether this is really a fair comparison. Regarding the choice of the hyperparameters other than $\eta$ (including $h$, $\beta$, the number of iterations in the implicit solver, etc.), we rely on the code provided by [2]. In particular, we do not fine-tune these hyperparameters for either CRHMC or BHMC. Since we are not provided with a trustworthy baseline due to the high-dimensional setting, we do not tune the parameter $\eta$ and set $\eta=10$ as in the experiments on toy datasets. > The notation in this paper is a bit cumbersome, particularly in the Appendix. > It is my personal preference, but I believe text in the paper should not be highlighted. We refer to the general response for more details about our presentation. We hope that these explanations clarify our notation and we are ready to further update our paper if the reviewer has specific concerns regarding our terminology. > To summarize, it was difficult to review all the details/proofs in this manuscript, given its length and technical nature. We are aware that our work is technical, and for completeness, we provide below a short description of our main theoretical results (Proposition 2 and Theorem 3), as well as a sketch of their proofs. Proposition 2 justifies Assumption A3 made on the numerical integrator; Theorem 3 proves the reversibility of Algorithm 3. **Proposition 2**. In this proposition, we prove that in the self-concordant setting, for any starting point $z^0$, the implicit scheme $F_h=G_h \circ s$ is locally an involution as long as $h$ is small enough. To prove this result, we proceed as follows. 
We first prove that we can build a symplectic diffeomorphism $\gamma_h$ between $z^0$ and a solution $z^1 \in F_h(z^0)$ on a neighborhood of $z^0$ (see Lemma 23, 24-(a) and 25). This proves the first item of Proposition 2. Next, using Lemma 22, 24-(b) and 26, we prove that $F_h$ coincides with $\gamma_h$ on a given neighborhood of $z^0$. The main difficulty when establishing Proposition 2 comes from Lemma 22, which identifies the momentum $p^0$ used to go from $x^0$ to $x^1$ and shows that it is locally invertible as a perturbation of the identity. Proving this result requires computing higher-order derivatives of the Riemannian metric and controlling their smoothness. **Theorem 3**. In this theorem, we prove the reversibility up to momentum reversal of the whole Markov kernel of Algorithm 3 (including momentum sampling, ODE integration, and the MH filter). The critical step of this procedure is to ensure that $\Phi_h$ preserves the volume form on some compact sets under Assumption A3, derived from Proposition 2. This is proved in Lemma 29. To do so, we have to restrict the cotangent bundle to the set of points $A_h$, where $h$ is small enough so that $\Phi_h$ acts like an involution. Then, the proof consists in performing a change of variables on a partition of open subsets of a given compact set. The main difficulty in this proof is to control the compact sets on the domain of definition of $\Phi_h$ in order to perform the change of variables in Lemma 29. 
For me, one major drawback of this paper continues to be the lack of non-asymptotic convergence results, which would be really necessary to make this paper fully self-contained. As a result, I will maintain my current score.
Summary: This paper introduces an "involution checking step" in Riemannian Hamiltonian Monte Carlo methods that ensures time reversibility of the resulting Markov chain. The numerical experiments demonstrate the competitiveness of the proposed algorithm with regard to the method proposed in Kook et al. (2022a). --- The rebuttal addressed my questions. I have raised the rating to 6. Strengths: 1. The algorithms are novel. 2. The algorithms and analyses take practical numerical integrators into consideration. 3. Appendix J clearly points out the theoretical gaps in Kook et al. (2022a). Weaknesses: 1. **Comparison with existing work.** The most relevant previous work seems to be Kook et al. (2022a) and Kook et al. (2022b). The second one appeared later and focused on theoretical analyses, but this submission only discusses the gaps in the analysis of the first one. 2. **Gap between theory and practice.** The algorithm that actually ensures time reversibility is Algorithm 3, but the numerical experiments are conducted with Algorithm 1. 3. **Imprecise statement about existing work.** In Ln. 304, it is claimed that the convex constraint set considered by Kook et al. (2022a) takes a specific form. But that is simply a special case of the class of constraint sets in Kook et al. (2022a). 4. **Presentation.** The presentation involves many terms without explanation. The paper may not be readable to a layperson. 1. There are some highlights with unclear meanings. 2. What do the vertical axes mean in Figure 2? 5. **Typos.** 1. Ln. 115: Concordance -> self-concordance 2. Ln. 118: polynomial -> polynomial-time 3. What does the colon mean in (4)? 4. Ln. 287: a involution -> an involution 5. Some words in the reference list are not properly capitalized. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. How does this work compare with Kook et al. (2022b)? 2. Please address the gap between theory and practice in the weaknesses block. 3. 
What do the highlights in the submitted files mean? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: This is a paper on theoretically rigorous algorithm design. The assumptions are explicitly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to review our submission and for your constructive feedback. > 1. How does this work compare with Kook et al. (2022b)? We refer to the general response for a detailed comparison with [1]. In particular, we show that [1] suffers from the same shortcomings as [2] when it comes to the assumption made on the implicit integrator. We will add this discussion to the related work section. > 2. Please address the gap between theory and practice in the weaknesses block. We are aware of this slight discrepancy between theory and practice, and will highlight this in the Discussion section as a limitation of our study. We emphasize that **directly analyzing Algorithm 1 as such is a very challenging task**. Indeed, the self-concordance properties are only defined locally, whereas deriving reversibility results requires global control. More precisely, given a certain current state, we have to **ensure that $h$ is small enough** (depending on this current state) to obtain that the implicit integrator (or its numerical approximation) is locally an involution (see Proposition 2 and Assumption 3). However, in practice, $h$ is fixed by the user. Thus, guaranteeing that Proposition 2 holds for any element of the constrained set and a fixed step size $h$ is difficult. To overcome this limitation, we study a modified version of Algorithm 1, given by Algorithm 3, where the involution checking step (ICS) is replaced by a stronger checking step involving the set $A_h$, *which implies ICS* (see Section 4.2), and for which we are able to state its reversibility. Although this algorithm may be implemented, this would come with a huge (and unrealistic) computational cost, since verifying that $z \in A_h$ requires finding the minimal critical step-size $h_\star$ on a neighborhood of $z$. 
The question of the existence of a global step-size $h$ such that Algorithm 1 can be properly analyzed is not the topic of this paper and is left for future work. > 3. Imprecise statement about existing work Indeed, the setting presented in the introduction of [2] is a convex body $K$ provided with a self-concordant barrier $\phi$, combined with a linear equality constraint $Ax=b$ (equation 1.1). We remark that this setting is still a **special case of ours** by rewriting this whole set as $K' = \{A^\dagger b + u \in K,\ u \in \mathrm{Ker}\, A\}$, which is associated with the self-concordant barrier $u \to \phi(A^\dagger b + u)$. We will clarify this result in the updated version of the paper. However, we emphasize that the **main case of interest in [2] is the case where $K = \{x : \ell < x < u\}$**. Indeed, the authors state in their introduction that their algorithm is ‘[efficient] when $K$ is a product of convex bodies $K_i$, each with small dimension’, i.e., $\mathfrak{g}$ is a block-diagonal matrix. Indeed, the design of their numerical integrator heavily relies on the computation of the Cholesky decomposition of the matrix $A \mathfrak{g}^{-1} A^T$, which is efficient in the case where $\mathfrak{g}$ is a (block-)diagonal matrix. In addition, throughout their paper, the authors consider the particular case where the $K_i$ are 1D, both in theory and in the experiments. The efficiency of the whole procedure in this specific case is justified in Appendix B.2.3. For these reasons, we highlighted this framework of [2] in the related work section. We will change our discussion regarding [2] (lines 302-306) as follows: 'On the other hand, Kook et al. (2022a) integrate the Hamiltonian dynamics via implicit schemes without any “involution checking step” in a similar self-concordant setting. In particular, they consider a convex body $K$ equipped with a self-concordant barrier $\phi$, combined with a linear equality constraint $Ax=b$. 
This setting is a special case of our framework (see A1 and A2) by rewriting this whole set as $K' = \{A^\dagger b + u \in K,\ u \in \mathrm{Ker}\, A\}$, equipped with the self-concordant barrier $u \to \phi(A^\dagger b + u)$. Kook et al. (2022a) provide an efficient implementation of their algorithm in the case of a convex bounded manifold of the form $\mathsf{M} = \{x \in \mathbb{R}^d : Ax = b,\ \ell < x < u\}$.' > 4. Presentation We refer to the general response for more details about our presentation. We hope that these explanations clarify our notation and we are ready to further update our paper if the reviewer has specific concerns regarding our terminology. [1] Condition-number-independent convergence rate of Riemannian Hamiltonian Monte Carlo with numerical integrators. Kook et al., 2022b. [2] Sampling with Riemannian Hamiltonian Monte Carlo in a constrained space. Kook et al., 2022a. --- Rebuttal Comment 1.1: Title: Thanks Comment: The rebuttal addressed my questions. I have raised the rating to 6.
Summary: The authors resolve an existing bias in naive manifold versions of HMC by adding an involution checking step, thereby making the new HMC algorithm unbiased. I'm not super familiar with this problem, and I would like the authors to explain some of the technical details, as the proofs are quite involved for a conference review. Strengths: The authors propose a solution to an existing bias in manifold HMC, both justifying it theoretically and providing convincing empirical simulations. Weaknesses: The technical details are quite involved and I believe could be explained better. Technical Quality: 3 good Clarity: 3 good Questions for Authors: While I have many questions, I am still open to raising my score once these questions are adequately addressed. 1. When you say there's no guarantee that $G_h$ has a unique solution, do you mean that (1) there are cases with multiple solutions, or (2) there are cases with no solutions, or both? If possible, can you also provide an intuitive example where this failure occurs? 2. Can the authors explain to me the main intuitions behind why the naive implementations of Kook et al. (2022a) lead to a biased stationary measure? In particular, why does the Metropolis accept-reject step not fix this bias? 3. To add to the previous question, on an intuitive level, how does this involution check fix this issue? 4. It appears that this is not a problem when we can simulate the ideal Hamiltonian dynamics; is this related to the fact that the domain of the flow is the entire cotangent bundle $T^* M$? 5. The proofs are quite involved. Can you explain to me at a high level (a) how the proof works, and (b) where the complicated calculations mostly arise from? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comment and thoughtful questions. ### Answer to Question 1. We refer to the general response for a discussion on the number of solutions of the implicit mapping $G_h$. We provide a one-dimensional example where $G_h$ may admit 0, 1 or 2 solutions for the implicit update of the momentum, depending on the initial condition and $h$. ### Answer to Questions 2 & 3. The main reason for the bias in the implementation of [1] is that **there is no guarantee that the numerical integrator used to solve the implicit scheme (Alg. 3 in [1]) is involutive**. This is however essential to derive theoretical results relying on the Metropolis-Hastings (MH) filter in the Hamiltonian setting. Indeed, to obtain the reversibility of the Markov kernel corresponding to the Hamiltonian integration step followed by the MH step (see Lemma 30 in our paper), we need to show that for any compactly supported function $f$, $ \int \bar{\pi}(z) a(R^{\Phi}_h(z) \mid z)f(z,R^{\Phi}_h(z)) \mathrm{d} z= \int \bar{\pi}(z) a(R^{\Phi}_h(z) \mid z)f(R^{\Phi}_h(z),z) \mathrm{d} z$ , (a) where $\bar \pi$ is the extended target distribution, $a(z' \mid z)=\min(1, \exp(-H(z')+H(z)))$ is the MH acceptance filter and $R^{\Phi}_h$ is the numerical integrator of the Hamiltonian $H$ with a step-size $h$. Since $R^{\Phi}_h= T_h \circ \Phi_h \circ T_h$, where $T_h$ is an involutive function, and $\Phi_h$ is the numerical approximation of the implicit scheme $G_h \circ s$ (see Section 3.1 in our paper), showing (a) amounts to proving that $ \int g(z,R^{\Phi}_h(z)) \mathrm{d} z = \int g(R^{\Phi}_h(z),z) \mathrm{d} z$ , (b) for a certain function $g$. **A sufficient condition to obtain (b) is thus to ensure that $\Phi_h$ is an involution**. However, this condition is not satisfied in CRHMC [1], and **the application of the MH filter does not solve this issue**. 
In our setting, we directly enforce this condition through the involution checking step (up to some numerical relaxation). To support our point, we would like to mention the paper [2], where the authors also enforce an involution condition in a slightly different fashion than ours. Namely, given a current state $z$, they compute the whole set of solutions $\Phi_h(z)$, choose $z' \in \Phi_h(z)$ according to some probability distribution defined on this set and then verify that $z \in \Phi_h(z')$ (see line 7, Alg. 2 in [2]). ### Answer to Question 4. In the case where the Hamiltonian flow is well defined for all times, the domain of the flow is indeed the entire cotangent bundle, but this is not sufficient to ensure the reversibility of HMC. The reversibility actually comes from the fact that the flow is involutive (up to momentum reversal) by definition of the Hamiltonian. However, in our manifold setting, there is one subtlety that needs to be checked, namely that the Hamiltonian flow does not leave the cotangent bundle in finite time. As detailed in Proposition 14, there is indeed no guarantee that this flow is defined for all times, due to the ill-conditioned behavior of the dynamics near the boundary of the manifold. Therefore, given a time horizon $h>0$, we restrict the cotangent bundle to the points where the flow is defined (see the definition of $E_h$ and $\bar{E}_h$ in Appendix D) on the time interval $[0,h]$. Then, we prove that **the Hamiltonian flow is involutive up to momentum reversal on this subset (see Lemma 16)**, which is exactly the property needed to derive the reversibility of the algorithm with the ideal dynamics. Hence, we do not need to perform an *involution check* in this case. As a result, we introduce c-BHMC in Appendix D, where **the only checking step verifies that the flow is well defined on $[0,h]$, starting at the current state**. ### Answer to Question 5. Our main theoretical results are Proposition 2 and Theorem 3. 
Proposition 2 justifies Assumption A3 made on the numerical integrator. Theorem 3 proves the reversibility of Algorithm 3. Below, we give more details on these results and provide a sketch of their proof. **Proposition 2**. In this proposition, we prove that in the self-concordant setting, for any starting point $z^0$, the implicit scheme $F_h=G_h \circ s$ is locally an involution as long as $h$ is small enough. To prove this result, we proceed as follows. We first prove that we can build a symplectic diffeomorphism $\gamma_h$ between $z^0$ and a solution $z^1 \in F_h(z^0)$ on a neighborhood of $z^0$ (see Lemma 23, 24-(a) and 25). This proves the first item of Proposition 2. Next, using Lemma 22, 24-(b) and 26, we prove that $F_h$ coincides with $\gamma_h$ on a given neighborhood of $z^0$. The main difficulty when establishing Proposition 2 comes from Lemma 22, which identifies the momentum $p^0$ used to go from $x^0$ to $x^1$ and shows that it is locally invertible as a perturbation of the identity. Proving this result requires computing higher-order derivatives of the Riemannian metric and controlling their smoothness. **Theorem 3**. In this theorem, we prove the reversibility up to momentum reversal of the whole Markov kernel of Algorithm 3 (including momentum sampling, ODE integration, and the MH filter). The critical step of this procedure is to ensure that $\Phi_h$ preserves the volume form on some compact sets under Assumption A3, derived from Proposition 2. This is proved in Lemma 29. To do so, we have to restrict the cotangent bundle to the set of points $A_h$, where $h$ is small enough so that $\Phi_h$ acts like an involution. Then, the proof consists in performing a change of variables on a partition of open subsets of a given compact set. The main difficulty in this proof is to control the compact sets on the domain of definition of $\Phi_h$ in order to perform the change of variables in Lemma 29. 
[1] Sampling with Riemannian Hamiltonian Monte Carlo in a constrained space. Kook et al., 2022a [2] Multiple projection Markov Chain Monte Carlo algorithms on submanifolds. Lelièvre et al. 2022. --- Rebuttal Comment 1.1: Title: Response Comment: Hi Everyone, Thank you for the response. Let me start by saying I think the 1D example is incredibly convincing, and this situation finally made sense for me for the first time. The rest of my questions are also addressed, albeit I would hope the authors consider rewriting some portions of the paper, and maybe add some more intuitive content in the extra page (if accepted). I will raise my score to 7.
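As a complement to the answers to Questions 2 & 3 above, the role of an involution checking step can be sketched in code. This is our own illustrative toy, not the paper's actual API: the names `phi`, `flip`, `close` and `ics_proposal` are made up, the tolerance is arbitrary, and the test map is the plain 1-D leapfrog rather than the implicit GL integrator.

```python
# Schematic sketch of an involution checking step (ICS). All names here
# are illustrative; this is not the paper's implementation.

def flip(z):
    """Momentum reversal s: (x, p) -> (x, -p)."""
    x, p = z
    return (x, -p)

def close(z1, z2, tol=1e-9):
    return abs(z1[0] - z2[0]) < tol and abs(z1[1] - z2[1]) < tol

def ics_proposal(z, phi, tol=1e-9):
    """Propose z' = (phi o s)(z) and keep it for the subsequent MH filter
    only if applying phi o s again recovers z, i.e. only if phi o s acts
    as an involution at z. Otherwise stay at z (the move is rejected)."""
    z_prop = phi(flip(z))
    if close(phi(flip(z_prop)), z, tol):
        return z_prop
    return z

# Toy check with the 1-D leapfrog for H(x, p) = (x^2 + p^2) / 2: the
# leapfrog composed with momentum flip is an exact involution, so the
# check passes here and the proposal is returned.
h = 0.1

def leapfrog(z):
    x, p = z
    p -= 0.5 * h * x
    x += h * p
    p -= 0.5 * h * x
    return (x, p)

z = (1.0, 0.5)
z_new = ics_proposal(z, leapfrog)  # check passes, so z_new differs from z
```

The point of the sketch is that when `phi` is only an approximate (fixed-point) solution of an implicit scheme, the check can fail, and rejecting in that case is what restores reversibility.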
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments and are encouraged by their positive feedback regarding the proposition, soundness, and clarity of our work. We provide detailed responses to each reviewer but summarize here their main feedback. ### 1. Comparison with [1] and counterexample In [1], Kook et al. analyze the convergence rate of time-discretized Riemannian Hamiltonian Monte Carlo (RHMC) in a self-concordant setting (i.e., the Riemannian metric is the Hessian of the self-concordant barrier of the manifold), either relying on the implicit midpoint integrator (Alg. 2 in [1]) or using the Generalized Leapfrog (GL) integrator (Alg. 3 in [1]). Both of these integrators are implicit and, in practice, must be replaced by numerical schemes such as a fixed-point procedure. Crucially, **in [1], the authors assume that these implicit integrators admit a unique solution given any starting point and any step-size $h$**. However, proving such a statement in the self-concordant setting is highly non-trivial and might not even be true, as we explain below with a simple example. In our paper, we do not make the assumption from [1] on the GL integrator, and rather consider a numerical solver which outputs a unique approximate solution to this implicit scheme. By doing so, **our analysis is meant to be as close as possible to the practical implementation of RHMC**. In this case, the involution checking step (ICS) is needed to ensure the measure preservation property in RHMC. Therefore, **the theoretical (and practical) limitation concerning the reversibility of CRHMC [5] is still an issue in [1]**. To go further, we show below that the assumption made by [1] and [5] may not hold in the case of the GL integrator, defined in (6) in our paper. Consider the 1-dimensional setting $\mathsf{M}=(-\infty, 0)$. In this case, $\mathfrak{g}(x)=1/x^2$. Let $h>0$, $X_0\in \mathsf{M}$. Assume that $\beta=1$, and consider the first iteration of BHMC. 
We have $\tilde{P}_1 \sim \mathrm{N}(0, 1/X_0^2)$ and $P_1^{(0)}=\tilde{P}_1-\frac{h}{2}\partial_x H_1(X_0,\tilde{P}_1)$, where $H_1$ is defined in Section 3.1. As mentioned in our paper, $\partial_x H_1$ does not depend on the momentum variable, and thus $P_1^{(0)} \sim \mathrm{N}(\mu, 1/X_0^2)$, where $\mu=-\frac{h}{2}\partial_x H_1(X_0,\tilde{P}_1)$. Using (5) in our paper, the implicit equation in $P_1^{(1/2)}$ in the GL integrator is actually a polynomial equation of degree 2: $\frac{h}{2}X_0 (P_1^{(1/2)})^2 + P_1^{(1/2)} - P_1^{(0)}=0$. Denote $\Delta=1+2 h X_0 P_1^{(0)}$. Then, whenever $P_1^{(0)} > -1/(2 h X_0)$, i.e., $\Delta < 0$, this equation admits no solution. Recalling that $P_1^{(0)} \sim \mathrm{N}(\mu, 1/X_0^2)$, this event occurs with positive probability, thus violating the setting of [1] and [5]. ### 2. Details on the experiments **Synthetic data**. In this setting, the target distribution $\pi$ is a constrained Gaussian distribution with mean $\mu=(10/\sqrt{d-1})\,(0, \sqrt{d-1}, 1, \dots, 1)^T$ and variance $1$, and the domain is the hypercube or the simplex. This choice of a ‘non-uniform’ distribution is motivated by the experimental observation that the importance of the reversibility condition is **best revealed when the mass of $\pi$ is not evenly distributed on the boundary of the domain**. The choice of the target quantity $A=\int_{\mathsf{M}} \langle x, \mu \rangle \mathrm{d}\pi(x)$ is arbitrary but is enough to highlight the bias in CRHMC [5], which is corrected when using BHMC. In our experiments, we aim at computing $A$ with the best accuracy. To do so, we consider as ground truth the estimators obtained with competitive algorithms such as MALA (for the hypercube) and IMH (for the simplex). Note that these two algorithms are run 10 times longer than CRHMC [5] and BHMC (i.e., $10^6$ iterations) to ensure accurate unbiased estimation. Finally, each algorithm (CRHMC [5], BHMC, IMH, MALA) is run 10 times to obtain confidence intervals. 
We present boxplots in Figure 2 (here the vertical axis refers to the estimated target quantity). **Real-world data**. In this setting, **we do not have access to a realistic baseline that yields an unbiased estimator**, because the dimension of the polytope is too high and running MALA or IMH would be too costly. Hence, the goal of this experiment is instead to highlight that the ICS does not hurt the convergence properties of the algorithm. More precisely, we show that our method is comparable with the one from [5] in terms of the time complexity needed to obtain independent samples. While it is clear that adding an ICS implies a tradeoff between the accuracy and the complexity of the method, our results demonstrate that this tradeoff does not penalize BHMC in the considered settings. ### 3. Precisions on the notation We clarify the following notation: (i) for any vectors $u$ and $v$, $D\mathfrak{g}(x)[u,v]$ is the vector $u^T D\mathfrak{g}(x) v$; (ii) $\mathfrak{g}(x)^{-1} : D\mathfrak{g}(x)$ stands for the vector $\mathrm{Tr}(\mathfrak{g}(x)^{-1} D\mathfrak{g}(x))$. Regarding the yellow highlighted parts: these highlights are used in Alg. 1 to emphasize our main algorithmic contribution compared to the implementation of [5], namely the ICS. We will add a sentence to explain this in the updated version of the manuscript. In the appendix, highlights are used in Alg. 2 to emphasize our contribution compared to [5] in the case of continuous RHMC, and in Alg. 3 to emphasize the differences with Alg. 1. [1] Condition-number-independent convergence rate of Riemannian Hamiltonian Monte Carlo with numerical integrators. Kook et al., 2022b. [2] A family of MCMC methods on implicitly defined manifolds. Brubaker et al., 2012. [3] Monte Carlo on manifolds: sampling densities and integrating functions. Zappa et al., 2018. [4] Multiple projection Markov Chain Monte Carlo algorithms on submanifolds. Lelièvre et al., 2022. 
[5] Sampling with Riemannian Hamiltonian Monte Carlo in a constrained space. Kook et al., 2022a.
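To make point 1 above concrete, the probability of the no-solution event $\Delta < 0$ can be estimated numerically; the values of $X_0$, $h$ and $\mu$ below are hypothetical placeholders chosen for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
X0, h = 1.0, 0.5   # hypothetical position and step size (assumptions)
mu = 0.3           # stands in for -(h/2) * d_x H_1(X0, P~_1)

# P_1^(0) ~ N(mu, 1/X0^2); the GL half-step solves the quadratic
# (h/2) * X0 * p^2 + p - P_1^(0) = 0, with discriminant D = 1 + 2*h*X0*P_1^(0).
# Where D >= 0, the two real roots are p = (-1 ± sqrt(D)) / (h * X0).
P0 = rng.normal(mu, 1.0 / abs(X0), size=100_000)
D = 1.0 + 2.0 * h * X0 * P0

# Fraction of draws for which the implicit equation has no real root.
frac_no_solution = np.mean(D < 0.0)
print(f"estimated P(no real solution) = {frac_no_solution:.4f}")
```

Since $P_1^{(0)}$ is Gaussian with full support, `frac_no_solution` is strictly positive, matching the claim that the event $\Delta < 0$ occurs with positive probability.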
NeurIPS_2023_submissions_huggingface
2023
When Demonstrations meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning
Accept (oral)
Summary: This paper focuses on learning from demonstration via offline inverse reinforcement learning (IRL). Offline IRL suffers from a similar problem as offline reinforcement learning (RL) and imitation learning (IL), where the policy cannot generalize well to unseen states and actions---this problem is known as distribution shift. To address this problem, this paper proposes to first learn a dynamics model, and formulates a maximum likelihood (ML) objective to simultaneously recover both the reward function and the policy. Notably, the policy is optimized using a maximum entropy objective along with pessimism based on the uncertainty of the learned dynamics model. This paper provides PAC-style bounds to quantify the number of samples required to achieve an $\varepsilon$-optimal solution to the MLE objective. The paper further provides an algorithm that obtains such an $\varepsilon$-optimal solution under specific assumptions. Finally, this paper provides an empirical evaluation on the D4RL benchmarks. ## Contributions - A new MLE objective for recovering a policy that is close to the expert policy. - A theoretical analysis that describes the statistical guarantees of the objective - An algorithm that obtains a near-optimal solution under linear parameterization of the reward function - An empirical evaluation on a standard benchmark, D4RL, that indicates statistically significant improvement over existing baselines in the majority of the tasks. Strengths: - The paper is well written and easy to follow in general---I particularly appreciate the presentation on providing formal statements followed by the high-level intuitions. - The paper proposes a novel formulation for offline inverse reinforcement learning. - The paper provides numerous theoretical justifications and an algorithm that is inspired by said analyses. Weaknesses: - In practice, how do we ensure assumption 2 (ergodicity)? It seems like this assumption actually "hides" some part of the coverage requirement? 
- I am curious as to how the MLE objective connects to the policy error---I completely understand that if $\pi_\theta = \pi^E$ then the policy error is zero. However, it does not seem to me that achieving $\varepsilon$-error on the MLE (i.e. $L(\theta^*) - L(\hat \theta) \leq \varepsilon$) directly tells us the policy error. - I think the training description for BC is somewhat vague---on page 8, line 314: what exactly is meant by "train the algorithm until convergence"? Do we have some form of validation checking for BC? [1, 2, 3] have results regarding how BC would perform based on specific validation. Secondly, was there any hyperparameter search on BC, ValueDICE, and CLARE? - Regarding the experiment on recovered rewards, what is the performance if we were to fix the reward to 0? Isn't it better to consider the correlation between the true reward function and the obtained reward function? ## References [1]: Hussenot, L., Andrychowicz, M., Vincent, D., Dadashi, R., Raichuk, A., Ramos, S., ... & Pietquin, O. (2021, July). Hyperparameter selection for imitation learning. In International Conference on Machine Learning (pp. 4511-4522). PMLR. [2]: Mandlekar, A., Xu, D., Wong, J., Nasiriany, S., Wang, C., Kulkarni, R., ... & Martín-Martín, R. (2021). What matters in learning from offline human demonstrations for robot manipulation. arXiv preprint arXiv:2108.03298. [3]: Ablett, T., Chan, B., & Kelly, J. (2023). Learning from Guided Play: Improving Exploration for Adversarial Imitation Learning with Simple Auxiliary Tasks. IEEE Robotics and Automation Letters. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - On page 3, equation 2a: $\theta$ parameterizes only the reward function, and $\pi_\theta$ corresponds to the policy obtained when $r$ is parameterized by $\theta$, correct? - On page 5, proposition 1: Is there any result regarding larger $\varepsilon$? It seems like we may want to sacrifice this approximation error. 
## Possible Typos - Page 3, line 93: gamma \in (0, 1)? - Page 3, line 95: The $P$ should not be the same as the transition dynamics right? I feel we should be clear that we are either overloading notation or use another symbol. - Page 5, equation 11: What's $\epsilon$? Is it the same as $\varepsilon$? - Page 8, line 303: Missing space after "2)" Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: - The paper's proposed method requires uncertainty estimation which is still an active research problem, especially in the context of neural networks---as far as I understand the paper leverages existing work that lacks theoretical guarantee. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review of the paper and the valuable feedback. Below, we address the reviewer's comments in a point-by-point manner. **Response to Weakness 1:** We thank the reviewer for this good question. As we discussed in Section 6, we constructed an ensemble of estimated dynamics models $ \\{ \hat{P}^{i}\_{\phi,\varphi}(\cdot|s,a) = \mathcal{N}( \mu^{i}\_{\phi}(s,a), \Sigma^{i}\_{\varphi}(s,a) ) \\}\_{i=1}^N$, where each one models the location of the next state by a Gaussian distribution. Since Gaussian distributions have full support, every state can eventually be reached from every other state with positive probability. Hence, ergodicity holds in our estimated dynamics model in practice. **Response to Weakness 2:** We appreciate the reviewer for raising this insightful question. The general answer is that $\varepsilon$-error on the MLE implies that the recovered policy $\pi_{\hat{\theta}}$ is $\varepsilon$-close to the expert policy $\pi^E$ as measured by the KL divergence. In our formulation eq (2a) - (2b), the MLE objective follows $L(\theta) := \mathbb{E}\_{\tau^E \sim (\eta,\pi^E,P)}[ \sum\_{t=0}^{\infty} \gamma^t \log \pi\_{\theta}(a_t|s_t) ]$. 
According to the definition of the state visitation measure $d^E(s)$ under the expert policy $\pi^E$ where $d^E(s) := (1-\gamma) \sum\_{t=0}^{\infty} \gamma^t P^{\pi^E}(s_t = s|s_0 \sim \eta)$, we can rewrite the MLE objective as below: $L(\theta) := \mathbb{E}\_{\tau^E \sim (\eta,\pi^E,P)}[ \sum\_{t=0}^{\infty} \gamma^t \log \pi\_{\theta}(a_t|s_t) ] = \frac{1}{1-\gamma}\mathbb{E}\_{s \sim d^E(\cdot), a \sim \pi^E(\cdot|s)}[ \log \pi\_{\theta}(a|s) ].$ Therefore, the $\varepsilon$-error on the MLE implies that $L(\theta^*) - L(\hat{\theta}) = \frac{1}{1-\gamma}\mathbb{E}\_{s\sim d^E(\cdot), a \sim \pi^E(\cdot|s)}[\log \pi\_{\theta^*}(a|s) - \log \pi\_{\hat{\theta}}(a|s)] = \frac{1}{1-\gamma}\mathbb{E}\_{s\sim d^E(\cdot), a \sim \pi^E(\cdot|s)}[\log \frac{\pi\_{\theta^*}(a|s)}{\pi\_{\hat{\theta}}(a|s)} ] \leq \varepsilon.$ When the expert trajectories are consistent with the optimal policy under a ground truth reward parameter $\theta^*$, we have $\pi^E = \pi\_{\theta^*}$. Due to this property, we can show $L(\theta^*) - L(\hat{\theta}) = \frac{1}{1-\gamma}\mathbb{E}\_{s\sim d^E(\cdot), a \sim \pi^E(\cdot|s)}[\log \frac{\pi\_{\theta^*}(a|s)}{\pi\_{\hat{\theta}}(a|s)} ] = \frac{1}{1-\gamma}\mathbb{E}\_{s\sim d^E(\cdot), a \sim \pi^E(\cdot|s)}[\log \frac{\pi^E(a|s)}{\pi\_{\hat{\theta}}(a|s)} ] = \frac{1}{1-\gamma}\mathbb{E}\_{s\sim d^E(\cdot)} [ D\_{KL}( \pi^E(\cdot|s) || \pi\_{\hat{\theta}}(\cdot|s) ) ] \leq \varepsilon.$ Hence, we can show that the $\varepsilon$-error on the MLE implies that the recovered policy $\pi\_{\hat{\theta}}$ is $\varepsilon$-close to the expert policy $\pi^E$ as measured by the KL divergence. **Response to Weakness 3:** We appreciate the reviewer’s comments. Regarding the training of BC, we assess the checkpoints generated throughout the training process and monitor the performance of the resulting policies every few updates. 
Once the performance of BC stops improving (in terms of the average reward per episode) within 20 training epochs, we use the resulting policy to generate ten rollout episodes and record the average reward of these episodes as the final performance measure. Moreover, it is important to note that for the benchmark algorithms BC, ValueDice, and CLARE, we have utilized their official implementations with their default hyper-parameters, which have been fine-tuned beforehand. Below, we provide the sources for the official implementations of the benchmark algorithms, which we have mentioned in Appendix A (the section on experiment details): BC, ValueDice: https://github.com/google-research/google-research/tree/master/value_dice CLARE: https://openreview.net/forum?id=5aT4ganOd98 **Response to Weakness 4:** We appreciate this suggestion. The numerical results are included in the PDF in the global response (see https://openreview.net/forum?id=oML3v2cFg2&noteId=oCZvqhc8PI). **Response to Question 1:** Yes. When the reward function is $r(\cdot,\cdot;\theta)$, we use $\pi_{\theta}$ to denote the optimal policy obtained in the estimated dynamics model. **Response to Question 2:** We appreciate the reviewer for this good question. The reason we define $\varepsilon \in (0,2)$ is that we use $\varepsilon$ to bound $\mathbb{E}\_{(s,a)\sim d^E(\cdot,\cdot)}[|| P(\cdot|s,a) - \hat{P}(\cdot|s,a)||_1]$ and the maximum dynamics mismatch error under the L1 norm is bounded by 2. Based on the definition of the L1 distance, we have $|| P(\cdot | s,a) - \hat{P}(\cdot | s,a) ||_1 = \sum\_{s^\prime \in \mathcal{S}} | P(s^\prime | s,a) - \hat{P}(s^\prime | s,a) | \in [0,2]$. When two distributions perfectly match, their L1 distance is 0, while a complete mismatch between distributions results in an L1 distance of 2. 
Hence, when analyzing the sample complexity, we define $\varepsilon \in (0,2)$ to bound the error $\mathbb{E}\_{(s,a)\sim d^E(\cdot,\cdot)}[|| P(\cdot|s,a) - \hat{P}(\cdot|s,a)||_1]$. **Response to Typo 1:** We will explicitly define $\gamma \in (0,1)$ in our paper. **Response to Typo 2:** We appreciate the comments. There is one typo in the term $P(s_t = s | s_0 \sim \eta)$, which should be corrected as $P^{\pi}(s_t = s | s_0 \sim \eta ).$ Under any fixed policy $\pi$, the term $P^{\pi}(s_t = s | s_0 \sim \eta )$ denotes the probability that $s_t = s$ at time step $t$ when $s_0$ is sampled from $\eta(\cdot)$ and the actions in the MDP are sampled from $\pi$. Hence, the state-action visitation measure $d^{\pi}\_{P}(s,a)$ in line 95 follows: $d^{\pi}_{P}(s,a):= (1 - \gamma) \pi(a|s) \sum\_{t=0}^{\infty} \gamma^t P^{\pi}(s_t = s | s_0 \sim \eta ).$ **Response to Typo 3:** We appreciate the comments. $\epsilon$ is a typo and we will correct it to $\varepsilon$. **Response to Typo 4:** We will include the space after "2)". --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers and the extra experiments. I believe the authors have addressed all my questions or concerns (especially for weakness 2, my understanding is that this will translate to policy error due to [1]), I am happy to increase the score from 6 to 8. [1]: Xu, Tian, Ziniu Li, and Yang Yu. "Error bounds of imitating policies and environments." Advances in Neural Information Processing Systems 33 (2020): 15737-15749. --- Reply to Comment 1.1.1: Title: Many thanks for your comments and positive feedback Comment: We truly appreciate your detailed review and insightful comments. We will discuss this paper and the translation to policy errors in our revised version.
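The identity used in the response to Weakness 2 — that the MLE gap equals a discounted expected KL divergence once $\pi^E$ is identified with $\pi_{\theta^*}$ — can be checked numerically on a toy discrete instance; the policies and visitation measure below are random placeholders, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
nS, nA, gamma = 4, 3, 0.9

def rand_simplex(shape):
    """Random point(s) on the probability simplex (illustrative only)."""
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

# Hypothetical expert policy (identified with pi_{theta*}), learned policy,
# and expert state-visitation measure d^E -- all toy placeholders.
pi_E = rand_simplex((nS, nA))
pi_hat = rand_simplex((nS, nA))
d_E = rand_simplex(nS)

# MLE gap rewritten via the visitation measure:
# L(theta*) - L(theta_hat) = (1/(1-gamma)) E_{s~d^E, a~pi^E}[log pi_E - log pi_hat]
gap = (d_E[:, None] * pi_E * (np.log(pi_E) - np.log(pi_hat))).sum() / (1 - gamma)

# The same quantity as (1/(1-gamma)) E_{s~d^E}[ KL(pi^E(.|s) || pi_hat(.|s)) ]
kl_per_state = (pi_E * np.log(pi_E / pi_hat)).sum(axis=1)
gap_kl = (d_E * kl_per_state).sum() / (1 - gamma)

print(gap, gap_kl)  # the two expressions coincide
```

Nonnegativity of the gap also falls out for free, since each per-state term is a KL divergence.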
Summary: This paper addresses the issue of covariate shift in offline imitation learning. The authors extend uncertainty-regularized model-based offline RL to the imitation learning setting. The key idea is to first learn transition dynamics from samples, and then solve an optimization problem that jointly seeks a policy that is optimal in the learned dynamics under a reward model while maximizing the log-likelihood of the actions in the data. The authors provide theoretical guarantees for the maximization of action log-likelihood and empirical results for the learned policy. Strengths: 1. The issue of covariate shift is indeed important for offline IRL. 2. The efficacy of the algorithm is partially supported by empirical results. 3. Analysis is provided for the model-based part of this algorithm. Weaknesses: 1. The paper is somewhat hard to follow. See questions below. 2. The effect of overcoming the distributional shift is not emphasized in the experiment section. None of the experiments was carried out on small datasets where the coverage of the state–action space is limited. In fact, the medium datasets in D4RL contain 1M transitions, and the medium-expert versions contain 2M transitions. The datasets for results in Figure 2 contain 5000 expert demonstrations, which might correspond to 5M transitions if each expert trajectory contains 1000 transitions. 3. The paper does not have an informative conclusion part. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Why do you include discounting in the log-likelihood term? 2. Why do you include the entropy term in eq. 3a? 3. The theoretical analysis relies on the relation between the optimal policy and the optimal soft Q-function (eq. 4 and eq. 14). But as mentioned in lines 226--230, practical implementations utilize an actor-critic framework to approximate the optimal policy. How does this approximation affect the analysis? 4. Eq. 
15 shows a clear resemblance with the reward maximization part in max-entropy IRL (eq. (1)). It suggests that the proposed algorithm seems to replace the online interaction of max-entropy IRL with sample generation using the learned dynamics. Then, what is the motivation to maximize the log-likelihood of expert actions? A suggestion. One concern with this approach is that estimating the transition dynamics requires a sufficient amount of data, but the covariate shift issue occurs when we only have a small amount of data. I would suggest the authors include results on a few expert demonstrations (e.g. 10~100) to verify their claim. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: No, there is no discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review of the paper and the valuable feedback. Below, we address the reviewer's comments in a point-by-point manner. **Response to Weakness 1:** We will address your comments to improve the readability of the paper. **Response to Weakness 2:** We appreciate the reviewer’s comments. In our paper, in fact, we have utilized small datasets to evaluate the performance of our proposed algorithm when the coverage of the expert-visited state-action space is limited. To address some potential misunderstanding regarding our experiment, we would like to clarify that **in our work, one expert demonstration corresponds to one transition sample $(s, a, s^\prime)$ from the expert trajectories.** We have explicitly mentioned this in Appendix A and the caption of Table 2. In Figs. 2-4, we respectively report the performance of our proposed algorithm when five trajectories (5,000 transition samples), one trajectory (1,000 transition samples) and ten trajectories (10,000 transition samples) are available. **Response to Weakness 3:** We appreciate the reviewer’s suggestion. We provide a conclusion section in the global response and we kindly refer the reviewer to check it (see https://openreview.net/forum?id=oML3v2cFg2&noteId=oCZvqhc8PI). **Response to Question 1:** We appreciate the reviewer for raising this insightful question. The general answer is that the discounted log-likelihood objective is derived from the dual problem of the classic maximum entropy (MaxEnt) IRL formulation. In the literature of MaxEnt IRL [1], the problem aims to learn a (linearly parameterized) reward that can induce a policy to achieve the same expected reward as the expert trajectories while maximizing its entropy. 
Under the infinite-horizon MDP setting, existing results have shown that the dual problem of MaxEnt IRL is the maximization of the “discounted” log-likelihood of expert trajectories under an optimal policy that solves an underlying entropy-regularized MDP (see Theorem 1 in [2]). This fundamental result justifies the use of discounting in the log-likelihood term. **Response to Question 2:** The general answer is that the entropy term in our formulation eq. (2b) is translated from the maximum entropy objective of MaxEnt IRL. As we discussed in the response to Question 1, in online IRL with linearly parameterized reward, it has been shown that the dual problem of MaxEnt IRL is a maximum likelihood formulation of IRL where the optimal policy is modeled as a solution to an entropy-regularized MDP. Here, in the derivation of the maximum likelihood formulation of IRL, the entropy term in the objective of MaxEnt IRL has been translated to the model of the agent’s behavior. Hence, in this paper, when we propose the maximum likelihood formulation of IRL with nonlinear reward parameterization in the offline setting, we include the entropy term in the definition of the soft value function and soft Q-function in eq. 3a - 3b. **Response to Question 3:** We would like to clarify that our theoretical analysis aligns with the algorithmic framework of our practical implementations, since we have taken the approximation error into account in our theoretical analysis. More specifically, we have incorporated the approximation error of the soft actor-critic (SAC) algorithm [3] into our policy optimization subroutine in eq. 13 - 14. Our theoretical analysis does not require the policy or the soft Q-function to be optimal at each step; therefore, running a few steps of the soft actor-critic algorithm will be sufficient. At each step $k$, we first run policy evaluation steps (critic steps) to approximate the corresponding soft Q-function $Q_k$ of the current policy $\pi_k$ by $\hat{Q}\_k$, as shown in eq. 13. 
Then we run the soft policy iteration (actor steps) to obtain the updated policy $\pi_{k+1}(a|s) \propto \exp \hat{Q}_k(s,a)$, as shown in eq. 14. Note that our policy optimization subroutine matches the practical implementation of SAC. Moreover, the approximation error $||\hat{Q}_k - Q_k ||\_{\infty}$ in estimating the soft Q-function has been explicitly considered in our theoretical analysis (see Theorem 2). **Response to Question 4:** The general answer is that the maximum likelihood formulation is a broader problem formulation compared with MaxEnt IRL. First of all, as we discussed in the response to Question 1, MaxEnt IRL is based on the online IRL setting with linear reward. In this case, maximum likelihood IRL is the dual problem to MaxEnt IRL. Hence, we can find a resemblance between maximum likelihood IRL and MaxEnt IRL. However, since MaxEnt IRL is limited to the online setting with linear reward, its formulation is incompatible with broader problems. Therefore, the maximum likelihood IRL formulation offers greater flexibility to model IRL problems in a broader scope. It can be applied effectively to scenarios involving either linear or nonlinear reward parameterization, as well as online or offline settings. **Response to Reviewer's Suggestion:** We appreciate the reviewer’s suggestion. As we clarified in our response to Weakness 2, one expert demonstration in our paper means one transition sample $(s, a, s^\prime)$ collected from an expert trajectory. In our numerical results, we have considered the setting where only 1 / 5 / 10 expert trajectories are available. **Response to Limitations:** We have included them in the Limitations and broader impacts section in the Appendix. ______ [1] Ziebart, Brian D. Modeling purposeful adaptive behavior with the principle of maximum causal entropy. Carnegie Mellon University, 2010. [2] Zeng, Siliang, et al. "Maximum-likelihood inverse reinforcement learning with finite-time guarantees." 
Advances in Neural Information Processing Systems 35 (2022): 10122-10135. [3] Haarnoja, Tuomas, et al. "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor." International conference on machine learning. PMLR, 2018. --- Rebuttal Comment 1.1: Comment: Thanks for the replies. I have adjusted my evaluation based on them. --- Reply to Comment 1.1.1: Title: Many thanks for your positive feedback to our response Comment: We truly thank you for taking time to review our paper and recognizing the contributions of this work!
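The approximate soft policy iteration step $\pi_{k+1}(a|s) \propto \exp \hat{Q}_k(s,a)$ discussed in the response to Question 3 above can be sketched in tabular form; the random $\hat{Q}$ below is a hypothetical stand-in for the critic's output, not the paper's neural implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
nS, nA = 5, 3

# Hypothetical tabular soft-Q estimate Q_hat_k, standing in for the output of
# the critic step (eq. 13); in practice this would come from SAC's critic.
Q_hat = rng.normal(size=(nS, nA))

# Approximate soft policy iteration (eq. 14): pi_{k+1}(a|s) ∝ exp(Q_hat_k(s,a)),
# computed with a row-wise max-shift for numerical stability.
logits = Q_hat - Q_hat.max(axis=1, keepdims=True)
pi_next = np.exp(logits)
pi_next /= pi_next.sum(axis=1, keepdims=True)

print(pi_next.sum(axis=1))  # each row is a valid distribution over actions
```

Since the softmax is monotone, the updated policy's most likely action in each state agrees with the greedy action of $\hat{Q}_k$.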
Summary: Offline inverse reinforcement learning (IRL) is a method for finding an unknown reward function optimized by an agent from demonstrations using only a finite dataset. The most common framework is maximum entropy (MaxEnt) IRL, which attempts to find a reward function that induces a policy which achieves the same expected reward as the trajectories from the expert demonstrations while maximizing its entropy. Prior work has shown that this is equivalent to finding a policy which maximizes the likelihood of the demonstrations under the constraint that this policy comes from solving a MaxEnt RL problem. This formulation as a bi-level optimization problem reduces the computational burden that results from alternating between finding the policy and updating the reward. However, it requires access to the true dynamics of the environment, which is incompatible with the offline IRL setup. Instead, this work proposes to learn the dynamics model in an uncertainty-aware fashion and incorporate a measure of this uncertainty in the learned reward function. This results in a two-stage procedure: 1) fitting a dynamics model from transition samples in the dataset and 2) recovering the reward function using the maximum likelihood (ML) formulation of the IRL problem. To perform the second step, the authors propose a novel decomposition of the upper-level objective, which consists of a surrogate objective that is more computationally tractable to optimize. The authors provide statistical guarantees about the optimality of the recovered reward function in terms of dataset coverage, a concept common in offline RL. Importantly, their bounds depend on dataset coverage on expert-visited state-action pairs, not the full joint space. Strengths: - Offline inverse RL is an important area for tackling challenging sequential decision making problems in potentially safety-critical applications. - The paper is well organized and clearly written. 
It does a good job explaining the novelty and results and provides enough information to support its claims. - The paper presents an extensive experimental evaluation on several benchmarks, comparing to both model-based and model-free offline IRL algorithms and existing imitation learning approaches. Their algorithm outperforms these baselines in most cases. - The authors provide a nice analysis of surrogate objective and its relation to the true upper-level objective. This provides a nice motivation for optimizing the surrogate instead, which is more computationally tractable. - The authors present a nice optimality guarantee of the stationary point of their algorithm in terms of the surrogate objective in the case where the reward function is linear in a feature vector of states and actions. They also relate this stationary point to the true optimal solution of the original problem. - The additional reward transfer experiments indicate that the learned reward function may transfer to dynamics models trained on different state-action distributions. This appears to hold even if the state-action coverage used to train the reward function is close to expert-visited states. Weaknesses: - The alternating optimization scheme discussed in this work appears to be identical to that presented in [21]. If true, that is fine, as the main contribution of the work lies in modeling the conservative MDP and providing novel bounds in the offline setting. However, it should be made explicit in the paper and mentioned in the contributions. - Theorem 2 seems very similar to Theorem 5.4 in [21], except that the Q function approximation error is considered explicitly. If this is true, it should be discussed that this is the novelty in the text. - Section 6 should discuss the differences in the three dataset types used, as the current text does not explain what they entail. 
This makes it difficult to understand the performance of the proposed algorithm in each setting without carefully looking at the Appendix. It should also talk about the purpose of using these different datasets. From the Appendix, it appears that they are only used to train the dynamics model. Thus, they are evaluating the effect of dataset coverage around the expert on performance. This should be explicitly discussed in the paper. I know space is limited, but these are important details that should be in the main text. - A minor comment is that it would be nice for the main paper to end with a conclusion section rather than ending abruptly. And this conclusion should mention limitations of the current method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Is there a difference in the alternating optimization scheme presented here and the one in [21]? - How does Theorem 2 in this text relate to Theorem 5.4 in [21]? - Am I correct in assuming that the three different datasets are used to evaluate how the coverage of the dataset around expert state-action pairs affects the algorithm's performance? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no discussion of limitations in the paper. The paper would be made a lot stronger if this was discussed in a conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review of the paper and the valuable feedback. Below, we address the reviewer's comments in a point-by-point manner. **Response to Weakness 1&Question 1:** We appreciate the reviewer’s comments. However, we do not agree that the proposed alternating optimization algorithm in this work is identical to the one presented in [21]. In this response, we would like to highlight the key difference between our proposed algorithm and the one presented in [21]. Compared with the online IRL algorithm presented in [21], one key difference is that our proposed offline IRL algorithm solves a **much harder** IRL problem (2a) - (2b) than the one in [21]. This is due to the fact that our outer objective $L(\theta)$ in eq (2a) is defined on the *ground-truth* dynamics model $P$ while the inner problem in eq (2b) considers policy optimization in the *estimated* dynamics model $\hat{P}$. Due to the existence of the dynamics model mismatch, we turn to optimize a novel surrogate objective by developing a *new* algorithm. To be more specific, compared with the existing algorithm in [21], there are a few major differences in our proposed algorithm. Indeed, as the reviewer has mentioned, we adopt a generic “alternating optimization” scheme, which alternates between one conservative policy improvement step and one reward optimization step. However, the algorithm under the hood of “alternating optimization” is very different. * First, compared with the existing algorithm in [21], the proposed algorithm in this paper aims to solve a completely different problem – offline IRL. Due to the difference in problem settings and formulations, we need to analyze the gradient expression of the surrogate objective function w.r.t. the reward parameter, which leads to a different reward update step compared to the one in [21]. 
* Second, for the policy optimization under each reward estimator, we consider updating the policy under the estimated dynamics model with a regularization term (penalty function) based on the model uncertainty. In contrast, the existing algorithm in [21] only solves the online IRL problem where there is no estimated dynamics model and uncertainty estimation. This difference in algorithm design also leads to new algorithm implementations where we solve the underlying optimal policy in the estimated dynamics model through taking advantage of the recent advances in model-based offline policy optimization and uncertainty estimation. * Third, in the policy optimization subroutine (13) - (14), we consider a more realistic scheme compared with [21], where we first approximate the current soft Q function $Q_k$ by $\widehat{Q}_k$ in (13) and then update the policy by an approximate soft policy iteration in (14). Compared with the existing algorithm in [21] which simply assumes the soft Q-function can be accurately estimated without any approximation error, the proposed algorithm in our paper is more practical since the approximation error has been explicitly taken into account. **Response to Weakness 2&Question 2:** We appreciate the reviewer’s comments. In this response, we would like to highlight the key differences between Theorem 2 in this work and Theorem 5.4 in [21]. In the convergence analysis of our proposed algorithm, the previous analysis in [21] does not hold anymore. This is due to the fact that we are optimizing a *surrogate objective function* which is different from [21]. Moreover, as opposed to the analysis considered in [21], the analysis of our proposed algorithm involves two dynamics models: the ground-truth dynamics model $P$ and the estimated dynamics model $\hat{P}$. 
Due to the existence of the dynamics model mismatch and the regularized penalty function in the setting of offline IRL, it is not clear whether previous properties, such as the Lipschitz continuity proved in [21], still hold here. Furthermore, as shown in eq (13) - (14), which differ from [21], we do not assume that the soft Q function $Q_k$ can be accurately approximated at each step. In the convergence analysis of [21], the accurate approximation of the soft Q function at each step can guarantee a monotonic improvement for each soft policy iteration step. However, in our work, the approximation error between $\widehat{Q}_k$ and $Q_k$ in (13) - (14) also makes it more challenging to analyze the stability of our proposed alternating algorithm. To tackle this problem, we develop new proof techniques in the proof of Theorem 2 to establish the finite-time convergence of our proposed algorithm. We appreciate the reviewer’s comments, which encourage us to discuss the key difference in proof techniques compared with [21]. We will include the discussion above in our paper. **Response to Weakness 3:** We appreciate the reviewer’s comments. These transition datasets are used to train the dynamics model and further evaluate the effect of the dataset’s coverage over the expert-visited state-action space on the performance of benchmark algorithms. We will explicitly include these details and move the discussion in the Appendix into the main paper. **Response to Weakness 4:** We appreciate the reviewer’s suggestion. We present a conclusion and discuss the limitations in the global response and we kindly refer the reviewer to check it (see https://openreview.net/forum?id=oML3v2cFg2&noteId=oCZvqhc8PI). **Response to Question 3:** Yes. Indeed, the three types of datasets are gathered from policies with varying performance levels and exhibit different coverage across the expert-visited state-action space. 
We leverage these three dataset types to assess how their coverage of the expert-visited state-action space influences the algorithm's performance. **Response to Limitations:** We would like to kindly remind the reviewer that the limitations of this work have been discussed in the Limitations and broader impacts section in the Appendix. We will also discuss them in the conclusion section. --- Rebuttal Comment 1.1: Comment: Thank you for your clarifications! I have adjusted my score accordingly, assuming you make the proposed modifications to the text. --- Reply to Comment 1.1.1: Title: Many thanks for your comments and positive feedback Comment: We truly appreciate your detailed review and insightful comments! We will include the discussions and the proposed modifications in our revised version.
Summary: In this paper the authors present a two level maximum likelihood based framework for offline inverse reinforcement learning, where both a world model and a reward model are learnt from expert demonstrations. In this two level algorithm the outer level or loop involves estimating the reward function, while the inner loop estimates the optimal policy for the chosen reward function in a conservative MDP setting, where a penalty is added which is loosely proportional to the uncertainty in the learnt model. The authors provide theoretical guarantees for the performance of their algorithm under fairly standard technical assumptions. They also show the numerical performance of their algorithm on 3 MuJoCo environments, comparing them with other state of the art offline RL algorithms. Strengths: 1. The paper is novel, clearly written and is easy to comprehend. 2. The authors have stated their results formally in the form of Lemmas and Theorems and have proved them in the supplementary material. This analysis proves the validity and utility of their proposed approach. 3. While model based offline inverse RL has been studied, I think the theoretical guarantees from this paper are novel and important. Weaknesses: 1. The authors have demonstrated performance on only 3 environments, in which in one of the cases, their proposed algorithm is not the best. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. How is the state action visitation measure defined in Eq (5)? Specifically, I did not understand the term $P(S_t = s|s_0 \sim \eta)$. Does this imply that the transitions are action independent? 2. In the proof of Lemmma 1, how is the equation below the one labelled (iii) obtained? 3. How is the last step in Eq(30) in Proof of Lemma 1 obtained? 4. [Typo] Line 806 "bonded" -> "bounded". 5. 
I have understood most of the proofs as they are well written with adequate justification, but since I am not sure about the definition of the state-action visitation measure, I was unable to verify some proofs in the supplementary material. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The authors address the limitations of this work and also some suggestions to overcome some of these limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed review of the paper and the valuable feedback. Below, we address the reviewer's questions in a point-by-point manner. **Response to Question 1:** We appreciate this question raised by the reviewer. This question indeed helps us find a typo in the term $P(s_t=s|s_0\sim\eta)$, which should be corrected as $P^\pi(s_t=s|s_0\sim\eta)$. Under any fixed policy $\pi(a|s)$ and transition function $P(s^\prime | s,a)$, the term $P^\pi(s_t=s|s_0\sim\eta)$ denotes the probability that $s_t = s$ at time step $t$ when $s_0$ is sampled from the initial state distribution $\eta(\cdot)$ and the actions $\{a_0 , a_1, …, a_{t-1}\}$ in the Markov decision process are sampled from the policy $\pi$. To avoid potential confusion, under any policy $\pi$ and transition function $P$, let us correct the typo and rewrite the definition of the corresponding state-action visitation measure $d^\pi\_{P}(s,a)$ as below: $d^\pi\_{P}(s,a) := (1 - \gamma) \pi(a|s) \sum\_{t=0}^{\infty} \gamma^t P^\pi(s_t = s | s_0 \sim \eta)$. Hence, in Eq (5), the state-action visitation measure under the expert policy $\pi^E$ is defined as below: $d^E(s,a) := (1 - \gamma) \pi^E(a|s) \sum\_{t=0}^{\infty} \gamma^t P^{\pi^E}(s_t = s | s_0 \sim \eta )$. 
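To make the corrected definition concrete, the discounted visitation measure can be computed exactly in a small tabular MDP by unrolling the state distribution. This is a sketch with hypothetical dynamics, not tied to the paper's experiments:

```python
import numpy as np

def visitation_measure(P, pi, eta, gamma=0.9, horizon=500):
    """d^pi_P(s,a) = (1-gamma) * pi(a|s) * sum_t gamma^t P^pi(s_t = s | s_0 ~ eta).

    P:   (S, A, S) transition tensor, pi: (S, A) policy, eta: (S,) initial dist.
    The infinite sum is truncated at `horizon`; the geometric tail is
    negligible once gamma**horizon is tiny.
    """
    S, A = pi.shape
    d = np.zeros((S, A))
    rho = eta.copy()  # rho[s] = P^pi(s_t = s | s_0 ~ eta) at the current t
    for t in range(horizon):
        d += (gamma ** t) * rho[:, None] * pi
        # propagate the state distribution one step under pi and P
        rho = np.einsum('s,sa,sap->p', rho, pi, P)
    return (1.0 - gamma) * d
```

Since $(1-\gamma)\sum_t \gamma^t = 1$, the returned array is a proper probability distribution over state-action pairs, which is a quick sanity check for the definition.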
**Response to Question 2:** In (iii), we have shown that the objective function $L(\theta)$ can be expressed as $L(\theta)=\sum\_{t=0}^{\infty} \gamma^t \mathbb{E}_{(s_t,a_t)\sim(\eta,\pi^E,P)}[ r(s_t,a_t;\theta) + U(s_t,a_t) + \gamma \mathbb{E}\_{s\_{t+1} \sim \hat{P}(\cdot|s_t,a_t) }[V\_{\theta}(s\_{t+1})] ] - \sum\_{t=0}^{\infty}\gamma^t \mathbb{E}\_{s_t \sim (\eta,\pi^E,P)}[V\_{\theta}(s_t)].$ Based on the equation in (iii), we can further write down the following equality: $L(\theta)=(\sum\_{t=0}^{\infty} \gamma^t \mathbb{E}_{(s_t,a_t)\sim(\eta,\pi^E,P)}[ r(s_t,a_t;\theta) + U(s_t,a_t) ] + \sum\_{t=0}^{\infty} \gamma^{t+1} \mathbb{E}\_{(s_t,a_t)\sim(\eta,\pi^E,P),s\_{t+1} \sim \hat{P}(\cdot|s_t,a_t)}[V\_{\theta}(s\_{t+1})] ) - (\mathbb{E}\_{s_0 \sim \eta(\cdot)}[V\_{\theta}(s_0)] + \sum\_{t=0}^{\infty} \gamma^{t+1} \mathbb{E}\_{s\_{t+1} \sim (\eta,\pi^E,P)}[V\_{\theta}(s\_{t+1})] ).$ Then it leads to the equality below the one labeled (iii) by interchanging terms. **Response to Question 3:** In the first equation of eq. (30), we have obtained the following expression of the term $T_2$: $T_2 = \sum\_{t=0}^{\infty} \gamma^{t+1} \mathbb{E}_{(s_t,a_t)\sim(\eta,\pi^E,P)}[\sum\_{s\_{t+1} \in \mathcal{S}} V\_{\theta}(s\_{t+1}) (\hat{P}(s\_{t+1} | s_t, a_t) - P(s\_{t+1} | s_t, a_t))].$ Here, recall that we have defined $P^{\pi^E}(s_t=s|s_0 \sim \eta)$ as the probability that $s_t=s$ when $s_0$ is sampled from the initial state distribution $\eta(\cdot)$ and the actions in the MDP are sampled from the expert policy $\pi^E$. 
Hence, we can obtain the second equality in (30): $T_2 = \gamma \sum\_{t=0}^{\infty} \sum\_{s \in \mathcal{S}, a\in \mathcal{A}} \gamma^t P^{\pi^E}(s_t = s|s_0\sim \eta) \pi^E(a_t = a|s_t = s)( \sum\_{s^\prime \in \mathcal{S}} V\_{\theta}(s^{\prime}) (\hat{P}(s^{\prime} | s_t=s, a_t=a) - P(s^{\prime} | s_t=s, a_t=a)) ).$ Then we can further express the term $T_2$ as below: $T_2 = \gamma \sum\_{s \in \mathcal{S}, a\in \mathcal{A}} ( \pi^E(a|s) \sum\_{t=0}^{\infty} \gamma^t P^{\pi^E}(s_t = s|s_0 \sim \eta)) \cdot ( \sum\_{s^\prime \in \mathcal{S}} V\_{\theta}(s^{\prime}) (\hat{P}(s^{\prime} | s, a) - P(s^{\prime} | s, a)) )$. Given the expert policy $\pi^E$, recall that we have defined the corresponding state-action visitation measure as $d^E(s,a) :=(1-\gamma)\pi^E(a|s) \sum\_{t=0}^{\infty} \gamma^t P^{\pi^E}(s_t =s|s_0 \sim \eta).$ Then we obtain the following expression of the term $T_2$ as below: $T_2 = \frac{\gamma}{1 - \gamma} \sum\_{s \in \mathcal{S}, a\in \mathcal{A}} d^E(s,a) \cdot ( \sum\_{s^\prime \in \mathcal{S}} V\_{\theta}(s^{\prime}) (\hat{P}(s^{\prime} | s, a) - P(s^{\prime} | s, a)) ).$ Finally, we can show the last equality in eq (30): $T_2 = \frac{\gamma}{1 - \gamma} \mathbb{E}\_{(s,a)\sim d^E(\cdot, \cdot)}[ \sum\_{s^\prime \in \mathcal{S}} V\_{\theta}(s^{\prime}) (\hat{P}(s^{\prime} | s, a) - P(s^{\prime} | s, a)) ].$ **Response to Question 4:** We thank the reviewer for pointing out this typo. We will correct it in our paper. **Response to Question 5:** We thank the reviewer for raising the question on the definition of the state-action visitation measure. The question indeed helps us find a typo in the definition of the state-action visitation measure. As we clarified in the response to Question 1, under any policy $\pi$ and transition function $P$, the corresponding state-action visitation measure $d^{\pi}\_{P}(s,a)$ is defined as below: $d^{\pi}\_{P}(s,a) := (1 - \gamma) \pi(a|s) \sum\_{t=0}^{\infty} \gamma^t P^\pi(s_t = s | s_0 \sim \eta)$. 
Here, $P^\pi(s_t=s|s_0\sim\eta)$ denotes the probability that $s_t = s$ when the actions in the MDP are sampled from the policy $\pi$ and the initial state $s_0$ is sampled from the initial state distribution $\eta(\cdot)$. We hope our correction of the typo and the corrected definition of the state-action visitation measure can address the reviewer’s question, and can improve the readability of our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for responding to my comments and for providing these clarifications. Based on the explanation provided, I am happy to increase my score. --- Reply to Comment 1.1.1: Title: Many thanks for your comments and positive feedback Comment: We truly appreciate your detailed comments and review! We will include these clarifications in our revised version.
Rebuttal 1: Rebuttal: We thank all reviewers for their detailed reviews and comments. In the global response, we present a conclusion, which we will include in our final version: In this paper, we model the offline Inverse Reinforcement Learning (IRL) problem from a maximum likelihood estimation perspective. We develop a computationally efficient algorithm that effectively recovers the underlying reward function and its associated optimal policy. We have also established statistical and computational guarantees for the performance of the recovered reward estimator. Through extensive experiments, we demonstrate that our algorithm outperforms existing benchmarks for offline IRL and Imitation Learning, especially on high-dimensional robotics control tasks. One limitation of our method is that we focus solely on aligning with expert demonstrations during the reward learning process. In an ideal scenario, reward learning should incorporate diverse metrics and data sources, such as expert demonstrations and preferences gathered through human feedback. One direction for future work is to broaden our algorithmic framework and theoretical analysis for a wider scope in reward learning. Pdf: /pdf/89df6067faadb91fae502145bc2b98ac0b6b999a.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper "Understanding Expertise through Demonstrations: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning" proposes an innovative approach to offline inverse reinforcement learning. After a deep theoretical analysis of the inter-dependence between errors arising from dynamics modeling from limited offline data and performance gaps of resulting policies, the authors propose an efficient algorithm to practically exploit these conclusions for reward/policy learning. A small experimental section validates the approach. Strengths: - Very well written with every notation well defined and every choice well justified - Very interesting problem and strong theoretical analysis - A practical algorithm that looks easy to reproduce - Good results Weaknesses: - My main concern is that there is very little discussion about the model uncertainty U in the paper, particularly in section 3. I am surprised not to see it involved in the derivations and bounds, with no assumptions about it (except that it is bounded). No real meaning is given to it and it seems that it could be removed without changing anything in the theoretical conclusions. Is this true? If so, why introduce it in that part? Also, its impact is therefore not well understood from the theoretical analysis, which is a bit limiting to me (as it appears to be important). - Still on U, I feel that experimental results on the choice of U would have been very useful. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Could the authors give more insights about U, both theoretically if possible, and through some experimental results that show its impact? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: . Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 5szs for the positive comments and for recognizing the importance of this work. Below, we address the reviewer's comments in a point-by-point manner. **Our Response to Weakness 1:** We appreciate the reviewer for raising this insightful question. In our practical implementation, we utilize a set of ensemble models to construct the penalty function. It's important to note that the penalty function is based on a heuristic design and relies solely on the estimated dynamics model. Consequently, once the construction of the estimated dynamics model is complete, the associated penalty $U(s,a)$ for each state-action pair $(s,a)$ becomes a fixed constant when using the proposed alternating algorithm for reward/policy update steps. It is essential to differentiate our approach from model-based offline reinforcement learning, where the reward function remains fixed and the penalty function plays a crucial role in alleviating distribution shifts. In contrast, in IRL, the parameterized reward function is continually adjusted and optimized to align with expert demonstrations. Thus, the penalty function $U(s,a)$, being a fixed regularization term added to the parameterized reward function, does not significantly impact the theoretical analysis of our offline IRL method. This characteristic makes model-based offline IRL less sensitive to the construction of the penalty function U, as compared to model-based offline reinforcement learning. Empirically, we've observed that the uncertainty-based penalty function effectively mitigates distribution shift effects during policy optimization in the estimated dynamics model, consistent with [R1]. In offline IRL settings, where a suitable initialization for the reward function may be lacking, the agent might suffer from distribution shifts in the initial training stages. 
Including the penalty function to regularize the agent's behavior and guide it to remain in the low-uncertainty region can facilitate the imitation of expert demonstrations. Furthermore, since the policy optimization subroutine in our proposed algorithm utilizes the practical implementations of model-based RL methods that explicitly incorporate the penalty function, it is natural to include the penalty function in our approach and theoretical analysis. [R1] Yu, Tianhe, et al. "Mopo: Model-based offline policy optimization." Advances in Neural Information Processing Systems 33 (2020): 14129-14142. **Our Response to Weakness 2:** We appreciate the reviewer for raising this insightful question. To address the reviewer's question, we provide a supplementary experiment on three different choices of U. Below, we elaborate on the experiment details. To estimate the model uncertainty with ensemble models, we have independently trained an ensemble of $N$ estimated dynamics models $\\{ P\_{\phi,\varphi}^i(s\_{t+1}|s_t,a_t)=\mathcal{N}(\mu^i\_{\phi}(s_t,a_t), \Sigma^i\_{\varphi}(s_t,a_t)) \\}\_{i=1}^N$ via likelihood maximization over transition samples. Here, each model estimates the location of the next state with a Gaussian distribution. Three common choices of the penalty function have been considered: 1) Max Aleatoric: $U(s,a)=-\max\_{i = 1,\cdots,N} || \Sigma^i\_{\varphi}(s,a) ||_F$, 2) Ensemble Variance: $U(s,a) = -(\frac{1}{N} \sum\_{i=1}^N (\Sigma^i\_{\varphi}(s,a))^2 + \frac{1}{N} \sum\_{i=1}^N (\mu^i\_{\phi}(s,a))^2 - (\bar{\mu}(s,a))^2)$ where $\bar{\mu}(s,a) = \frac{1}{N}\sum\_{i=1}^N \mu^i\_{\phi}(s,a)$, 3) Ensemble Standard Deviation: $U(s,a) = -\sqrt{\frac{1}{N} \sum\_{i=1}^N (\Sigma^i\_{\varphi}(s,a))^2 + \frac{1}{N} \sum\_{i=1}^N (\mu^i\_{\phi}(s,a))^2 - (\bar{\mu}(s,a))^2}.$ We evaluate the effect of the three penalty functions in our proposed algorithm on HalfCheetah. 
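The three penalty heuristics above could be sketched as follows. This is a simplified illustration, not the authors' implementation: it assumes each ensemble member predicts a diagonal covariance (given as per-dimension standard deviations) and aggregates per-dimension variances by summation, details the rebuttal leaves implicit:

```python
import numpy as np

def penalties(mus, sigmas):
    """Three ensemble-based uncertainty penalties (returned as negative values).

    mus:    (N, d) predicted next-state means, one row per ensemble member
    sigmas: (N, d) predicted per-dimension standard deviations (diagonal cov.)
    Returns (max_aleatoric, ensemble_variance, ensemble_std).
    """
    # Max aleatoric: largest Frobenius norm of any member's diagonal covariance
    max_aleatoric = -np.max(np.linalg.norm(sigmas ** 2, axis=1))
    # Ensemble variance of the Gaussian mixture, per output dimension:
    # mean(sigma_i^2) + mean(mu_i^2) - mu_bar^2
    mu_bar = mus.mean(axis=0)
    var = (sigmas ** 2).mean(axis=0) + (mus ** 2).mean(axis=0) - mu_bar ** 2
    ensemble_variance = -var.sum()
    ensemble_std = -np.sqrt(var.sum())
    return max_aleatoric, ensemble_variance, ensemble_std
```

By construction the ensemble-standard-deviation penalty is the square root of the ensemble-variance penalty's magnitude, so the two differ only in scale, consistent with the similar final performance reported below.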
We provide 10 expert trajectories and utilize three transition datasets (medium-expert, medium, medium-replay) from D4RL. We show the numerical results in the following table: | HalfCheetah | Ensemble Standard Deviation | Ensemble Variance | Max Aleatoric | |----------|----------|----------|----------| | Medium-Expert | $11137.21 \pm 872.47$ | $10941.80 \pm 878.90$ | $10752.50 \pm 655.96$ | | Medium-Replay | $8214.46 \pm 491.64$ | $8142.77 \pm 621.12$ | $8612.29 \pm 108.25$ | | Medium | $6324.95 \pm 556.42$ | $6454.43 \pm 664.95$ | $7973.86 \pm 108.86$ | According to our supplementary experiments, we observe that these three choices of penalty functions lead to similar final performance in our offline IRL method. **Our Response to Question 1:** We thank the reviewer for raising this insightful question. As we clarified in the response to Weakness 1, the penalty $U(s,a)$ is a fixed regularization term added to the reward function $r(s,a;\theta)$, and the reward function $r(s,a;\theta)$ is updated at every reward optimization step. Given that the reward function will be updated to align with expert demonstrations, the theoretical analysis does not heavily rely on the (fixed) penalty function. To better understand the effect of the penalty function, we provide a supplementary experiment on three different choices of U. We observe that the final performance of the proposed offline IRL algorithm does not deviate much across the choices of penalty function, since the penalty function represents a constant regularization term while the magnitude of the parameterized reward function continuously updates during the reward learning process. We also note that uncertainty estimation is still an active research problem and the theoretical understanding is limited, especially in the context of neural networks. Hence, we utilize common heuristic designs of the penalty function from the literature on model-based offline RL to develop our algorithm. 
We would like to thank the reviewer for the insightful comments again, and we will consider developing a theoretical understanding of the impact of the penalty function in model-based offline IRL as one of the directions for future work. --- Rebuttal Comment 1.1: Title: Thanks Comment: Many thanks for these insightful answers, which reinforce the score I assigned to this submission. --- Reply to Comment 1.1.1: Title: Thank you for your positive evaluations Comment: We sincerely appreciate you for taking time to review our paper and thank you for recognizing the contributions of this work.
Worst-case Performance of Popular Approximate Nearest Neighbor Search Implementations: Guarantees and Limitations
Accept (poster)
Summary: This paper theoretically analyzes the worst-case performance of DiskANN. It shows that DiskANN (with slow preprocessing) can provably solve the approximate nearest neighbor problem with a constant approximation ratio. It also provides empirical results for other algorithms, such as DiskANN with fast preprocessing, HNSW, and NSG, and shows that there are hard instances that require linear query time. Strengths: 1) The paper theoretically analyzes graph-based nearest neighbor search in quite a general context. While previous works made some strict assumptions about the distribution of elements (uniform), here the analysis is performed in terms of doubling dimension, which is a significant step forward. 2) For one algorithm, DiskANN with slow preprocessing, the constant approximation ratio can be guaranteed. Moreover, it is proven that the theoretically obtained approximation is tight. The proofs are mostly clear and easy to follow. 3) To the best of my knowledge, this is the first paper that shows the effect of neighbor pruning on the performance of graph-based NNS - previous works only considered nearest-neighbor graphs. Weaknesses: Most of my concerns are about the experimental part: - Theoretical analysis only shows the constant approximation factor, while the experiments mainly focus on Recall@5 – note that for Recall@5, there are no theoretical guarantees for DiskANN with slow preprocessing. - It would be informative if all the algorithms were tested on all the “hard” examples and both Recall@5 and approximation ratio were shown. These hard examples may also include the example for DiskANN from Section 3.4. Otherwise, it is hard to conclude that DiskANN is empirically better. - On the figures illustrating the performance, Recall@5 is shown as a function of L. The figures would be more informative if they showed the fraction of considered nodes (distance computations) instead, as is usually done. 
- For a better comparison of the two versions of DiskANN, it would be better to plot Recall@5 as a function of the number of distance computations for both of them. - Figure 6 analyzes the approximation ratio for three algorithms but not for DiskANN with slow preprocessing. In summary, currently, it can be hard to conclude the advantage of DiskANN with slow preprocessing over other algorithms since all the algorithms are evaluated in different setups. Some comments on the presentation: - Description of DiskANN is important for understanding the theoretical part of the paper. Thus, I suggest moving the key steps of DiskANN to the main text — for instance, the definition of RobustPruning. - A short description of NSG and HNSW would also help follow their empirical analysis. - To make a connection with previous works, in Lemma 3.3, it can be useful to comment that for uniformly distributed data, $|U|$ is logarithmic in $n$ and exponential in dimension. - After Theorem 3.4, it would be helpful to write how the claimed statement follows from (1) and (2). - Figure 2: $0.5 \epsilon$ should probably be $0.5 \epsilon / \sqrt{n}$ here. Minor comments: - I suggest defining the aspect ratio in Section 2 (preliminaries). - $q$ denotes the query but also some other points in $X$ (e.g., in Sections 2 and 3). - Proof of Lemma 3.2 is very simple and can be omitted – it can be written that the lemma directly follows from the RobustPruning definition. - In l136, a reference to Lemma 2.1 can be helpful. - In Section 4.1, it is better to move the description of the instance to the beginning. Otherwise, it is hard to follow the beginning of the section and Figure 3. - Parameter R (degree limit) is not discussed in Appendix A (except for the pseudocode). 
Some typos: - l14: “in a some” - l37, l63: footnotes should be typeset after punctuation marks - l70: “they live a 2-dimensional” – missing “in” - l110: “run” -> “runs” - l146: “the algorithms performs” - l153: redundant “for” - l234: “!.” - l406: “of with” Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In l134, should it be $diam/2^i$ instead of $\Delta/2^i$? Since we want the radius to vary from the minimum distance to diameter. 2. l182 – here a constant bound is given for $\epsilon$, but we also need $\epsilon < 1/(\alpha-1)$ for $a$ to be the nearest neighbor. 3. How specific is Theorem 3.6 to $l_1$ distance? Can similar examples be constructed for $l_2$? 4. Can the analysis be extended to beam search instead of greedy search? 5. Is it possible to guarantee finding the exact nearest neighbor under some conditions? I assume this should be possible if the distance between $q$ and its nearest neighbor is sufficiently small. 6. Could you give more details on how a logarithmic number of steps follows from Theorem 3.4? 7. Can the intuition in the last paragraph of Section 4.1 be potentially transformed to a formal result? 8. Is $d$ assumed to be constant in the paper? Or do all the results hold for $d = d(n)$? 9. I think that it is good to mention that the standard neighbor pruning (used, e.g., in HNSW) uses $\alpha = 1$. Thus, a constant approximation cannot be guaranteed. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Limitations are not discussed, the negative societal impact is not relevant for this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer GZP3 W1: Theoretical analysis only shows the constant approximation factor, while the experiments mainly focus on Recall@5 A: We believe that our paper treats both measures (Recall@5 and approximation factor) in a fairly balanced way. In particular, for each of the three main algorithms studied (DiskANN, HNSW, and NSG) we perform experiments for both Recall@5 and the approximation factor. Note that the bulk of sections 4.1, 4.2 and 4.3 is used to describe the hard instances, which are used for both measures. Section 4.3 covers our evaluation of the approximation ratios. W2: It would be informative if all the algorithms were tested on all the “hard” examples and both Recall@5 and approximation ratio were shown. .. Otherwise, it is hard to conclude that DiskANN is empirically better A: This is an interesting idea and we will perform these experiments for the final version of the paper. We note though that our goal was *not* to demonstrate that DiskANN is better (or not) on benchmarks, as such a comparison was already made in [19]. Instead, our goal was to show that it (the slow-preprocessing version) has strong worst-case guarantees. As we mention in the paper, we believe that understanding the worst-case behavior of popular algorithms is very important. For example, such results demonstrate the importance of validating the quality of answers reported by the algorithms when applying them to new datasets. They also shed light on the types of data sets which result in suboptimal performance of the algorithms. W3 & W4: For a better comparison of the two versions of DiskANN, it would be better to plot Recall@5 as a function of the number of distance computations for both of them A: Our goal for fast-preprocessing DiskANN was to demonstrate linear-time behavior of the search algorithm. The queue length L lower bounds the number of vertices scanned, which in turn lower bounds the number of distance computations. 
Thus, the number of distance computations is at least 0.1*n, i.e., 10^4 for n=10^5. In contrast, the slow-preprocessing DiskANN search algorithm reported the true nearest neighbor in all cases in just two steps for 10^5 points, so the number of distance evaluations in this case is at most 2*maxdeg<80. This demonstrates a large gap between the two variants (on our synthetic examples). Unfortunately, performing the slow preprocessing takes a long time for larger data sets, so we won’t be able to perform more in-depth experiments by the rebuttal deadline. We will, however, perform them for the final version of the paper. W5: Figure 6 analyzes the approximation ratio for three algorithms but not for DiskANN with slow preprocessing A: As mentioned on page 7, in the slow preprocessing version of DiskANN, the search algorithm finds the *exact* nearest neighbor in only 2 steps. Thus, the approximation factor is 1. We will include this note in Figure 6 for clarity. Q1: In line 134, use $diam/2^i$ instead of $\Delta/2^i$ A: In the earlier version of the paper, we assumed w.l.o.g. that the distances were scaled so that the diameter was equal to Delta, but we removed this assumption during editing. Without this assumption, the radius of the balls should indeed be diameter/2^i instead of $\Delta/2^i$. Thank you for bringing this to our attention. Q2: line 182, we also need $\epsilon<1/(\alpha-1)$ for $a$ to be the nearest neighbor A: Yes, in our proof of theorem 3.6, we consider the case when $\epsilon$ approaches 0, so this constraint is satisfied. Q3: How specific is theorem 3.6 to l1 distance? Can similar examples be constructed for l2. A: This is a great question! We began with the l2 norm but found that this straightforward structure doesn’t hold in l2. While it’s possible that challenging instances could exist in the l2 normed space, they might involve greater complexity. To keep things simple, we have chosen to present it using the l1 normed space. 
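For reference, the graph search with queue length L discussed above (and in the questions that follow) can be sketched generically. This is a simplified illustration of DiskANN/HNSW-style search, not the authors' implementation; `graph` and `dist` are hypothetical inputs:

```python
def greedy_search(graph, dist, query, start, L=4):
    """Generic graph-based nearest-neighbor search with queue length L.

    graph: dict node -> list of neighbor nodes; dist(a, b): a metric.
    L = 1 gives pure greedy descent; larger L is beam search.
    Returns the best node found and the number of nodes scanned.
    """
    candidates = {start: dist(query, start)}  # node -> distance to query
    visited = set()
    while True:
        unvisited = [(d, n) for n, d in candidates.items() if n not in visited]
        if not unvisited:
            break
        _, node = min(unvisited)          # expand the closest unvisited candidate
        visited.add(node)
        for nb in graph[node]:
            candidates.setdefault(nb, dist(query, nb))
        # truncate the candidate queue to the L closest nodes seen so far
        candidates = dict(sorted(candidates.items(), key=lambda kv: kv[1])[:L])
    best = min(candidates, key=candidates.get)
    return best, len(visited)
```

The `len(visited)` count is exactly the "number of vertices scanned" that lower bounds the distance computations in the discussion above.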
Q4: Can the analysis be extended to beam search instead of greedy search? A: Our analysis shows that even if the queue length L is equal to 1 (i.e., the search uses a pure greedy algorithm), the algorithm quickly converges to an approximate nearest neighbor. If L>1 (i.e., when the algorithm performs beam search), the algorithm's performance is no worse than for L=1. Q5: Is it possible to guarantee finding the exact nearest neighbor under some conditions? A: Yes. For example, the Cover Tree algorithm [5] identifies the exact nearest neighbor in O(log n) time, under a so-called “bounded growth assumption”. Unfortunately, in general it is not possible to achieve this assuming a bounded doubling dimension, which we consider in this paper. Q6: Could you give more details on how a logarithmic number of steps follows from Theorem 3.4? A: The fundamental idea here is that if a point p is not directly connected to q’s nearest neighbor a, then as per RobustPruning outlined in Algorithm 1 on page 12, point p should establish an alpha-shortcut toward a, so at each step the distance to the nearest neighbor a is shortened by a factor of alpha. Q7: Can the intuition in the last paragraph of Section 4.1 be potentially transformed to a formal result? A: It might be possible to convert this intuition into a formal argument. This would, however, require addressing some subtle probabilistic dependence issues, due to the fact that DiskANN performs multiple passes over the input. Since our goal was to demonstrate a linear running time behavior for specific *implementations* of the studied algorithms, we decided not to pursue this direction in this paper. Q8: Is d assumed to be constant in the paper? Or do all the results hold for d=d(n)? A: All the results hold for d=d(n); d can be an arbitrary parameter and does not have to be constant. Q9: I think that it is good to mention that the standard neighbor pruning (used, e.g., in HNSW) uses $\alpha=1$. Thus, a constant approximation cannot be guaranteed. 
A: Thank you for this observation, we'll add it to the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses and clarifications! I enjoyed reading the paper and believe that its strengths outweigh its weaknesses. Theoretical analysis in quite a general setup is a strong point. Experiments were a bit confusing for me and the response clarified most of the concerns. If the paper is accepted, I highly recommend that the authors extend the theoretical part in the main text to make it clear and self-contained. In contrast, the experimental part can be reduced (or partially moved to the supplementary). In particular, the theoretical part of the paper focuses on the approximation ratio, thus the experiments with Recall@5 can be moved to the supplementary material.
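The alpha-shortcut pruning rule discussed in Q6 and Q9 of the rebuttal above can also be sketched generically; setting alpha = 1 recovers the standard neighbor pruning the reviewer mentions for HNSW. This is a simplified, hypothetical version, not the paper's exact RobustPruning routine:

```python
def alpha_prune(point, candidates, dist, alpha=1.2, R=32):
    """Alpha-based neighbor pruning (RobustPrune-style, simplified sketch).

    A candidate c is pruned when some already-kept neighbor p provides an
    alpha-shortcut toward it: alpha * dist(p, c) <= dist(point, c).
    At most R neighbors are kept (the degree limit).
    """
    kept = []
    for c in sorted(candidates, key=lambda c: dist(point, c)):
        if len(kept) >= R:
            break
        # keep c only if no kept neighbor already shortcuts to it
        if all(alpha * dist(p, c) > dist(point, c) for p in kept):
            kept.append(c)
    return kept
```

With alpha > 1 each greedy step provably shortens the distance to the target by a factor of alpha via some kept edge, which is the mechanism behind the logarithmic step count discussed in Q6; with alpha = 1 this geometric contraction, and hence the constant-approximation guarantee, is lost.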
Summary: This paper studies a specific class of graph-based similarity search algorithms, and establishes approximation upper bounds, and runtime lower bounds for this algorithm. In the process of establishing lower bounds, the paper also presents various point configurations that would be hard for these graph-based search algorithms, and then evaluates multiple graph-based search algorithms on these hard instances, highlighting that these examples are hard for most search algorithms, even though these graph-based search algorithms are known to perform really well in practice. Strengths: **Important class of algorithm analysed.** Graph-based approximate nearest-neighbour search algorithms can be extremely efficient in practice, and the goal of this paper to analyse one such algorithm, DiskANN, is well motivated. This paper establishes upper bounds on the approximation in the nearest-neighbour search result at any given point of the greedy search algorithm. Then, the paper goes on to present explicit examples with interesting geometrical configurations, embedded into a 2 dimensional plane, which are challenging instances of nearest-neighbour search for DiskANN. These examples make explicit one scenario where the graph built on the data forces the greedy search to visit almost all nodes. **Wide coverage of graph-based search algorithms.** Both in the empirical evaluation, and the development of "hard instances", this paper covers multiple graph-based search algorithms such as DiskANN, HNSW, NSG, and SPTAG-KDT. Each of these examples exploit the specific graph construction scheme for these methods. The empirical evaluation of these multiple algorithm highlights how these hard instances are hard (to different extents) for all such graph-based algorithms. Weaknesses: **Interpreting the worst-case upper-bounds.** One weakness of this paper is that I am unable to get an intuition of what the bound in Theorem 3.4 is telling us. 
One interpretation is that, as $i \to \infty$ (in the asymptotic range), we get a solution that is $\left(\frac{\alpha+1}{\alpha-1}\right)$-approximate (as stated in lines 170-172). But the runtime for $i$ iterations is $O(i (4\alpha)^d \log \Delta)$ (as per Lemma 3.3). But in the nearest-neighbor search, the runtime for the asymptotic $i \to \infty$ scenario is bounded by $O(n)$. So it seems that we are saying that if we wish to achieve constant approximation with DiskANN, we have to do $O(n)$ work, which seems a somewhat vacuous result -- if we are ready to do $O(n)$ work, usually we are able to get the exact solution (not even an approximation). And Theorem 3.5 seems to be saying the same -- we always need to do $O(n)$ work. Does that mean that we are unable to get anything better than $O(n)$ guarantees for DiskANN (even with the slow preprocessing)? **Interpreting the motivation behind the hard synthetic examples.** The authors do a great job at creating examples such as the ones in Figures 1 & 2, where the slow preprocessing and the fast preprocessing versions of DiskANN respectively are unable to find the constant-approximation nearest-neighbor in time less than $O(n)$. Figure 4 is another great synthetic example that is hard for NSG and HNSW. However, it is not directly clear to me why these examples are of interest, or why they are something we should evaluate graph-based algorithms on. These examples are very structured, with very specific, careful placements of the query $q$ and its nearest-neighbor $a$. First, it is not clear whether these examples are unique and canonical in some sense, or whether there are other (similarly constructed) point configurations that would similarly be hard instances for the graph-based algorithms. If they are not unique, why should these examples be studied and not others? Or if we are able to do well on these examples, can we say anything about other "hard" or "easy" instances?
Secondly, even if these examples are canonical in some way, it is not clear if these resemble real datasets in any form or if these examples are somehow practically viable. Without such motivation and justification, it is not clear why we should care about these examples. Minor: - Without the algorithms being analyzed in the main paper, the analysis is hard to follow. In my opinion, it would be useful to have the algorithms being analysed in the main paper. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - Aspect ratios are somewhat misleading because they sometimes provide counter-intuitive results. For example, searching for near-duplicates (that is, the distance to the nearest-neighbor is close to zero) is arguably the easiest scenario for most nearest-neighbor search methods -- for branch & bound algorithms, the greedy branching usually finds the near-duplicate very quickly, and obtains the tightest possible bound and is able to prune most aggressively. For hashing-based methods, near-duplicates almost always collide, hence the search will always find the nearest-neighbor even with a very small number of hash tables. For graph-based algorithms, as long as the graph is well-connected, the greedy search on the graph will get to the near-duplicate node quite quickly and prune everything else after that in the queue. However, bounds based on aspect ratios appear to tell a different story -- near-duplicates cause the aspect ratio to be arbitrarily high, leading to larger upper-bounds on the runtime complexity or the approximation ratio, making it appear that the search problem is harder with near-duplicates. This appears counter-intuitive. How do the results in this paper handle this issue? - The aspect ratio $\Delta$ used in Lemma 3.3 is based on only the set of points $X$, while the aspect ratio $\Delta$ used in Theorem 3.4 is based on the set of points and the query $q$.
So in that case, is it fair to assume that the aspect ratio in Theorem 3.4 **can be** significantly greater than the aspect ratio in Lemma 3.3, especially if (say) $d_{min} \approx 0$? - In Lemma 3.3, why is it that the index $i$ of the Rings is restricted to $i \in [\log_2 \Delta]$? - In Theorem 3.4, what is the range of interest for $i$? As mentioned in lines 170-171, we can "asymptotically get to an $\left(\frac{\alpha+1}{\alpha-1}\right)$-approximate nearest-neighbour". However, it is easy to see that, for different values of $i$, different terms will be the dominating term in $\left( \frac{\Delta}{\alpha^i} + \frac{\alpha+1}{\alpha-1} \right)$, and it would be good to understand which values of $i$ we are interested in, and what the guarantees are for those values. - Can you please provide a motivation for Theorem 3.5? It is quite well known that $O(\log n)$ bounds are hard to get, and usually they come with exponential dependence on dimensionality (in the best case, exponential in the intrinsic dimensionality). Even the celebrated LSH guarantees sublinear bounds in $n$ with polynomial dependence on the dimensionality. So it is generally hard to expect that we can just replace the aspect ratio with the number of points. - For the experiment in Section 4.4, Table 1, are the results averaged over different "hard instances"? As in, are multiple problem configurations selected for each of Figures 1, 2, 4 and 7, with the average recall@5 then reported across all such problem instances? This point is not clear even in the discussion in Appendix C.2. Furthermore, it would have been good to understand the performances grouped by the hard instances (at least in the appendix).
For example, with algorithms such as NSG, DPG, KGraph, where the average is significantly above 0, it would be good to see whether they performed at this mediocre level for all hard instances, or if they performed really well on some hard instances (for example, ones from Figure 2) while struggling on others (for example, Figure 7). This information seems important. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: I did not find any explicit discussion of limitations by the authors. However, I do not anticipate any potential negative societal impact of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
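The reviewer's question about restricting the ring index to $i \in [\log_2 \Delta]$ in Lemma 3.3 can be sanity-checked numerically. The sketch below uses an illustrative 1-D point set (not the paper's construction): with aspect ratio $\Delta = d_{max}/d_{min}$, every pairwise distance $d$ falls in some ring $i$ with $d \in (d_{max}/2^{i+1},\, d_{max}/2^i]$, and $i$ never exceeds $\lceil \log_2 \Delta \rceil$, which is one way to see why the index range $[\log_2 \Delta]$ suffices:

```python
import math

# Illustrative sketch (hypothetical 1-D points, not the paper's exact setup):
# with aspect ratio Delta = d_max / d_min, each pairwise distance d lies in
# ring i where d is in (d_max / 2^(i+1), d_max / 2^i], and i <= ceil(log2 Delta).
pts = [0.0, 1.0, 3.0, 10.0, 64.0]
dists = [abs(a - b) for a in pts for b in pts if a != b]
d_min, d_max = min(dists), max(dists)
Delta = d_max / d_min

for d in dists:
    i = math.floor(math.log2(d_max / d))  # ring index containing distance d
    # d lies in (d_max / 2^(i+1), d_max / 2^i] and the index stays in range
    assert d_max / 2 ** (i + 1) < d <= d_max / 2 ** i
    assert 0 <= i <= math.ceil(math.log2(Delta))
```

Distances below $d_{min}$ cannot occur, so no ring beyond index $\lceil \log_2 \Delta \rceil$ is ever needed.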
Rebuttal 1: Rebuttal: Reviewer EYFR W1: [interpreting the worst-case upper-bounds] Interpreting the worst-case upper-bounds. One weakness of this paper is that I am unable to get an intuition of what the bound in Theorem 3.4 is telling us. One interpretation is that, as $i\to\infty$ (in the asymptotic range), we get a solution that is $\frac{\alpha+1}{\alpha-1}$-approximate (as stated in line 170-172). …. A: We referred to the asymptotic behavior only for simplicity. It can be seen that already for $i=\log_{\alpha} \frac{\Delta}{\epsilon}$ the additive term becomes $\epsilon$, for any $\epsilon>0$. Thus, the convergence to the $\frac{\alpha+1}{\alpha-1}$-approximation factor is very fast. W2: [interpreting the motivation behind the hard synthetic examples] Interpreting the motivation behind the hard synthetic examples. It is not directly clear to me why these examples are of interest, or something we should evaluate graph-based algorithms on. A: As we mention in the paper, we believe that our results provide important information about the behavior of these algorithms. For example, they demonstrate the importance of validating the quality of answers reported by the algorithms when applying them to new datasets. They also shed light on the types of data sets which result in suboptimal performance of the algorithms. In general, understanding the worst-case performance of approximate nearest neighbor methods is a popular subject of study. See e.g., the references in our submission, and a recent paper: Elkin, Yury, and Vitaliy Kurlin. "A new near-linear time algorithm for k-nearest neighbor search using a compressed cover tree." International Conference on Machine Learning. PMLR, 2023. Q1: Aspect ratios are somewhat misleading because they sometimes provide counter-intuitive results. A: We agree. However, as we show in the paper, the logarithmic dependence on $\Delta$ in the running time bound for DiskANN is necessary (Theorem 3.5). 
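The fast convergence claimed in the answer to W1 can be checked numerically. The sketch below (illustrative values for $\Delta$, $\alpha$, $\epsilon$) evaluates the bound from Theorem 3.4 in the form $\frac{\Delta}{\alpha^i} + \frac{\alpha+1}{\alpha-1}$ and confirms that at $i = \lceil \log_\alpha \frac{\Delta}{\epsilon} \rceil$ the additive term is already at most $\epsilon$:

```python
import math

# Sketch of the convergence claim: at i = log_alpha(Delta / eps) the additive
# term Delta / alpha^i drops to eps, so the bound is within eps of the
# asymptotic (alpha + 1) / (alpha - 1) factor. Values are illustrative.
def approx_bound(Delta, alpha, i):
    return Delta / alpha ** i + (alpha + 1) / (alpha - 1)

Delta, alpha, eps = 1e6, 2.0, 0.01
i_star = math.ceil(math.log(Delta / eps, alpha))  # only ~27 iterations here
assert approx_bound(Delta, alpha, i_star) <= (alpha + 1) / (alpha - 1) + eps
```

The number of iterations needed grows only logarithmically in $\Delta/\epsilon$, which is what makes the convergence "very fast" in the rebuttal's sense.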
Q2: The aspect ratio $\Delta$ used in Lemma 3.3 is based on only the set of points X, while the aspect ratio $\Delta$ used in Theorem 3.4 is based on the set of points and the query q. So in that case, is it fair to assume that the aspect ratio in Theorem 3.4 can be significantly greater than the aspect ratio in Lemma 3.3? A: Indeed, the main result (as stated in the introduction) uses $\Delta$ defined by X and q, while the result stated in Lemma 3.3 uses $\Delta$ defined by X, which could be smaller. Apologies for the confusion, we will use two different symbols to denote these two quantities. Q3: In Lemma 3.3, why is it that the index $i$ of the Rings is restricted to $i\in [\log_2 \Delta]$? A: In the earlier version of the paper, we assumed w.l.o.g. that the distances were scaled so that the diameter was equal to $\Delta$, but we removed this assumption during editing. Without this assumption, the radius of the balls should be diameter/2^i instead of $\Delta/2^i$. Thank you for bringing this to our attention. Q4: In Theorem 3.4, what is the range of interest for $i$? A: See the answer to W1 above. Q5: Can you please provide a motivation for Theorem 3.5? A: The theorem shows that the logarithmic dependence on $\Delta$ cannot be avoided. In contrast, some algorithms achieve O(log n) query time, though under stronger assumptions about the input (bounded growth assumption). See e.g., [5] and the Elkin-Kurlin paper listed earlier. Q6: For the experiment in Section 4.4, Table 1, are the results averaged over different "hard instances"? A: For Table 1 in Section 4.4, the results for each algorithm are obtained by running it on its specific instance and averaging over all values of n with 5 or 10 repetitions. Every algorithm is executed solely on one specific hard instance (Figure 2, 4, or 7). Please refer to the figure captions to see which instance each algorithm is run on.
Our objective is to analyze the worst-case performance of each algorithm, leading us to construct distinct challenging instances tailored to each algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I will increase my score. My only follow-up is that W2 was not asking for the motivation for analyzing the worst-case (that is always important, as the authors note). My point about W2 is that the worst-case examples are very structured, and I was asking whether such structured worst-case scenarios can be practically motivated. --- Reply to Comment 1.1.1: Title: Thank you for the response and increasing the score! Comment: Regarding the issue of "whether such structured worst case scenarios can be practically motivated": since graph-based nearest neighbor search algorithms generally perform well in practical scenarios, we believe that real-world situations may not be as unfavorable as the worst-case scenario in our paper indicates. However, our construction is motivated by a simple idea, which is to trap greedy search in a local minimum. Therefore, similar scenarios should exist in practice, but the penalty might not be as severe. This variability in the penalty could contribute to the variance in running times when graph-based nearest neighbor search algorithms are used to answer different queries.
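The "trap greedy search in a local minimum" idea from the reply above can be illustrated with a toy example. The sketch below (hypothetical 1-D points and edges, not DiskANN's actual graph) shows greedy graph search stopping at a local minimum `p2` because the true nearest neighbour `a` is only reachable through a vertex that is farther from the query:

```python
# Toy illustration of trapping greedy search in a local minimum.
# The true nearest neighbour "a" is only reachable via "p3", which is
# farther from the query than "p2", so greedy search stops at "p2".
points = {"p0": 0.0, "p1": 4.0, "p2": 6.0, "p3": 14.0, "a": 10.5}
graph = {"p0": ["p1"], "p1": ["p0", "p2"], "p2": ["p1", "p3"],
         "p3": ["p2", "a"], "a": ["p3"]}
q = 9.0  # query location

def greedy_search(start):
    cur = start
    while True:
        # Move to the neighbour closest to q; stop when no neighbour improves.
        best = min(graph[cur] + [cur], key=lambda v: abs(points[v] - q))
        if best == cur:
            return cur
        cur = best

assert greedy_search("p0") == "p2"                   # stuck in a local minimum
assert abs(points["a"] - q) < abs(points["p2"] - q)  # yet the true NN is "a"
```

In the paper's hard instances the same effect is engineered geometrically, so that escaping the local minimum forces the search to visit almost all nodes.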
Summary: Nearest neighbour (NN) queries select for a given data point the closest point in a set and can be answered exactly in linear time by scanning the whole set. c-approximate Nearest Neighbour (ANN) queries ask for an arbitrary data point that has at most a c times larger distance than the NN. A popular approach in the literature is DiskANN, which builds as an index structure a proximity graph that links close-by points together. In this manuscript, two variants of DiskANN are analysed. Strengths: S1. The work seems to fill a gap left by prior works: works proposing the studied methods seem to lack a more extensive theoretical analysis as featured in this work. S2. Constructed problem instances used in the experiments are provided in the supplementary material (along with links to the code of the studied ANN approaches), and details of how the various methods have been implemented are extensively documented there. S3. Combination of empirical and theoretical results to further the understanding of the worst-case / average-case performance of ANN methods. Weaknesses: W1. Some parts of the paper are a bit confusing. It is not clear which point is meant to be the (exact) nearest neighbour (NN) in Figure 1. If "a" is the NN, then it would seem that "p0" would be an approximate nearest neighbour given the values. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Q1. What is meant to be the (exact) nearest neighbour (NN) of "q" in Figure 1? If the NN of "q" is meant to be "a", why would "p0" not be a c-approximate nearest neighbour with c = (alpha+1)/(alpha-1)? Q2. If a point that coincides with the query point Q is in the data set, do all c-approximate nearest neighbours (for c > 1) then coincide with the (exact) nearest neighbours? What are the implications of this edge case on the claims in the paper?
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Relevant limitations seem to be discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Reviewer ZVW4 W1: It is not clear which point is meant to be the (exact) nearest neighbor (NN) in Figure 1. If "a" is the NN, then it would seem that "p0" would be an approximate nearest neighbor given the values. A: In our Figure 1, q represents the query point and a represents the true nearest neighbor. In the proof of Theorem 3.6, when $\epsilon$ approaches 0, the approximation ratio of p0 is no better than $\frac{\alpha+1}{\alpha-1}$. We will modify Theorem 3.6's statement to "before getting to a vertex with an approximation ratio *smaller* than $\frac{\alpha+1}{\alpha-1}$". Thank you for your observation! Q2: If a point that coincides with the query point Q is in the data set, do all c-approximate nearest neighbors (for c > 1) then coincide with the (exact) nearest neighbors? What are the implications of this edge case on the claims in the paper? A: Yes. In this case the distance to the nearest neighbor is zero, so an approximate nearest neighbor algorithm must report the exact nearest neighbor. We believe this fact does not affect the claims in the paper, but if there are any specific concerns, please do let us know. --- Rebuttal Comment 1.1: Comment: Thank you, that clarifies Q2 and I think that answers Q1 as well, but I still have to revisit the paper and double-check.
null
null
Rebuttal 1: Rebuttal: We thank all reviewers for their useful comments and feedback. We will fix the typos and presentation issues in the final version of the paper. In what follows we address the issues identified by the reviewers as weaknesses and/or listed as questions.
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
UniT: A Unified Look at Certified Robust Training against Text Adversarial Perturbation
Accept (poster)
Summary: The paper presents UniT, a unified certified robust training against text adversarial perturbation. The paper identifies two frameworks of robust training: type (I) does random data augmentation over the text inputs, and type (II) adds Gaussian noise to the latent feature space. The paper unifies the two types of training by doing data augmentation on the inputs and adding Gaussian noise to the latent feature space. The paper also introduces a novel loss function, which modularizes the feature extraction regularization and robust regularization as loss terms to fuse these two types of training besides the original cross-entropy loss. The paper evaluates UniT and demonstrates that it consistently outperforms SAFER and CISS. The paper also does ablation studies of different loss function designs and different hyper-parameter settings (in the appendix). Strengths: 1. The modular loss function design is interesting and novel. The ablation studies of the loss function are very strong. 2. The paper outperforms existing certified approaches SAFER and CISS. Weaknesses: 1. The paper only handles the perturbation space of synonym substitutions but not other text adversarial perturbations such as insertion, deletion, and their combinations. This weakness makes the title "text adversarial perturbation" overclaimed. 2. The comparison between UniT and CISS needs to be presented more clearly. See questions 2-4. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Comment: 1. It would be better to also present natural Acc in Tables 7 and 8, and in all the hyper-parameter tables. 2. The motivation part in lines 202-208 is not well-justified. I think the IBP is necessary if one wants to get a certification. The reason why UniT can get rid of IBP but still gets a certification is that UniT uses BERT embeddings and adds noise before feature extraction.
Then IBP in CISS only needs to analyze the embedding part, which is easy to analyze and, in fact, does not need IBP at all. 3. In lines 283-285, the paper states "which will not regularize g if z is well learned". However, it is not the case; the only thing the -max(., 0) does is to not penalize the wrongly classified cases. In other words, "which will not regularize g if z is NOT well learned". Questions: 1. Is there any intuition why the paper chooses the l2 norm over cosine similarity? 2. The original $l_{\hat{R}}$ loss used in type II training also tries to minimize the distance $\| z-z'\|$ in the latent space of IBP, no? 3. What's the performance of CISS using their original loss term in Table 3? Table 3 only shows the results of CE and DR loss. 4. Why can CISS not do IBP on BERT? UniT and CISS both add Gaussian noise before feature extraction. Then CISS only needs to analyze the BERT embedding part, which is easy because of the independence of the embedding of each word, as mentioned in line 231. I agree that IBP might be looser: consider x=[0,0], x_1=[0,1], x_2=[1,0]; then $\hat{R} = 1$ but the over-approximated $\hat{R}$ via IBP should be $\sqrt{1+1}=\sqrt{2}$. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: The authors do not address the limitation. One limitation of the paper is that the paper only handles word substitutions but not general perturbation spaces such as insertion, deletion, and their combinations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
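The reviewer's looseness example for IBP can be checked numerically. The sketch below (a generic interval-hull computation, not CISS's actual IBP implementation) confirms that for x = [0,0] perturbed to [0,1] or [1,0], the exact worst-case l2 radius is 1, while the coordinate-wise box hull yields the looser √2:

```python
import math

# Numeric check of the reviewer's example: the exact worst-case l2 radius
# over the perturbation set is 1, but an interval (IBP-style) hull that
# tracks each coordinate independently certifies the box corner [1, 1],
# giving sqrt(2). Sketch only, not CISS's actual IBP.
x = [0.0, 0.0]
perturbed = [[0.0, 1.0], [1.0, 0.0]]

R_exact = max(math.dist(x, p) for p in perturbed)

# Interval hull: per-coordinate min/max over the set, then the farthest
# corner of the resulting box from x.
lo = [min(p[i] for p in perturbed + [x]) for i in range(2)]
hi = [max(p[i] for p in perturbed + [x]) for i in range(2)]
corner = [h if abs(h - c) >= abs(l - c) else l for l, h, c in zip(lo, hi, x)]
R_ibp = math.dist(x, corner)

assert R_exact == 1.0
assert math.isclose(R_ibp, math.sqrt(2))
```

The gap arises because the interval hull ignores that the two perturbed coordinates never deviate simultaneously, which is exactly the per-word independence the review refers to.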
Rebuttal 1: Rebuttal: We would like to sincerely express our gratitude to the reviewer for the positive and insightful feedback on our paper. We are deeply appreciative of the acknowledgment of our paper on the novelty of the loss design and the practical effectiveness of the proposed UniT framework. We take every comment and question seriously and hope that our response can clarify the concerns. Meanwhile, we will be more than happy to address any additional concerns. **Q1: Is there any intuition why the paper chooses the l2 norm over cosine similarity?** **A**: UniT chooses the l2 norm over cosine similarity in Eq. (4) because the l2 norm can limit the influence of the perturbation on downgrading the prediction margin (defined in Sec. 2.2) and is good for certification. For ease of illustration, we confine our discussion to a binary classification case, where the (unbiased) weights of the last layer for classifying the original input vector $z$ into class $a$ and class $b$ are $w_a$ and $w_b$, respectively. Suppose the ground truth is class $a$. Thus, the prediction margin will be $M(z)=w_a^T z-w_b^T z=(w_a-w_b)^T z$. Similarly, the prediction margin given the perturbed representation $z^\prime$ will be $M(z^\prime)=(w_a-w_b)^T z^\prime$. Thus, the influence on the prediction margin after the attack will be $M(z^\prime)-M(z)= (w_a-w_b)^T (z^\prime-z)$. Since we want to reduce this influence for better certification results, we use the l2 norm of $z^\prime-z$ as an optimization target to limit the change $M(z^\prime)-M(z)$. We will include this discussion in the final version. **Q2: Does the original $l_{\hat{R}}$ loss used in Type II training also try to minimize the distance $||z−z^\prime||$ in the latent space of IBP?** **A**: No, it doesn't. $z$ and $z^\prime$ are the outputs of the penultimate layer (the last but one layer), while $l_{\hat{R}}$ in the Type II training is calculated from the mid-level layer.
To be specific, $l_{\hat{R}}$ uses the feature output from the feature encoder that precedes the base model, and the base model is the module that outputs $z$ and $z^\prime$. Therefore, the original $l_{\hat{R}}$ cannot minimize $||z−z^\prime||$. **Q3: What's the performance of CISS using their original loss term in Table 3?** **A**: To clarify, the performance of CISS using their original loss terms is the first row in Table 3. We use CE to denote the loss of CISS because the main difference between our loss and the loss of CISS lies in the CE term, and both losses include $l_{\hat{R}}$. We have discussed this difference in L343-L345 and illustrated our loss terms for Type II in L294-L297. **C2 and Q4: On the necessity of IBP in CISS.** **A**: We believe C2 and Q4 are related and would like to answer them together. Recall that in the context of certified robustness there are two important things for obtaining high certified robust accuracy: training and certification. Thus, for this research task, the design of the training mechanism is related to, or depends on, the design of the certification condition. CISS designs a certification condition in the mid-level latent space, so it has to introduce IBP for the purpose of certification. Since the parameters in IBP have to be learned, IBP needs to be trained together with the base model to make the extracted features satisfy the certification condition (shown in L148). Thus, IBP cannot be removed from CISS unless CISS has a certification condition not related to the latent space. In fact, one of our main contributions in this paper is to propose a stronger certification condition by taking into consideration the independence of words in the embedding space, and this certification condition helps us design our Type II training without IBP. **C1: It would be better to also present natural Acc in Tables 7 and 8, and in all the hyper-parameter tables.** **A**: We thank the reviewer for the suggestion.
We also have natural accuracy in the record and will present it in the final version. For example, Table 7 with natural accuracy added will be as follows. The natural accuracy is calculated on all test samples.

Table R1: Ablation study results on loss design in IMDB.

CE | ALP | $l_{fr}$ | $l_{cr}$ | Natural Acc (%) | CRA (%)
---|---|---|---|---|---
$\checkmark$ | | | | 86.03 | 85.36
$\checkmark$ | $\checkmark$ | | | 86.36 | 86.64
$\checkmark$ | | $\checkmark$ | | 87.62 | 88.08
$\checkmark$ | | | $\checkmark$ (Input: $z+\epsilon$) | 87.42 | 85.92
$\checkmark$ | | $\checkmark$ | $\checkmark$ (Input: $z$) | 87.45 | 87.68
$\checkmark$ | | $\checkmark$ | $\checkmark$ (Input: $z^\prime$) | 85.71 | 85.76
$\checkmark$ | | $\checkmark$ | $\checkmark$ (Input: $z+\epsilon$) | 87.87 | 89.04

**C3: A typo in L285.** **A**: Thank you for your careful reading of our paper. You have understood our loss design correctly, and it should be "which will not regularize g if z is NOT well learned". We will correct this typo in the final version. **C4: The paper only handles the perturbation space of synonym substitutions but not other text adversarial perturbations such as insertion, deletion, and their combinations.** **A**: Throughout our literature review on text adversarial perturbations, synonym substitution has been considered the standard way to construct text adversarial perturbations in adversarial attack and defense. This is because insertion and deletion have been regarded as two special forms of word substitution: insertion can be considered as replacing an '[EOS]' token with a new word and putting the replaced word in the inserted position, and deletion can be considered as replacing a word with the '[UNK]' token. Thus, previous literature holds that the ability to defend against synonym substitution generally reflects the ability to defend against the other two forms of adversarial perturbations.
Thus, we follow the existing setup to conduct adversarial perturbations in the context of synonym substitutions. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thanks for the efforts and detailed response. The response answers all my questions. Currently, I think that this paper should be accepted. I will read other reviews and corresponding response in the next few days. --- Reply to Comment 1.1.1: Comment: Dear Reviewer yTLz, Thank you for the encouraging words! We deeply appreciate your acknowledgment of the high quality of our paper! Sincerely, Authors
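The prediction-margin argument from the answer to Q1 above can be verified numerically. The sketch below uses arbitrary illustrative values for $w_a$, $w_b$, $z$, and $z^\prime$ (they are not taken from the paper) to check the identity $M(z^\prime)-M(z)=(w_a-w_b)^T(z^\prime-z)$ and the Cauchy-Schwarz bound that motivates penalizing $\|z^\prime-z\|_2$:

```python
import numpy as np

# Numeric sketch of the prediction-margin argument; all values illustrative.
rng = np.random.default_rng(0)
w_a, w_b = rng.normal(size=8), rng.normal(size=8)  # last-layer class weights
z = rng.normal(size=8)                             # clean representation
z_prime = z + 0.1 * rng.normal(size=8)             # perturbed representation

def margin(v):
    # Prediction margin M(v) = w_a^T v - w_b^T v for ground-truth class a
    return w_a @ v - w_b @ v

# The change in margin equals (w_a - w_b)^T (z' - z) ...
assert np.isclose(margin(z_prime) - margin(z), (w_a - w_b) @ (z_prime - z))

# ... and by Cauchy-Schwarz it is bounded by ||w_a - w_b|| * ||z' - z||,
# which is why penalizing the l2 norm of z' - z limits the margin change.
assert abs(margin(z_prime) - margin(z)) <= \
    np.linalg.norm(w_a - w_b) * np.linalg.norm(z_prime - z) + 1e-9
```

Cosine similarity, by contrast, is scale-invariant and so does not bound $\|z^\prime-z\|_2$, which is the quantity that controls the margin change here.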
Summary: This paper proposes a unified framework called UniT for certified robust training against text adversarial perturbation. UniT can train and certify in both discrete word space and continuous latent space by working in the word embedding space, without extra modules or loose bounds. It also introduces a decoupled regularization loss called DR loss to improve the base model robustness by regularizing the feature extraction and classifier modules separately. The paper shows that UniT outperforms existing state-of-the-art methods in both scenarios, and DR loss enhances the certified robust accuracy by providing modular regularization. Strengths: The paper clearly identifies the limitations of existing approaches and highlights the advantages of combining them with a novel DR loss into a unified framework. The authors have made an effort to address these limitations and have proposed a novel solution that capitalizes on the strengths of both discrete word space and continuous latent space approaches. Its evaluation demonstrates the effectiveness of their proposed approach. Weaknesses: 1. The selection of synonyms (Type I) is indeed an important aspect of the paper, and it would be helpful if the authors could provide more clarity on how synonyms are chosen based on embeddings. I'm not sure about the probability of changing labels due to synonym substitution for the task of sentiment analysis. Could the authors provide some concrete examples? Additionally, a discussion on how the base model is obtained (whether it's fine-tuned BERT or not) and any improvements in generalization performance could shed more light on the robustness of the proposed approach. 2. The paper's focus on BERT and sentiment classification may indeed limit its applicability to other tasks or models, especially in the era of large-scale pre-trained models with improved generalization capabilities. 
The authors could address this concern by discussing the potential of their framework to be extended to other tasks and models, and whether the problem they are investigating remains relevant in the current research context. 3. The paper mainly discusses synonym substitution and noise addition, but there are now more advanced perturbation methods based on large language models (LLMs) that could potentially generate more realistic adversarial examples (e.g., through rephrasing or prompting). A comparison or discussion of these alternative methods and their implications for the proposed approach could provide a more comprehensive understanding of this paper's contribution. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The introduction of DR loss indeed results in a more unified model, but it also brings additional hyperparameters. The paper could provide guidance on how to efficiently and accurately select the best hyperparameters for practical applications. This would make the proposed approach more accessible and easier to implement for researchers and practitioners alike. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See the weakness section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We genuinely thank the reviewer for the positive and constructive feedback on our paper. We thank the reviewer for recognizing the novelty and effectiveness of our solution. Each comment and question is of great importance to us, and we hope our response can address the raised concerns. Moreover, we are eager to answer any additional questions. **Q1: On the guidance for selecting the best hyperparameters.** **A**: This is a good question, and in fact, we have provided guidance on selecting hyperparameters in Section A.6 of the submitted Appendix. Due to the word limit, we kindly ask the reviewer to view our experimental results on the influence of hyperparameters in the Appendix. From those results, we can find that the tuning of hyperparameters is not tricky for the DR loss due to their clear interpretation, and these settings can be used widely across different datasets. To be specific, $\mu$ is kept as small as 0.1 to keep the Gaussian noise relatively small. $\alpha$ is set to 0.7 to encourage the margin to increase while penalizing the $l_2$ norm of the text representation. $\epsilon=0.6$ allows some relaxation. And we simply give the CE loss and the MR term equal weights by setting $\beta=1$. **C1: On the selection of synonyms, their influence on prediction labels, and how the base model is obtained.** **A**: We would like to provide those implementation details as follows. With respect to synonym selection, in the Type I scenario we follow the baseline SAFER and use the same set of synonyms, where the selection of synonyms is based on the similarity between two words in a pretrained embedding space. To illustrate, using the embedding space of GloVe embeddings (Pennington et al., 2014), for a specific word, any other word that has a cosine similarity greater than 0.8 is considered a synonym of that word. As for how synonym substitution can change the label, we would like to show one example from the Yelp dataset. Given a text sample “Great buffet.
Lots of selections. The prime rib was delicious. It was worth the 30 dollars” that is correctly predicted as a sample with positive sentiment by a text classifier, the prediction can be changed to negative sentiment when only the word “delicious” is replaced by “loverly”. That is, “Great buffet. Lots of selections. The prime rib was loverly. It was worth the 30 dollars” will be classified incorrectly by the text classifier when only one word is changed. We also include a long text sample in Table 9 in the Appendix to illustrate how synonym substitution changes the prediction results. Since our work mainly focuses on certified robustness and not on text adversarial attacks, and due to the word limit, we kindly refer the reviewer to representative works on text adversarial attacks such as TextBugger (Li et al., 2019) and TextFooler (Jin et al., 2019) for a detailed look at how synonym substitution changes prediction labels and fools text classifiers. Finally, the base model is obtained from the pre-trained “bert-base-uncased” BERT provided by the transformers package by huggingface. We have included this discussion in Section A.5 in the Appendix and kindly ask the reviewer to review that section if further information is needed due to the word limit. **C2: On the generalization capabilities of the proposed method.** **A**: We thank the reviewer for this question. As mentioned in L160-L163, our focus on BERT and sentiment classification is mainly because of the existing literature related to the task of prediction certification for text data. Our structural decomposition analysis of the base model in Figure 2 is general across different models, such as RoBERTa, ALBERT, and even more recently proposed foundation models such as ViLT and UNITER.
Since the proposed DR loss is designed from this general point of view, it can still be used for those latest architectures, and it can help improve the robustness of those models in other classification tasks such as visual question answering. We will include this discussion in the final version. **C3: On the other ways of generating adversarial examples such as through rephrasing or prompting.** **A**: We have also noticed the methods of generating adversarial examples by rephrasing and prompting, but they tend to change many words and increase the perturbation rate, which violates the requirement that good adversarial examples be imperceptibly different from the original samples. For example, the method proposed by (Qi et al., 2021) rewrites the text in a different style by rephrasing, and the changes are easily noticeable. In this sense, the adversarial examples generated in those ways are not of high quality because of their high perturbation rates. We thank the reviewer for the suggestion and will include those works in the discussion to explain why we focus on adversarial examples constructed by synonym substitution. **References** Pennington, J., Socher, R., & Manning, C. D. (2014, October). GloVe: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP) (pp. 1532-1543). Li, J., Ji, S., Du, T., Li, B., & Wang, T. (2019, January). TextBugger: Generating Adversarial Text Against Real-world Applications. In 26th Annual Network and Distributed System Security Symposium. Jin, D., Jin, Z., Zhou, J. T., & Szolovits, P. (2019). Is BERT really robust? A natural language attack on text classification and entailment. arXiv preprint arXiv:1907.11932. Qi, F., Chen, Y., Zhang, X., Li, M., Liu, Z., & Sun, M. (2021). Mind the style of text! Adversarial and backdoor attacks based on text style transfer. 
arXiv preprint arXiv:2110.07139. --- Rebuttal Comment 1.1: Comment: Thank you for your response, which has partially allayed my concerns. While I remain unconvinced that the proposed training method will become a predominant solution in the near future, I acknowledge the paper's merits in terms of concept and overall quality. Consequently, I maintain my initial score and lean towards accepting this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer 3SMy, We thank you for your positive feedback on our paper and the acknowledgment of the strength of our paper in terms of concept and overall quality! Sincerely, Authors
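As an aside on C1 above: the cosine-similarity synonym rule described in the rebuttal (any other word with cosine similarity greater than 0.8 to the target word in a pretrained embedding space counts as a synonym) can be sketched as follows. The embeddings below are made-up toy stand-ins for real GloVe vectors, which are 50-300 dimensional.

```python
import math

def cosine(u, v):
    # Standard cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def synonyms(word, vocab_embs, threshold=0.8):
    # Any other word whose embedding has cosine similarity > threshold
    # with `word` is treated as a synonym (the SAFER-style rule).
    e = vocab_embs[word]
    return [w for w, v in vocab_embs.items()
            if w != word and cosine(e, v) > threshold]

# Toy, hypothetical "embeddings" for illustration only.
toy = {
    "delicious": (1.0, 0.1),
    "lovely": (0.9, 0.2),
    "terrible": (-1.0, 0.1),
}
print(synonyms("delicious", toy))  # ['lovely']
```

With real GloVe vectors the same thresholding would be applied over the full vocabulary for each word in the input.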
Summary: This paper focuses on the certified robustness of language models. To improve certified robustness, the authors propose a better robust training method that enables robust feature extraction and a larger prediction margin. Experiment results show the effectiveness of the proposed DR loss, leading to better certified accuracy compared to the traditionally used CE loss. Strengths: This paper is well-motivated. The authors first identify the existing challenge of certification when using the CE loss. Based on that, the DR loss is proposed to alleviate non-robust feature extraction and improve the prediction margin. Empirical evaluations also demonstrate the effectiveness of the proposed DR loss. Weaknesses: - An important baseline is missing: robust training. The overall objective of the proposed DR loss is to improve the robustness of the base classifier so that it is easier to certify. Robust training such as min-max adversarial training [1] and TRADES [2] is also known to be beneficial for certification. Since the proposed method manipulates the training objective, which is very similar to min-max adversarial training and TRADES, it would be great to compare the certification performance of the model after training with [1,2] and with the proposed loss. For example, one may use greedy search-based attack methods from [3] to find the perturbation for each batch during training, then minimize the CE loss on the perturbed batch (when using TRADES it will be slightly different). Please note that the synonym set for each word should be consistent with SAFER's (if comparing in the Type-I scenario) rather than using the original candidate set of the attack methods. - An empirical robustness evaluation would make the evaluation more comprehensive. It would greatly demonstrate the effectiveness of the proposed method if improvements in empirical robustness can be observed and outperform robust training methods. 
Overall, this paper is well-motivated and clearly written. The proposed technique makes sense and is verified to be effective in some settings. If my concerns could be addressed, I would like to raise my rating. [1] Madry, Aleksander, et al. "Towards deep learning models resistant to adversarial attacks." arXiv preprint arXiv:1706.06083 (2017). [2] Zhang, Hongyang, et al. "Theoretically principled trade-off between robustness and accuracy." *International conference on machine learning*. PMLR, 2019. [3] Morris, John X., et al. "TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP." *arXiv preprint arXiv:2005.05909* (2020). Technical Quality: 3 good Clarity: 3 good Questions for Authors: The maximum deviation caused by all synonyms computed in Equation (1) seems to make the $l_{2}$ constraint for the perturbation too large. Assuming the average $l_{2}$ radius for each token embedding is $r$, the total $l_{2}$ radius would be enlarged by $\sqrt{n}$ times. The input length of some datasets used in this paper can be rather large. For example, IMDb contains on average 300-500 words. After tokenization, the number of tokens, $n$, could be more than 500, so the radius of the perturbation is enlarged more than 10x. In that case, the certification process in Theorem 1 (Equation 3) would very likely lead to a 0 certified accuracy. I would ask for more details about the computation of $\hat{R}$ and the additional certification operations that ensure the validity of the results. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors discussed the limitations and potential negative social impact of their work Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
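As a quick numeric check of the $\sqrt{n}$ scaling raised in the question above: if independent per-token perturbations combine in $l_2$ norm, the total radius grows as $r\sqrt{n}$. The per-token radius $r$ below is an assumed toy value, not a number from the paper.

```python
import math

r = 0.05  # assumed average l2 radius per token embedding (hypothetical)
for n in (128, 500):
    total = r * math.sqrt(n)  # per-token deviations combine in l2 norm
    factor = math.sqrt(n)     # enlargement relative to a single token
    print(f"n={n}: total radius {total:.3f} ({factor:.1f}x the per-token r)")
```

For n = 500 the enlargement factor is sqrt(500) ≈ 22.4, consistent with the reviewer's "more than 10x" estimate.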
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the encouraging and constructive comments. We are deeply grateful to the reviewer for acknowledging the significance of our research and thinking highly of the novelty and effectiveness of the proposed DR loss. We take the posted comments and questions seriously and would like to address the reviewer's concerns on the comparison with robust training losses and on empirical robustness evaluation with the following response. We will be more than happy to address any further concerns. **C1: An important baseline is missing: robust training.** **A**: We thank the reviewer for the comment and would like to further demonstrate the effectiveness of the designed loss. Without loss of generality, we use the Type I scenario for illustration and extend our experiments in Table 1 with the following comparison results. We use the representative PGD (Madry et al., 2017) loss and TRADES (Zhang et al., 2019) loss to adversarially train the model with the perturbation added in the word embedding space; these serve as the robust training baselines. Table R1: Comparison of certified robust accuracy (%) with robust training losses in the Type I scenario. Base Model | Loss | IMDB| SST2 | Yelp | AG ---|---|---|---|---|--- BERT | CE Loss|85.36 |91.65| 97.19| 93.78 BERT | PGD Loss | 87.52 | 90.28| 97.86 | 93.98 BERT | TRADES Loss | 86.80 | 90.44 | 97.56 | 93.96 BERT | DR Loss | **89.04** | **93.02** | **97.87** |**94.31** From Table R1, we can see that DR loss consistently outperforms PGD loss and TRADES loss on all four datasets. The gain in certified robust accuracy (CRA) is considerable for datasets with longer text such as IMDB and SST2. Specifically, the CRA of DR loss is 1.52% and 2.24% higher than those of PGD and TRADES on IMDB, respectively, and 2.74% and 2.58% higher than those of PGD and TRADES on SST2, respectively. 
Thus, DR loss is more suitable for the certified robust training task because (1) the perturbation for each word is randomly drawn from its corresponding synonyms, which covers a larger variety of perturbed samples, and (2) our designed modular regularization focuses on reducing the negative effect of the perturbation on the prediction margin, which is the key to satisfying the certification condition. We will include this discussion in the final version. **C2: An empirical robustness evaluation would make the evaluation more comprehensive.** **A**: We thank the reviewer for the comment and would like to address this concern with the following results on empirical robustness evaluation. Without loss of generality, we conduct a comparison of empirical robust accuracy with PGD loss and TRADES loss on the four datasets. We use BERT as the base model and train it with PGD loss, TRADES loss, and the proposed DR loss, respectively. After that, we follow the existing adversarial attack setup and randomly draw 1,000 test samples for adversarial example generation. The adversarial examples are generated by TextBugger (Li et al., 2019), a representative text adversarial attack method. The results are as follows. Table R2: Comparison of empirical robust accuracy (%) with robust training losses. We also show the natural accuracy in parentheses. Base Model | Loss | IMDB| SST2 | Yelp | AG ---|---|---|---|---|--- BERT | PGD Loss | 62.0 (87.1) | 87.6 (91.5) | 92.4 (96.9) | 85.5 (93.7) BERT | TRADES Loss | 57.8 (84.8) | 86.3 (91.7) | 91.9 (97.0) | 84.4 (94.3) BERT | DR Loss | **72.7** (86.9) | **89.8** (93.3) | **96.9** (98.4) | **87.6** (92.9) From the comparison results in Table R2, the proposed DR loss in UniT also outperforms PGD loss and TRADES loss with respect to empirical robust accuracy. 
This is also attributable to the individual robustness supervision provided by DR loss for each module, which helps the modules work together more coherently to defend against adversarial attacks. We will include this discussion in the final version. **Q1: On the computation of $\hat{R}$ and additional certification operations to ensure the validity of the results.** **A**: This question is mainly about the certification process in Type II and especially the calculation of $\hat{R}$. It is a good question, as $\hat{R}$ is the key to certification in the Type II scenario. To answer it: firstly, it is true that some datasets generally have long text inputs, and we follow previous works and truncate the text inputs for efficiency reasons. For example, the input text of IMDB is truncated to 128 tokens in training and certification. Secondly, the embedding of each word is not frozen during training, i.e., the embeddings are regarded as part of the model parameters. Since in the Type II scenario the word embeddings are supervised with a regularizer $\ell_{\hat{R}}$ (shown in L295), the embedding of each word tends to shrink in norm and becomes small. As a result, $\hat{R}$ will be relatively small after training, which makes the certification condition in Theorem 1 much easier to satisfy. **References** Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., & Jordan, M. (2019, May). Theoretically principled trade-off between robustness and accuracy. In International conference on machine learning (pp. 7472-7482). PMLR. Li, J., Ji, S., Du, T., Li, B., & Wang, T. (2019, January). TextBugger: Generating Adversarial Text Against Real-world Applications. In 26th Annual Network and Distributed System Security Symposium.
Summary: This paper discusses the vulnerability of Deep Neural Networks (DNNs) used in Natural Language Processing (NLP) tasks against adversarial attacks, specifically word-level adversarial perturbation (i.e., synonym substitution). The authors delve into two existing training frameworks (Type I and Type II) for these NLP models, highlighting shortcomings related to unified training frameworks and the robustness of the base model. To overcome these limitations, the authors propose a novel framework, UniT, which merges the two types of models to provide stronger certified robustness. They also propose a Decoupled Regularization (DR) loss to optimize the robustness regularization of individual modules. The experimental results provide evidence that UniT with DR loss improves the certified robust accuracy in both types of certification scenarios. Strengths: 1. **Novelty**: The paper presents an original perspective on certified robust training against adversarial attacks on text data, bringing unique insights and implementations that address the identified gaps in the field. 2. **Improved Certification Accuracy**: The proposed unified framework (UniT) and novel decoupled regularization (DR) loss show promising results, achieving higher certified robust accuracy. This moves us towards models with stronger robustness. 3. **Bypasses IBP Issues**: The UniT framework allows Type II methods to bypass Interval Bound Propagation (IBP) during training, which has been shown to hamper certification due to its loose-bound problem. This removes a major complexity in the training process. Weaknesses: 1. **Increased Complexity**: While the UniT framework and DR loss may improve robustness, they potentially increase the complexity of model training, because handling the embedding space as an intermediate for unification and decoupling the CE loss could be time- and resource-consuming. 2. 
**Limited Validation**: Although the paper claims improved results, tests and validations seem limited. It would be beneficial to test the proposed methods on different tasks or datasets to better assess their efficacy and robustness. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - No particular questions Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback and comments, and we greatly appreciate the reviewer's recognition of the novelty, improved performance, and meaningfulness of our solution. Taking the raised questions seriously, we answer each of them as follows. We genuinely hope our response can clarify the concerns, and we will be more than happy to address any further concerns. **C1: Increased Complexity.** **A**: We acknowledge that, compared to training with the CE loss, computing the proposed DR loss increases the computation cost, because features must be extracted for both the perturbed input and the original input during training. Such a tradeoff is natural: to obtain better model robustness, we have to spend more computation. In fact, an increase in computation cost also occurs for existing robust training techniques such as PGD (Madry et al., 2017) loss and TRADES (Zhang et al., 2019) loss for improving empirical adversarial robustness. Additionally, we want to highlight that, compared to existing robust training techniques, the computation cost of the DR loss is relatively lower. Suppose the cost of computing the CE loss is $C$. The cost of the DR loss is then approximately $2C$. For existing robust training techniques such as PGD loss and TRADES loss, however, if the number of steps for generating adversarial examples is $k$, the cost will be $k\cdot C$. Since $k$ is generally larger than 2 in their implementations, the computation cost of the DR loss is relatively smaller. We will include this discussion in the final version. **C2: Limited Validation.** **A**: We thank the reviewer for the suggestion and introduce the empirical robust training methods, PGD loss and TRADES loss, as baselines to further validate the effectiveness of the proposed method. 
Firstly, we want to compare DR loss with the added baselines on certified robust accuracy. Secondly, we conduct a comparison between our method and those baselines in terms of empirical robust accuracy. We will include those results in the final version, which further demonstrate the effectiveness of the proposed DR loss in improving model robustness. In detail: Firstly, without loss of generality, we extend our experiments in Table 1 in the Type I scenario by adding PGD loss and TRADES loss as baselines. The results are shown in Table R1. Table R1: Comparison of certified robust accuracy (%) in the Type I scenario. Base Model | Loss | IMDB| SST2 | Yelp | AG ---|---|---|---|---|--- BERT | CE Loss|85.36 |91.65| 97.19| 93.78 BERT | PGD Loss | 87.52 | 90.28| 97.86 | 93.98 BERT | TRADES Loss | 86.80 | 90.44 | 97.56 | 93.96 BERT | DR Loss | **89.04** | **93.02**| **97.87** |**94.31** From Table R1, DR loss consistently performs better than PGD loss and TRADES loss on all four datasets. In particular, DR loss obtains a relatively large gain in certified robust accuracy (CRA) on datasets with long text samples such as IMDB and SST2. Thus, DR loss is a more suitable choice for certified robust training because of its decomposed regularization for each module, which requires each module to be robust under perturbation. Secondly, we also conduct experiments on the task of defending against adversarial attacks. We employ BERT as the base model and use PGD loss and TRADES loss as baselines. In our experiments, we conduct adversarial attacks using TextBugger (Li et al., 2019), a commonly used text adversarial attack method. Following the existing adversarial attack setup, we randomly draw 1,000 test samples for adversarial example generation. The defense results of the different methods are as follows. Table R2: Comparison of empirical robust accuracy (%) with robust training losses. 
We also show the corresponding natural accuracy in parentheses. Base Model | Loss | IMDB| SST2 | Yelp | AG ---|---|---|---|---|--- BERT | PGD Loss | 62.0 (87.1) | 87.6 (91.5) | 92.4 (96.9) | 85.5 (93.7) BERT | TRADES Loss | 57.8 (84.8) | 86.3 (91.7) | 91.9 (97.0) | 84.4 (94.3) BERT | DR Loss | **72.7** (86.9) | **89.8** (93.3) | **96.9** (98.4) | **87.6** (92.9) From Table R2, the empirical robust accuracy of DR loss is higher than those of PGD loss and TRADES loss. Compared to the baselines in Table R2, the improvement comes from the individual robustness supervision for each module provided by DR loss. These results further show the effectiveness of the designed DR loss in improving model robustness, and we hope they address the reviewer's concern. **References** Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083. Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., & Jordan, M. (2019, May). Theoretically principled trade-off between robustness and accuracy. In International conference on machine learning (pp. 7472-7482). PMLR. Li, J., Ji, S., Du, T., Li, B., & Wang, T. (2019, January). TextBugger: Generating Adversarial Text Against Real-world Applications. In 26th Annual Network and Distributed System Security Symposium. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks for the clarification and updates. I have read the other reviews and the authors' rebuttals to them. Given the new information and results in the rebuttal, I'd like to raise my rating score to 6. --- Reply to Comment 1.1.1: Comment: Dear Reviewer FiMX, We really appreciate your time and effort. We are thankful for your acknowledgment of the effectiveness of our rebuttal response. Sincerely, Authors
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank you for your time and effort spent on our paper. We deeply appreciate all your constructive feedback for improving our paper. We have responded to every raised concern with utmost sincerity and seriousness, and with the hope that our response can address them. Please find our response to your specific comments and questions in each corresponding separate response. In each response, Q, C, and A are short for question, comment, and answer, respectively. We are excited that most of the reviewers acknowledge the novelty of the designed unified framework UniT and the proposed DR loss for certified robustness training, the effectiveness of our method, and the high quality of our presentation. In our response, we have provided additional illustrations of certain concepts as requested. In addition, we have conducted all requested experiments and put the results and analysis in the corresponding response sections. We will further enhance our paper in the final version with these useful suggestions. Thanks again for your time and effort! Best regards, Authors
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a unified framework called UniT, aiming to solve the limitations of existing certified robust training pipelines against text adversarial perturbations. The main contribution is that it works in the word embedding space and provides stronger robustness guarantees without extra modules. Additionally, the paper proposes a decoupled regularization (DR) loss to improve the base model's robustness. Experimental results show the effectiveness of the unified framework and DR loss in enhancing certified robust accuracy. Strengths: This work presents three advantages: 1. It successfully combines the smoothed model in the discrete word space and the latent space, effectively bridging the structural gap between the two spaces and providing a unified approach for certified robustness. It avoids the loose-bound problem caused by IBP. 2. The introduction of a decoupled regularization (DR) loss specifically targets improving the robustness of the base model, separating the robustness regularization terms for the feature extraction and classifier modules, leading to better performance. 3. Experiments are conducted on commonly used text classification datasets, demonstrating the effectiveness of the proposed unified framework and DR loss in improving certified robust accuracy. Weaknesses: See the question part. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: * It appears to be a forced combination of two robust training methods. Can you provide more insight, such as whether the motivation is to unify Type 1 and Type 2 methods, or to address the issue of requiring IBP in Type 2 methods? * Upon reviewing the supplementary code, it appears that the authors have used transformers for their experiments, specifically focusing on BERT. To provide a more comprehensive evaluation, it would be beneficial to include additional experimental results with other models such as BERT-large, RoBERTa, and ELECTRA. 
This would further showcase the effectiveness and compatibility of the proposed framework across various state-of-the-art models. (I think it is very easy to switch to other models.) * Can you provide more explanation and justification for why UniT does not require Interval Bound Propagation (IBP)? This is essential to understanding the core concept of UniT. * I have also discovered some other methods that were not compared in the paper. (Such as Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, and Xuanjing Huang. 2021. Certified robustness to text adversarial attacks by randomized [MASK]; Yada Pruksachatkun, Satyapriya Krishna, Jwala Dhamala, Rahul Gupta, and Kai-Wei Chang. 2021. Does robustness improve fairness? Approaching fairness with word substitution robustness methods for text classification. arXiv preprint arXiv:2106.10826) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: See above Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We deeply thank the reviewer for the time spent reading our paper and for recognizing the reasonableness of the proposed unified framework and DR loss. The questions raised in the review are constructive for our paper. We understand that there is some misunderstanding, and we would like to address it and answer each raised question with the utmost earnestness and seriousness. We sincerely hope the following response can clarify any misunderstanding and address the posted questions, and we will be more than happy to answer any follow-up questions. **Q1: Can you provide more insight?** **A**: With all due respect, we hold that our method is not a forced combination. Our design is based on a detailed analysis of the bottlenecks of existing certified robust training: (1) a structural analysis of the two types of frameworks to identify the similarities and differences between them, and (2) validation that IBP is the hindrance to obtaining higher certified robust accuracy in Type II training. We would like to further illustrate our motivation and insights as follows. In this work, our aim is to improve the certified robustness performance of text classifiers, and it depends on two things: (1) the satisfaction of the certification condition in randomized mechanisms and (2) the robustness of the base model. As for our motivation and insights: - Firstly, after the structural analysis in Figure 1, we notice that Type II methods require IBP to evaluate the certification condition, which has a loose-bound problem that makes the certification condition harder to satisfy, leading to lower certified robust accuracy. We thus want to address the IBP issue in Type II methods by removing it from certified robust training and certification. 
However, removing IBP means that a new certification condition is needed, so we further address the certification condition design problem without IBP via the newly designed condition shown in Theorem 1, which can be directly calculated in the embedding space. Thus, one aspect of our motivation is to use the unified viewpoint on certified robust training to address the IBP issue in the Type II scenario, and we correspondingly tackle the challenge of designing a certification condition without IBP based on the independence of each word. - Secondly, based on our analysis, both types of certified robust training use the CE loss to supervise the robustness training of the base model. Nonetheless, the CE loss lacks specialized robustness supervision for each module (feature extractor and classifier). Since improving the robustness of the base model also helps improve certified robustness, we further design a modular regularization term for improving the robustness of each module based on its corresponding responsibility. The designed DR loss works well with the designed unified framework in both scenarios. **Q2: It would be beneficial to include additional experimental results with other models.** **A**: We thank the reviewer for the suggestion. Before showing the evaluation suggested by the reviewer, we want to point out that, as mentioned in L160-L163, we focus on using BERT as the base model because it yields the best performance for existing methods and we follow their setting. As for the additional experimental results with other models, without loss of generality, we take the case of Type I training on the IMDB dataset for illustration. The results are shown in Table R1. Table R1: Comparison of certified robust accuracy (%) with different base models on IMDB. 
|Base Model |SAFER | UniT | |---|---|---| |BERT-Base | 85.36 | 89.04| |RoBERTa | 89.12 | 89.20| |BERT-Large | 87.84 | 89.76| From Table R1, UniT consistently outperforms the baseline SAFER in the Type I scenario with different types of base models, including the added RoBERTa and BERT-Large, which further validates the effectiveness and compatibility of the proposed framework. **Q3: Can you provide more explanation and justification for why UniT does not require IBP?** **A**: UniT does not require IBP because we have proposed a novel certification condition that is calculated directly in the embedding space. Recall that IBP is used for certification in Type II methods such as CISS. Thus, since we have a new way to certify the prediction results without IBP, we do not require IBP in our training and certification. However, designing a certification condition without IBP is challenging. By utilizing the independence of each word's embedding, we propose Eq. (1) to calculate the maximum deviation $\hat{R}$ caused by all synonyms. We correspondingly propose a novel certification condition based on the word embedding space, shown in Theorem 1, which indicates that the prediction result can be certified if the certified radius $R$ calculated from Eq. (3) is greater than $\hat{R}$. Since this certification does not need IBP, UniT does not require IBP. **Q4: There are some other methods that were not compared in the paper.** **A**: We thank the reviewer for pointing out those papers, but they are not appropriate for comparison on certified robust accuracy. Firstly, the method proposed by Zeng et al. certifies the prediction under a limited amount of unconstrained word perturbation, e.g., only 2 words for the SST2 dataset. Our comparison concentrates on the scenario where any word can be replaced by its synonyms, which that method cannot handle, so it is not suitable as a comparison baseline. 
Secondly, the other mentioned paper by Pruksachatkun et al. is more of a discussion on whether certified robustness helps fairness; it is not about designing a method for certified robust prediction. The proposed method in that paper is intended as a baseline for fairness-related methods, which is not related to our task. We will include those papers in the discussion of related work in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing comprehensive and insightful responses. These answers address most of my concerns and emphasize the advantages of the proposed framework. After reading the clarification as well as the other reviews, I decided to raise my initial rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ZzE2, Thank you for your positive feedback on our rebuttal response! We are very glad that our response has addressed your concerns. Sincerely, Authors
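A minimal sketch of the IBP-free certification logic described in Q3 above, under assumptions: `max_synonym_deviation` is only one plausible reading of Eq. (1) (per-word worst-case embedding deviation over synonyms, combined in $l_2$ norm via the independence of each word), and `R` stands in for the certified radius of Eq. (3), whose actual formula is in the paper.

```python
import math

def max_synonym_deviation(word_embs, synonym_embs_per_word):
    # Hypothetical reading of Eq. (1): for each word, take the largest
    # l2 deviation any synonym can cause in embedding space, then
    # combine per-word worst cases in l2 norm (word independence).
    total_sq = 0.0
    for e, syn_embs in zip(word_embs, synonym_embs_per_word):
        worst = max(
            math.sqrt(sum((a - b) ** 2 for a, b in zip(e, s)))
            for s in syn_embs
        )
        total_sq += worst ** 2
    return math.sqrt(total_sq)

def is_certified(R, R_hat):
    # Theorem 1, as summarized in the rebuttal: the prediction is
    # certified when the certified radius R exceeds R_hat.
    return R > R_hat
```

With toy 2-d embeddings for a two-word input, e.g. per-word worst deviations of 0.4 and 0.3 combine to an overall deviation of 0.5, and certification holds only for a certified radius above that value.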
Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion
Accept (poster)
Summary: This paper presents an innovative paradigm shift in sequential recommendation systems. It departs from the traditional learning-to-classify approach, which includes negative sampling, to a learning-to-generate model, proposing a novel system called Diff4Rec, grounded on guided diffusion. This approach is predicated on the observation that users imagine an ideal or "oracle" item after several interactions with a recommendation system. The authors posit that Diff4Rec can generate these oracle items, modeling the underlying item-generation distribution through a diffusion process, and is not limited to a predefined candidate set. This approach allows for discarding negative samples, which previous models could not accomplish, as it models the data-generation process directly. The paper suggests that Diff4Rec effectively mirrors human behavior better than previous systems by capturing a user's preference more accurately and directly, avoiding the noisy or easy supervision from negative samples. The authors evaluate Diff4Rec through various experiments, where it demonstrates consistent improvements over existing sequential recommendation models. In conclusion, this paper introduces a transformative methodology in sequential recommendation, potentially broadening the capabilities of recommendation systems. Strengths: 1. Raises a new learning-to-generate paradigm for recommendation using the relatively new diffusion model approach. 2. Good reproducibility: code is released for readers; hyperparameter settings are available. 3. Good to explain what each step is doing in the Algorithms. 4. Improvements over the baseline appear significant, but a statistical significance analysis is missing. Weaknesses: 1. In terms of recommendation, an important and indispensable step is not discussed and stated in the algorithm.
After an oracle item is generated, to fetch one or more items to recommend to users, we need to find the top nearest items to the oracle item in the embedding space. Though this step is described in the Experiment, it is fully ignored in the Method section. I understand the Method part mainly describes the learning-to-generate paradigm, but this step is needed to complete the recommendation task. Moreover, to retrieve the top nearest items, how to properly encode the embedding space for items also becomes a relevant question. 2. It might be good to list out the main contributions of the work. 3. Please avoid overusing footnotes. Footnotes are good for external links and websites. If you do think footnotes are providing important information, please try to integrate them into the main text instead, e.g., 3 and 4. And try to avoid controversial remarks irrelevant to the main contents, like 2: people working on learning to rank may not fully agree with 2. 4. In Table 1, if the authors meant to have %, it is better to add (%) after HR@20 and NDCG@20. It is easy to get confused, even when reading the caption. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. A key success of diffusion models in image generation is that they can encode some undetermined features in the latent space, so that a random initializer helps to randomly sample those features in addition to the guidance signals. This implies that, for a given user and interaction history, we can always generate different oracle items from different random seeds. How will this help top item retrieval? For example, we can generate different oracle items, pool them to get an ensemble of oracles, and then retrieve from that. How will this be different from retrieving one item for each generated oracle? It would be very interesting to investigate this. 2. Can the authors add one or more of the other diffusion-model-based recommendation models (19, 20, 21, 36) as baselines?
This would better show the effect of training with/without negative sampling. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: 1. Authors may disagree on whether selecting the nearest items in the embedding space is a necessary step for recommendation. I could also imagine recommendation with a generative item, for example, an LLM decoder generating recommended results from the oracle item embedding produced by the diffusion process. Maybe the authors could discuss how to fetch / generate the recommended item(s) from the oracle item in the main text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. *The step to retrieve the recommendation list is needed to complete the recommendation task. Moreover, to retrieve the top nearest items, how to properly encode the embedding space for items is also a relevant question.* Thank you for highlighting this point. We recognize the importance of identifying the top nearest items to the oracle item in recommendation tasks. We regret the oversight of detailing this in the Experiment section instead of the Method section. In the revision, we will introduce Section 4.2.3, titled "Retrieval of Recommendation Results," to address this aspect. We hope this addition clarifies your concerns. Regarding the encoding of the item embedding space, we utilize a Transformer encoder in Diff4Rec to process item sequences. We concur that the choice of encoding strategy for this space is pivotal. To explore alternatives, we conducted experiments replacing the Transformer encoder with a GRU, with the results presented below:

| | Yoochoose HR@20(%) | Yoochoose NDCG@20(%) | KuaiRec HR@20(%) | KuaiRec NDCG@20(%) | Zhihu HR@20(%) | Zhihu NDCG@20(%) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Transformer encoder | 4.78 | 2.23 | 5.26 | 4.11 | 2.26 | 0.79 |
| GRU encoder | 4.48 | 1.92 | 5.58 | 5.17 | 2.05 | 0.73 |

--- > 2. *Might be good to list out the main contributions of the work.* Thank you for this advice. We will list out the main contributions in the revised version, focusing on: (1) Diff4Rec reshapes sequential recommendation as an oracle item generation task, (2) Diff4Rec does not require negative samples, since it explores the underlying distribution of observed interactions with a diffusion model, and (3) we conduct experiments on three datasets to show the effectiveness of Diff4Rec. We believe this makes the presentation of our work clearer! --- > 3. *Please avoid abusing the footnotes.* We apologize for the overuse of footnotes. We will integrate the key points of Footnotes 3 and 4 into the main text in the revised version.
We understand Footnote 2 is controversial since learning-to-rank differs from learning-to-classify. What we wanted to convey was their similarity in requiring negative samples for recommendation tasks. In the revision, we will clarify their similarity more precisely in the main text, avoiding any controversial statements. Thanks again for reminding us of this issue. --- > 4. *Better to add (%) after HR@20 and NDCG@20.* Thank you for this advice. We will add (%) after HR@20 and NDCG@20 in Table 1 in the revised version. --- > 5. *For a given user and interaction history, we can always generate different oracle items from different random seeds. How will this help on top item retrieval? For example, we can generate different oracle items and make a pooling to get an ensemble of oracles and then retrieve from that. How will this be different from retrieving one for each generated oracle? It would be very interesting for users to investigate on this end.* Thank you for this question. We feel that this question is more about the understanding of diffusion models, and we are glad to share our understanding. In image generation, if guidance signals are involved, the generated images may differ, but they center around the guidance signal. Even though they differ in RGB, it is hard to say that they are always different in certain latent spaces under certain measurements. In general, the generation process of diffusion is about removing certain noise from a pure Gaussian sample, and the guidance signal provides more delicate information about which part of the noise to remove. Back to Diff4Rec, the generated oracle items can differ with different random seeds, but they are all recovered from Gaussian noise under the guidance of the same behavior sequence. Therefore, they could be similar in certain latent spaces under certain measurements, which, we believe, is an open scientific topic to explore.
Moreover, we do find it insightful to generate different oracle items, pool them to get an ensemble of oracles, and then retrieve from that. We conducted mean pooling of 5 different oracle items, and the result is shown as follows:

| | Yoochoose HR@20(%) | Yoochoose NDCG@20(%) | KuaiRec HR@20(%) | KuaiRec NDCG@20(%) | Zhihu HR@20(%) | Zhihu NDCG@20(%) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| Diff4Rec | 4.78 | 2.23 | 5.26 | 4.11 | 2.26 | 0.79 |
| ensemble | 4.83 | 2.41 | 5.47 | 4.32 | 2.32 | 0.78 |

The reason may be that the ensemble of oracles can effectively reduce the variance. --- > 6. *Can authors add one / more of the other diffusion-based recommendation models (19, 20, 21, 36) as a baseline?* Thank you for the suggestions. We have added models from references [19], [20], and [36] as baselines for comparison in Table 4 of the PDF. Of these works, only [36] provides open-sourced code that allows direct reproduction. For [19] and [20], we carefully implemented their methods based on the papers' details to enable fair comparisons. We are still working to reproduce the approach of [21] and will include it in the subsequent stages. --- > 7. *An LLM decoder could generate recommended results from the oracle item embedding. Maybe authors could discuss how to fetch / generate the recommended item(s) from the oracle item in the main text.* Thank you for this insightful comment! After analyzing this comment, we do believe it is more elegant to decode the oracle items explicitly, possibly with LLMs. We made some initial attempts and would like to finetune Vicuna, an open-sourced LLM, to interpret the oracle item. However, we find that our available GPU resources currently cannot afford the finetuning. We will keep investigating this promising direction. Moreover, in the revised version, we will discuss more about how to fetch the recommended item(s) from the oracle item, and what LLMs can do in the process. Hopefully, it can inspire more research in this direction! Thanks again for this insightful comment!
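As an aside on Point 5, the mean-pooling ensemble could be sketched as follows. This is only a toy illustration with made-up names and synthetic data, not our actual implementation: several oracle embeddings (different seeds) are averaged, and the pooled vector is scored against the candidate items by inner product.

```python
import numpy as np

def ensemble_retrieve(oracle_embs, item_embs, k):
    """Mean-pool oracle embeddings generated from different random seeds,
    then rank candidate items by inner product with the pooled vector."""
    pooled = np.mean(oracle_embs, axis=0)
    scores = item_embs @ pooled
    return np.argsort(-scores)[:k]

# toy data: five noisy draws around the same underlying preference vector
rng = np.random.default_rng(0)
target = np.array([1.0, 0.0, 0.0])
draws = target + 0.1 * rng.standard_normal((5, 3))
items = np.eye(3)  # three candidate items, one per axis
top1 = ensemble_retrieve(draws, items, k=1)
```

Pooling first and retrieving once is cheaper than retrieving per oracle and merging the lists, and the averaging is what reduces the variance of the generated embeddings.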
--- Rebuttal Comment 1.1: Title: Supplement to Point 6 (the additional baseline [21]) Comment: As a supplement to Point 6, we have implemented model [21] based on the paper's details, and the result is shown as follows:

| | YooChoose HR@20(%) | YooChoose NDCG@20(%) | KuaiRec HR@20(%) | KuaiRec NDCG@20(%) | Zhihu HR@20(%) | Zhihu NDCG@20(%) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| DSR [21] | 4.12$\pm$0.06 | 1.81$\pm$0.05 | 3.79$\pm$0.03 | 1.64$\pm$0.03 | 1.80$\pm$0.04 | 0.65$\pm$0.02 |

Note that the results of models [19], [20], and [36] are provided in Table 4 of the PDF. We are anticipating a deeper discussion with you!
Summary: This paper describes a new learning-to-generate paradigm for sequential recommendation based on diffusion models. The authors discuss the limitations of previous approaches in sequential recommendation, where a recommender model learns to classify user preferences based on positive and negative item samples. The paper highlights two inherent limitations of this approach: (1) it differs from human behaviour, which involves knowing an ideal item and selecting potential matches, and (2) the classification is limited to the candidate pool, which may contain noisy or easily distinguishable negative samples, diluting the preference signals. This paper proposes Diff4Rec, a new learning-to-generate paradigm based on guided diffusion, which is able to circumvent the limitations of existing approaches. Empirical results on benchmark datasets show the effectiveness of Diff4Rec. Strengths: - The authors analyzed the potential limitation of using negative sampling when training classification-based recommendation models, and proposed a generation-based framework using diffusion models, which is technically sound. - Diff4Rec is able to outperform a number of state-of-the-art baseline alternatives. - Code is available, which helps reproduce the results. Weaknesses: - The authors analyzed the potential limitations of using negative sampling when training classification-based recommendation models. In practice, however, a lot of models can be trained without using negative sampling at all. They directly use a softmax over all possible items to obtain the probability of each item being the next item. These softmax-based approaches, in my opinion, are closely related to the spirit of Diff4Rec, i.e., by not relying on negative sampling and treating all items other than the next item as a whole. Thus the authors need to explain why Diff4Rec is superior to these approaches. - Diff4Rec can be very slow in both training and inference stages.
In addition, the authors should analyze the time / computational complexity of Diff4Rec and present it in the paper. - The number of compared approaches is relatively small. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - In Diff4Rec, the max number of interactions is set to 10. Why not choose a larger number of historical interactions? Is it due to Diff4Rec running too slowly, or is it because Diff4Rec does not perform well when the sequence length is large? - The abstract in the OpenReview console is not consistent with the submitted paper. In OpenReview, the model is termed SRDiff, while in the paper it is termed Diff4Rec. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have done a good job discussing the limitations of Diff4Rec. Diff4Rec has a very high computational cost, which limits its applicability. When considering using Diff4Rec in applications, it is important to analyze the tradeoff between computational need and recommendation performance. However, Diff4Rec offers a new view of reshaping sequential recommendation as an item generation task, which has the potential to inspire more research in this direction. This outweighs the existing limitations of Diff4Rec. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. *Why Diff4Rec is superior to softmax-based approaches.* We appreciate your comments, but respectfully emphasize the fundamental distinction between Diff4Rec and softmax-based approaches. Conceptually, Diff4Rec is a learning-to-generate paradigm that casts off negative sampling, while softmax-based approaches are in the learning-to-classify paradigm that contrasts the positive items against the explicit or implicit negatives. We focus on the softmax loss with implicit negatives, which may not conduct negative sampling explicitly, but implicitly treats all items (excluding the next positive item) as negatives [a]. Specifically, the softmax loss can be derived as follows (cf. Equation 14 in [a]): $L_{softmax} = -\sum\limits_{(c, i) \in S} \ln \frac{\exp(\hat{y}(i|c))}{\sum\limits_{j \in I}\exp(\hat{y}(j|c))} = -\sum\limits_{(c, i) \in S}\left[ \hat{y}(i|c) - \ln \sum\limits_{j \in I}\exp(\hat{y}(j|c))\right],$ where $S$ is the set of observed interactions. Clearly, it treats the observed interactions as positives, while implicitly designating unobserved interactions as negatives. Then, it optimizes their margin appropriately. Furthermore, work [b] interprets the softmax loss from the perspective of contrastive learning: $L_{SSM} = -\sum\limits_{(u, i) \in D} \log \frac{\exp(f(u, i))}{\exp(f(u, i)) + \sum\limits_{j \in N}\exp(f(u, j))},$ which aligns with the softmax loss when $N$ is the set of all items apart from the next positive item. Concurrently, the softmax loss can be interpreted as InfoNCE, a well-known objective in contrastive learning, by setting the temperature $\tau = 1$: $L_{InfoNCE} = -\sum\limits_{(u, i) \in D} \log \frac{\exp(f(u, i) / \tau)}{\exp(f(u, i) / \tau) + \sum\limits_{j \in N^-}\exp(f(u, j) / \tau)},$ and InfoNCE is acknowledged to discriminate between the positive sample $(u, i)$ and negative ones $(u, j)$. The training of Diff4Rec, as shown in Algorithm 1, involves no implicit usage of negative samples, compared to softmax.
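As a numerical sanity check of this equivalence (using made-up scores, not from any model): with $\tau = 1$ and the negative set equal to all items except the positive, the InfoNCE objective coincides with the full softmax loss.

```python
import numpy as np

def full_softmax_loss(scores, pos):
    """-log softmax probability of the positive item over all items."""
    return -(scores[pos] - np.log(np.exp(scores).sum()))

def info_nce_loss(scores, pos, negs, tau=1.0):
    """InfoNCE with explicit negatives; positive logit placed first."""
    logits = np.concatenate(([scores[pos]], scores[negs])) / tau
    return -(logits[0] - np.log(np.exp(logits).sum()))

scores = np.array([2.0, 0.5, -1.0, 0.3])  # made-up scores y_hat(j|c) for 4 items
pos, negs = 0, [1, 2, 3]                  # negatives = all items except the positive
assert np.isclose(full_softmax_loss(scores, pos),
                  info_nce_loss(scores, pos, negs, tau=1.0))
```

The assertion holds because the two log-sum-exp terms range over exactly the same set of items when the negatives cover everything except the positive.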
The comparison can be summarized as:

| method | observed interactions | unobserved interactions | treatment |
|:--:|:--:|:--:|:--:|
| softmax | positive | negative (explicitly) | contrastive learning |
| Diff4Rec | positive | not used | recover observed interactions by diffusion |

[a] Rendle, S. (2021). Item recommendation from implicit feedback. In Recommender Systems Handbook (pp. 143-171). New York, NY: Springer US. [b] Wu, J., Wang, X., Gao, X., Chen, J., Fu, H., Qiu, T., & He, X. (2022). On the effectiveness of sampled softmax loss for item recommendation. arXiv preprint arXiv:2201.02327. --- > 2. *The authors should analyze the time / computational complexity of Diff4Rec and present them in the paper.* Thank you for the insightful comment. Indeed, the diffusion model's inference phase can be time-consuming, an intrinsic limitation of diffusion. While Diff4Rec reframes sequential recommendation as a learning-to-generate task using diffusion, it is not immune to this limitation. However, we're optimistic that as diffusion model research advances, emerging efficient inference algorithms will mitigate this slow-inference concern. Thus, Diff4Rec stands to benefit from these advancements in diffusion. We recognize the importance of analyzing Diff4Rec's computational complexity. Consequently, we provide Table 5 in the PDF, detailing the time taken at each stage. We can observe that the training efficiency of Diff4Rec and SASRec is comparable, since diffusion samples a single step for training as described in Line 5 of Alg. 1. The inference stage of Diff4Rec is more time-consuming than SASRec's. Moreover, we show the trade-off between performance and time cost of inference *wrt* total diffusion steps in Figure 8. We'll definitely incorporate these computational costs in the revision. We're grateful for your insightful remarks. --- > 3. *Why not choose a larger number of historical interactions than 10?* Thank you for raising this point.
Our decision to set the maximum number of interactions at 10 was not due to performance or efficiency constraints of Diff4Rec; we considered 10 an appropriate number. Indeed, Diff4Rec's ability to handle longer sequences is not compromised. In Diff4Rec, historical interactions are initially encoded using a Transformer encoder to obtain a 1-D representation, which then guides the diffusion process. As such, the length of retained historical interactions has a limited impact on learning efficiency. To provide further clarity, we conducted experiments with larger sequence lengths (20 and 30). The results are in Tables 6 & 7 in the PDF, which show similar trends to the results in Table 1. Meanwhile, the training and inference times show little difference from Table 5. We hope the results address your concerns. --- > 4. *The number of compared approaches is relatively small.* Thank you for the suggestion. We've incorporated three additional baseline models (i.e., DiffRec [36], DiffuRec [19], and CDCRec [20]) in Table 4 of the PDF, including recent work that applies diffusion to recommendation. Note that these models still operate under the learning-to-classify paradigm. As described in Lines 101-110, they necessitate the use of negative samples. --- > 5. *The abstract in the OpenReview console is not consistent with the submitted paper.* Thank you for bringing this issue to our attention. We apologize and will perform careful proofreading in the revision. --- > 6. *It is important to analyze the tradeoff between computational need and recommendation performance.* Thank you for your feedback. As mentioned in Point 3, the balance between computational cost and applicability is a fundamental challenge of diffusion models. Following your suggestion, we've presented experimental results and analysis. Moreover, your comments underscore our belief that, despite its limitations, Diff4Rec has the potential to spur further research in this domain.
--- Rebuttal 2: Title: We are looking forward to your further comments. Comment: Dear Reviewer VUDN, Thank you again for your insightful feedback on our submission, particularly your suggestions to 1) **explain why Diff4Rec is superior to softmax-based approaches**, 2) **discuss the time / computational complexity of Diff4Rec in the paper**, 3) **verify our method with a larger number of historical interactions than 10**, and 4) **incorporate more baseline approaches**. These valuable suggestions further strengthen the quality of our work. The deadline of the discussion stage is approaching, and we are looking forward to your further comments. We sincerely hope that these improvements will be taken into consideration. If we have properly addressed your concerns, we would deeply appreciate it if you could kindly re-evaluate our paper. If you have further concerns, please let us know; we remain open and would be more than happy to actively discuss them with you. Best, Authors --- Rebuttal Comment 2.1: Comment: I would like to thank the authors for the detailed explanation. After reading the other reviewers' comments and the authors' rebuttal, I will keep my original score. Firstly, softmax-based approaches should also be considered generative methods since they directly model the probability of a target item being the next item. In that way, the distinction between softmax and diffusion is really marginal. The authors also argued that softmax-based approaches explicitly used negative samples, while Diff4Rec does not. In fact, both softmax and diffusion implicitly use negative sampling. If a diffusion model does not implicitly use negative sampling, then the model will simply collapse. Another potential problem of the evaluation protocol is that the authors said "For all baselines, we conduct negative sampling from the uniform distribution at the ratio of 1: 1, which is not conducted in Diff4Rec."
This is not really good practice for training sequential recommendation models. A negative size of 1 is simply too small for the model to perform well. In my experience, for example, on the YooChoose dataset, a negative sample size of 49-99 would achieve much better results. Therefore the results reported in the paper might not be a fair comparison against baseline methods. --- Reply to Comment 2.1.1: Title: Response to your further concerns. Comment: Dear Reviewer VUDN, Thank you very much for your valuable feedback. We sincerely hope that our following response can properly address your concerns. 1. >*Softmax-based approaches should also be considered as generative methods since they directly model the probability of a target item being the next item. In that way, the distinction between softmax and diffusion is really marginal.* We do agree that softmax directly models the probability of a target item being the next item. Meanwhile, we would like to kindly emphasize that **softmax and generative models are different due to their adherence to distinct paradigms (discriminative and generative, respectively)**. Specifically, generative models (GANs, VAEs, diffusion models, etc.) directly model the underlying data generation distribution by learning the map from the Gaussian distribution to the underlying distribution. **In contrast, softmax is commonly employed in discriminative models, since it models the probability distribution over a set of discrete classes, with limited presence in the literature on generative models**. Besides, softmax is limited to the discrete candidate set, whereas diffusion can generate samples beyond the candidate set. Therefore, we would respectfully emphasize that softmax and diffusion models are quite distinct. 2. >*The authors argued that softmax-based approaches explicitly use negative samples, while Diff4Rec does not. In fact, both softmax and diffusion implicitly use negative sampling.
If a diffusion model does not implicitly use negative sampling, the model will simply collapse.* We agree that softmax implicitly uses negative sampling (we apologize for the potential confusion: it should be 'implicitly' instead of 'explicitly' for how softmax uses unobserved interactions in the table of Point 1 in the rebuttal). Meanwhile, we would kindly emphasize that **negative sampling is not used in Diff4Rec, either explicitly or implicitly**. The primary objective of Diff4Rec is: $L_{t-1} = \mathbb{E}_{e_n^0, \epsilon}\left[\frac{\bar\alpha_{t-1}}{2\tilde{\beta}_t} \|e_n^0 - f_\theta(\sqrt{\bar\alpha_t} e_n^{0} + \sqrt{1-\bar\alpha_t}\epsilon, c_{n-1}, t)\|^2\right] + C,$ where $e_n^0$ is sampled from **only observed interactions**. We have also provided the code and training algorithm, and we hope these materials help address your concern. **Even if negative sampling is not used (implicitly or explicitly) in Diff4Rec, it does not collapse, which is one of the contributions of this work**. The reason can be attributed to the fundamental paradigm shift: the paradigm of Diff4Rec is learning-to-generate through diffusion, instead of traditional learning-to-classify. Specifically, Diff4Rec distinctly models the underlying generation distribution of observed interactions with the power of diffusion, which does not require negative samples. However, previous sequential recommenders adhere to the learning-to-classify paradigm and distinguish positive from negative samples, which strictly requires negative sampling. **We do understand that recommendation has long been recognized as a discriminative task requiring negative sampling. Yet our work does show that sequential recommendation can be reshaped as a generative task with diffusion, discarding negative sampling.** We sincerely hope that this initial exploration of Diff4Rec can enable the recommendation task to embrace the benefits offered by the rapid development of diffusion models.
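A minimal sketch of one such training step (illustrative shapes and noise schedule; the $\bar\alpha_{t-1}/2\tilde{\beta}_t$ weight is dropped for simplicity): only an observed next-item embedding $e_n^0$ and Gaussian noise enter the loss, and no negative item appears anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)       # illustrative linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)

def training_step(e0, cond, f_theta):
    """One diffusion training step on an observed next-item embedding e0.
    We corrupt e0 with Gaussian noise at a random step t and regress
    f_theta toward the clean embedding, guided by the sequence context cond."""
    t = rng.integers(T)                                  # sample a step (cf. Alg. 1)
    eps = rng.standard_normal(e0.shape)
    e_t = np.sqrt(alpha_bar[t]) * e0 + np.sqrt(1 - alpha_bar[t]) * eps
    pred = f_theta(e_t, cond, t)                         # predicts the clean e0
    return np.mean((e0 - pred) ** 2)                     # unweighted MSE loss

# trivial f_theta that ignores its inputs, just to exercise the step
loss = training_step(np.ones(8), cond=np.zeros(8), f_theta=lambda x, c, t: np.zeros(8))
```

The sketch makes the point concrete: the objective touches only the observed embedding, its noised version, and the conditioning sequence, so there is no place where a negative sample could enter.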
3. >*A negative size of 1 is too small for the model to perform well. In my experience, on the YooChoose dataset, a negative sample size of 49-99 would achieve much better results. Therefore the results reported in the paper might not be a fair comparison against baseline methods.* We recognize the significance of a fair experimental setting, particularly the number of negative samples. In the literature on recommendation, **a fair experimental setting regarding the number of negatives could be that the number of negatives is kept the same across all models, addressing any potential bias arising from varying numbers of negatives.** Given that Diff4Rec discards negative samples, and the number of negatives of Diff4Rec can be seen as 0, we believe it is justifiable for the other baselines to set their number of negatives to 1. Meanwhile, we acknowledge the concern that more negative samples in classification-based baselines can improve performance, and we conducted experiments to search the number of negatives in [50, 60, 70, 80, 90, 100] following your advice. The results are as follows:

| | YooChoose HR@20(%) | YooChoose NDCG@20(%) | KuaiRec HR@20(%) | KuaiRec NDCG@20(%) | Zhihu HR@20(%) | Zhihu NDCG@20(%) |
|:--:|:--:|:--:|:--:|:--:|:--:|:--:|
| SASRec | 4.18 | 1.79 | 4.12 | 1.96 | 1.87 | 0.69 |
| CL4SRec | 4.64 | 1.93 | 4.31 | 2.11 | 2.10 | 0.75 |
| Diff4Rec | 4.78 | 2.23 | 5.26 | 4.11 | 2.26 | 0.79 |

Note that CL4SRec constructs many negative samples with augmentation; therefore, more negatives result in limited improvement. Best, Authors
Summary: The paper presents a new approach, Diff4Rec, a guided diffusion model for sequential recommendation. The paper can be divided into three main parts: 1. Problem formulation and method: it describes sequential recommendation as oracle item generation and then explains how diffusion is applied. 2. Experiments: describes the datasets used and the various experiments done. Moreover, code is also provided. 3. Results: presents the results from the experiments. The results are much superior to those of previous methods. Strengths: The paper presents a new idea in the learning-to-generate paradigm, which aims to use the diffusion process. The results are supported with an exhaustive set of experiments and validated on multiple datasets. Weaknesses: 1. The paper doesn't show the impact of negatives on the recommendations. 2. Would it be possible to present A/B test results? I suspect the recommendations would over-hinge on the user's history when recommending over a longer period of time. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Noob q: As per my understanding, during inference we get an embedding for the nth item as an oracle item. How do we recover what item to recommend based on the predicted embedding? 2. Do we have any results from an A/B test? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: A/B testing on a real system would have helped to validate the proposed idea. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > 1. *The paper doesn't show the impact of negatives on the recommendations.* Thank you for raising this point. We feel a little confused about the "impact of negatives", and try to analyze it from two distinct perspectives: - **Negative Societal Impacts**: we would like to discuss the negative social impact of recommendation. Firstly, one major concern about recommender system is the potential of privacy disclosure and information leakage, and it is not a risk in our work since the datasets are all anonymized by the provider. Secondly, recommender system may bring issues such as Information cocoons and echo chambers, which are also significant research topics beyond the scope of our work. - **Impact of Negative Sampling**: we would like to discuss more about negative sampling in recommender system. As described in Line 44-52 and Figure 1, the learn-to-classify based recommenders are demanding of negative samples to discriminate between positive samples and negative ones when learning the decision boundary. Without negative sampling, the item embedding of learn-to-classify based recommenders would be pathologically distributed as shown in Figure 3 of the main text and Figure 6 of the Supplementary Material. Diff4Rec, by contrast, explores the underlying distribution of observed interactions with diffusion model, and does not require negative samples in the learning process. We eagerly anticipate a deeper discussion on these matters in the subsequent stages! --- > 2. *How do we recover what item to recommend based on the predicted embedding?* Thank you for bringing up this point! As mentioned in Lines 253-256, once we obtain the embedding of the oracle item, our next step is to identify the top K-nearest items from the candidate set for top-K recommendation, based on their similarity (i.e., inner product between the embeddings of the oracle and candidate). 
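A minimal sketch of this retrieval step (illustrative names; embeddings are treated as plain vectors, and the data is made up):

```python
import numpy as np

def retrieve_top_k(oracle_emb, item_embs, k):
    """Score candidate items by inner product with the oracle embedding
    and return the indices of the top-K nearest items."""
    scores = item_embs @ oracle_emb   # one similarity score per candidate
    return np.argsort(-scores)[:k]    # indices of the k highest scores

# toy example: 4 candidate items with 3-d embeddings
items = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.9, 0.1, 0.3],
                  [0.0, 0.0, 1.0]])
oracle = np.array([1.0, 0.0, 0.2])
top2 = retrieve_top_k(oracle, items, k=2)  # items 0 and 2 score highest
```

In practice a large candidate set would use an approximate nearest-neighbor index rather than this brute-force scoring, but the inner-product ranking is the same.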
We acknowledge the significance of this step in achieving the recommendation task, and to provide a comprehensive understanding, we will detail this step in the Method section of the revision. We're grateful for your astute feedback, which has undeniably enriched our presentation. --- > 3. *Would it be possible to present the A/B test results? I suspect the recommendations would over-rely on the user's history when recommending over a longer period of time.* Thank you for bringing this point to us. We recognize the potential of A/B testing to validate the efficacy of recommendation models, though we currently have no access to an online A/B testing platform. Moreover, we notice that the main concern is that the recommendations would over-rely on the user's history when recommending over a longer period of time. To address this, we further simulate A/B testing with the assistance of ChatGPT, inspired by its remarkable generalization and simulation ability, since it encodes a wide range of human behavior from its training data [a]. Such ChatGPT-simulated A/B testing is less prone to reliance on user history than conventional evaluation on fixed test data. We list the steps: - Data Split: We sample 100 real users from the MovieLens-100k dataset for evaluation, reserving the rest as training data; - Recommender Training: We train Diff4Rec and CL4SRec (the best baseline model under top-K evaluation) on the training data; - User Simulation: For each user being evaluated, we convert his/her history into a textual prompt to profile the user, and feed it into ChatGPT to simulate the user. Then, three movie lists derived from Diff4Rec, CL4SRec, and Random are presented to ChatGPT, which selects the list the simulated user would prefer. We conduct the simulation five times per user with different ChatGPT accounts, and the list with the most votes has its *Success Score* incremented by one.
- Simulated A/B Testing: We compare the *Success Score* among Diff4Rec, CL4SRec, and Random, based on the selections of the ChatGPT-simulated users. The results are shown below (note that two lists may receive even votes (3 out of 100), which we denote inside the brackets):

Random | CL4SRec | Diff4Rec
:--:|:--:|:--:
12 | 33 (3) | 52 (3)

Clearly, under the evaluation of ChatGPT-simulated users, Diff4Rec performs better than CL4SRec. Since this evaluation is not based on pre-collected test data, Diff4Rec is less likely to over-rely on users' history. [a] Park, J. S., O'Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442. We're grateful for your insightful suggestions, which have enriched our evaluations and inspired us for future directions. --- Rebuttal 2: Title: We are looking forward to your further comments. Comment: Dear Reviewer RbjC, Thank you again for your valuable feedback on our submission, particularly your suggestions to 1) **highlight the inference of recommendation results**, and 2) **demonstrate via A/B testing that the recommendations do not over-rely on the users' history**. We have also tried our best to address your concern about **the impact of negatives on the recommendations, from the perspectives of negative societal impacts and the impact of negative sampling**. These insightful suggestions strengthen the quality of our paper. The deadline of the discussion stage is approaching, and we are looking forward to your further feedback. We sincerely hope that these improvements will be taken into consideration. If we have properly addressed your concerns, we would be grateful if you could kindly re-evaluate our paper. If you have additional concerns, please let us know; we remain open and would be more than happy to actively discuss them with you. Best, Authors
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful and positive feedback! We are encouraged that they find Diff4Rec introduces a transformative methodology in sequential recommendation (Reviewer $\color{Blue}\text{b6yV}$), achieves much superior performance compared to state-of-the-art baseline alternatives (Reviewer $\color{Goldenrod}\text{RbjC}$ and Reviewer $\color{Red}\text{VUDN}$), potentially broadens the capabilities of recommendation systems (Reviewer $\color{Blue}\text{b6yV}$), and has the potential to inspire more research in this direction (Reviewer $\color{Red}\text{VUDN}$). We would also like to express our gratitude to the reviewers for highlighting that Diff4Rec presents an innovative paradigm shift in sequential recommendation systems (Reviewer $\color{Blue}\text{b6yV}$), effectively mirrors human behavior (Reviewer $\color{Blue}\text{b6yV}$), and is technically sound (Reviewer $\color{Red}\text{VUDN}$). After carefully analyzing the reviewers' comments, we found them very insightful. We respond to the comments point by point in the rebuttal and provide a PDF containing a comparison with more baseline models and a demonstration of the efficiency of Diff4Rec. We hope that our response addresses the concerns raised by the reviewers. Once again, we sincerely thank the reviewers for their valuable feedback and insightful suggestions, which undoubtedly contribute to enhancing the quality of our work. We eagerly anticipate the ensuing discussions in the next phase! Pdf: /pdf/3b20e615954f65df0a56302faf9ed9e69b7edf26.pdf
NeurIPS_2023_submissions_huggingface
2023
Gradient Informed Proximal Policy Optimization
Accept (poster)
Summary: This paper studied the combined use of both the analytical policy gradient and the likelihood ratio policy gradient for training policy networks, based on the PPO algorithm. To make the combination feasible, a new alpha-policy is introduced and its approximation technique has been successfully developed. Besides some theoretical studies, empirical results further show the potential usefulness of the newly proposed algorithm in solving some benchmark problems. Strengths: The theoretical study in this paper regarding the analytical policy gradient and the alpha-policy sounds interesting and novel. The experiment results show that the new algorithm can be very useful on some benchmark problems. Weaknesses: While it is interesting to mix analytical policy gradients with learned policy gradients to enhance the reliability and performance of the policy network training algorithms, this idea is clearly not restricted to on-policy algorithms such as PPO. Although the authors made it clear that PPO is one of the most popularly used on-policy algorithms, it remains questionable why this paper only studies the effectiveness of using combined policy gradients in PPO. The possibility and potential limitations of using the proposed policy gradient combination technique on other algorithms, particularly off-policy algorithms, may need to be further justified and investigated. This paper requires prior knowledge of the environment dynamics that must be differentiable (or partially differentiable) in nature. Many real-world reinforcement learning problems may not satisfy this requirement. Hence, the practical usefulness of the new algorithm remains a bit questionable. The authors are highly recommended to clearly evaluate and justify the practical value of the new algorithm. Furthermore, if the environment model is known in advance, it is possible to conduct effective model-based policy training without using analytical policy gradients. 
It remains unclear to me what the advantages of using analytic policy gradient techniques would be, compared to planning and other model-based reinforcement learning techniques. It is also desirable if the performance strength of using the analytic policy gradients, compared to other model-based reinforcement learning methods, can be experimentally evaluated and reported in this paper. The authors stated several times in the paper that the analytical policy gradients would be highly biased when they are incomplete. I don't understand what this means and why it is true. Incompleteness and bias present two separate dimensions regarding the quality of analytical policy gradients. Why should they be strongly correlated? Some mathematical claims require more clarity. For example, Lemma 4.4 mentioned a particular advantage function A-hat. However, the actual definition of this function is not presented, making it hard to accurately understand this lemma and its applicability. Some critical details were missing regarding the new algorithm design in the main text. For example, the new algorithm requires you to adjust alpha from time to time. However, it is not clear how alpha is actually adjusted. The formula for adjusting alpha is not presented and clearly justified in the main text. Meanwhile, some claims in the algorithm design also need strong justifications. For example, it is stated on page 6 that by constraining the determinant of a matrix to be near 1, we can guarantee stable policy updates. However, the validity of this statement is not proved or properly explained. Moreover, if all eigenvalues are far greater than 1, alpha would be set to be very close to 0. In this case, would the analytical gradient become useless? Similarly, it is not clear why the ratio of the two determinants gives us the difference between the two policies. 
According to the experiment results presented in the main text, it seems that the new algorithm is only useful when the problem being solved has many poor local optima (hence the algorithm needs to inject some uncertainty or conduct more exploration) or when the complete analytical policy gradient cannot be obtained. On problems of differentiable physics simulation, using the analytical policy gradients alone appears to achieve the best results. In view of this, the real practical value of the new algorithm may need to be further investigated on more benchmark problems. Technical Quality: 3 good Clarity: 3 good Questions for Authors: What are the possibilities and potential limitations of using the proposed policy gradient combination technique in other algorithms, particularly off-policy algorithms? What is the practical impact of the assumption that prior knowledge of the environment dynamics is available and that the dynamics are differentiable? What are the theoretical and practical advantages of the new algorithm design, compared to other model-based reinforcement learning methods? Incompleteness and bias present two separate dimensions regarding the quality of analytical policy gradients. Why should they be strongly correlated? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: I do not have any concerns regarding this question. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments, we really appreciate them. **Q1. What are the possibilities and potential limitations of using the proposed policy gradient combination technique on other algorithms, particularly off-policy algorithms?** A1. We'd like to emphasize that **on-policy algorithms operate on a very different theoretical foundation from off-policy learning algorithms**. While on-policy algorithms estimate policy gradients that are only valid for the current policy, off-policy algorithms do not use them. Therefore, incorporating analytical gradients into off-policy algorithms requires a shift in theoretical perspective from ours. Moreover, there are already several works to which we can refer when it comes to off-policy algorithms [1]. While the idea of using analytical gradients to enhance off-policy learning algorithms is compelling, it is beyond the scope of this work and deserves a separate investigation. **Q2. What is the practical impact of the assumption regarding the prior knowledge of the environment dynamics that must be differentiable?** A2. Please refer to A4 of global rebuttal at the top. **Q3. What are the theoretical and practical advantages of the new algorithm design, compared to other model-based reinforcement learning methods?** A3. To the best of our knowledge, model-based RL methods approximate world dynamics from sample trajectories, which require another optimization procedure. Because of this, [2] reported that the SE-MBPO, which is one of the model-based RL methods augmented with analytical gradients, requires a lot more wall-clock time for training (8 hour) than the RP method (15 min) like ours in Ant environment. Therefore, our method is expected to be more efficient than other model-based RL methods. **Q4. Incompleteness and bias present two separate dimensions regarding the quality of analytical policy gradients. Why should they be strongly correlated?** A4. 
About the relationship between the two terms, we believe that "bias" is a broader term that includes "incompleteness". "Bias" in this context means that we cannot estimate the correct gradient even with an infinite number of samples. In the traffic environments that we discussed, **gradients convey only partial information about world dynamics**. To be specific, they convey information along the lanes, but not across lanes. Please refer to Appendix 7.5.2. This is why we called the gradients "incomplete", and under these circumstances we cannot estimate correct gradients even with infinite samples, which makes them biased. **Q5. Lemma 4.4 mentioned a particular advantage function $\hat{A}$, but its actual definition is not presented.** A5. The definition of $\hat{A}$ is presented in Appendix 7.1.5. Please refer to it for details. **Q6. It is not clear how alpha is actually adjusted.** A6. There is full pseudocode for our algorithm in Appendix 7.3.3, which illustrates in detail how alpha is adjusted during training. Please let us know if it is still unclear. **Q7. The claim that we can guarantee stable policy updates by constraining the determinant of a matrix to be near 1 is not proved or properly explained.** A7. Thank you for the comment, but we'd like to note that this is **one of the necessary conditions that we have to satisfy to make policy updates more stable**, and we already offered three reasons for this claim. To briefly reiterate them: we proved that the determinant should stay near 1 so that the $\alpha$ policy does not become an invalid policy. Also, as the determinant is related to the variance of the RP gradients (Figure 2), we have to keep it close to 1 to constrain the variance of the analytical gradients used in policy updates. Finally, it helps us keep the $\alpha$ policy near the current policy - please refer to A9. **Q8. If all eigenvalues are far greater than 1, $\alpha$ would be set to be very close to 0.
In this case, would the analytical gradient become useless?** A8. Yes, this is our intention. If all eigenvalues are far greater than 1, it means that the variance of the analytical gradients is very large, and thus it is very likely that policy updates based on them would be unstable. In such cases, we decrease alpha to nearly 0 and rely mostly on PPO. This is in the same spirit as other work that mixes LR and RP gradients for policy updates - if the RP gradients have far higher variance than the LR gradients, they can be neglected [3]. **Q9. It is not clear why the ratio of the two determinants gives us the difference between the two policies.** A9. When $\alpha = 0$, it is trivial that the ratio equals 1. Therefore, we can say that the deviation of the ratio from 1 is an (indirect) sufficient condition for detecting differences between the two policies, rather than a necessary condition. **Q10. It seems that the new algorithm is only useful when the problem being solved has many poor local optima or when the complete analytical policy gradient cannot be obtained.** A10. In fact, our research is motivated by the observation that many environments we encounter have these very properties, so this work is widely applicable. For instance, in rigid body physics simulation, there are a lot of discontinuities incurred by collisions, which require special techniques to make them differentiable. Instead of doing that, we design a policy learning algorithm that can leverage (possibly biased) analytical gradients, because it is quite time-consuming to make every environment fully differentiable. We expect our algorithm to become a good baseline algorithm to test with only partial differentiability, which is much easier to obtain than full differentiability. [1] Qiao, Yi-Ling, et al. "Efficient differentiable simulation of articulated bodies." [2] Xu, Jie, et al. "Accelerated policy learning with parallel differentiable simulation." [3] Parmas, Paavo, et al.
"PIPPS: Flexible model-based policy search robust to the curse of chaos." --- Rebuttal 2: Title: Thank the authors for their response Comment: I would like to thank the authors for their response, which has addressed some of my concerns. I will increase my rating a bit. In the meantime, I am not fully convinced by the practical applicability of the algorithm. Regarding the discussion on model-based RL, my point is when the model is known (hence no cost is involved in learning the model), similar to the condition of the newly proposed algorithm, how well the new algorithm can outperform model-based RL in terms of both theoretical and empirical advantages. Additionally, I still don't understand the difference between completeness and bias based on the authors' explanation. I think a more thorough mathematical definition is required. This is also the case regarding the explanation on the ratio of the two determinants and why this can measure the difference between two policies. Finally, I believe the main text of the paper should be self-contained, without relying on any appendices. Hence, the definition of A hat should be in the main text. --- Rebuttal Comment 2.1: Comment: Thank you for raising the rating and additional comments, we appreciate them! Here are our answers to additional questions. **Q1. When the model is known, hence no cost is involved in learning the model, how well the new algorithm can outperform model-based RL in terms of both theoretical and empirical advantages.** A1. According to [1], we believe that your suggested model-based RL with known models falls into the same category as models like AlphaZero [2], which leverage planning methods such as Monte Carlo Tree Search (MCTS). To briefly recap how MCTS works: since the world dynamics model (which are the Go rules for AlphaZero) is fully known, it "simulates" future steps of gameplay using the estimations of its neural network as a prior. 
That is, it rolls out future gameplay steps based on rough suggestions from the neural networks, builds a search tree based on them, and selects the action that yields the best reward. In this approach, the model is fully known and is thus used as a tool to simulate various scenarios. However, we’d like to note that this strategy operates in a very different setting than ours - it works in a **discrete** action space, while our method is for a **continuous** one. It is unclear whether planning methods like MCTS can be applied to our case, and thus it is difficult to compare them directly to our method. In spite of that, we’d like to point out that even if we could apply MCTS-like planning methods to our physics simulation problems, they could require much more computational cost for training. This is because such a method has to expand its search tree by taking a large number of timesteps, and then take a SINGLE action based on it. For instance, AlphaZero runs 1600 simulation steps to build a search tree for determining a SINGLE action. Considering that most of the computational cost comes from the expensive simulation steps in physics simulation, it is prohibitive to run such a number of simulation steps to determine the action at a single time step, and thus to train the network. Therefore, we’d like to underline the difficulty of directly comparing model-based RL with a fully known model to our method. At the same time, planning methods like MCTS used for such model-based RL methods could require much more computational cost than ours in the worst-case scenario. Exploring how analytical gradients can be efficiently applied to model-based RL methods would be another very exciting research direction. **Q2. I still don't understand the difference between completeness and bias based on the authors' explanation. I think a more thorough mathematical definition is required.
This is also the case regarding the explanation on the ratio of the two determinants and why this can measure the difference between two policies.** A2. First, after some reconsideration, we realize that "completeness" is not an appropriate term to use in the broader, general context. We will remove the term in the revision and explain our method using bias only. However, this does not change our main claim that our method can leverage possibly biased analytical gradients better than the other baseline methods. Thank you again for pointing it out and helping improve the exposition of the paper. About the second issue, we’d like to reemphasize that the ratio of the two determinants does not “measure” the difference between the two policies. Instead, our view is that the two policies are more likely to differ from one another when the ratio deviates more from 1, at least locally, because the ratio is 1 when the two policies are the same. In the revision, empirical results will be included to support this claim. **Q3. I believe the main text of the paper should be self-contained, without relying on any appendices. Hence, the definition of A hat should be in the main text.** A3. We agree with your point, and will move the definition of $\hat{A}$ to the main paper. Thank you for your suggestions. [1] Moerland, Thomas M., et al. "Model-based reinforcement learning: A survey." [2] Silver, David, et al. "Mastering the game of go without human knowledge."
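As a self-contained aside on the LR/RP distinction that recurs throughout this exchange (our toy illustration, not the authors' algorithm): for a one-dimensional Gaussian policy $a \sim N(\mu, \sigma^2)$ and a smooth objective $E[f(a)]$, both the likelihood-ratio estimator $f(a)\,\partial_\mu \log \pi(a)$ and the reparameterization estimator $\partial_\mu f(\mu + \sigma\epsilon)$ are unbiased for $\partial_\mu E[f(a)]$, but their variances can differ sharply, which is the trade-off the variance-based criteria above are meant to navigate.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 0.5, 1.0, 100_000

f  = lambda a: -(a - 2.0) ** 2    # smooth objective; d/dmu E[f(a)] = -2*(mu - 2) = 3.0
df = lambda a: -2.0 * (a - 2.0)

eps = rng.standard_normal(n)
a = mu + sigma * eps              # reparameterized samples

rp = df(a)                        # RP estimator: differentiate through a = mu + sigma*eps
lr = f(a) * (a - mu) / sigma**2   # LR estimator: f(a) * d log pi(a) / d mu

print("true gradient:", -2.0 * (mu - 2.0))
print("RP mean/var  :", rp.mean(), rp.var())
print("LR mean/var  :", lr.mean(), lr.var())
```

Both sample means land near the true gradient of 3.0, while for this smooth objective the LR variance (about 51 analytically) is roughly an order of magnitude larger than the RP variance (exactly 4); for discontinuous or chaotic objectives the comparison can reverse.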
Summary: This paper introduces a novel policy learning method that adopts analytical gradients into the PPO framework without the necessity of estimating LR gradients. The authors introduce an adaptive α-policy that allows us to dynamically adjust the influence of analytical gradients by evaluating the variance and bias of these gradients. The experiments show that the proposed method, GI-PPO, strikes a balance between analytical gradient-based policy updates and PPO-based updates, yielding solid results compared to baseline algorithms in various environments. Strengths: This paper introduces original ideas about how to incorporate analytical gradients into the PPO framework, taking their variance and bias into account. As a result, a locally superior policy, adaptive $\alpha$-policy is introduced and fits into PPO framework seamlessly, by dynamically adjusting the influence of analytical gradients. This paper is of solid technical quality. It provides theoretical derivations and analysis to demonstrate how $\alpha$-policy fits into PPO framework seamlessly, the relationship between reparameterization RP gradient and $\alpha$-policy, and theoretical guidance to adjust $\alpha$ based on variance and bias. The experiments are extensive in general, demonstrating the solid performance compared to baseline in different environments. Last, the paper is clearly written in general. Weaknesses: In Section 5.1 (page 7, line 229), Figure [8] is missing in main paper, although we may find the figure in Appendix. In Fig. 3, the plot is too small to distinguish the relative performance for different algorithms. In Section 5.2 (page 8), there is typo line 259, it should be Figure [4] instead of Figure [5] for Differential Physics Simulation. In Fig. 4(b,c), GI-PPO is no better than RP for tasks Ant and Hopper in term of returning rewards. To strengthen the impact of the work, it might be better to add more challenging tasks, for example, Humanoid in Mujoco environment. 
Technical Quality: 3 good Clarity: 3 good Questions for Authors: This paper is well written in general. However, for the readers to better understand the algorithm, can we move Algorithm 1 GI-PPO from Appendix and add into main paper? To utilize analytic gradients, it seems GI-PPO needs more extra computations from back-propagation. Can the author provide the computational cost comparison to different baseline algorithms? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no code provided to reproduce the results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and support. Here we address your key concerns. **Q1. In Section 5.1, Figure 8 is missing in main paper, although we may find the figure in Appendix. Also, the plots in Figure 3 are too small to distinguish the relative performance of different algorithms.** A1. Thank you for the comment. We will rearrange the figures to better align with the manuscript text, and enlarge them as well. The current appearance is mainly due to the page limit and latex placement. **Q2. There is a typo in line 259, it should be Figure 4 instead of Figure 5 for Differential Physics Simulation.** A2. Thank you for finding the typo! We will fix it in the revised version. **Q3. In Figure 4, GI-PPO is no better than RP for tasks Ant and Hopper in term of returning rewards.** A3. Please refer to A2 of global rebuttal at the top. **Q4. To strengthen the impact of the work, it might be better to add more challenging tasks, for example, Humanoid in Mujoco environment.** A4. Thank you for the suggestions to further improve the exposition of this paper. We will add those results in the revised paper. **Q5. Can we move Algorithm 1 GI-PPO from Appendix and add into main paper?** A5. We can definitely do that, but for now the page limit makes it hardly possible. If we can get additional pages, we will move it into main paper from the supplementary document. **Q6. Can the author provide the computational cost comparison to different baseline algorithms?** A6. Thank you for the question, we can add computational cost comparison in supplementary material for sure. Briefly, GI-PPO only needs additional computational cost for PPO-based update than the RP method. Considering that the most of the computational cost comes from simulation steps and backpropagation for RP, such additional cost is not much. --- Rebuttal Comment 1.1: Title: Thanks to the authors for the rebuttal Comment: I’ve read comments from all the other reviewers. 
Thank you for your rebuttal, and I appreciate that my concerns have been addressed. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the valuable comments, and we are happy that we could address the reviewer's concerns!
Summary: This paper integrates analytical gradients with PPO. The authors introduce an $\alpha$-policy to control the bias and variance during the policy update. Results on some differentiable environments show the effectiveness of the proposed method. Strengths: This paper studies how to incorporate analytical gradients into an existing RL algorithm, which is interesting. Weaknesses: The main weakness of the paper is that the writing is somewhat poor. The presentation lacks motivation or the necessary intuitive explanations. There are some imprecise statements and typos (see below). These flaws make the paper hard to follow. Major concerns: - This paper claims that it's a combination of analytical gradients and PPO. However, I didn't see where you use PPO (eq. 4 is not PPO!). - Some statements lack motivation. I don't understand why the algorithm is composed of three parts (Line 182). Could you give more explanation? And why should the adjustment of $\alpha$ be designed in that way? Besides, it would be good if there were more explanation of Def 4.1. - The proposed method does not perform well on the physical simulation tasks. In Figure 4, the proposed method performs worse than the RP method on two high-dimensional tasks: Ant and Hopper. - In most practical tasks, it is hard to get the analytical gradient. This limits the potential of the proposed method. The paper only does experiments in simulation tasks. Could you give more examples of where we can get analytical gradients in practice? There are some minor technical issues in the paper: - Line 106: The symbol $A$ clashes with the action space $A$ (in Line 97). - Line 97: You assume the probability to be within $[0,1]$. This means that the action space is discrete. However, in Line 101, you assume the action space is continuous. Some statements in the paper are wrong: - "PPO relies on the observation that we can evaluate a policy πθ with our current policy πθ¯". This is not right.
Because the $\rho_{\pi_\theta}(s)$ is not available. - Line 137: "the difference between πθ¯ and πθ must be small enough to get an accurate estimate." In fact, in the theory of TRPO, they introduce a lower bound with KL divergence penalty term. They don't care whether that's an accurate estimate. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: - What do you mean by inherent chaotic nature (Line 25)? Could you please give some examples? - I'm confused about how is eq. 2 used in your method. Please also see my comments in the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 3 good Limitations: Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We address some possible confusions below. **Q1. How is PPO used in this paper? Eq. 4 is not PPO, and the description of PPO, especially that related to the availability of $\rho_{\pi_{\theta}}(s)$, is wrong.** A1. Though the exact formulation is different, the **main objective function of both TRPO and PPO is the same as Eq. 4**. The exact formulation differs between these two algorithms because they opt for different ways to constrain the policy optimization space. To be specific, in Eq. 14 of [1] (TRPO), we can observe that the objective function is the MC estimate of our Eq. 4, and that a KL divergence constraint is imposed to limit the optimization space. For PPO, as seen in Eq. 7 of [2], we note that it also optimizes an MC estimate of our Eq. 4, but clips the estimates to constrain the optimization space. Therefore, we'd like to suggest that Eq. 4 describes the objective function of PPO well, and is thus a good theoretical basis for showing how we can incorporate analytical gradients into the PPO framework. Additionally, we note that $\rho_{\pi_{\theta}}(s)$ is not available, given experience collected with $\pi_{\bar{\theta}}$. This is **exactly the reason why we use $\rho_{\pi_{\bar{\theta}}}(s)$ instead of $\rho_{\pi_{\theta}}(s)$, as shown in our Eq. 4 and in Eq. 3 of [1]**. If we replace $\rho_{\pi_{\theta}}(s)$ with $\rho_{\pi_{\bar{\theta}}}(s)$, the objective becomes the first-order approximation of the expected return of $\pi_{\theta}$ [3]. In that sense, we stated that "we can evaluate a policy $\pi_{\theta}$ with our current policy $\pi_{\bar{\theta}}$", even if we do not have access to $\rho_{\pi_{\theta}}(s)$. **Q2. Why is the algorithm composed of three parts as shown in Line 182?** A2. We’d like to reemphasize that we use the **variance and bias of the analytical gradients** to control their role in the policy update.
These have already been used in several other works [4], [5] as valid criteria to evaluate the validity of (analytical) RP gradients against LR gradients. However, **since we do not have LR gradients to compare against, we need another measure to evaluate the variance and bias of analytical gradients**. To that end, we suggested the $\alpha$ policy. Based on our propositions, **we can evaluate the variance and bias of analytical gradients if we have the $\alpha$ policy**. Therefore, we first need to update our policy toward the $\alpha$ policy - which is the first step of our algorithm. After approximating the $\alpha$ policy, we evaluate the variance and bias of analytical gradients using the criteria suggested in the paper, which is the second step. In this step, if our criteria are not met, we then decrease $\alpha$ so that analytical gradients do not contribute much to the policy update in the next iteration. Finally, we conduct policy updates based on PPO - in fact, as we have shown in the paper, this step is not necessary if the analytical gradients are correct, because the $\alpha$ policy is locally better than the current policy from the PPO viewpoint (Proposition 4.2). However, if the analytical gradients are flawed and thus $\alpha$ goes to 0, our policy update will be very slow without PPO. Therefore, we can **consider PPO as a safeguard, which guarantees a certain amount of policy update even when the analytical gradients are not reliable**. **Q3. Why is $\alpha$ adjusted in the way suggested in the paper?** A3. Please refer to A3 of the global rebuttal at the top. **Q4. It would be good if there was more explanation of Def 4.1.** A4. Def 4.1 introduces the concept of the $\alpha$ policy, which can be considered as a **new policy that selects a slightly better action than the old policy with the same probability (intuitively)**. We will add more explanation, and move Figure 8 in Appendix 7.6.1 to the main paper if possible to help understanding. **Q5.
The proposed method does not perform well on the two physical simulation tasks.** A5. Please refer to A2 of the global rebuttal at the top. **Q6. How can we apply this method to practical tasks?** A6. Please refer to A4 of the global rebuttal at the top. **Q7. Minor technical issues in Lines 97 and 106** A7. Thank you for pointing them out. We will correct them in the revised version. **Q8. Line 137: Incorrect description of $\pi_{\bar{\theta}}$ and $\pi_{\theta}$.** A8. TRPO [1] suggests the lower bound based on the observation that they can use $L_{\pi_{old}}(\pi_{new})$ to approximate the desired $\eta(\pi_{new})$ up to first order. Furthermore, in their practical algorithm, they constrain the optimization space by upper-bounding the KL divergence, instead of using it as a penalty term. Likewise, **their logic is grounded in the fact that we can locally approximate the expected return of the new policy using the old one**, and in this sense, we'd like to respectfully suggest that our description was indeed correct. **Q9. What is the inherent chaotic nature?** A9. The term in this context mainly describes the exploding variance of the analytical gradients. We'd like to introduce related papers that discuss this topic in detail: [4], [6]. **Q10. How Eq. 2 is used is confusing.** A10. Eq. 2 shows the basic gradients that our differentiable environments provide us. With those gradients, we can compute gradients of the advantage value w.r.t. the actions ($\frac{dA}{da}$). If we use GAE for estimating the advantage, we can compute the gradients using the formulas provided in Appendix 7.2.1. In the formulas, note that we use $\frac{d \delta}{da}$, which can be computed using the basic gradients in Eq. 2, as $\delta$ is defined using $r$ and $s$. [1] Schulman, John, et al. "Trust region policy optimization." [2] Schulman, John, et al. "Proximal policy optimization algorithms." [3] Kakade, Sham M. "A natural policy gradient." [4] Parmas, Paavo, et al.
"PIPPS: Flexible model-based policy search robust to the curse of chaos." [5] Suh, Hyung Ju, et al. "Do differentiable simulators give better policy gradients?." [6] Metz, Luke, et al. "Gradients are not all you need." --- Rebuttal 2: Comment: Dear Reviewer 8kSu, Did our rebuttal address the issues you raised? Do you still have any more questions? Thank you. Best, The Authors
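The A10 answer above describes computing $\frac{dA}{da}$ from $\frac{d\delta}{da}$ when GAE is used. A minimal sketch under our own assumptions (1-D state/action, scalar Jacobians supplied by a hypothetical differentiable environment, and the simplification of keeping only each $\delta_t$'s direct dependence on the first action) — not the paper's actual Appendix 7.2.1 formulas:

```python
# Hedged sketch: with Eq. 2 giving dr/da and ds'/da, the TD residual
#   delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
# has
#   d(delta_t)/da_t = dr_t/da_t + gamma * V'(s_{t+1}) * ds_{t+1}/da_t,
# and the GAE advantage A_t = sum_l (gamma * lam)^l * delta_{t+l}
# inherits a discounted sum of these per-step residual gradients.

def ddelta_da(dr_da, ds_next_da, dV_ds_next, gamma):
    """Scalar gradient of one TD residual w.r.t. the action taken at t."""
    return dr_da + gamma * dV_ds_next * ds_next_da

def dgae_da(delta_grads, gamma, lam):
    """Discounted sum of residual gradients (illustrative simplification:
    only direct dependencies on the first action are kept)."""
    return sum((gamma * lam) ** l * g for l, g in enumerate(delta_grads))
```

The sketch only illustrates how the environment-provided Jacobians enter the advantage gradient; in a vector-valued environment the scalar products become Jacobian-vector products.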
Summary: In this manuscript, the author(s) combined the PPO framework with analytical gradients and proposed a policy learning method to learn better policies quickly. Through empirical experiments, the author(s) demonstrated that the proposed method was superior to baseline methods in several scenarios. Strengths: ### Strong points: 1) Utilizing the $\alpha$-policy to deal with the influence of analytical policy gradients when training; 2) Adopting criteria to detect/calculate variance and bias; 3) An algorithm was designed to handle the strength of analytical gradients during updating; 4) Empirical results look good. By the way, in the Supplementary Materials, the author(s) provided an mp4 video file, where the animation on the scenario of the traffic problem looks fantastic. Thanks for providing that. Weaknesses: ### Weak points: Although the work of this manuscript looks interesting, some points in this manuscript are not so clear to me; I hope the author(s) can explain/detail them. Thanks. 1) Some descriptions of the empirical experiments are not so clear. 2) The code for the proposed method is missing from this manuscript. 3) If possible, some theoretical results in this manuscript should be empirically verified. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: In detail, I have the following comments/questions: 1) Some of the descriptions or empirical experiment details in this manuscript seem unclear. For example, (1) Definition 5 was mentioned several times in the manuscript, such as in Lines 179, 188, and even in the Supplementary Materials, Line 419, but where is Definition 5? (2) Proofs are mainly for a Proposition, Lemma, Theorem, etc. Line 419 of the Supplementary Materials mentions "Proof of Definition 5", which I do not understand. (3) Could you please explain/clarify the fluctuation of the blue curve (i.e., GI-PPO) in Figure 3 (c)? What caused this phenomenon? (4) The ordering (and citation) of the figures in this manuscript is messy.
For instance, in Line 229, Figure 8 is in the Supplementary Materials, right? Figure 1 does not seem to be cited until Line 245? 2) If possible, could some theoretical results, for example Proposition 4.5, be verified by empirical experiments? 3) Some other tiny issues/typos: (1) The font size used for the coordinates in the figures is too small. (2) The format of mathematical formulas in this manuscript is inconsistent: some end with punctuation, and some don't. (3) There are many issues in the References. For instance, sometimes the conference name is the full name plus the abbreviation, and sometimes only the full name; the source of some cited references is missing; and so on. I hope the author(s) will check carefully and correct them. In addition, I did not scrutinize the proofs step-by-step, but I think the proofs should be OK. Thank the author(s) for submitting this interesting work. Except for the several possible issues/questions mentioned above, I think this is a basically good manuscript. I look forward to the response from the author(s). If the author feels any of my comments are inappropriate, please feel free to point them out :-) Thanks. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes. As the author(s) revealed, there might be several limitations, such as (1) how to approximate the $\alpha$-policy efficiently; (2) adjusting PPO's clipping range; (3) computational efficiency. I hope the author(s) can further investigate these limitations and solve these potential problems in the near future. Good luck!
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and support; we really appreciate them. Here we address some of your concerns. **Q1. Where is Definition 5, and what does it mean by proof of Definition 5?** A1. Sorry for the confusion; "Definition 5" refers to Definition 4.1. We will fix it in the revised version, and separate it into a definition and a proposition. Right now, Definition 4.1 defines what the $\alpha$ policy is and suggests some mathematical properties of it (which should be a proposition), which we prove in the "Proof of Definition 5". We will clearly mention that the proof in Line 419 is for the proposition part of Definition 4.1. **Q2. Could you please explain / clarify the fluctuation of the blue curve (i.e., GI-PPO) in Figure 3 (c)?** A2. Thank you for the question. The 1-d Ackley's function dealt with in Figure 3 (c) has its optimum at $x = 0$. Therefore, the best policy would give us a mean of 0 and a variance of 0 for this problem, as shown in Figure 1 (a). In Figure 1 (a), we can observe that the optimum is formed at mean = 0, var = 0, **but the optimum is very unstable** - that is, small perturbations in either the mean or the variance lead to a large degradation in the results. The fluctuation of the blue curve in Figure 3 (c) stems from this nature of the target function. However, since we use the Adam optimizer, which adjusts its step size based on the optimization trajectory, we can observe that the fluctuation disappears in the end. **Q3. The ordering (and citation) of the figures in this manuscript is messy.** A3. Thank you for pointing this out. Due to the page limit, the figures do not always appear in the desired order. We will rearrange the figures' placement within the LaTeX file to better align them with the page breaks in the manuscript, as suggested. **Q4. If possible, could some theoretical results, for example Proposition 4.5, be verified by empirical experiments?** A4.
Thank you for the suggestion, we can definitely verify it with additional experiments. We will incorporate those results in the final version. **Q5. There are miscellaneous typos / issues in the writing.** A5. Thank you for the detailed feedback! We will further polish our exposition to incorporate your suggestions -- very much appreciated! --- Rebuttal Comment 1.1: Title: Thank the author(s) for the responses Comment: Thank the author(s) for the responses. I read the reviewers' comments and the rebuttals from the author(s). The author(s) answered my questions and cleared up my doubts. I increase my score and vote to accept this manuscript, but on the condition that the author(s) will carefully make the changes and correct the issues in the final version, as claimed in responses/rebuttals. Thanks and good luck! --- Reply to Comment 1.1.1: Comment: Thank the reviewer, we really appreciate the support! We’ll carefully revise the paper, as the reviewers suggested. Thanks.
Rebuttal 1: Rebuttal: We really appreciate all of the valuable comments from the reviewers! Here we address the major concerns, with a 1-page supplementary document. Our code will be released publicly when the paper is published. **Q1. This paper does not fully utilize the analytical gradients, because it does not consider dynamically changing the PPO clip limit.** A1. This is a limitation of the current implementation. To dynamically update PPO's influence, another metric is required to **compare PPO against analytical gradients**. Even if we could evaluate the quality of analytical gradients in the PPO framework, we would need an entirely new perspective to do it in the opposite direction. This new exploration is an exciting future research direction worthy of a full investigation on its own, as many other issues (pros & cons) need to be considered. **Q2. The experimental results showed that GI-PPO outperforms PPO, but does not achieve state-of-the-art performance in some environments, especially in physics environments.** A2. Our method clearly outperforms PPO in EVERY environment, but it could not achieve the best performance in some of them, especially in physics simulations (Ant, Hopper). We'll briefly discuss the reasons for this from several aspects. (1) We'd like to reiterate that the **RP method is proven to be much more effective at solving those physics problems than other methods, especially PPO**. This is because we have already applied the variance reduction techniques suggested in [1] to these problems. However, because of the reason described in A1, even when it is more advantageous to use analytical gradients over PPO, the performance of our algorithm might be constrained by PPO. To validate this claim, we conducted additional experiments for the Ant environment, which are displayed in **Figure 1** of the attached document. In the figure, **it's evident that GI-PPO performs slightly better than the RP method when $\delta_{oorr} = 1.0$**.
Observing how $\alpha$ and the out-of-range ratio evolve over time in the plots, we see that $\alpha$ is clearly constrained by $\delta_{oorr}$, and this has a detrimental effect on the learning process. If analytical gradients are indeed more beneficial, we could prioritize them over PPO. However, systematizing this approach is currently beyond the scope of our paper. It is one of the key future research directions currently under consideration. (2) A more effective strategy to adjust $\alpha$ during training is required. **At present, our approach strikes a balance between computational cost and performance**. (Please see A3 for details.) We discovered that our algorithm's performance in the physics environments, especially in the Hopper environment, is hindered by this issue. We aim to devise a more systematic strategy and will incorporate it, if feasible, in the paper's revision. **Q3. There is not much explanation of the algorithm design for adjusting $\alpha$. There could be more theoretical analysis of the algorithm, and even better ways to adjust $\alpha$ during training.** A3. There are indeed multiple strategies available to adjust the $\alpha$ parameter during the training process. In the current paper, our chosen strategy adjusts $\alpha$ contingent on the optimization results of each iteration. This is achieved by multiplying or dividing it by a predetermined (hyperparameter) constant. The intent is to employ the modified $\alpha$ in the subsequent iteration to better satisfy our proposed conditions. Therefore, there is no guarantee that the conditions are met in every iteration, though this works well in practice. As an alternative, we experimented with fine-tuning the $\alpha$ parameter during every iteration to more closely adhere to the constraints. However, our findings suggest that this approach entails a significantly higher computational expense relative to our primary approach -- thus a tighter bound, but less efficient in practice.
We recognize that superior methods likely exist beyond these techniques. We continue to investigate possible improved methods grounded in the same principles suggested in this paper, with more theoretical analysis *and* better empirical results. In the meantime, our proposed method already shows considerably better performance than most known baselines across diverse environments consistently, and we have also demonstrated the validity of our approach. **Q4. There could be limitations in applying this method to practical use cases, because there are no analytical gradients in the real world.** A4. We'd like to introduce two key insights on how our algorithm is beneficial for such cases. First, we'd like to note that our algorithm can leverage analytical gradients even when they are biased, as shown in Section 5.3, **while the original RP method cannot**. This means that we can **roughly estimate the gradients even if they are not very accurate (e.g., using the finite difference method) and leverage them for real-world problems**. Second, nowadays there are many approaches that try to distill policies learned in simulation with access to excess information, including analytical gradients, into student models with access to less information, which are deployed in the real world [2]. More generally, policies trained robustly in simulation with analytical gradients can also be used as expert policies for other applications, such as model compression [3] or domain generalization [4]. Likewise, there are many benefits we can take advantage of with our approach, even when it comes to real-world applications. [1] Xu, Jie, et al. "Accelerated policy learning with parallel differentiable simulation." [2] D. Chen et al., "Learning by Cheating". [3] A. Ashok et al., "N2N learning: Network to Network Compression via Policy Gradient Reinforcement Learning". [4] D. Li et al., "Learning to generalize: Meta-learning for domain generalization".
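The multiplicative $\alpha$ schedule described in A3 can be sketched as follows. The function name, the constant `c`, and the clipping bounds are illustrative assumptions, not values from the paper:

```python
def update_alpha(alpha, criteria_met, c=2.0, alpha_min=1e-4, alpha_max=1.0):
    """Hedged sketch of A3: grow alpha by a fixed constant when the
    variance/bias/out-of-range criteria are satisfied (analytical gradients
    look reliable), shrink it otherwise, and clip to a fixed range."""
    alpha = alpha * c if criteria_met else alpha / c
    return min(max(alpha, alpha_min), alpha_max)
```

As the rebuttal notes, this gives no per-iteration guarantee that the conditions hold, but it avoids the cost of fine-tuning $\alpha$ inside every iteration.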
Pdf: /pdf/de612212e8ee0d7fc5d1e66bfc1cceddd7a2e3fc.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper introduces a policy gradient algorithm based on the Proximal Policy Optimization (PPO) algorithm. This new algorithm utilizes the analytical gradients from differentiable environments and achieves competitive performance in various scenarios including function optimization, classic control environments, and traffic control environments. To integrate the analytical gradient into the PPO framework, they use the reparameterization trick and introduce the concept of the $\alpha$-policy, which is a locally superior policy. When $\alpha$ is sufficiently small, the difference between the original policy and this $\alpha$-policy is small enough to satisfy the difference constraint in the PPO algorithm. Moreover, they propose metrics considering variance, bias, and out-of-range ratio to dynamically adjust $\alpha$ during training. Strengths: 1. The idea of utilizing analytical gradients in physical control environments is powerful and can dramatically increase the efficiency of policy gradient algorithms when analytical gradients are available. This paper advances this idea by integrating it with the PPO algorithm and achieves competitive performance in various environments. 2. This paper proposes the concept of the $\alpha$-policy and connects it with the reparameterization trick. The $\alpha$-policy is an adjustable policy such that when $\alpha$ is small enough, the difference between the $\alpha$-policy and the original policy can be small enough to fit into the PPO algorithm. 3. This paper proposes metrics considering variance, bias, and out-of-range ratio to dynamically adjust $\alpha$ during training. Weaknesses: 1. When trying to fit the analytical gradients into the PPO algorithm, this paper didn't consider dynamically changing the PPO clip limit. Thus, the analytical gradients are not fully utilized. 2. More theoretical analysis could be done, and better ways to adjust $\alpha$ during training could be proposed. 3.
The experiments showed improved performance compared to PPO, but the method does not achieve state-of-the-art performance in some environments. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: 1. Line 419 "Proof of Definition 5" does not match the actual definition index. The authors should separate Definition 4.1 into a definition and a proposition. 2. Can the authors integrate the variance reduction techniques in [Xu et al., 2022] to achieve better performance? If not, what changes can be made to accommodate this? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: I did not identify any potential negative societal impact from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and support; we really appreciate them. Here we address some of your concerns. **Q1. The analytical gradients are not fully utilized, because this paper didn't consider dynamically changing the PPO clip limit.** A1. Please refer to A1 of the global rebuttal at the top. **Q2. More theoretical analysis and better ways to adjust $\alpha$ during training could be done and proposed.** A2. Please refer to A3 of the global rebuttal at the top. **Q3. The experiments showed improved performance compared to PPO, but the method does not achieve state-of-the-art performance in some environments.** A3. Please refer to A2 of the global rebuttal at the top. **Q4. Line 419 "Proof of Definition 5" does not match the actual definition index. The author should separate Definition 4.1 into a definition and a proposition.** A4. Sorry for the confusion, and thank you for pointing it out. "Definition 5" refers to Definition 4.1. We will fix it in the revised version and separate it into a definition and a proposition, as you suggested, for better understanding. **Q5. Can the author integrate the variance reduction techniques in [Xu et al., 2022] to achieve better performance? If not, what changes can be made to accommodate this?** A5. Thank you for the suggestion! In fact, **we already applied the techniques in [Xu et al., 2022] to the analytical gradients that we used for GI-PPO in this work**. To be specific, we used the truncated time window suggested in that paper to estimate the analytical gradients and the resulting $\alpha$ policy. We will add additional experimental results showing how the length of the truncated time window affects the training results in the supplementary material. --- Rebuttal Comment 1.1: Title: Thank the authors for the responses and supplementary materials Comment: I would like to thank the authors for the responses and supplementary materials. They have addressed my questions.
I encourage the authors to integrate the discussions in the supplementary materials into the main paper because they provide more direct insight on $\alpha$. --- Reply to Comment 1.1.1: Comment: Thanks for the support. We will revise the paper per the reviewers' suggestions, thank you.
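The truncated time window mentioned in A5 (from [Xu et al., 2022]) can be illustrated with a scalar sketch. The function and the scalar stand-ins for the dynamics Jacobians are our own illustrative assumptions, not the cited paper's implementation:

```python
def truncated_rp_grad(dyn_jacobians, dr_ds, window):
    """Hedged sketch: accumulate d(return)/d(s_0) through at most `window`
    steps. dyn_jacobians[t] stands in for ds_{t+1}/ds_t and dr_ds[t] for
    dr_t/ds_t; truncating the chain of Jacobians curbs the exploding
    (chaotic) behaviour of long-horizon reparameterized gradients."""
    grad, chain = 0.0, 1.0          # `chain` tracks ds_t/ds_0
    for t in range(min(window, len(dr_ds))):
        grad += chain * dr_ds[t]
        if t < len(dyn_jacobians):
            chain *= dyn_jacobians[t]
    return grad
```

With an expanding Jacobian (here 2.0 per step), shortening the window caps how much of the product of Jacobians enters the gradient, which is the variance-reduction effect the rebuttal refers to.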
Directional diffusion models for graph representation learning
Accept (poster)
Summary: The authors present the directional diffusion model (DDM) to learn graph / node representations. Compared to vanilla diffusion-based representation learning techniques, DDM adds a batch-based mean and variance, as well as preserving the direction of the diffusion. The authors then demonstrate that their model performs better than other models on representation learning by experimenting on various datasets and comparing with other methods. Strengths: - It is a nice finding by the authors that, compared to diffusion models for generative modeling, diffusion models for representation learning do not require sampling from the final distribution, so we don't need to know the limiting behavior of the diffusion process. - The experiments support the authors' claim on the performance of the model. Weaknesses: - The authors should include a brief introduction on how diffusion models work for representation learning Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - A lot of citations in the paper, including some baselines like InfoGraph and GraphMAE, come from arXiv. Are these methods reliable? - Directional diffusion preserves direction, so in the recovery process, won't the representation fail to learn the direction of the entries? If it doesn't learn the directions, do directions not matter? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > A lot of citations in the paper, including some baselines like InfoGraph and GraphMAE, come from arXiv. Are these methods reliable? > We apologize for this issue; in Section 5, the articles we compared against and cited are all published in top-tier conferences, as follows: GIN - ICLR 2019, DiffPool - NIPS 2018, GCC - KDD 2020, JOAO - ICML 2021, GraphMAE - KDD 2022, MVGRL - ICML 2022, GPT-GNN - KDD 2022, DGI - ICLR 2019, GRACE - ICML 2020, BGRL - ICLR 2022, InfoGCL - NIPS 2021, CCA-SSG - NIPS 2021. We will correct the problem of citing the arXiv versions of GIN, GraphMAE, GRACE, and BGRL in the subsequent version. > Directional diffusion preserves direction, so in the recovery process, won't the representation fail to learn the direction of the entries? If it doesn't learn the directions, do directions not matter? > In this paper, the $\mu$ and $\sigma$ in Section 3 are calculated per mini-batch during the model training stage. The core idea of this design is that, through random-batch learning over multiple epochs, the model can both approximate the global statistics (law of large numbers) and retain the signal-to-noise ratio in each learning step. > The authors should include an introduction on how diffusion models work for representation learning. > Thanks for your suggestions. Actually, Section 1, the introduction of our article, has discussed some notable recent works on how diffusion models are used in representation learning. We would like to provide a thorough summary in this regard. Several methods based on diffusion models [1,2,3,4] have been proposed for effective representation learning. Notably, [4] have demonstrated the value of intermediate activations obtained from denoising networks, as they contain valuable semantic information that can be utilized for tasks like image representation and semantic segmentation. Their findings emphasize the effectiveness of diffusion models in learning meaningful visual representations.
More recently, [5] has revealed that the restoration of data corrupted with specific noise levels provides an appropriate pretext task for the model to learn intricate visual concepts, and prioritizing such noise levels over other levels during training improves the performance of diffusion models. [1] Zhang Z, Zhao Z, Lin Z. Unsupervised representation learning from pre-trained diffusion probabilistic models[J]. Advances in Neural Information Processing Systems, 2022, 35: 22117-22130. [2] Preechakul K, Chatthee N, Wizadwongsa S, et al. Diffusion autoencoders: Toward a meaningful and decodable representation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10619-10629. [3] Abstreiter K, Mittal S, Bauer S, et al. Diffusion-based representation learning[J]. arXiv preprint arXiv:2105.14257, 2021. [4] Baranchuk D, Rubachev I, Voynov A, et al. Label-efficient semantic segmentation with diffusion models[J]. arXiv preprint arXiv:2112.03126, 2021. [5] Choi J, Lee J, Shin C, et al. Perception prioritized training of diffusion models[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11472-11481. --- Rebuttal Comment 1.1: Comment: Thank you very much for your questions. If you have any remaining concerns regarding our article, we cordially welcome you to present them, and we shall exert our utmost efforts to address them comprehensively.
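The batch-dependent, direction-preserving noise described in the answer above can be sketched as follows. This is our reading of the rebuttal (mini-batch $\mu$/$\sigma$ scaling plus sign alignment with the input features), with illustrative names, not the authors' exact code:

```python
import numpy as np

def directional_noise(x, rng):
    """Hedged sketch: rescale Gaussian noise by the mini-batch mean/std and
    align its sign with the input features, so the forward process keeps
    the data's anisotropic directions instead of washing them out."""
    mu = x.mean(axis=0, keepdims=True)      # mini-batch feature means
    sigma = x.std(axis=0, keepdims=True)    # mini-batch feature stds
    eps = mu + sigma * rng.standard_normal(x.shape)
    return np.sign(x) * np.abs(eps)         # same sign pattern as x
```

Because batches are drawn randomly over many epochs, the per-batch statistics approach the global ones, which is the law-of-large-numbers argument in the rebuttal.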
Summary: This paper offers a good study on anisotropic and directional structure learning for diffusion models. The authors first conduct a statistical analysis emphasizing the importance of anisotropic structure in graph datasets and demonstrate the disadvantage of the vanilla diffusion model through the signal-to-noise ratio. They then propose their method, a new pipeline that preserves the characteristics of the data in the diffusion process. Its idea is simple: applying directional noise to the data preserves more information than vanilla diffusion. Its model is simple: leveraging 4-layer GNNs and a one-layer MLP, it achieves improvements on several benchmarks. The idea is elegant. Strengths: Good idea! The work not only offers inspiration to the graph learning community, but it also contributes to diffusion models. The two constraints added in the work are elegant and proper. Weaknesses: The writing is not strong, especially the formula part; there are too many mistakes there. The performance isn't fully satisfying, but I believe the underlying problem doesn't come from the pipeline; it comes from the simple model. There are no convergence proofs for their learning architecture. Note that the diffusion model, which stems from a quasi-static process, possesses good convergence properties. I hope to see a proof of the effectiveness of the constraints added to the original process. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. Do you have proof of the effectiveness of your idea? 2. Can you offer more experimental results? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: 1. They haven't provided limitations for their work. 2.
Though the idea is inspirational, the writing is not good, and there are too many writing mistakes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Do you have proof of the effectiveness of your idea? > Indeed, in this paper we didn't provide a theoretical proof. However, we believe that the problem studied in this paper and our contributions are considerably interesting and extensible. Just as mentioned by Reviewer 5, for the graph learning community our research introduces a novel paradigm for graph representation learning and provides comprehensive experimental evidence of the effectiveness of this paradigm. For the diffusion model community, our study also demonstrates the impact of specific data distributions on diffusion models, and we propose a data-dependent forward process. Thus, as a pioneering work studying the impact of data and focusing on data-dependent cases, it is difficult to formulate our method within existing theoretical frameworks. We are actively working on the theoretical proof and believe that it will be addressed in the future. > Can you offer more experimental results? > Extensive results against baselines (2 tasks and 12 datasets) demonstrate that this method provides a novel solution for graph representation learning. Apart from the comparison experiments, we added an ablation experiment on the model structure:

| | GCN | MLP | Number of GCN layers |
| --- | --- | --- | --- |
| GraphMAE | ✓ | ✓ | 2-4 |
| MVGRL | ✓ | ✓ | 4 |
| Ours (DDM) | ✓ | ✓ | 4 |

| | Citeseer | PubMed | MUTAG |
| --- | --- | --- | --- |
| wo-head | 73.1±0.2 | 80.2±0.2 | 87.8±1.4 |
| wo-encoder | 73.4±0.1 | 81.4±0.3 | 88.9±1.3 |
| wo-skip_connection | 73.5±0.2 | 81.3±0.5 | 86.7±1.1 |
| Baseline | 74.3±0.3 | 81.7±0.8 | 91.5±1.4 |

It can be seen that, with a parameter count and modules similar to the baselines, the diffusion pre-training we introduce (e.g., the U-Net-style structure) plays an important role.
Moreover, as we discussed with Reviewer 2, by quantifying the anisotropy of the data we show that our method achieves better performance on datasets with strong anisotropy. --- Rebuttal Comment 1.1: Title: Response to the authors' rebuttal. Comment: Thanks for offering kind responses to my questions. I hope you can correct the writing mistakes and make the paper more readable. Please offer an elegant theoretical proof as soon as possible; the idea of the paper is innovative and good.
Summary: This paper presents a method named Directional Diffusion Model (DDM) for unsupervised representation learning, targeting applications in graph and node classification. The model's performance is evaluated on various benchmark datasets and compared to both unsupervised and supervised models. The results demonstrate that DDM can effectively learn meaningful graph- or node-level representations. Furthermore, the paper offers some exploration of how different types of noise impact the learning process. Strengths: 1. The authors present empirical evaluations of DDM on multiple benchmark datasets, which validate the effectiveness of the proposed method to some extent. 2. The paper includes an investigation into the impact of different types of noise, which adds depth to the analysis and understanding of the proposed method. Weaknesses: 1. The SVD visualizations could be more insightful. It's observed that the 2D projections computed from graph datasets appear biased and are predominantly on the right plane. However, the methodology behind these visualizations remains unclear. Are they based on singular values? Further explanation would be beneficial. 2. The Signal-to-Noise Ratio (SNR) plots in the supplementary material show comparable results under white noise and directional noise. This implies that the DDM may not offer significant advantages on these datasets. The authors should provide some clarification. 3. The paper is primarily empirical and lacks theoretical foundations, which might limit its generalizability. 4. The algorithm snippet doesn't seem to provide sufficient implementation details, which might impede attempts to reproduce the study. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Please clarify the issues raised in the weaknesses section. 2. How does DDM compare with widely-known baselines in terms of performance, and are there any specific scenarios where DDM particularly excels or falls short? 3. 
Regarding the means and deviations at line 170: are these calculated relative to the node features at time 0, or do they consider other time steps? In its current form, I believe the paper does not meet the rigorous quality and contribution standards expected at a top-tier conference like NeurIPS. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors did not explicitly state any limitations of their study. However, the identified weaknesses could be viewed as potential limitations. These include the lack of theoretical foundations and potential issues in reproducibility due to insufficient implementation details. Additionally, the paper could benefit from a clear discussion of the limitations of DDM in handling specific scenarios or types of data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The SVD visualizations could be more insightful. However, the methodology behind these visualizations remains unclear.

> Thanks for your suggestions. Using SVD to analyze anisotropy was first proposed by [1] and has been widely used in NLP [1,2,3,4,5]. Specifically, we compute the SVD of the graphs' node-feature matrix and take the first and second singular vectors as the x-axis and y-axis. The purpose of these visualizations, as discussed in Section 1, line 44, is to gain an intuitive understanding of the differences in distribution between graph data and image data. Prompted by this, we further examine the disadvantage of the vanilla diffusion model through the signal-to-noise ratio on such data and propose our new pipeline, which preserves the characteristics of the data in the diffusion process. From a singular-value perspective, we can draw similar conclusions about anisotropy; this method was also introduced in [1].

> The Signal-to-Noise Ratio (SNR) plots in the supplementary material show comparable results under white noise and directional noise.....

> We follow the approach of [1] and measure the anisotropy of the data as the ratio of the sum of the first two singular values to the sum of the first five singular values. The results are as follows:

| | Top@2/Top@5 |
| --- | --- |
| ogbn-arxiv | 0.460 |
| Citeseer | 0.532 |
| PubMed | 0.533 |
| Computer | 0.722 |
| Cora | 0.419 |

We can observe that on datasets with low anisotropy (ogbn-arxiv, Cora), our approach performs comparably to previous methods and falls short of supervised models.
On datasets with strong anisotropy (PubMed, Citeseer, Computer), our model has made significant strides, even surpassing some supervised training outcomes, as we discussed in Sections 5.1 and 5.2. By utilizing directional-noise diffusion, our method acts as a pseudo-infinite-step data augmentation technique that generates numerous samples while preserving the data structure.

> The paper is primarily empirical and lacks theoretical foundations, which might limit its generalizability.

> Indeed, in this paper we did not provide a theoretical proof, but we believe that the problem studied in this paper and our contributions are considerably interesting and extensible. As mentioned by Reviewer 5, for the graph learning community our research introduces a novel paradigm for graph representation learning and provides comprehensive experimental evidence of the effectiveness of this paradigm. For the diffusion model community, our study also demonstrates the impact of specific data distributions on diffusion models, and we propose a data-dependent forward process. Thus, as a pioneering work studying the impact of data and focusing on data-dependent cases, it is difficult to formulate our method within existing theoretical frameworks. We are actively working on a theoretical proof and believe it will be addressed in the future.

> The algorithm snippet doesn't seem to provide sufficient implementation details, which might impede attempts to reproduce the study.

> We apologize for not providing the definitions of $X_i$ and $A$ in the algorithm in the Appendix: as defined in the paper, $X_i$ is the feature matrix of graph $\mathcal{G}_i$, while $A$ is the adjacency matrix of $\mathcal{G}$. At every gradient step, the parameters $\theta$ are updated.

> How does DDM compare with widely-known baselines in terms of performance, and are there any specific scenarios where DDM particularly excels or falls short?
> Actually, we conducted extensive experiments in Sections 5.1 and 5.2 to compare our model with the baselines, and the results are shown in Tables 1 and 2: in graph classification tasks, we surpass the performance of all baselines on the IMDB-B, IMDB-M, COLLAB, PROTEINS, and MUTAG datasets; in node classification tasks, we outperform the baselines on the Citeseer, PubMed, Amazon-Computer, and Amazon-Photo datasets. The cases where DDM excels or falls short are also discussed in Section 5; in short, they are attributed to the isotropy of the data.

> Regarding the means and deviations at line 170: are these calculated relative to the node features at time 0, or do they consider other time steps?

> As mentioned in line 171, $\mu$ and $\sigma$ are the means and deviations of the raw feature vectors at time 0, and they can alleviate the impact of the rapid decline of SNR as discussed in section

[1] Gao J, He D, Tan X, et al. Representation degeneration problem in training natural language generation models. arXiv preprint arXiv:1907.12009, 2019.
[2] Qiu R, Huang Z, Yin H, et al. Contrastive learning for representation degeneration problem in sequential recommendation. In Proceedings of the Fifteenth ACM International Conference on Web Search and Data Mining, 2022: 813-823.
[3] Zou D, Wei W, Mao X L, et al. Multi-level cross-view contrastive learning for knowledge-aware recommender system. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022: 1358-1368.
[4] Wang L, Huang J, Huang K, et al. Improving neural language generation with spectrum control. In International Conference on Learning Representations, 2019.
[5] Yu S, Song J, Kim H, et al. Rare Tokens Degenerate All Tokens: Improving Neural Text Generation via Adaptive Gradient Gating for Rare Token Embeddings. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers),
2022: 29-45. --- Rebuttal Comment 1.1: Title: Thank you for the responses. Comment: The authors' responses have addressed some of my concerns, prompting me to increase the score to 5. --- Reply to Comment 1.1.1: Comment: Thank you very much for your active engagement. If you have any remaining concerns regarding our article, we cordially welcome you to raise them, and we shall do our utmost to address them comprehensively.
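The SVD-based anisotropy ratio described in the rebuttal above (the Top@2/Top@5 statistic) could be computed along the following lines. This is a minimal sketch on synthetic data, not the authors' code: the function name, the use of plain NumPy, and the uncentered feature matrix are our own assumptions.

```python
import numpy as np

def anisotropy_ratio(X: np.ndarray, top: int = 2, total: int = 5) -> float:
    """Top@2/Top@5: sum of the largest `top` singular values of the
    node-feature matrix X (n_nodes x n_features) over the sum of the
    largest `total` ones. Values nearer 1 indicate stronger anisotropy."""
    s = np.linalg.svd(X, compute_uv=False)  # singular values, descending
    return float(s[:top].sum() / s[:total].sum())

rng = np.random.default_rng(0)
# Anisotropic features: variance concentrated along two directions.
aniso = rng.normal(size=(1000, 16)) * np.array([10.0, 8.0] + [0.5] * 14)
# Isotropic features: equal variance in every direction.
iso = rng.normal(size=(1000, 16))
```

For reference, the rebuttal reports ratios of roughly 0.42-0.46 on the datasets it calls weakly anisotropic (Cora, ogbn-arxiv) and 0.53-0.72 on the strongly anisotropic ones.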
Summary: This study proposes adding directional noise for learning anisotropic graphs using diffusion models. The study adds new perspectives on exploring the anisotropic structures in graph data. The numerical results are promising and support the authors' ideas. Strengths: I find this an interesting paper - the rationale is convincing, and the authors performed extensive experiments to support the idea. The discussions on the noise and the ablation studies provide further insight into the usefulness of adding directional noise. The paper provides a valuable perspective on understanding diffusion models for graph learning. Weaknesses: The authors mainly prove the utility of the proposed approach through experiments; there is a lack of theoretical proof. Technical Quality: 3 good Clarity: 3 good Questions for Authors: The authors have performed extensive experiments, but it would add more evidence if they could add some theoretical proof. For Figure 3, does each dot represent a graph? Are they from simulated datasets? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have discussed several future directions that can be used to improve the current model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > For Figure 3, does each dot represent a graph? Are they from simulated datasets?

> Thanks for your question. Figure 3 shows a set of simulated data: each sample point comes from an anisotropic normal distribution (covariance not equal to the identity matrix). As discussed in Section 4, the simulation is designed to show that when the data distribution is anisotropic, our proposed directional noise better preserves the structural information of the data during the forward process.

> The authors have performed extensive experiments, but it would add more evidence if the authors can add some theoretical proof.

> Thanks for your advice! Indeed, in this paper we did not provide a theoretical proof. However, we believe that the problem studied in this paper and our contributions are considerably interesting and extensible. As mentioned by Reviewer 5, for the graph learning community our research introduces a novel paradigm for graph representation learning and provides comprehensive experimental evidence of the effectiveness of this paradigm. For the diffusion model community, our study also demonstrates the impact of specific data distributions on diffusion models, and we propose a data-dependent forward process. Thus, as a pioneering work studying the impact of data and focusing on data-dependent cases, it is difficult to formulate our method within existing theoretical frameworks. We are actively working on the theoretical proof and believe that it will be addressed in the future. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses. I don't have any other questions.
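The simulation described above (anisotropic Gaussian clusters, directional vs. white noise in the forward process) can be reproduced in spirit with a small sketch. The exact noising rule below is our illustrative reading of a data-dependent, sign-aligned noise rescaled by batch statistics, not the paper's equations (1)-(3); cluster sizes, covariances, and the `noised`/`separation` helpers are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two anisotropic Gaussian clusters (covariance far from the identity),
# in the spirit of the simulated data in Figure 3.
cov = [[4.0, 3.5], [3.5, 4.0]]
c0 = rng.multivariate_normal([3.0, 3.0], cov, size=500)
c1 = rng.multivariate_normal([-3.0, -3.0], cov, size=500)
X0 = np.vstack([c0, c1])

def noised(X: np.ndarray, alpha_bar: float, directional: bool) -> np.ndarray:
    """Forward-diffusion marginal x_t = sqrt(ab)*x_0 + sqrt(1-ab)*eps."""
    eps = rng.normal(size=X.shape)
    if directional:
        # Illustrative data-dependent directional noise: rescale by batch
        # statistics and align the sign with the clean features.
        eps = np.sign(X) * np.abs(X.mean(axis=0) + X.std(axis=0) * eps)
    return np.sqrt(alpha_bar) * X + np.sqrt(1.0 - alpha_bar) * eps

def separation(X: np.ndarray) -> float:
    """Distance between the two cluster means after noising."""
    return float(np.linalg.norm(X[:500].mean(axis=0) - X[500:].mean(axis=0)))

# At a late diffusion step (low signal level), the sign-aligned noise
# keeps the two clusters far apart, while white noise collapses them.
sep_dir = separation(noised(X0, 0.05, directional=True))
sep_white = separation(noised(X0, 0.05, directional=False))
```

This mirrors the rebuttal's point: with white noise the cluster structure is destroyed once the signal level is small, whereas sign-aligned noise preserves it.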
NeurIPS_2023_submissions_huggingface
2023
Summary: This work proposes a class of diffusion models to improve the accuracy of graph representation learning. The model incorporates both data-dependent and anisotropic noise in the forward noising process, by scaling its magnitude and direction based on each coordinate of the data. This structured noise maintains the signal present in the data over longer time windows than standard isotropic white noise does during the forward noising process. The authors show that this model improves upon state-of-the-art methods on a large collection of benchmark datasets. Moreover, they perform an ablation study to understand the effect of the two proposed modifications to the noise process. Strengths: - The authors visually explain and empirically demonstrate the effect of using non-isotropic noise in the forward diffusion process to improve classification tasks. These noise processes provide clear intuition for why this is preferred to white noise for these tasks. - The authors propose a novel strategy for extracting graph representations based on a time-dependent denoiser that combines graph neural networks and U-Net architectures. - The application of diffusion models to graph representation learning is novel and the new model is shown to yield superior results to existing algorithms. Weaknesses: It would be great for the authors to comment on and compare with other non-isotropic noise processes that have been proposed for diffusion models for sampling and generative modeling. Some examples outside of the graph representation learning context are: * Score-based Denoising Diffusion with Non-Isotropic Gaussian Noise Models, Vikram Voleti, Christopher Pal, Adam Oberman, 2022 * Blurring Diffusion Models, Emiel Hoogeboom, Tim Salimans, 2023 In addition to the changes in the noise process, the authors propose a specific architecture for the denoiser, and a selection of representative features from its hidden layers.
This architecture could be relevant on its own to extract representations of the dataset, without the denoising time components. Do any of the compared methods investigate how this denoiser architecture, without the diffusion model, would perform for representation learning? This might be helpful as an additional ablation study to see the effect of the architecture. The metrics used to evaluate the experiments could be described in more detail: what the values in Tables 1, 2, and 3 report is not clearly stated in the captions or the main text. Mathematical equations may also be helpful to precisely describe the accuracy measurements in Figure 5. The authors comment that word vectors often exhibit greater anisotropy, which yields superior performance in node classification tasks. It would be great if the authors could quantify this to validate the claim that performance improves with more anisotropy. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: An important choice for defining the representations is the user-selected time-steps at which the outputs are extracted from the denoising network. Do the authors have a guideline or recommendation (e.g., based on the empirical studies) for how to choose these time steps? This seems particularly relevant given that the SNR does not degrade monotonically for some datasets (e.g., IMDB in Figure 2). Even if the SNR in standard diffusion models decays quickly with increasing time, how do the results presented here compare to a baseline diffusion model with isotropic noise where the time-steps are chosen "optimally"? For example, what if the time-steps are all chosen near the initial time, $t = 0$, when there is still signal contained in the noisy data? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors show improved results on almost all datasets. It would be great if the authors could comment on what leads to the similar performance on the Ogbn-arxiv dataset. Is it because the data is more isotropic in this case? When do the authors expect the algorithm to under-perform for the evaluated tasks? In addition to the questions above, it would be good to address these minor comments: - Define the acronym for their proposed framework, DDM - Include standard errors for the results in Table 3 - Explain why $f_\theta$ depends on $A$ in equation (4). It is sufficient to highlight that this is the structure of a GNN, which may be helpful for some readers. - Clarify in Figure 4 that the directional noise is only added to $X_0$ and not $A$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > It would be great for the authors to comment and compare with other non-isotropic noise processes

> With full respect, the existing literature you mention differs fundamentally from our approach. As pointed out by Reviewer 6, our paper proposes that the ultimate distribution of the diffusion process isn't essential; what's crucial is the rate of SNR attenuation. Thus, the noise we propose is data-dependent, with its distribution determined by the mini-batch data (Section 4, eqs. 2 and 3). The main objective of this design is to curb the speed of SNR reduction. In contrast, the existing literature still relies on methods that are not data-dependent, using fixed noise distributions.

> This might be helpful as an additional ablation study to see the effect of the architecture.

> We list below the network structures of the baselines that appeared in the comparison experiments. Existing schemes, including contrastive learning, GraphMAE, and our proposed method, use similar basic structures (GAT) and parameter counts, which shows that our proposed diffusion-based unsupervised representation learning plays the dominant role in improving the representations. In addition, we conducted further ablation experiments on our specific designs in the denoiser architecture, including the symmetric skip-connections and the symmetric network structure. The results are as follows, and they support our choice to transplant the U-Net idea as well as the effectiveness of the detailed design.

| | GCN | MLP | Number of GCN layers |
| --- | --- | --- | --- |
| GraphMAE | ✓ | ✓ | 2-4 |
| MVGRL | ✓ | ✓ | 4 |
| Our (DDM) | ✓ | ✓ | 4 |

| | Citeseer | PubMed | MUTAG |
| --- | --- | --- | --- |
| w/o head | 73.1±0.2 | 80.2±0.2 | 87.8±1.4 |
| w/o encoder | 73.4±0.1 | 81.4±0.3 | 88.9±1.3 |
| w/o skip connection | 73.5±0.2 | 81.3±0.5 | 86.7±1.1 |
| Baseline | 74.3±0.3 | 81.7±0.8 | 91.51±1.4 |

> The metrics used to evaluate their experiments can be described in more detail.
> We explain the evaluation metrics in Sections 5.2 and 5.3. For node classification tasks, we report the classification accuracy of the nodes in the validation set, and for graph classification tasks we report the accuracy on the validation set. We will add this to the captions.

> The authors comment that the word vectors often exhibit greater anisotropy, ...... It would be great if the authors could quantify this to validate the claim that the performance improves with more anisotropy. The authors show improved results on almost all datasets. It would be great if the authors could comment on what leads to the similar performance on the Ogbn-arxiv dataset. (Question 1)

> We follow the method in [1] and measure the anisotropy of the data as the ratio of the sum of the first two singular values to the sum of the first five singular values. The results are as follows:

| | Top@2/Top@5 |
| --- | --- |
| ogbn-arxiv | 0.460 |
| Citeseer | 0.532 |
| PubMed | 0.533 |
| Computer | 0.722 |
| Cora | 0.419 |

We can observe that on datasets with low anisotropy (ogbn-arxiv, Cora), our approach performs comparably to previous methods but falls short of supervised models. On datasets with strong anisotropy (PubMed, Citeseer, Computer), our model has made significant strides, even surpassing some supervised training outcomes. This indicates that our approach can better learn the specific structure of such graph data.

> An important choice for defining the representations are the user-selected time-steps where the outputs are extracted from the denoising network. Do the authors have a guideline or recommendation (e.g., based on the empirical studies).....

> Thank you for your inquiry. We are pleased to offer our guidelines in this regard. Generally speaking, the signal-to-noise ratio carries important information in both the training and inference stages.
From Figure 5 in the main text and Figure 2 in the appendix, we can see that the trend of the SNR is almost proportional to the accuracy on the test set. For example, as T approaches 1000, the SNR on Citeseer drops from 1.5 to 0.5 and the accuracy from 0.741 to 0.695. Additionally, considering that our final metric surpasses the performance of each individual time step, we suggest that the selected time steps cover different intervals of the SNR curve to get the best results.

> Include standard errors for the results in Table 3

> We have updated the table with the standard deviations below.

| DataSet | w/o S&R | w/o R | Full |
|---------|------------|------------|------------|
| Citeseer | 34.37±0.5 | 60.77±0.2 | 74.3±0.3 |
| PubMed | 73.07±0.7 | 77.60±0.4 | 81.7±0.8 |
| IMDB-M | 49.80±0.53 | 50.87±0.49 | 52.53±0.31 |
| COLLAB | 80.50±0.36 | 81.04±0.17 | 81.72±0.31 |
| MUTAG | 82.89±1.16 | 87.25±1.12 | 91.51±1.45 |

> Explain why $f_\theta$ depends on $A$ in equation (4). It is sufficient to highlight that this is the structure of a GNN, which may be helpful for some readers.

> In this paper, we parameterize $f_\theta$ as a GCN. We will correct this unclear expression in the subsequent version.

> Clarify in Figure 4 that the directional noise is only added to $X_{0}$ and not $A$.

> Thank you for your review; the noise is indeed only added to $X_0$, and its specific form is given in equations (1), (2), and (3).

[1] Gao J, He D, Tan X, et al. Representation degeneration problem in training natural language generation models. arXiv preprint arXiv:1907.12009, 2019.

--- Rebuttal Comment 1.1: Comment: We thank the authors for their detailed response! The responses have addressed some of my concerns, so I've raised my score. I would still appreciate some comparisons with non-isotropic noise or with choosing different schedules for the noise as baselines for the proposed data-dependent noise (in the absence of theoretical results on the method).
--- Reply to Comment 1.1.1: Comment: Thank you very much for your active engagement. We are happy to provide more comparisons with non-isotropic noise and with different noise schedules here (the same as what we provided to Reviewer 1).

| Noise schedule function | Noise type | Citeseer | PubMed | MUTAG |
|--------------------------|-------------|----------|--------|-------|
| cosine (s=0, e=1, τ=1) | DDM | 0.715 | 0.824 | 0.867 |
| cosine (s=0, e=1, τ=1) | White noise | 0.371 | 0.453 | 0.692 |
| sigmoid (s=-3, e=3, τ=1) | DDM | 0.735 | 0.803 | 0.889 |
| sigmoid (s=-3, e=3, τ=1) | White noise | 0.672 | 0.606 | 0.689 |
| sigmoid (s=0, e=3, τ=1) | DDM | 0.710 | 0.806 | 0.877 |
| sigmoid (s=0, e=3, τ=1) | White noise | 0.581 | 0.434 | 0.691 |

This shows that different schedulers do indeed influence the final effectiveness of the model. However, due to the data-independent nature of white noise, its ultimate performance heavily relies on the hyperparameters of the scheduler. Yet, irrespective of the scheduler employed, our proposed data-dependent anisotropic noise consistently yields superior outcomes. This confirms our observation about anisotropic structures in Section 1. We will add this experiment to the paper in the subsequent version.
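For concreteness, the cosine and sigmoid schedules with (s, e, τ) parameters referenced above can be sketched following the γ-function parameterization of Chen (2023), "On the importance of noise scheduling for diffusion models". This is our own continuous-time sketch, not the code used in the experiments; function names are ours.

```python
import math

def _sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def cosine_gamma(t: float, s: float = 0.0, e: float = 1.0, tau: float = 1.0) -> float:
    """Signal level gamma(t) (i.e. alpha_bar at time t in [0, 1]), cosine schedule."""
    v_s = math.cos(s * math.pi / 2) ** (2 * tau)
    v_e = math.cos(e * math.pi / 2) ** (2 * tau)
    v_t = math.cos((t * (e - s) + s) * math.pi / 2) ** (2 * tau)
    return (v_e - v_t) / (v_e - v_s)

def sigmoid_gamma(t: float, s: float = -3.0, e: float = 3.0, tau: float = 1.0) -> float:
    """Signal level gamma(t) (i.e. alpha_bar at time t in [0, 1]), sigmoid schedule."""
    v_s, v_e = _sigmoid(s / tau), _sigmoid(e / tau)
    v_t = _sigmoid((t * (e - s) + s) / tau)
    return (v_e - v_t) / (v_e - v_s)

# Both schedules decay monotonically from full signal (1) to no signal (0);
# s, e, and tau shift and sharpen the decay, which is what the table varies.
for gamma in (cosine_gamma, sigmoid_gamma):
    vals = [gamma(i / 10) for i in range(11)]
    assert all(a >= b for a, b in zip(vals, vals[1:]))
```

The point of the comparison stands regardless of the exact γ: white noise is sensitive to these scheduler hyperparameters, while the data-dependent noise is not.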
Summary: The authors address the gap in unsupervised graph representation learning by exploring the use of diffusion models. They propose directional diffusion models that incorporate data-dependent, anisotropic, and directional noises in the forward diffusion process to better handle anisotropic structures in graphs. Experiments on publicly available datasets showcase the superiority of their models over state-of-the-art baselines, demonstrating their effectiveness in capturing meaningful graph representations. Overall, the paper presents a compelling approach that contributes to the advancement of unsupervised graph representation learning. Strengths: 1. **Motivation** The introduction is well-motivated, providing a thorough explanation of the challenge and task at hand. The authors go beyond textual descriptions and use simple visualizations on both real and synthetic data to demonstrate their points effectively. This approach enhances the clarity and understanding of the presented research, making it accessible to a wider audience. 2. **Method** The authors present a straightforward yet effective solution for incorporating directional noise into node embeddings. This approach effectively addresses the challenge posed by anisotropic structures in various domains, including graphs and potentially text data. Their proposed method demonstrates promising results in handling directional noise and enhancing the quality of node and graph embeddings. 3. **Architecture** I appreciate the authors' intention to adapt the well-known and effective U-Net architecture from the image domain to the graph domain. The incorporation of skip connections in the U-Net is particularly relevant for denoising tasks. This thoughtful adaptation enhances the model's ability to handle graph-related denoising effectively. 4. **Experiments** The authors conduct a comprehensive comparison with numerous baselines across multiple datasets.
Furthermore, their evaluation settings, which involve 10-fold cross-validation with standard deviation after five runs, are robust and reliable. This rigorous evaluation methodology ensures the validity and statistical significance of their results. Weaknesses: 1. **Missing related work** Existing works at the intersection of graphs and diffusion are missing, contradicting the authors' statement "To the best of our knowledge, there have been no works for diffusion-model-based graph representation learning.". For example: - Niu, Chenhao, et al. "Permutation invariant graph generation via score-based generative modeling." International Conference on Artificial Intelligence and Statistics. PMLR, 2020. - Xu, Minkai, et al. "Geodiff: A geometric diffusion model for molecular conformation generation." International Conference on Learning Representations, 2022. - Vignac, Clement, et al. "Digress: Discrete denoising diffusion for graph generation." International Conference on Learning Representations, 2023. Furthermore, some simple techniques can be applied to create smoother SNR curves over the different diffusion steps. For instance: - Chen, Ting. "On the importance of noise scheduling for diffusion models." arXiv preprint arXiv:2301.10972 (2023). The same problem was presented there in the image domain (on high-resolution images), and the solution was to use different noise schedulers. Why not simply try this trick? 2. **Missing details** There are missing details in the paper that make it hard to fully understand and reproduce the paper's results. For example: - "µ and σ are calculated using graphs within the batch." -- what is done during inference? Is it an EMA over what was seen at training time? Does it handle only batch inference? - The paper does not state how exactly the entire graph-level representation is obtained; is it the sum/mean of all node representations? - Algorithm 2 in the appendix, line 6: what exactly is $A_t$?
Should it be just $A$? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. "µ and σ are calculated using graphs within the batch." -- what is done during inference? Why not just pre-calculate those on the entire train split? 2. "allowing the latent code to contain meaningful compressed knowledge while preventing the decoder from degrading into an identical mapping" -- why would that happen such that a dedicated MLP is required? How is this different from the simple U-Net used in the image domain? 3. Algorithm 2 in the appendix, line 6: what is $A_t$? 4. In some cases, the SNR goes even higher for larger diffusion steps (as in the Computer dataset, for instance); why is that? 5. How is the entire graph-level representation obtained? 6. Referring to Figure 2, shouldn't the SNR be 0 in the last diffusion steps? It doesn't reach zero, as the directional noise keeps the points separated; is that a wanted outcome? If so, why? 7. Why not just apply a different noise scheduler? Instead of linear, cosine for instance. A similar observation was made on high-resolution images and different schedulers were proposed [1]. I know it is still isotropic noise, so at least show results as a baseline/ablation. ___ [1] Chen, Ting. "On the importance of noise scheduling for diffusion models." arXiv preprint arXiv:2301.10972 (2023). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors address the limitations of their work, and support their claims with experiments and ablations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Missing related work**: there are existing works at the intersection of graphs and diffusion that are missing ....

First, with full respect, the three existing works you mention do not conflict with our statement. These three works are devoted to graph structure generation based on diffusion models, whose core goal is to **generate discrete graph structures**. Our research, in contrast, focuses on unsupervised graph representation learning, whose purpose is to **learn node-level or graph-level vector representations without supervision**. We first pre-train a model without supervision and use it to generate node or graph representations for downstream classification tasks.

> Furthermore, some simple techniques can be applied to create smoother SNR curves & Why not just apply a different noise scheduler? ....(Question 7)

> Regarding the impact of different schedules, we adopt the same experimental setup as described in the article you referenced. The experimental results are shown in the following table:

| Noise schedule function | Noise type | Citeseer | PubMed | MUTAG |
| --- | --- | --- | --- | --- |
| cosine (s=0, e=1, τ=1) | DDM | 0.715 | 0.824 | 0.867 |
| cosine (s=0, e=1, τ=1) | White noise | 0.371 | 0.453 | 0.692 |
| sigmoid (s=-3, e=3, τ=1) | DDM | 0.735 | 0.803 | 0.889 |
| sigmoid (s=-3, e=3, τ=1) | White noise | 0.672 | 0.606 | 0.689 |
| sigmoid (s=0, e=3, τ=1) | DDM | 0.710 | 0.806 | 0.877 |
| sigmoid (s=0, e=3, τ=1) | White noise | 0.581 | 0.434 | 0.691 |

Different schedulers do indeed influence the final effectiveness of the model. However, due to the data-independent nature of white noise, its ultimate performance heavily relies on the hyperparameters of the scheduler. Yet, irrespective of the scheduler employed, our proposed data-dependent anisotropic noise consistently yields superior outcomes.
This confirms our impression of anisotropic structures in Section 1.

> "µ and σ are calculated using graphs within the batch." -- what is done during inference? ....

In this paper, $\mu$ and $\sigma$ are also calculated per mini-batch during the model inference stage. In node classification tasks, "mini-batch" refers to the nodes currently included in the computation, a requirement that is easily fulfilled in node classification. In graph classification tasks, "mini-batch" refers to the graphs currently included in the computation.

> why not just pre-calculate those on the entire train split

During model inference, compared to pre-calculating the statistics or using an EMA, calculating $\mu$ and $\sigma$ within the mini-batch provides a more effective constraint on the diffusion process within the local neighborhood of the batch, and can more efficiently ensure a higher SNR. In addition, it is worth noting that, to prevent the inference batch size from affecting the results, we adopted the same batch size as the GraphMAE [2] experiments to ensure the fairness of the benchmark.

> why will that happen that requires a dedicated MLP? how is this different from the simple U-Net used in the image domain?

The MLP in our network is composed of linear – ReLU – linear layers, which project the feature map from the hidden dimension back to the data dimension. This is necessary from a computational point of view. By introducing such a nonlinear structure for dimension conversion, the GAT can be restricted to learning in the hidden space. This design has proven effective in many classic self-supervised training methods (SimCLR, SimCLRv2, SimSiam, MoCo). Our statement is a possible explanation for the effectiveness of this design. If this structure, which contains the nonlinearity, were removed, the latter half of the decoder would have to learn the mapping from the hidden space to the original data distribution, and the former half of the decoder could degenerate into an identity map of the encoder part.
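A minimal sketch of the linear – ReLU – linear projection head described above, assuming NumPy and illustrative dimensions (the actual model uses trained GAT features; the weights here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_head(h, W1, b1, W2, b2):
    # linear - relu - linear: project from the hidden dimension back to
    # the data dimension; the ReLU supplies the nonlinearity discussed above.
    z = np.maximum(h @ W1 + b1, 0.0)
    return z @ W2 + b2

hidden_dim, data_dim = 16, 8
W1 = rng.standard_normal((hidden_dim, hidden_dim)); b1 = np.zeros(hidden_dim)
W2 = rng.standard_normal((hidden_dim, data_dim));   b2 = np.zeros(data_dim)

x = rng.standard_normal((4, hidden_dim))   # a batch of 4 hidden vectors
out = mlp_head(x, W1, b1, W2, b2)          # shape (4, data_dim)
```

Without the ReLU, the two linear layers would collapse into a single linear map, which is the degenerate case the rebuttal warns about.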
We also provide more ablation experiments related to the network structure in the responses to Reviewer 2.

> The paper did not state how exactly the entire graph-level representation is obtained; is it a sum/mean of all node representations? (Question 5)

Yes, the graph-level representation is derived by pooling the node-level representations, and we employ either max-pooling or mean-pooling. The specific approach remains consistent with that of the GraphMAE [2] baseline.

> Algorithm 2 in appendix, line 6, what is $A_{t}$

Thank you for your question and we apologize for our negligence. This is a typo and it should be $A$. Our proposed DDM is for node features.

> Referring to Figure 2, shouldn't the SNR be 0 in the last diffusion steps?

This phenomenon is consistent with our design. In Section 4, our method transforms the data-independent Gaussian noise into an anisotropic, batch-dependent noise $\bar{\epsilon}$ and ensures that $\bar{\epsilon}$ lies in the same hyperplane as the features. This makes $\bar{\epsilon}$ related to $x_{0}$, so the SNR may not reach 0 at the last diffusion steps.

> In some cases, the SNR goes even higher for larger diffusion steps....

We believe that this phenomenon arises from the inherent characteristics of certain graph data. In fact, introducing slight noise to node features on some graph data is a well-established technique to enhance classification accuracy; this concept is elaborated in [1]. Our approach better leverages this particular characteristic of the data and renders it prominent enough to be observable in the SNR curve.

[1] Sato, R., Yamada, M., & Kashima, H. (2021). Random features strengthen graph neural networks. In *Proceedings of the 2021 SIAM International Conference on Data Mining (SDM)* (pp. 333-341). Society for Industrial and Applied Mathematics.

[2] Hou, Zhenyu, et al. "GraphMAE: Self-supervised masked graph autoencoders."
*Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining*. 2022.

---

Rebuttal Comment 1.1: Comment: I appreciate the response from the authors, which answered almost all my concerns. The only thing I'm not yet convinced about is why the SNR doesn't reach zero - wouldn't a graph with all possible edges, for example, have SNR = 0? I'd appreciate an explanation. Anyway, I'm upgrading my score to accept, as besides this issue, all my questions were fully answered.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your active engagement. In our SNR analysis, we use the LDA method to quantify the signal-to-noise ratio of the **node features** with respect to the node or graph class labels (this design aims to be consistent with the downstream tasks). Since our batch-dependent noise is always correlated with $x$, $x_{t}$ will always be more informative than pure white noise, so it is still possible that the SNR does not reach zero in the last diffusion steps. If you have any remaining concerns regarding our article, we cordially welcome you to present them, and we shall exert our utmost efforts to address them comprehensively.
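The point in this exchange can be illustrated numerically. In the sketch below (an assumption-laden toy, not the paper's LDA-based SNR), the SNR proxy is the squared correlation between $x_0$ and $x_t = \sqrt{\bar\alpha}\,x_0 + \sqrt{1-\bar\alpha}\,\epsilon$; when $\epsilon$ is correlated with $x_0$, the proxy stays bounded away from zero even at $\bar\alpha = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.standard_normal(10_000)          # toy "signal" features

def snr_proxy(alpha_bar, eps):
    # x_t per the standard diffusion forward process; squared correlation
    # with x0 is a crude proxy for how much signal survives in x_t.
    xt = np.sqrt(alpha_bar) * x0 + np.sqrt(1 - alpha_bar) * eps
    return np.corrcoef(x0, xt)[0, 1] ** 2

white = rng.standard_normal(10_000)       # data-independent white noise
dependent = 0.6 * x0 + 0.8 * white        # noise correlated with x0

# At the last diffusion step (alpha_bar -> 0) white noise wipes out the
# signal, while the x0-correlated noise still carries information.
```

This matches the authors' explanation: a noise term that lies in the span of the data cannot fully destroy the class signal, so the measured SNR need not hit zero.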
Unified Off-Policy Learning to Rank: a Reinforcement Learning Perspective
Accept (poster)
Summary: This paper aims to unify the assumptions on user behavior in ranking, known as click models, by formulating the ranking problem as a click-model-agnostic Markov Decision Process (MDP). By doing so, the paper proposes to reduce the Off-Policy Learning to Rank (LTR) problem to a variant of offline RL, which does not require precise estimation of the click models. Under this formulation, the proposed offline RL method, which incorporates state representation learning into the well-known Conservative Q-Learning (CQL), enables more accurate estimation of ranking metrics than baseline estimators on real-world datasets. Strengths: 1. The manuscript is easy to follow. The motivation for unifying click models and introducing the offline RL framework is clearly explained. 2. The RL formulation of LTR is reasonable, and showing the connection between offline RL and Off-Policy LTR would be insightful for the LTR community. 3. The ablation study on state representation learning is insightful. Weaknesses: 1. While the connection between LTR and offline RL is interesting, the proposed method (CUOLR) itself is not a fundamentally new framework for offline RL. In particular, the proposed state representation learning method, which applies positional encoding and attention to the inputs, seems to be an engineering effort rather than a very novel framework. 2. In experiments, an offline RL algorithm (CQL) does not show advantages over a simple RL baseline (SAC). While I acknowledge the author(s)' contribution to formulating LTR as an RL problem, I think the algorithm has room for improvement. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - In general, SAC does not perform very well in the offline RL setting, and thus offline RL algorithms, including CQL, have been proposed. However, the experiment results show that CUOLR (CQL) and CUOLR (SAC) perform competitively. Could you provide some justification for this result? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: This limitation is not special to this paper, but classical click models and MDPs both assume that the reward observed at each position is not affected by lower positions, including the neighboring ones. If this assumption does not hold, LTR may introduce some bias and offline RL may also ignore some causal relations between actions and rewards. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for recognizing the advantages of our unified RL formulation, the connection between LTR and RL solutions, and our ablation study. **Q:** The proposed method (CUOLR) itself is not a fundamentally new framework for offline RL, and the proposed state representation learning method seems to be an engineering effort rather than a very novel framework. **A:** While applying positional encoding and attention to the inputs involves certain engineering effort, we would like to emphasize that our proposed CUOLR method is novel in its unified MDP formulation that is agnostic to various click models, in its state representation design, and in its flexibility to plug in any offline RL solver. It also gives much better performance than methods tailored to each specific click model. **Q:** Classical click models and MDPs both assume that the reward observed at each position is not affected by lower positions, including the neighboring ones. If this assumption does not hold, LTR may introduce some bias and offline RL may also ignore some causal relations between actions and rewards. **A:** We agree with the reviewer that classical click models assume that the reward is not affected by lower positions. It would be a challenging yet interesting future work to broaden the scope and relax the assumption.
Summary: This paper presents a unified approach for off-policy learning to rank (LTR) that is adaptable to general click models. The authors formulate off-policy LTR as a Markov Decision Process (MDP) and leverage offline reinforcement learning (RL) techniques to optimize the ranking process. They propose the Click Model-Agnostic Unified Off-policy Learning to Rank (CUOLR) method, which can be easily applied to a wide range of click models. The authors provide empirical evidence demonstrating the effectiveness of CUOLR in comparison to state-of-the-art off-policy LTR algorithms. Strengths: ● The paper presents an innovative and practical methodology for off-policy learning to rank. The formulation of off-policy LTR as an MDP and the use of offline RL techniques provide a comprehensive and adaptable approach to ranking optimization. ● The authors provide insightful empirical findings, showing that the CUOLR method consistently outperforms state-of-the-art off-policy learning to rank algorithms. The results on various large-scale datasets demonstrate the effectiveness, consistency, and robustness of CUOLR under different click models. ● The paper presents a detailed introduction to state representation learning. There are also empirical analyses of its practical effect. Weaknesses: ● The synthetic dataset and the offline evaluation can give biased evaluation results for the algorithms. ● There are some related works that are not mentioned in this paper. Cai, et al. [1] also apply RL techniques to recommendation systems and provide a new MDP formulation. It also includes ranking scores in the formulation, which can be regarded as a downstream application of this paper. Xue, et al. [2] provide another MDP formulation of RL for optimizing long-term user engagement. ● There remain some unsolved issues in the paper. See the questions for details. [1] Cai, Qingpeng, et al. "Reinforcing User Retention in a Billion Scale Short Video Recommender System." 
arXiv preprint arXiv:2302.01724 (2023). [2] Xue, Wanqi, et al. "PrefRec: Preference-based Recommender Systems for Reinforcing Long-term User Engagement." arXiv preprint arXiv:2212.02779 (2022). Technical Quality: 3 good Clarity: 3 good Questions for Authors: ● Lines 130-133 are hard to understand. Why not include the remaining documents that are yet to be ranked? How will the policy perform on $s_0$, where nothing is included as input? ● The introduction of a dynamic action space can introduce dynamics shift in RL and unstable training. Are there any specific techniques to handle this issue? ● Why does SAC obtain performance similar to CQL when learning from static datasets? Will the issue of policy distribution mismatch lead to poor performance? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The formulation and evaluation are limited to off-policy/offline RL that learns from a static dataset. In applications where there are adequate online interaction data, it will be helpful to consider some online RL counterparts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive comments on our work. **Q:** The synthetic dataset and the offline evaluation can give biased evaluation results for the algorithms. **A:** First, the same as previous off-policy learning-to-rank research (and consistent with all baseline methods), we conduct semi-synthetic experiments in which the training data are all direct feedback from a "simulated" user (whose behavior the click model reflects) on real-world queries. This is different from offline evaluation since we control the environment and know the ground truth for any ranking list (trajectory). In this setup, the only bias comes from the click model, which is different from the bias in typical offline RL evaluation. For more details, please see the experiment setup section (Section 5.1). **Q:** There are some related works that are not mentioned in this paper. Cai, et al. [1] also apply RL techniques to recommendation systems and provide a new MDP formulation. It also includes ranking scores in the formulation, which can be regarded as a downstream application of this paper. Xue, et al. [2] provide another MDP formulation of RL for optimizing long-term user engagement. ([1] Cai, Qingpeng, et al. "Reinforcing User Retention in a Billion Scale Short Video Recommender System." arXiv preprint arXiv:2302.01724 (2023). [2] Xue, Wanqi, et al. "PrefRec: Preference-based Recommender Systems for Reinforcing Long-term User Engagement." arXiv preprint arXiv:2212.02779 (2022).) **A:** Thank you for providing these two papers on RL for recommendation. Their settings are different from our LTR setup; we will add a discussion of them in the next version. **Q:** Lines 130-133 are hard to understand. Why not include the remaining documents that are yet to be ranked? How will the policy perform on $s_0$, where nothing is included as input? **A:** According to click models, the click at each position only depends on the current position and the previous documents. 
The remaining documents that are yet to be ranked are included in the action set rather than in our state representation. At position 0 (state $s_0$), the policy chooses the document with the highest relevance/attractiveness from all candidates. **Q:** The introduction of a dynamic action space can introduce dynamics shift in RL and unstable training. Are there any specific techniques to handle this issue? **A:** Thank you for pointing out this question. We agree that the raw action space is dynamic. However, as we utilize the attention model and project the raw action space into a latent action space, the dynamics shift there is intended to be smaller. **Q:** Why does SAC obtain performance similar to CQL when learning from static datasets? Will the issue of policy distribution mismatch lead to poor performance? **A:** Please refer to our answer in the general response to all reviewers regarding the performance difference between SAC and CQL. --- Rebuttal Comment 1.1: Comment: Dear Reviewer 79J3, We were wondering if you have gotten a chance to go through our responses and the additional experiments we added to the paper, and whether these revisions and responses address your concerns regarding the paper. We are happy to address any remaining concerns and would really appreciate it if you engage in a discussion with us. Thank you so much!
Summary: The paper talks about how to model user behaviors with positional biases in an online search system. The paper proposes a unified RL framework to generalize three common types of positional bias models: Position-Based Modeling (PBM), CASCADE (each click depends on the previous click), and Dependent Click Models (DCM). By capturing all previous clicks into a state, the paper suggests that standard offline RL algorithms, such as Conservative Q-Learning (CQL) and Soft Actor Critic (SAC), can be used to estimate and optimize for the positional-aware ranking policies. Simulation experiments are included to support the claims. Strengths: Originality. The paper observes commonalities in three different types of methods and proposes a unified method to combine them. The generality of the proposal is further examined by simulating different cases from each of the method and showing similar performance using the proposed method. The observation and the empirical validation feel original to me. The authors addressed my concerns and I have adjusted my scores accordingly. Weaknesses: Significance. The paper made a rather limiting assumption that all users in the same search system follow a single pattern of positional bias. This may not be true, as different users may have different positional biases. A more practical approach is to insert perturbations to the ranking positions to estimate the true positional effects using IPS. To mitigate the risks of perturbations, methods have been introduced to swap only adjacent search results. Please see this paper as an example: https://www.cs.cornell.edu/people/tj/publications/agarwal_etal_19a.pdf Clarity. The description of the proposed method lacks sufficient details (see additional questions). Also the experimental results (Table 1) did not seem to contain significant differences between the algorithms being considered. 
This leads me to wonder whether the proposed RL solution is actually easy to find, or whether it is fundamentally difficult to implement and the results are sensitive to hyperparameter choices. The paper did not discuss common limitations of RL algorithms. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: Table 1. The table compares the proposed method with baseline methods, one of which is used to simulate the ground-truth user behaviors. However, in the results, the baseline method which was used to simulate the user behaviors did not perform the best in estimating the user behaviors. Is this expected? Also, the differences between all methods seem pretty small. Algorithm 1. Line 6, where is psi defined? Eq (5). What is the difference between CQL and SAC? Why is there only one equation presented for both algorithms? More details would be appreciated for clarity purposes. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: No, the paper did not discuss the choice of hyperparameters or the closeness of the experimental results. They seem to be common challenges with RL algorithms. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
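The positional-bias click models this review summarizes (PBM and the cascade model) can be illustrated with tiny simulators. This sketch is illustrative only; the `attract` and `examine` probability vectors are assumed inputs, not values from the paper:

```python
import random

def pbm_clicks(attract, examine, rng):
    # Position-Based Model: a click occurs at rank k iff the position is
    # examined (prob. examine[k]) AND the document is attractive
    # (prob. attract[k]); positions are independent of each other.
    return [int(rng.random() < examine[k] and rng.random() < attract[k])
            for k in range(len(attract))]

def cascade_clicks(attract, rng):
    # Cascade model: the user scans top-down, clicks the first attractive
    # document, and stops examining the rest of the list.
    clicks = [0] * len(attract)
    for k, a in enumerate(attract):
        if rng.random() < a:
            clicks[k] = 1
            break
    return clicks
```

Treating each simulator's click as the per-position reward is what lets a single RL objective cover both behaviors, which is the unification the review describes.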
Rebuttal 1: Rebuttal: We would like to thank the reviewer for recognizing the originality of our unified formulation and solution. **Q:** The paper made a rather limiting assumption that all users in the same search system follow a single pattern of positional bias. This may not be true, as different users may have different positional biases. A more practical approach is to insert perturbations into the ranking positions to estimate the true positional effects using IPS. To mitigate the risks of perturbations, methods have been introduced to swap only adjacent search results. **A:** We thank the reviewer for pointing out the reference on the estimation of propensities. It is worth noting that in the referenced paper, the probability of being examined depends only on the rank and is the same across users. Moreover, one of our main advantages is that we do not need to learn a separate propensity estimation model. **Q:** The description of the proposed method lacks sufficient details. Also, the experimental results (Table 1) did not seem to contain significant differences between the algorithms being considered. The paper did not discuss common limitations of RL algorithms. **A:** We provide details of our solution using SAC and CQL and their hyper-parameters in the Appendix (see Table 5 in Section B for details). Please refer to our answer in the general response to all reviewers regarding the performance difference between SAC and CQL. **Q:** In the results, the baseline method which was used to simulate the user behaviors did not perform the best in estimating the user behaviors. Is this expected? Also, the differences between all methods seem pretty small. **A:** If we understand correctly that "the baseline method which was used to simulate the user behaviors" refers to IPW/DLA in the PBM model and CM-IPW in the Cascade model in Table 1, we would like to clarify that it does not have to perform best. 
Note that while designed for specific click models, these methods still need to estimate parameters such as attractiveness and/or bias parameters in the click model, and whether they can learn these efficiently determines their performance. But we can see that methods designed for a specific click model generally perform better than mismatched methods, e.g., CM-IPW > IPW/DLA in the Cascade model. **Q:** Algorithm 1. Line 6, where is psi defined? **A:** We apologize for the confusion. $\psi$ is mentioned in Line 2, as the subscript of $\phi_\psi(\cdot,\cdot)$; it refers to the trainable parameters in the embedding model $\phi$. **Q:** Eq (5). What is the difference between CQL and SAC? **A:** According to Eq. (5) in Line 244, the CQL algorithm overcomes the distribution-shift problem in offline RL via conservative Q-learning. In its loss function, a conservative term (the first term on the RHS of the equation, after $\alpha$) is added to constrain the difference between the logging policy and our trained policy and to guard against the overestimation problem. More details can be found in [1], and we will add more discussion in the next version. [1] Aviral Kumar, Aurick Zhou, George Tucker, and Sergey Levine. Conservative q-learning for offline reinforcement learning. Advances in Neural Information Processing Systems, 33:1179–1191, 2020. --- Rebuttal Comment 1.1: Comment: Dear Reviewer atvb, Thanks for your review! We were wondering if you have gotten a chance to go through our responses and the additional experiments we added to the paper, and whether these revisions and responses address your concerns regarding the paper. We are happy to address any remaining concerns and would really appreciate it if you engage in a discussion with us. Thank you so much!
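The conservative term the rebuttal describes has a standard discrete-action form: a soft-maximum of Q over all actions pushed down, minus the Q-value of the logged action pushed up, scaled by $\alpha$. The sketch below is a generic illustration of that term (assumed tabular/discrete setting; not the paper's exact Eq. (5)):

```python
import numpy as np

def cql_conservative_term(q_values, data_actions, alpha):
    # q_values: (batch, num_actions) array of Q(s, .) for each state.
    # data_actions: (batch,) indices of the actions the logging policy took.
    # The penalty is alpha * mean( logsumexp_a Q(s, a) - Q(s, a_data) ),
    # which vanishes only when Q does not over-estimate unseen actions.
    logsumexp = np.log(np.exp(q_values).sum(axis=1))
    q_data = q_values[np.arange(len(data_actions)), data_actions]
    return alpha * (logsumexp - q_data).mean()
```

Setting `alpha = 0` removes the penalty and recovers a plain soft actor-critic style objective, which is exactly the CQL-vs-SAC relationship the authors note in the general response.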
Summary: - Authors propose a unified off-policy reinforcement learning-based approach for the learning to rank (LTR) problem that is adaptable to all general click models. - Authors argue that a user's click behavior can be modeled by a Markov Decision Process (MDP), thereby allowing them to look at LTR from the perspective of offline reinforcement learning. - Authors' key contribution is a novel state representation that takes into account the features of all the items/documents presented to a user. Specifically, they represent the state at position $k$ by $s_k = [(d_1, d_2, \cdots, d_{k-1}), k]$ to capture the context of user behavior prior to reaching the $k$-th ranked position. The action set at position $k$ is all the available items/documents that haven't been presented to the user in the prior $k-1$ positions, and the reward is simply the user's click behavior when presented with action $a_k$. - Authors use positional encoding and a self-attention layer to extract the state representation given the sequence of items/documents at prior ranked positions, i.e., $(d_1, d_2, \cdots, d_{k-1})$. They train an end-to-end model that jointly optimizes the state representation as well as the policy. Strengths: **Motivation** - Authors present a strong motivation for their approach based upon the need for a unified, click model-agnostic LTR method which generalizes well to the use of any RL algorithm. - Authors do a good job of placing their work in the context of related work in the field by comparing/contrasting their approach with other works. _Note_: I am not updated on the state-of-the-art RL approaches for LTR, and therefore may not be aware of some recent works. **Technical Presentation** - Authors do a good job of formalizing the problem and providing all the relevant technical information in Sections 3 and 4 that would be required for reproducibility. - Authors' contribution is quite easy and intuitive to understand. 
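A rough sketch of the state encoding summarized above: sinusoidal positional encodings added to the features of already-placed documents, followed by a self-attention pass pooled into one state vector. The single attention head, the mean pooling, and all dimensions are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def positional_encoding(k, d):
    # Standard sinusoidal encoding for positions 0..k-1, dimension d.
    pos = np.arange(k)[:, None]
    i = np.arange(d)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def encode_state(docs):
    # docs: (k-1, d) features of the documents placed at ranks 1..k-1.
    # Q = K = V = docs + PE; one scaled-dot-product attention pass,
    # mean-pooled into a fixed-size state vector for s_k.
    x = docs + positional_encoding(*docs.shape)
    attn = softmax(x @ x.T / np.sqrt(docs.shape[1]))
    return (attn @ x).mean(axis=0)
```

The key property is that the output size is fixed regardless of how many documents have been placed, which is what lets a single Q-network consume states at every rank.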
**Experiments** - Authors evaluate their approach (using CQL and SAC) in comparison to baselines such as the Dual Learning Algorithm (DLA), the Inverse Propensity Weighting (IPW) algorithm, and Cascade-model-based IPW, as well as the LambdaMart model, which serves as an upper bound. The metrics of authors' proposed approach are significantly favorable in comparison to the baselines. _Note_: Just as above, because I am unaware of the SOTA, I am unable to comment on whether the right baselines have been used for comparison. - Authors provide an ablation study to validate the need for each component of their proposed approach. Weaknesses: **Is this a Markov Decision Process?** Formally, the Markovian property of an MDP refers to the fact that given states $s_i$ and actions $a_i$, the equality $\mathrm{Pr}(s_t | s_0, a_0, s_1, a_1, \cdots, s_{t-1}, a_{t-1}) = \mathrm{Pr}(s_t | s_{t-1}, a_{t-1})$ holds. Intuitively, it means that the probability of landing in state $s_t$ of the Markov chain only depends on the last state-action pair $(s_{t-1}, a_{t-1})$, and not the state-action pairs preceding it. Given the authors' approach uses the state representation $s_k = [(d_1, d_2, \cdots, d_{k-1}), k]$, it is unclear to me how/if the Markovian property still holds. **Effect of Click Data Generation** Authors mention that they use 1% of the training data to train a Ranking SVM to generate the initial ranked list of items, upon which clicks are simulated using different click models. How does the usage of a larger/smaller portion of the training data affect the comparative baselines and in turn the relative performance improvement provided by the proposed approach over those baselines? Technical Quality: 2 fair Clarity: 3 good Questions for Authors: - Could the authors address the questions raised in the weaknesses section of the review? 
Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: - Authors have not explicitly addressed limitations of their approach or any potential negative societal impact in the manuscript. However, authors' proposed approach is not a significant departure from the prevalent large scale recommender systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating our motivation, presentation, and experiments. **Q:** Is this a Markov Decision Process? Given the authors' approach uses this state representation, it is unclear how/if the Markovian property $\mathrm{Pr}(s_t \mid s_0, a_0, \ldots, s_{t-1}, a_{t-1}) = \mathrm{Pr}(s_t \mid s_{t-1}, a_{t-1})$ still holds. **A:** We would like to clarify that, according to the state definition in Eq. (3), the Markovian property holds. Specifically, the state only depends on the current position and the previous documents. **Q:** Effect of click data generation. How does the usage of a larger/smaller portion of the training data affect the comparative baselines and in turn the relative performance improvement provided by the proposed approach over those baselines? **A:** Thank you for the question. Please refer to Table 1 and our answer in the general response to all reviewers for the new experimental results. Due to the time limit, we have compared CUOLR-CQL against different logging policies and will add more baselines to the experiment in the next version. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your response and the additional experimental results presented in the rebuttal document. I shall take this additional information into account during the discussion phase. Thank you. --- Reply to Comment 1.1.1: Title: Thank you Comment: We sincerely thank the reviewer for the time and effort in reviewing our paper. Regards, Authors
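The Markovian transition clarified in this exchange can be sketched in a few lines: because the next state $s_{k+1}$ is constructed deterministically from $(s_k, a_k)$ alone, the Markov property holds by construction. The helpers below are hypothetical illustrations, not the paper's code:

```python
def initial_state():
    # s_0: no documents placed yet, position counter 0.
    return ((), 0)

def step(state, action):
    # s_{k+1} is fully determined by (s_k, a_k): append the chosen
    # document to the placed prefix and advance the position counter.
    # No earlier state-action pairs are needed, so the process is Markov.
    placed, k = state
    return (placed + (action,), k + 1)
```

Note that the state grows to contain all previously placed documents, which is exactly why conditioning on $s_k$ already carries the full click-model context without breaking the Markov property.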
Rebuttal 1: Rebuttal: We thank all the reviewers for your detailed comments and questions, which helped us to improve our paper. We appreciate that the reviewers generally agree that our unified formulation is well-motivated, our click model-agnostic approach is novel and original, and the experimental results are extensive and support the claims. We have responded to your individual comments and will add the following new results and discussion to the paper. **CQ1:** Performance comparison of CQL and SAC (by Reviewers XmTH, 79J3, ynTd). **A1:** We first emphasize that the main contribution of the paper is formulating the off-policy learning to rank (LTR) problem as an MDP and proposing an offline RL framework that leverages **any** off-the-shelf RL algorithm as a plug-in solver. We chose the popular CQL algorithm to conduct experiments as it can alleviate the over-estimation caused by distributional shift with conservative Q-learning. However, as reviewers have noticed from the results in the main paper, SAC (which is equivalent to CQL with conservative parameter $\alpha=0$) performs closely to CQL with the initially chosen fixed $\alpha=0.1$. To further investigate this, we report the performance of SAC and CQL with the optimal $\alpha$ in Table 2 of the PDF, where we search for the optimal value separately for each click model. We observe that the performance of CQL with the optimal $\alpha$ is much higher than simple SAC, especially in NDCG. This observation suggests that distribution shift and over-estimation are still an issue in the off-policy LTR setting, and making CQL work requires the additional effort of careful tuning, as is the case in other domains as well. In this paper, we have shown that the off-policy LTR problem can be unified and solved by our click-model agnostic offline RL framework with simple RL solvers such as CQL and SAC. Comparing the pros and cons of various advanced RL algorithms in off-policy LTR is beyond the scope of this paper, but an interesting follow-up. 
**CQ2:** The impact of the logging policy (by Reviewers XmTH, fhCJ) **A2:** We conduct new experiments on the impact of logging policies on the Web10k dataset. For each click model, we compare three different logging policies: SVMRank trained with 1% of the training data, SVMRank trained with 0.01% of the data, and a random policy. The result of CUOLR-CQL with fixed $\alpha=0.1$ is reported in Table 1 of the PDF. Not surprisingly, we can see that with a worse logging policy, the performance of the learned policy also decreases. This also highlights the need for a good logging policy in offline LTR. Pdf: /pdf/91388a2779d7f59749a9b224e683a0d28b44b31e.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: In this work, the authors model the process of learning the optimal ordering of ranked lists, called Learning to Rank (LTR), as a Markov decision process (MDP). This allows them to utilize techniques from reinforcement learning to solve for the policy that generates the optimal ordering. In practical applications of LTR, a pre-collected (offline) dataset is typically used, collected by a logging policy that is not optimal (off-policy). As a result, techniques for offline off-policy reinforcement learning are used in this work. In the MDP formulation, the reward is defined as an item of the ranked list being clicked. The distribution of clicks is referred to as a click model, which results in various reward functions. An RL algorithm can optimize a wide range of reward functions, which allows this approach to be largely click-model agnostic. The approach is empirically validated on semi-synthetically generated click datasets, and it demonstrates competitive performance across different click models compared to click-model-specific baselines for learning optimal rankings. Strengths: The paper succinctly models LTR as a sequential decision-making problem, building on prior work, in a manner that allows a transformer architecture to be employed in conjunction with an RL algorithm. This allows for a couple of benefits: 1) being agnostic to specific click models, since any click distribution in the offline dataset can in theory be encoded as the reward that the RL algorithm optimizes, and 2) allowing for flexibility in defining the state representation through the use of attention and position embeddings. Both of the claims are supported by strong empirical analysis. Weaknesses: The paper extends prior formulations of LTR as an MDP to allow for using the transformer architecture, with the aim of performing policy learning without click-model-specific methods.
In addition to the existing experiments, one experiment that would highlight the point would be one that learns the optimal ranking for data generated from a non-standard click model – say a randomly picked click distribution – where all other methods fail. Additionally, the utility of the section on the optimality of rankings (Definition 3; Assumption 3.1) is unclear. These are possibly included to tie this work to existing literature; however, this method should not require those conditions for finding an optimal policy. On a related note, L122-L123 incorrectly states that Definition 1 is explained in Appendix C, which only discusses the definitions of examination probabilities of various click models. The writing and presentation could be improved in places, a few examples being highlighted below. Minor/Typos: - “Debiasing” as caused by bias in the data, versus bias due to the estimation procedure, has been used interchangeably throughout the paper, leading to a fair amount of confusion (Ex: L26-L29) - Equation (1) should have a $\propto$ and not $=$ - L121: mutually independent of what? - L227: $d\\_model$ undefined. - Algorithm 1: Mistake on Line 7 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: A few questions: - Assumption 3.1 holds only for monotonically decreasing examination probabilities. The reference [64] proves the result in Assumption 3.1 only for PBM and cascade models, and this is not the case for other click models. Why is this top-down scanning of the ranked list necessary for the working of the algorithm? - The robustness to changes in $\alpha$ in CQL seems to indicate that just fitting a Q function on the offline data should suffice. This would be true if the logging policy provides enough support for all actions, which is commonly the case in practice where uniform random logging policies are used. Why is a conservative algorithm (CQL) necessary for the experiments?
- (L148-L149) How is the action set restricted to not repeat actions implementationally? - The gains in performance are more significant for ERR as compared to NDCG. Why is that the case? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Like any other method for LTR, the performance of the algorithm should depend on the coverage provided by the logging policy. A comment on the effect of the coverage of the logging policy and its effects on the learnt rankings would be useful. State representations play an important role in the performance of the algorithm, and in the context of this work the representations are learned via architectural choices in the Q-function estimator. The claim (L248) that an additional component is added on top of CQL is overstated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
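Regarding the question above about restricting the action set to avoid repeated actions: a common implementation trick (a hypothetical sketch, not necessarily the authors' code) is to mask the scores of already-placed documents before each argmax, so that a greedy ranking policy places each document exactly once:

```python
def mask_used(scores, used):
    """Set scores of already-ranked documents to -inf so they can never
    be selected again (hypothetical names, plain Python)."""
    return [s if i not in used else float("-inf") for i, s in enumerate(scores)]

scores = [0.2, 0.9, 0.5]  # per-document Q-values for one query
ranking, used = [], set()
for _ in range(len(scores)):
    masked = mask_used(scores, used)
    a = max(range(len(scores)), key=lambda i: masked[i])
    ranking.append(a)
    used.add(a)
print(ranking)  # [1, 2, 0]: each document is placed exactly once
```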
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the advantages of our click model-agnostic solution. **Q:** The paper extended prior formulations of LTR as an MDP to allow for using the transformer architecture, with the aim of performing policy learning without click-model-specific methods. In addition to the existing experiments, one experiment that would highlight the point would be one that learns the optimal ranking for data generated from a non-standard click model – say a randomly picked click distribution – where all other methods fail. **A:** Thank you for the suggestion of testing on non-standard click models such as a randomly picked click distribution. According to Definition 1, the click model at any position should only be determined by the rank of the current position and the previous documents, and a randomly picked click distribution may not satisfy this assumption. We do want to point out that this assumption is not restrictive, as the classic PBM and CASCADE models both satisfy it, and we have already evaluated on several more complicated click models, including UBM, DCM, and CCM, and reported the results in the main paper and Appendix. **Q:** Additionally, the utility of the section on the optimality of rankings (Definition 3; Assumption 3.1) is unclear. These are possibly included to tie this work to existing literature; however, this method should not require those conditions for finding an optimal policy. On a related note, L122-L123 incorrectly states that Definition 1 is explained in Appendix C, which only discusses the definitions of examination probabilities of various click models. **A:** Definition 3 is used to define the optimality of rankings. Note that the goal of LTR is to find the optimal ranking list, while the objective of an RL method is the optimal value function. We want to emphasize that these two notions of optimality are not necessarily aligned; they are only equivalent when Assumption 3.1 is satisfied.
For most classical click models, such as PBM and the cascade model, Assumption 3.1 is satisfied and we can show the equivalence between the two. Sorry for the confusion in L122-L123. As can be seen in Appendix C, we explain the specific forms of the various examination probabilities, while the attractivenesses $\alpha(\cdot)$ only depend on the documents and are the same across all the click models. By giving these forms, we explicitly show how these models are instances of Definition 1 and can be incorporated into our unified framework. **Q:** Assumption 3.1 holds only for monotonically decreasing examination probabilities. The reference [64] proves the result in Assumption 3.1 only for PBM and cascade models, and this is not the case for other click models. Why is this top-down scanning of the ranked list necessary for the working of the algorithm? **A:** As we have explained above, the optimality assumption in Assumption 3.1 is needed, as the optimal value in RL may not give an optimal ranking list. It is unclear whether click models with non-monotonically decreasing examination probabilities can be solved by our framework, since optimizing rewards may not align with optimizing the ranking list. We empirically showed that besides PBM and the cascade model, major click models like UBM, DCM, and CCM (all of which have monotonically decreasing examination probabilities) can be effectively solved by our offline RL framework. It would be interesting to look beyond this assumption in the future. **Q:** (L148-L149) How is the action set restricted to not repeat actions implementationally? **A:** Each action is a query-document pair, which is unique in the dataset. **Q:** The gains in performance are more significant for ERR as compared to NDCG. Why is that the case? **A:** Compared to NDCG, ERR is more sensitive to the relevance of the top results. This observation suggests our improvements are more beneficial to the top positions of the ranking list compared to the baselines.
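To illustrate the last answer, here is a small self-contained sketch of the two metrics (hypothetical helper names, standard textbook definitions): ERR models a user who stops at the first satisfying result, so demoting the top document hurts ERR proportionally more than NDCG.

```python
import math

def err(rel_probs):
    """Expected Reciprocal Rank: 1/r weighted by the probability the user
    reaches position r and is satisfied there."""
    score, p_reach = 0.0, 1.0
    for r, R in enumerate(rel_probs, start=1):
        score += p_reach * R / r
        p_reach *= 1.0 - R
    return score

def ndcg(rels):
    """NDCG with gain 2^rel - 1 and log2(position + 1) discount."""
    dcg = sum((2 ** g - 1) / math.log2(i + 2) for i, g in enumerate(rels))
    idcg = sum((2 ** g - 1) / math.log2(i + 2)
               for i, g in enumerate(sorted(rels, reverse=True)))
    return dcg / idcg if idcg else 0.0

# Demoting the best item from position 1 to position 2:
print(err([0.9, 0.1, 0.5]), err([0.1, 0.9, 0.5]))  # 0.92 -> 0.52, a ~43% drop
print(ndcg([3, 0, 2]), ndcg([0, 3, 2]))            # ~0.96 -> ~0.67, a ~30% drop
```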
--- Rebuttal Comment 1.1: Comment: The authors' response addresses some of my concerns while overlooking a couple of others. My question about the non-standard click model still stands: The method is designed to be click-model agnostic, and should in theory be able to optimize for any reward --- click probability --- independent of Definition 3. In that setting, all other click-model-specific baselines would be expected to struggle. This would be an insightful result about the utility of the method. I will keep my initial score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the follow-up comment and are glad to hear that some concerns have been addressed by our initial response. The reviewer's comment on > The method is designed to be click-model agnostic, and should in theory be able to optimize for any reward --- click probability --- independent of Definition 3. caught our attention. Firstly, we would like to emphasize that we accurately stated that our method can be applied to "a wide range of click models" as early as the abstract. It would be a misunderstanding and an overstatement to claim that our method (or any algorithm) can solve offline LTR with click feedback without basic assumptions about how clicks are generated. Secondly, our Definition 3 and Assumption 3.1 are mild but necessary assumptions on the optimality of the ranking list. Definition 3 assumes the optimal ranking is the one with descending attractiveness of documents, which is the standard assumption of the LTR problem and aligns with evaluation metrics such as ERR and NDCG. Assumption 3.1 assumes the optimal ranking list maximizes total rewards (clicks), which is needed to align the goal in Definition 3 and the solution of RL. This assumption has been proven correct for PBM and the Cascade model [1], and we empirically showed that our method works well for other click models such as DCM, CCM and UBM. However, for certain click models, the optimal ranking list does not maximize total clicks.
We here provide a counterexample: consider three documents with attractiveness $d_1 = 0.1, d_2=0.2, d_3=0.3$, where the examination probabilities depend on both document and position. For the two ranking lists * attractiveness: 0.3, 0.2, 0.1; examination prob: 0.5, 0.1, 0.1 * attractiveness: 0.1, 0.2, 0.3; examination prob: 0.5, 0.4, 0.3 , the optimal ranking list (the first one) has a smaller expected number of total clicks (0.18 < 0.22), because this click model encourages postponing good documents to keep the user browsing and generating more clicks. Thus this example cannot be solved by our method. However, we note that this example does not belong to any of the click models we studied (PBM, Cascade, DCM, CCM, UBM) and, to the best of our knowledge, has not been studied in previous offline/counterfactual LTR literature. In fact, our method already covers most of the click models that have been studied in offline LTR. We are happy to answer any questions that the reviewer finds not fully addressed. [1] Masrour Zoghi, Tomas Tunys, Mohammad Ghavamzadeh, Branislav Kveton, Csaba Szepesvari, and Zheng Wen. Online learning to rank in stochastic click models. In International Conference on Machine Learning, pages 4199–4208, 2017.
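The arithmetic in the counterexample checks out: under this click model the expected number of clicks decomposes per position as attractiveness times examination probability. A minimal verification sketch (hypothetical names):

```python
def expected_clicks(attractiveness, exam_prob):
    # E[total clicks] = sum over positions of P(examined) * P(click | examined)
    return sum(a * e for a, e in zip(attractiveness, exam_prob))

optimal_list = expected_clicks([0.3, 0.2, 0.1], [0.5, 0.1, 0.1])
reversed_list = expected_clicks([0.1, 0.2, 0.3], [0.5, 0.4, 0.3])
print(round(optimal_list, 2), round(reversed_list, 2))  # 0.18 0.22
```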
Indeterminate Probability Neural Network
Reject
Summary: This paper proposes a new probability theory named indeterminate probability theory and a new model named Indeterminate Probability Neural Network (IPNN). The new probability theory is an extension of classical probability theory, with which some intractable probability problems become tractable (analytical solution). The new model IPNN can perform unsupervised clustering while doing classification and can make very large classifications with very small neural networks. Strengths: 1. This paper is well organized. The intuition behind the new concept of indeterminate probability is clearly demonstrated with a simple example. The indeterminate probability theory is well formulated and all assumptions are clearly numbered. 2. To the best of my knowledge, the indeterminate probability theory is proposed here for the first time. Weaknesses: 1. The proposed new probability theory and the IPNN model are interesting contributions of this paper, but not CIPNN. As listed as the second contribution in the introduction, the authors claim CIPNN as a contribution of this paper. However, CIPNN is the main contribution of the authors' other paper. Thus, the authors should either delete this contribution and introduce CIPNN elsewhere (such as in related work) or include CIPNN in this paper (in which case the ICCV paper should be withdrawn). 2. There are many grammar mistakes in the use of articles. For example, 'as example' on line 46 should be 'as an example', and 'an unique category' on line 257 should be 'a unique category'. 3. On lines 110 to 112, the authors mention that an event in classical probability theory can only happen or not happen, while for IPNN, we can have the probability of an event state. However, the authors do not consider (or demonstrate) the properties of the probability in IPNN.
In classical probability theory, the empirical frequency of the event is an unbiased estimate of the probability, which means that we can approximate the true probability through plenty of experiments. But what about the probability in IPNN? What is the quality of the probability output by IPNN? Typically, the softmax output of a neural network can only be treated as a probability distribution (since the sum of all coordinates is 1) but does not indicate the true probability of one class. 4. On line 120, the authors mention that $\alpha_{ij}^j (k) \in \{ 0 , 1 \}$ in classical probability. But as defined in Eq. (2), $\alpha_{ij}^j (k)$ is a conditional probability. Then it can be any decimal between 0 and 1 if there is no further constraint. Can the authors provide a more detailed explanation for the claim on this line? 5. As mentioned in the abstract, IPNN 'is capable of making very large classification with very small neural network'. The idea is to use binary encoding ('the binary vector is the model inputs from 000000000000 to 111111111111, which are labeled from 0 to 4095'), as mentioned in Appendix D.1, which is a basic concept in information theory. But in practical implementations, one-hot encoding is preferred since it does not introduce a redundant distance between different labels. For example, the binary code 001 is closer to 000 than to 111. If binary encodings can successfully reduce the network size, this would be a great contribution of this paper. Can the authors provide a more detailed introduction to output encodings (especially the comparison between binary encodings and one-hot encodings historically) and more discussion of the key tricks that make binary encodings work in IPNN? 6. As mentioned in the abstract, the indeterminate probability theory makes 'some intractable probability problems have now become tractable (analytical solution)'. It seems that the authors mix up the concepts of tractability and analytical solutions.
A tractable problem is one that can be solved with acceptable complexity (usually polynomial time and space complexity). Tractability has no relation to analytical solutions. An analytical solution can be intractable when there are exponential operations in the solution, and a tractable problem may have no analytical solution (there is no analytical expression for the error function, but we can approximate the error function efficiently). 7. To my understanding, the contribution of this paper is IPNN as a new neural network (architecture, engineering trick, or training algorithm). The indeterminate probability theory is far from an extension of classical probability theory. To extend classical probability theory, the authors should at least formulate the new theory from measure-theoretic probability theory. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: See weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
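As a back-of-the-envelope check on the sizes involved in point 5 and in the abstract's 10,000,000,000-class claim (hypothetical variable names; the 10-splits-of-10 configuration is one possible way to reach that joint size, not necessarily the paper's exact setup):

```python
import math

classes = 10_000_000_000
one_hot_dims = classes                       # one output node per class
binary_bits = math.ceil(math.log2(classes))  # 34 bits suffice to index them
# IPNN-style joint sample space: N softmax splits of M nodes address M**N classes
splits, nodes_per_split = 10, 10
output_nodes = splits * nodes_per_split      # 100 output nodes in total
joint_classes = nodes_per_split ** splits    # 10**10 addressable classes
print(binary_bits, output_nodes, joint_classes)
```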
Rebuttal 1: Rebuttal: Dear Reviewer #Xo5z, Thank you for taking the time to review our paper! Please find detailed responses in the following. **Q1: ...the authors claim CIPNN as the contribution of this paper.** A1: The purpose of highlighting CIPNN is to demonstrate that our indeterminate probability theory is general; CIPNN is an application of this theory. If you still think this is inappropriate, we will move it to the related work section. **Q2: the authors mention that the event in classical probability theory can only be happened or not happened...** **But what about the probability in IPNN?...** **On line 120, the authors mention that $\alpha_{ij}^j (k) \in \{ 0 , 1 \}$ in classical probability. But as defined in Eq. (2), $\alpha_{ij}^j (k)$ is a conditional probability.** A2: We have presented another simple example, which we hope will be helpful now. See the reply to reviewer #1Vya, Q3. **Q3: For example, the binary code 001 is closer to 000 than to 111. If binary encodings can successfully reduce the network size, this would be a great contribution to this paper.** A3: In IPNN, the network size is reduced not by binary encoding but by the reduction of the last model layer size. We agree with you that one-hot encoding is preferred; we use binary encoding only for ease of implementation, as this experiment is quite simple. We designed this example only to validate multi-degree classification and the argument in the Abstract: with 100 nodes, IPNN is able to classify 10,000,000,000 classes. **Q4: A tractable problem is a problem that can be solved with acceptable complexity (usually polynomial time and space complexity). Tractability has no relation with analytical solutions.** A4: Thank you very much for your detailed explanation. Our theory is: 1. We propose a general analytical solution for the posterior, which did not exist before. 2. Our theoretical complexity is acceptable, as discussed in common rebuttal point 1.
**Q5: To extend the classical probability theory, the authors should at least formulate the new theory from measure-theoretic probability theory.** A5: Our knowledge of measure-theoretic probability theory is limited; unfortunately, we do not know how to formulate our theory from it. If you could provide us with more details, we would greatly appreciate it. Best regards Authors --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. The authors' answers partially solve my questions, but the central contributions are still not convincing to me. It is not clear what is new relative to classical probability theory, and why the indeterminate probability theory is needed. To my current understanding, this paper provides a method to calculate a certain posterior used in latent variable modeling, which can be done in the framework of traditional probability theory. Thus, I still think that this paper should be improved further and prefer to keep my review unchanged. --- Reply to Comment 1.1.1: Title: Little Question. Comment: Thanks for your response. As you have replied, our IPNN model can be handled within the framework of traditional probability theory. A little question: How would one use current probability theory to solve the problem mentioned in Sec. 3? How would one use current probability theory to find the pattern of 'observer_3'? See the reply to reviewer #1Vya, Q3. These examples are easy, and we hope you are interested in doing some small calculations. We look forward to your reply.
Summary: The main contribution is the proposal of an inference architecture which creates parallel softmax outputs (the authors call them “splits”), which are combined into a joint soft-label space to make classification decisions under the MAP rule; the joint label space can help with sub-classification tasks. Strengths: The architecture tries to create more Bayesian information before making the final classification, which is a potential Bayesian neural network approach that can be developed in the future. On the other hand, this work may be inspiring to people who are interested in large-dimensional label representation. Weaknesses: 1. It is not clear from this paper what is new relative to the “classical theory of probability”; 2. The novelty is limited. 3. The assumptions may not be very reasonable: Assumptions 2 and 3 are OK at initialization; however, when the weights are updated, A and Y are not generally independent. Assumption 4 is very counterintuitive: normally we have a joint distribution between X and Y, but if X and Y are not mutually independent, then they are generally not independent given A; and if X and Y are independent (Y is a random label, for example), then they are independent given A. 4. The main contribution of this paper is the splitting part of the architecture, creating parallel softmax outputs and combining them to make a MAP decision. I think the paper should emphasize the reasoning behind this mechanism, for example, the ensemble of different likelihoods that contributes to the performance, or a geometric interpretation of the proposed label embedding/representation that makes sense. 5. Verification of the advantage of the new method over the original softmax is lacking. After reading this paper I still do not know whether this method creates more or slightly reduces uncertainty in the classification compared to the original softmax approach. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1.
Can you compare your work with the literature on label embedding/representation? 2. Can you give some theoretical explanation of the advantage of your method? 3. Can you give us a better explanation of Assumption 4? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer #zXvC, Thank you for taking the time to review our paper! Please find detailed responses in the following. **Q1: ...which is a potential Bayesian neural network approach.** A1: We respectfully disagree with this point. **Q2: The assumptions may not be very reasonable: ...Assumption 4 is very counterintuitive.** **Can you give us a better explanation of Assumption 4?** A2: We suggest reading Appendix B for more details; it offers a very intuitive way of understanding our theory. Assumption 4 is the mathematical way to realize the inference. ## Core code of the IPNN forward process In addition, since our indeterminate probability theory seems prone to causing confusion, you may still have many questions. We recommend having a quick look at our implementation: you will see that, starting from the model output 'logits', non-trivial calculations are carried out all the way to the loss function. **If our theory were not correct, how could IPNN even converge?**

```python
# b --> batch size
# y --> number of classification classes
# [M_1, M_2, ..., M_N] --> split shape
# inputs:
#   logits: [b, M_1 + M_2 + ... + M_N]  # neural network outputs
#   y_true: [b, y]                      # labels
# outputs:
#   probability: [b, y]
#   loss

logits = torch.split(logits, split_shape, dim=-1)  # 43

# Shape of variables: [[b, M_1], [b, M_2], ..., [b, M_N]]
variables = [torch.softmax(_, dim=-1) for _ in logits]  # 52

# Joint sample space calculation
# Shape of joint_variables: [b, M_1, M_2, ..., M_N]
for i in range(len(variables)):
    if i == 0:
        joint_variables = variables[i]
    else:
        r_ = EINSUM_CHAR[:joint_variables.dim() - 1]
        joint_variables = torch.einsum(
            'b{},ba->b{}a'.format(r_, r_), joint_variables, variables[i]
        )  # 112, see Eq. (3)

# OBSERVATION PHASE
r_ = EINSUM_CHAR[:joint_variables.dim() - 1]
num_y_joint_current = torch.einsum(
    'b{},by->y{}'.format(r_, r_), joint_variables, y_true
)  # 120, see Eq. (14)
num_joint_current = torch.sum(joint_variables, dim=0)  # 121, see Eq. (15)

# numerator and denominator of the conditional probability P(Y|A^1,A^2,...,A^N)
num_y_joint += num_y_joint_current  # 167, see Eq. (16)
num_joint += num_joint_current      # 168, see Eq. (17)

# Shape of prob_y_joint: [y, M_1, M_2, ..., M_N]
prob_y_joint = num_y_joint / num_joint  # 174, see Eq. (13)

# INFERENCE PHASE
# Shape of probability: [b, y]
r_ = EINSUM_CHAR[:joint_variables.dim() - 1]
probability = torch.einsum(
    'y{},b{}->by'.format(r_, r_), prob_y_joint, joint_variables
)  # 135, see Eq. (13)

# loss function
loss = cross_entropy_loss(probability, y_true)  # 78 - 81, see Eq. (18)
```

Where '# number' indicates the location of the code in src/ipnn.py in the supplementary file. You may try to, e.g., replace the 'softmax' function with 'sigmoid', or make other changes, and you will find that the model no longer converges. **Q3: Can you compare your work with the literature on label embedding/representation?** A3: Our theory is an analytical solution; the related analytical solutions are the classical general probability equation and Naïve Bayes, as discussed in Sec. 1 (more details are in the common rebuttal part). In our opinion, our theory stands independently of other approximate probability solutions, including label embedding and neuro-symbolic AI. **Q4: Verification of the advantage of the new method over the original softmax is lacking.** A4: See the reply to reviewer #1Vya, Q8. **Q5: Can you give some theoretical explanation of the advantage of your method?** A5: See the reply to all reviewers, point 3. Best regards Authors
Summary: This paper proposes what I consider to be a type of neuro-symbolic AI model involving neural networks for multi-class classification problems; the authors refer to the model as an indeterminate probability neural network (IPNN). Frankly I did not fully understand the model, but the main intent appears to be a form of latent variable modeling for multi-class classification. An important line to summarize the paper is mentioned in Section 2: “our proposed IPNN is one DPVM (deep latent variable model) with discrete latent variables and the intractable posterior calculation is now analytically solved with our proposed theory”. I have reviewed a version of this paper previously, and it looks like the paper has not changed much; this is unfortunate, because I did not understand the paper clearly at that time, and I feel that I still do not understand it well. Strengths: The paper claims to blend neural networks and probability theory in a novel way; if this is true then it can be seen as an innovation in neural network modeling as well as neuro-symbolic AI. Another strength is that the paper is non-standard in that it tries to do something new around modeling multi-class classification problems. Weaknesses: In current form, the paper suffers from a number of weaknesses. A major weakness is that it remains unclear why a new theory of probability is needed in the first place. Note that the example in Section 3 can be studied using standard Bayesian modeling where X_i is the true coin toss result, A_i the adult’s reading and Y_i is the child’s reading, all for the i^{th} coin toss. Here X_i are i.i.d random variables, and A_i and Y_i are conditionally independent given X_i. Then the query of interest can be posed in terms of these random variables. I did not understand what the new theory is and why it is even needed. Also, I find it hard to follow the paper and feel it is not appropriately positioned in the literature. 
For instance, to me, it appears that the proposed approach is a form of neuro-symbolic AI, yet this is not even mentioned in the paper. I feel there is far too much lack of clarity in the paper in general. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: Some questions, comments and suggestion follow: What exactly is the “indeterminate” aspect of IPNN? Why is it needed? Could you please explain in the paper with an example? The distinction between the submission and the work under review on CIPNN is unclear from Section 1. The related work section is far too small – I’m sure there is a lot of other relevant work in related areas. Am I correct in assuming that multiple labels are not allowed for a specific sample? Could you please confirm? If so, how is this constraint enforced besides just choosing the label with maximum posterior P(y_l|x_{n+1})? The authors claim that very large classification problems can be solved with small neural networks. However, from my limited understanding of the paper, I understood that the network splits up based on the number of training samples N. How is this then a small neural network? Doesn’t it depend on N? What is the size of the neural network (in terms of number of edges) in the proposed approach, and how is it different from the baselines? What is the distinction between observation and inference? Is this just the distinction between learning a model and deploying it for a query? In my opinion, the experimental section does not provide much evidence for the usefulness of the proposed approach. I noticed several typos and grammatical errors in the paper. I suggest the authors review it carefully and fix these errors. Some references seem incomplete. I suggest using capital letter “B” for “Bayesian”. Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. 
Math/other details were not carefully checked. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: The authors need to write more about the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer #1Vya, Thank you very much for taking the time to review our paper again, we are glad to see you again here, and we can continue with more discussions. We hope we can answer your confusions this time. **Q1: X_i is the true coin toss result, A_i the adult’s reading and Y_i is the child’s reading, all for the i^{th} coin toss...** A1: If you change it in this way, it may have a few issues: 1. $A_i$ and $A_{i+1}$ are two different random variables. 2. True coin toss result is not able to be known.(after using an observer, this will be another $Y $.) 3.Our example is quite easy, you can try to finish the calculation with your idea to see if you can get reasonable calculation results. **Q2: The related work section is far too small** A2: see reply to reviewer #zXvC in Q3. **Q3: Could you please explain 'indeterminate' with an example?** A3: Yes. ## What is Indeterminate Probability? | | | | | | | | | | | | | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | | Random Experiments ID X | $x_{1}$ | $x_{2}$ | $x_{3}$ | $x_{4}$ | $x_{5}$ | $x_{6}$ | $x_{7}$ | $x_{8}$ | $x_{9}$ | $x_{10}$ | | Ground Truth | $hd$ | $hd$ | $hd$ | $hd$ | $hd$ | $tl$ | $tl$ | $tl$ | $tl$ | $tl$ | | Observer_1's Record $A^1$ | $A^1 = hd$ | $A^1 = hd$ | $A^1 = hd$ | $A^1 = hd$ | $A^1 = hd$ | $A^1 = tl$ | $A^1 = tl$ | $A^1 = tl$ | $A^1 = tl$ | $A^1 = tl$ | | Observer_1's equivalent Representation | 1, 0 | 1, 0 | 1, 0 | 1, 0 | 1, 0 | 0, 1 | 0, 1 | 0, 1 | 0, 1 | 0, 1 | | Observer_2's Record $A^2$ | 0.8, 0.2 | 0.7, 0.3 | 0.9, 0.1 | 0.6, 0.4 | 0.8, 0.2 | 0.1, 0.9 | 0.2, 0.8 | 0.3, 0.7 | 0.1, 0.9 | 0.2, 0.8 | | Observer_3's Record $z$ | $\mathcal{N}(3,1)$ | $\mathcal{N}(3,1)$ | $\mathcal{N}(3,1)$ | $\mathcal{N}(3,1)$ | $\mathcal{N}(3,1)$ | $\mathcal{N}(-3,1)$ | $\mathcal{N}(-3,1)$ | $\mathcal{N}(-3,1)$ | $\mathcal{N}(-3,1)$ | $\mathcal{N}(-3,1)$ | | | | | | | | | | | | | **Observer_1** Let's say, observer_1 is an adult and record the outcome 
of each coin toss correctly, so the probability of $A^1$ can easily be calculated with the general form: $P(A^1=hd)=\frac{\text{number of }(A^1=hd)\text{ occurs}}{\text{number of random experiments}} = \frac{5}{10}$ If we represent observer_1's record in the form $P(A^1=hd|X=x_k)$, the probability is: $P(A^1=hd)=\sum_{k=1}^{10}P(A^1=hd|X=x_k)\cdot P(X=x_k) = \frac{1+1+1+1+1+0+0+0+0+0}{10} = \frac{5}{10}$ **Observer_2** However, let us say observer_2 is a model: it takes the image of each coin toss outcome as input, and its outputs are decimal values. In this case, although the ground-truth outcome of e.g. $x_3$ is the determinate $hd$, this outcome cannot be 100% confirmed by a model (the ground truth cannot be known). We only have the indeterminate prediction $P(A^2=hd|X=x_3)=0.9$. How should we handle this situation? Observer_2's record probability is then: $P(A^2=hd)=\sum_{k=1}^{10}P(A^2=hd|X=x_k)\cdot P(X=x_k) = \frac{0.8+0.7+0.9+0.6+0.8+0.1+0.2+0.3+0.1+0.2}{10} = \frac{4.7}{10}$ This calculation result is a **combination of ground truth and observation errors**. **Observer_3** Let us say observer_3 is a strange, unknown observer that always outputs a Gaussian distribution for each coin toss with a 'to-be-discovered' pattern. How can we find this pattern? This is the main theoretical contribution of CIPNN: regard continuous variables as indeterminate probability and make the inference solvable. We include it here because this example helps to understand indeterminate probability more deeply. $P(z)=\sum_{k=1}^{10}P(z|X=x_k)\cdot P(X=x_k) = \frac{5\cdot\mathcal{N}(z;3,1)+5\cdot\mathcal{N}(z;-3,1)}{10}$ We now obtain a more complex $P(z)$ distribution, whose form is still analytical. This distribution has two bumps; how can we identify the representation of each bump mathematically? We need to use observer_1's record $A^1$.
$P(A^1=hd|z) = \frac{\sum_{k=1}^{10}P(A^1=hd|X=x_k)\cdot P(z|X=x_k)}{\sum_{k=1}^{10}P(z|X=x_k)} = \frac{5\cdot\mathcal{N}(z;3,1)\cdot 1 +5\cdot\mathcal{N}(z;-3,1)\cdot 0}{5\cdot\mathcal{N}(z;3,1)+5\cdot\mathcal{N}(z;-3,1)}= \frac{\mathcal{N}(z;3,1)}{\mathcal{N}(z;3,1)+\mathcal{N}(z;-3,1)}$ For the next coin toss, let $P(z|X=x_{11})=\mathcal{N}(z;3,1)$; with Eq. (12) and the Monte Carlo method, we have: $P^z(A^1=hd|X=x_{11}) =\int_{z}P(A^1=hd|z)\cdot P(z|X=x_{11}) = \mathbb{E}_{z\sim P(z\mid X=x_{11})}P(A^1=hd|z) \approx \frac{1}{C}\sum_{c=1}^{C}P(A^1=hd|z_{c}) = \frac{1}{C}\sum_{c=1}^{C}\frac{\mathcal{N}(z_{c};3,1)}{\mathcal{N}(z_{c};3,1)+\mathcal{N}(z_{c};-3,1)} \approx 1, \quad z_{c} \sim \mathcal{N}(z;3,1)$ In this way, we know that the bump with mean value 3 is for $A^{1}=hd$. (In addition, Fig 5.(b,e) from CIPNN shows 10 bumps, and each bump is for one category.) **Q4: The distinction between the submission and CIPNN is unclear.** A4: (D-)IPNN is for discrete latent variables, CIPNN for continuous ones. They are two valid applications of our indeterminate probability theory. **Q5: multiple labels are not allowed?** A5: Multiple labels are allowed, but you need to redefine the random variable $Y$ as $Y^1 \in \\{0,1\\}, Y^2 \in \\{0,1\\},\dots$, similar to a multi-label binary classification task. For examples, see the model CIPAE from CIPNN. **Q6: the network splits up based on the number of training samples N.** A6: The network splits up based on the number of classification classes, not training samples; as stated in the Abstract, with 100 nodes IPNN is able to classify 10,000,000,000 classes. **Q7: What is the distinction between observation and inference?** A7: See our reply to reviewer #zXvC, Q2. **Q8: the experimental section does not provide much evidence for the usefulness of the proposed approach.** A8: The experiments are mainly meant to validate our theory, which is the most important thing. Although we obtain very interesting results in Fig.
3, we don't have any 'usefulness' results with IPNN. Best regards Authors --- Rebuttal Comment 1.1: Title: Thanks for your response Comment: Thanks for your detailed response. I will take a closer look at some of your responses, such as the example on "indeterminate".
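The Monte-Carlo estimate at the end of A3 above can be checked numerically; a minimal sketch (the mixture means $\pm 3$, the sample count `C`, and the random seed are illustrative choices, not values prescribed by the paper):

```python
import math
import random

random.seed(0)

def normal_pdf(z, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at z."""
    return math.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def p_hd_given_z(z):
    """P(A^1 = hd | z) as derived in A3: ratio of the two mixture components."""
    return normal_pdf(z, 3.0) / (normal_pdf(z, 3.0) + normal_pdf(z, -3.0))

# Monte Carlo estimate of P^z(A^1 = hd | X = x_11) with P(z | X = x_11) = N(3, 1).
C = 10_000
samples = [random.gauss(3.0, 1.0) for _ in range(C)]
estimate = sum(p_hd_given_z(z) for z in samples) / C
print(round(estimate, 3))  # close to 1, matching the derivation in A3
```

Since $\mathcal{N}(z;3,1)/\mathcal{N}(z;-3,1) = e^{6z}$, the integrand is a sigmoid in $z$ that is essentially 1 wherever $\mathcal{N}(3,1)$ has mass, which is why the estimate lands near 1.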
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We sincerely thank all of you for taking the time to read our paper. Our proposed indeterminate probability theory is far more important than our proposed IPNN model, so let us first focus on the theory here. ## 1. Analytical form of any general complex posterior is discovered by us. Currently, there is no analytical mathematical form for a general complex posterior; we provide one now, see Eq. (8) and Eq. (12). Here is a comment from ChatGPT: The significance of finding an analytical form for a complex posterior distribution can be comparable to groundbreaking discoveries in other scientific disciplines. It can provide a solid foundation for understanding, predicting, and making informed decisions in complex systems, leading to advancements with far-reaching impacts in various fields. The analytical form of $P\left(Y=y_{l}\mid A^{1}=a_{i_{1}}^{1}, A^{2}=a_{i_{2}}^{2},\dots, A^{N}=a_{i_{N}}^{N}\right)$ is shown in the following table:

| | General Form | Naïve Bayes Form | Indeterminate Probability Form |
| :--: | :--: | :---: | :---------: |
| Equation | $\frac{\text{number of }(Y=y_{l}, A^{1}=a_{i_{1}}^{1},\dots, A^{N}=a_{i_{N}}^{N})\text{ occurs}} {\text{number of }(A^{1}=a_{i_{1}}^{1},\dots, A^{N}=a_{i_{N}}^{N})\text{ occurs}}$ | $\frac{P(Y=y_{l})\cdot\prod_{j=1}^{N}P(A^{j}=a_{i_{j}}^{j}\mid Y=y_{l})}{P(A^{1}=a_{i_{1}}^{1},\dots, A^{N}=a_{i_{N}}^{N})}$ | $\frac{ {\textstyle \sum_{k=1}^{n}\left ( P(Y=y_{l}\mid X=x_{k})\cdot {\textstyle \prod_{j=1}^{N}}P(A^{j}=a_{i_{j}}^{j}\mid X=x_{k}) \right ) } } {{\textstyle \sum_{k=1}^{n}\left ( {\textstyle \prod_{j=1}^{N}}P(A^{j}=a_{i_{j}}^{j}\mid X=x_{k}) \right ) } }$ |
| Assumption | No assumption | Given $Y$, $A^{1},A^{2},\dots,A^{N}$ conditionally independent | Given $X$, $A^{1},A^{2},\dots,A^{N}$ and $Y$ conditionally independent (see our Assumptions 2 and 3) |
| Shortcomings | 1. Not applicable if $A^{j}$ is continuous. 2. Not applicable to the indeterminate case. 3. Joint sample space is exponentially large. | 1. Assumption is strong. 2. $P(A^{j}=a_{i_{j}}^{j}\mid Y=y_{l})$ is not always tractable. | Joint sample space is exponentially large, but this can be solved with the Monte Carlo method. |
| Space Size | $m\cdot\prod_{j}^{N}M_{j}$ | $m\cdot\sum_{j}^{N}M_{j}$ | $m\cdot n\cdot N\cdot C$ (or $m\cdot\prod_{j}^{N}M_{j}$ if Monte Carlo is not used) |

where $C$ is the number of Monte Carlo samples; we can even set it to 1 (as explained in CIPNN and VAE). For more symbols, see Appendix G. Note: If $P(A^{j}=a_{i_{j}}^{j}\mid X=x_{k}) \in\\{0,1\\}$ and $P(Y=y_{l}\mid X=x_{k}) \in\\{0,1\\}$, our indeterminate probability form is mathematically equivalent to the 'General Form' above. (If necessary, we can use an example to explain this part in the second rebuttal phase.) Eq. (12) is for inference with the posterior, see Appendix B. ## 2. Special Random Variable X. Any general random experiment always has the random variable $X$, where $X=x_k$ stands for the $k^{th}$ random experiment, and the following probability always holds: $P(X=x_k) \equiv\frac{1}{n}, k = 1,2,\dots,n.$ The random variable $X$ is the experiment itself; it should not be conflated with other random variables. ## 3. Why Indeterminate Probability Theory is Good?

| | Case | Naïve Bayes | Indeterminate Probability |
| :--: | :--: | :---: | :---------: |
| Assumption | $A^{1},A^{2},\dots,A^{N}$ independent | Given $Y$, $A^{1},A^{2},\dots,A^{N}$ conditionally independent | See our Assumptions 2, 3 and 4 |
| Validity | Strongest assumption | Strong assumption | No exception |
| Assumption Range | all samples | few samples (due to $Y=y_{l}$) | one sample (due to $X=x_{k}$) |

Let us think about the independence assumption in another way. Sometimes the assumption that $A^{1},A^{2},\dots,A^{N}$ are independent is strong. Nevertheless, in the case of Naïve Bayes, the whole sample set is partitioned into small groups by conditioning on $Y=y_{l}$, so the conditional independence assumption may no longer be strong.
This may be the reason why Naïve Bayes is successful in many applications. For our Assumptions 2, 3 and 4, the whole sample set is partitioned down to a single sample by conditioning on $X=x_{k}$, so our assumptions are the weakest. For example, even if $A^1$ is identical to $A^2$, our independence assumption still holds. Furthermore, we have already conducted tests with thousands of latent variables in CIPNN, and these assumptions have proven to remain valid. (In IPNN, you can test with a few variables due to the exponentially large space size during the training phase, but not during the prediction phase (Monte Carlo).) From our perspective, constructing a toy dataset that contradicts our Assumptions 2, 3, and 4 seems practically impossible. The endeavor of finding a counterexample is somewhat philosophical in nature; if you are interested, we can discuss it later. Finally, if no counterexamples are found in the future, our Assumptions 2, 3 and 4 could be considered **mathematical axioms**. However, it would be too early to assert that at this stage. ## 4. Discussion Since we now view the state of an event in an indeterminate way, we have opened the door to the applicability of indeterminate probability theory in various fields. For instance, we can interpret a point from data clusters as an indeterminate probability and then perform supervised classification; we can interpret the outputs of multiple models as indeterminate probabilities and then perform ensemble-learning-related tasks; even in the field of physics, with our limited understanding of the 'Uncertainty Principle', we may interpret the position of particles as an indeterminate probability distribution and then perform inference tasks. For the first two examples we have conducted preliminary validations; if required, we can provide pseudocode in the second rebuttal phase. Finally, we will correct our grammatical mistakes according to your suggestions; thank you. Best regards Authors
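The note in the table above — that the Indeterminate Probability Form reduces to the General (counting) Form when all conditional probabilities are in {0,1} — can be checked numerically. A minimal sketch with hypothetical 0/1 records for $n=6$ experiments and $N=2$ observers (the records themselves are invented for illustration):

```python
from math import prod

# Hypothetical deterministic records (all probabilities in {0,1}).
p_y = [1, 1, 0, 1, 0, 0]    # P(Y = y_l | X = x_k)
p_a = [[1, 1, 0, 1, 0, 1],  # P(A^1 = a^1 | X = x_k)
       [1, 0, 0, 1, 1, 1]]  # P(A^2 = a^2 | X = x_k)
n, N = len(p_y), len(p_a)

# Indeterminate Probability Form of P(Y = y_l | A^1 = a^1, ..., A^N = a^N).
num = sum(p_y[k] * prod(p_a[j][k] for j in range(N)) for k in range(n))
den = sum(prod(p_a[j][k] for j in range(N)) for k in range(n))
ip_form = num / den

# General (counting) form: among experiments where all A^j occurred,
# the fraction in which Y also occurred.
hits = [k for k in range(n) if all(p_a[j][k] == 1 for j in range(N))]
general_form = sum(p_y[k] for k in hits) / len(hits)

print(ip_form, general_form)  # the two forms agree: 2/3 each
```

With soft records (values strictly between 0 and 1), only `ip_form` remains well defined, which is the "indeterminate" case the theory targets.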
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Distributional Policy Evaluation: a Maximum Entropy approach to Representation Learning
Accept (poster)
Summary: This paper investigates policy evaluation in distributional RL by leveraging maximum entropy density estimation, without explicitly considering the Bellman operator and contraction mappings. The authors further derive a new generalization error bound for the maximum entropy PE process. For a practical algorithm, they propose progressive factorization to refine the representation with desirable properties, including monotonicity. They conduct some experiments to support their claims. Strengths: * Introducing maximum entropy density estimation into distributional RL has not been studied before. * Progressive factorization seems to refine the representation in RL. * The writing is basically clear. Weaknesses: * The motivation is not clear. Why should we consider the maximum entropy estimate to study the representation? * Without discussing the RL context in the proposal, e.g., the Bellman operator and contraction mapping, the technical soundness of the so-called max-ent approach is questionable. * Experiments are weak and not directly related to the distributional RL setting. Technical Quality: 1 poor Clarity: 2 fair Questions for Authors: To the best of our knowledge, the representation issue in (distributional) RL is not well-defined yet. Without discussing what representation issue you are going to study, directly incorporating an approach is very confusing to me. Also, there are three major issues I would raise here. **1. Motivation.** What is the representation issue in distributional RL? Is it just the representation of the value distribution? Note that people may also think distributional RL is able to improve the representation of environments by additionally leveraging return distribution information. To be honest, after carefully reading the whole paper, I am still confused about why the proposed methods are beneficial for representation.
If the authors refer to the monotonicity properties of progressive factorization, are there any experiments on real environments, e.g., Atari games, to show these benefits? **2. Insufficient discussion within the RL context.** The authors claim the proposed method can handle the representation issue in the RL context, i.e., policy evaluation, but they did not rigorously prove the contraction properties of the Bellman operator under the maximum entropy density estimate. Note that, as an RL research paper, the discussion of contraction properties is the foundation of variants of policy evaluation algorithms. Otherwise, it would be technically problematic. In line 49, the authors mentioned that PE in a distributional setting is directly linked to the representation issue, but as far as I can tell, they are not equivalent. Discussing the contraction mapping properties is necessary. **3. Experiments are weak.** If the authors claim they are handling the representation issue of distributional RL, I do not find any direct experiment to demonstrate it on commonly used environments. Applying the state-aggregation approach in practice is also problematic as it hinders generalization for RL with function approximation. In line 128, the authors claimed that Max-ent has multiple advantages. Have the authors demonstrated them in their experiments? Similar issues also apply to the monotonicity property in Line 205. Overall, I do not think this paper really handles the representation issue in the real distributional RL setting. Some preliminary results are given without connection to the Bellman equation and the RL context. Experiments are also weak relative to the ambitious targets mentioned in the introduction. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 1 poor Presentation: 2 fair Contribution: 1 poor Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 2: Strong Reject: For instance, a paper with major technical flaws, and/or poor evaluation, limited impact, poor reproducibility and mostly unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review. We will answer point by point: *What is the representation issue in distributional RL? Is it just the representation of the value distribution? Note that people may also think distributional RL is able to improve the representation of environments by additionally leveraging return distribution information. To be honest, after carefully reading the whole paper, I am still confused about why your proposed methods are beneficial for representation. If the authors refer to the monotonicity properties of progressive factorization, are there any experiments on real environments, e.g., Atari games, to show these benefits?* As for the high-level representation-learning definition, we referred to it rather informally, mostly because the representation issue is in general not well-defined, but we will for sure add an explicit description of what we mean by representation-learning in this context: it consists of **finding a good feature representation of the decision-making space so as to make the overall learning problem easier**. In this sense, **a method able to reduce the dimensionality of the problem is said to address the representation-learning problem.** With this definition, the representation-learning issue is the same in distributional RL as in expected-value RL, and canonical (distributional) RL methods indirectly address it by means of generic function approximation. Nonetheless, **distributional methods further exacerbate this problem with respect to expected-value RL**, since they require learning distributions over (usually) large spaces. For this reason, we believe that investigating how to use distributions of returns to alleviate the issue is an interesting stream of research. 
*The authors claim the proposed method can handle the representation issue in the RL context, i.e., policy evaluation, but they did not rigorously prove the contraction properties of the Bellman operator under the maximum entropy density estimate. Note that, as an RL research paper, the discussion of contraction properties is the foundation of variants of policy evaluation algorithms. Otherwise, it would be technically problematic. In line 49, the authors mentioned that PE in a distributional setting is directly linked to the representation issue, but as far as I can tell, they are not equivalent. Discussing the contraction mapping properties is necessary.* To our knowledge, contraction considerations are indeed a fundamental tool in the RL methods that enforce the MDP structure (i.e., TD-based methods), like the canonical distributional RL methods devised so far. On the other hand, **contraction considerations are meaningless for Monte-Carlo-based methods, since no operator is actually defined**; our work is indeed in this stream, and **this motivates the absence of contraction considerations in the analysis**. We will for sure add a comment giving reasons for the absence of contraction arguments, but more generally we do not agree with the claim that "Discussing the contraction mapping properties is necessary"; we rather believe that "**Discussing the contraction mapping properties is necessary for operator-based methods**", which ours is not. We point out that a large portion of sound and effective RL algorithms (including policy gradients) are based on Monte Carlo simulations and do not make use of Bellman operators. While we agree that Monte-Carlo methods might overlook some of the MDP structure, they have the advantage of (i) a simpler mathematical treatment and (ii) applicability even to non-Markovian environments. 
As for the final part of the question, the aim of the paper was indeed to show that policy evaluation and representation learning are linked, and in fact, **we showed that performing PE can drive the learning of a reduced-dimension representation**. Yet, **showing that they were equivalent was not among the objectives of our work**. We can further specify this if the reviewer thinks some parts of the exposition were misleading. *If the authors claim they are handling the representation issue of distributional RL, I do not find any direct experiment to demonstrate it on commonly used environments. Applying the state-aggregation approach in practice is also problematic as it hinders generalization for RL with function approximation. In line 128, the authors claimed that Max-ent has multiple advantages. Have the authors demonstrated them in their experiments? Similar issues also apply to the monotonicity property in Line 205.* The main contribution of the paper is of a theoretical nature. For this reason, the purpose of the simulations was not to demonstrate empirically the effectiveness of the method, but to illustrate two essential features of the proposed method that were only suggested by the theoretical result, so as to close all the topics directly or indirectly introduced in the theoretical study: - whether it is possible to boost factorization by tuning the value of $\beta$ - whether the resulting factorization aggregated similar states in terms of the policy's true return distribution; The MDP instances we employed were designed to be coherent with the objectives of the simulations and to be as interpretable as possible in terms of true return distributions, a feature which is hardly shared with more complex RL tasks (e.g., Atari). We believe that the adaptation of the proposed method to more practical contexts is out of the scope of the present paper. 
--- Rebuttal Comment 1.1: Title: Thank you for Response Comment: Firstly, I thank the authors for their detailed responses. However, after reading the other reviewers' responses as well as the authors' rebuttal, I am afraid I still hold divergent opinions from the authors on some of the key issues of this paper. * If the authors claimed that they are targeting representation learning by reducing the dimension of state features, why does this paper not consider real RL environments, like Atari games, where the state representation is critical enough? Although the authors claimed this is a theory paper, I do not think lacking important experiments on real envs will meet the standard of this venue. * For the theory contribution that the authors emphasized, I have been working on distributional RL for many years, and I do not think an analysis of KL divergence is directly related to typical distributional RL algorithms, including QRDQN, IQN, MMDRL. I noticed the authors' discussion in lines 302 to 311, which is actually critical in practical distributional RL. Since the title of this paper is distributional policy evaluation, it will be misleading, as a similar concept has already been investigated in distributional RL with an explicit distributional Bellman operator defined [1, 2]. That is why I suggest the contraction mapping of the Bellman operator should be important. * Based on my expertise in RL, MC-based algorithms are the sample-based extension of model-based Dynamic Programming (DP) algorithms. Since vanilla policy evaluation is clearly a DP version, it is natural to expect some properties from DP, like contraction, to be explicitly defined and discussed. * Another question is whether a direct analysis of the generalization error bound is valid in an RL theory paper, where policy evaluation is very different from supervised learning. Although there indeed exist some links, I am afraid this point has not been clearly stated in this paper. 
Based on these reasons, I do not think it is a solid theory paper, either, for a venue with high standards. In summary, I am afraid at this point I need to keep my score, based largely on my impression of this paper, although my evaluation diverges from the others'. I am also open to any discussion with the other reviewers and AC in the following period. [1] DSAC: Distributional Soft Actor Critic for Risk-Sensitive Reinforcement Learning [2] Interpreting Distributional Reinforcement Learning: A Regularization Perspective --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the reply. *If the authors claimed that they are targeting representation learning by reducing the dimension of state features, why does this paper not consider real RL environments, like Atari games, where the state representation is critical enough? Although the authors claimed this is a theory paper, I do not think lacking important experiments on real envs will meet the standard of this venue.* We believe that, for a theory paper, illustrative RL domains, such as the ones we provide, are more appropriate than applications to complex environments. Indeed, illustrative domains are able to clearly highlight the features of our approach, while in more complex environments the effect of our representation learning approach could be hidden behind other confounding effects (e.g., difficulty in the training of deep neural networks, choice of the proper architecture, overfitting). We also point out that the other reviewers suggested that further experiments would have been appreciated, but they believed that the contribution is still sufficient for acceptance. *For the theory contribution that the authors emphasized, I have been working on distributional RL for many years, and I do not think an analysis of KL divergence is directly related to typical distributional RL algorithms, including QRDQN, IQN, MMDRL. 
I noticed the authors' discussion in lines 302 to 311, which is actually critical in practical distributional RL. Since the title of this paper is distributional policy evaluation, it will be misleading, as a similar concept has already been investigated in distributional RL with an explicit distributional Bellman operator defined [1, 2]. That is why I suggest the contraction mapping of the Bellman operator should be important.* As the Reviewer noticed, **in the "Related Works" we highlighted that "Our work tries to answer different research questions compared to traditional policy evaluation in D-RL"**. Additionally, we highlighted the fact that the KL-divergence is not related to other standard D-RL quantities without further assumptions (311) and that we apply completely different techniques (304-305). MC-approaches (which are alternatives to TD methods and do not make use of the Bellman operator) are qualified methods for policy evaluation (see also the answer to the next point). Finally, we take the liberty of pointing out that **the fact that this work is different in many ways with respect to other D-RL works and does not follow the mainstream research in D-RL should be regarded as an element of novelty, rather than an issue**. --- Reply to Comment 1.1.2: Comment: *Based on my expertise in RL, MC-based algorithms are the sample-based extension of model-based Dynamic Programming (DP) algorithms. Since vanilla policy evaluation is clearly a DP version, it is natural to expect some properties from DP, like contraction, to be explicitly defined and discussed.* **MC-based algorithms** for policy evaluation have been extensively used in both the expected RL and D-RL communities. They **are not "sample-based extensions of model-based Dynamic Programming (DP) algorithms"** since they make no use of the Bellman operator and, consequently, they have never been studied in terms of contraction. 
In particular, the chapter "Monte-Carlo prediction" of Sutton and Barto's book [1] states "Whereas the DP diagram includes only one-step transitions, the Monte Carlo diagram goes all the way to the end of the episode. These differences in the diagrams accurately reflect the fundamental differences between the algorithms. [...] **In other words, Monte Carlo methods do not bootstrap**". The same phrasing is reported in the chapter "Learning the Return Distribution" of Bellemare, Dabney, and Rowland's book [2], and here as well no contraction properties of distributional Monte-Carlo vanilla policy evaluation are shown. [1]- Reinforcement Learning: An Introduction, Richard S. Sutton, Andrew G. Barto (2022) [2]- Distributional Reinforcement Learning, Marc G. Bellemare and Will Dabney and Mark Rowland (2023) *Another question is whether a direct analysis of the generalization error bound is valid in an RL theory paper, where policy evaluation is very different from supervised learning. Although there indeed exist some links, I am afraid this point has not been clearly stated in this paper. Based on these reasons, I do not think it is a solid theory paper, either, for a venue with high standards.* We believe we have indeed treated the link at the very beginning of Section 3: "The proposed approach turns distributional PE into a pure density estimation problem" (116-118), yet "[to] have access to a batch of i.i.d. samples, this is not necessarily restrictive: the result can be generalized for a single $\beta$-mixing sample path by exploiting blocking techniques" (135-138). It is common in the RL literature to derive theoretical results for the i.i.d. setting (like in supervised learning) since they can be comfortably converted into results for dependent samples under the $\beta$-mixing of the underlying chain [3,4]. 
Thus, **the fact that it is possible to formalize a PE problem so as to derive generalization error bounds when MC estimates are employed is indeed a valid result for RL and a part of the novel contributions of this work.** [3]- Rates of Convergence for Empirical Processes of Stationary Mixing Sequences, Bin Yu (1994) [4]- DualDICE: Behavior-Agnostic Estimation of Discounted Stationary Distribution Corrections, Ofir Nachum, Yinlam Chow, Bo Dai, and Lihong Li (2019) We hope that these answers might elucidate the remaining doubts and help the Reviewers discuss them in the following period.
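As a side illustration of the Monte-Carlo view defended in the reply above, return-distribution estimates can be built directly from full-episode rollouts, with no Bellman operator and no bootstrapping. A minimal sketch on a hypothetical episodic chain (the dynamics, reward probability, termination probability, and discount are invented for illustration and are not the paper's experimental setup):

```python
import random

random.seed(0)
GAMMA = 0.9

def sample_return(max_steps=50):
    """Roll out one full episode under a fixed policy and return its
    discounted return; the whole trajectory is used, nothing is bootstrapped."""
    g, discount = 0.0, 1.0
    for _ in range(max_steps):
        reward = 1.0 if random.random() < 0.5 else 0.0  # stochastic reward
        g += discount * reward
        discount *= GAMMA
        if random.random() < 0.3:                        # episode terminates
            return g
    return g

# Monte-Carlo distributional PE: the empirical distribution of sampled
# returns directly estimates the return distribution of the start state.
returns = [sample_return() for _ in range(5000)]
mean_return = sum(returns) / len(returns)  # its mean recovers the value V(s)
```

The list `returns` is the empirical return distribution itself (e.g., quantiles or a density estimate can be read off it), which is the estimand of MC-based distributional policy evaluation as discussed in the quoted textbook chapters.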
Summary: This paper proposes two algorithms, distributional maximum entropy policy evaluation (D-Max-Ent PE) and D-Max-Ent Progressive Factorization. The first algorithm applies the formulations of maximum entropy RL to the problem of distributional policy evaluation, while the second extends the first by adding a progressive state space factorization step that recursively decomposes/factorizes the state space by evaluating the maxent PE at each resolution to find a decomposition that minimizes the generalization error bound. Strengths: I found this paper to be generally well written and understandable. A few grammar issues are present, but nothing that seriously hurts legibility, and the math in the main paper flows intuitively. Distributional maxent policy evaluation makes sense as an approach to PE. I'm not familiar with recent literature on PE methods, but it seems like a good approach to the problem, and maxent methods have had enough empirical success in deep RL work that it seems like a reasonable approach here too. I thought combining maxent PE with an iterative state space decomposition to be a clever and intriguing idea as well. On paper it makes sense to me as a way to divide and conquer large state spaces, and I'm curious how this idea could extend to more complex tabular and continuous MDPs (as the authors note it should). Overall this seems like an interesting theoretical approach with appealing generality and the potential for extension to more complex domains. Weaknesses: The main issue I found with this paper is the relative simplicity of the domain studied, especially in the limited empirical evaluation. I recognize this is a theory paper, and many significant practical questions are yet to be addressed. In particular the problem of how to factorize a state space is very hard in general and a worthy topic in its own right (the authors do acknowledge this). 
However, I'd still like to see some more analysis of what sorts of MDPs the proposed algorithm is strong in versus weak in, using just the simple factorizations discussed here, ideally in the form of additional experiments on other gridworld MDPs besides the single example case. I think this idea is interesting, so as an experimentalist I'd like to see its practical behavior demonstrated a little more. I'm still inclined to recommend acceptance as the core ideas seem well motivated and interesting, but I think this would be a stronger paper with some more validation. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I don't see distributional policy evaluation defined explicitly anywhere (though I think it's mostly covered by equation 3 and surrounding material?), unlike maxent RL which is defined in detail. For clarity this seems like it would be nice to have. I like the use of specific grounding questions in the introduction, but I found the wording of the questions to be kind of leading. They seem phrased in such a way that the fact of asking them means the answer will be "yes", which feels not very useful rhetorically. Perhaps they can be reworded to be more thought provoking? Absent further experiments, it'd be nice to have some analysis and discussion of what regimes these algorithms will tend to perform well in versus struggle in. For example, what happens to the test case when the number of states increases, in the X versus Y dimension? Are there particular properties of an MDP that will lead to high performance/better error bounds? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The core of the limitations of this work are covered in my discussion of weaknesses. 
The authors are pretty clear about the extent of the contributions and the paper is pretty clear with its claims, however, which is always nice to see. I don't see any potential negative societal impact issues stemming from this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough review and for the constructive comments. *I don't see distributional policy evaluation defined explicitly anywhere (though I think it's mostly covered by equation 3 and surrounding material?), unlike maxent RL which is defined in detail. For clarity this seems like it would be nice to have.* Distributional PE was indeed left implicitly defined, but a **brief definition of the PE problem will certainly be added.** *I like the use of specific grounding questions in the introduction, but I found the wording of the questions to be kind of leading. They seem phrased in such a way that the fact of asking them means the answer will be "yes", which feels not very useful rhetorically. Perhaps they can be reworded to be more thought-provoking?* The questions were mostly used to drive the discussion in the desired direction, but, as you suggest, a more thought-provoking wording might help the rhetoric of the whole work. We would propose as a rewording something like: - Q1: Does employing return distributions offer tools beyond the standard distributional RL methods? - Q2: How are representation learning and policy evaluation intertwined? Do distributional methods offer a new way to highlight and exploit this connection? *Absent further experiments, it'd be nice to have some analysis and discussion of what regimes these algorithms will tend to perform well in versus struggle in. For example, what happens to the test case when the number of states increases, in the X versus Y dimension? Are there particular properties of an MDP that will lead to high performance/better error bounds?* This is indeed an extremely interesting point; we started addressing it (261-263) and left it as a future contribution (337-338). 
We will certainly add a section in the Appendix better explaining these considerations: - In the considered MDP instances, increasing X or Y would affect the results depending on the kind of change it would introduce in the return distribution landscape. If the change were consistent with the current factorization (for example, adding over the X dimension with the same grid layout), the results would follow; in general, **as long as the added dimensions maintain the same factorization structure in the return distribution, the results would follow.** - A factorizability structure was introduced but not directly addressed. In general, similarly to what is done in "On the Complexity of Representation Learning in Contextual Linear Bandits", Tirinzoni, Pirotta, Lazaric (2023), we believe that **a condition of "realizability" of the state-aggregation feature class for $\eta^\pi$ would be required**, or at least an approximate formulation thereof. In any case, for state-aggregation feature classes in particular, the **high-performance MDP instances the reviewer mentioned would be the ones where acting with one policy would induce approximately the same return distribution in many different states**, which are the ones that will end up being aggregated. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the added insight! Re: the rhetorical questions, I find that questions which should be answered either "yes" or "no" don't really engage the reader much, since the fact of asking the question in context tends to tell you what the answer is. The goal is for the reader to think about the question for a minute to get them invested in the answer you're about to provide, so in my experience something open-ended or hard-to-answer (but where an educated reader might be able to make a hypothesis) works well. 
In that vein, I like your proposed revised Q2, but Q1 still seems like the answer will be "yes" even if one doesn't understand the paper, since if the answer wasn't "yes" you probably wouldn't have written a paper about it. I'd suggest something like "What tools can return distributions provide for distributional RL?" or similar as an alternative. There are more interesting questions to dig into on this topic for sure, and I think the existing content is a good starting point. I hope to see more work on this topic to build on this foundation in future papers. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for the important suggestion; we will certainly change the questions to these more open-ended questions.
Summary: The authors proposed a novel Maximum Entropy framework for policy evaluation in a distributional RL setting that can explicitly take into account the features used to represent the state space while evaluating a policy. Based on the framework, the authors developed the D-Max-Ent Progressive Factorization algorithm, balancing the trade-off between bias and variance of the representation space. Strengths: Overall well written and well organized paper. The authors answered two interesting research questions of representation learning. The authors analysed the questions both theoretically and experimentally by proposing a novel maximum entropy framework for policy evaluation and a new algorithm based on the framework. Weaknesses: The experiment setting and results presentation are a bit hard to understand. I'm not very familiar with representation learning and distributional RL, so maybe this is standard, but I find the figures hard to interpret, as well as the importance of the paper's contribution based on the experiments. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I see the experiments are conducted on a Grid task; will the proposed algorithm also be applicable to continuous environments? Will the algorithm help with performance, training stability, sample efficiency? (Again, I'm not familiar with representation learning and distributional RL so let me know if my question is not the main concern in the field.) Maybe add some legends and more explanations about the figures to help present the results and emphasize the contribution. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Experiments designed in the paper help to support some theoretical analysis of the proposed algorithm, but I'm not sure about the benefits/advantage of the algorithm in a wider range of tasks based on my understanding of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the points outlined, in particular the ones about the readability of the figures. *I see the experiments are conducted on a Grid task; will the proposed algorithm also be applicable to continuous environments? Will the algorithm help with performance, training stability, sample efficiency?* Indeed, the **proposed method works with continuous spaces as well**, yet the factorization rule is potentially less intuitive since it needs to cut continuous spaces. As for your second question, **reducing the input space dimension of the decision-making problem would certainly help sample efficiency**, provided that the new representation does not introduce bias, as the proposed method ensures. On the other hand, how to combine the proposed framework with existing methods in order to boost performance was left for future work. *Maybe add some legends and more explanations about the figures to help present the results and emphasize the contribution.* Due to space limitations, the description of the experiments was limited to the core, and the figures were designed to contain all the needed information, but **we intend to extend this whole section in the additional page of the Camera Ready version**. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their response to my questions. I'll maintain the score.
Summary: This paper takes a maximum entropy based approach to learning the return distribution of a policy. This framework learns a distribution with maximum entropy subject to matching the expectation of a certain set of functions, referred to as the feature functions. They adapted a classical maximum entropy error bound to measure the approximation error of their learnt distribution to the true return distribution in the KL divergence. They propose a framework where the features are based on underlying state abstractions, and propose an algorithm which progressively learns a return distribution and underlying features. They then evaluate their algorithm in various small scale experiments. Strengths: - Using a maximum entropy approach to learning the return distribution is an interesting direction, and novel from the previous parametric approaches used. - This paper takes a step towards studying the interaction of distributional RL and representation learning. - The theoretical results are cleanly proved, with very clear and organized arguments. Weaknesses: - The paper takes an RL agnostic approach, and learns an approximate return distribution without taking into account the underlying RL decision problem (i.e. Bellman operators aren't used, nor reward nor transition information, etc). This seems contrary to the decision-aware RL direction, which broadly states that we should focus learning capacity on aspects which are important for the decision problem. Some guarantees that learning a model in this way comes with some benefits for the RL problem would improve the contribution. - To my knowledge, it appears that Algorithm 1 assumes knowledge of the true return distribution (or at least access to samples from it), which is a potentially unrealistic assumption. I may be misunderstanding however, so I listed this as a question below. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - Is there a way to connect this method to the underlying RL problem? - In Algorithm 1, how is $\mu_j(\mathcal{H}_N )$ evaluated? To my knowledge it seems we require knowledge of the true return distribution $\eta^\pi$ (or at the very least samples from it). Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors discuss limitations, such as their method requiring access to IID samples. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the important points outlined. *The paper takes an RL agnostic approach, and learns an approximate return distribution without taking into account the underlying RL decision problem (i.e. Bellman operators aren't used, nor reward nor transition information, etc). This seems contrary to the decision-aware RL direction, which broadly states that we should focus learning capacity on aspects which are important for the decision problem. Some guarantees that learning a model in this way comes with some benefits for the RL problem would improve the contribution. Is there a way to connect this method to the underlying RL problem?* The proposed method is agnostic with respect to the nature of the underlying decision problem, which has advantages and disadvantages. Indeed, as the reviewer noted, we do not make use of Bellman operators, but just of return distributions, as commonly done in Monte Carlo approaches to RL. **While we agree that this might overlook some of the MDP structure, it has the advantage of (i) leading to a simpler mathematical treatment and (ii) being applicable even to non-Markovian environments**. As the reviewer noted, we see this work as the first contribution taking "a step towards studying the interaction of distributional RL and representation learning". **Future work should certainly investigate the interaction between Temporal Difference (TD) distributional methods and representation learning**. *To my knowledge, it appears that Algorithm 1 assumes knowledge of the true return distribution (or at least access to samples from it), which is a potentially unrealistic assumption. I may be misunderstanding however, so I listed this as a question below. In Algorithm 1, how is $\mu_j(H_n)$ evaluated? 
To my knowledge it seems we require knowledge of the true return distribution (or at the very least samples from it).* The proposed algorithm does not require evaluating $\mu_j(\mathcal H_n)$, but rather $\hat \mu_j(\mathcal H_n)$, which corresponds to the empirical estimate of the value of the feature function $f_j$ as in Equation 6, computed from $N$ trajectory samples. In this sense, **it indeed requires trajectories of experience coming from the policy being evaluated, but not full knowledge of the environment transition model (and hence of the return distribution)**. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for their response, and for clearing up my confusion regarding $\mu_j(\mathcal{H}_n)$. I am inclined to maintain my score.
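To make the estimation step in this rebuttal concrete, here is a minimal, hypothetical sketch (not from the paper) of the max-ent construction: returns sampled from the evaluated policy supply empirical feature means, and a maximum-entropy distribution over a discretized return support is fitted to match them via the convex dual. The support grid, the two moment features, and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(0)

# Stand-in for returns collected from N trajectories of the evaluated policy.
returns = rng.normal(loc=1.0, scale=0.5, size=200)

# Discretized support for the return distribution, with two illustrative
# feature functions f_1(z) = z and f_2(z) = z^2 (first two moments).
support = np.linspace(-2.0, 4.0, 61)
features = np.stack([support, support**2])                # shape (J, m)
mu_hat = np.array([returns.mean(), (returns**2).mean()])  # empirical feature means

def dual(lmbda):
    # Convex dual of the max-ent problem: log-partition minus the linear term.
    return logsumexp(lmbda @ features) - lmbda @ mu_hat

res = minimize(dual, x0=np.zeros(2), method="BFGS")

# The primal solution is an exponential family over the support.
logits = res.x @ features
p = np.exp(logits - logsumexp(logits))

# At the dual optimum, the fitted distribution matches the empirical means.
print(np.allclose(p @ features.T, mu_hat, atol=1e-3))
```

With moment features the fitted distribution is a discretized Gaussian; richer (e.g. state-aggregation-based) feature functions slot in by changing the `features` matrix only.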
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper introduces a new framework called Distributional Maximum Entropy Policy Evaluation (D-Max-Ent PE) for policy evaluation in a distributional reinforcement learning setting. The framework considers the complexity of the representation used to evaluate a policy and incorporates it into a generalization-error bound. The authors then propose an algorithm called D-Max-Ent Progressive Factorization, which uses state-aggregation functions to progressively refine the state space representation while balancing information preservation and reduction in complexity. Numerical simulations demonstrate the effectiveness of the proposed algorithm and its relationship with aggregations and sample regimes. Strengths: This paper derived a practical formulation of the generalization error bound and proposed an algorithm, Distributional Max-Ent Progressive Factorization, that adaptively finds a feature representation to optimize the generalization error bound. Through illustrative simulations, the authors demonstrated the empirical behaviors of these approaches and explored the relationships between hyperparameters and the sampling regime. Weaknesses: 1. This paper lacks experiments in practical reinforcement learning tasks, such as Atari. The authors should conduct more experiments to demonstrate the efficiency of the proposed policy evaluation method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: None Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: *This paper lacks experiments in practical reinforcement learning tasks, such as Atari. The authors should conduct more experiments to demonstrate the efficiency of the proposed policy evaluation method.* We thank the reviewer for this feedback. We remark that the main contribution of our work is theoretical in nature. Indeed, **the objective of the simulations was to illustrate two essential features of the proposed method that were only suggested by the theoretical result**, so as to close all the topics directly or indirectly introduced in the theoretical study: - whether it is possible to boost factorization by tuning the value of $\beta$, - whether the resulting factorization aggregated similar states in terms of the policy's true return distribution; rather than demonstrate its applicability to general reinforcement learning tasks. The MDP instances we employed were designed to be coherent with the objectives of the simulations and to be as interpretable as possible in terms of true return distributions, a feature which is hardly shared by more complex RL tasks (e.g., Atari). We believe that the adaptation of the proposed method to more practical contexts is beyond the scope of the present paper.
Towards the Universal Learning Principle for Graph Neural Networks
Reject
Summary: The authors propose a spectral GNN with geometrically decaying weights $\alpha^k$ and a $P$-hop polynomial basis. They study the Lipschitzness and generalization bound of the model. Experimental results demonstrate the good performance of the proposed method over baselines. Strengths: (+) Interesting generalization analysis. Weaknesses: (-) The recent spectral GNN literature is not discussed or compared, where the advanced methods are also polynomial graph filters but with a more careful design (i.e., different bases). (-) The motivation of the proposed new designs $\alpha$ and $P$-hop seems not convincing. (-) The spectral GNN baselines are completely missing in the experiments. (-) The experiments do not demonstrate the benefit of the proposed design besides prediction accuracy. Since the authors talk about polynomial graph filters, I wonder how the proposed methods can learn different graph filters. ## Detail comments I have mixed feelings about this paper. On one hand, I find the discussion on requiring a Lipschitz graph filter and the generalization analysis interesting. On the other hand, I feel the authors did not conduct a comprehensive literature survey on the recent development of spectral GNNs, which makes me feel the paper is a bit outdated. Note that the idea of learning polynomial graph filters has been extensively studied recently, as the capability of learning high-pass graph filters enables good performance on heterophilic graph datasets [2]. Since GPR-GNN (using a monomial basis), many advances have been made. For instance, BernNet [He et al. 2021] argues that using a Bernstein polynomial basis has better properties. [Wang et al. 2022] further improve this line of work by using a Jacobi polynomial basis. Both of them not only demonstrate great performance on homophilic and heterophilic datasets but also conduct experiments to directly examine how well spectral GNNs can learn each target graph filter, such as low-pass, high-pass, band-pass, and more. 
That is, these works show how to make $g_{\theta}$ learn better with different polynomial bases and thus constraints on the coefficients. I feel the proposed APGNN is very relevant to this literature and should be compared with them in both related works and experiments. Furthermore, since APGNN is claimed to be a "universal principle", I wonder how well it can learn on heterophilic datasets and different graph filters (i.e., the experiment in [Wang et al. 2022] Table 1). Also, I find the motivation for using $\alpha$ and $P$-hop not very convincing. For $\alpha$, I agree that if we set $K = \infty$ then the geometrically decaying weight is needed. However, as the authors also mentioned in Section 4, we cannot use $K= \infty$ in practice. In this case, I wonder what benefit $\alpha$ would bring us. Can we not just learn it automatically? On the other hand, the argument for motivating the use of $P$ seems problematic. Intuitively, we should compare two methods with the same number of hops ($K$). From this point of view, $P$ does not help in any sense. Also, if we fix $T$ (the maximum hop range), then the effect of increasing $P$ is merely reducing $K = T//P$, which apparently will make the generalization bound smaller. However, this is at the cost of increasing the term $\hat{R}$, otherwise one should simply set $K=0$. Even from the experiment section, we can see that choosing $\alpha\approx 1$ gives roughly the same best-tuned results (Figure 3-(b)). A similar observation can be made for $P=1$ in Figure 3-(c), albeit the difference seems to be a bit larger. According to the argument above, I conjecture that the benefit of $P$ comes from having a smaller number of learnable weights (in the graph filter $g_{\theta}$). It is interesting to investigate more deeply how we can improve along this line, but the current results and explanation are unsatisfactory. 
Lastly, I also hope the authors can repeat the experiments of Figure 3 on heterophilic datasets and even on experiments of learning different graph filters (maybe too much to do though). I feel the findings therein may give the authors some ideas to further improve the work. Minor: How can the authors answer "yes" to the reproducibility question but not include their code (only after acceptance)? I feel the authors should be more serious about answering the checklist questions. ## References [He et al. 2021] BernNet: Learning arbitrary graph spectral filters via Bernstein approximation, He et al. NeurIPS 2021. [Wang et al. 2022] How powerful are spectral graph neural networks, Wang et al. ICML 2022. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Detail comparison with recent spectral GNN literature in both related works and experiments. 2. Conduct experiments on heterophilic datasets. 3. Conduct experiments in learning different graph filters. 4. Explain the benefit of $\alpha$ in the finite $K$ regime. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: I do not find potential negative social impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
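As background for the basis discussion above, here is a small self-contained sketch (my own illustration, not taken from either paper) of the Bernstein basis used by BernNet-style filters: on the normalized-Laplacian spectrum $[0, 2]$ the basis is a partition of unity, so non-negative coefficients immediately yield a filter bounded by the coefficient range — the kind of property the reviewer alludes to. The degree and random coefficients are arbitrary choices.

```python
import numpy as np
from math import comb

K = 10
lam = np.linspace(0.0, 2.0, 101)  # eigenvalue range of a normalized Laplacian
x = lam / 2.0                     # map [0, 2] onto [0, 1]

# Bernstein basis of degree K evaluated over the spectrum: shape (K+1, len(lam)).
basis = np.stack([comb(K, k) * x**k * (1 - x)**(K - k) for k in range(K + 1)])

rng = np.random.default_rng(1)
coef = rng.uniform(0.0, 1.0, size=K + 1)  # non-negative filter coefficients
g = coef @ basis                          # filter response g(lambda)

# Partition of unity: the basis sums to one at every eigenvalue, so g(lambda)
# is a convex combination of the coefficients and stays inside their range.
print(g.min() >= coef.min() - 1e-12, g.max() <= coef.max() + 1e-12)
```

This boundedness is what makes it easy to constrain a Bernstein-parameterized filter to be, say, non-negative or Lipschitz, compared with a monomial parameterization where coefficient signs interact.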
Rebuttal 1: Rebuttal: Thanks for your comments and detailed suggestions to improve our work. In response to the points raised in your comments, we give the following discussion. 1. To explain the necessity of $\alpha$, we look back to the motivation of the proposed learning framework. Consider the graph filter $g(\lambda)=\sum_{k=0}^K\theta_k(1-\lambda)^k$. We observe inconsistencies between GNNs and their infinite-depth versions, i.e., the graph filter cannot converge or possess good analytic properties for $K\to\infty$. This is the reason why some GNNs fail to perform better when increasing the order $K$. This motivates us to construct a framework that allows GNNs to remain consistent with their "infinite-depth" version. Therefore, we propose the learning principle requiring convergence and Lipschitz continuity of the graph filter. From this starting point, we motivate $\alpha$, the exponential weight decay, following this principle. The essential purpose of $\alpha$ is to preserve this "consistency". This allows us to take $K\to\infty$, and also allows us to apply large $K$ in practice. On the other hand, a proper $\alpha$ reduces the generalization risk by Proposition 1, which establishes the benefit of $\alpha$ in theory. Besides, $\alpha$ is not a learnable parameter. It is proposed as a hyperparameter to meet the learning principle. Even if $\alpha$ were learnable, it could cause exploding gradients during optimization. 2. We are thankful for your comprehensive advice on reviewing the related works. We will include these works in the literature review and experiments. Additional experiments are conducted to compare APGNN with spectral GNNs such as GPR-GNN and BernNet. We also include the heterophilic datasets: Cornell, Wisconsin, and Texas. The experimental results are shown in the attached PDF. The results show that APGNN still outperforms GPR-GNN, BernNet, and other compared methods on most datasets (both homophilic and heterophilic datasets). 
This suggests APGNN can learn more appropriate graph filters in node classification tasks. We also show the learned coefficients $\beta$ on different datasets. 3. The experiments on $P$-hop with heterophilic datasets are conducted and shown in the attached PDF. --- Rebuttal Comment 1.1: Comment: I am not able to see the general response and the attached pdf that the authors mentioned. I keep my score unchanged.
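The consistency argument in this rebuttal can be checked numerically. With bounded coefficients, the partial sums of $\sum_k \theta_k (1-\lambda)^k$ diverge at $\lambda = 0$, whereas adding a geometric weight $\alpha^k$ with $0 < \alpha < 1$ makes them converge; for constant $\theta_k = \theta$ the limit is the geometric series $\theta / (1 - \alpha(1-\lambda))$. A minimal sketch under those illustrative assumptions (constant coefficients and sample values are mine, not from the paper):

```python
import numpy as np

lam = np.linspace(0.0, 2.0, 5)  # eigenvalues of a normalized Laplacian lie in [0, 2]
theta = 1.0                     # constant, bounded coefficients theta_k = 1
alpha = 0.5                     # exponential decay rate, 0 < alpha < 1

def partial_sum(K, alpha):
    # g_K(lambda) = sum_{k=0}^{K} theta * alpha^k * (1 - lambda)^k
    k = np.arange(K + 1)[:, None]
    return (theta * alpha**k * (1.0 - lam)**k).sum(axis=0)

# With decay, the filter converges to the closed-form geometric-series limit...
limit = theta / (1.0 - alpha * (1.0 - lam))
print(np.allclose(partial_sum(200, alpha), limit))

# ...whereas without decay (alpha = 1) the value at lambda = 0 grows like K + 1.
print(partial_sum(200, 1.0)[0])
```

The same computation shows why $|\alpha(1-\lambda)| < 1$ over the whole spectrum is the condition that makes the infinite-depth filter well defined.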
Summary: This paper theoretically studies the criterion for the graph filter formed by a power series using Lipschitz smoothness and then proposes a novel Adaptive Power GNN architecture. Some convergence and generalization analyses are covered. Experiments also show the advantages of the proposed methods, with some ablation studies on some parameters. Overall, it is a work that combines theory and practice. I think this work can benefit from some revisions. I am going to raise my score if my concerns are addressed. I would like to see a revision in the manuscript (it is OK to appear in the appendix) or a plan for the revisions (if uploading a revision is not allowed). ------------------------------------------------------------------------------------------------ After rebuttal, I increased my score to 6, which is based on my first impression and the responses. I am generally satisfied with the responses. Strengths: 1. It is a paper that combines both theory and experiments. I like the logic from theory to practice. Hence, the writing is excellent and clear to follow. 2. As a reviewer from the theoretical area, I think the generalization analysis on non-GCN graph neural networks is novel and exciting. One great part is that this analysis seems to cover a lot of existing graph neural networks. So it is general. 3. The experimental results support the theory and show the advantages. Weaknesses: 1. Some parts of the paper are not very clear. It is mainly around lines 235 to 242. (a) For example, in line 239, it says, "larger $\alpha$ leads to a higher bound." It is not clear what this "bound" refers to. At first, I thought it referred to $1-\alpha^K$ in line 235 or $1-\alpha^{T/P}$ in line 237. Later, I feel it should be the last two terms (or the second to the last term) in Equation 24. This needs clarification. (b) Another thing: at the end of this paragraph, it claims "$\alpha$ should be moderate to...". The discussion about small $\alpha$ is missing. 
I guess it is because a small $\alpha$ makes the filter trivial and not powerful (line 157). This also needs clarification. 2. Some discussion of theoretical work on the generalization analysis of Graph Neural Networks is needed. I would like to know the comparison and theoretical novelties beyond existing works. Here are some recent related works. [1] Esser et al., 2021, "Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks." [2] Cong et al., 2021, "On Provable Benefits of Depth in Training Graph Convolutional Networks." [3] Li et al., 2022, "Generalization guarantee of training graph convolutional networks with graph topology sampling." [4] Zhang et al., 2023, "Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks." [5] Tang et al., 2023, "Towards Understanding Generalization of Graph Neural Networks." Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. I think your theory covers other GNNs in Table 1. Can you use your Theorem 2 or another theory to show your proposed GNN is better than other GNNs in Table 1? It does not have to be very rigorous. I expect to see a comparison between different GNNs about $M$ and $L_M$ in equation 23. 2. I think the multiple P-hop message-passing strategy should work better on some heterophilous graphs. The citation graphs are homophilous. Could you please show some experimental results on heterophilous graph datasets? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: There is no potential negative societal impact of this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We really appreciate your strong endorsement and constructive suggestions for our work! As you mentioned, this work provides a comprehensive framework with generalization analysis to pursue deeper and more effective GNNs. Here we will address your concerns and state the revision plan. 1. Your understanding of the analysis around lines 235 to 242 (in "weakness 1") is correct. The "bound" means the last two terms in equation (24), which indicate the complexity and the "quantization error" from the continuous graph to the discrete graph. The reason why we do not use extremely small $\alpha$ is, as you said, to avoid the trivial filter. From the spatial perspective, too small an $\alpha$ also limits the information passing over the graph. We plan to add detailed clarification in the revision. 2. We are thankful for your suggestion on the review of GNNs' generalization. The related works are reviewed. [1] and [5] present the generalization with transductive Rademacher complexity on node classification tasks (note that their generalization error is only measured over the testing set). In contrast, [2] analyzes the transductive uniform stability of GNNs (this is also related to [6]). Considering the stochastic hypothesis, Ma et al. use the PAC-Bayesian theorem to analyze the subgroup generalization bound of GNNs [7]. [3] and [4] investigate the generalization guarantee of GNNs via topology properties in the graph. Different from these works, we present the generalization guarantee over the whole sample space by extending the graph into its continuous version. This is not restricted to the testing set and allows more general analysis with inductive learning. 3. Here we show some applications of Theorem 2. 1) PPNP: $M=1$, $L_M=\beta$, where $\beta>0$ is the hyperparameter of PPNP. 2) DAGNN: $M=K$, $L_M=O(K^2)$, where $K$ is the maximal order. In this case, we can see that the complexity term of equation (24) becomes $O(K\log K)$. 
Therefore, it tends to show weaker generalization compared with APGNN as $K$ increases. 3) GPR-GNN: $M=1$, $L_M=K$. In this case, the last two terms of the RHS in (24) become $O(\log K)$ and $O(K)$. We will add the above results as well as the analysis to the revision. 4. We conduct extra experiments on some heterophilic graph datasets, including Cornell, Texas, and Wisconsin. These results will be appended to the revision. Additionally, we will give a further analysis of the tendency of $\beta$. The typos and some language mistakes will be corrected in the revision. We thank you again for the meaningful comments and suggestions, which inspire us to explore more interesting insights in this work. [1] Esser et al., 2021, "Learning Theory Can (Sometimes) Explain Generalisation in Graph Neural Networks." [2] Cong et al., 2021, "On Provable Benefits of Depth in Training Graph Convolutional Networks." [3] Li et al., 2022, "Generalization guarantee of training graph convolutional networks with graph topology sampling." [4] Zhang et al., 2023, "Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks." [5] Tang et al., 2023, "Towards Understanding Generalization of Graph Neural Networks." [6] Verma et al., 2019, "Stability and generalization of graph convolutional neural networks." [7] Ma et al., 2021, "Subgroup generalization and fairness of graph neural networks." --- Rebuttal Comment 1.1: Comment: Thank you for the response. My first impression of this paper is quite good. I expect a better performance on heterophilous graphs and hope it can improve this paper. I am generally satisfied with other answers to my questions. I have increased my score to 6, which is my initial evaluation.
Summary: This paper studies polynomial filters in GNNs. Specifically, the authors propose an Adaptive Power GNN (APGNN) which employs exponentially decaying coefficients. A theoretical generalization analysis of the proposed framework is conducted. Experiments demonstrate that the proposed method can outperform some baselines on the selected datasets. Strengths: 1. The background provided is detailed and aids readers in comprehending the nuances of the paper. 2. The authors offer a generalization analysis of the proposed method. 3. The paper is well-written and easy to follow. Weaknesses: 1. The primary concern is the lack of novelty. The concept of polynomial filters in GNNs is well established. Additionally, the paper looks at infinite orders, a regime where any function can be approximated by infinite polynomial series. Also, the topic of GNN generalization has already been examined in several papers, such as [1], [2], and [3]. 2. The method proposed in this paper comprises decoupled GNNs, which implies that the framework and analysis may not be applicable to coupled cases, such as GCN and GAT. 3. The hyperparameters $\alpha, \beta$ are hard to choose. For instance, GPR-GNN struggles to learn these hyperparameters effectively without proper initialization. How do the authors suggest these hyperparameters be selected? 4. The authors opted for fixed data splits across all datasets, which may predispose the model to overfitting on the validation/test sets. Have the authors considered random data splits? Additionally, the study could benefit from incorporating larger datasets like the OGB datasets. [1] Baranwal, Aseem, Kimon Fountoulakis, and Aukosh Jagannath. "Graph convolution for semi-supervised classification: Improved linear separability and out-of-distribution generalization." arXiv preprint arXiv:2102.06966 (2021). [2] Verma, Saurabh, and Zhi-Li Zhang. "Stability and generalization of graph convolutional neural networks."
Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2019. [3] Ma, Jiaqi, Junwei Deng, and Qiaozhu Mei. "Subgroup generalization and fairness of graph neural networks." Advances in Neural Information Processing Systems 34 (2021): 1048-1061. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please refer to the weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for your positive comment on our method, but it seems that the reviewer misunderstood some important facts about our work. We will further explain these points and address your concerns one by one. 1. It should be noted that the main topic of this paper is the design principle of graph filters consistent with infinite order. To the best of our knowledge, this problem has not been well investigated before. There are numerous methods to explain the generalization of GNNs; though the existing works have given some ways to understand it, the theoretical properties of GNNs are still not thoroughly studied. In contrast, we explore generalization with a learnable graph filter from the perspective of the continuous graph. This provides a novel viewpoint for understanding the properties of GNNs in inductive learning, which is significantly different from previous research, including your listed references. We still thank you for the advice on reviewing the generalization of GNNs. 2. The key to our work still lies in the design of graph filters. The reason we use the decoupled setting is convenience of explanation and implementation. In fact, our model does not contradict coupled GNNs, because it can be treated as one layer in a coupled GNN. Therefore, we can easily extend our framework to coupled GNNs, like GPR-GNN [1]. 3. To be clear, the parameter $\beta$ is a weight to be learned, not a hyperparameter. Besides, the hyperparameter $\alpha$ is tuned via grid search in our experiments. This is also shown in the experiment section. 4. The fixed data split is well recognized by researchers and has been used in many works, so we did not consider random splits in our work. We have conducted extra experiments on heterophilic datasets with a random split. We will consider larger datasets in extended experiments. [1] Chien et al., 2021, "Adaptive universal generalized pagerank graph neural network."
--- Rebuttal Comment 1.1: Comment: Thanks for the clarification from authors. I didn't see any further results. I tend to keep my score.
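To make the filter design discussed in this thread concrete, here is a minimal, hedged sketch of a decoupled polynomial graph filter with exponentially decaying coefficients, in the spirit of APGNN as described in these reviews. The function names (`normalized_adj`, `apgnn_filter`) and the toy graph are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def normalized_adj(A):
    """Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def apgnn_filter(A, H, beta, alpha=0.9):
    """Truncated power-series filter: sum_k alpha^k * beta_k * S^k H.
    The alpha^k factor enforces exponentially decaying coefficients,
    which is what makes the series converge as the order K grows."""
    S = normalized_adj(A)
    out = np.zeros_like(H)
    P = H.copy()               # S^0 H
    for k, b in enumerate(beta):
        out += (alpha ** k) * b * P
        P = S @ P              # advance to S^{k+1} H
    return out

# toy 4-node path graph with 2-dimensional features
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
H = np.random.default_rng(0).normal(size=(4, 2))
Z = apgnn_filter(A, H, beta=np.ones(5))
```

In a decoupled model, `H` would be the output of an MLP on raw features; here it is random data for illustration.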
Summary: In GNNs, designing a graph filter or propagation mechanism plays a critical role. Spectral-based GNNs formulate graph filters in the graph Fourier domain, and those filters are usually in the form of polynomials or power series. The manuscript argues that a well-defined graph filter should be convergent when represented as power series and have desirable analytic properties such as the Lipschitz continuity. Graph filters of four existing GNNs are analyzed based on the proposed conditions; it is shown that they have some limitations. The proposed APGNN introduces an exponentially decaying rate to coefficients of the learnable graph filter. Due to the decaying rate, the filter of APGNN is guaranteed to converge, and APGNN can theoretically be extended to an infinite-depth GNN. The filter of APGNN also satisfies the Lipschitz continuity, implying that the model is stable and robust. A mathematical analysis of the generalization bound of APGNN is provided. In practice, a truncated polynomial instead of the power series is utilized as the filter of APGNN to avoid the infinite number of learnable parameters. A multiple P-hop strategy, which uses the P-th power of the adjacency matrix, is introduced to enlarge the receptive field of the graph filter while maintaining the same computational complexity. Experimental results on five real-world benchmark datasets show that APGNN has comparable performance to 11 baselines in accuracy. Strengths: 1. The manuscript points out that there exist inconsistencies between GNN models and their infinite-depth versions. A criterion for the polynomial-based graph filter is proposed with two constraints: the convergence and the Lipschitz continuity of the graph filter. The criterion reflects the stability of a GNN model with respect to the input graph and the consistency between a GNN and its infinite-depth version. 2. The motivation of APGNN is well described. 
The criterion for a polynomial-based graph filter is proposed, and the formulation of APGNN is provided as an instantiation of a graph filter satisfying the criterion. A truncated polynomial filter is suggested for practical use where the number of parameters should be finite. 3. The effectiveness of the P-hop strategy is shown in experiments. In Figure 3 (c), the P-hop strategy allows the model to keep the performance while decreasing the computational cost by omitting the terms that are not multiple of P from the graph filter polynomial. Weaknesses: 1. There should be a comparison between APGNN and other GNN methods with an infinite depth. Existing approaches utilize a residual connection [1] or formulate the state of equilibrium [2, 3] to model GNNs with an infinite depth. Those methods should be theoretically and empirically compared to APGNN. [1] Chen et al., Simple and Deep Graph Convolutional Networks, ICML 2020. [2] Gu et al., Implicit Graph Neural Networks, NeurIPS 2020. [3] Liu et al., EIGNN: Efficient Infinite-Depth Graph Neural Networks, NeurIPS 2021. 2. The node classification accuracy of APGNN is relatively low compared to the existing GNN models such as GCNII [1] and G$^2$CN [4]. For example, the accuracy of GCNII is 85.5 on Cora, whereas the accuracy of G$^2$CN is 73.8 on Citeseer. In addition, GNN models with an infinite depth should be compared to APGNN. [4] Li et al., G$^2$CN: Graph Gaussian Convolution Networks with Concentrated Graph Filters, ICML 2022. 3. The authors mentioned the hyperparameter sensitivity of PPNP in lines 130-131: "However, the performance of PPNP is heavily dependent on the hyperparameter $\beta$, which must be carefully tuned to achieve optimal performance." However, the same applies to APGNN since the node classification performance heavily depends on $\alpha$, a decay weight. Figure 3(b) shows that the accuracy gap between the best and the worst cases is more than 20% on Cora. 4. 
The polynomial order K is fixed to 10 for some baseline methods, such as ChebNet and DAGNN. However, the optimal value of K can vary depending on the baseline methods. In [1], the node classification results with various depths (Table 3 of [1]) show that the optimal depths are different for each combination of model and dataset. Therefore, the authors should show the node classification results with various polynomial orders, as in [1], or tune the polynomial order for each baseline method, as in [5]. [5] Liu et al., Towards Deeper Graph Neural Networks, KDD 2020. 5. Minor Comments - In line 54, $i = [n_l]$ should be modified to $i \in [n_l]$. - There is an inconsistency between Equations (2) and (3). - In line 93, "L-Lipschtiz" should be changed to "L-Lipschitz". - In line 136, “DAGGN” should be modified to “DAGNN” - In Equation 13, $(-1)^k$ should be modified to $(-1)$. - $\rho$ indicates the spectral radius in Theorem 1, whereas it represents a probability measure in Section 4. - In Lines 192-195, there is no description and definition of $c_{\mathcal{X}}$, $c_{\mathcal{U}}$, $c_{\mathcal{L}}$. - In line 206, ‘the hypothesis set over __ is described as’ should be modified to ‘the hypothesis set over \mathcal{X} is described as’. - In line 266, APPNP is mentioned twice. Technical Quality: 2 fair Clarity: 1 poor Questions for Authors: 1. Is there any tendency on the learned coefficients of the graph filter $\beta$? APGNN learns the coefficients $\beta$ during training to deal with arbitrary filter shapes. A higher $\beta$ value means that the corresponding hop of neighbors is more important than a hop with a lower $\beta$ value. Thus, by analyzing the learned $\beta$, more explanations about the neighborhood depth can be added, such as the appropriate polynomial order for the model or important hops of the dataset. 2. What is the total runtime of APGNN concerning K? As K grows, the proposed graph filter will become denser since its receptive field becomes larger. 
This aspect can increase the computation cost of applying the proposed graph filter. Meanwhile, the graph filter of GCN is relatively sparse since it sets K to 1. Therefore, comparing the runtimes of GNN models as K increases is encouraged. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 1 poor Contribution: 2 fair Limitations: 1. There might be some cases where capturing the long-range dependencies is essential to understand the context of the given graph [6]. However, the proposed model might not be appropriate for those graphs. To capture long-range dependencies, either the decay weight $\alpha$ has to be large enough, or $\beta$ assigned to the low-order should be small. The former increases the generalization bound of the model, and the latter might lead to a vanishing gradient problem. [6] Dwivedi et al., Long Range Graph Benchmark, NeurIPS 2022. 2. The performance of the model is highly sensitive to the choice of $\alpha$. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for your detailed and constructive suggestions. We have carefully considered your questions and hope the following elaboration addresses your concerns. 1. We discuss the essential difference in the meaning of "infinite depth" across the mentioned works. GCNII stacks multiple graph convolutional layers equipped with residual connections and identity mappings; the authors claim that, in theory, the number of stacked layers can go to infinity. IGNN and EIGNN share a similar idea of finding a state of equilibrium. The "infinity" behind these methods refers to the stationary point (fixed point), i.e., the state at which further propagation does not change the data distribution. In our work, we propose a framework to design graph filters formed as power series (infinite-order polynomials), where convergence and Lipschitz continuity are required. This leads to a principle for constructing GNNs consistent with their "infinite" versions. APGNN is designed following this principle, and thus its order $K$ can go to infinity (at least theoretically). Though $K$ is set to a finite number in practice, due to the expressive ability of the power series, APGNN can approximate any appropriate filter to controllable precision by learning (equation (14)). The experimental results of GCNII, IGNN, and EIGNN are shown in the attached PDF. 2. We have added experiments comparing APGNN with more related methods. 3. We cannot compare the sensitivity of different methods by setting their hyperparameters to extreme values, because hyperparameters should work within a rational range. In fact, we have discussed the property of $\alpha$ in APGNN in sections 3.3 and 4. An extremely small $\alpha$ tends to induce a trivial graph filter; therefore, small $\alpha$ is not recommended in practice. In our experiments, the performance of APGNN is relatively stable when $\alpha$ changes in $[0.6, 0.99]$ on most datasets.
These results not only suggest robustness to $\alpha$ but also demonstrate its reasonable range. Therefore, the accuracy gap alone cannot establish the sensitivity of hyperparameters. In contrast, the performance of PPNP varies by more than 20$\%$ on Cora when $\alpha$ changes in $[0.6, 0.99]$. We also observed similar phenomena on other datasets during our empirical study, i.e., the performance of PPNP might drop rapidly with a slight change of $\alpha$. Therefore, we say the performance of PPNP heavily depends on the hyperparameter. 4. In the early experiments, we tuned the maximal order $K$ for ChebNet and DAGNN. We observed no significant performance difference between $K=10$ and larger $K$; thus, we directly set $K=10$ for these methods. 5. Thanks for the suggestion to discuss the tendency of the learned coefficients $\beta$. The experimental results show that the sign of the learned coefficients $\beta_i$ tends to be the same for all $i\in[K]$ on homophilic datasets such as Cora, Citeseer, and Pubmed. But for heterophilic datasets like Cornell, Wisconsin, and Texas, we usually obtain $\beta$ with both positive and negative elements. The reason might be that feature propagation on heterophilic graphs is not always beneficial; thus, the propagation of some hops should take the opposite sign to suppress inappropriate propagation. 6. We conducted extra experiments on the time comparison with different $K$. APGNN costs less time during the training process than other methods, and the time tends to grow linearly as $K$ increases. The experimental results are shown in the attached PDF. 7. Long-range dependence is not part of our study. However, we can still capture such dependence by setting a relatively large $\alpha$ (e.g., 0.95). 8. We really appreciate your careful review of our manuscript. The typos and mistakes will be corrected in our future revision. --- Rebuttal Comment 1.1: Comment: I cannot find the attached PDF the authors mentioned.
Since I cannot check the results, I want to keep my score.
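The multiple P-hop strategy debated in this thread can be sketched as follows: instead of applying every power $S^k$ of the propagation matrix, propagate with powers of $S^P$, which keeps the receptive field while skipping the intermediate orders. This is an illustrative sketch under our own naming, not the authors' implementation; it verifies that the P-hop filter equals a full polynomial filter whose coefficients vanish off the multiples of P:

```python
import numpy as np

def poly_filter(S, H, coeffs):
    """Generic polynomial filter: sum_k coeffs[k] * S^k H."""
    out = np.zeros_like(H)
    P = H.copy()
    for c in coeffs:
        out += c * P
        P = S @ P
    return out

def p_hop_filter(S, H, beta, P=2):
    """Multiple P-hop filter: sum_j beta[j] * (S^P)^j H.
    Receptive field is P * (len(beta) - 1) hops, yet only len(beta)
    coefficients and matmuls with the precomputed S^P are needed."""
    SP = np.linalg.matrix_power(S, P)
    out = np.zeros_like(H)
    Q = H.copy()
    for b in beta:
        out += b * Q
        Q = SP @ Q
    return out

rng = np.random.default_rng(1)
S = rng.normal(size=(5, 5)); S = 0.1 * (S + S.T)   # toy symmetric propagation matrix
H = rng.normal(size=(5, 3))
beta = np.array([1.0, 0.5, 0.25])

# equivalent full filter: coefficients zero except at multiples of P=2
full = np.zeros(2 * len(beta) - 1)
full[::2] = beta
```

This is the sense in which, per the review above, P-hop "keeps the performance while decreasing the computational cost by omitting the terms that are not multiples of P".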
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes a regularized learning framework for creating deep Graph Neural Networks (GNNs), including the Adaptive Power GNN (APGNN) that uses exponentially decaying weights to aggregate graph information of varying orders. The proposed multiple P-hop message passing strategy efficiently perceives higher-order neighborhoods, and the APGNN can be extended to an infinite-depth network. Strengths: 1. The proposed APGNN model effectively captures higher-order neighborhood information, which is a crucial aspect of many graph-based tasks. This is a significant contribution to the field. 2. The regularized learning framework utilized in the proposed model provides a valuable theoretical guarantee for convergence. This adds to the credibility and reliability of the approach. 3. The experimental results presented in the paper demonstrate that the proposed method outperforms other state-of-the-art GNN models on several benchmark datasets. Weaknesses: 1. It would be beneficial for the paper to include a more detailed comparison with other recent GNN models that also aim to capture higher-order neighborhood information. This would provide readers with a clearer understanding of how the proposed APGNN model compares in terms of performance and capabilities. 2. The paper lacks a detailed analysis of the computational complexity of the proposed method. Considering the potential concerns related to large-scale graphs, it is important to provide insights into the computational requirements of the model. 3. Similarly, the paper should include a more detailed analysis of the sensitivity of the proposed method to hyperparameters. Understanding how the model's performance varies with different hyperparameter settings is crucial for practical implementation. 4. The paper lacks a thorough analysis of the interpretability of the learned graph filters. 
Providing insights into the interpretability of the model's learned representations would enhance the understanding and trustworthiness of the proposed approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can you provide more details on the computational complexity of the proposed method, and how it scales with the size of the graph? 2. How sensitive is the proposed method to the choice of hyperparameters, and how did you select the hyperparameters used in the experiments? 3. Can you provide more details on the interpretability of the learned graph filters, and how they can be used to gain insights into the structure of the graph? 4. Have you considered the robustness of the proposed method to adversarial attacks, and how it compares to other GNN models in this regard? 5. How do you plan to extend the proposed method to handle dynamic graphs, where the structure of the graph changes over time? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Considering the increasing importance of robustness in real-world applications, it would be valuable to evaluate the model's performance under adversarial scenarios and discuss its limitations and strengths in this regard. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your positive comments and meaningful suggestions. We hope the following explanation helps you better understand our work. 1. Here we give the complexity analysis. Denote $N$ as the number of nodes, $E$ as the set of edges, $d$ as the hidden dimension of the MLP, and $c$ as the number of classes. The adjacency matrix is stored in sparse format (i.e., only the edges are stored), so the space complexity is $O(|E|)$. In our implementation, we use a 2-layer MLP for feature extraction and a $K$-order polynomial graph filter. Therefore, the time complexity of the MLP and the graph convolution is $O(NdcL)$ and $O(Kc|E|)$, respectively, where $L$ is the number of MLP layers. Hence the complexity grows linearly as the number of samples increases, and it also depends on the number of edges in the graph. 2. As shown in Fig. 3, the performance is relatively steady when the hyperparameters vary within a reasonable range. As for the selection of hyperparameters, grid search is used. 3. From Fig. 1, we can observe that the sign of the learned coefficients $\beta_i$ tends to be the same for all $i$ on homophilic datasets such as Cora, Citeseer, and Pubmed. But for heterophilic datasets like Cornell, Wisconsin, and Texas, we usually obtain $\beta$ with both positive and negative elements. The reason might be that feature propagation on heterophilic graphs is not always beneficial; thus, the propagation of some hops should take the opposite sign to suppress inappropriate propagation. The tendency of the coefficients can further reflect the homophily or heterophily of the graph. 4. The core contribution of this paper is the establishment of the design principle of graph filters. The related model design and theoretical properties are the focus of our work. Therefore, it does not involve a discussion of robustness to adversarial attacks or dynamic graphs.
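The linear-in-$K$ cost claimed in this rebuttal comes from applying the sparse propagation matrix hop by hop rather than forming its powers explicitly. A hedged sketch (toy data and our own function name, not the authors' code) that checks the iterative sparse computation against a dense polynomial reference:

```python
import numpy as np
import scipy.sparse as sp

def sparse_k_order(S, H, coeffs):
    """K-order propagation in O(K * |E| * c) time: reuse S @ H at each hop
    instead of materializing powers of S (which would cost dense N^3 per hop)."""
    out = coeffs[0] * H
    P = H
    for c in coeffs[1:]:
        P = S @ P              # one sparse pass over the edges
        out = out + c * P
    return out

rng = np.random.default_rng(0)
N, c_dim, K = 30, 4, 6
A = sp.random(N, N, density=0.1, random_state=0)
S = ((A + A.T) * 0.5).tocsr()      # toy symmetric sparse propagation matrix
H = rng.normal(size=(N, c_dim))
coeffs = 0.8 ** np.arange(K + 1)   # exponentially decaying weights

Z = sparse_k_order(S, H, coeffs)

# dense reference: sum_k coeffs[k] * S^k H
Sd = S.toarray()
Z_ref = sum(coeffs[k] * np.linalg.matrix_power(Sd, k) @ H for k in range(K + 1))
```

Each of the `K` iterations touches every stored edge once, which is where the $O(Kc|E|)$ term above comes from.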
Recovering Unbalanced Communities in the Stochastic Block Model with Application to Clustering with a Faulty Oracle
Accept (poster)
Summary: This paper studies a classic problem of recovering clusters in a random graph. Concretely, the authors consider the stochastic block model. Here there is an underlying graph on n nodes. The n nodes are partitioned into k unknown clusters. There is then an edge independently between any two nodes in the same cluster with probability p and between any two in different clusters with probability q < p. This is an extensively studied problem, and many algorithms have been designed that allow the recovery of all clusters of a reasonable size (somewhat larger than sqrt(n), which is in any case a requirement for computational efficiency under the planted clique conjecture). The previous state of the art allows recovering clusters under two assumptions (here simplified for clarity and brevity): 1. The clusters to recover have size at least max(sqrt(n), k)/(p-q). 2. There is a number alpha of about sqrt(n)/(p-q) such that no cluster has size in the interval [alpha/C, alpha] for a constant C. The assumption that the cluster sizes are at least sqrt(n) for those to be recovered is natural, as mentioned above. However, the dependency on k is unfortunate when there are many small clusters. These would prevent the recovery of medium-sized clusters when k >> sqrt(n). Secondly, the assumption about the empty interval is quite unnatural. The main contribution of this work is to remove the dependency on k in 1. and to remove assumption 2. altogether. The authors also present applications of their algorithm to the related problem of clustering with a faulty oracle. Here one can ask whether two nodes v, w are in the same cluster or not. One is then returned a noisy answer. Here the paper also improves over the state of the art in terms of the cardinality of clusters that can be recovered. Strengths: -The problem studied is fundamental in graph clustering.
-Removing the dependency on k and the requirement of an empty interval of cluster sizes is significant, and the guarantees of the algorithm are much more natural than previous ones -The authors have implemented their algorithm and compared it experimentally to previous work. The comparison is overall in favour of the new algorithm. Weaknesses: -I know this is a theoretical contribution, and the authors probably did not attempt to optimize constants that much, but a factor 2^13 in the guarantees is quite severe in practice. Hopefully and probably, this constant is smaller in practice. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: -Could you say a bit about the running time of your algorithm in practice compared to previous work? -Can you comment on whether the 2^13 constant can be reduced to a more reasonable constant without too much effort? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
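For context on the model under review, a stochastic block model instance with unbalanced cluster sizes is straightforward to simulate. The sketch below (our own naming, not the paper's code) draws within-cluster edges with probability p and cross-cluster edges with probability q, mixing large and small clusters:

```python
import numpy as np

def sample_sbm(sizes, p, q, seed=0):
    """Sample a symmetric SBM adjacency matrix with the given cluster sizes."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(len(sizes)), sizes)
    n = labels.size
    same = labels[:, None] == labels[None, :]
    probs = np.where(same, p, q)
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # i.i.d. edges above the diagonal
    A = (upper | upper.T).astype(int)                  # symmetrize; diagonal stays zero
    return A, labels

# two large clusters and two small ones, as in the unbalanced setting
A, labels = sample_sbm([40, 40, 5, 5], p=0.8, q=0.1)

within = A[labels[:, None] == labels[None, :]].mean()
across = A[labels[:, None] != labels[None, :]].mean()
```

With p well above q, the empirical within-cluster edge density clearly exceeds the cross-cluster density, which is the signal the recovery algorithms exploit.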
Rebuttal 1: Rebuttal: Thanks a lot for your thorough and detailed review! Please find the answers to the questions below. ### Run time comparison: From our experiments, we find our algorithm to be much faster compared to the state-of-the-art algorithms of Ailon, Chen, and Xu (JMLR 2015). For example, in experiment 3A of the JMLR 2015 paper, they mention that when dealing with an instance with $n=3500, k=4$, they need $182$ seconds to recover the clusters. In comparison, our method requires around $2.5$ seconds. Thus, this is another empirical improvement over the state-of-the-art algorithms! ### Value of constants: We acknowledge that $2^{13}$ might seem like a large constant, and we agree that it is an artifact of our current proof. However, based on extensive experimentation, we estimate that this value could be significantly reduced to around $10-20$, which we consider a considerable improvement. Moreover, we firmly believe that the constant $2^{13}$ can be further optimized in various aspects of our analysis through more meticulous calculations. Specifically, a tighter analysis of the ``low probability Chernoff bound'' that we employ could lead to more refined and improved results. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their rebuttal. I would recommend that the authors mention these favourable runtime results in their final paper as well. I keep my positive score of the paper. --- Reply to Comment 1.1.1: Comment: Thank you again for the detailed review and the response! We will definitely add a paragraph discussing the runtime comparisons in the final version.
Summary: This work studies stochastic block models where blocks/clusters can have different sizes. It proposes a simple SVD algorithm which recovers communities in this setting. The main technical improvement of this work is the removal of the assumption that requires there to be a ‘size interval’ in which no clusters appear. A secondary result is an efficient clustering algorithm with sublinear query complexity. Strengths: - This work is a clear improvement over the previous state of the art. As I understand it, a key technical contribution of this work that might influence future work is that, instead of finding $k$ clusters at once as is done in the standard SVD approach, the algorithm first aims to find large clusters one by one. Although these are not perfect (they form a so-called plural set), perfect recovery can be obtained using some non-trivial techniques. - Experiments on synthetic data indicate that the algorithm not only works well in theory but also in practice. - The write-up of this work is excellent. Weaknesses: - Given that the aim of the studied setting is to look at more realistic settings, I would have expected to find experimental results on real-world datasets as well. Although this work does provide better bounds for SBMs generated with differently sized clusters, SBMs still have a highly symmetric structure compared to real-world graphs. It would be interesting to see the performance of the proposed algorithm on some real-world graphs. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - How does the algorithm compare with respect to the previous work in terms of running time? - The Spectral Clustering algorithm performs well in practice on graphs with clusters of unbalanced size. Even though not many bounds are known for spectral clustering with respect to SBMs, did you try to compare your algorithm experimentally with Spectral Clustering? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: none Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks a lot for your thorough and detailed review! Please find the answers to your questions below. ### Run time comparison: Regarding the asymptotic running time of Algorithm 1, a major contributing factor is the computation of the $(p-q)\sqrt{n}/\sqrt{p(1-q)}$ dimensional SVD projection, which can be significantly more efficient than the previous state-of-the-art algorithm by Ailon, Chen, and Xu (JMLR 2015). The latter involves solving a convex programming problem with $n^2$ constraints, rendering it computationally prohibitive. For instance, in experiment 3A of the JMLR 2015 paper, the authors mentioned that for an instance with $n=3500$ and $k=4$, they required $182$ seconds to recover the clusters. In contrast, our method finishes in approximately $2.5$ seconds. Thus, this is another empirical improvement over the state-of-the-art algorithms! ### Spectral Clustering: Spectral clustering is indeed a powerful tool. However, in this specific problem, some instances may be hard. For example, we ran a version of spectral clustering for the parameters of Experiment 5 of Table 2 of our paper, both with $K=2$ (which is the number of large clusters) and with $K=1000$ (which is the number of total clusters). The algorithm failed to recover a ground truth cluster in either case. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for your response and clarification, I keep my positive evaluation of the paper. --- Reply to Comment 1.1.1: Comment: Thanks again for your detailed review and for considering our responses!
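To illustrate why the SVD projection discussed in this thread separates clusters (a toy illustration only, under our own names; it is not the paper's Algorithm 1, which handles small clusters far more carefully): rows of the adjacency matrix projected onto the top singular subspace concentrate near per-cluster centers, so same-cluster rows end up much closer than cross-cluster rows.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes, p, q = [30, 30], 0.9, 0.05
labels = np.repeat([0, 1], sizes)
n = labels.size
same = labels[:, None] == labels[None, :]
upper = np.triu(rng.random((n, n)) < np.where(same, p, q), k=1)
A = (upper | upper.T).astype(float)            # symmetric SBM adjacency

# rank-2 SVD projection of the rows of A
U, s, Vt = np.linalg.svd(A)
emb = U[:, :2] * s[:2]                         # row embeddings in the top subspace

# same-cluster rows should be much closer than cross-cluster rows
D = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
within = D[same & ~np.eye(n, dtype=bool)].mean()
across = D[~same].mean()
```

With p and q well separated, thresholding these pairwise distances already groups nodes by cluster; the hard regimes discussed in the rebuttal arise when many small clusters blur this separation.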
Summary: The authors consider the problem of perfect recovery in a stochastic block model where the average degree is large and where the groups are not balanced. They provide an algorithm based on singular value decomposition to recover the largest clusters recursively. They provide a few numerical experiments illustrating their claims. The authors apply their results to the problem of clustering with a faulty oracle. Strengths: I have little knowledge of this problem of perfect recovery in a dense SBM and am not able to assess the correctness of the claims and their relevance. Weaknesses: The same. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Perhaps the authors could specify the complexity of Algorithm 1. In experiment 6 it seems the authors are able to run this algorithm for n substantially larger than in the other experiments. The authors could go to higher n and test how tight their bounds are; in particular by taking p and q smaller. A small section concluding the article and discussing future work would be appreciated. Some references are ill-formatted. E.g. ref. 27 "svd" –> "SVD". Inconsistency: plural set vs plural-set. Confidence: 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The same. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your efforts! We will definitely add a concluding section highlighting some future works that we think will be of interest. --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers.
Summary: The paper deals with the problem of community detection for unbalanced community sizes. Specifically, the paper concentrates on the situation where both large (O(\sqrt{n})) and small communities exist in the network. The paper proposes a stepwise method of recovering the large clusters in the presence of small clusters for planted clique SBM and faulty oracle models. Strengths: The main strengths of the paper are as follows - (1) The paper addresses a gap in the literature on the simultaneous recovery of large and small communities in networks. (2) The paper deals with the problem of community recovery of large communities in the presence of small communities. The paper provides a stepwise method of recovering large communities in planted clique SBM and faulty oracle models. (3) The paper provides theoretical results supporting the recovery of large communities by overcoming the "small cluster barrier" of the size of the remaining small clusters. (4) The paper is well-written. Weaknesses: The main weaknesses of the paper are as follows - (1) The paper misses some relevant literature. Such as - Li, Tianxi, et al. "Hierarchical community detection by recursive partitioning." Journal of the American Statistical Association 117, no. 538 (2022): 951-968. It describes an algorithm that is very similar to the algorithm proposed in this work. (2) Algorithms 2 and 3 assume knowledge of p and q, which are very strong assumptions. It is not immediately clear how the algorithm can be extended to the general SBM. (3) The stopping criterion of the proposed algorithm is not clear. Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) Does the proposed algorithm assume the knowledge of p and q? (2) Does the proposed algorithm assume the knowledge of the number of communities, or is there a stopping criterion of the proposed algorithm for recovery of the number of large communities?
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your detailed review. Please find our comments on the weakness and the answers to your question below. ### Knowledge of $p,q,k$: Our algorithm necessitates knowledge of the parameters $p$ and $q$ but not the number $k$ of communities. However, it is worth noting that even the previously state-of-the-art algorithms by Ailon, Chen, and Xu (ICML 2013 and JMLR 2015) **also** rely on knowing $p, q$, as well as $k$ (or in some special cases, an upper bound on $k$). This observation highlights the inherent difficulty of theoretically addressing the ``small cluster barrier.'' An open question arises: Can our algorithm be made parameter-free? Exploring this possibility remains an interesting direction for future research. ### Stopping Criterion of the algorithm: 1) The *EstimatingSize* subroutine (Algorithm 3) calculates $\bar{s}$, which is approximately $ \max ( 100\sqrt{p(1-q)}\sqrt{n} \log n/(p-q), 0.5\cdot s_{\max} ). $ Within the *RecursiveCluster* algorithm, in each iteration, we can check if the $\bar{s}$ value is sufficiently large and use that as the stopping criterion to determine if the algorithm should halt its iterative process. 2) To address the confusion of the unclear stopping criteria, we will add an explanatory line following line 2 in Algorithm 1, stating that if *EstimatingSize* does not return a valid $\bar{s}$ (i.e. when Exit(0) occurs in Algorithm 3), then Algorithm 1 will simply return an empty set. ### Relevant literature: We appreciate the reference to the paper (where the analysis is on the binary tree SBM model) you mentioned, and upon careful examination, we have noticed some fundamental differences between the referred algorithm and ours, despite being related. (We will also add the comparison to this relevant literature in the future version.) 
i) The algorithm in the mentioned paper utilizes the well-known eigenvector + K-Means approach, whereas our approach involves a distinct SVD + ``plural-set-purification'' strategy. ii) Notably, the binary tree SBM problem comprises small clusters as "sub-clusters" within larger clusters, whereas in the SBM model (as studied in our paper), very small clusters can be considered as potential noise as the cluster sizes can be arbitrarily unbalanced. iii) Furthermore, it appears that their algorithm requires knowledge of the number of sub-clusters at each level (they can estimate whether there are any more levels), which is inherently different from the problem we are addressing, and we do not require knowledge of the cluster sizes or the number of clusters. ### Extension to general SBM: To extend our findings to the general SBM, we propose adapting our algorithm by setting $p$ to represent the minimum intra-cluster probability and $q$ to denote the maximum inter-cluster probability. In a future extended version, we plan to explore this in depth. To achieve this, we will devise a more refined and nuanced method for estimating the largest cluster size. Then, by leveraging the aforementioned approach, we aim to identify the largest cluster within the general SBM efficiently. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal comments. I am also increasing the score in response to the explanations. --- Reply to Comment 1.1.1: Comment: Thank you for your response and consideration.
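The stopping criterion described in the rebuttal above can be written out schematically. This is an illustrative sketch, not the paper's code: the expression for $\bar{s}$ is the one quoted for the *EstimatingSize* subroutine, while treating "no attainable $\bar{s}$" as the halt signal is an assumption standing in for Exit(0) in Algorithm 3.

```python
# Illustrative sketch (not the paper's code) of the stopping check described
# above: bar_s follows the quoted expression from the EstimatingSize
# subroutine, and treating "no valid bar_s" as the halt signal is an
# assumption standing in for Exit(0) in Algorithm 3.
import math

def size_threshold(n, p, q, s_max):
    """bar_s ~ max(100*sqrt(p*(1-q))*sqrt(n)*log(n)/(p-q), 0.5*s_max)."""
    noise_floor = 100 * math.sqrt(p * (1 - q)) * math.sqrt(n) * math.log(n) / (p - q)
    return max(noise_floor, 0.5 * s_max)

def halt(n, p, q, s_max):
    # RecursiveCluster returns an empty set once the largest remaining
    # cluster can no longer clear the threshold.
    return s_max < size_threshold(n, p, q, s_max)

bar_s = size_threshold(n=10**6, p=0.7, q=0.3, s_max=10**5)
print(bar_s, halt(10**6, 0.7, 0.3, 10**5))
```

Note that the constant 100 makes the threshold a theoretical device; the numeric values above are placeholders, not parameters from the paper's experiments.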
null
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Regression with Cost-based Rejection
Accept (poster)
Summary: Learning with rejection is an important machine learning problem. Most of the existing papers focus on the classification setting, i.e., classification with rejection and selective classification, and few works target the regression setting. This paper aims to investigate regression with cost-based rejection. Although some papers have studied the selective regression problem, I consider that the problem of regression with cost-based rejection is new. To solve this new problem, this paper gives a formulation of the expected risk and derives the Bayes optimal solution. To train an ideal model, this paper also proposes a surrogate loss function that regards rejection as binary classification and provides conditions for consistency. Experiments are conducted to demonstrate the effectiveness of the proposed method. Strengths: - The problem of regression with cost-based rejection is interesting and new. - It is quite important to give the formulation of expected risk for cost-based rejection and the Bayes optimal solution is meaningful and significant, which might serve as a pioneer for follow-up works to check whether the derived model and rejector are consistent when the mean squared error is used as the evaluation metric. - A reasonable approach to training a good regression model with rejection is proposed and theoretical analyses are provided. - Experimental results are significant, which support the importance of considering cost-based rejection in the regression setting. Weaknesses: In Theorem 4, I notice that the authors did not introduce the concept "classification calibrated binary classification loss". For this concept, I also think that some references are required, e.g., [1] and [2]. [1] P. Bartlett et al. Convexity, classification, and risk bounds. JASA 2006. [2] A. Tewari and P. Bartlett. On the consistency of multi-class classification methods. JMLR 2007. - I also notice that the first two cited references are repeated.
I would suggest that the authors should further check the details of the references. - I also find some typos in this paper, e.g., missing a right parenthesis in Eq. (2). - It will be interesting to give a general Bayes optimal solution for arbitrary regression losses, instead of being limited to the mean squared error. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Compared with the selective setting, what is the key challenge of the proposed setting, regression with cost-based rejection? The authors are encouraged to further explain this point. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 4 excellent Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review! **Q1: The concept of "classification calibrated binary classification loss" is missing.** A1: Thank you for pointing out this issue, which will be very helpful for us to improve our paper. We have rechecked our manuscript and added the concept to the revised manuscript. **Q2: It will be interesting to give a general Bayes optimal solution for arbitrary regression losses, instead of limiting to the mean squared error.** A2: We agree that it is interesting and meaningful to give a general Bayes optimal solution for arbitrary regression losses. Since the mean squared error (MSE) is the most widely used regression loss, we only considered the case of MSE in our paper. Besides, if we use another regression loss, the derivation process would be much different and the corresponding optimal solution may be extremely different from that of MSE. Therefore, whether there exists a general Bayes optimal solution is unknown, which is also a huge challenge to be solved in the future. We believe that the study of the Bayes optimal solution for other regression losses or the general Bayes optimal solution will be an interesting research direction for future work. **Q3: Compared with the selective setting, what is the key challenge of the proposed setting regression with cost-based rejection?** A3: As shown in Theorem 2 in our paper, compared with the selective setting, the key challenge of regression with cost-based rejection is how to accurately estimate the variance of the real-valued label $y$ over the distribution $p(y|x)$. However, estimating the variance without knowing the distribution $p(y|x)$ is difficult. Unfortunately, empirically estimating $p(y|x)$ is also difficult because $p(y|x)$ is unknown and needs to be approximated by a well-performed model. **Q4: Miscellaneous minor issues.** A4: Thank you for pointing out these issues, which will be very helpful for us to improve our paper. 
We have rechecked our manuscript and corrected the clarity issues in the revised manuscript. For your mentioned issues, we have removed the repeated references. Besides, we have added the missing parentheses to Eq. (2). --- Rebuttal Comment 1.1: Title: To response Comment: Thank you for your answers. My concerns have been addressed. --- Reply to Comment 1.1.1: Title: Thank you for supporting our paper Comment: Thank you for your reply! We also sincerely thank you for your valuable time on our paper and thank you for supporting our paper.
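The variance-thresholding structure of the Bayes-optimal rule discussed above (predict the conditional mean, reject when the conditional variance exceeds the rejection cost) can be illustrated numerically. The heteroscedastic Gaussian model and all numeric choices below are illustrative assumptions, not the paper's setup:

```python
# Numerical sketch of the variance-thresholded Bayes rule: with squared error
# and a known p(y|x), predict the conditional mean and reject exactly when
# the conditional variance exceeds the rejection cost. The heteroscedastic
# Gaussian model here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(0)
n, cost = 5000, 0.5
x = rng.uniform(0, 1, n)
mean = np.sin(2 * np.pi * x)                  # E[y|x]
var = 0.05 + x                                # Var(y|x), grows with x
y = rng.normal(mean, np.sqrt(var))

accept = var <= cost                          # Bayes rejector: threshold on variance
pointwise = np.where(accept, (y - mean) ** 2, cost)
risk = pointwise.mean()                       # empirical cost-based risk

print(f"rejection rate {1 - accept.mean():.2f}, risk {risk:.3f}")
```

Raising the cost shrinks the rejected region, mirroring the monotone behaviour the authors report elsewhere in the rebuttals (higher cost, lower rejection rate); the practical challenge noted in A3 is that the variance entering this threshold must be estimated.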
Summary: The paper addresses the problem of regression with the reject option. The authors propose the cost-based formulation of an optimal reject-option regression rule and they derive the (Bayes) optimal strategy for the case where the distribution is known. The authors further propose a surrogate loss to learn the reject-option regression rule from examples. They prove the consistency and regret bounds for the proposed learning approach. Strengths: The paper is sound and it is very clearly written. The proposed method based on surrogate loss is simple and potentially effective. The authors derive theoretical guarantees for the proposed estimator, namely, they show the consistency and the regret bound. Weaknesses: The first contribution, i.e. formulation of the cost-based reject option regression and the optimal solution, is a known result. E.g. it is given in [41], see equation (1) of the paper. Besides [41], deriving the optimal strategy for a generic reject option predictor (of which the regression with L2-loss is a special case) is straightforward and appears in pattern recognition textbooks, e.g. Schlesinger et al. Ten Lectures on Statistical and Structural Pattern Recognition. Springer 2002. The proposed method, based on minimizing the surrogate loss, is not compared against any baseline solution nor any existing methods for learning reject option regression. As a result, when there is no reference, it is difficult to judge the efficiency of the proposed method. The minimal solution would be to use synthetic data with known ground-truth. On real data one could use any regression model which outputs an estimate of p(y|x), like e.g. Bayesian methods, and plug in the Bayes rule. The authors may argue that most existing methods formulate the optimal reject-option regression using the concept of selective risk and coverage [41][20][38].
However, all methods (including the proposed approach) can be compared in terms of the selective risk and the coverage which are reported by the authors anyway in the experiments (section 5) although the authors use different terminology. Namely, the selective risk is denoted as the "accepted loss" AL and the coverage equals 1 - rejection rate (RR). Note the cost-based formulation and the selective risk vs. coverage formulation (known as the bounded-improvement or bounded-abstention rejection models) are equivalent in the sense that both lead to the same Bayes-optimal solution, i.e. setting the rejection cost (as in the paper under review) has the same effect as setting a threshold on the coverage (or the selective risk), see e.g. Franc et al. Optimal strategies for reject option classifiers. JMLR 2023. Minor problems: - Regarding the experiments in sec 5, errors observed on the AgeDB dataset are excessively large. The mean error ~100, reported for the standard regression model (sup), makes no sense for age prediction regardless of whether the authors report MAE or L2-loss, which is not clear from the description. - The observations derived from the experiments (section 5.5) are questionable or trivial: "(1) Our proposed method significantly outperforms the supervised regression method" It is not clear in what sense the proposed method is better as it solves a different problem than the non-reject model. "(2) In most cases, the average loss of our method in the accepted test instances (AL) is always smaller than the average loss of the supervised regression model"; note that this holds true for any rejection rule regardless of how good the rejector $r(x)$ is. Similarly, observation (3) is obvious and holds for any rejection rule. - Line 273: "...RcR loss (RcRLoss) decreases" -> increases Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please explain the reasons for not using any baseline method in the experimental evaluation?
--- The authors satisfactorily addressed my questions in the rebuttal based on which I increased my ratings. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your valuable comments! **Q1: The first contribution regarding the formulation of the cost-based reject option regression and the optimal solution.** A1: We agree that our derived optimal solution of regression with cost-based rejection is quite similar to the result in [41], but we would like to humbly argue that our derived optimal solution is more general and suitable for regression with cost-based rejection instead of selective regression. In [41], equation (1) is derived by introducing the tuning parameter $\lambda$, which is responsible for trading off error and rejection rate. Hence $\lambda$ can only be taken as a constant rejection cost. In contrast, our formulation is more general and can be naturally compatible with instance-dependent rejection costs, and thus is more suitable for regression with cost-based rejection. Therefore, despite being quite similar to the result in [41], we believe that our formulation is also meaningful. We admit that the derivation process of such an optimal solution with L2-loss is straightforward, which is also similar to that of [41]. However, since regression with cost-based rejection is a new problem that has not been studied before, it is quite essential to explicitly show the general optimal solution with instance-dependent cost. Thank you for providing us with such an insightful comment and we will properly revise our manuscript according to the above constructive discussion with you. [41] A. Zaoui, C. Denis, and M. Hebiri. Regression with reject option and application to knn. In NeurIPS, 2020. **Q2: Comparison with selective regression.** A2: Since this question was repeatedly mentioned by reviewers, we think it is very important. We provide a detailed response to this question in Global Response, so please refer to **Global Response** to check the full response.
Briefly, since our paper provides the first attempt to investigate regression with cost-based rejection, there does not exist a baseline that can be directly compared, so we propose a number of evaluation metrics and conduct extensive experiments to verify the validity of our method. However, this does not seem to be convincing enough, so we have adopted the suggestions of reviewers to add comparisons with the methods in selective regression [19, 41] to make our experimental results more convincing. Based on the comparison of accepted loss (AL), our proposed method outperforms (smaller AL with the same rejection rate) all compared methods, which validates the effectiveness of our method. [19] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In NeurIPS, 2017. [41] A. Zaoui, C. Denis, and M. Hebiri. Regression with reject option and application to knn. In NeurIPS, 2020. **Q3: The cost-based formulation and the selective risk vs. coverage formulation (known as the bounded-improvement or bounded-abstention rejection models) are equivalent in the sense that both lead to the same Bayes-optimal solution.** A3: Thank you so much for pointing out this conclusion by Franc et al. (2023) that the cost-based and sensitivity (known as the bounded-improvement or bounded-abstention rejection models) frameworks have the same Bayes-optimal solution. The conclusion can help us to theoretically connect different rejection-based frameworks. However, even if different frameworks have the same optimal solution, they actually do not face the same challenges and application scenarios. Selective regression aims to reject with a fixed rejection rate, while regression with cost-based rejection aims to reject with a fixed rejection cost. To the best of our knowledge, we provide the first study on regression with cost-based rejection. 
We cannot expect the first study (especially a conference paper) to accomplish all relevant tasks, although it is interesting and meaningful to analyze the connections with different rejection frameworks. Therefore, we think it is more appropriate to leave the study of theoretical links between different rejection frameworks for future work. **Q4: Excessive errors observed on the AgeDB dataset.** A4: Thank you for pointing out the lack of clarity in the presentation of our table. Specifically, all of our results are based on L2-loss (i.e., Mean Squared Error). We admit that an average error of about 100 even for the standard regression model (sup) on the AgeDB dataset is excessive. In fact, these results are normal without the use of additional training data, and similar errors have also appeared in a previous paper, see e.g. Yang et al. Delving into deep imbalanced regression. ICML 2021. **Q5: The observations derived from the experiments (section 5.5) are questionable or trivial.** A5: Thank you for pointing out this issue. We agree with you that these observations seem somewhat trivial because it may not be appropriate to directly compare our work with supervised regression. Our original intention was to show the effectiveness of our proposed method in regression with cost-based rejection. We will properly revise our manuscript to clarify this part. **Q6: Please explain the reasons for not using any baseline method in the experimental evaluation?** A6: To the best of our knowledge, our paper provides the first attempt to investigate regression with cost-based rejection (RcR), hence there are no baseline methods for this task. In order to show the effectiveness of our method, we propose a number of evaluation metrics and conduct extensive experiments. However, these results do not seem convincing enough, so we added the comparison experiments with the selective regression method in Global Response.
Please refer to **Global Response** to check the comparison experiments. --- Rebuttal Comment 1.1: Title: response Comment: Thank you for your elaborate response to my review. Please allow me a response to your response. add Q1/A1: I agree that your formulation differs from the known setup by having a different reject-cost per instance. Except for this modification, the setup and the solution are the same as in [41]. In the experiments you use a single reject-cost, which does not demonstrate that the modification is actually needed; however, you may have an application in mind where the instance-specific cost is needed? I am glad that you agree that deriving the Bayes strategy is straightforward. It was surprising to me that this tiny modification is claimed as the first main contribution of the paper. add Q2/A2: Thanks for the additional experiments. Please describe how you constructed the uncertainty score for method [19]. Did you use MC-dropout as the uncertainty score? I would suggest a slightly different way to compare the methods. Take any baseline approach which outputs an uncertainty score for a given input and construct the risk-coverage curve (x-axis is the reject-rate; y-axis is the accepted loss, using your terminology). Run your method for fixed reject-cost(s) and draw the obtained pair (rejection-rate, accepted loss) as a point on the risk-coverage curve. If the point(s) are below the risk-coverage curve of the baseline, your method wins. As a baseline I would propose e.g. MC dropout as in [19]. add Q3/A3 and Q6/A6: The point of this comment is that the key problem when constructing a reject option rule is the estimation of the uncertainty score. The optimal strategy for various formulations (e.g. the cost-based formulation as in your paper or bounded-improvement as in "selective-classification" papers) is always based on thresholding the uncertainty score (note that the optimal strategy you derived is also based on thresholded conditional variance).
Once you have the uncertainty score, the threshold can be easily found on a validation set based on the criterion you want, e.g. using a sample estimate of (10). In turn, any existing method for selective regression which outputs an uncertainty score can be readily applied to the cost-based problem you investigate in your paper. Therefore it is not difficult to construct a baseline for your method. --- Reply to Comment 1.1.1: Title: Thank you for your reply Comment: Thank you for your insightful comment. **Add Q1/A1:** We indeed have some applications in mind where the instance-specific rejection cost is required. For example, in safety-critical applications such as autonomous driving, we need to predict the suitable deflection angle for the tires to pass through an intersection based on sensor information. Obviously, the difficulty and risk faced by an intersection that can only be passed with a deflection angle from $5^{\circ}-10^{\circ}$ is not the same as an intersection that can be passed with a deflection angle from $10^{\circ}-40^{\circ}$; therefore they do not have the same rejection cost (rejection cost $2.5^{\circ}$ in the first intersection and rejection cost $15^{\circ}$ in the second intersection). We will not take Theorem 2 as one of the main contributions of our paper. In this case, our main contributions in this paper are summarized as follows: - We propose a surrogate loss function considering rejection as a binary classification process and give a condition for regressor consistency, namely that the classification-calibrated binary classification loss is always greater than 0. Under that condition, the optimal regressor can be derived by our method. - We propose a definition of rejector calibration and show that our method is rejector-calibrated when the regressor-consistency condition is satisfied. Based on this, we further propose a weaker version of the condition allowing the classification-calibrated binary classification loss to be greater than or equal to 0.
Under the weakened condition, regression consistency can only be satisfied on the accepted instances, and regressor consistency is still satisfied. - We derive regret transfer and estimation error bounds for our proposed method, and extensive experiments demonstrate the effectiveness of our method. **Add Q2/A2:** We are really sorry that we mistakenly linked [19] to the paper [18] in our previous response. The correct links (in accordance with our originally submitted main paper) are as follows: [18] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In NeurIPS, 2017. [19] Y. Geifman and R. El-Yaniv. SelectiveNet: A deep neural network with an integrated reject option. In ICML, 2019. This mistake occurred because we removed duplicate references in our revised version (not uploaded yet) and thus the serial numbers of the original references were changed. We really appreciate that you provided a wonderful visualized way (the risk-coverage curve) to compare different methods for regression with rejection. Unfortunately, we cannot add images to OpenReview at the current stage, but we promise that we will definitely include these visualized figures in our final version. **Add Q3/A3 and Q6/A6:** We agree that the estimation of the uncertainty score is quite important to regression with rejection, which can be used as the metric to determine whether an example will be rejected. We also agree that the uncertainty score (the conditional variance) can be used to solve the problem we studied in this paper and such a method can be considered as a baseline. One intuitive baseline is to use the Plug technique proposed in [41], which we have already compared with (i.e., **MLP-Plug**). We would greatly appreciate it if you could suggest other suitable methods to estimate the conditional variance, and we will definitely try our best to include them for comparisons.
In fact, we believe that our proposed method is better than those baselines relying on the uncertainty score. By Occam's razor, calculating the uncertainty score is an intermediate step, which is normally not optimal, especially when the uncertainty score cannot be accurately estimated. In contrast, our proposed method gives a direct solution by treating rejection as binary classification. Therefore, our method is expected to achieve better performance. [41] A. Zaoui, C. Denis, and M. Hebiri. Regression with reject option and application to knn. In NeurIPS, 2020.
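The risk-coverage comparison protocol suggested in the reviewer comment above can be sketched as follows. The uncertainty scores, per-example losses, and the fixed-cost method's operating point are all synthetic placeholders, not results from the paper:

```python
# Sketch of the suggested comparison: build a risk-coverage curve from any
# uncertainty score, then place a fixed-cost method's (rejection rate,
# accepted loss) point against it. Scores, losses, and the operating point
# below are synthetic placeholders, not results from the paper.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
uncertainty = rng.random(n)                   # baseline's score per example
loss = uncertainty + 0.1 * rng.random(n)      # per-example loss (synthetic)

order = np.argsort(uncertainty)               # accept most-certain examples first
sorted_loss = loss[order]
coverage = np.arange(1, n + 1) / n            # x-axis: fraction accepted
accepted_loss = np.cumsum(sorted_loss) / np.arange(1, n + 1)  # y-axis

# One operating point of a fixed-cost method (hypothetical numbers):
point_rr, point_al = 0.40, 0.25
idx = int(round((1 - point_rr) * n)) - 1
wins = point_al < accepted_loss[idx]          # below the curve => better

print(coverage[idx], round(float(accepted_loss[idx]), 3), bool(wins))
```

This matches the reviewer's suggestion: the curve summarizes the baseline across all coverages, while each fixed-cost run contributes a single (rejection rate, accepted loss) point to compare against it.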
Summary: This paper focuses on the problem of regression with rejection, specifically the approach of specifying a cost function and learning the pair of regressor and rejector at the same time. The paper presents a concrete path to solving the problem. It first properly defines the problem and shows the Bayes optimal solution to it. Since the Bayes optimal solution requires knowing the expectation and the variance of the underlying distribution, it then proposes a learnable risk defined using a surrogate loss function. Then, the paper theoretically investigates and shows the usefulness of the proposed approach, from the perspective of classification calibration and error bounds. Finally it empirically evaluates the proposed method on several typical datasets using various metrics. Strengths: Originality: This paper tackles the regression with rejection problem which is of significant importance in the field. The paper solves the problem from a novel perspective and can be seen as a novel combination of several well-known techniques. This paper clearly addresses how it relates to and differs from related publications. Quality: The paper is technically sound and self-contained. Its claims are properly supported by theoretical demonstrations. Clarity: The paper is clearly written and easy to follow. The structure is well organized. Significance: The proposed method has significance to some extent, as it considers a new approach to an important problem. Weaknesses: - Empirical comparison is not sufficiently conducted. - There is no comparison with existing methods. - There is no investigation on varying cost. - There is no investigation on slow-start. This would show the robustness of the proposed method, since slow-start introduces a hyper-parameter to the method. - There is no investigation on varying training data size. Some theoretical results show how performance would change with different $n$. It would be persuasive to show it empirically to some extent.
Technical Quality: 3 good Clarity: 3 good Questions for Authors: - For the cost function $c(x)$, it is used as a function of $x$ instead of a constant in theoretical demonstrations but considered as a constant in experiments. Do the theoretical results have any relationship to, or limitation on, the form of the pointwise cost function? Do some results rely on the cost being a non-constant function? - Are there any detailed discussions on the slow-start mechanism, since this part is not covered by any theory but has crucial practical importance? For example, how do the slow-start epochs affect the overall performance? Can we completely stop the learning of $h$ after the slow-start epochs? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Technical limitations on the loss function are addressed by the authors in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your valuable comments! **Q1: There is no comparison with existing methods.** A1: Since this question was repeatedly mentioned by reviewers, we think it is very important. We provide a detailed response to this question in Global Response, so please refer to **Global Response** to check the full response. Briefly, since our paper provides the first attempt to investigate regression with cost-based rejection, there does not exist a baseline that can be directly compared, so we propose a number of evaluation metrics and conduct extensive experiments to verify the validity of our method. However, this does not seem to be convincing enough, so we have adopted the suggestions of reviewers to add comparisons with the methods in selective regression [19, 41] to make our experimental results more convincing. Based on the comparison of accepted loss (AL), our proposed method outperforms (smaller AL with the same rejection rate) all compared methods, which validates the effectiveness of our method. [19] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In NeurIPS, 2017. [41] A. Zaoui, C. Denis, and M. Hebiri. Regression with reject option and application to knn. In NeurIPS, 2020. **Q2: There is no investigation on varying costs.** A2: We would like to kindly note that we have already set various rejection costs $c$ for each dataset and performed repeated experiments. As the rejection cost $c$ increases, the RcR loss and accepted loss gradually increase while the rejection rate gradually decreases. **Q3: There is no investigation on slow-start. Are there any detailed discussions on the slow-start mechanism? How do the slow-start epochs affect the overall performance?
Can we completely stop the learning of $h$ after the slow-start epochs?** A3: In the implementation details of our experiments, we use a training trick called slow-start, where we first train only the regressor $h$ and then train the regressor $h$ and the rejector $r$ together. The slow-start trick alleviates the detraining issue in the early training stage, when $h$ is disturbed by $r$, especially when deep neural networks with gradient descent optimization are used. Similar tricks were also used in other rejection learning methods [19]. Based on our preliminary study, the hyper-parameter of slow-start epochs has no significant effect on the experimental results, and we suggest setting it to 20\% of the total training epochs. It is worth noting that we cannot stop training $h$ after slow-start, because slow-start is only used to accelerate the training of $h$ in the early training stage. [19] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In NeurIPS, 2017. **Q4: There is no investigation on varying training data size.** A4: Thank you for the wonderful suggestion. We have added additional experiments by varying the size of the training data. Specifically, we considered using 20\%, 40\%, 60\%, 80\%, and 100\% of the training set to train our model. The experimental results are reported in the following table. As can be seen from the table, there is a significant reduction in RcR loss, AL, and Rej as more training data is used. This is clearly in accordance with our intuition that performance improves when more training data is provided.
| Datasets | Cost | Slice | RR | AL | RL | Rej |
| :----: | :----: | :----: | :----: | :----: | :----: | :----: |
| Abalone | 3.0 | 20\% | 2.81 $\pm$ 0.16 | 2.15 $\pm$ 0.55 | 6.51 $\pm$ 1.30 | 72.31 $\pm$ 11.12 |
| Abalone | 3.0 | 40\% | 2.60 $\pm$ 0.14 | 2.18 $\pm$ 0.29 | 8.28 $\pm$ 1.35 | 51.58 $\pm$ 10.11 |
| Abalone | 3.0 | 60\% | 2.48 $\pm$ 0.14 | 2.05 $\pm$ 0.30 | 8.50 $\pm$ 1.09 | 46.74 $\pm$ 2.96 |
| Abalone | 3.0 | 80\% | 2.45 $\pm$ 0.13 | 2.03 $\pm$ 0.24 | 8.22 $\pm$ 1.16 | 45.66 $\pm$ 3.83 |
| Abalone | 3.0 | 100\% | 2.41 $\pm$ 0.12 | 1.99 $\pm$ 0.21 | 8.13 $\pm$ 1.08 | 42.04 $\pm$ 3.18 |
| Auto-mpg | 4.0 | 20\% | 4.14 $\pm$ 0.17 | 4.83 $\pm$ 1.37 | 12.35 $\pm$ 2.85 | 82.37 $\pm$ 10.51 |
| Auto-mpg | 4.0 | 40\% | 3.91 $\pm$ 0.27 | 3.19 $\pm$ 1.28 | 10.73 $\pm$ 2.96 | 73.97 $\pm$ 8.43 |
| Auto-mpg | 4.0 | 60\% | 4.15 $\pm$ 0.78 | 3.68 $\pm$ 2.15 | 11.97 $\pm$ 2.68 | 69.10 $\pm$ 12.92 |
| Auto-mpg | 4.0 | 80\% | 3.90 $\pm$ 0.75 | 3.47 $\pm$ 1.69 | 13.08 $\pm$ 3.22 | 60.00 $\pm$ 20.23 |
| Auto-mpg | 4.0 | 100\% | 3.64 $\pm$ 0.29 | 2.99 $\pm$ 0.83 | 13.98 $\pm$ 4.16 | 56.92 $\pm$ 13.00 |

**Q5: Do the theoretical results have some relationship to or limitation on the form of the pointwise cost function? Do some results rely on the cost being a non-constant function?** A5: In our theoretical analysis, we use pointwise cost functions $c(x)$, which shows that our method can be used for pointwise cost functions. For simplicity, in our experiments, we only consider the rejection cost as a constant $c$, but our method can easily handle various pointwise cost functions in different application scenarios. --- Rebuttal Comment 1.1: Title: Thanks authors and I remain positive Comment: Thank you for the detailed response, and I really appreciate the effort in addressing concerns about comparison with related existing methods. My concerns over the slow-start trick and pointwise cost function are also addressed.
My concerns of empirical performance over varying data size is also clearly addressed using different datasets and costs. --- Reply to Comment 1.1.1: Title: Thank you for remaining positive Comment: Thank you for letting us know that your concerns were addressed. We also sincerely thank you for your valuable time on our paper and thank you for remaining positive!
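To make the slow-start mechanism from A3 above concrete, here is a minimal Python sketch (the helper name and schedule representation are illustrative, not code from the paper; the 20\% warmup fraction is the default suggested in the rebuttal): during the first fraction of epochs only the regressor $h$ receives gradient updates, after which $h$ and the rejector $r$ are trained jointly.

```python
def slow_start_schedule(total_epochs, warmup_frac=0.2):
    """Per-epoch list of which modules receive gradient updates.

    For the first `warmup_frac` of training only the regressor h is
    updated; afterwards h and the rejector r are trained together.
    """
    warmup = int(warmup_frac * total_epochs)
    return [("h",) if epoch < warmup else ("h", "r")
            for epoch in range(total_epochs)]
```

Note that $h$ keeps being updated after the warmup phase, matching the point above that the learning of $h$ cannot simply be stopped after slow-start.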
Summary: The paper explores the framework of regression with rejection, in which the model can opt to refrain from making predictions on certain instances at specific costs, with the intention of avoiding critical mispredictions. The paper determines the Bayes optimal solution and introduces a theoretically grounded surrogate loss within the framework. Strengths: 1. The paper is pioneering in studying the regression with rejection setting. 2. The presentation of the paper is clear and concise. Weaknesses: 1. While the regression with rejection setting represents a fresh concept in the literature, the technical approach seems to closely mirror the standard classification with rejection setting. This resemblance potentially limits the novelty of the paper. Could the authors elaborate on the specific technical challenges encountered within this setting? 2. A definition of regressor-consistency that parallels rejector-calibration (Definition 3) is missing. 3. The use of a supervised regression method may not serve as an appropriate and fair baseline for rejection experiments. It would be more convincing to conduct experiments comparing some straightforward rejection methods against the proposed rejection methods. 4. Additional commentary on the experimental results is required. For instance, the setup considers a range of binary classification loss functions; which among these yields the best results based on the experiments conducted? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See Weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review! **Q1: While the regression with rejection setting represents a fresh concept in the literature, the technical approach seems to closely mirror the standard classification with rejection setting. This resemblance potentially limits the novelty of the paper. Could the authors elaborate on the specific technical challenges encountered within this setting?** A1: Thank you for posing this nice question. We agree that our studied setting of regression with rejection shares some similarities with the setting of classification with rejection (CwR) due to the rejection option, but they basically target two different tasks, i.e., regression and classification. The label space is continuous and infinite in regression, while it is discrete and finite in classification, which makes the labeling information harder to capture in regression than in classification. As shown in Theorem 2 in our paper, the main technical challenge of regression with cost-based rejection is **how to accurately estimate the variance of the real-valued label $y$ over the distribution $p(y|x)$**. However, estimating the variance without knowing the distribution $p(y|x)$ is difficult, and empirically estimating $p(y|x)$ is also difficult because $p(y|x)$ is unknown and needs to be approximated by a well-performing model. To address these challenges, we propose a surrogate loss function that considers rejection as a binary classification problem, and we provide theoretical analyses on regressor-consistency and rejector-calibration. There is an evident difference in the technical approach between our work and most of the existing studies on CwR, i.e., our proposed method avoids estimating $p(y|x)$ by treating rejection as binary classification, while most of the existing studies on CwR need to accurately estimate $p(y|x)$ for rejection.
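As an illustration of this difference (a sketch under our own assumptions, not the paper's actual implementation): the rule suggested by the discussion of Theorem 2 compares the conditional variance of $y$ to the cost, and thus requires knowledge of $p(y|x)$, whereas the surrogate view turns rejection into a binary classification target without any density estimation.

```python
import numpy as np

def bayes_reject(y_samples, cost):
    """Variance-based rule (assumed form): reject x iff Var(y | x) > cost.
    Requires knowledge of p(y|x), approximated here by samples of y given x."""
    return bool(np.var(y_samples) > cost)

def rejection_target(h_pred, y, cost):
    """Surrogate view (sketch): the rejector's binary target is
    'reject' (+1) when the regressor's squared error exceeds the cost,
    and 'accept' (-1) otherwise -- no density estimation needed."""
    return 1 if (h_pred - y) ** 2 > cost else -1
```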
**Q2: A definition of regressor-consistency that parallels rejector-calibration is missing.** A2: Thank you for pointing out this issue. An informal definition of regressor-consistency was provided in Lines 156-157 of our paper, i.e., we say a method is regressor-consistent if the regressor $h$ learned by the method converges to the optimal regressor $h^{*}$ as the number of training data increases. We will add a formal definition to our revised manuscript. **Q3: The use of a supervised regression method may not serve as an appropriate and fair baseline for rejection experiments. It would be more convincing to conduct experiments comparing some straightforward rejection methods against the proposed rejection methods.** A3: Since this question was repeatedly mentioned by reviewers, we think it is very important. We provide a detailed response to this question in Global Response, so please refer to **Global Response** to check the full response. Briefly, since our paper provides the first attempt to investigate regression with cost-based rejection, there does not exist a baseline that can be directly compared, so we propose a number of evaluation metrics and conduct extensive experiments to verify the validity of our method. However, this does not seem to be convincing enough, so we have adopted the suggestions of reviewers to add comparisons with the methods in selective regression [19, 41] to make our experimental results more convincing. Based on the comparison of accepted loss (AL), our proposed method outperforms (smaller AL with the same rejection rate) all compared methods, which validates the effectiveness of our method. [19] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In NeurIPS, 2017. [41] A. Zaoui, C. Denis, and M. Hebiri. Regression with reject option and application to knn. In NeurIPS, 2020. **Q4: Additional commentary on the experimental results is required. 
For instance, the setup considers a range of binary classification loss functions; which among these yields the best results based on the experiments conducted?** A4: Indeed, we found that different binary classification loss functions (e.g., mean absolute error, mean squared error, logistic loss, sigmoid loss, and hinge loss) perform differently on different datasets. For example, hinge loss and sigmoid loss have similar performance on the BreastPathQ dataset, but sigmoid loss performs better than hinge loss on the AgeDB dataset. Our experimental results show that the performance of MAE is more stable (no excessively high or low rejection rate) compared with other loss functions.
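For reference, the standard margin-based forms of several of these binary losses (textbook definitions, which may differ from the paper's exact instantiations) can be written as functions of the margin $z = \bar{y}\,r(x)$; under the usual probabilistic parameterization, the MAE loss for binary classification reduces to a scaled sigmoid loss, which may explain related behaviour between the two.

```python
import math

# Margin losses phi(z); all decrease as the signed margin z grows.
def hinge_loss(z):
    return max(0.0, 1.0 - z)

def logistic_loss(z):
    return math.log(1.0 + math.exp(-z))

def sigmoid_loss(z):
    # equals 1 - sigmoid(z); MAE is twice this under p = sigmoid(z)
    return 1.0 / (1.0 + math.exp(z))
```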
Rebuttal 1: Rebuttal: ## Global Response We sincerely appreciate the thoughtful feedback and insightful comments from all reviewers to help improve our work. We are glad that our study was recognized by the reviewers (Reviewer 1UF6, Reviewer 4Fnw, Reviewer J2H6 and Reviewer mRfi). We are delighted that they found our work to be novel (Reviewer 4Fnw), interesting (Reviewer mRfi), clear (Reviewer 1UF6), and simple (Reviewer J2H6), and we are encouraged by these positive comments. We have corrected all the typos mentioned by reviewers and made the following key changes to our revised manuscript: [Section 1] We made appropriate modifications to our first contribution regarding the formulation of the cost-based reject option regression and the optimal solution (Reviewer J2H6). [Section 4.1] We added the definition of regressor-consistency (Reviewer 1UF6) and the concept of class-calibrated binary classification loss (Reviewer mRfi). [Section 5] We added a comparison with the method of selective regression to demonstrate the validity of our method to make our work more convincing (Reviewer 1UF6, Reviewer 4Fnw and Reviewer J2H6). [Section 5.5] We revised Section 5.5 to clearly express the validity of our method (Reviewer J2H6). [Appendix D] We added experiments on varying training data sizes (Reviewer 4Fnw). [Appendix E] We added a discussion of pointwise cost functions and constant costs (Reviewer 4Fnw). ---------- Below are our responses to the commonest concern regarding the experimental comparison with previous (selective regression) methods. **Q: The experimental comparison with previous (selective regression) methods.** A: Thank you for raising this concern, which will be definitely helpful for us to improve our paper! To the best of our knowledge, our paper provides the first attempt to investigate regression with cost-based rejection (RcR), hence there are currently no direct baseline methods for this problem. 
So we propose a number of evaluation metrics and conduct extensive experiments hoping to show the effectiveness of our method. We admit that there are no directly comparable methods for our setting, but we can adopt the suggestions of reviewers to add comparisons with the methods in selective regression. We have conducted additional experiments to compare with SelectiveNet [19] and Knn-Plug [41], and the results are reported in Table 1 and Table 2, respectively (in the attached one-page PDF). Specifically, to establish a connection between our studied RcR and selective regression, we set the expected rejection rate (RJ) of the selective regression method based on the results of RcR\_MAE (our surrogate loss equipped with MAE). It is important to note that there is no way to perfectly match the rejection rate, as the set rejection rate (RJ) is "expected". From Table 1 and Table 2, our proposed method outperforms (smaller AL with the same rejection rate) all compared methods, which validates the effectiveness of our method. Although RcR and selective regression seem to have a lot of similarities, they actually do not face the same challenges and application scenarios. Selective regression aims to reject with a fixed rejection rate, while RcR aims to reject with a fixed rejection cost. In many real-world scenarios, the rejection rate is inaccessible and we can only have the rejection cost. For example, in safety-critical applications such as autonomous driving, we need to predict the most suitable deflection angle of $20^{\circ}$ for the tires based on sensor information. However, it is not realistic to have a perfect prediction, so we have a tolerance angle of $10^{\circ}-30^{\circ}$ to complete the turn. In such a case, we can easily set a rejection cost $(10^{\circ})$ instead of a rejection rate. Based on such a view, we believe that RcR is worth studying. [19] Y. Geifman and R. El-Yaniv. Selective classification for deep neural networks. In NeurIPS, 2017. 
[41] A. Zaoui, C. Denis, and M. Hebiri. Regression with reject option and application to knn. In NeurIPS, 2020. Pdf: /pdf/de6d9cd8205935402e630d00b7741ef4d68fbefe.pdf
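The distinction drawn in the global response between rejecting at a fixed cost (RcR) and rejecting at a fixed rate (selective regression) can be sketched as follows (illustrative code, not from the paper; `pred_losses` stands for any anticipated per-example loss):

```python
import numpy as np

def reject_by_cost(pred_losses, cost):
    """RcR-style rule: reject whenever the anticipated loss exceeds the
    fixed per-example rejection cost; the rejection *rate* is emergent."""
    return pred_losses > cost

def reject_by_rate(pred_losses, rate):
    """Selective-regression-style rule: reject a fixed fraction of the
    examples with the highest anticipated loss (ties may reject more)."""
    k = int(round(rate * len(pred_losses)))
    if k == 0:
        return np.zeros(len(pred_losses), dtype=bool)
    threshold = np.sort(pred_losses)[-k]
    return pred_losses >= threshold
```

The two rules coincide only when the cost happens to match the loss quantile implied by the rate, which is why the two settings are evaluated by matching rejection rates post hoc.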
NeurIPS_2023_submissions_huggingface
2023
Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels
Accept (poster)
Summary: This paper proposes a Label-Retrieval-Augmented (LRA) diffusion model for learning from noisy labels. The model leverages the neighbor consistency principle and incorporates pre-trained models to improve performance. The paper introduces the label-retrieval-augmented component, an accelerated label diffusion process, and a new conditional mechanism for incorporating pre-trained models. The proposed model achieves good performance on both synthetic and real-world benchmark datasets, and the experimental results show that it can boost accuracy by 10-20 absolute points in many cases. Strengths: 1. The idea of using the diffusion model to address the noisy label learning problem is interesting. 2. Extensive experimental results demonstrate that the proposed method achieves state-of-the-art performance compared with multiple peer methods. Weaknesses: The experimental section needs to be improved: add state-of-the-art methods, discuss the influence of the label noise type, and discuss the influence of different pre-trained models. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. The paper mentions that the proposed model is flexible and general, but it does not provide a detailed discussion of the types of label noise that the model can effectively handle, for example, feature-dependent label noise, since the comparison method [15] is a baseline that focuses on feature-dependent label noise. 2. It seems the pre-trained model is important for the proposed method. It would be better to discuss the effect of different backbone networks as well as the pre-trained model on performance. 3. The description of the comparison algorithms is lacking in the paper. 4. In Table 1, the comparison methods are outdated. It would be better to add some state-of-the-art methods published in 2022 or 2023. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Please refer to the weakness and limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The paper mentions that the proposed model is flexible and general, but it does not provide a detailed discussion of the types of label noise that the model can effectively handle. For example, feature-dependent label noise. Since the comparison method [15] is a baseline that focuses on feature-dependent label noise.\ A: Following PLC [15], we conduct experiments using their settings on polynomial margin diminishing (PMD) noise, a novel class of synthetic feature-dependent noise (a synonym of instance-dependent noise). This noise is closer to real-world label noise and more difficult to handle than i.i.d. (instance-independent) noise. However, our method can handle instance-dependent noise, since our model approximates the distribution of neighboring labels of each instance, allowing it to make instance-dependent predictions. Results shown in Table 1 demonstrate that our method with the SimCLR feature achieved state-of-the-art results on PMD noise. We will provide a clearer explanation in the revised manuscript. Q2: It seems the pre-trained model is important for the proposed method. It would be better to discuss the effect of different backbone networks as well as pre-trained models on model performance.\ A: We agree that the network design of diffusion models is an interesting topic. However, the backbone we adopted from the CARD model for 1D diffusion (supplementary figure C.1) is already very efficient and effective, and there are no well-known alternative designs. Thus, the impact of different backbone networks is hard to evaluate. In addition, we believe the quality of the feature has a positive relationship with the classification accuracy. Q3: The description of the comparison algorithms is lacking in the paper.\ A: Thank you for noting the lack of detail in our comparison algorithms. We will add the required information to the supplementary materials in the revised manuscript. 
Q4: In Table 1, the comparison methods are outdated. It would be better to add some state-of-the-art methods published in 2022 or 2023.\ A: Table 1 reports the results on synthetic label noise, specifically focusing on the polynomial margin diminishing (PMD) noise, which is a new class of synthetic instance-dependent label noise. Recent state-of-the-art (SOTA) methods have not performed evaluations on PMD noise in their original papers. To address the problem, we additionally perform experiments using the source code of two SOTA methods, C2D (2022) and CC (2022), and show the results in Table 1 of the attached PDF. C2D also utilizes a pre-trained SimCLR encoder for initialization, but label noise may still affect the feature space during training. In contrast, our method freezes the feature encoder, shielding the pre-trained features from noise. Results on CIFAR-10 demonstrate that when the pre-trained feature is of high quality, our method achieves superior accuracy. On CIFAR-100, where the SimCLR feature has lower KNN accuracy, C2D is more effective, as it can refine the feature space through training. Freezing the feature encoder has another advantage: it enables us to efficiently incorporate more powerful pre-trained encoders, such as CLIP. This approach spares us from the prohibitive computational burden of fine-tuning the feature encoder, allowing for more effective integration.\ On the other hand, we have included more recent SOTA baselines in experiments on real-world datasets. This inclusion is of greater importance, as it reflects performance in real-world applications, providing a more practical and relevant assessment of our method. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ response. **To A4**, the author only uses the SOTA method to validate the effectiveness of the proposed method on ILSVRC2012 and Food-101N but does not give a validation on the commonly used datasets CIFAR-10 and CIFAR-100. 
**To A1** The author does not answer my question completely. Can the author provide the comparison result on the label noise produced by the baseline PLC? --- Reply to Comment 1.1.1: Comment: **To A1**, The author does not answer my question completely. Can the author provide the comparison result on the label noise produced by the baseline PLC? **R1**: We apologize for any confusion. Yes, our method can handle the noise produced by PLC (PMD noise). The results in Table 1 of our manuscript are exactly on the label noise produced by PLC. We downloaded the noisy labels from PLC's GitHub repository, and all the results were borrowed directly from PLC's original paper. **To A4**, the author only uses the SOTA method to validate the effectiveness of the proposed method on ILSVRC2012 and Food-101N but does not give a validation on the commonly used datasets CIFAR-10 and CIFAR-100. **R4**: We have included results on CIFAR-10 and CIFAR-100 for two SOTA methods (C2D and CC) in Table 1 of the attached PDF in the global response. These methods achieved the first and second-highest accuracy on ILSVRC2012. Since other methods (NCR and SANM) did not provide complete source code for training, we could not reproduce their results during the rebuttal. However, we will include more SOTA baselines in the final version.
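To make the label-retrieval step concrete (a sketch of the assumed mechanics, not the authors' code): in the *frozen* pre-trained feature space, the noisy labels of a query's nearest neighbours are fetched and used for conditioning the diffusion model, which is why fixing the encoder shields the feature space from distortion by label noise.

```python
import numpy as np

def retrieve_neighbor_labels(features, noisy_labels, query_idx, k=3):
    """Fetch the noisy labels of the k nearest neighbours of a query in a
    frozen feature space (Euclidean distance; the query itself is skipped)."""
    dists = np.linalg.norm(features - features[query_idx], axis=1)
    neighbors = np.argsort(dists)[1:k + 1]
    return noisy_labels[neighbors]
```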
Summary: The paper focuses on learning from noisy labels using diffusion models. The authors reformulate the noisy label problem from a generative perspective. Specifically, the clean label is the target variable ($y_0$) and the noisy label is the diffused variable ($y_T$), which they call the Label-Retrieval-Augmented (LRA) diffusion model. Since the clean label is actually unknown, they extract pseudo clean labels using neighborhood consistency. Strengths: * The paper is well-written. * The proposed method is a well-extended approach that builds upon the utilization of diffusion models in classification tasks and applies it effectively to tackle the noisy label problem. Weaknesses: * Motivation of the LRA diffusion model * It is important to explain why the noisy label process should be formulated as a diffusion process and to discuss in detail the advantages it offers. * Since I think it is a novel approach, it is crucial to provide detailed explanations of the motivation behind it. * When using the neighbor consistency principle, it is important to address whether the same problems mentioned by the authors in lines 44-51 would arise. If not, it is necessary to explain how the proposed model overcomes these challenges. * Comparison with CARD * It appears that the original CARD paper introduced two pre-trained encoders, while the current paper seems to have used only one. This distinction needs to be clarified, and the roles of the two encoders should be explained separately in the description. * Equation 4 seems to be adapted from the formulation presented in CARD, and proper citation should be provided. * Experiments * The utilization of CLIP in the performance evaluation raises concerns because it involves the incorporation of additional data. It is important to conduct a fair comparison by addressing this aspect. * An explanation is needed to clarify why the diffusion model is not a computational bottleneck. 
It is hypothesized that this may be due to the smaller dimensions, but this should be verified. * Minor comments * The appearance of $\tilde{y}$ in line 136 without prior definition should be addressed. * "Require: Input:" in Algorithm 1 is a typo. * The inconsistent use of $\mu_{\theta}$ and $\epsilon_{\theta}$ should be corrected. * The subscript in line 153 needs to be written correctly. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please see the Weaknesses part. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: They are discussed in the Limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1.1: It is important to explain why the noisy label process should be formulated as a diffusion process and to discuss in detail the advantages it offers. Since I think it is a novel approach, it is crucial to provide detailed explanations of the motivation behind it.\ A: Thank you for highlighting the need to explain our approach. The intrinsic ambiguity of data introduces uncertainty to the labeling process, resulting in controversial labels. Modeling this process with a stochastic conditional generative model is intuitive, and using the most probable generation is expected to improve the annotation (classification) accuracy. We chose the diffusion model because it has demonstrated the ability to model prediction uncertainty in the CARD model for classification. In addition, the mode coverage ability of the diffusion model (generating diverse samples given the same conditional information) is intuitively suitable for training with neighboring labels. We'll provide more detailed explanations in the revised manuscript. Q1.2: When using the neighbor consistency principle, it is important to address whether the same problems mentioned by the authors in lines 44-51 would arise. If not, it is necessary to explain how the proposed model overcomes these challenges.\ A: In our method, the feature encoder is fixed and does not change during the training process. As a result, the feature space is determined prior to the introduction of noisy labels and is not subject to distortion by label noise. Q2.1: It appears that the original CARD paper introduced two pre-trained encoders, while the current paper seems to have used only one. This distinction needs to be clarified, and the roles of the two encoders should be explained separately in the description.\ A: The original CARD paper only introduced one pre-trained encoder, which is used as the mean estimator of the diffusion start at $t=T$. 
This encoder's features were also used as input to CARD's neural network. However, due to its role as the mean estimator, the encoder was restricted to a low dimension (the number of classes), thereby limiting its representation capacity. Intuitively, one might expect that feeding higher-dimensional, more powerful features to the network could lead to performance improvements.\ Therefore, in our method, we introduced a separate feature encoder specifically to feed conditional information to the network. We denote the mean estimator as $f_q$ (the same dimension as the labels), and the feature encoder (higher dimension) as $f_p$. As a result, the CARD model can be seen as a special case of our model, where $f_p = f_q$. In Table 2, our ablation study shows that CARD+LRA results in poor performance. This design is explained in Section 3.4, lines 170-175 of our manuscript. We apologize if this was not clear in our initial presentation, and we will certainly make efforts to clarify this in the revised manuscript. Q2.2: Equation 4 seems to be adapted from the formulation presented in CARD, and proper citation should be provided.\ A: Thanks for pointing this out. We have updated our manuscript with the appropriate reference to the CARD work. Q3.1: The utilization of CLIP in the performance evaluation raises concerns because it involves the incorporation of additional data. It is important to conduct a fair comparison by addressing this aspect.\ A: Please refer to the global response. We have added additional results in Table 3 of the attached PDF file, where simply applying the CLIP features leads to poor performance. We hope this response addresses your concern. Q3.2: An explanation is needed to clarify why the diffusion model is not a computational bottleneck. 
It is hypothesized that this may be due to the smaller dimensions, but this should be verified.\ A: As depicted in the supplementary figure C.1, all the time embeddings are inserted into the feed-forward layers of the neural network. Thus, the computation of the ResNet encoder is required only once and is the same as that for a standard ResNet classifier. The additional computation cost involves the computation of the $f_p$ encoder and a label-dimension (e.g., 10 in CIFAR-10) diffusion process, which is quite efficient. A quantitative comparison in Table 6 shows that the time cost of our method using a ResNet50 as the $f_p$ encoder is comparable to a standard ResNet50 classifier, while the performance is much better. Minor Comments\ A: Thank you for identifying the writing errors in our manuscript. We appreciate your attention to detail. We will correct these mistakes and do proofreading to ensure that no further errors remain. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. Most of my concerns have been addressed, but I still have a question. **Q1.2** My question is that this fixed feature encoder might introduce the problem the authors mentioned (*The performance highly depends on the quality of the encoder that maps the data to the feature space, ... the training can also lead to overfitting or underfitting.*). --- Reply to Comment 1.1.1: Comment: **R**: Thank you for raising this critical point. As you pointed out, a low-quality fixed feature encoder will not benefit the learning. However, when the pre-trained feature is of high quality, our method is more effective than fine-tuning the feature space using the SOTA method (C2D) because it shields the feature space from distortion by label noise. Moreover, freezing the feature encoder enables the integration of more powerful pre-trained encoders, such as CLIP, because it frees us from the prohibitive computational burden of fine-tuning. 
For a detailed comparison, please refer to the new baseline results, 'C2D + SimCLR', in Table 1 of the attached PDF. C2D employs a pre-trained SimCLR encoder for initialization; thus, the weight is fine-tuned during training. Results on CIFAR-10 demonstrate that when the pre-trained feature is of high quality, our method achieves superior accuracy. On CIFAR-100, where the SimCLR feature has lower quality (lower KNN accuracy), C2D is more effective, as it can refine the feature space through training.
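For completeness, the CARD-style forward process discussed in Q2.1 (with $f_q$ as the mean estimator of the diffusion start) can be sketched as follows; the exact schedule and notation here are assumptions based on the CARD formulation, not code from this submission:

```python
import numpy as np

def card_forward_sample(y0, fq_x, alpha_bar_t, rng):
    """Sample y_t ~ N(sqrt(ab_t)*y0 + (1 - sqrt(ab_t))*fq_x, (1 - ab_t) I).

    As alpha_bar_t -> 0 the label y0 is pulled toward the prior mean
    f_q(x); setting fq_x = 0 recovers a zero-centred terminal Gaussian."""
    root = np.sqrt(alpha_bar_t)
    mean = root * y0 + (1.0 - root) * fq_x
    noise = rng.standard_normal(y0.shape)
    return mean + np.sqrt(1.0 - alpha_bar_t) * noise
```

Because the process runs in the label dimension only, each step is a cheap vector operation, consistent with the efficiency discussion in Q3.2.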
Summary: The paper proposes the application of denoising diffusion probabilistic models for modeling the true class probability distribution when training with noisy labels. In particular, the paper extends Classification and Regression Diffusion Models for this problem and uses pretrained self-supervised representation models for Label Retrieval Augmentation. At test time, they approximate the true label by starting with the reverse diffusion at the mean of the final gaussian distribution. The empirical results on CIFAR10, CIFAR100 and other real-world noisy datasets show improvements over considered baselines. Strengths: The main strength of this work is that it demonstrates successful application of diffusion models for learning the probability distribution of labels in noisy label settings. The paper considers a variety of datasets and includes the relevant ablation studies and inference times. Weaknesses: I believe the key weaknesses are: 1) The requirement of a pre-trained self-supervised image embedding network. 2) The diffusion model does not sample discrete labels and does not even model the categorical distribution. 3) I think the evaluation should also consider the following baseline: train a classifier network exactly as in Algorithm 1 except in step 5 where the network is trained with an MSE loss to predict one-hot vector obtained in step 4. More specifically, it would be good to check the performance when the classifier network can take in the image and the simclr/clip features and have a linear prediction head. This baseline would be more closer to the LRA-diffusion model than the other baselines considered in Table 2 in terms of the model input and output. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. In line 179, you write that you simply set $f_q({\bf x}) = {\bf 0}$ whereas you write in line 151 that the DDIM generation begins with a non-zero centered gaussian. 
So, the generalized DDIM is essentially used for the CARD+LRA Diffusion experiments in Table 2? And, you have a zero-centered gaussian distribution for the LRA-diffusion that you actually implement? 2. Do you have any intuition about the performance of the baseline proposed in Weakness (3)? 3. Do the baselines considered in Tables (3), (4) and (5) have access to extra training data? 4. Will you release all the trained model checkpoints used for reporting the results in tables? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Yes, the limitations are adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
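For concreteness, the baseline proposed in Weakness 3 might look roughly like the following toy NumPy sketch (the dimensions, random data, and training loop are all hypothetical; a real run would use the actual image network and the pre-trained SimCLR/CLIP features):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: n samples with "image" features and pre-trained
# (SimCLR/CLIP-like) features, plus noisy integer labels.
n, d_img, d_pre, n_classes = 64, 8, 4, 3
img_feat = rng.normal(size=(n, d_img))
pre_feat = rng.normal(size=(n, d_pre))
labels = rng.integers(0, n_classes, size=n)

x = np.concatenate([img_feat, pre_feat], axis=1)   # network sees both inputs
y = np.eye(n_classes)[labels]                      # one-hot targets (Alg. 1, step 4)

# Linear prediction head trained with MSE, as the reviewer suggests.
w = np.zeros((x.shape[1], n_classes))
lr = 0.01
for _ in range(500):
    pred = x @ w
    grad = x.T @ (pred - y) / n                    # gradient of 0.5 * mean squared error
    w -= lr * grad

acc = np.mean(np.argmax(x @ w, axis=1) == labels)  # training accuracy of the baseline
```

In this sketch the linear head stands in for the full classifier network; the comparison point is only that the input (image plus pre-trained features) and output (one-hot retrieved label) match the LRA-diffusion setup.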
Rebuttal 1: Rebuttal: W1: The requirement of a pre-trained self-supervised image embedding network.\ A: We believe the requirement of a self-supervised embedding model like SimCLR is not necessarily a limitation, as it can be obtained for free by training on the same training dataset. Furthermore, the utilization of SimCLR is a common practice in the field. It has been leveraged in previous state-of-the-art methods such as C2D and the methods proposed by (Smart 2023 and Cordeiro 2021). * Smart, Brandon, and Gustavo Carneiro. "Bootstrapping the Relationship Between Images and Their Clean and Noisy Labels." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2023. * Cordeiro, Filipe R., et al. "Propmix: Hard sample filtering and proportional mixup for learning with noisy labels." arXiv preprint arXiv:2110.11809 (2021). W2: The diffusion model does not sample discrete labels and does not even model the categorical distribution. \ A: We would like to argue that this is not a fundamental issue, as the generated label vector can be interpreted as the logit of a prediction head of a traditional classifier. To further address your issue, we also performed experiments using diffusion on the probability simplex following the Star-shaped DDPM (Okhotin 2023). As stated in their paper, the utilization of the Dirichlet distribution (the conjugate prior of the categorical distribution) in a diffusion model requires a different mechanism design than DDPM. We have added the results as Dirichlet diffusion in Table 2. Although it has a one-hot label interpretation, the performance is indeed slightly worse than using standard DDPM. We hope this response adequately addresses your concern. * Okhotin, Andrey, et al. "Star-Shaped Denoising Diffusion Probabilistic Models." arXiv preprint arXiv:2302.05259 (2023). 
W3 and Q2: I think the evaluation should also consider the following baseline: train a classifier network exactly as in Algorithm 1 except in step 5, where the network is trained with an MSE loss to predict the one-hot vector obtained in step 4. More specifically, it would be good to check the performance when the classifier network can take in the image and the simclr/clip features and have a linear prediction head. This baseline would be closer to the LRA-diffusion model than the other baselines considered in Table 2 in terms of the model input and output.\ A: We have included the results of this baseline, denoted as "ResNet + linear," in Table 2 of the attached PDF file, and plan to incorporate it into the final version of the manuscript. We used a similar network architecture and the same pre-trained features and training labels as those used for our diffusion model. In certain settings, its accuracy is found to be lower than that of linear probing without using images as input. This difference may be attributed to the additional learning capacity introduced by the ResNet component being used to overfit the label noise. Moreover, the performance gap between our method and this baseline demonstrates the significance of the diffusion process. Q1: In line 179, you write that you simply set $f_q({\bf x}) = {\bf 0}$, whereas you write in line 151 that the DDIM generation begins with a non-zero centered Gaussian. So, the generalized DDIM is essentially used for the CARD+LRA Diffusion experiments in Table 2? And, you have a zero-centered Gaussian distribution for the LRA-diffusion that you actually implement? \ A: You are correct. The non-zero centered Gaussian for the start of the generation process is an essential component introduced in CARD. We also include this design in our model to present a more theoretically general framework of our method. However, in practice, we must adopt the setting that is most effective. 
Through our empirical analysis, we found that utilizing a noisy classifier as the $f_q$ encoder does not consistently lead to improved results. For the sake of simplicity and to reduce model complexity, we thus chose to use only a zero mean. Q3: Do the baselines considered in Tables (3), (4) and (5) have access to extra training data? \ A: The baseline methods do not have access to extra training data. In the global response, we discussed that the performance of our method does not simply reflect the strength of the CLIP features by comparing our method to the KNN and linear probing results using CLIP features on the real-world datasets presented in Tables 3, 4, and 5. Q4: Will you release all the trained model checkpoints used for reporting the results in tables?\ A: Yes, we will release all checkpoints and the code to recreate the results of our experiments. --- Rebuttal Comment 1.1: Title: Response to Rebuttal. Comment: Thank you for the extra experiments and answers to my questions.
Summary: The paper proposes using a diffusion model to perform classification on a noisy-label dataset. The authors utilize diffusion to learn how to conditionally generate labels given an image. By combining recent work on classification diffusion models with label retrieval augmentations, they demonstrate state-of-the-art results on real-world noisy datasets. Strengths: - The method provides a simple, yet seemingly effective way of performing classification on noisy data using generative models. The idea of using diffusion models for this purpose is novel and has the potential to significantly contribute to the "Learning With Label Noise" literature. - Some of the real-world dataset results showcased outperform the previous state-of-the-art by a significant margin. Weaknesses: - The novelty of the paper is limited. The authors combine the existing classification diffusion model with the retrieval augmentation technique to capture the noisy label distribution. The methodologies for both are inherited almost unchanged from their respective papers and there is no significant extension to either. Extending DDIM to a non-zero mean latent distribution is a minor contribution. - In lines 127 and 166, the authors claim that initializing with the predicted label mean and running the DDIM sampler approximates the maximum likelihood estimation of the labels. This is not justified anywhere, and there is no evidence to support the claim. - The experiments on the WebVision and ILSVRC2012 datasets should be examined further. The method presented in this paper relies heavily on having a high-quality pre-trained encoder network that can provide meaningful representations during training. For these two experiments, the authors utilized CLIP, whereas the next best-performing approaches, C2D and NCR, were only trained using samples from the smaller datasets. 
It is possible that much of the performance gain shown in this paper can be attributed to the discriminative ability of the CLIP encoder, especially since on the Clothing1M dataset, where CLIP is not expected to perform well, the authors did not greatly outperform the previous state-of-the-art results. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: - For a single sample, is the frequency of the neighbors seen in training the same regardless of their distances in the latent space? If so, how is a neighborhood defined? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The limitations are addressed. Broader societal impacts are not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: W1: The novelty of the paper is limited. The authors combine the existing classification diffusion model with the retrieval augmentation technique to capture the noisy label distribution. The methodologies for both are inherited almost unchanged from their respective papers and there is no significant extension to either. Extending DDIM to a non-zero mean latent distribution is a minor contribution.\ A: Thanks for the comment. We wish to claim that there are at least two types of research papers: one focuses on theoretical innovation; the other focuses on improving practical performance on particular problems. Our paper largely belongs to the second type. Our contribution lies in integrating and adapting diffusion models and the SOTA image encoder to a specific problem domain rather than proposing a new theoretical analysis. The proposed design allows us to achieve significant performance improvements, as demonstrated in our experimental results. On the other hand, extending DDPM to a non-zero mean latent distribution is essential to the previous CARD model. In our paper, we extended it to DDIM to provide a more general framework that includes CARD as a special case and potentially facilitates future work that uses a more refined $f_q$ encoder. W2: In lines 127 and 166, the authors claim that initializing with the predicted label mean and running the DDIM sampler approximates the maximum likelihood estimation of the labels. This is not justified anywhere, and there is no evidence to support the claim.\ A: We empirically find that initializing DDIM with the predicted label mean results in slightly improved testing accuracy compared to the stochastic version. Since the initial Gaussian distribution is unimodal, and it serves as the sole source of randomness in the generation process, using the mode to obtain a deterministic and most probable prediction emerges as an intuitive choice. 
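The deterministic generation discussed here (starting the reverse DDIM process at the mean, here zero, of the final Gaussian) can be illustrated with a toy NumPy sketch; the schedule and the "oracle" noise predictor below are invented for illustration and stand in for the trained label-diffusion network:

```python
import numpy as np

# Toy deterministic DDIM reverse process for a label vector, started at the
# mean (zero) of the final Gaussian. Purely illustrative schedule and oracle.
abar = np.linspace(0.999, 0.01, 10)        # made-up alpha-bar schedule, t = 0..9
y_true = np.array([0.0, 1.0, 0.0])         # the label the toy "network" encodes

def eps_theta(x_t, t):
    # Oracle noise prediction: the noise that would map y_true to x_t at step t.
    return (x_t - np.sqrt(abar[t]) * y_true) / np.sqrt(1.0 - abar[t])

x = np.zeros(3)                            # x_T = mean of N(0, I): deterministic start
for t in range(len(abar) - 1, 0, -1):
    eps = eps_theta(x, t)
    # DDIM (eta = 0) update: predict x_0, then step to the previous noise level.
    x0_hat = (x - np.sqrt(1.0 - abar[t]) * eps) / np.sqrt(abar[t])
    x = np.sqrt(abar[t - 1]) * x0_hat + np.sqrt(1.0 - abar[t - 1]) * eps
```

Since the start point and the update are both deterministic, the generated label vector `x` is a single, most probable prediction rather than a sample; with the oracle predictor it lands (up to the small residual noise term) on `y_true`.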
W3: The experiments on the WebVision and ILSVRC2012 datasets should be examined further. The method presented in this paper relies heavily on having a high-quality pre-trained encoder network that can provide meaningful representations during training. For these two experiments, the authors utilized CLIP, whereas the next best-performing approaches, C2D and NCR, were only trained using samples from the smaller datasets. It is possible that much of the performance gain shown in this paper can be attributed to the discriminative ability of the CLIP encoder, especially since on the Clothing1M dataset, where CLIP is not expected to perform well, the authors did not greatly outperform the previous state-of-the-art results.\ A: As discussed in our global response, the KNN and linear probing results added to Table 3 illustrate that our method effectively incorporates CLIP features in supervised learning from noisy labels. Utilizing naive linear probing with neighboring labels degrades the pre-trained feature space, leading to diminished performance in comparison to the unsupervised KNN approach. Moreover, our method has the flexibility to incorporate other techniques to further enhance accuracy. Q1: For a single sample, is the frequency of the neighbors seen in training the same regardless of their distances in the latent space? If so, how is a neighborhood defined?\ A: Neighbors are defined as in the K-nearest-neighbor algorithm, i.e., by the rank of distances. We choose K based on KNN performance on the validation set as discussed in supplementary C2. --- Rebuttal Comment 1.1: Comment: We wanted to follow up on the question you raised during the rebuttal phase. We hope that our responses have sufficiently addressed your concern. Should you have any further questions or require additional clarification, please do not hesitate to reach out. We stand ready to provide any additional explanations as needed.
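The neighbor retrieval and label construction described in this exchange (K nearest neighbors by distance rank in a pre-trained feature space, then either sampling one neighbor's one-hot label or averaging them) might look roughly like this toy NumPy sketch; the features, labels, and dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pre-trained feature space and noisy labels; dimensions are hypothetical.
n, d, n_classes, K = 50, 6, 4, 5
feats = rng.normal(size=(n, d))
noisy_labels = rng.integers(0, n_classes, size=n)

def retrieve_neighbors(query_idx):
    # Neighbors by rank of Euclidean distance, as in K-nearest neighbors;
    # the query point itself is excluded.
    dists = np.linalg.norm(feats - feats[query_idx], axis=1)
    order = np.argsort(dists)
    return order[order != query_idx][:K]

def sample_label(query_idx):
    # "Sample" label: one-hot vector of a uniformly drawn neighbor's label.
    nbrs = retrieve_neighbors(query_idx)
    return np.eye(n_classes)[noisy_labels[rng.choice(nbrs)]]

def mean_label(query_idx):
    # "Mean" label: average of the neighbors' one-hot labels (a soft target).
    nbrs = retrieve_neighbors(query_idx)
    return np.eye(n_classes)[noisy_labels[nbrs]].mean(axis=0)
```

Under this reading, every neighbor within the top-K rank is sampled with equal frequency regardless of its exact distance, which matches the rebuttal's rank-based definition.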
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments. We are happy that the reviewers find our manuscript well-written and our method's application effective. A common issue raised by the reviewers is the incorporation of the CLIP feature, which is trained using external data. We will address this issue below. For other specific questions raised by each reviewer, we will post our responses separately. We have also added the experiment results requested in the reviews to the attached one-page PDF file. We will incorporate more detailed revisions into the camera-ready version according to our responses to the reviews. Q: The utilization of CLIP in the performance evaluation raises concerns because it involves the incorporation of additional data. It is important to conduct a fair comparison by addressing this aspect.\ A: We thank the reviewers for this general comment. We argue that the results do not simply reflect the strength of the CLIP features, for the following reasons. We will incorporate these arguments into the final revision for clarification. 1. We have added multiple baselines in the ablation study; the results are shown in Table 2 of the attached PDF. All the baseline methods use the pre-trained SimCLR and CLIP features. The SimCLR feature is trained on the same training data through contrastive learning without access to an external dataset. The results demonstrate that our method is the key to the successful application of pre-trained features in learning from noisy labels. In addition, we want to emphasize that we are the first to develop a method that effectively uses the CLIP feature in learning with noisy labels. This innovation is a significant part of our contribution. 2. We show results of linear probing and KNN using CLIP features in Table 3 of the attached PDF to further demonstrate the effectiveness of our design. 
The results suggest that utilizing linear probing with LRA and mean labels degrades the pre-trained feature space, leading to diminished performance in comparison to the unsupervised KNN approach. On the other hand, the diffusion process can effectively incorporate the feature space to achieve higher performance. In addition, we included results of linear probing using the average of neighboring labels (denoted as mean) instead of sampling from neighboring labels (denoted as sample) in Table 2 in response to reviewer KNdw’s inquiry. We found that mean labels are more robust than sample labels for linear probing. However, we show that both sample and mean labels work with the diffusion model and yield similar results. We also included the diffusion result with mean labels to show that it is slightly worse than sample labels in most cases on the real-world datasets. We have also discussed the use of mean labels in Supplementary D2. 3. In Table 5 of the manuscript, we have shown improved results with CC, which isn't trained with external data, on the Clothing1M dataset. This demonstrates that our method can achieve improved results when coupled with other feature-learning methods. In fact, we would like to emphasize that a common practice adopted by most existing methods to achieve state-of-the-art results is to use pretrained models (why wouldn’t they, if pretrained models are available?). In our setting, for example, NCR achieves SOTA through the combination of NCR+Mixup+DA, and C2D also has to collaborate with ELR+, DivideMix, and SimCLR to achieve SOTA results. We will highlight these hybrid methods in our comparison. Since the LRA diffusion method is orthogonal to many existing techniques, we believe that by combining it with other learning techniques, such as Mixup augmentation and Co-teaching, our method will facilitate the development of new methods for learning from noisy labels in the future. Pdf: /pdf/f381bce4abe49af5a0f03820c837bf8bd4ff1b87.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper considers image classification with noisy labels. Rather than consider a single label for each image, it considers the label distribution within the set of neighboring examples (determined using a pre-trained feature extractor), modelling this distribution by a learned diffusion process that is conditioned on image features, similar to CARD. The diffusion process maps a random label in the neighboring set to a normal distribution whose mean is either zero or the prediction of a baseline classifier, such that the reverse process should sample labels from the neighboring set. Inference is performed using DDIM to find the maximum likelihood label. The method assumes a pre-trained feature extractor (SimCLR trained on training data or CLIP trained on external data) is available for the purpose of finding neighbors and conditioning the diffusion process. The method is compared to strong baselines on several synthetic and real-world datasets and often provides a large improvement. Ablative studies demonstrate the importance of each component. Strengths: 1. The ablative study (Table 2; CIFAR10/100 with synthetic noise) demonstrates that the combination of CLIP/SimCLR features and LRA-diffusion is key to the effectiveness of the method: linear probing, LRA (label retrieval) and CARD (diffusion-based classification) alone do not achieve such an improvement. 1. Idea is well motivated. 1. Good coverage of related work. 1. Well written. 1. Synthetic experiments use strong type of noise based on second-highest confidence. 1. Inference time included in empirical results. Weaknesses: 1. Real-world evaluation (WebVision, ILSVRC, Food-101N) does not include simple baselines using CLIP features (KNN classifier and linear probe). This is important because it lets us verify that the results do not simply reflect the strength of the CLIP features. 1. Real-world evaluation does not include SimCLR features. 
This is important because it shows the performance of the method without external data. In general, it would be good to highlight which methods use (which) external datasets and/or pre-trained weights in the SOTA comparison. 1. (Sections 3.3 and 3.4) The modification of the DDIM procedure to allow non-zero mean seems to be redundant, since all experiments then used a zero-mean distribution by setting $f_q = 0$ (line 179)? 1. Unclear what "Linear probing + LRA" is in Table 2. Does this entail training a linear model using the average of the neighboring labels as the softmax-cross-entropy target? 1. It does seem inelegant to use a normal distribution to represent probability vectors (as acknowledged in text). Did the authors try working in real-valued logits instead? Suggestions: 1. I'm not a big fan of naming the method "retrieval", since the term suggests to me that training data is retrieved from a larger, external source of training data (not the case). 1. Discussion of impact of $k$ (number of neighbors) is deferred to the supp mat and I didn't see any references to this in the main text. It would be good to at least reference it and ideally add a plot. 1. Could be good to include some discussion of singly-labelled vs multiply-labelled noisy labels, since the proposed method effectively constructs a multiply-labelled dataset from a singly-labelled dataset. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address weaknesses. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have identified limitations including: 1. dependence on pre-trained feature extractors 1. method is less effective beyond ~50% corrupted labels 1. 
normal distribution not ideal for working on probability simplex I do not see any ethical issues. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Real-world evaluation (WebVision, ILSVRC, Food-101N) does not include simple baselines using CLIP features (KNN classifier and linear probe). This is important because it lets us verify that the results do not simply reflect the strength of the CLIP features.\ A: We add the KNN and Linear Probing results to Table 3 in the attached PDF. Similar to our ablation study, linear probing results in decreased performance compared to KNN. On the other hand, diffusion methods are effective and robust in incorporating CLIP features. The test accuracy on ILSVRC2012 shows a performance drop due to supervised training bias on WebVision. This limitation does not apply to the unsupervised KNN. Q2: Real-world evaluation does not include SimCLR features. This is important because it shows the performance of the method without external data. In general, it would be good to highlight which methods use (which) external datasets and/or pre-trained weights in the SOTA comparison.\ A: We found that using SimCLR features is less effective compared to SOTA methods, partly due to the sub-optimality of the trained features. However, SimCLR+Diffusion (our method) still outperforms the simple SimCLR method. We also have shown improved results with CC, which isn't trained with external data, on the Clothing1M dataset, as presented in Table 5. This demonstrates that our method can achieve improved results when coupled with other feature-learning methods. Q3: (Sections 3.3 and 3.4) The modification of the DDIM procedure to allow non-zero mean seems to be redundant, since all experiments then used a zero-mean distribution by setting $f_q = 0$ (line 179)?\ A: We would like to argue that we show that allowing DDIM with non-zero mean is a more theoretically general framework of our method, but do not advocate the exact adoption of the method. In practice, we do need to adopt the most practically effective setting. 
We empirically found that utilizing a noisy classifier as the $f_q$ encoder does not consistently lead to improved results. For simplicity, we thus only use a zero mean to reduce the model complexity. Q4: Unclear what "Linear probing + LRA" is in Table 2. Does this entail training a linear model using the average of the neighboring labels as the softmax-cross-entropy target?\ A: A sample label is a one-hot label vector, sampled from neighboring labels as outlined in Algorithm 1, step 4, on page 4. We've included the result of the linear probing with mean label setting that you requested in Table 2 of the attached file. It is more effective than using sample labels for linear probing. Additionally, we have discussed the utilization of mean labels for diffusion in Supplementary D2, where we find that the performance is comparable to that of sample labels. Q5: It does seem inelegant to use a normal distribution to represent probability vectors (as acknowledged in text). Did the authors try working in real-valued logits instead?\ A: We agree that the generated label vector is not confined to the probability simplex, but it can be interpreted as the logit of the prediction head of a classifier. To further address your concerns, we additionally performed experiments using diffusion on the simplex following the recent Star-Shaped DDPM (Okhotin 2023). As stated in their paper, the utilization of the Dirichlet distribution in a diffusion model requires a different mechanism design than DDPM. We have added the results as Dirichlet diffusion in Table 2. Although it has a one-hot label interpretation, the performance is indeed slightly worse than using standard DDPM. We hope this response adequately addresses your concern. * Okhotin, Andrey, et al. "Star-Shaped Denoising Diffusion Probabilistic Models." arXiv preprint arXiv:2302.05259 (2023). 
Suggestions: S1: I'm not a big fan of naming the method "retrieval", since the term suggests to me that training data is retrieved from a larger, external source of training data (not the case).\ A: We use the term 'retrieval' in the name because the label is augmented by the neighbor's label, and the neighbors are retrieved from an external feature space, which can be pre-trained by external data. However, we understand how the term 'retrieval' might cause misunderstandings. Accordingly, we will consider changing the name to “label-corrected diffusion” to better reflect the actual function. S2: Discussion of impact of $k$ (number of neighbors) is deferred to the supp mat and I didn't see any references to this in the main text. It would be good to at least reference it and ideally add a plot.\ A: Thank you for your suggestion. We will add a reference to the impact of the number of neighbors in the main text and include a small plot in the supplementary material to address this concern. S3: Could be good to include some discussion of singly-labeled vs multiply-labeled noisy labels, since the proposed method effectively constructs a multiply-labeled dataset from a singly-labeled dataset.\ A: Thank you for your suggestion. We plan to include a discussion of applying our method to multi-label datasets in the conclusion section, which we hope to explore as interesting future work. --- Rebuttal Comment 1.1: Comment: > Q1 (baselines with CLIP features for real-world datasets), Q2 (baselines with SimCLR features for real-world datasets), Q4 (is mean label used for "linear probe + LRA"?) Thank you for conducting these experiments, this improves my confidence in the effectiveness of the method. > Q3 (framework is more general than experiments) This is ok but the reader should be warned that experiments adopt $f_q = 0$ at the time that it is introduced, with the justification that the authors have provided in the rebuttal. 
> Q5 (Gaussian not on simplex) Thanks for conducting this initial experiment. This is interesting and important for completeness. > S1 (proposal to use name "label-corrected diffusion") To me, "label-corrected" could mean anything. Maybe something like "neighborhood label distribution diffusion", but it's up to you! > S2 (number of neighbors), S3 (discussion of multiply-labelled noisy labels) Thanks for taking these suggestions on board. **Overall** The paper proposes a clever use of diffusion to incorporate neighborhood consistency in noisy-label learning. While neither component is novel, I find the application to be highly apt, and moreover the method is effective. The additional experiments in the rebuttal help demonstrate that the improvement is not simply due to the use of pre-trained or self-supervised features. I would prefer to see a greater discussion of other methods for predicting a multimodal distribution in the final version. I preserve my initial positive rating.
Recursion in Recursion: Two-Level Nested Recursion for Length Generalization with Scalability
Accept (poster)
Summary: Recursion in Recursion combines the efficiency characteristics of Binary Balanced Tree Recursive Neural Networks (BB-Tree RvNNs) and quality characteristics of Beam Tree RvNNs by applying the recursion in a hierarchical fashion. Doing so is not straightforward and requires several "modifiers" such as Beam Alignment and being careful about chunk preprocessing in the outer recursion. Table 1 demonstrates that the method is efficient in both time and memory. Table 2 demonstrates that the model quality is compromised, but not by much. Strengths: The paper provides necessary background, which I appreciated since I am not too familiar with RvNNs. Experiments seem well done and reproducible. The method is efficient as promised and still maintains the ability to generalize to sequence length. While the method is still slow and seems less mature than other methods like transformers, I think RvNNs are interesting and worthy of discussion because of their explainability and ability to generalize to longer sequences. Weaknesses: While interesting, the method seems rather complex to apply generally, with 2 levels of recursion and necessary modifiers to integrate them effectively. The beam alignment section I found particularly confusing. Maybe there could be a figure to help understanding? The inability to generalize on argument length could use more discussion. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: I would be curious about how frequently bad chunk processing happens and the model's ability to recover? How much does S4D help here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: The authors have a limitations section that discusses slowness and difficulty generalizing. 
To me, the method seems complex. I am not sure it is broadly applicable given that it is still slow and model quality is still lacking. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. **Re Weaknesses** 1. Please see the general response for the clarification on the complexity. 2. We provide figures and additional intuition on beam alignment in the general response. 3. Argument generalization was a recently proposed challenge, and currently, most models struggle with it - including GoldTreeGRC (a model with ground truth structure data). We will need to investigate the cell function or other training strategies for proper argument generalization, which would require separate work and is left as future work. Note that while our model's weaker argument generalization is a limitation, this limitation is, at the moment, nearly universal - shared by most of the popular models - S4D, Transformers (Transformers struggle even IID [2]), MEGA, and others. Only one or two models lacking explicit structure information (OM, EBT-GRC) are moderately good but still fall below 90%. **Re Questions** 1. Bad chunks should happen nearly 100% of the time since ListOps is not set up to have balanced structures at any level. We show the comparison of “RIR-EBT-GRC” vs “RIR-EBT-GRC-S4D” in various tables (please see Table 2 and Table 4). As we find there, S4D actually does not help much. So this is an area open for improvement/further investigation. The ability to “recover” the right structure through bad chunks is hard to check because, if it recovers, it has to do so internally by organizing information in its hidden states in the right way (analogous to how an RNN may model tree structures [3]), which is more of a black-box process. **Re Limitations** 1. Note that the difficulty to generalize (e.g., in argument generalization) is also present in other existing models like S4D, MEGA, and Transformers, which fail in length generalization too [1,2]. RIR-EBT-GRC does much better than them. 2. 
While RIR-EBT-GRC is slower than S4D/MEGA/BBT-GRC - their speed comes at the cost of worse performance in structure-sensitive contexts. Moreover, our proposal - RIR-EBT-GRC is several times faster than other powerful RvNN models (OM, CRvNN, BT-GRC) - making a massive computational improvement for RvNN models (Table 1) while retaining decent performance (Table 2, Table 3). For example, it’s 100s of times faster than OM. Also please see the Pareto frontier graphs in the general response pdf which show how RIR-EBT-GRC gains a better trade-off than others. [1] The Importance of Being Recurrent for Modeling Hierarchical Structure - Tran et al. EMNLP 2018 [2] Ordered Memory - Shen et al. Neurips 2019 [3] Tree-Structured Composition in Neural Networks without Tree-Structured Architectures - Bowman et al. International Conference on Cognitive Computation: Integrating Neural and Symbolic Approaches 2015
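To make the ListOps and bad-chunk discussion in the rebuttal above concrete, here is a minimal, hypothetical Python sketch (not the authors' code): a tiny evaluator for ListOps-style expressions, plus a fixed-size chunker showing that uniform chunks essentially always split nested brackets, i.e., produce "bad chunks". The even-length median convention (lower median) is an assumption, chosen so the example evaluates to 6 as stated elsewhere in this thread.

```python
# Minimal ListOps-style sketch (illustration only, not the paper's code).
# MED uses the lower median for even-length argument lists -- an assumed
# convention, chosen so the example below evaluates to 6.

def eval_listops(tokens):
    """Recursively evaluate a whitespace-tokenized ListOps expression."""
    def helper(i):
        tok = tokens[i]
        if tok.startswith('['):               # operator token, e.g. '[MAX'
            op, args, i = tok[1:], [], i + 1
            while tokens[i] != ']':
                val, i = helper(i)
                args.append(val)
            i += 1                            # consume the closing ']'
            if op == 'MAX':
                return max(args), i
            if op == 'MIN':
                return min(args), i
            if op == 'MED':                   # lower median (assumption)
                return sorted(args)[(len(args) - 1) // 2], i
            if op == 'SM':                    # sum modulo 10
                return sum(args) % 10, i
            raise ValueError(f"unknown operator: {op}")
        return int(tok), i                    # plain digit
    return helper(0)[0]

expr = "[MAX [MED [MED 1 [SM 3 1 3 ] 9 ] 6 ] 5 ]".split()
print(eval_listops(expr))                     # -> 6

# Fixed-size chunking (as in the outer balanced-tree recursion) ignores
# bracket structure, so chunks almost always cut through operator scopes:
k = 5
chunks = [expr[i:i + k] for i in range(0, len(expr), k)]
bad = sum(sum(t.startswith('[') for t in c) != c.count(']') for c in chunks)
print(f"{bad}/{len(chunks)} chunks have unbalanced brackets")  # -> 3/3
```

This matches the rebuttal's point that bad chunks happen nearly 100% of the time: since ListOps expressions are not bracket-balanced at fixed offsets, any structural recovery across chunk boundaries must happen inside the model's hidden states.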
Summary: The paper studies a “recursion in recursion” (RIR) approach, the outer loop being a balanced k-tree recursive NN, and the inner loop a general recursive NN that is based on a beam tree. The goal is to obtain O(k*log_k(N)) time complexity. O(2*log_2(N)) is a special case of k=2 (for binary trees) and O(N) is the special case of k=N for RNNs. The authors study (mostly) the tradeoff between performance on ListOps and Long Range Arena (LRA). Namely, RIR seems competitive to strong baselines on LRA (e.g. state space models), yet it generalizes better on longer lengths in ListOps. Strengths: S1. The paper is very well written. The research process is embedded in the structure of the paper: “motivation -> problem -> solution -> new problem -> new solution -> etc.” S2. The modifications are quite reasonable and the experiments are thorough (the appendix was helpful too). S3. The paper’s evidence matches the claims quite well and the limitations are well-acknowledged. Weaknesses: W1. It is very hard to read through the tables. Since you are studying a trade-off of time/ memory efficiency and performance on ListOps, and likewise ListOps vs. LRA performances, I would expect scatter plots that trace out a Pareto frontier and where your RIR contributions lie on this frontier. W2. Despite the work being very engineeringly sound and well-presented, one could argue that the contributions are “combinatorial” and mostly heuristic vs. fundamental. That is OK, but one could consider it a weakness within the context of NeurIPS. W3. Following up on 2., it may be good to discuss the significance of your contribution to larger tasks and broader / realistic data distributions. For example, how might RIR be a part of a state-of-the-art LLM architecture in the future? W4. Perhaps the most exciting discussion for me is about different inference strategies (whether to use RIR or not).
I acknowledge that you have addressed some of that in the SM, but I would have expected to see a bit more discussion in the main text. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Q1. Why do you think that RIR is not very helpful for inference? I like it because of the more efficient inference that it enables. Is it possible to demonstrate the efficiency/ accuracy trade-off with another Pareto frontier plot? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: the authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. **W1:** We will add the Pareto frontier graphs. We currently show them in the General Response PDF. ----- **W3:** At this point, we can suggest some speculative moves we can make to set up RIR-EBT-GRC in an LLM context. First, we need to think about parameter scaling. The naive way to scale parameters would be simply increasing the dimensions of the recursive cell. This is inefficient in recursive contexts (this would lead to repeating a large layer an indefinite number of times depending on sequence length). Some alternatives can be to look for some sort of modular/MoE approach (where each recursion operates on some sparse parameters from a bigger parameter set) or to try some form of stacking (just as we, sometimes, stack RNNs). Now, stacking RIR-EBT-GRC is not immediately possible because it’s a sentence-encoder (it compresses a sequence of vectors into a single vector after which there is not much room to add another sequence model). However, one thing we can do is try to create a token contextualization by sending top-down signals from representations of the whole to representations of the parts (initial token representations) (similar to [1]). In fact, we are exploring this idea in a concurrent paper by using a parent attention mechanism. This can allow stacking the output of RIR-EBT-GRC with other RIR-EBT-GRCs, Transformers, SSMs, or CNNs (and the overall model can utilize a mixture of inductive biases). Second, after parameter scaling, another main difficulty is utilizing it in a causal LLM training context. A difficulty is that RIR-EBT-GRC is a bidirectional model. We can still train a bidirectional model for LLM by sequentially entering the expanding context in a loop but this would be slow.
To make it tractable, first, we can use a Block-Recursive style framework [2] and second, we can limit the use of RIR-EBT-GRC to specific intervals (use it sparsely), and use a simpler model that uses the latest RIR-EBT-GRC-created past hidden states + free tokens (causally masked) to make predictions in between. Some loosely related ideas are used in [5] (they are also trying to work with looping a bidirectional model for autoregressive generation). Even after all these, the overall setup would probably not be as fast as Transformer training. But speed or even mere surface capacity to handle big contexts is not everything - and can come at other costs [3,4]. [1] Head-Lexicalized Bidirectional Tree LSTMs - Teng et al. TACL 2017 [2] Block-Recurrent Transformers - Hutchins et al. Neurips 2022 [3] Exposing Attention Glitches with Flip-Flop Language Modeling - Liu et al. ArXiv 2023 [4] Lost in the Middle: How Language Models Use Long Contexts - Liu et al. ArXiv 2023 [5] Real-World Compositional Generalization with Disentangled Sequence-to-Sequence Learning - Zheng et al. ArXiv 2022 ----- **W4-Q1:** We will add RIR inference details in the main text. We will also add trade-off information. But overall, to answer the question - in most tasks (including LRA) RIR-inference seems to work nearly as well as non-RIR-inference. So in most cases, RIR inference is a “win-win” (several times faster + nearly the same accuracy). However, when a model is working in a structure-sensitive context, it may find it difficult to learn to length generalize as well through an RIR format (it still does better than S4D/MEGA in the ablation Table 1 in Appendix) -- because ultimately the balanced-tree enforcement is a restriction on the search space that is generally unsuited for structure-sensitive tasks (and merely done for efficiency).
However, the existence of training data of length <= chunk size leads to some non-RIR training instances, which probably helps teach the inner recursion to learn to length-generalize. Thus, testing without RIR inference (i.e., running the inner recursion model on the full input) can still lead to length generalization and be even better (in structure-sensitive contexts when the inner recursion indeed learns to utilize the relevant structures in a length-generalizable manner) due to the lack of balanced tree enforcement. In these cases, RIR inference can reduce the performance compared to non-RIR inference. Note, however, that since the use of -Beam Align or +Random Align during training reduces the non-RIR inference accuracy (these changes only make an impact for instances running in RIR mode), the RIR instances (i.e., instances with sequence length > chunk size) should still be providing some training signals that help the inner recursion model learn to generalize. --- Rebuttal Comment 1.1: Title: I read the rebuttal. I appreciate the clarifications and additions and will raise my score. Comment: Thanks. I would encourage you to add the discussions in the main text/ SM. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score. We will add the discussions.
Summary: The authors propose a new tree-style RNN that combines the computational efficiency of balanced tree RNNs with the computational power of more complex tree RNNs. The method makes each node perform a recursive computation, so there are logarithmically many nodes but they are more expressive than the standard tree network. The resulting model can operate on long sequences comparably to state space models and much better than vanilla transformers. **Ethics Review**: It appears to me that the authors broke the integrity of the double-blind reviewing procedure. In lines 144-146, the authors write: "we use an efficient variant of BT-RvNN (EBT-RvNN). We propose and explore EBT-RvNN in a concurrent work". NeurIPS policy states: "If you need to cite one of your own papers, you should do so with adequate anonymization to preserve double-blind reviewing. For instance, write “In the previous work of Smith et al. [1]…” rather than “In our previous work [1]...”). If you need to cite one of your own papers that is in submission to NeurIPS and not available as a non-anonymous preprint, then include a copy of the cited anonymized submission in the supplementary material and write “Anonymous et al. [1] concurrently show...”).". As such, I am flagging it for an ethics review. **Edit after rebuttal**: I have increased my score from 3 to 4. Strengths: The paper improves on other tree-based RNNs by reducing the computation. Weaknesses: This may be my lack of understanding of this subject area, but I think this is an overly complicated method that does not outperform existing methods or provide a clear benefit over them. There is some value to advancing a subfield of ML (i.e., tree-based RNNs), but I find it difficult to see the significance of this work. It may also be because of issues with the writing, see below. Besides the complexity of the method and the performance issues, the authors also do not benchmark on many tasks.
They focus on the ListOps task from the Long Range Arena benchmark and don't consider Mega, which has achieved SoTA on this task. **Writing**: Overall the writing is very difficult for someone who has not been actively thinking about this problem and working with tree-style RNNs. I found it very difficult to read the introduction and abstract because there is too much jargon. Below are some suggestions and questions that may help the authors improve their writing for an audience that is not working on tree-based RNNs. I generally recommend refactoring the writing into formal definitions so it is clear to see how the inner and outer recursions work together. As it stands, I couldn't even write down how your method would actually process a given input. - Can you make a figure illustrating these different trees instead of trying to describe them in the text? - I don't know what a "strong RvNN" is or what it means to process only some arguments that are sent in from an outer loop. (lines 75-79) - Can you introduce clearly what ListOps is and why it is important to be length-generalizable for that task? - There is not enough information about these other methods in the tables and how they work. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. In what situations would I use this method instead of other stronger models (e.g., Mega)? 2. Is there any benefit to using a tree-based model instead of a modified transformer architecture? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors discuss the limitations clearly, which I appreciate. They mention that their architecture improves on tree-based RNNs but is much slower than other, seemingly stronger models. 
It also seems to not be able to learn from long sequences, which may be the point of a long-context model. One thing they do not mention is that their architecture seems to be very complicated and has many tricks involved. They also only evaluate on a few tasks, and they seem to be very focused on ListOps in particular. For other tasks, their method performs comparably to existing tree-based models (lines 325-326). Flag For Ethics Review: ['Ethics review needed: Failure to comply with NeurIPS Code of Ethics (lack of required documentation, safeguards, disclosure, licenses, legal compliance)'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. **Re Ethics:** We will not say too much about this given Ethics Reviewers have already checked it out. We didn’t provide any deanonymizing information about the work. While we haven’t cited the work or provided the anonymized copy at the moment (we will cite it in the final copy), we have provided all the necessary details pertinent to this work in the main paper and the appendix. If there are still some questions, we will be happy to clarify. **Re Weaknesses:** * Please see the general response comment on complexity. * Significance: While models like S4D and MEGA are generally powerful, they can struggle in structure-sensitive tasks and in length generalization (e.g., in Logical Inference and ListOps). On the other hand, models like OM, CRvNN, BT-GRC, EBT-GRC can generalize more robustly but they are very slow to run (please see Table 1). Our work proposes a framework to strike a better balance between length generalization capacity and competence in long sequence tasks (as alluded to in the title of the paper). Also see the Pareto Frontier graphs in the General Response PDF for some visualizations. Moreover, as you noted, there is value in exploring niche areas and alternative approaches too (so as to not put all the eggs in one or two baskets) even if right now there are limitations to RvNNs. * We also have results on logical inference (Table 3), results on IMDB, AAN retrieval from LRA (Table 4), and other NLP tasks in the Appendix. We added MEGA results in the general response pdf. We will improve the writing. * “strong RvNNs” refers to the modern RvNNs that can perform well in tasks like ListOps/Logical Inference (e.g., OM, CRvNN, BT-GRC). * ListOps is an arithmetic task with nested list operators; for example, “[MAX [MED [MED 1 [SM 3 1 3 ] 9 ] 6 ] 5 ]” evaluates to 6 (here MED is median, and SM is modulo-10 summation). We will add more details in the paper.
* Importance of length generalization: first, we can ask ourselves how we know whether a neural network learns the “right” function and is not simply exploiting some surface statistical bias from the training dataset. One simple way to test this is to test the model on unseen datasets (the test set). But this is not necessarily ideal. Test sets, if they are from the same (IID) distribution as the training set, may share some statistical biases which the model can be exploiting. So what can we do? One thing we can do is to think of specific isolated factors of generalization that a neural network should exhibit if it learns the desired function from the data. For example, we may want the neural network to be robust against local perturbations. So we may try to test on some adversarially perturbed datasets. Or we may want the neural network to be robust to counterfactual modifications - and we may then create counterfactual test sets [1]. Similarly, “length generalization” is one such factor of generalization to test a model on. Particularly, if a human is taught arithmetic on examples up to length $100$, we would expect them to be able to solve arithmetic problems of length $>100$ without ever seeing higher-length examples before. We can check if a neural network can exhibit similar robustness/generalization capacities. Overall, length generalization has been a topic of interest for a long time in the ML community. * Regarding details on other models, OM [2] is a form of stack-augmented RNN (think of pushdown automata), whereas CRvNN [3] is like a standard RvNN but it softens the sibling selection and update mechanisms. We will add more details on these models. **Re Questions:** 1. You could use our method instead of Mega when you want better OOD generalization, particularly with tasks that require stronger structural bias. 2. Modified Transformers and Tree-based models are not a dichotomy. There are Tree-based Transformers too.
CRvNN is also Transformer-like in some respects. Regardless, models that are closer to the structure of Transformers have generally failed to show robust length generalization in tasks like ListOps, mathematical tasks, and propositional logical inference [2,4]. Neural Data Router [5] makes good progress but it still shows limitations in generalizing to order of magnitude higher depths and lengths (as studied in the “Beam Tree Recursive Cells” paper in the appendix). [1] Learning The Difference That Makes A Difference With Counterfactually-Augmented Data - Kaushik et al. ICLR 2020 [2] Ordered Memory - Shen et al. Neurips 2019 [3] Modeling Hierarchical Structures with Continuous Recursive Neural Networks - Ray Chowdhury et al. ICML 2021 [4] The Importance of Being Recurrent for Modeling Hierarchical Structure - Tran et al. EMNLP 2018 [5] The Neural Data Router: Adaptive Control Flow in Transformers Improves Systematic Generalization - Csordas et al. ICLR 2022 --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their response. RE: the ethics review, I just wanted to do my due diligence following the instructions. The authors did a good job of addressing my concerns, particularly with respect to Mega and other Transformer-based models for long sequence modeling. I think the method is sound, and the authors' claims seem reasonable. Ultimately, I still have my concerns about the significance of the work, given how slow and complicated the method is. I recognize the authors' efforts to alleviate concerns about the difficulty of implementing this method, but I still feel that it is a much more complex method for a small gain. Ultimately, I am raising my score 3 -> 4 because I think the work is sound and represents an improvement over other works. I will leave it to the AC and SAC to judge the significance of this work (i.e., the utility of the proposed method) in the subfield of long-range sequence modeling. 
--- Reply to Comment 1.1.1: Title: Clarification on the gain Comment: Thank you very much for raising the scores. Re: > I still feel that it is a much more complex method for a small gain. We acknowledge (as we did in the limitations of the main paper) the limitation that RIR-EBT-GRC is slower compared to BBT-GRC, S4D, and MEGA but we would like to quantitatively highlight the gains: * RIR-EBT-GRC reduces the training time of EBT-GRC (in Table 1's 1500-2000 sequence length settings) from $10.5$ minutes to $0.3$ minutes. The gain is a $35$x speed up here. RIR-EBT-GRC also reduces the $10.97$ GB memory of EBT-GRC to $0.43$ GB in the same settings, which is a ~$25$ times memory reduction while maintaining comparable performance (as you acknowledged: "their method performs comparably to existing tree-based models"). The overall speed-up compared to OM ($255$x) and memory reduction compared to CRvNN/BT-GRC OS ($188$x) is even greater. * Moreover, MEGA gets $23.5$% accuracy in the 900-1000 OOD settings of ListOps. RIR-EBT-GRC gets $97.1$%. That's a substantial difference of $73.6$%. * MEGA gets ~$64$% in logical inference with 12 operators (after training on <=6 operators). RIR-EBT-GRC gets ~$93$%. That's nearly a $29$% difference. We acknowledge that in IID settings for long-range sequence modeling, this method is not ideal compared to MEGA/S4D and others. However, the motivation for the proposal was not "doing better than others in long-range settings", but **striking a balance** between scalability and length generalizability (including in short-range settings) while maintaining reasonable performance in long-range settings. There are also strategies that follow from the results of the paper to mitigate the speed limitations: * By reducing the chunk size $k$ we can get RIR-EBT-GRC closer to BBT-GRC (a special case of RIR-EBT-GRC when k=2) and start to approach the speed of S4D (in Table 1 in the main paper BBT-GRC is as fast or faster than S4D).
Using small $k$ is still competitive on LRA text sets (e.g., BBT-GRC performance in Table 1). Although smaller $k$ provides lower performance in OOD settings of ListOps, it's still much better than MEGA/S4D (see the chunk size $k=10$ and $k=20$ results in Ablations Table 1 in the Appendix) in comparable settings. * Also, because of the length generalization ability of RIR-EBT-GRC, we may not need to train on long-range data. We can train on much shorter data and allow it to length generalize during inference. In fact, RIR-EBT-GRC learns even better from shorter data. RIR-EBT-GRC, for example, can already get near $61$% on LRA ListOps (~$2000$ sequence lengths) by training on shorter ListOps data ($\leq 100$ lengths) and possibly could get even better if LRA did not have a different number of arguments (most models still struggle to generalize to a higher number of arguments. Pure length generalization performance of RIR-EBT-GRC is much better - for example, it retains $97.1$% accuracy when the only difference is sequence length, such as the 900-1000 sequence length split after training on data of $\leq 100$ lengths). ------------- Also, the point about complexity can be a bit subjective if we are not talking about any objective measure like Lines of Code or Time/Space complexity. It is not so clear why RIR-EBT-GRC should be counted as more complicated than OM/CRvNN or popularly adopted methods in the literature like chart-based RvNNs/CYK-based RvNNs [1] which utilize dynamic programming with nested loops requiring careful implementations for efficiency, or their newer extensions with beam search [2] and pruning strategies [3]. Also note that RIR-EBT-GRC, as of now, runs nearly as well without S4D.
Compared to RIR-EBT-GRC w/o S4D, S4-based models or successful linear RNN models [4,5] generally also require sophisticated initialization schemes or hybridization with Transformers with chunking [6], Flash Attention, and other modifiers like adaptive sparsification [7] for memory-efficient performant implementations. [1] Jointly learning sentence embeddings and syntax with unsupervised Tree-LSTMs - Maillard et al. Natural Language Engineering 2019 [2] Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders - Drozdov et al. EMNLP 2020 [3] R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling - Hu et al. ACL 2021 [4] Resurrecting Recurrent Neural Networks for Long Sequences - Orvieto et al. ArXiv 2023 [5] How to Train Your HiPPO: State Space Models with Generalized Orthogonal Basis Projections - Gu et al. ICLR 2023 [6] Mega: Moving Average Equipped Gated Attention - Ma et al. ICLR 2023 [7] Sparse Modular Activation for Efficient Sequence Modeling - Ren et al. ArXiv 2023
Summary: The paper proposes a novel framework - Recursion in Recursion (RIR) for tree recursive neural networks (Tree-RvNNs) so as to get around the issue of computational infeasibility of typical RvNN models (Beam tree RvNNs are $\mathcal{O}(n)$) while still being able to exhibit length generalization on simple arithmetic tasks like ListOps. The models based on RIR have recursive depth bounded by $k\log_k n$ and still demonstrate over $90\%$ length generalization performance. They explain their method thoroughly and also compare on ListOps and LRA with a fairly large suite of baselines, showing that their method performs comparably with the best. Strengths: 1. The paper is very well written. - The method and related work is explained thoroughly while providing sufficient intuition. - If anything, I would urge the authors to explain the results a little more thoroughly even if it were at the cost of pushing some of the other content to the appendix. 2. The experiments are thorough. - The authors compare with a sufficiently large suite of baselines. 3. Length generalization is an important and difficult problem that existing methods (such as Transformers) struggle with. Getting RNN length generalization performance without the associated cost blowup is very interesting for the field. 4. The authors adequately list the limitations of their work, which I appreciate. Weaknesses: 1. The authors discuss the choice of $k$ somewhat briefly explaining that small $k$ surely hurts the performance of the resulting model. - On a few examples they try to suggest that $k=\mathcal{O}(\log n)$, however, without seeing this varied over several values of $n$, it is hard to justify this claim. - Note that if $k=\mathcal{O}(n)$, the method loses its benefits. 2. The authors build on unpublished work which is cited as "Anonymous" and shared in the appendix. This is highly unusual and I have some concerns about this.
- While there is nothing inherently wrong with this, it puts the burden of evaluating some of the claims within the paper to yet another unverified paper. 3. Some of the notation in the paper can be improved. Such as the example used for explanation in Sections 2 and 3: "$7 + 8 \times 5 - 2$". I feel this would be better if explained symbolically like "$a_1 \cdot op_1\cdot a_2 \dots$" 4. Some claims are not justified very well (such as the Problem with the String-it solution on page 6) 5. I feel the results need more careful explanation. Table 1 seems like a very important result to justify the proposed method however, I don't understand why the table is split into two. (ListOps competitive and ?) Can you explain this? 6. In Table 4, the proposed method RIR-GRC performs quite poorly, however this is not called out or explained. 7. Overall, the results are not particularly impressive. And while MEGA is mentioned multiple times, it does not appear to be listed in the benchmarks. 8. Efficiency and accuracy are compared in separate tables. Since the main contribution of the paper is an efficient implementation of a model that shows length generalization, I would like to see a computation vs performance tradeoff curve. I feel this would go a long way in proving the superiority of the proposed method. 9. This is very minor but I would recommend the authors include a short explanation of ListOps in the main paper. I understand that the authors chose to push it to the appendix due to space limitations but since it is such an essential part of the paper, I would recommend adding a few lines about it. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. What is "infinite receptive field" mentioned on line 37? Is there a reference for this? 2. Have you checked the scaling of $k$ with $n$? Does setting $k=\mathcal{O}(\log n)$ for some reasonably small constant consistently work? 3. Can you explain the problem with the string-it solution more clearly?
I still don't follow it. Further, it seems like the beam-alignment solution applies it anyway after throwing some randomness into the mix. 4. The statement "set the chunk size according to one's computation need" seems quite vague. Can you explain? 5. What is the take away from Table 2? RIR-EBT performs well, but not as good as EBT-GRC. Is there a precise compute vs acc tradeoff here? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: Yes. I commend the authors for including an honest and thorough limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. We will take the formatting suggestions into account. **Re Weaknesses:** 1. $k$ is a constant. It is the chunk size hyperparameter and is not made to depend on sequence length or the input. Yes, if we set $k$ to be very large (e.g., as large as the maximum input sequence size) then the RIR framework loses its benefit but in the paper we show that a nice trade-off can be achieved by using smaller k. Also, please see the response to Reviewer 8Jr4 for more information on $k$. 2. The cited anonymous work is published in ICML 2023. We provided an anonymous copy because the latest version of the published work was not public at the moment of submission. 4. We have added more intuition about beam alignment in the general response (+ diagrams in the general response pdf). Please let us know if you have other questions. 5. One of our main aims is to balance scalability while preserving competence in length generalization in structure-sensitive tasks like ListOps/Logical Inference. So, our target for efficiency comparison is mainly other ListOps-competent models. We show the computational cost with other methods like S4D, and Binary Balanced Tree (which are not competent on listops length generalization or logical inference) for completeness but we put them in a different section because they are not the baselines we are most concerned with in terms of computational costs. 6. RIR-GRC is a simpler model serving as an ablation of our main RIR-EBT-GRC proposal. It also performs worse in Tables 2 and 3 for worse structural bias. Similarly, GRC (RecurrentGRC) tends to perform worse than EBT-GRC/BT-GRC outside the RIR framework as well (results with Recurrent GRC can be found in the appendix paper “Beam Tree Recursive Cells”). 
* (7.1) RIR-EBT-GRC is the only model that can be feasibly run on 2000+ sequence length data and can also get 90%+ accuracy in OOD length generalization settings in ListOps (Table 2) and Logical Inference (Table 3). In contrast, S4D/MEGA gets performance within 15-30% in ListOps OOD length generalization settings (900-1000 seq length) and ~60% in logical inference. That is at least a 30% performance gap. Moreover, RIR-EBT-GRC reduces training time and memory by approximately 30 times in the 1500-2000 sequence length settings (Table 1) compared to EBT-GRC. Yes, our accuracy results in LRA IID settings are not SOTA, but the SOTA performance comes at the cost of OOD generalization at the moment. * (7.2) For some comparison with MEGA please see the general response pdf. 8. Thank you for the suggestion. We added Pareto frontier graphs in the general response pdf. **Re Questions:** 1. We will rephrase the “infinite receptive field” mention. We used this to refer to the unbounded nature of attention (as opposed to local convolution where interaction in a specific layer is limited to tokens within a window of some limited size). * (2-3.) We do not set $k$ as dependent on $n$. We simply choose $k$ as big as we can while getting feasible/reasonable computational costs in LRA ListOps. We set $k=30$. Ablations with smaller $k$ are given in Table 1 Appendix. As expected, in Table 1 Appendix smaller k/chunk-size (k=20, k=10) leads to worse results but can still be better than S4D, Balanced Tree, etc. in length generalization (also EBT-GRC is effectively the result of k=infinite, and BBT-GRC is basically k=2 - and the pattern still holds that higher k yields better results at least in structure-sensitive contexts). 4. In Table 2 in the main paper RIR-EBT-GRC performs better.
In Appendix Table 2, it is a more general problem that RvNN-based models often turn out to be mostly similar in performance to plain RNNs and other alternatives when it comes to more natural language tasks (see, for example, the “Beam Tree Recursive Cells” paper in the Appendix). This can indicate that RvNNs are failing to exploit syntactic structures well in those tasks (they perhaps require pre-training with language modeling), or that these tasks have exploitable statistical shortcuts that allow simpler methods to get ahead. Still, we can see in Appendix Table 3 that in some stress tests on larger datasets like MNLI, RvNN-based models perform better than the simpler GRC/RIR-GRC. So there are specific contexts where the inductive biases of RvNNs tend to shine more. --- Rebuttal Comment 1.1: Title: Thank you for the clarifications Comment: I thank the authors for their detailed rebuttal and for answering all of my questions. I am raising the score to 6 and recommending acceptance of the paper.
Rebuttal 1: Rebuttal: We thank all our reviewers for their insightful reviews and comments. **On the complexity of RIR:** * **RIR Computational Complexity:** RIR-X is generally computationally more efficient than X both memory-wise and time-wise. For example, this can be observed in RIR-EBT-GRC vs EBT-GRC in Table 1. If the time complexity of X is $O(f(n))$ ($n$ being the sequence size), the time complexity of RIR-X is $O(\log_k(n) \cdot f(k))$. This effectively becomes $O(\log{n})$ if $k$ is a constant. * **RIR Implementation Complexity:** RIR is only slightly more complicated to implement. In particular, balanced tree recursion is a simple algorithm, and all we have to do for the 2-level recursion is import some existing code for the inner recursion within the balanced tree recursion. In the case of BT-RvNN, there is extra complexity for beam alignment, but that too is only a few lines of code. Adding pre-chunk S4D contextualization is optional and requires adding only 2-3 lines of code if we are importing from an existing implementation of the inner recursion model. * Overall, implementing RIR-EBT-GRC (without S4D) from scratch can still be as simple as (or simpler than) prior approaches like Ordered Memory or CRvNN. Also, the code is shared in the Supplementary Materials (and will be open-sourced on GitHub with documentation) to allay implementation difficulties. **Updates in the Rebuttal PDF (attached below):** 1. We added MEGA results on ListOps (Table 1) and Logical Inference (Table 2). As can be seen from these results, MEGA performs better than S4D but still falls behind RIR-EBT-GRC by a large margin in structure-sensitive length generalization contexts. 2. We added scatter plots + Pareto frontiers in Fig. 2. In all three subfigures (each focusing on a different trade-off) in Fig. 2, RIR-EBT-GRC (RRE in the figures) remains a Pareto-efficient solution that maintains a highly competitive trade-off. 
S4D, BBT-GRC, and RIR-GRC can win on time cost and memory cost compared to RIR-EBT-GRC, but with a sharp degradation of OOD performance in structure-sensitive tasks (logical inference, ListOps). While OM, CRvNN, BT-GRC, BT-GRC OS, and EBT-GRC can outperform RIR-EBT-GRC (to an extent) on OOD length generalization accuracy in ListOps and logical inference, they come with a much higher time/memory cost. We also added more clarification diagrams for beam alignment in Figures 1a, 1b. That is, in Fig. 1a we show the visualization of the String-it solution, and in Fig. 1b we show the visualization of the Beam Alignment solution. **Additional Beam Alignment Intuitions:** The RIR framework allows a degree of parallel processing - multiple non-overlapping chunks can be simultaneously processed by the inner recursion cell. However, when the inner recursion cell is EBT-GRC, each chunk will return a different list of $b$ beams (sorted in descending order of their scores). If there are $m$ chunks, we will have $m$ lists, where each list has $b$ beams. Each beam has a scalar beam score and a beam sequence $\in \mathbb{R}^{1 \times d}$, where $1$ is the sequence size and $d$ is the hidden state size. But for the next iteration, we will need a single list of $b$ beams, where each beam has a scalar beam score and a beam sequence $\in \mathbb{R}^{m \times d}$, where $m$ is the sequence size and $d$ is the hidden state size. (Note that $m$ is initially the number of chunks: since the output of each chunk has sequence size 1, concatenating the outputs of $m$ chunks leads to sequence size $m$.) The question then is how to combine the $m$ different lists of $b$ beams into a single list of $b$ beams. In the visualizations, we consider a simple scenario where $m=2, b=3$. The String-it solution (Fig. 
1a) is to simply concatenate the beam sequences (and add the corresponding scores) from all lists that occur in the same position. For example, in Fig. 1a we concatenate the first beam (Beam Sequence 1) from the first list (chunk 1 results) with the first beam (Beam Sequence 4) from the second list (chunk 2 results). Similarly, we concatenate the second beam from the first list with the second beam from the second list, and so on. Ideally, however, we want to create high-scoring beam combinations with higher probability, and the String-it solution does not account for that. For example, "Beam Sequence 1 + Beam Sequence 5" would be the second highest-scoring combination, but it can never be selected by String-it, since Beam Sequence 1 is in the first position of the first list and Beam Sequence 5 is in the second position of the second list - that is, they are not in the same position. Thus, we propose Beam Alignment, where we take a stochastic approach that biases toward preserving high-scoring combinations. In this approach, instead of immediately applying the String-it solution, we make the beams in each list stochastically 'compete' for their positions. Essentially, we want high-scoring beams to be more likely to occupy more positions in the beam lists from each chunk. This is done by simply sampling a beam per list position (for each of the $b$ positions) according to the normalized beam score distribution. The result is that the beam lists will now be filled with mostly the higher-scoring beams (see the results after 'Sample' in Figure 1b). Next, if we simply apply the String-it solution at this point, it automatically leads to the high-scoring combinations we wanted, because of the prior sampling. Combinations like "Beam Sequence 1 + Beam Sequence 5" can now arise because the sampling step allows the same beam to occupy multiple positions. 
Overall, as we can see in the simulation of Beam Alignment (Fig. 1b), the resulting combined beams tend to have higher scores than with a direct application of String-it (Fig. 1a). Pdf: /pdf/dbc4e928532cd0b49ace1380bf4c4f2de46670f8.pdf
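The two combination strategies described above can be sketched in a few lines of Python. This is our own illustrative toy, not the paper's code: beams are reduced to a label plus a score, whereas in the model each beam also carries a hidden-state sequence.

```python
import random

def string_it(beam_lists):
    """Concatenate the beams occupying the same position across all
    chunk lists, adding their scores."""
    b = len(beam_lists[0])
    combined = []
    for pos in range(b):
        labels = [chunk[pos][0] for chunk in beam_lists]
        score = sum(chunk[pos][1] for chunk in beam_lists)
        combined.append((" + ".join(labels), score))
    return combined

def beam_align(beam_lists, rng=random):
    """Per chunk, resample each of the b positions in proportion to the
    normalized beam scores (so high-scoring beams tend to occupy more
    positions), then apply String-it on the resampled lists."""
    resampled = []
    for chunk in beam_lists:
        total = sum(score for _, score in chunk)
        weights = [score / total for _, score in chunk]
        resampled.append(rng.choices(chunk, weights=weights, k=len(chunk)))
    return string_it(resampled)

# m=2 chunks, b=3 beams each, scores in descending order within a list.
chunk1 = [("seq1", 0.7), ("seq2", 0.2), ("seq3", 0.1)]
chunk2 = [("seq4", 0.6), ("seq5", 0.3), ("seq6", 0.1)]
print(string_it([chunk1, chunk2]))   # only same-position pairs, e.g. seq1 + seq4
print(beam_align([chunk1, chunk2]))  # cross-position pairs like seq1 + seq5 possible
```

Because the resampling is done with replacement, a high-scoring beam can occupy several positions of its chunk's list, which is exactly what makes cross-position combinations such as "seq1 + seq5" reachable under the subsequent String-it step.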
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper proposes Recursion in Recursion (RIR) to address the shortcomings that computationally efficient models like Binary Balanced Tree Recursive neural networks have in solving arithmetic tasks like ListOps. RIR also seeks to alleviate the computational burden brought forth by structure-aware ListOps-competent models such as the Beam Tree RvNN. The approach, relying on 2 levels of computation, breaks down a sequence into chunks (of length k) on which an inner loop uses another network (e.g. Beam Tree RvNN) to compute local representations. The representations from the inner loop are subsequently taken in by an outer loop. The authors suggest that this approach can help scale structure-aware inference from O(n) to O(k*log_k(n)), where n is the input length. The authors conducted experiments to show that their RIR-based models use much less time and memory for the ListOps task than baseline approaches. The results on ListOps, logical inference and long range arena (LRA) show that even with this efficiency, RIR’s performance is competitive with current baselines. Strengths: The paper proposes a scalable solution that can be structure-aware and effective at tasks like ListOps. Experiments have shown that this approach (RIR) is promising, with performance competitive with less-efficient baselines. Overall, the paper is well-written and the contribution is clear. Weaknesses: Modifications are required to integrate existing models into the RIR framework, e.g. the EBT-RvNN, which can limit the widespread use of this approach. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Typo: line 102: gaurantee -> guarantee Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes, addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. Please note the general response for clarification on the complexity of the approach. We will fix the typo. --- Rebuttal 2: Title: Acknowledgement of rebuttal Comment: I have read the authors' rebuttal and decided to keep my original score.
Summary: In this work, the authors try to combine the best of two worlds by proposing Recursion in Recursion, where the outer recursion is a k-ary balanced tree and the inner recursion implements its cell function using recursive neural networks for sequential inputs. The proposed framework is tested on various logical inference and NLP tasks to show the model can reach respectable performance and scale to longer sequences. Strengths: 1. Good paper on scaling. 2. Paper is well written. 3. Good set of experiments. Weaknesses: 1. Novelty is limited. The paper is more focused on scaling, and several such works are already published in the literature. 2. Why the inner cell uses BT-RvNNs, given they are similar to CYK-based RvNNs, and what advantage this offers should be discussed in depth. Since selecting top-k is still a heuristic and based on beam size, results will vary. 3. Huge standard deviation, indicating potential instability in the model. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: I would like to see a computational complexity analysis, such as the number of FLOPs and the average time to convergence for the proposed model. It is not clear what advantage the proposed framework offers and what the loss in performance is. Given that the current paper is mainly focused on improving scaling capability, more focus should be given to these problems. Effect of computation budget: can the authors also report values when the computational budget is higher? It is not clear how a higher budget would lead to better gains, and what the bound for it (k) is. How did you choose beam sizes 5 and 7? Beam search should be conducted on the dev set and not on the test set, so it is challenging to understand how the authors came to this conclusion; can the authors provide more details on how they obtained this beam search number? The variance for RIR-based models is big, which suggests they are highly unstable; can the authors comment on that? 
In the appendix the authors have written “At any iteration t, we start only with some sequence” - what is “some sequence”? Such terms are used throughout the paper, and it is difficult to know what the bound is. Please provide a bound or some numbers to back these claims. Why is there a need to use a 2-layer MLP for fix-1 (line 133 in the appendix)? Why not a simple linear transformation? I am confused how mathematically it serves the same function and how it approximates the prior function. Does beam alignment find good structures that support the required compositionality? More analysis is needed on the distribution and importance of beam alignment. Missing relevant work [1] and [2]: 1. Mali, A., Ororbia, A.G., Kifer, D. and Giles, C.L., 2021, May. Recognizing and verifying mathematical equations using multiplicative differential neural units. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 6, pp. 5006-5015). 2. Arabshahi, F., Lu, Z., Singh, S. and Anandkumar, A., 2019. Memory augmented recursive neural networks. arXiv. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. Language can be improved; terms like “some sequence”, “some score”, “some function” should be avoided when explaining mathematical concepts. 2. The variance of the proposed model is big, leading to instability, hence what benefits it offers is questionable. What about the generalization effect - how does the model work on longer sentences? What is the limit? These questions are still unclear. 3. More analysis would benefit the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. 
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. **Re Weaknesses:** 1. Our goal is not pure scalability. Our goals are (i) to extend RvNNs in a novel manner that achieves a balance between scalability and length-generalization capacity in structure-sensitive contexts and (ii) to show that it has competitive results (if not SOTA) in LRA compared to SSM/linear RNN models, and generally much better results than non-hybrid Transformers. We are unaware of other works attempting to balance scalability with length generalizability in structure-sensitive contexts. Besides, penalizing the lack of novelty of the research goal would lead to penalizing any novel method trying to improve on any pre-established research goal. 2. CYK models are much more expensive than BT-RvNN and show lower performance in the non-RIR setup. This is shown in the appendix paper “Beam Tree Recursive Cells” (please see Tables 1, 4, 5, 8 of that paper). Moreover, practical implementations of CYK-based RvNNs involve soft-attention-based marginalization of paths at every inner loop [1] or similar top-k-based selection [2] - bringing back heuristics. 3. A huge std (±9) is only shown by a specific RIR variant, “RIR-EBT-GRC - S4D”, on a specific task - ListOpsMix (if you have some other result in mind where there is a huge std, please point it out). It is not the general norm. Besides, even the “worst” run of that model on that task is $61.45$, which is still better than all models in Table 4 except RIR-EBT-GRC. **Re Questions:** 1. See the general response for the complexity analysis. Accuracy comparisons between the models are charted for various datasets - ListOps (Table 2), Logical Inference (Table 3), and many other NLP tasks in the Appendix. Empirical time/memory comparisons between the models are charted in Table 1. Combined, they show the performance-cost tradeoff. We also add Pareto frontiers in the General Response PDF. We can add FLOPs/convergence in the final version. 2. 
As far as we are aware, we do not claim that a higher computational budget by itself leads to better performance. But a higher chunk size ($k$) can lead to better gains, and a higher computational budget can allow us to use a bigger $k$. Theoretically, this is because a higher $k$ leads to less of an artificial restriction from the outer balanced-tree structure (for example, in the case of $k=$infinite, we recover EBT-GRC, which still performs better than RIR-EBT-GRC in structure-sensitive tasks at the cost of memory and time). Empirically, we show this in ablation Table 1 (where reducing chunk size $k$ from 30 to 20/10/2 leads to worse performance in general (k=2 is equivalent to BBT-GRC)). 3. A higher beam size can expand the search space and enhance error recovery (prevent the chance of filling the beams with all bad structures). As such, a higher beam size was chosen a priori because RIR makes it more feasible to use one. 4. We found that “RIR-EBT-GRC - S4D” fails to utilize the low-sequence-length data well (but is still better than S4D and others) in ListOpsMix in some runs but utilizes it much better in others. Otherwise, we do not observe a high standard deviation for RIR models in general, or even for “RIR-EBT-GRC - S4D” in general (see, for example, the other LRA experiments besides ListOpsMix). Also, please refer to point 3 in Re Weaknesses. 5. “Some sequence” refers to a sequence $\in \mathbb{R}^{n \times d}$ ($n$ being the sequence size). We will add more clarifying notation. 6. A linear transformation could be used, but we haven't tried it. We used a non-linear function because linear functions are less expressive and because the original scoring also involved non-linearity. We do not have to “approximate” the original function in any strict sense - we just need a reasonable scoring function that gets the same input format and can be trained with gradient descent. 
The point is that the original function is computationally more expensive without any clear a priori motivation for using it - and at the same time, empirically (i.e., a posteriori), it does not show much better accuracy (as shown in various EBT-GRC vs BT-GRC experiments in the main paper and the appendix). 7. Strictly speaking, “good structures” in a human-interpretable sense are abandoned in the RIR framework (unless one uses non-RIR inference, but then beam alignment will not be relevant during inference) because of the enforced balanced-tree outer recursion. Beam alignment simply allows the alignment of beams from different chunks to be sensitive to the beam scores. The usefulness of beam alignment is empirically shown in Ablation Table 1 (in the appendix). Both random alignment (+Random Align) and removal of beam alignment (-Beam Align) result in poorer performance. More intuition (with diagrams in the general response pdf) on beam alignment is provided in the general response. We will add the citations you mentioned. Thank you. [1] Jointly Learning Sentence Embeddings and Syntax with Unsupervised Tree-LSTMs - Maillard et al., Natural Language Engineering 2019 [2] Unsupervised Parsing with S-DIORA: Single Tree Encoding for Deep Inside-Outside Recursive Autoencoders - Drozdov et al. EMNLP 2020
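To make the complexity argument referenced in point 1 concrete: the general response states a sequential time of $O(\log_k(n) \cdot f(k))$ for RIR-X with chunk size $k$ and inner-recursion cost $f$. A toy sketch of that cost accounting (the function name is ours, purely illustrative):

```python
import math

def rir_sequential_cost(n, k, f):
    """Sequential cost of the outer balanced recursion: each level
    collapses every chunk of <= k items down to 1 item; chunks within
    a level run in parallel, so one level costs f(k) sequentially,
    giving roughly log_k(n) levels in total."""
    levels = 0
    while n > 1:
        n = math.ceil(n / k)  # each chunk of <= k items collapses to 1
        levels += 1
    return levels * f(k)      # ~ log_k(n) * f(k)

# With a linear-time inner recursion f(k) = k and chunk size k = 30,
# a length-2000 input needs only 3 levels: 2000 -> 67 -> 3 -> 1.
print(rir_sequential_cost(2000, 30, lambda k: k))  # -> 90
```

With constant $k$, the sequential cost therefore grows like $\log n$ rather than $n$, which is the source of the order-of-magnitude speedups over non-RIR RvNNs reported in Table 1.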
Summary: This paper addresses the long-sequence modeling problem. A recursion-in-recursion strategy is proposed to balance the advantages of BB-Tree RvNNs and RvNN models. The idea is straightforward but achieves competitive performance on LRA tasks. Strengths: 1. The paper is easy to follow. Weaknesses: 1. There are large numbers of related works missing. For long sequence modeling, there are at least 4 types of methods, including Linear attention, SSM, Linear RNN, and LongConv. Since these methods are implemented for the same goal, they should be included in related work as well as the experiment section. 2. The experiments are inadequate. LRA is a toy benchmark for assessing long-sequence modeling. It is insufficient to use it as the sole indicator of effectiveness. In fact, this work focuses solely on a sub-task in LRA, making the experiments even weaker. I would encourage the authors to verify the actual long sequence modeling capabilities in real-world scenarios such as language modeling, image classification, etc. 3. Linear RNN has achieved SOTA performance in many benchmarks, including LRA, language modeling, and image classification, as demonstrated in recent papers. Why should we consider non-linear RNNs, which are slow to train and perform no better than linear RNNs? 4. The concept is straightforward. By forming a RIR structure, it combines the advantages of the two methods. As a result, the processing time is lengthened. It would be more appealing if the processing time could be shortened. Furthermore, the competitive methods are ineffective. I don't see any standard benchmark LRA results, so it's difficult for me to justify the effectiveness of the proposed method. 5. The maximum sequence length used in this paper for an efficient long sequence modeling method is 2K, which is a standard sequence length for transformer LLMs. It would be preferable to see the proposed method in long sequence tests, such as 32K and higher. 
Furthermore, it takes much longer to process long sequences than a transformer, making the method less appealing in real-world scenarios. Technical Quality: 3 good Clarity: 3 good Questions for Authors: See the weakness part. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. * (1.1) Many of the models you mention are already cited. SSM models are cited in [15,16,17,40], LongConv models are cited in [39,12], and Linear RNN is cited in [34,30]. Top efficient Transformer-based models are cited in [49,9] + efficient Transformers are indirectly referred to through [45]. * (1.2) We will add more citations and discussion in related work. We will add more models in LRA. * (1.3) Our main goal is balancing scalability with the ability for robust length generalization in structure-sensitive tasks. Correct us if we are wrong, but to our knowledge, this is not tackled by any of the SSM, Linear RNN, LongConv, or Efficient Transformer models. We compare S4D as a representative of Linear RNN/SSM models in length generalization settings, and we also added MEGA (see the general response pdf). Prior results [p5,p6] have already shown that full Transformer models (which efficient transformers try to approximate) do not work for length generalization in structure-sensitive contexts. * (2.1) The Text and Retrieval tasks in LRA are both realistic tasks - one is IMDB sentiment classification and the other is AAN document retrieval. Neither is obviously any more toyish than image classification tasks like Sequential CIFAR. Besides, “toy tasks” can be important tasks when most existing models struggle with them or fail to learn them in a generalizable manner (e.g., logical inference or ListOps) without being inefficient (like Ordered Memory, CRvNN, or the original Beam Tree Cell). Besides, in the Appendix we also show experiments on Natural Language Inference, Semantic Similarity, and other sentiment classification datasets. Currently, RIR-EBT-GRC is a sentence encoder model. Scaling it for causal language modeling would require more modifications, which would be an interesting direction to explore in future work. * (2.2) It is worth noting that modern RvNNs (CRvNN, BT-RvNN) are relatively less mature. 
Just because they have some limitations now doesn't mean we will not find workarounds later. Please note that even the predecessors of modern SSM-based models/S5 [p1,p2,p3] showed limited applicability or limited empirical demonstrations before being picked up and enhanced in later work. Our modeling approach is less mature and will require more development. What we show is that it has potential in length generalization that is not shown by others (see point 3). * (3.) “Why should we consider non-linear RNNs, which are slow to train and perform no better than linear RNNs?” - As we demonstrate: our non-linear RvNNs such as RIR-EBT-GRC perform better than Linear RNNs (S4D) in OOD settings or in learning from shorter-length data in Table 2 (ListOps)/Table 4 (ListOpsMix) and Table 3 (Logical Inference). We also show that RIR-EBT-GRC performs better than MEGA in similar settings (please see the general response pdf). * (4.1) Regarding RIR complexity, see the general response. As expected from the theoretical time complexity, RIR-X speeds up any recursive model X by an order of magnitude (please see Table 1 in the main paper) - and thus reduces processing time. * (4.2) You are right that other ListOps-competent RvNNs are inefficient for LRA. But that is precisely what justifies the efficiency of RIR-X models in comparison to prior RvNNs. Besides that, we have also shown comparisons between RIR-X and X models in Table 1, Table 2, and other natural language tasks in the appendix - all of which show where our main RIR-based models stand in contrast to prior RvNNs on tasks where we can compare them. In the general response pdf, we also added Pareto frontiers showing the effectiveness of RIR-X models. * (5.) The maximum length for IMDB (LRA Text) is 4000 (not 2K). Our model will not be as efficient with 32K+, but please see point 2.2 above on this issue. Moreover, efficiency seems to come at other costs like OOD robustness (point 3). 
One future extension of the RIR framework could be to compress the sequence length beforehand, like Funnel Transformer [p4], before running the model to make it handle larger sequence lengths. [p1] HiPPO: Recurrent Memory with Optimal Polynomial Projections - Gu et al. Neurips 2019 [p2] Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks - Voelker et al. Neurips 2019 [p3] Parallelizing Linear Recurrent Neural Nets Over Sequence Length - Martin et al. ICLR 2018 [p4] Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing - Dai et al. Neurips 2020 [p5] The Importance of Being Recurrent for Modeling Hierarchical Structure - Tran et al. EMNLP 2018 [p6] Ordered Memory - Shen et al. Neurips 2019 --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer 619V Comment: Thanks to the author for the reply, my concerns have been addressed and I will raise the score to 5
Summary: This paper introduces the recursion-in-recursion (RIR) framework for balancing the tradeoffs between 1. sequential processing, which offers better inductive bias and stronger solutions for many types of symbolic processing and logical inference tasks, but can be very expensive 2. balanced tree recursion, which shortens the length of the computational graph and can be fast and scalable, but struggles with the aforementioned tasks The RIR framework is thoroughly evaluated on a simple set of arithmetic tasks (ListOps), where it shows the ability to solve and generalize on this task, while being much faster computationally than previous methods with this ability. Strengths: - The paper provides a very well-explained exposition of a lesser-known model approach. - The proposed method/framework is novel and makes intuitive sense. - The method performs very well on the symbolic processing tasks it was designed for. It almost preserves the performance of much more expensive methods (e.g. full beam search) while being an order of magnitude faster. - Results on other LRA tasks involving language are also shown, indicating that the method is not necessarily specialized to synthetic symbolic processing tasks but could be a viable more general approach Weaknesses: A weakness of the method itself is that the RIR framework adds complexity because of the two separate levels of hierarchy which can be freely chosen. Additionally, the $k$ hyperparameter seems very important and there doesn't seem to be a first-principles way to choose it well. It seems like even if there is a model that can perfectly solve a given task, once RIR is introduced there are no guarantees about whether the task can be perfectly solved. Thus it becomes a heuristic tradeoff between efficiency and strength, with many hyperparameters that must be managed. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: I think it would help the story to have some attention baselines. 
Although many variants of attention/transformers have been tried in the original LRA works, showing some ablations within the RIR framework (e.g. as either the inner or outer aggregator) seems interesting. This is interesting particularly because attention is often viewed as a catch-all solution to discrete and symbolic data. However I am unlikely to increase my score even if these are shown, and this is just a suggestion that could strengthen the paper and make the overall line of work more convincing. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: Main limitations are properly addressed. It might be worth being more explicit about the fact that the proposed family of methods is not meant to address other types of sequential data such as perceptual signals (e.g. images/audio) and likely doesn't work in those settings. (Otherwise people may also wonder why the particular subset of LRA was chosen.) Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review. * Please check the general response for comments on the complexity of RIR. * Regarding the selection of the hyperparameter $k$: a higher $k$ (chunk size) should generally be better in structure-sensitive tasks. For example, $k=$infinite (or $k$ equal to the maximum input sequence length) would reduce RIR-EBT-GRC to EBT-GRC (which is still better in ListOps). A lower $k$ leads to lower accuracy but higher computational performance. So one first-principles way to choose $k$ is to just settle on the maximum value whose computational cost one is comfortable with. In the Appendix Table 1 ablation, we show that reducing $k$ leads to lower performance on ListOps, as expected (please see chunk size = 20 or 10). * RIR + Transformer is an interesting area to explore more extensively that we keep for future work. There is contemporaneous work that independently came up with a similar idea, which it uses to achieve OOD generalization on parity tasks [1] - this shows the general potential of the RIR framework. We briefly tried RIR + Transformer once on ListOps, but it wasn't as promising. * We were mainly focusing on language processing tasks where hierarchical bias is relevant (it can be relevant in image domains as well, but it is trickier in LRA since the images are flattened). We believe some more work is needed for RvNNs to work well there. Nevertheless, we ran them on CIFAR from LRA, and RvNN-based models can get 60%+, which is worse than the SSM-based/LongConv models but much better than any non-hybrid Transformer models. We will add some more analyses and discussion regarding that. [1] Transformer Working Memory Enables Regular Language Reasoning And Natural Language Length Extrapolation - Chi et al. ArXiv 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. 
I am maintaining my score on the grounds that although perhaps not immediately practical, this work explores an important direction toward structure-sensitive sequence modeling problems that is not sufficiently addressed by existing methods.
Arbitrarily Scalable Environment Generators via Neural Cellular Automata
Accept (poster)
Summary: The authors' goal is to optimize large floorplans for warehouses or manufacturing facilities, where robots need to drive during completion of tasks, while avoiding congestion of the system. Their innovation is to use the framework of neural cellular automata (NCA) to generate optimized patterns from an initial pattern. Optimized here means maximizing the number of robots in the system. The training of the NCA's parameters is done using an unsupervised objective, the throughput of the system for the robot drivers, while simultaneously optimizing for the diversity of the solutions using quality diversity algorithms. It is shown that the NCA produce regular floor plans for environments larger than those seen in their training and also perform well under the desired metrics, while being much more efficient to generate than what existing optimizers can achieve. Strengths: 1. Interesting unsupervised objective for training environments of NCA with a combination of different optimization techniques. 2. The authors found a way of optimizing patterns from small environments that seems to scale/generalize to larger environments of a similar structure. 3. I find the evaluation strategy with the RL agent that scales with the NCA interesting and creative. I have not seen jointly scaling agents and environments before and would find this a generally interesting point to investigate independent of this work (although I might not be aware of existing literature on this topic). Weaknesses: 1. An ablation or discussion is missing towards what can break the scalability and generalization ability in new settings: What happens for non-rectangular environments? What if the positioning of the stations is not as in the training set? More diverse test samples would strengthen the argument that NCAs truly scale well. Also, at which size does the NCA start to fail? 2. I think the setup explored here has big potential for introspecting the solutions. 
Can one derive the strategy the NCA learned from its produced examples and directly apply it? I.e., find the scalable simple rules that seem to underlie the regular patterns generated? 3. Since quality diversity is a big part of the main draft, it would be interesting to see several examples of generated environments in the main draft. 4. To me, the design choices of experimenting with NCA and quality diversity objectives for warehouse plan optimization are not clearly justified (see question section). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. Why should a bigger warehouse plan be more difficult to optimize overall? I understand that the complexity of the search space grows, but why is simply tiling good solutions for small environments not mentioned as a (possibly) good baseline? After all, this is how the NCA environments seem to behave (but I might also be missing an obvious problem here). 2. Isn’t a cool thing about your approach that it possibly also scales to arbitrarily shaped environments? I think it would be interesting to see what happens when the constraints of the warehouse are more intricate, e.g. integrating an old warehouse plan with a new one. 3. How ‘broken’ are the NCA’s plans typically? Is there a lot of extra work that is done by the extra solver? 4. Why is it necessary to use quality diversity as an algorithm for warehouse plan optimization? I can imagine how a diversity of environments is interesting in video games, but why should the diversity of the tiling types (which is the entropy measure here) be necessary? I would rather expect a low entropy to be useful, as repeated patterns in the warehouse probably make manufacturing the hall and plans cheaper and less complex. 5. What was the process of manually designing the human-created solutions? 6. Does congestion occur only during testing on the 5,000 time steps? If not, how do you handle it during training? 7.
Do you have an idea of how the environment size scales with respect to the maximum number of robots/tasks achievable without congestion? Minor: 1. L.14 regularized -> regular (?) … if you truly mean regularized, please explain what is being regularized here. 2. L.41 sortation -> sorting (?) 3. L.49 it would be helpful to clarify that your objective is unsupervised and simply comes from the different losses you define. 4. Overall, an (at least informal) definition of the throughput objective is missing in the main text. 5. L.168 what kind of objects are $\Theta$ and $\mathbf{\theta}$? 6. I think in L.178 it should be “Due to the locality of the NCA operation…” rather than “By using a CNN …” Whatever mechanism you use to implement it, the NCA will behave locally, as this is its inherent property. The CNN is just an efficient way of implementing it in ML frameworks. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Overall, I think the ideas presented in the paper are interesting and a good work of engineering. However, to make the paper more relevant and useful to a broad audience, it is missing a more abstract perspective on the interesting techniques it proposes. For this, it should be possible for other researchers to better understand the working principle so that they can estimate how well the techniques could work in different settings. I think this requires more introspection on the results than in the current draft. Moreover, to make the claim that the NCAs are ‘arbitrarily scalable’, as in the title, the evaluation is lacking different levels of floor plan scales, NCA time scales, and an analysis of the regular patterns obtained from the NCA.
For the application to the warehouses, I do not find it very convincing that quality diversity plays as big a role as the paper emphasizes. These factors are what lead me to my score. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Why should a bigger warehouse plan be more difficult to optimize overall? I understand that the complexity of the search space grows, but why is simply tiling good solutions for small environments not mentioned as a (possibly) good baseline? After all, this is how the NCA environments seem to behave (but I might also be missing an obvious problem here). Please see **More Competitive Baselines** in the general rebuttal. 2. Isn’t a cool thing about your approach that it possibly also scales to arbitrarily shaped environments? I think it would be interesting to see what happens when the constraints of the warehouse are more intricate, e.g. integrating an old warehouse plan with a new one. We thank the reviewer for pointing out arbitrarily shaped environments. One naive way of considering non-rectangular environments is encoding the shape of the environments as additional constraints in the MILP solver. We admit in Section 7 that generating environments with irregular shapes is one of the future directions of our work. 3. How ‘broken’ are the NCA’s plans typically? Is there a lot of extra work that is done by the extra solver? Please see **Role of MILP** in the general rebuttal. 4. Why is it necessary to use quality diversity as an algorithm for warehouse plan optimization? I can imagine how a diversity of environments is interesting in video games, but why should the diversity of the tiling types (which is the entropy measure here) be necessary? I would rather expect a low entropy to be useful, as repeated patterns in the warehouse probably make manufacturing the hall and plans cheaper and less complex. We argue that simultaneously optimizing an objective function and diversifying measure functions with a QD algorithm results in better solutions than merely optimizing an objective function. In Appendix D.3, we compare a popular derivative-free single-objective optimization algorithm, CMA-ES, with our chosen QD optimizer, CMA-MAE.
Figure 16 and Table 4 show the scalability of the generated environments and numerical results. While CMA-ES generally matches CMA-MAE in throughput and scalability for environments of size S_train, it significantly lags in scalability for size S_eval. In fact, in the warehouse (even) and the manufacture domains, the MILP solver fails to find valid solutions at size S_eval with the given computational budget specified in Appendix D.4. 5. What was the process of manually designing the human-created solutions? For the warehouse domains, we take the commonly used human-designed environments from previous works [28, 29, 47], in which 1 x 10 blocks of shelves are repeatedly placed. We scale this pattern to create human-designed environments of size S_eval. For the manufacture domain, we create the human-designed environments of size S_train by taking design insights from the optimal NCA-generated warehouse environment (shown in Figure 7f) and maintaining a similar number of workstations as the DSAGE-optimized environment. For size S_eval, we take design insights from the NCA-generated manufacture environment of the same size (shown in Figure 10b) to create the human-designed environment. We describe the detailed process of creating human-designed environments in Appendix C.4. 6. Does congestion occur only during testing on the 5,000 time steps? If not, how do you handle it during training? During training, we run 5 simulations with 1000 timesteps for each environment. Following previous work (cited in line 482), we stop the simulation early in case of congestion and return the current throughput as the result. This is because (1) we penalize the environments that run into congestion, and (2) we empirically observe that the number of finished tasks per timestep will quickly drop after congestion occurs, resulting in low throughput. 7. Do you have an idea of how the environment size scales with respect to the maximum number of robots/tasks achievable without congestion?
Please see **Scalability in More Environment Sizes** in the general rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for your response, of which I took note. I am more convinced now that quality diversity is a viable strategy to find optimal environments. However, the clarifications on the scalability have not convinced me that the environments generated by the NCA are "arbitrarily scalable", as stated in the title. The bottleneck of computational time is the MILP solver, which takes about 8 hours for a tile floor of ~100x100, an order of magnitude longer than the time to find the initialization from the NCA. --- Reply to Comment 1.1.1: Comment: Thank you for reading through our rebuttal. For our response to the claim regarding “arbitrarily scalable”, please see our response under official comment in the general rebuttal section. Thank you!
Summary: In this work, the authors train neural cellular automata with a quality diversity evolutionary algorithm to grow the 2D environment of a multi-agent automated warehouse, a manufacturing domain, and a single-agent maze domain. They show that the approach is able to generate environments that can scale to different sizes, while keeping important regularities. Strengths: - NCAs can be trained to generate small environments and then scaled to larger environments without further training, thereby saving computational costs - Compared to previous methods, the Mixed Integer Linear Programming (MILP) has to only be run once to fix the larger environments - The combination of NCA and MILP is an interesting idea that could be extended to many other domains that NCAs have already been successfully applied to Weaknesses: - The approach is only shown in similar 2D domains - The paper could include more environment generation baselines, beyond DSAGE. For example, a comparison to Compositional Pattern Producing Networks (which could also be scaled to larger environments without further training) could elucidate the importance of growth over time - It would also be useful to include a baseline in which a larger map is created by just concatenating smaller environments and running MILP. Would those perform equally well? Technical Quality: 3 good Clarity: 3 good Questions for Authors: - "we define a tile pattern as one possible arrangement of a 2 × 2 grid in the environment." -> What about the distribution of global patterns? I assume they would not be captured this way? - One important question is, how easily could this approach be applied to other domains? Does it work particularly well for the domains in this paper because of their types of patterns? Which other domains could it be applied to in the future, and what type of patterns would they need to benefit from, in order to allow the scaling to happen?
What about environments that need global patterns more than local ones (the latter being what the NCA is particularly good at)? - How important is the MILP in the process? At some point in the paper, it says that the process can take 8 hours, which suggests it can also become a bottleneck. It would be good to include the number of errors that MILP fixed (including the number of environments that were initially not functional) in the result tables for all methods. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes for the most part. It would be good to talk a bit more about the potential limitations of NCAs in capturing more global patterns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. "We define a tile pattern as one possible arrangement of a 2 × 2 grid in the environment." -> What about the distribution of global patterns? I assume they would not be captured this way? We clarify that the environment entropy measure introduced in Section 4 only measures local patterns. We first define a tile pattern as one possible arrangement of a 2 by 2 grid, then count the occurrences of all possible tile patterns to form a tile distribution. The environment entropy is calculated as the entropy of the tile distribution. High environment entropy represents low regularity in the patterns, while low environment entropy represents high regularity. We are more interested in local patterns rather than global patterns because warehouse and manufacture system designers care more about resolving congestion in the environment with scalable local patterns. If our method is to be applied to other domains in which global patterns are important, we can develop diversity measures that consider global patterns. 2. One important question is, how easily could this approach be applied to other domains? Does it work particularly well for the domains in this paper because of their types of patterns? Which other domains could it be applied to in the future, and what type of patterns would they need to benefit from, in order to allow the scaling to happen? What about environments that need global patterns more than local ones (which the NCA is particularly good at)? Please see **Generalizing to Other Domains and Real-World Scenarios** in the general rebuttal. 3. How important is the MILP in the process? At some point in the paper, it says that the process can take 8 hours, which suggests it can also become a bottleneck. It would be good to include the number of errors that MILP fixed (including the number of environments that were initially not functional) in the result tables for all methods. Please see **Role of MILP** in the general rebuttal. 4.
For the question regarding more competitive baseline methods such as Compositional Pattern Producing Networks, please see **More Competitive Baselines** in the general rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying the questions I had. After running the tile-based baseline and addressing my other main concerns, I'm happy to increase my score. However, I don't agree with the comment "Yet, CPPN, mainly adept at generalizing global patterns, isn't well-suited for our use cases, which focus on local patterns to alleviate robot congestion.". CPPNs are not restricted to just creating global patterns and can in fact generate intricate local patterns as well. For example, in Figure 12 in [1], one can clearly observe repeating local patterns with variations, which could be useful for the domains investigated in this paper. Another example of environment generation with CPPNs can be found in [2]. So I believe it would still be an interesting baseline comparison to run in the future. [1] Stanley, Kenneth O. "Compositional pattern producing networks: A novel abstraction of development." Genetic Programming and Evolvable Machines 8 (2007): 131-162. [2] Team, Open Ended Learning, et al. "Open-ended learning leads to generally capable agents." arXiv preprint arXiv:2107.12808 (2021). --- Reply to Comment 1.1.1: Comment: Thank you very much for the pointers and for increasing the score. We will make sure to point out the connection to CPPNs in the revised version.
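The environment entropy measure discussed in the rebuttal above (count 2 × 2 tile patterns, then take the entropy of their empirical distribution) can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the function name and the grid-as-list-of-lists representation are assumptions:

```python
from collections import Counter
import math

def environment_entropy(grid):
    """Shannon entropy (in nats) of the 2x2 tile-pattern distribution.

    `grid` is a list of lists of hashable tile types. A perfectly regular
    layout (a single repeating pattern) yields entropy 0; more varied
    tilings yield higher entropy.
    """
    rows, cols = len(grid), len(grid[0])
    patterns = Counter()
    for r in range(rows - 1):
        for c in range(cols - 1):
            # Each 2x2 window is one occurrence of a tile pattern.
            patterns[(grid[r][c], grid[r][c + 1],
                      grid[r + 1][c], grid[r + 1][c + 1])] += 1
    total = sum(patterns.values())
    # Entropy of the empirical tile-pattern distribution.
    return -sum((n / total) * math.log(n / total) for n in patterns.values())
```

A uniform grid gives entropy 0, matching the rebuttal's point that low entropy corresponds to high regularity.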
Summary: This paper proposes a method for generating diverse sets of environments and solving arbitrarily large environments. The main difference of this method is that the individual in QD algorithms serves as an environment generator rather than an environment, making it capable of handling very large-scale environment generation tasks. Strengths: 1/ The proposed method can generate large environments and significantly improve the throughput of multi-robot systems. 2/ Using QD algorithms to solve this problem makes sense, and its effectiveness is verified. Weaknesses: 1/ The paper is somewhat difficult to read and lacks sufficient background information. To effectively convey the significance of the problem, the authors should provide a more detailed introduction to the research and add more references. This will help readers who are unfamiliar with the content to better understand the paper. 2/ The technical contribution of this work is somewhat limited. The proposed method appears to be an ad-hoc combination of existing techniques, and the authors should emphasize their unique technical contributions or demonstrate the method's importance in real-world applications. 3/ To further strengthen the results, it would be beneficial to include more challenging types of environments and competitive baselines. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1/ Why are there only two lines in Figure 3-f? 2/ Could you show the diverse behaviors of the different generators? 3/ The problem formulation, i.e., generating a set of environment generators rather than environments, significantly increases the optimization overhead during the training process. How do you balance the computation cost and the final performance? 4/ How about using QD algorithms to directly generate a set of environments with different scales, and taking the scales as a part of the diversity measure? Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Why are there only two lines in Figure 3-f? Our method has a hyperparameter alpha that controls the weighting between throughput and similarity score in the objective function. We studied the effect of hyperparameter alpha in Section 6.1 in the warehouse domains. We observe that alpha = 5 is a reasonable setting that can generate environments which outperform human-designed ones. We therefore run experiments in the manufacture domain with alpha = 5. In the final version, we will add the results with more alpha values to validate our hypothesis. 2. The problem formulation, i.e., generating a set of environment generators rather than environments, significantly increases the optimization overhead during the training process. How do you balance the computation cost and the final performance? We clarify that generating a set of environment generators rather than environments does not significantly increase the optimization overhead because the majority of the computation resides in agent simulations. For example, in the warehouse (even) domain, the baseline DSAGE algorithm finishes within 24 hours while our method finishes in 28 hours on machine (2) specified in Appendix C.5. The extra time is attributed to the MILP solver because, early in the optimization process, the initial NCA generators tend to generate unrepaired environments with simple patterns that do not satisfy most domain-specific constraints. 3. How about using QD algorithms to directly generate a set of environments with different scales, and taking the scales as a part of the diversity measure? We thank the reviewer for the suggestion. By introducing scale as a diversity measure, the QD optimization problem would be much more challenging because we would be simultaneously optimizing NCA generators that generate environments of different scales. In addition, the large computational requirements of the agent-based simulator and MILP solver would still remain with this approach.
We acknowledge that this is an interesting direction for future work. 4. For the question regarding more competitive baseline methods such as Compositional Pattern Producing Networks, please see **More Competitive Baselines** in the general rebuttal. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It addresses several issues that I have. I will increase my score to 6. After reading other Reviewers' comments and responses, the main issues that I have are: 1. Similar to the comments of Reviewer ShTu, I agree that "arbitrarily scalable" is an overclaim. 2. The writing should be further strengthened. See Weakness 1. --- Reply to Comment 1.1.1: Comment: Thank you for reading through our rebuttal and for increasing the score. We will improve the writing in the revised version of the paper. For our response to the claim regarding “arbitrarily scalable”, please see our response under official comment in the general rebuttal section. Thank you!
Summary: The authors have used the Covariance Matrix Adaptation MAP-Annealing algorithm to generate warehouse environments that are efficient for robots to roam around in and do their task of moving packages from one place to another. The authors have presented the idea very nicely in a step-by-step manner, making the concepts clear gradually. It appears that the work is an improvement over previous works by the same authors, so this cannot be called a novel idea or work. That said, the authors have successfully applied the proposed algorithm to the problem at hand, i.e., organizing a warehouse for swift movement of robots. Strengths: The results presented by the authors are comprehensive and promising. The appendices cover the areas that are left unhandled in the paper. The authors have provided the code to execute, although I could not run it due to the time required. Weaknesses: Including human design as a metric of comparison is odd, because this is very subjective. An expert human being might be able to design an environment that is far better than a machine. Then questions arise: how have you sampled a human being? Have you conducted a survey, or designed a test that can measure the knowledge of a human being? The questions will keep popping up. I am not saying that you cannot compare with a human being, but the idea is that when a scientific study involves human beings, the appropriate representatives are picked carefully, which is missing here. The authors should have mentioned which hardware they used for executing the algorithm. They should also have mentioned how much time it took to execute all the scenarios. It appears that the simulation step would have taken quite a large amount of time; have they considered any parallelism?
Following are some mistakes:
- Figure 1: "shows an warehouse environment optimized directly with QD algorithms" -> [correction] "shows a warehouse environment optimized directly with QD algorithms"
- Line 45: "are well suited to arbitrary scaling as they incrementally contruct" -> [correction] "are well suited to arbitrary scaling as they incrementally construct"
- Lines 56-57: "we only run the MILP solver once after generating the large environments." -> [correction] "we only run the MILP solver once after generating large environments."
- Line 80: "However, in the case of the environment optimization" -> [correction] "However, in the case of environment optimization"
- Line 177: "The generator’s input and output are indentical sized one-hot environments" -> [correction] "The generator’s input and output are identical sized one-hot environments"
- Line 223 and onwards: the word "Manufacture" has been used repeatedly; in most places "Manufacturing" should have been used instead
- Line 319: "of arbitrarily sizes." -> [correction] "of arbitrary sizes."
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: What is the measure space that you have mentioned in different places starting from line 115? Apparently, you have not defined it. Please provide a reference where this term is defined and an appropriate meaning relevant to your context. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 2 fair Limitations: Yes, the authors have dedicated a complete section to this. The present work is limited to square-shaped cells, each having 2 to 4 neighbors. They aim to find a strategy for organizing irregularly shaped objects, which would be quite challenging.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Measure Space The measure space is part of the QD problem definition, and we adopt the definition from the DSAGE paper [47]. Specifically, a measure space is a bounded and discretized space defined by the user-specified diversity measure functions. The goal of the QD algorithm is then to simultaneously optimize an objective function and diversify the measure functions, resulting in an archive of solutions in the measure space. In our context, the measure space of each domain is defined by the measure functions specified in Sections 4 and 5. We will add the definition to the revised version of the paper. # Human Subject Regarding the human-designed environments, we clarify that many prior works in Multi-Agent Path Finding (MAPF) [27, 29, 1*, 2*, 3*] use similar human-designed environments to benchmark the algorithms. Therefore, we take the human-designed environments used in prior works as a reasonable representative of human-designed environments in our study. We include how the human-designed environments are created in Appendix C.4. # Hardware We thank the reviewer for pointing out the importance of parallelization in our experiments. We did parallelize all the simulations in our experiments using compute resources specified in Appendix C.5. ### References [1*] Van Nguyen, Philipp Obermeier, Tran Cao Son, Torsten Schaub, and William Yeoh. Generalized target assignment and path finding using answer set programming. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 1216–1223, 2017. [2*] Hang Ma, Jiaoyang Li, T. K. Satish Kumar, and Sven Koenig. Lifelong multi-agent path finding for online pickup and delivery tasks. In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 837–845, 2017. [3*] Zhe Chen, Javier Alonso-Mora, Xiaoshan Bai, Daniel D Harabor, and Peter J Stuckey. Integrated task assignment and path planning for capacitated multi-agent pickup and delivery.
IEEE Robotics and Automation Letters, 6(3):5816–5823, 2021. --- Rebuttal Comment 1.1: Title: Acknowledgement Comment: Thank you for the clarification.
Rebuttal 1: Rebuttal: We thank all reviewers for the detailed feedback. We appreciate that the reviewers find the combination of NCA, QD, and MILP novel and interesting (Reviewer YTho, v7gi) and the results comprehensive and promising (Reviewer fmHH). ## Role of MILP Q3 of reviewer v7gi and Q3 of ShTu The role of MILP varies by the domains. To clarify it, we show the unrepaired and repaired environments of size S_train in Figure 1 in the rebuttal as well as Figures 8e and 8f in Appendix C.3. In the warehouse (even) domain, its role is minimal, merely enforcing constraints due to slight disparities between unrepaired and repaired environments. Conversely, in the warehouse (uneven) and manufacture domains, MILP plays a role in pattern creation in addition to constraint enforcement. For example, shown in Figure 1c, the NCA generator for the warehouse (uneven) domain generates more endpoints (blue) on the left and more storage shelves (black) on the right. This facilitates the MILP in repairing the environment such that the left part has fewer obstacles (storage shelves), enabling agents to more efficiently access the frequently visited left-border workstations. For S_train environments, MILP takes up to 1 minute, while for larger S_eval sizes, it takes up to 8 hours. This longer duration is a one-time event after training NCA generators. Our method reduces the overhead of frequent large environment repairs when optimizing with size S_eval. ## Scalability in More Environment Sizes Q7 of reviewer ShTu To further demonstrate our method’s scalability, we use the trained NCA generators from Section 6 to generate progressively larger environments and run simulations. Figure 2 in the rebuttal shows the result. The y-axis illustrates two metrics: maximum mean throughput over 50 simulations (right) and the maximum scalability, defined as the agent count at this maximum (left).
We see an increasing trend for both maximum scalability and maximum mean throughput as the environment size increases since more space will be available for the agents to move. Layouts generated by our algorithm generally scale better than the human-designed warehouse layouts. We see two exceptions: Maximum mean throughput in the 69x69 warehouse (uneven) environment and both metrics in the 57x58 manufacture environment. We can attribute this to the interaction between the MILP and the specific environment generated by the NCA. Since MILP makes numerous changes to the generated environment in these domains (e.g. Fig. 1c and 1d in the rebuttal), it is possible that certain combinations of generated environments and MILP random seeds can lead to repaired layouts that create congestion. However, if we encounter such issues in practice, it is possible to either leverage a different NCA generator from the archive or re-run the MILP repair with a different random seed. ## More Competitive Baselines Q3 of reviewer YTho, Q1 of ShTu, and concerns regarding baseline from reviewers 6eLV and v7gi Multiple reviewers (YTho, 6eLV, v7gi) suggested adding more baseline methods. We chose two representative baselines in our work: DSAGE [47] (a state-of-the-art technique for multi-agent environment optimization) and human-designed environments. Previous environment generation techniques often have the same searched and generated environment sizes, likely facing computational challenges similar to DSAGE. We thank reviewer v7gi for pointing out the Compositional Pattern Producing Network (CPPN) as another baseline. Yet, CPPN, mainly adept at generalizing global patterns, isn't well-suited for our use cases, which focus on local patterns to alleviate robot congestion. ### Tiling Environment Baseline Suggested by reviewer v7gi and ShTu, we add a new baseline. 
We tile the environments of S_train shown in Appendix C.3, Figure 7f (warehouse (even)), 7i (warehouse (uneven)), and 8d (manufacture), to create the large S_eval environments. We then use MILP to enforce constraints. We run 50 simulations with N_a_eval agents specified in Table 1 of the paper. The new baseline failed in the warehouse (even) domain and achieved only a 23% success rate in the warehouse (uneven) domain. Figure 3 in the rebuttal displays the S_eval-sized tiled environments and tile-usage maps, which show the frequency of each tile used in the simulation. As shown in Figures 3b and 3d, the agents are congested, resulting in low success rates. In contrast, for the manufacture domain, the baseline matches our method, with a 100% success rate and 22.73 average throughput. This success is attributed to Figure 8d's tiling resembling the NCA-generated patterns in Figure 10b. Thus, the tiling baseline may be a good method for the manufacture domain, yet it falls short in the warehouse domains. ## Generalizing to Other Domains and Real-World Scenarios Q4, 5, 8, 10 of reviewer YTho, and Q2 of v7gi Large companies such as Amazon and Alibaba have deployed multi-robot systems in warehouses to transport packages or inventory pods. Therefore, one real-world application of our method is optimizing the layout of automated warehouses to improve throughput. Since our method is agnostic to the specific agent simulator and only requires metrics such as throughput post-simulation, we can plug in different simulators and apply our environment generation algorithm. We selected our simulator because similar ones are used in prior works [28, 47]. Nevertheless, the potential obstacles of applying our method to real-world physical scenarios are sim-to-real gaps and the availability of a realistic and efficient simulator.
Similar to prior works [27, 28, 29, 47], we make simplifications in our agent-based simulators such as perfect robot motion dynamics and deterministic environment dynamics. We are excited about integrating our idea of scalable environment generation via NCAs with physical robots and warehouse simulators in the future. We will include all the above discussions in the appendix of the revised paper. Pdf: /pdf/52d02c4b07abbd95e4dc03075abdaa952d5cfa29.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The research paper addresses the problem of generating large environments to improve the throughput of multi-robot systems. While previous work has proposed Quality Diversity (QD) algorithms for optimizing small-scale environments in automated warehouses, these approaches fall short when replicating real-world warehouse sizes. The challenge arises from the exponential increase in the search space as the environment size grows. Additionally, the previous methods have only been tested with up to 350 robots in simulations, whereas practical warehouses could host thousands of robots. To overcome these limitations, the authors propose a novel approach using Neural Cellular Automata (NCA) environment generators optimized through QD algorithms. Instead of directly optimizing environments, they train a collection of NCA generators with QD algorithms in small environments and then generate arbitrarily large environments from these generators at test time. The key advantage of using NCA generators is that they maintain consistent, regularized patterns regardless of the environment size, significantly enhancing the scalability of multi-robot systems. The research is divided into three domains: multi-agent warehouses, multi-agent manufacturing scenarios, and single-agent maze environments. In the warehouse and manufacturing domains, the authors compare their NCA-generated environments with human-designed and state-of-the-art optimized environments, demonstrating higher throughput and better scalability in the NCA-generated environments. In the maze domain, they show that their method can scale a single-agent reinforcement learning (RL) policy to larger environments with similar patterns, outperforming baseline environments. 
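The key property described in the summary above — that one trained NCA generator can produce arbitrarily large environments with consistent local patterns — follows from the NCA update rule being purely local. The toy sketch below (not the authors' implementation; the hand-picked weights, thresholding, and binary floor/shelf tile semantics are invented for illustration) applies the same 3x3 neighborhood rule to grids of different sizes:

```python
import numpy as np

def nca_step(grid, w, b):
    """One NCA update: each cell's next state is a thresholded linear
    function of its 3x3 neighborhood (toroidal padding for simplicity)."""
    h, wd = grid.shape
    padded = np.pad(grid, 1, mode="wrap")
    out = np.zeros((h, wd))
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w) + b
    return (out > 0).astype(int)  # hypothetical tile types: 0 = floor, 1 = shelf

def generate(size, w, b, steps=10, seed=0):
    """Iterate the local rule from a random seed grid of the given size."""
    grid = np.random.default_rng(seed).integers(0, 2, size=(size, size))
    for _ in range(steps):
        grid = nca_step(grid, w, b)
    return grid

# The same learned rule (w, b) generates layouts at any size, which is
# the crux of the scalability argument: only the rule is optimized.
rule_w = np.array([[0.2, -0.1, 0.2],
                   [-0.1, 0.5, -0.1],
                   [0.2, -0.1, 0.2]])
small = generate(16, rule_w, -0.3)
large = generate(64, rule_w, -0.3)
assert small.shape == (16, 16) and large.shape == (64, 64)
```

In the paper's pipeline, the rule's parameters would be searched by the QD algorithm at the small size and the resulting grid would then pass through MILP repair; both are omitted here.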
The paper's strengths lie in its originality, proposing a novel combination of NCA generators and QD algorithms to address the scalability challenge, its quality in presenting a systematic evaluation across multiple domains, clarity in the presentation of algorithms and results, and the significance of its potential impact on various multi-robot systems applications. Strengths: Originality: The paper introduces a novel approach to address the problem of generating large environments for multi-robot systems, which is distinct from prior work that focused on optimizing relatively small-scale environments. The utilization of Neural Cellular Automata (NCA) environment generators, optimized through Quality Diversity (QD) algorithms, is a unique and creative combination of techniques, enabling the generation of large environments with regularized patterns. The introduction of environment entropy as a diversity measure to quantify regularized patterns is an innovative concept, which contributes to understanding and evaluating the environments' structural characteristics. Quality: The research paper thoroughly describes the methodology, algorithms, and experiments, providing comprehensive details for replicability and understanding. The authors present a systematic evaluation across multiple domains, comparing their NCA-generated environments with human-designed and state-of-the-art optimized environments. The results are statistically analyzed and presented, demonstrating the quality of the proposed approach. The utilization of CMA-MAE, a state-of-the-art optimization algorithm, adds rigor to the paper's technical foundation. Clarity: The paper is well-written and organized, making it easy for readers to follow the methodology and results. Mathematical formulations and notations are clear and appropriately explained, enhancing the paper's accessibility to researchers in the field. 
The use of illustrative figures and tables helps in visualizing the concepts and experimental outcomes. Significance: The paper addresses an important problem in the field of multi-robot systems, i.e., scalability and efficient generation of large environments. The proposed NCA environment generators have the potential to impact various applications, such as automated warehouses and manufacturing systems, by providing optimized and scalable environments. The demonstrated scalability of single-agent RL policies to larger environments with similar patterns indicates broader implications for other agent-based systems and RL tasks. Weaknesses: Lack of Novelty in the Proposed Method: The paper claims to propose a new method for optimizing Neural Cellular Automata (NCA) environment generators using Quality Diversity (QD) algorithms. However, the paper does not adequately demonstrate the novelty of the proposed method compared to prior works that have used similar optimization techniques for environment generation. The lack of a comprehensive literature review and clear differentiation from previous approaches weakens the paper's contribution. Insufficient Evaluation of Scalability: While the paper demonstrates improved scalability of NCA-generated environments compared to human-designed environments, it lacks a thorough analysis of scalability as the environment size increases. The experiments focus on relatively small environments (e.g., 36x33) and then evaluate the same NCA generators on larger environments (e.g., 101x102). A more systematic study with varying environment sizes and intermediate steps would provide a clearer understanding of scalability. Limited Comparison with Baseline Models: The paper compares the NCA-generated environments with human-designed environments and a state-of-the-art optimization method, DSAGE, but does not include comparisons with other baseline models or state-of-the-art methods in related areas of research. 
Including more comprehensive comparisons would strengthen the paper's findings. Lack of Real-world Implementation and Validation: The research focuses on simulated environments and agent-based simulations. While these are suitable for initial validation, the lack of real-world implementation and validation on physical multi-robot systems in actual automated warehouses or manufacturing scenarios reduces the practical significance of the proposed method. Lack of Discussion on Generalizability: The paper focuses on three specific domains (multi-agent warehouse, multi-agent manufacturing, and single-agent maze), but it lacks a discussion of the generalizability of the proposed method to other domains or applications. Addressing the potential limitations and extensions to different scenarios would enhance the paper's broader impact. Insufficient Analysis of Hyperparameters: The paper briefly mentions the use of hyperparameters, such as α, but does not provide a detailed analysis of how these hyperparameters affect the performance of the proposed method. A thorough sensitivity analysis and tuning of hyperparameters would provide more insights into the model's behavior and stability. Lack of Open-Source Implementation: While the paper describes the proposed method in detail, it does not provide an open-source implementation or a publicly available codebase. Providing access to the code would allow researchers to reproduce the experiments and build upon the proposed method. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Clarification on Novelty: Can the authors provide a more detailed explanation of the novel aspects of their proposed method compared to prior works in the field of environment generation and optimization? How does their approach differ from existing approaches in terms of algorithmic design and optimization techniques?
Scalability Analysis: Could the authors elaborate on the reasons behind the observed differences in scalability between the warehouse (even) and warehouse (uneven) domains? What factors contribute to the better scalability in one domain compared to the other? Is it possible to improve scalability in the domain with poorer performance? Comparison with Baseline Models: While the authors have compared their NCA-generated environments with human-designed environments and DSAGE, could they provide additional comparisons with other baseline models or state-of-the-art methods in related research areas? This would help establish the broader significance of their proposed method. Real-World Applicability: How feasible is the implementation of the proposed method in actual automated warehouse or manufacturing environments? What are the potential challenges or limitations when applying the method to real-world scenarios? Are there any specific requirements or modifications needed for practical implementation? Generalization to Other Domains: Could the authors discuss the potential applicability of their proposed method to domains beyond the ones tested in this paper? How well does the approach generalize to diverse multi-agent systems and different task settings? What are the potential obstacles or adaptations required for transferring the method to other scenarios? Sensitivity to Hyperparameters: How sensitive is the proposed method to the choice of hyperparameters, such as α? Can the authors provide insights into the effects of different hyperparameter settings on the quality and scalability of the generated environments? Are there specific guidelines for tuning these hyperparameters? Open-Source Code Availability: Is the authors' codebase publicly available for reproducibility and further research? If not, would the authors consider releasing the code to the research community?
Open-sourcing the implementation would facilitate collaboration and promote further advancements in the field. Realistic Simulation Assumptions: How well do the agent-based simulators used in the experiments represent real-world multi-agent systems? Are there any limitations or simplifications in the simulators that might affect the validity of the results or the generalizability of the findings? Impact of Environment Size: In the experiments, the authors evaluate the NCA-generated environments on larger sizes (S_eval) compared to the training environments (S_train). Could the authors discuss how increasing environment size affects the performance and efficiency of the NCA generators? Are there any specific challenges or benefits associated with generating larger environments? Practical Use Cases: Can the authors provide examples of potential real-world applications or use cases where the proposed method could be applied to improve multi-robot systems' performance and scalability? What are the envisioned practical benefits of adopting the proposed approach in such scenarios? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Simplified Simulations: The agent-based simulators used in the experiments might oversimplify real-world scenarios, leading to potential discrepancies between simulated and actual multi-robot system behaviors. The authors should discuss the implications of these simplifications and any potential deviations from real-world performance. Lack of Real-World Validation: The proposed method's evaluation is primarily based on simulations, and there is no direct validation in real-world warehouse or manufacturing environments.
It would be beneficial to include experiments in real-world scenarios to assess the transferability and effectiveness of the NCA-generated environments. Generalization to Other Environments: While the paper shows promising results in the warehouse and manufacturing domains, it remains unclear how well the proposed method generalizes to more complex and diverse environments, such as outdoor spaces, multi-floor warehouses, or environments with dynamic obstacles. Hyperparameter Sensitivity: The method's performance might be sensitive to the choice of hyperparameters, such as the weight α used to balance similarity and objective functions. The authors should provide a more comprehensive analysis of the sensitivity of the method to different hyperparameter settings. Scalability in All Domains: While the paper demonstrates superior scalability in some domains, the method's performance in other domains, such as the warehouse (uneven) scenario, does not show the same level of improvement over baseline environments. The authors should address the limitations that might hinder scalability in certain domains and propose potential solutions. Ethical and Societal Considerations: Although the paper focuses on technical aspects, it is essential to discuss the broader societal impacts and potential ethical considerations of deploying large-scale multi-robot systems in various real-world applications. Addressing ethical implications and potential negative societal consequences can provide a more well-rounded perspective on the research. Reproducibility: While the paper outlines the method's details, there is no mention of whether the authors' codebase is available for replication and further research. Providing open-source code or detailed instructions for replication would improve the study's transparency and reproducibility. Comparison with More Baseline Models: The paper compares the NCA-generated environments with human-designed environments and DSAGE.
Including additional comparisons with other state-of-the-art environment generation and optimization methods would strengthen the research's credibility and position it in the broader context of related work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. Clarification on Novelty: In prior environment generation methods [47], the size of the environments searched over and the size of the environments generated are the same. These methods involve thousands of agent simulations during the search. In the large environments that we generate in our work, each simulation may take up to a day, making it intractable to directly apply prior methods. Hence, searching for environment generators in smaller environments and leveraging them to generate larger environments is a crucial step in scaling environment generation. Another work [11] has combined a QD algorithm and NCA to generate diverse game environments of small sizes (16 by 16). Our work differs in the following aspects: (1) while the prior work focuses on generating diverse game levels, we show that the NCA generators can be trained at a small scale and then be used to generate larger environments at various scales while maintaining consistent local patterns, which is beneficial to large multi-agent system design; (2) we incorporated the MILP solver to enforce domain-specific constraints; (3) our objective function includes the throughput, which is a simulated agent-based metric, while the objective function of the previous work focuses on properties of the environments; (4) we introduced the similarity score S (introduced in section 4) to the objective function so that the generated environments can more easily scale to larger sizes. 2. Scalability Analysis: Environments of the warehouse (uneven) domain are generally less scalable because the tasks of the robots are unevenly distributed. In particular, robots are 5 times more likely to go to workstations on the left border than to those on the right. As a result, the robots can more easily get congested because more robots will be traveling to the left-border workstations throughout the simulation. 3.
Comparison with Baseline Models: Please see **More Competitive Baselines** in the general rebuttal. 4. Real-World Applicability: Please see **Generalizing to Other Domains and Real-World Scenarios** in the general rebuttal. 5. Generalization to Other Domains: Please see **Generalizing to Other Domains and Real-World Scenarios** in the general rebuttal. 6. Sensitivity to Hyperparameters: The most important hyperparameter in our method is alpha, which controls the weighting between throughput and similarity score in the objective function. We show the experimental results of different alpha values in the warehouse (even) and warehouse (uneven) domains in section 6.1. In Figures 7 and 9 of Appendix C.3, we show the generated environments trained with different alpha values. In general, a larger alpha results in better scalability in larger environments. 7. Open-Source Code Availability: We have included the code with instructions to reproduce our results in the supplemental material. We will make the code public if our paper is accepted. 8. Realistic Simulation Assumptions: Please see **Generalizing to Other Domains and Real-World Scenarios** in the general rebuttal. 9. Impact of Environment Size: The runtime of the NCA generator remains negligible as environment sizes increase. We provide the detailed runtime of the NCA generator and MILP solver at sizes S_train and S_eval in Table 5 of Appendix D.4. 10. Practical Use Cases: Please see **Generalizing to Other Domains and Real-World Scenarios** in the general rebuttal.
null
null
null
null
null
null
Certifiably Robust Graph Contrastive Learning
Accept (poster)
Summary: Motivated by the fact that the certifiable robustness of graph contrastive learning (GCL) remains unexplored, the authors develop the first certifiably robust framework for GCL by proposing unified criteria to evaluate and certify the robustness of GCL. Specifically, the authors introduce a novel technique, RES (Randomized Edgedrop Smoothing), to ensure certifiable robustness for any GCL model, and this certified robustness can be provably preserved in downstream tasks. Experiments on 7 real-world datasets show that the proposed RES-GRACE achieves state-of-the-art performance. Strengths: 1. The presentation of the paper is very clear and the figures are easily digestible. 2. The numerical experimental results and visualization support the effectiveness of the proposed method. Weaknesses: 1. In this paper, the authors conduct experiments for both node and graph classification tasks over regular and large-scale graphs. That is good. However, the choice of baselines seems somewhat insufficient. Can the authors compare with some recent GCL baselines, e.g., RGCL [1] and GLCC [2]? 2. Can the authors provide some explanations for why the performance of the RES-backbone (e.g., RES-GRACE, RES-DGI) framework is sometimes worse than the backbone on the raw graph (although their performance is always better than the backbone under perturbation scenarios)? 3. From a clarity perspective, this work is, in my opinion, weakly motivated. It is not clear why the authors turn to edgedrop smoothing to enhance the robustness of a GCL model. [1] Li, Sihang, et al. "Let invariant rationale discovery inspire graph contrastive learning." International conference on machine learning. PMLR, 2022. [2] Ju, Wei, et al. "Glcc: A general framework for graph-level clustering." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 4. 2023. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Please see comments and questions in Weaknesses.
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The topic of certifiable robustness is quite interesting and of high practical value. I believe this work has potential, but some things need to be addressed, such as a better motivation and more detailed comparisons in the experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer A65y for recognizing the novelties, the clear presentation, the valid technique details, and the extensively conducted experiments. The following is our point-to-point response to the reviewer's concerns and comments: **Q1: Compare with some recent GCL baselines? E.g., RGCL [1] and GLCC [2].** We thank the reviewer for raising the related works. Since GLCC does not publish its source code, we compare only with RGCL: we implement our RES on RGCL and obtain RES-RGCL. Specifically, we follow the same setting as in the paper to compare the robust accuracy of RES-RGCL and RGCL. We set graph classification as the downstream task. Hyperparameters are tuned based on the performance on the validation set. We use a random attack to obtain the noisy graphs, with a perturbation rate of 0.1. Each experiment is conducted 5 times and the average results are reported. Comparison results on MUTAG and PROTEINS are shown in **Fig. 2 of the attached PDF file**. From these results, we can observe that (i) when no attack is applied to the raw graphs, RES-RGCL achieves comparable performance to the baseline RGCL; (ii) when attacks are conducted on the noisy graphs, RES-RGCL consistently outperforms the baseline on both the MUTAG and PROTEINS datasets. This result demonstrates the effectiveness of our method in enhancing the robustness of RGCL against adversarial attacks. The above observations are similar to those for GraphCL. We also report the certified accuracy of RES-RGCL on the two datasets. The results are shown in **Fig. 3 of the PDF file**. From the figure, we can observe the tradeoff between certified robustness and model utility, which is similar to that in Sec. 6.3. **Q2: Provide some explanations why the performance of the RES-backbone (e.g., RES-GRACE, RES-DGI) framework is sometimes worse than the backbone on the raw graph** Thanks for your careful reading.
We would like to kindly clarify that the performance of the RES-backbone framework is comparable to the baseline on raw graphs in these situations. For example, based on Table 3, the performance of RES-backbone is superior to the baselines on the raw graphs of all datasets except Pubmed and OGB-arxiv. Specifically, RES-DGI achieves robust accuracy of 80.0±0.8 and 64.8±0.1 on the Pubmed and OGB-arxiv datasets, respectively. These results are comparable to the respective performance of DGI on the two datasets: 80.1±0.9 and 65.0±0.2. The difference in average robust accuracy between them is negligible, and the standard deviations of RES-DGI are also slightly lower than those of the baseline DGI, which validates our claim. **Q3: This work is weakly motivated. Why consider and start edgedrop smoothing to enhance the robustness of a GCL model** Please see the following clarification for your concerns about our motivation. **(i)** Certified robustness aims to prove that samples are robust to any perturbation in the considered space. **To the best of our knowledge, there is no existing work studying the certified robustness of GCL.** Existing works [3,4,5] on the certifiable robustness of GNNs are designed for supervised settings. However, they generally analyze the worst-case attack on labeled nodes, which cannot be directly adapted to GCL due to the absence of labels. **(ii)** To address the above challenges, we propose **RES to derive certifiable robustness for GCL**. Our motivation is that injecting the randomized edgedrop noise $\epsilon$ into $\mathbf{v}'$ multiple times in the inference phase ensures that each perturbed edge is dropped in the majority of these samples, so the perturbed edges are removed from the final decision, which makes certifying the robustness of GCL feasible. **(iii)** However, applying RES solely to test samples at inference may hurt performance in downstream tasks and the certified robustness based on Eq. (8).
Thus, to address this issue and enhance RES's robustness, we propose to train the robust GNN encoder by injecting randomized edgedrop noise into one augmented view and maximizing the agreement between the two views via GCL, which ensures that samples with randomized edgedrop noise have the same latent class as the clean samples. **This eliminates the negative impacts of randomized edgedrop noise, enhances the model utility and robustness of GCL, and further improves the robustness certification of GCL**. We again thank the reviewer for the time and effort in reviewing our paper. If you have any further concerns or questions, please do not hesitate to let us know. We will respond to them promptly. [1] Let invariant rationale discovery inspire graph contrastive learning. ICML 2022. [2] Glcc: A general framework for graph-level clustering. AAAI 2023. [3] Certifiable robustness and robust training for graph convolutional networks. KDD 2019. [4] Efficient robustness certificates for discrete data: Sparsity-aware randomized smoothing for graphs, images and more. ICML 2020. [5] Certified robustness of graph neural networks against adversarial structural perturbation. SIGKDD 2021. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer A65y Comment: I appreciate the authors' responses and additional experiments. The authors have addressed my concerns. I will keep my score the same. --- Reply to Comment 1.1.1: Comment: Thanks for your kind response. We are glad that your concerns are addressed. We sincerely hope that you can raise your rating. If any other concerns or questions need to be addressed for raising the score, please let us know. We will respond to them promptly. Title: Response to Reviewer A65y
Summary: This paper introduces a certifiably robust graph contrastive learning method called RES (Randomized Edgedrop Smoothing). This paper (1) first introduces criteria for how to evaluate and certify robustness, (2) then introduces RES to ensure certifiable robustness, and (3) finally proves that the certified robustness can be transferred to downstream tasks. Strengths: 1. This paper is well motivated: current robust graph contrastive learning methods are not able to certify their robustness. 2. The theorem introduced in this paper looks interesting. 3. The experimental results on several benchmark datasets show that the proposed method could outperform baselines. Weaknesses: 1. The motivation for using $\mathbf{v}^+$ rather than $\mathbf{v}$ in Eq. (3) is not quite clear. Specifically, in Eq. (3), why do you use $s(h(\mathbf{v}'), h(\mathbf{v}^+)) > s(h(\mathbf{v}'), h(\mathbf{v}^-))$ rather than $s(h(\mathbf{v}'), h(\mathbf{v})) > s(h(\mathbf{v}'), h(\mathbf{v}^-))$? 2. $v$ and $v^+$, and the term "positive sample", are somewhat confusing in section 4.1 (after line 194). I think the "positive sample" after line 194 has a different meaning from the "positive sample" in contrastive learning. It is highly suggested that the authors find a better way to present the idea here. 3. The claim in lines 225-226 that "the majority of them will possess identical structural vectors, that is, $\mathbf{v}\oplus\mathbf{\epsilon}=\mathbf{v}'\oplus\mathbf{\epsilon}$" seems too strong. If this is the real case, can you show the statistics on some real datasets? 4. Randomly dropping edges is one of the basic graph augmentation techniques and has been introduced in GraphCL [1] and GraphMAE [2]. What's the difference between RES and the random edge drop in GraphCL and GraphMAE? [1] You, Yuning, et al. "Graph contrastive learning with augmentations." Advances in neural information processing systems 33 (2020): 5812-5823. [2] Hou, Zhenyu, et al.
"Graphmae: Self-supervised masked graph autoencoders." Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2022. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. In problem 1, what are $y$ and $y^\ast$? 2. What does "maximum of {$\mathbf{v}_1,\dots, \mathbf{v}_N$}" mean in Lemma 1? Can you show a concrete example? 3. There are many ways to define the probability shown in Eq. (5). For example, empirically, Gaussian is the most common and effective way. What are the advantages of your definition over a simple Gaussian? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer vCmM for recognizing the novelties, the valid theoretical and technique details, and the extensive experiments of this work. The following is our point-to-point response to the reviewer's concerns and comments: **W1: Why use $\mathbf{v}^+$ instead of $\mathbf{v}$ in Eq. (3)**. Thanks for your question. We agree with the reviewer that $\mathbf{v}$ could also be used in Eq. (3), as $\mathbf{v}$ and $\mathbf{v}^+$ are positive samples in GCL, which have the same latent class. Here are two reasons we use $\mathbf{v}^+$: **(i)** The use of $\mathbf{v}^+$ distinguishes it from $\mathbf{v}^-$, making our statement clearer. **(ii)** Given a positive pair $(\mathbf{v}, \mathbf{v}^+)$, the goal of robustness certification of GCL is to make sure the perturbed sample $\mathbf{v}'$ is always a positive sample of $\mathbf{v}^+$, as shown in Eq. (3) and Eq. (5). Replacing $\mathbf{v}^+$ with $\mathbf{v}$ in Eq. (3) would cause inconsistent notation in subsequent parts. **W2: "positive sample" after line 194 differs from the one in CL** Thanks for your question. We kindly clarify that both uses of "positive sample" share the same meaning. In GCL, the positive samples are generated from augmentation and they have the same latent classes based on the label-invariant augmentation intuition of GCL (see lines 129-130). The "positive sample" after line 194 comes from Def. 2 about the certified robustness of GCL, where $(\mathbf{v}$, $\mathbf{v}^+)$ is a positive pair in GCL as shown in line 189. The meaning of Eq. (3) here is to check whether $\mathbf{v}'$ under any perturbation within the specific budget is still more similar to the positive sample $\mathbf{v}^+$ of $\mathbf{v}$ than to any $\mathbf{v}^-_i$ in the latent space. In other words, it is to check whether $\mathbf{v}'$ is still a positive sample of $\mathbf{v}^+$. Therefore, the two uses of "positive sample" are the same.
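The certification condition of Eq. (3) discussed in W1 and W2 — the perturbed sample must remain more similar to the positive sample than to every negative sample — can be written as a small check. This is a toy sketch only; cosine similarity and the example embeddings are assumptions for illustration, not the paper's encoder:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity s(·, ·) between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_positive_of(z_pert, z_pos, z_negs):
    """Eq. (3)-style check: the perturbed embedding z_pert must be more
    similar to the positive embedding z_pos than to every negative."""
    s_pos = cosine(z_pert, z_pos)
    return all(cosine(z_pert, z_neg) < s_pos for z_neg in z_negs)

# Toy embeddings (hypothetical): z_pert stays close to z_pos,
# so the certification condition holds for this sample.
z_pert = np.array([1.0, 0.1])
z_pos = np.array([0.9, 0.2])
z_negs = [np.array([-1.0, 0.5]), np.array([0.0, 1.0])]
assert is_positive_of(z_pert, z_pos, z_negs)
```

The actual certificate additionally requires the condition to hold for *any* perturbation within the budget, which RES establishes probabilistically rather than by enumerating perturbations.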
**W3: The claim that "the majority of them will possess identical structural vectors" seems too strong.** Thanks for your insightful question. We kindly clarify that given the perturbed sample $\mathbf{v}'=\mathbf{v}\oplus\delta$, after adding the randomized edgedrop noise $\epsilon$, the only difference between $\mathbf{v}'\oplus\epsilon$ and $\mathbf{v}\oplus\epsilon$ lies in the fake edges in $\delta$. **Thus, the problem reduces to validating whether the majority of the injected $\epsilon$ drop these fake edges.** Specifically, based on Sec. 5.2, RES draws $\mu$ samples of $\mathbf{v}'\oplus\epsilon$ and then uses Monte Carlo estimation to decide the final connection status of each edge by checking whether the majority of the $\mu$ samples drop it. Thus, **with a high $\beta$, all fake edges are dropped in most of the $\mu$ samples**, so these fake edges are finally dropped, which supports our claim. We further report statistics on Pubmed to validate the correctness of the above statement. Specifically, we conduct Nettack with various perturbation sizes on test nodes and run RES with various $\mu$ and $\beta$. The frequency of test nodes satisfying $\mathbf{v}\oplus\epsilon=\mathbf{v}'\oplus\epsilon$ is shown in **Table 4 of the attached PDF file**. From the table, we observe that almost all test nodes satisfy $\mathbf{v}\oplus\epsilon=\mathbf{v}'\oplus\epsilon$, validating our claim. **W4: Difference between RES and random edgedrop in GraphCL and GraphMAE.** Thanks for your insightful question. We kindly clarify that our RES is inherently different from the random edgedrop in GraphCL and GraphMAE. Due to the word limit, please see **Q1 of the global response** for our detailed response. **Q1: What are $y$ and $y^\*$.** Thanks for pointing them out. $y^\*$ should be the latent class $c^\*$ of $\mathbf{v}$ in line 160 and $y$ should be $c$ in line 161. We apologize for the confusion and will revise them in the final version of our paper.
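The W3 argument above — that with a high drop probability β the injected fake edges are removed in most of the μ noisy draws, making v ⊕ ε = v′ ⊕ ε — can be illustrated with a toy Monte Carlo estimate. This is a sketch under the assumption of independent Bernoulli edgedrop; the edge sets are hypothetical and this is not the authors' implementation:

```python
import random

def frac_identical_after_noise(clean_edges, fake_edges, beta, mu=2000, seed=0):
    """Estimate P[v ⊕ ε = v' ⊕ ε] over mu draws of the edgedrop noise ε,
    which drops each edge independently with probability beta. Sharing the
    noise on the clean edges, the two noisy graphs coincide on a draw
    exactly when every injected fake edge is dropped."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(mu):
        if all(rng.random() < beta for _ in fake_edges):
            hits += 1
    return hits / mu

p = frac_identical_after_noise({(0, 1), (1, 2)}, {(0, 2), (2, 3)}, beta=0.9)
# With 2 fake edges and beta = 0.9, P = beta**2 = 0.81 in expectation,
# so the vast majority of draws drop every fake edge.
assert 0.75 < p < 0.87
```

As the number of injected fake edges grows, the probability decays as β raised to that number, which is consistent with high β being needed for larger perturbation budgets.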
**Q2: What does "maximum of $\\{\mathbf{v}_1,\cdots, \mathbf{v}_N\\}$" mean in Lemma 1?** Thanks for your question. "Maximum of $\\{\mathbf{v}_1,\cdots,\mathbf{v}_N\\}$" is the maximum of a sequence of i.i.d. samples, as used in Lemma 1. In our theorem, it refers to $\\max\\{-D\_1,\cdots,-D\_{n}\\}$ in Eq. (C.9), where $D\_i=(1-s(h(\mathbf{v}^+),h(\mathbf{v}^-\_i)))/2$. The maximum thus equals the negative of half of the minimum distance between the representations of the positive sample $\mathbf{v}^+$ and the negative samples $\mathbf{v}^-\_i$. We apologize for the confusion about this notation and will revise it in the final version of our paper. **Q3: Advantages of Eq. (5) over a simple Gaussian.** Thanks for your insightful question. The advantage of the definition in Eq. (5) is: **We can use Eq. (5) to reformulate the problem of certifying robustness for GCL in Problem 1 as the problem of comparing the distances between the positive and negative samples $\mathbf{v}^+$, $\mathbf{v}^-$ with respect to $\mathbf{v}$, as shown in Eq. (3). This makes it easier to obtain GCL's certified robustness.** Specifically, based on Problem 1 in Sec. 3.3, we aim to develop a certifiably robust GCL encoder $h$ such that the probability of $h(\mathbf{v}')$ being in the latent class $c^*$ is always the largest over any other $c\neq c^*$. However, quantitatively calculating such a probability is challenging in GCL. Based on Theorem 1 and its proofs in lines 630-648, Eq. (5) builds a connection between the probability of $\mathbf{v}$ being a positive sample of $\mathbf{v}^+$ and the distance between them in the latent space. This probability aligns with that of $\mathbf{v}'$ being in latent class $c^*$ due to the label-invariant augmentation intuition in line 129. Then, we can use RES to derive certified robustness based on Eq. (5). [1] Graph contrastive learning with augmentations. NeurIPS 2020. [2] Adversarial graph augmentation to improve graph contrastive learning. NeurIPS 2021. 
[3] Label-invariant augmentation for semi-supervised graph classification. NeurIPS 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I've read the rebuttal and I'd like to slightly raise my score. --- Reply to Comment 1.1.1: Title: Response to Reviewer vCmM Comment: Thanks for your kind response and approval. We sincerely appreciate your time and efforts in improving our paper. If you have any further concerns or questions, please do not hesitate to let us know. We will respond to them timely.
Summary: This study represents the first exploration of certifiable robustness in Graph Contrastive Learning (GCL). The authors propose a unified definition of robustness in GCL, addressing the existing ambiguity in quantifying its resilience to perturbations. The introduction of the Randomized Edgedrop Smoothing method is an interesting approach that applies randomized edgedrop noise to graphs to offer certifiable robustness on unlabeled data while minimizing the insertion of unnecessary edges. Theoretical analyses provided affirm the robust performance of their encoder in subsequent tasks. Extensive empirical experiments on various real-world datasets have been conducted, suggesting that the proposed method enhances the robustness of GCL models and provides certifiable robustness. Strengths: 1. The authors explore a novel area of certifying the robustness of Graph Contrastive Learning (GCL) against various perturbations. They introduce a comprehensive definition for robustness in GCL and design an innovative framework, Randomized Edgedrop Smoothing (RES), to validate this robustness. 2. A theoretical analysis provided in the study affirms that the representations learnt by their robust encoder can deliver a provable robust performance in subsequent tasks. 3. The authors have developed an effective training strategy for robust GCL, integrating the application of randomized edgedrop noise. 4. Comprehensive empirical evaluations demonstrate that their method can deliver certifiable robustness for downstream tasks. Weaknesses: 1. The selection of adversarial attack strategies in this study seems somewhat underwhelming given the marginal difference between clean accuracy and attack accuracy. It would be beneficial to employ more aggressive attack methods to genuinely assess the robustness of your proposed framework. 2. From what is shown in Table 3, the RES method appears to result in only slight improvements. 
Providing results from a broader range of graph attack methods might lend more robust evidence to support your claims. 3. The paper seems to be missing a comparative analysis between the proposed Randomized Edgedrop Smoothing method and existing edge drop augmentation techniques in Graph Contrastive Learning (GCL). Including such an analysis could provide readers with a more concrete understanding of the advantages and potential improvements brought about by the RES method. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Could you clarify the operation of the evasion attack in the context of transductive node classification within Graph Contrastive Learning (GCL)? In a scenario where structural perturbations are incorporated into the test set, it's worth noting that the GCL encoder is already trained on the entire graph. So, in this transductive learning setting, what would be the practical significance of the evasion attack? Could you perhaps provide a schematic or flowchart, similar to Algorithm 1 in your paper, that visually represents the process of the attack method in GCL? This could facilitate a better understanding of its operation and mechanics, thus enriching the comprehension of your methodology. 2. I recommend considering stronger graph injection attacks for evasion scenarios, such as those presented in [1]. 3. Could you provide a comparison between your proposed Randomized Edgedrop Smoothing (RES) method and the learnable edge-dropping augmentations described in [2]? I am interested in both theoretical distinctions and empirical differences based on experimental results. 4. Could you elucidate the distinction between your proposed Randomized Edgedrop Smoothing (RES) method and the conventional edge perturbation for augmentation as outlined in [3]? Understanding this difference could clarify the unique value proposition of your approach. 5. 
The definition of the "concatenation vector v" as given in line 106 seems somewhat ambiguous, as it appears to include the connected edge of node v. However, equation 5 also features h(v), where v presumably represents node features. Could you provide further clarification regarding the nature and role of the vector v? 6. In line 198, the space B is defined, but its details remain unclear. Could you provide a more comprehensive explanation of the space B, including its constituent elements? Is it meant to represent a probability space? [1]. Chen Y, Yang H, Zhang Y, et al. Understanding and improving graph injection attack by promoting unnoticeability[J]. arXiv preprint arXiv:2202.08057, 2022. [2]. Suresh S, Li P, Hao C, et al. Adversarial graph augmentation to improve graph contrastive learning[J]. Advances in Neural Information Processing Systems, 2021, 34: 15920-15933. [3]. You Y, Chen T, Sui Y, et al. Graph contrastive learning with augmentations[J]. Advances in neural information processing systems, 2020, 33: 5812-5823. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer F8a7 for recognizing the novelties, the solid theoretical and technical details, and the extensive experiments of this work. The following is our point-by-point response to the reviewer's concerns and comments: **Q1: Clarify the operation of the evasion attack in the context of transductive node classification within GCL** Thanks for your thoughtful questions. Please find our answers below: **Evasion attack process.** In the context of transductive node classification within GCL, the target nodes of the evasion attack are visible during encoder training and inference. The objective is to perturb these target nodes, causing the GNN encoder $h$ to generate poor representations that degrade the nodes' performance on downstream tasks. The procedure of evasion attacks is summarized in **Fig. 1 of the attached PDF file.** Specifically, $h$ is first trained via GCL on the raw graph. Then perturbations are added to a target node $v$, yielding a perturbed node $v'$. After that, we generate the representation of $v'$ via $h$ and evaluate its performance on the downstream tasks. **Practical significance.** In real-world scenarios like social networks and financial systems, pre-training a GNN encoder on untampered graphs is a common practice. Take social networks as an example: a company could pre-train a GNN encoder on the raw social network graph and enable downstream task execution via an API. Consequently, user representations are periodically derived from the encoder for these tasks, involving users present during the encoder's training. Yet, malicious attackers might manipulate the graph structure by injecting spurious edges, leading to mispredictions in downstream tasks and raising severe trustworthiness concerns. Hence, ensuring the model's robustness against evasion attacks is also an important concern. **Q2: Compare with some stronger graph injection attacks in [1]** Thanks for the suggestions. 
We add two SOTA graph injection attack methods, i.e., TDGIA and AGIA [1]. We insert the same number of fake nodes as the target nodes. We compare the performance of GRACE and RES-GRACE. For RES-GRACE, we set $\mu=50$ and $\beta=0.9$. The comparison results on 4 datasets are shown in **Table 2 of the PDF file**. From the results, we observe that RES-GRACE consistently outperforms the baselines across the 4 datasets in defending against graph injection attacks. **Q3: Compare RES with the learnable edge-dropping augmentations in [2]** Thanks for sharing the great work [2]. We would like to kindly clarify that RES is inherently different from [2]: **(i)** Our work has different objectives, as our method utilizes RES to **achieve certified robustness for GCL**, while [2] is a learnable edge-dropping augmentation to enhance downstream task performance. **(ii) In the inference phase, RES is deployed to guarantee certified robustness. And RES is further added to one augmented view during GCL training for model utility and robustness.** However, [2] is only used to augment graphs during training. To demonstrate the effectiveness of RES, we compare ADGCL with GraphCL. We also add RES-ADGCL into the comparisons. We generate noisy graphs with a 10% random attack. The comparison results on two graph datasets are shown in **Table 3 of the PDF file**. From the table, we observe that RES-GraphCL and RES-ADGCL both achieve comparable performance to the baselines on raw graphs and consistently outperform the baselines on the noisy graphs of the two datasets, which validates the effectiveness of RES in any GCL model. **Q4: Compare RES and the random edge-dropping in [3]** Thanks for your insightful question. We would like to kindly clarify that our RES is inherently different from the random edge-dropping augmentation in [3]. Due to the word limit, please see **Q1 of the global response** for our detailed responses. 
**Q5: The nature and role of the concatenation vector v** Thanks for your insightful question. We would like to clarify that the concatenation vector $\mathbf{v}$ here is a vector depicting the structure of the node/graph for learning representations. Due to the word limit, please see **Q2 of the global response** for our detailed responses. **Q6: Explain the space $\mathbb{B}$** Thanks for your question. In our paper, given a sample $\mathbf{v}^+$ with the latent class $c^+$, **$\mathbb{B}(\mathbf{v}^+)$ is a space for $\mathbf{v}^+$, where each constituent element within it is a positive sample of $\mathbf{v}^+$**. In the context of GCL, as detailed in line 129, the label-invariant augmentation intuition of GCL makes augmented positive views have latent classes consistent with the original ones. Therefore, **each sample within this space also has the same latent class $c^+$, and we can then define $\mathbb{B}$ in Eq. (4).** **Role:** By leveraging the space $\mathbb{B}$ in Theorems 1 and 2, **we establish a connection between the probability of $\mathbf{v}$ being a positive sample of $\mathbf{v}^+$ and their cosine similarity in the latent space**. This connection enables the certification of GCL's robustness and transfers such robustness to downstream tasks to solve Problem 1 in Sec. 3.3. [1] Understanding and improving graph injection attack by promoting unnoticeability. ICLR 2022. [2] Adversarial graph augmentation to improve graph contrastive learning. NeurIPS 2021. [3] Graph contrastive learning with augmentations. NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response. Regarding Q1, are you suggesting that by using RES $v'$ still obtains a robust representation after $h$? For Q3 and Q4, I would like to gain a deeper understanding of the role of RES during both the training and inference phases. Which part contributes to the robustness? 
In line 373 of your paper, could you clarify what is meant by 'the number of removed structures'? --- Reply to Comment 1.1.1: Comment: Thanks for your time and effort in reviewing our paper. Here is our point-by-point response to your further questions: **Q1: Are you suggesting that by using RES, $\mathbf{v}'$ still obtains a robust representation after $h$?** Thanks for your insightful question. We kindly claim that given the perturbed sample $\mathbf{v}'=\mathbf{v}\oplus\delta$, where $||\delta||_0<k$, by using RES in the GCL training and inference phases, the representation of $\mathbf{v}'$ learned by our RES-based encoder $h$ can achieve certifiably robust performance in downstream tasks; that is, $h(\mathbf{v}')$ has consistently correct predictions in downstream tasks under any perturbation within budget $k$. **Q3 and Q4: The role of RES during both the training and inference phases. Which part contributes to the robustness?** Thanks for your insightful question. We kindly claim that RES works in both the training and inference phases, which together contribute to the robustness. **The role of RES in the inference phase:** RES performs randomized edgedrop smoothing in the inference phase through Monte Carlo. Our motivation is that injecting the randomized edgedrop noise $\epsilon$ into $\mathbf{v}'$ multiple times in the inference phase causes each perturbed edge to be dropped in the majority of these samples, so the perturbed edges are dropped in the final decision, which makes certifying the robustness of GCL feasible and guarantees $\mathbf{v}'$'s robust performance. Specifically, based on Sec. 5.2, RES draws $\mu$ samples of $\mathbf{v}'\oplus\epsilon$. Then we use Monte Carlo to decide the final connection status of each edge by validating whether the majority of the $\mu$ samples drop this edge. Thus, **all fake edges in most of the $\mu$ samples will be dropped if a high $\beta$ is selected**, so these fake edges are finally dropped. 
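The edge-level majority vote described for the inference phase can be sketched as follows. This is our own minimal illustration, not the authors' code: `edge_present`, `beta` (the edgedrop probability), and `mu` (the number of noisy samples) are assumed names.

```python
import numpy as np

def res_smooth_edges(edge_present, beta, mu, rng=None):
    """Monte Carlo majority vote over mu randomized-edgedrop samples.

    edge_present: boolean array giving the connection status of each
    candidate edge in the (possibly perturbed) structural vector v'.
    An edge survives smoothing only if it is kept in the majority of
    the mu noisy samples v' (+) epsilon.
    """
    rng = np.random.default_rng(rng)
    edge_present = np.asarray(edge_present, dtype=bool)
    # Draw mu independent edgedrop masks: each edge is kept w.p. 1 - beta.
    kept = rng.random((mu, edge_present.size)) > beta
    kept &= edge_present  # an absent edge can never be kept
    # Majority vote over the mu samples decides each edge's final status.
    return kept.sum(axis=0) > mu / 2
```

With a high drop probability such as $\beta=0.9$ and $\mu=50$, a fake edge is kept in only about 5 of the 50 samples, so the vote removes it, matching the intuition in the rebuttal.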
**The role of RES in the training phase:** RES is used to train a robust encoder in the training phase to eliminate the negative impacts of randomized edgedrop noise at inference, and it enhances the model utility and robustness of GCL. Specifically, our motivation is that applying RES solely to test samples at inference may hurt performance in downstream tasks and the certified robustness based on Eq. (8). Thus, we propose robust encoder training for RES in Sec. 5.1 by injecting randomized edgedrop noise into one augmented view during GCL. It ensures that samples with randomized edgedrop noise align in latent class with clean samples under the encoder, thereby mitigating the negative impacts of such noise and further benefiting the robustness and certification of RES. **Which part contributes to the robustness:** According to the claims above, RES works in both the training and inference phases, which together contribute to the robustness. In the inference phase, our randomized edgedrop smoothing can drop all fake edges if a high $\beta$ is selected, ensuring certifiably robust performance under any perturbation within the specified attack budget. In the training phase, our robust encoder training method mitigates the negative impacts of randomized edgedrop noise at inference and enhances the model utility of GCL, further benefiting the robustness and certification of RES. **The meaning of 'the number of removed structures' in line 373.** Thanks for your question. The number of removed structures denotes the number of views/graphs where all edges are removed. In the training phase, we usually have two views for GCL training. In the inference phase, we usually have one test graph. Therefore, we use $i\in\\{0,1,2\\}$ and $j\in\\{0,1\\}$ to denote the number of removed structures in the training and testing phases, respectively, to implement several variants of our model in the ablation studies and understand how RES contributes to the robustness of GCL. 
We again thank the reviewer for the time and effort in reviewing our paper. If you have any further concerns or questions, please do not hesitate to let us know. We will respond promptly. Title: Response to Reviewer F8a7
Summary: This paper studies the problem of certifiable robustness against adversarial perturbations in Graph Contrastive Learning (GCL). This is an interesting paper where theoretical and empirical results are provided. The goal is to provide a provable guarantee of robustness in the face of a limited-budget adversarial attack on graph structure, showing that (1) positive samples remain positive samples in GCL, and (2) the downstream node/graph classification output does not change either. To this end randomized edge-drop smoothing (RES) is proposed and analyzed theoretically and empirically. Strengths: The main advantage of this work is its certification capability, which eliminates dependence on empirical assessment relying on empirical attacks, whose parameter/algorithmic tuning would be questionable. The proposed method is simple and intuitive, and the experimental results show relative improvements on SOTA as well. Weaknesses: The main disadvantage of the proposed work seems to be its reliance on the latent classes in the downstream task, which is considered to be node/graph classification. This still encompasses a large set of problems and is of interest; however, it does partially violate the claim that a general GCL certification is studied. Also, the manuscript has many typos and grammatical errors that would benefit from careful proof-reading. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1- On multiple occasions in the definitions and theorems, an assumption is made on the encoder h being a "well-trained" GNN encoder; however, this term is not defined, and it is not clear what criteria should be met for this. Since this is repeated in multiple Theorems, its clarification is crucial. 2- Is the importance of certification of the GCL in 9 itself only theoretical, and a stepping stone to providing certification on the downstream node/graph classification? Or is it possible to also evaluate such certification? 
I cannot seem to find any such results in the experiments section. 3- In the ablation study in 6.4, what was the optimal $\beta$ for the FLIP algorithm? The gap between FLIP and other methods (Baseline and Ours) seems too large, and it is interesting to know if the smaller $\beta=0.1$ turned out to be the best value, which would motivate a finer-grained search for a smaller value for this. 4- What is the rationale behind setting such high values for $\beta$ in the experiments in Figure 2? Such high values indicate that almost all edges are dropped in the randomization, which means that no structural information is retained, which is counterintuitive given that such high *certified* accuracy can still be obtained. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Increase in computational needs for Monte Carlo estimates and its environmental impacts should be mentioned in the limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer ssR8 for recognizing the novelties, the solid theoretical and technical details, and the extensive experiments of this work. The following is our point-by-point response to the reviewer's concerns and comments: **W1: The proposed work seems to rely on the latent classes in the downstream task, which is considered to be node/graph classification.** Thanks for your questions. We acknowledge that our work relies on the latent class to formalize GCL and establish the connection between the certified robustness of GCL and that of downstream tasks. Many existing works [1,2,3] also exploit latent classes to theoretically analyze contrastive learning. Thanks for your good suggestions. We leave other downstream tasks (e.g., link prediction) as future work. **W2: Typos and grammatical errors.** Thanks for pointing them out. We will correct them in the final version of this paper. **Q1: Define a "well-trained" GNN encoder.** Thanks for your insightful question. A "well-trained" encoder is one that can extract meaningful and discriminative representations by mapping positive pairs closer in the latent space while pushing dissimilar samples away. **Q2: The importance of the certification of GCL** Thanks for your insightful question. We would like to kindly clarify that our certified GCL robustness has the following contributions: **(i)** Our work is the first attempt to certify the robustness of GCL. **(ii)** We propose a novel framework, RES, to certify GCL's robustness. This framework also improves the robustness of any GCL model. **(iii)** Extensive experimental results on 7 real-world datasets validate the effectiveness of RES in certifying robustness and enhancing the robustness of any GCL model in practice. Please see Table 1 and Table 3 of the original paper for the details. **Q3: The optimal $\beta$ for the FLIP algorithm.** Thanks for your insightful question. 
To find the optimal $\beta$ for FLIP, we follow the same setting with various $\beta$ as in Sec. 6.4 (see lines 362-363). The robust accuracies of FLIP on both clean and noisy Cora and Pubmed graphs under Nettack with an attack budget of $3$ are shown in **Table 1 of the attached PDF file**. Our observations are: (i) FLIP achieves the best robust accuracy on the clean graphs when $\beta=0.1$, implying that $\beta=0.1$ is the optimal choice. (ii) FLIP generally demonstrates inferior robust accuracy on the clean graphs at $\beta=0.9$. Our interpretation is that a smaller $\beta$ in FLIP introduces fewer noisy edges to clean graphs, thereby preserving more structural information, which is beneficial for downstream tasks. However, based on the analysis in [4], a smaller $\beta$ leads to a smaller certified perturbation size, which implies a tradeoff between model utility and certifiable robustness within FLIP. Therefore, it is important to study how to select the best $\beta$ in FLIP to balance this tradeoff. We will leave it as future work. **Q4: Rationale behind setting high values for $\beta$ in experiments.** The rationale behind setting a high $\beta$ is: the proposed robust encoder training method in Sec. 5.1 improves the model utility of GCL. Even when setting a large $\beta$ for RES, we can still obtain high robust accuracy on clean graphs, which further leads to high certified accuracies. Specifically, as shown in line 321, certified accuracy denotes the fraction of correctly predicted test nodes/graphs whose certified perturbation size is not smaller than the given perturbation size. It implies that these certified robust samples should also be correctly predicted by RES on the clean datasets. However, as the reviewer said, introducing randomized edgedrop solely to test samples during inference could potentially hurt downstream task performance and further negatively impact the certified robustness based on Eq. (8). Thus, we propose robust encoder training for RES in Sec. 
5.1 by injecting randomized edgedrop noise into one augmented view during GCL. **It ensures that samples with randomized edgedrop noise align in latent class with clean samples under the encoder, thereby mitigating the negative impacts of such noise and further benefiting the robustness and certification of RES.** We sincerely thank you again for your time and efforts in improving our paper. We will include the above discussion in the final version of our paper. If you have any further concerns or questions, please do not hesitate to let us know. We will respond promptly. [1] A theoretical analysis of contrastive unsupervised representation learning. ICML 2019. [2] Do more negative samples necessarily hurt in contrastive learning? ICML 2022. [3] Robustness verification for contrastive learning. ICML 2022. [4] Certified robustness of graph neural networks against adversarial structural perturbation. KDD 2021. --- Rebuttal Comment 1.1: Title: Response to authors Comment: Thank you for responding to the previous questions. Most of my questions are answered; however, I am still not convinced by the qualitative definition of "well-trained" as, also in the above response, it still remains unclear what is meant mathematically (e.g. in terms of minimum desired classification accuracy, or such quantities.) If no such requirement is necessary for the GNN encoder accuracy for the theorems to hold, I'd recommend dropping the term "well-trained" in the theorems to avoid being qualitative and mathematically unclear. I will keep my score unchanged. --- Reply to Comment 1.1.1: Title: Response to Reviewer ssR8 Comment: Thanks for your kind response. Please see the following clarification for your question about the definition of the well-trained GNN encoder: To evaluate mathematically whether a GNN encoder $h$ is well trained, we introduce criteria based on the similarity between node/graph representations in the latent space. 
For each positive pair $(\mathbf{v}, \mathbf{v}^+)$ with its negative samples $\mathbf{V}^- = \\{\mathbf{v}\_1,\cdots,\mathbf{v}\_n\\}$, we clarify that $h$ is well-trained at $(\mathbf{v},\mathbf{v}^+)$ if the following inequality is satisfied: $$s(h(\mathbf{v}),h(\mathbf{v}^+))>\max_{\mathbf{v}^-\in\mathbf{V^{-}}}{s(h(\mathbf{v}),h(\mathbf{v}^-))},$$ where $s(\cdot,\cdot)$ is a cosine similarity function. This implies that $h$ can effectively discriminate $\mathbf{v}$ from all its negative samples in $\mathbf{V}^-$ and learn meaningful representations for $\mathbf{v}$ in the latent space. Moreover, in this paper, we further extend the criteria for certifying robustness in GCL, as defined by Eq. (3). Based on the clarification above, given a GNN encoder $h$ that is well-trained at $(\mathbf{v}, \mathbf{v}^{+})$, suppose that $\mathbf{v}'$ is a perturbed sample obtained by adding structural noise $\delta$ to $\mathbf{v}$ as described in line 192, where $||\delta||\_0\leq k$. We then clarify that $h$ is certifiably robust at $(\mathbf{v}, \mathbf{v}^{+})$ if the following inequality holds: $$s(h(\mathbf{v}{'}),h(\mathbf{v}^+))>\max_{\mathbf{v}^-\in\mathbf{V^{-}}}{s(h(\mathbf{v}{'}),h(\mathbf{v}^-))},~\forall{\delta}:\|\delta\|_{0} \leq k,$$ which indicates that for any perturbation within the attack budget $k$, the cosine similarity $s(h(\mathbf{v}{'}),h(\mathbf{v}^+))$ is consistently larger than $s(h(\mathbf{v}{'}),h(\mathbf{v}^-))$ for any $\mathbf{v}^-\in{\mathbf{V}^-}$. We sincerely thank the reviewer again for your time and efforts in improving our paper. We will include the above discussion in the final version of our paper. If you have any further concerns or questions, please do not hesitate to let us know. We will respond promptly.
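The well-trained criterion above is directly checkable given the encoder outputs. A minimal sketch of ours (not the authors' code), with hypothetical names `z_v`, `z_pos`, and `z_negs` standing in for $h(\mathbf{v})$, $h(\mathbf{v}^+)$, and $\{h(\mathbf{v}^-_i)\}$:

```python
import numpy as np

def cos_sim(a, b):
    """Cosine similarity s(a, b) between two representation vectors."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_well_trained_at(z_v, z_pos, z_negs):
    # h is "well-trained" at (v, v+) iff the anchor's similarity to its
    # positive sample exceeds its similarity to every negative sample.
    return cos_sim(z_v, z_pos) > max(cos_sim(z_v, z) for z in z_negs)
```

The certified-robustness check is the same comparison with the perturbed representation $h(\mathbf{v}')$ in place of $h(\mathbf{v})$, required to hold for all $\delta$ with $\|\delta\|_0 \leq k$.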
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their thoughtful comments and constructive suggestions, which significantly helped us strengthen our paper. We are encouraged to find that the reviewers appreciate the novelty of this work, the solid theoretical and technical details, the extensive experiments, and the clear presentation. We now provide our answers to these shared comments and report the required additional experimental results and algorithms in **the attached PDF files**. **Q1: Difference between RES and random edgedrop in GraphCL.** Our RES is inherently different from the random edge-dropping augmentation in GraphCL: **(i)** Random edge-dropping is an augmentation method to generate different augmented views and maximize the agreement between views. However, RES is devised from the robustness perspective, providing certifiable robustness and enhancing the robustness of any GCL method. **(ii)** While random edge-dropping is only applied to augment graphs for GCL, RES extends beyond this. Following the generation of two augmented views as shown in Sec. 5.1, RES injects randomized edgedrop noise into one augmented view during GCL training. Then, it performs randomized edgedrop smoothing in the inference phase through Monte Carlo, as shown in Sec. 5.2. Specifically, for inference using RES (based on lines 274-276), $\mu$ samples of $h(\mathbf{v}\oplus\epsilon)$ are drawn by injecting randomized edge-drop noise $\epsilon$ into $\mathbf{v}$ $\mu$ times. The final prediction is obtained via Monte Carlo, selecting the prediction with the highest frequency among the $\mu$ samples. **Q2: About the concatenation vector v.** The concatenation vector $\mathbf{v}$ is a vector depicting the structure of the node/graph for learning representations. **For node-level tasks, it represents the connection status of any pair of nodes in the K-hop subgraph of the node $v$. 
For graph-level tasks, it represents the connection status of any pair of nodes in the graph $\mathcal{g}$.** To construct such a vector, we select the upper triangular part of the adjacency matrix of the K-hop subgraph of $v$ or the graph $\mathcal{g}$ and flatten it into the vector, where each item in this vector can denote the connection status of any pair of nodes in the K-hop subgraph of the node $v$ or the graph $\mathcal{g}$. **Motivation:** The motivation for using this notation is that since we focus on perturbations on the graph structure $\mathbf{A}$ in this paper, we treat the feature vector of $v$ as a constant and use the adjacency matrix of the K-hop subgraph of the node or the adjacency matrix of the graph to represent the structure of the node or graph. For simplicity and clarity, given a GNN encoder $h$ and the concatenation vector $\mathbf{v}$ of the node $v$ or the graph $\mathcal{g}$ as above, **we then omit the node feature matrix $\mathbf{X}$ and simply write the node $v$’s representation $h\_{v}(A,X)$ and the graph $\mathcal{g}$’s representation $h(\mathcal{g})$ as $h(\mathbf{v})$.** Therefore, we use a unified notation $\mathbf{v}$ to denote the node $v$ or the graph $\mathcal{g}$, and further facilitate our theoretical derivations. Pdf: /pdf/d8f1f690a4b1a1ab612b9c49a4e55b051c13c825.pdf
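The construction of the concatenation vector described above — selecting the upper-triangular part of the adjacency matrix and flattening it — can be sketched as follows. This is an illustrative helper of ours, not the paper's code; `adj` stands for the adjacency matrix of the K-hop subgraph of $v$ or of the graph $\mathcal{g}$:

```python
import numpy as np

def concat_vector(adj):
    # Keep only the strictly upper-triangular entries (i < j): each entry
    # records the connection status of one node pair, and the flattened
    # result is the structural vector v used in place of the full matrix.
    adj = np.asarray(adj)
    iu = np.triu_indices(adj.shape[0], k=1)
    return adj[iu]
```

For an $n$-node (sub)graph this yields a vector of length $n(n-1)/2$, one entry per unordered node pair, which is exactly the structure on which the edgedrop noise $\epsilon$ and perturbation $\delta$ act.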
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper is the first one to dive into the certifiably robust Graph Contrastive Learning (GCL) and proposes a certifiably robust GCL framework. It defines the certified robustness of GCL and then proposes Randomized EdgeDrop Smoothing (RES), which randomly drops each edge of the input sample with a certain probability. Besides, it also presents a simple training method for robust GCL. The theoretical analyses and extensive experiments demonstrate its effectiveness. Strengths: This paper is the first one to dive into the certifiably robust Graph Contrastive Learning (GCL) and proposes a certifiably robust GCL framework. It defines the certified robustness of GCL and then proposes Randomized EdgeDrop Smoothing (RES), which randomly drops each edge of the input sample with a certain probability. Besides, it also presents a simple training method for robust GCL. The theoretical analyses and extensive experiments demonstrate its effectiveness. The paper is well-written. Weaknesses: I have some minor comments, as below. 1. In Line 106, the definition of "concatenation vector \mathbf{v}" is confusing. What is the meaning of "which captures the upper triangular part of the adjacency matrix for the K-hop subgraph of v"? Does it mean the K-hop neighbors of node v? 2. The use of probability operator symbols is inconsistent. For example, Eqs. 2, 6, and 7 use "\mathbb{P}", while Eq. 5 uses "Pr". It would be better to harmonize the symbols used. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weaknesses above. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: I believe this work does not have potential negative societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for your approval of the novelty, the sound theoretical and technical details, and the extensive experiments of our work. We sincerely thank you for your time and thoughtful feedback. We will perform all the changes requested in the minor comments. Here, we hope to address the raised points. **Q1: What is the meaning of "which captures the upper triangular part of the adjacency matrix for the K-hop subgraph of v"** Thanks for your insightful question. We would like to clarify that the concatenation vector $\mathbf{v}$ here is a vector representing the connection status of any pair of nodes in the K-hop subgraph of the node $v$, which depicts the structure of that subgraph. To construct such a vector, we select the upper triangular part of the adjacency matrix of the K-hop subgraph of $v$ and flatten it into a vector, where each entry denotes the connection status of one pair of nodes in the K-hop subgraph of node $v$. A similar setting also appears in [1]. **Q2: The use of probability operator symbols is inconsistent in Eq.(5).** Thanks for pointing this out. We will correct this typo in the final version of this paper. We sincerely thank you again for your time and efforts in reviewing our paper. We will include the above discussion and revision in the final version of our paper. If you have any further concerns or questions, please do not hesitate to let us know. We will respond to them in a timely manner. [1] Certified robustness of graph neural networks against adversarial structural perturbation. KDD 2021. --- Rebuttal 2: Comment: Hi reviewer: Please kindly respond to or acknowledge the authors' rebuttal.
null
null
null
null
null
null
Causal de Finetti: On the Identification of Invariant Causal Structure in Exchangeable Data
Accept (poster)
Summary: This paper is about causal discovery with exchangeable data, thus relaxing the traditional iid assumption. This allows for causal structure identification by conditional independence tests. The result has application to multi-environment data. Strengths: Despite my low confidence, I think the paper delivers strong results for causal analysis. Both the Causal De Finetti theorems and the identifiability result (Theorem 5) appear not trivial and of some impact on people working in the field. Weaknesses: - The identifiability result seems to hold for Markovian models only. - The experiments are very promising, but only a very limited setup. - The paper is very technical and dense, thus more suitable for a journal version. Yet, it can be worth presenting such a condensed version at the Neurips conference. Technical Quality: 3 good Clarity: 3 good Questions for Authors: I have found it surprising that a relaxation of the iid assumption leads to higher identifiability. Is it possible to better explain this point? Figure 1a could be clearer to me. Is it possible to make it more expressive (or drop it)? Is there any room to extend the result to non-Markovian models? Confidence: 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No specific issues related to that. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the efforts and we greatly appreciate your positive feedback. From your comments, you clearly understood the paper. > The identifiability result seems to hold for Markovian models only. We thank the reviewer for the suggestion; however, Pearl points out in his classical textbook [1] that “Markov compatibility” is "a necessary and sufficient condition for a DAG G to explain a body of empirical data represented by P, that is, to describe a stochastic process capable of generating P” (Definition 1.2.2). The classical result that ‘two DAGs are observationally equivalent’, i.e., that one can only distinguish causal structures up to Markov equivalence classes in i.i.d. data, is likewise established under the condition of Markov compatibility. Here we take the same view and show one can distinguish the unique causal structure in exchangeable data. Just as with the development of causal discovery in i.i.d. data, we hope there will be follow-up work for more practical considerations of non-Markovian models in exchangeable data. > The experiments are very promising, but only a very limited setup. We thank the reviewer for appreciating the value of the experiments. Indeed, we intend the experiments to serve as a demonstration of our identifiability theorem, i.e., that ICM-generative processes can indeed recover the unique causal structure via conditional independence tests only, and that currently developed methods perform poorly in our setting. We also agree with the reviewer that it would be interesting to extend the experiments further. Here we include experiment results that are compared to more baselines (specifically, FCI, GES and NOTEARS). We observe that existing methods all perform poorly in our setting. We will include the result and a more detailed evaluation in an updated version. > The paper is very technical and dense, thus more suitable for a journal version. Yet, it can be worth presenting such a condensed version at the Neurips conference. 
We agree with the reviewer that the paper is ‘technical and dense’. We also agree with the reviewer that it is worth presenting in a venue like NeurIPS to demonstrate the advantages of exchangeable data for causality. > I have found it surprising that a relaxation of the iid assumption leads to higher identifiability. Is it possible to better explain this point? We thank the reviewer for the question; it is indeed one of the main points of this paper. The higher identifiability power is due to the richer conditional independence structure contained in exchangeable data. For example, consider pairs of $(X_1, Y_1)$, $(X_2, Y_2)$. The conditional independence $Y_1 \perp X_2 | X_1$ trivially holds in i.i.d. data, but holds non-trivially in exchangeable data. This thus allows us to identify unique causal structures. > Figure 1a could be clearer to me. Is it possible to make it more expressive (or drop it)? We thank the reviewer for the suggestion and will try to incorporate it in a future version. > Is there any room to extend the result to non-Markovian models? We thank the reviewer for the question and will illustrate the potential for extension in a simple example of the bivariate case where hidden confounders are allowed. Suppose there are two triples of variables $(X_1, Y_1, Z_1)$, $(X_2, Y_2, Z_2)$, where $Z_1, Z_2$ are unobserved confounders. They follow the causal structure $X_i \to Y_i, Z_i \to X_i, Z_i \to Y_i$ for all $i \in \{1, 2\}$. In particular, $(X_i, Y_i)$ are generated by an exchangeable process connected by latent variables $\theta$, $\psi$, and $Z_i$ is generated by an i.i.d. process. Note the conditional independence $Y_2 \perp X_1 | X_2$ still holds in this setting where there exist hidden confounders. This suggests one can still recover the causal direction between $X$ and $Y$ in this case. Reference: 1. Pearl, J. (2009). Causality. Cambridge University Press. 
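The bivariate example above ($Y_1 \perp X_2 \mid X_1$ holding under $X \to Y$ but not under $Y \to X$ within exchangeable environments) can be checked numerically. Below is a minimal sketch in a linear-Gaussian instantiation of our own choosing (the particular model, noise scales, and the helper `partial_corr` are illustrative assumptions, not the paper's exact generative process): with two samples per environment sharing mechanism variables $\theta, \psi$, the empirical partial correlation of $(Y_1, X_2)$ given $X_1$ vanishes in the causal direction but is clearly nonzero in the anti-causal one.

```python
import numpy as np

def partial_corr(a, b, c):
    """Empirical partial correlation of a and b given c (linear/Gaussian):
    correlate the residuals of regressing a on c and b on c."""
    c1 = np.column_stack([c, np.ones_like(c)])
    ra = a - c1 @ np.linalg.lstsq(c1, a, rcond=None)[0]
    rb = b - c1 @ np.linalg.lstsq(c1, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(0)
n_env = 100_000

# One draw of the mechanism variables per environment; two samples share them.
theta = rng.normal(size=n_env)  # governs the cause mechanism
psi = rng.normal(size=n_env)    # governs the effect mechanism

# Forward model X -> Y: X_i = theta + eps_i, Y_i = X_i + psi + delta_i
x1 = theta + 0.1 * rng.normal(size=n_env)
x2 = theta + 0.1 * rng.normal(size=n_env)
y1 = x1 + psi + rng.normal(size=n_env)
pc_forward = partial_corr(y1, x2, x1)  # ~0: Y_1 is indep. of X_2 given X_1

# Reverse model Y -> X: Y_i = psi + delta_i, X_i = Y_i + theta + eps_i
ry1 = psi + rng.normal(size=n_env)
ry2 = psi + rng.normal(size=n_env)
rx1 = ry1 + theta + 0.1 * rng.normal(size=n_env)
rx2 = ry2 + theta + 0.1 * rng.normal(size=n_env)
pc_reverse = partial_corr(ry1, rx2, rx1)  # clearly nonzero: the CI fails
```

In this parametrization the forward partial correlation is exactly zero in population (estimation noise is on the order of $1/\sqrt{n_\text{env}}$), while the reverse one is roughly $-0.3$, so the asymmetry is easy to detect even with a crude test.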
--- Rebuttal Comment 1.1: Title: Keeping the same (positive) recommendation Comment: I thank the authors for their comments, which helped me understand some points better. As far as I can understand, the (serious) problem with the proof of Lemma 2 raised by Reviewer ikix has been fixed, and this had no impact on the empirical validation of the result. In this situation, I am happy to confirm my positive (but low confidence) opinion about the paper. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for responding and their positive feedback on the paper.
Summary: Causal analogs to de Finetti's theorem are proven, showing that if in an exchangeable distribution certain conditional independences hold, the distribution can be seen as being generated by a DAG with a latent variable corresponding to each node, determining that node's causal mechanism. It is further shown that knowing these conditional independences allows unique identification of the DAG. A causal discovery algorithm is presented leveraging these results, and evaluated in a synthetic data setting where data come from very many different environments. Strengths: * The causal de Finetti's theorems are philosophically interesting, similar to how the original de Finetti's theorem can play a role in the justification of Bayesian inference. The resulting graphical models, expressing disentangled causal mechanisms in terms of latent variables (Figure 1b, right), are very insightful and deserve to be commonly known in the community. * The resulting conditional independences, which are testable if data are available from multiple environments, are a very useful ingredient for causal discovery algorithms in such settings. Weaknesses: * The related work section only considers causal discovery, not the causal de Finetti theorem itself. Such references should also be listed, because the paper is claiming a contribution in this area. For instance, [Dawid 2021] also uses exchangeability to do causal inference. * Also about the related work section: the sentence "These algorithms on non-i.i.d. grouped data all demonstrate success, though it is unclear why grouped data enable causal structure identification." - I disagree strongly with this, there is a very good understanding of why multiple environments (possibly including interventional data) help with causal discovery, in the papers you list as well as in the causal inference literature as a whole. * Implications of the causal de Finetti theorem are listed without sufficient arguments, and I believe overstated. 
See question about line 139 below. * Here is a counterexample to Lemma 2: $X_i \rightarrow X_a \rightarrow X_b \leftarrow X_c \leftarrow X_d \rightarrow X_j$ with $X_b \rightarrow X_j$. Then $X_d \in S_n$, so only $X_b$ is conditioned on, but this opens the listed path. (Separate from this, in line 294 explaining the lemma, I think "non-directed" should be "open": also directed paths other than the 1-arrow path should be blocked.) * Experiment (some things are unclear to me now, see questions below): * other methods are not really fit for this scenario * 1 x-y pair per environment (?): this disconnects the experiment from the theory **References:** [Dawid 2004]: Probability, Causality and the Empirical World: A Bayes-de Finetti-Popper-Borel Synthesis, Statistical Science , Feb., 2004, Vol. 19, No. 1 (Feb., 2004), pp. 44-57 [Dawid 2021]: Decision-theoretic foundations for statistical causality, Journal of Causal Inference 2021; 9: 39-77 Technical Quality: 3 good Clarity: 2 fair Questions for Authors: * Line 139: "one can separately manipulate each latent variable controlling different mechanisms" - Does this follow from equation 5? How? ([Dawid 2004] warns that de Finetti's theorem only establishes that the latent variable exists in our minds, not in the real world.) * In the experiment, I assume $\tilde{N}^e$ is a 2-element vector? What about $N^e$? And how many x-y pairs are sampled per environment? Suggestions to improve the language (not relevant for my assessment of the paper): * the spelling of "i.i.d." is inconsistent (sometimes with spaces, sometimes with the final . missing) * line 92 & 595: "infinite exchangeable" -> "infinitely exchangeable" * the final sentence of section 2.2 doesn't parse ("due to"&"that underlies"; "involving observations are i.i.d.") * below definition 2: "does not hold for all" -> "does not hold for any" * line 140 & 141: "supporting mechanisms" -> "supporting that mechanisms" (2x) * line 147: "one implicitly make" -> "makes". 
Similar in appendix A. * Theorem 3 & 4: I'd replace "The sequence is" by "If", add "the sequence is" to point 1, and remove "if" from point 2 * line 201: "decide" -> "deciding" * line 220: "process" -> "processes" (also elsewhere) * Definition 5: "Given $P$ is" -> "Let $P$ be"; "Given $G$ be" -> "Let $G$ be"; "a ADMG" -> "an ADMG" (this also appears in Def 6); "read-off" -> "read off" * Definition 7: There is only one mapping fitting this definition, so define it straightaway instead of defining when something "is an ICM operator". Line 242 has a double "other" * line 280: $<$ -> $\leq$ * line 287: "topological ordering" is not the right concept here, as that is a total order * line 629: "exchangeable" -> "exchangeability" * line 679: "$l < k + 1$ -> $l \leq k$ Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: Yes (except as discussed above) Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough feedback, and we appreciate the kind words that causal de Finetti theorems “deserve to be commonly known in the community”. Thank you for pointing out the interesting reference Dawid 2021. We will discuss and cite [1] in an updated version. Re ‘grouped data’: This paper meant to express that there is work [2] assuming ICM which does not come with an understanding of its implications for the data’s probabilistic relationships. We thank you for your feedback and we promise to rephrase the sentence and clarify that our contribution is with respect to the statistical understanding of ICM. Re de Finetti parameters: We are sorry we failed to clarify this. We did not mean to claim that the variable necessarily exists in the real world - only that the conditions of Thm. 2 imply that the data looks as if it has been generated by a process with independent mechanisms. (We are no experts on those philosophical ramifications, so we try to focus on technical results.) Re counterexample to Lemma 2 (only used for structure learning, not affecting the causal de Finetti theorems or the identifiability theorem): We thank the reviewer for spotting this error. Here we correct the lemma and its proof. Regarding its impact on the empirical evaluations, we re-ran the experiments with the updated fix and observe no changes in the results (see the uploaded pdf). We will include all the changes in the updated version. **Lemma 2** Let node $X_i \in S_n$ and $X_j \in S_m$ where $m < n$. Set $k:= n-m$. There does not exist a directed edge from $X_i$ to $X_j$ iff: when $k = 1$, $X_i \perp X_j | S_{>n}$; and when $k > 1$, $X_i \perp X_j | Z$, where $Z = S_{>n} \cup (PA_j \cap S_{<n}) \cup (S_n \backslash \{X_i\})$. **Proof of Lemma 2**: ($\Rightarrow$): suppose there is no directed edge $X_i \to X_j$. If there is no path connecting $X_i$ and $X_j$, then $X_i \perp X_j$. Next, we assume there exists a path $p$ connecting $X_i, X_j$. 
Then $p$ satisfies: Either (1) there exists $X_k \in p$ s.t. $X_k \in S_{>n}$, or (2) all variables in the path are in $S_{\leq n}$. When $k = 1$: Under (1), let $W$ be the set containing all variables on $p$ that lie in $S_{>n}$. Note $|W| \geq 1$. Then there exists a non-collider $X_k \in W$: suppose all variables in $W$ were colliders; then there must exist $X_l \in p$ with $X_l \not\in W$ that has an edge outgoing to some variable contained in $W$. By definition of sink orders, $X_l \in S_{>n+1} \implies X_l \in W$. Contradiction. Under (2), we show that there exists a collider in the path. Suppose there is no collider on the path; then all the edges point in one direction, here flowing from $X_i$ to $X_j$. Since the path is not a 1-arrow direct path, there exists $X_a \in p$ with $a \notin \{i, j\}$. Let $X_a$ be the neighbour of $X_j$. If $X_a \in PA_j$, then $X_a \in S_{\geq m + 1}$. As $X_i$ is an ancestor of $X_a$, $X_i \in S_{\geq m+2} = S_{\geq n+1}$. Contradiction. If $X_a \in CH_j$ and all edges flow in one direction, then $X_j$ is an ancestor of $X_i$. Contradiction. In summary, either the path connecting $X_i$ and $X_j$ contains variables in higher sink orders, in which case there exists a non-collider $X_k \in p$ with $X_k \in S_{>n}$ and the path can be blocked by conditioning on the set $S_{>n}$; or the path only contains variables in $S_{\leq n}$, in which case it must contain a collider and we block it by not conditioning on any variables in $S_{\leq n}$. When $k > 1$: Under (1), as above, there exists a non-collider $X_k \in p$ with $X_k \in S_{>n}$. Therefore conditioning on all variables in $S_{>n}$ blocks the set of paths under (1). Under (2), when all variables in the path belong to $S_{\leq n}$, let $X_p$ be the parent of $X_j$ in this path; then either (2.1) $X_p \in PA_j \cap S_{<n}$, or (2.2) $X_p \in PA_j \cap S_n$. Under (2.1), $X_p$ is a non-collider in the path, since it has one outgoing edge. Then conditioning on $PA_j \cap S_{<n}$ blocks the set of paths under (2.1). 
Under (2.2), for any variable $X_m \in S_n$ on the path $p$, $X_m$ only has outgoing edges, since all variables on the path belong to $S_{\leq n}$. $X_m$ is thus a non-collider, so conditioning on $S_n \backslash \{X_i\}$ blocks the set of paths satisfying (2.2). By the Markov assumption, these CIs hold in distribution. ($\Leftarrow$): suppose the proposed CI holds but there exists a directed edge $X_i \to X_j$; then $X_i$ is not d-separated from $X_j$ given the conditioning set, so by faithfulness the CI does not hold in distribution. Contradiction. **End of Proof** Re clarifications on experiments: the equations listed represent the data-generating process for one data instance $X^e$. Specifically, $N^e$ is a vector of size $2$ and $\tilde{N}^e$ is also a vector of size $2$. In practice, we generate two instances per environment as stated in line 319: we fix $N^e$ per environment, generate two $\tilde{N}^e$ based on the generated $N^e$, and consequently generate two data instances $X^e$ based on the corresponding $\tilde{N}^e$. We acknowledge that most other methods are designed to tackle data sampled in the i.i.d. setting. To this end, we include a comparison with the method ‘CD-NOD’ [2], which is designed for heterogeneous and nonstationary datasets. We also observe that CD-NOD performs poorly in our setting. Overall, we thank you for the detailed review; we will carefully take into account your feedback on the clarity of the presentation of the experimental design, the related work discussion, and the implications of the causal de Finetti theorem, and will update with more detailed clarifications as discussed above. We hope that our response has addressed all your questions, especially on the soundness of our paper (with the corrected Lemma). We kindly ask you to let us know if you have any remaining criticism, and - if we have answered your questions - to consider reevaluating your score. References: 1. Dawid, P. (2021). Decision-theoretic foundations for statistical causality. 2. Huang, B. et al. (2020). 
Causal discovery from heterogeneous/nonstationary data. --- Rebuttal Comment 1.1: Comment: I think this paper has the potential for high impact, but with this comes great responsibility to be precise and complete about positioning w.r.t. related work. I am happy that the authors promise to improve the paper in this regard. In the case of Lemma 2, they provide a concrete fix, and I agree that the new version of the lemma is correct; I am glad this had no impact on the experimental results. I am raising my score to 6. Not having seen the updated version, I am hesitant about raising it further. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for going the extra mile in finding an error and checking the corrected version. We thank the reviewer also for acknowledging the changes we made and for raising the score. We will take particular care regarding the related work section for the final version, and hope the updated version will not disappoint. In case you feel comfortable revealing your identity after the end of the reviewing process, we would be happy to add your name to the acknowledgement.
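Claims like the counterexample and corrected lemma discussed in this thread can be machine-checked with a small d-separation routine. The sketch below is our own illustration (the helper name `d_separated` and the moralization-based implementation are our choices, not code from the paper). It verifies the reviewer's counterexample graph $X_i \to X_a \to X_b \leftarrow X_c \leftarrow X_d \to X_j$ with $X_b \to X_j$: conditioning on $X_b$ alone leaves $X_i$ and $X_j$ d-connected (the collider at $X_b$ is opened), while additionally conditioning on $X_c$ blocks every path.

```python
from collections import defaultdict, deque

def d_separated(edges, x, y, z):
    """Check whether x and y are d-separated given the set z in the DAG
    given by `edges`, via the moralization criterion: x and y are
    d-separated by z iff z separates them in the moralized graph of the
    ancestral set of {x, y} | z."""
    parents = defaultdict(set)
    for u, v in edges:
        parents[v].add(u)
    # 1. Restrict to the ancestral set of {x, y} and z.
    nodes, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in nodes:
            nodes.add(n)
            stack.extend(parents[n])
    # 2. Moralize: link each node to its parents and marry co-parents.
    und = defaultdict(set)
    for v in nodes:
        ps = [p for p in parents[v] if p in nodes]
        for p in ps:
            und[p].add(v)
            und[v].add(p)
        for a in ps:
            for b in ps:
                if a != b:
                    und[a].add(b)
    # 3. Delete z and test undirected reachability from x to y.
    seen, queue = {x}, deque([x])
    while queue:
        n = queue.popleft()
        if n == y:
            return False  # reachable => d-connected
        for m in und[n]:
            if m not in seen and m not in z:
                seen.add(m)
                queue.append(m)
    return True

# The reviewer's counterexample graph.
edges = [("i", "a"), ("a", "b"), ("c", "b"),
         ("d", "c"), ("d", "j"), ("b", "j")]
```

Here `d_separated(edges, "i", "j", {"b"})` returns `False`, confirming that conditioning on $X_b$ opens the listed path, while `d_separated(edges, "i", "j", {"b", "c"})` returns `True`, since the non-collider $X_c$ then blocks it.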
Summary: The authors describe how the combination of assumptions about exchangeability and specific statistical tests can enhance the causal implications that can be derived from observational data. Strengths: The utility and implications of exchangeability for causal inference are well described by the authors. Under the assumptions that they outline (the availability of multiple environments), the additional causal implications that can be derived are clearly very useful. The authors help readers by following essentially all complex mathematical statements with an informal statement of the result in relatively plain language. Weaknesses: A 2019 paper by Jensen et al. [1] on exchangeability and structure learning in causal graphical models makes a very similar point as this paper regarding the value of multi-environment data and exchangeability. That paper describes how to use knowledge of environments (what that paper calls “parent objects”) to provide additional constraints on graph structure by conditioning on environment (and the latent variables implied by that environment). The proofs in that paper rely on exchangeability and contrast exchangeability with conditional independence. That paper differs somewhat from this paper in goals and focus, but the current paper would be improved by clearly describing the contributions of that paper and delineating the unique contributions of the present paper. The authors cite the Independent Causal Mechanism principle without reference to the long history of the concept, stretching back almost a century. Pearl’s work [2] (in which it is sometimes called “modularity”) cites an earlier review by Aldrich [3] (in which it is called “autonomy”), which cites very early work by Haavelmo and others as far back as the 1930s. This history is worth at least noting. 
The difference between “environments” and random variables is not clearly explained until Section 4, and this distinction is critical to understanding how (for example) schools differ from students in the second example of Section 3. Each school corresponds to many students. The plate notation introduced in Section 4 would help readers if introduced earlier. *References* 1. Jensen, D., Burroni, J., & Rattigan, M. (2020, August). Object conditioning for causal inference. In *Uncertainty in Artificial Intelligence* (pp. 1072-1082). 2. Pearl, J. (2009). *Causality*. Cambridge University Press. 3. Aldrich, J. (1989). Autonomy. *Oxford Economic Papers*, *41*(1), 15-34. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: (none) Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors do not sufficiently emphasize the special structure of the data (specifically, multiple environments) that is needed to apply the methods that they advocate. Judging from the results shown in Figure 3, the benefit to multivariate structure learning only becomes substantial at truly large numbers of environments. The need for multiple environments (and large numbers of them) should be emphasized more clearly earlier in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time and are glad to find the reviewer appreciates the “utility and implications of exchangeability for causal inference” and considers our results as “clearly very useful”. > That paper differs somewhat from this paper in goals and focus, but the current paper would be improved by clearly describing the contributions of that paper and delineating the unique contributions of the present paper. Thanks for pointing out the work from Jensen et al. [1] and their interesting findings: the probabilistic implications of conditioning on objects can be explained using exchangeability. They show that object conditioning, due to exchangeability, has the advantage of mitigating latent confounding and measurement errors for causal inference. Our work differs from [1] in that, rather than inference, we show the advantage of exchangeable data for causal discovery and advocate that the widely-used causality principle ICM can be expressed as Equation 5. Our unique contribution addresses the fact that researchers often assume the ICM principle (or its variants) on the data-generating process, but without an understanding of the implications of assuming ICM, i.e., for the probabilistic relationships of the underlying data. Causal de Finetti explicitly states that the implicit probabilistic assumptions are exchangeability of the data and certain conditional independences. In summary, this paper’s unique contribution is to 1) show exchangeable data is better than i.i.d. data for unique causal structure identification (previously deemed impossible via CI tests [2]) and 2) prove causal de Finetti theorems which explicitly connect causality and probabilistic modelling. We thank the reviewer for the suggestion to compare related work on exchangeability for causal inference and will include more detailed discussions in an updated version. 
> The authors cite the Independent Causal Mechanism principle without reference to the long history of the concept, stretching back almost a century. Pearl’s work [2] (in which it is sometimes called “modularity”) cites an earlier review by Aldrich [3] (in which it is called “autonomy”), which cites very early work by Haavelmo and others as far back as the 1930s. This history is worth at least noting. We thank the reviewer for pointing this out and will include the relevant references, for example, Pearl (2009), Aldrich (1989), Hoover (2008), in an updated version. > The authors do not sufficiently emphasize the special structure of the data (specifically, multiple environments) that is needed to apply the methods that they advocate. The need for multiple environments (and large numbers of them) should be emphasized more clearly earlier in the paper. We thank the reviewer for the suggestion. The experiments are intended as a proof-of-concept to demonstrate our identifiability theorem in practice. The need for a large number of environments arises because we use only two samples per environment. From the reviewer's feedback, we understand there could be potential misunderstandings. We will incorporate your suggestions (e.g., explaining the difference between ‘environments’ and random variables, moving the plate notation earlier, and emphasizing the need for a large number of environments) in the updated version to ease readers’ understanding. References: 1. Jensen, D., Burroni, J. & Rattigan, M. (2020). Object Conditioning for Causal Inference. Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, in Proceedings of Machine Learning Research. 2. J. Pearl, Causality: Models, Reasoning, Inference, 2nd ed. New York, NY, USA: Cambridge Univ. Press, 2009. 3. J. Aldrich, “Autonomy,” Oxford Econ. Papers, vol. 41, no. 1, pp. 15–34, 1989. 4. K. D. Hoover, “Causality in economics and econometrics,” in The New Palgrave Dictionary of Economics, S. N. Durlauf and L. 
E. Blume, Eds., 2nd ed. Basingstoke, U.K.: Palgrave Macmillan, 2008. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' careful reading of my review and their thoughtful response. The changes that you outline will improve the paper. One side note: After reading Hoover (2008), I'm not convinced that it is a particularly good reference regarding autonomy/modularity. Hoover certainly mentions the Lucas Critique (a version of the modularity issue), but most of the entry is on other topics. Aldrich (1989) is much more on-point, as are some specific portions of Pearl (2009) (though the discussion of modularity is oddly dispersed throughout that book). I'll increase my rating. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our changes and raising the score. We thank the reviewer for pointing the appropriate references for ICM and will include the more relevant reference, e.g., Pearl (2009), Aldrich (1989) in the updated version.
Summary: This paper examines a stronger notion of exchangeability which provides invariant causal structure, i.e., is capable of serving as the basis for causal reasoning. The main contribution is a theorem which states that for pairs of RV with certain conditional independence properties it is always possible to represent them as a mixture of i.i.d. sequences with identical Markov factorization. This idea is further extended to the multivariate case. The authors then connect this notion of causal exchangeability to causal identification and causal discovery. Experimental results are provided for bivariate orientation and causal discovery from multiple datasets which show favorable performance when taking causal exchangeability into account. Strengths: I found this paper to be a simple, intuitive, and thorough piece of work that provides an interesting new lens on causal inference. The notion of defining what constitutes exchangeable sequences with invariant causal structures is very interesting, and the authors do a very nice job of motivating the problem, providing thorough and well written theorems, and contextualizing them within the context of application. This is a clear description of causal mechanisms from a purely statistical view and an interesting addition to the literature. Weaknesses: The largest weakness I can see here is the lack of comprehensive empirical evaluations. The evaluations provided are fairly simplistic, limited in scope, and serve more as a demonstration rather than an evaluation finding relative strength and weakness of the proposed method. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: Did you run any other causal structure learning algorithms such as FCI, GES or even optimization based approaches such as NOTEARS? Given the poor performance of PC I'm wondering if this is a weakness shared across all algorithms. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide a thorough review and constructive feedback. We are glad you found the paper interesting as a simple, intuitive, thorough piece of work. > The largest weakness I can see here is the lack of comprehensive empirical evaluations. The evaluations provided are fairly simplistic, limited in scope, and serve more as a demonstration rather than an evaluation finding relative strength and weakness of the proposed method. > Did you run any other causal structure learning algorithms such as FCI, GES or even optimization based approaches such as NOTEARS? Given the poor performance of PC I'm wondering if this is a weakness shared across all algorithms. We agree with the reviewer that the empirical evaluations are rather limited. We mainly use them to illustrate that unique causal structure learning is feasible for ICM-generative processes. Here we include additional empirical results that compare with FCI, GES and NOTEARS in the uploaded pdf. Our results show that poor performance is shared across all the algorithms evaluated. This is consistent with the main point of this paper: exchangeable data allows unique causal structure identification using conditional independence tests, and existing algorithms (designed for the i.i.d. data setting) perform poorly in our settings. We thank the reviewer for the suggestion and will include this result and a more detailed empirical evaluation in the updated version.
Rebuttal 1: Rebuttal: We thank all the reviewers for their time and effort in providing valuable feedback. We are glad to see that the reviewers unanimously agree that this paper delivers solid contributions to the community, and some consider it to be “very insightful and deserve to be commonly known in the community”. We are also glad to see several reviewers appreciate this paper’s simple, intuitive explanation of complex mathematical theorems. We thank all reviewers again for recognizing the impact of our work and hope this work provides a new lens on causal inference and can trigger follow-up work in the space of causality and exchangeable data. We respond to all reviewers’ points in detail but wanted to highlight: * we followed TMn5’s suggestions for **more comprehensive empirical evaluations**. Here we include comparisons with additional benchmarks, e.g. FCI, GES, NOTEARS, in the uploaded pdf. We observe that PC’s poor performance is indeed shared across the additional benchmarks - methods designed mostly for the i.i.d. data setting perform poorly on exchangeable data in our setting. We will include the result and a more detailed comparison in the updated version. Pdf: /pdf/1bae6f0ade6e39e36ca94ed06cc66b38247ac7a9.pdf
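As an aside, the exchangeable-but-not-i.i.d. setting discussed in these reviews and rebuttals — data that is a mixture of i.i.d. sequences rather than i.i.d. — can be illustrated with a minimal simulation. This is a hedged sketch, not the authors' code; all names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_exchangeable(n_datasets, n_samples):
    """Mixture of i.i.d. sequences (de Finetti-style): a latent parameter
    theta is drawn once per dataset, then samples are i.i.d. N(theta, 0.1^2)
    given theta. Pooled samples are exchangeable but not i.i.d."""
    thetas = rng.normal(0.0, 1.0, size=n_datasets)
    return np.stack([rng.normal(t, 0.1, size=n_samples) for t in thetas])

data = sample_exchangeable(200, 50)

# Conditioned on theta (within one dataset) the spread is small ...
within_var = data.var(axis=1).mean()
# ... but the pooled marginal inherits the variance of theta.
pooled_var = data.var()
print(within_var, pooled_var)  # roughly 0.01 vs roughly 1.0
```

The gap between within-dataset and pooled variance is exactly what makes methods designed for the i.i.d. setting, such as PC on pooled data, behave poorly here.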
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
Identifiability Guarantees for Causal Disentanglement from Soft Interventions
Accept (poster)
Summary: This paper focuses on the problem of causal discovery with unobserved causal variables. The authors show that under necessary assumptions, the latent causal model can be recovered by iteratively identifying the interventions that target the source nodes and removing these source nodes. Based on this finding, the authors propose a VAE-based generative model for recovering the causal mechanisms from observational and interventional data. The model shows promising performance on a gene mutation dataset in terms of capturing the interventional distribution and learning the causal DAG between target genes, which demonstrates the effectiveness of the proposed method. Strengths: • The authors clearly list the steps of identifying the ancestral relationships of nodes and direct edges in a causal DAG with soft interventions and give concrete examples to help readers better understand the assumptions. • The presented experimental results show the proposed model’s capability of learning the causal mechanism in the training data as well as generalizing to unseen interventional data. • The paper is organized and presented in good shape. Weaknesses: • It is better to briefly discuss the meaning of interventional faithfulness in previous studies. Does it simply mean “all causal variables are observed”? • I encourage the authors to discuss the validity of assumptions 2 and 3, especially assumption 2, in real-world scenarios as they claim that these are stronger versions of faithfulness assumptions. Minor comments: • In the equation on line 278, should the subscripts be $\hat{I} \notin \mathcal{I}$ and $\hat{I} \in \mathcal{I}$ instead of $I \notin \mathcal{I}$ and $I \in \mathcal{I}$? • What does $s_i(\cdot)$ represent on line 297, the causal mechanism corresponding to the $i^{th}$ node in the causal DAG? • The fonts in Figure 4 are too small. It might be better to change the orientation of the diagram (e.g.
using a left-right flow instead of a top-down flow) and make the fonts larger. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: • Why do the authors choose MMD over Wasserstein distance for the distance measure between the generated and true interventional distribution? • Can the proposed method handle the case where there exist **imbalanced interventions**? For example, there is only one sample in each of $\mathcal{D}^1, …, \mathcal{D}^{K-1}$ and 1000 samples in $\mathcal{D}^K$. I expect this situation to be possible in biological data since some genetic mutations can be extremely rare while some are frequently observed. • Besides biological settings, can authors think of other scenarios where the proposed model can be applied? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 2 fair Limitations: The limitations are appropriately discussed. I’m not aware of any direct negative societal impacts of this work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the encouraging comments. We appreciate that you think the identifiability proof is well laid out and the experiment shows promising results. We would like to address your remaining questions as follows: > **"It is better to briefly discuss the meaning of interventional faithfulness in previous studies. Does it simply mean “all causal variables are observed”?”** Prior interventional faithfulness assumptions [8-10] vary in a few technicalities, but they all assume that all causal variables are observed (causal sufficiency), and, more importantly, that intervening on a node will always change the marginal of its descendants. In particular, [8] (Definition 2, called “influentiality”) only made this assumption and showed that the causal graph is identifiable up to its transitive closure by detecting marginal changes. [9] showed that their algorithm consistently identifies the full causal graph by assuming additionally that intervening on a node changes the conditional distribution of its direct children given its neighbors (details can be found in Assumption 4.5 of [9]). A similar notion was also introduced in [10] where they made further assumptions regarding changes in the conditional distributions. We introduced this briefly between lines 182-183. We will add this discussion to Appendix B.1 before presenting our faithfulness results and a pointer to this discussion after line 183. > **”I encourage the authors to discuss the validity of assumptions 2 and 3, especially assumption 2, in real-world scenarios as they claim that these are stronger versions of faithfulness assumptions.”** We appreciate this suggestion by the reviewer. Appendix B.1 discusses how these faithfulness assumptions can be satisfied. For Assumption 2, we extended Example 2 in the main text by providing a wider class of nonlinear SCMs that satisfies it, as demonstrated in Example 5.
Then in Lemma 3, we showed in general how Assumption 2 can be satisfied by checking the moments of a finite number of variables. This boils Assumption 2 down to satisfying a finite set of inequalities. This is similar to the interventional faithfulness assumptions, but stronger, since the set of inequalities required poses more constraints than the inequalities required by interventional faithfulness. In both cases, the set of inequalities required ensures that any variable that remains unchanged under an intervention is not due to coincidence but rather the causal structure. As for Assumption 3, reductions can be made by assuming structural forms. We provided a brief discussion of how it can be satisfied in Lemma 4, where we showed it can be satisfied in a tree graph that satisfies Assumption 2. This is a stronger version of the faithfulness assumption in causal models [4], which ensures that any independence in the data arises not from coincidence but rather from the causal structure. > **”... should the subscripts be $\hat{I}\notin\mathcal{I}$ and $\hat{I}\in\mathcal{I}$ instead of $I\notin\mathcal{I}$ and $I\in\mathcal{I}$?”** Thank you for pointing this out. It should be $\hat{I}\notin\mathcal{I}$ and $\hat{I}\in \mathcal{I}$. We will revise it accordingly. > **”What does $s_i(\cdot)$ represent... the causal mechanism corresponding to the $i$th node...?"** Yes. $s_i(\cdot)$ corresponds to the causal mechanism that generates $U_i$ from its parents $U_{pa_{\mathcal{G}}(i)}$ and the exogenous noise $Z_i$. We will clarify this by adding a footnote stating its definition. > **”The fonts in Figure 4 are too small...”** Thank you for this suggestion! We were able to make the fonts larger by changing the orientation of the diagram. This can be found in the PDF attached to the general response.
> **”Why do the authors choose MMD over Wasserstein distance for the distance...?”** We chose to use MMD over Wasserstein distance mainly because the kernel mean embedding used by MMD makes it efficiently computable, unlike the Wasserstein distance. In addition, MMD has several desirable properties. For instance, it can be estimated well with empirical samples, where the expected square error does not grow with the dimension. Further, when using a characteristic kernel (e.g., the Gaussian kernel that we used), MMD is a “strong metric”, i.e. the MMD between two distributions is equal to zero if and only if they are equal almost everywhere. > **”Can the proposed method handle the case where there exist imbalanced interventions?...”** In the case of biological data, the interventions are imbalanced, but not very severely. Each intervention out of the 105 interventions comprises 50 to 2,000 cells. The proposed method seems to fit well to the interventions with smaller sample sizes. However, in the case of severe imbalance, e.g. with just one sample in some interventions, we cannot estimate the distribution distance well. > **”Besides biological settings, can authors think of other scenarios where the proposed model can be applied?”** To apply the proposed model, we need data corresponding to various interventions to ensure theoretical guarantees. Therefore, in addition to observational data, we also need several interventional datasets to evaluate the algorithm. The biological domain provides a suitable setting, as it allows for large-scale interventional experiments. Besides the biological domain, we also see other potential applications that we hope to explore in future work. For example, the synthetic RL suite in [11] could be a suitable test case where interventions correspond to actions on the objects. However, the evaluation metric needs to be adapted in this case to maximizing a reward.
Advanced image generative models such as [12] may also provide an opportunity to potentially generate interventional data by engineering prompts. However, it is debatable whether prompt engineering is more similar to intervening or conditioning. --- Rebuttal Comment 1.1: Title: Response to author rebuttal Comment: Thank you for the comprehensive discussion regarding the faithfulness assumptions, which I believe will significantly enhance the theoretical robustness of this paper. The authors have adequately addressed most of my questions, and I'm inclined to keep the score unchanged. --- Reply to Comment 1.1.1: Title: Response Comment: Thank you for the reply! We appreciate the comments which help improve the paper. We would be happy to answer any additional questions if there are any.
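As a side note on the MMD discussion in the rebuttal above: the biased empirical estimator of squared MMD under a Gaussian kernel can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation; the function name, bandwidth, and sample sizes are arbitrary choices for the demo.

```python
import numpy as np

def mmd2_gaussian(X, Y, bandwidth=1.0):
    """Biased empirical estimate of squared MMD between samples X and Y
    (rows are samples), with Gaussian kernel k(x, y) = exp(-||x-y||^2 / (2 h^2))."""
    def gram(A, B):
        sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-sq_dists / (2.0 * bandwidth ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
# Equal distributions: MMD^2 should be close to zero.
same = mmd2_gaussian(rng.normal(0, 1, (500, 2)), rng.normal(0, 1, (500, 2)))
# A mean shift: MMD^2 becomes clearly positive (the Gaussian kernel is characteristic).
shifted = mmd2_gaussian(rng.normal(0, 1, (500, 2)), rng.normal(2, 1, (500, 2)))
print(same, shifted)
```

Unlike an empirical Wasserstein distance, this estimate needs only pairwise kernel evaluations, which is the computational advantage the rebuttal points to.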
Summary: This paper studies the task of learning causal variables from high-dimensional observations. This is done under the assumption that single-node, soft interventions are available for all causal variables, as well as the observation function being a full row rank polynomial, e.g. linear. Based on prior work, the paper shows that this reduces to identifying causal variables from a linear combination, and identifies the challenges in further pinning down the causal variables from interventional distributions. The paper derives a result to identify causal variables up to linear combinations of their ancestors, and for identifying the causal graph among them. Finally, a practical algorithm, CausalDiscrepancyVAE, is derived and evaluated on genetic data with CRISPR interventions. Strengths: The paper is overall well written. The paper uses a clear and consistent notation throughout the paper. The setting is well introduced. Further, examples are provided to guide the understanding of the theory. The closest related works for the presented paper are clearly discussed and put into perspective with the goal of this paper. The paper presents both theoretical and empirical results. The theoretical results are backed up by detailed proofs in the appendix. The proofs appear sound and the results are intuitive, but I have not checked all proofs in detail. The setting of unpaired observations with causal relations between variables is challenging. Hence, while not ideal, an assumption for single-node intervention on all variables is reasonable, considering the current stage of causal representation learning (CRL) in general. The empirical results are based on a newly proposed algorithm, that takes advantage of a variational autoencoder setup with additional losses for guidance. Another strength of the paper is its application to real-world data, specifically gene expression data. Many CRL works currently work on purely synthetic benchmarks, which this paper tries to go a bit beyond.
The results on the genetic data are encouraging, suggesting that future work in CRL can potentially be used in this application domain. Weaknesses: The main focus of the paper is its theoretical identifiability results. However, for that, the theoretical contribution of this work is quite limited. The main task of the identifiability theory is to determine the causal variables $U$ from a linear mixing $X=U\Lambda +b$. The problem with identifying $U$ from interventions is then discussed as the effect of interventions potentially being canceled out by linear combinations of the causal variables. Assumption 2 is then stated as simply ruling out this case. However, this assumption is stated without much simplification of the linear-combination condition and requires checking all possible linear combinations in $P(U\_j+U\_SC^T)\neq P^I(U\_j+U\_SC^T)$. This is very likely difficult or impossible to verify in any real-world setting given that (a) the causal variables are not known, and (b) one needs to search over all $C$. Similarly, with the identifiability of the graph $G$, the assumption that it is the sparsest within its transitive closure is relatively restrictive and difficult to verify in real-world applications. Finally, it's unfortunate to see that the causal variables are then only identified up to linear combinations of their ancestors. The paper would be strengthened by discussing whether hard interventions can give stronger identifiability in these cases, which is likely the case as in [Lippe et al., 2023]. The proposed model architecture is quite involved and has several components (encoder, decoder, DSCM, MMD, intervention encoder) and hyperparameters ($\lambda, \alpha, \beta$, temperature of intervention encoder, MMD kernels) interacting. The tuning of the hyperparameters is not well discussed and it is only mentioned that delayed linear schedules are used for both $\alpha$ and $\beta$.
It is unclear how sensitive the model is to those, and how much tuning is necessary. No ablation study on individual elements of the architecture has been performed, overall making it unclear what each component is contributing and whether parts are necessary. Overall, with such an involved setup without ablation studies, it is unlikely that the architecture will be adopted in future work. Moreover, the theory has as an integral assumption that the mixing function is limited to a full row rank polynomial (Assumption 1). Nonetheless, the practical algorithm itself uses non-linear neural networks covering a much larger space of possible mixing functions. This appears to violate the assumptions and break the relation to the theoretical results. A discussion on how this affects the algorithm or intended datasets is largely missing. In terms of empirical evaluation, it is great to see experiments on real-world data. However, no baselines like a simple VAE are compared to. This makes it difficult to put the results into perspective. A synthetic dataset on which the identifiability of the causal variables and baselines can be precisely compared is crucial to gain insights into the proposed architecture. Further, considering other modalities like images would greatly improve the range of the empirical evaluation. Finally, it is unclear if the experiments have been performed for multiple seeds, and how the trained models vary over the seeds. Regarding the writing and presentation of the paper, it is overall well written. Still, it clearly targets researchers in the CRL community and is likely difficult to fully understand for researchers outside it. In Section 2, 3 and 4.1, it is not always clear which parts are novel to this paper and which build on previous results. Overall, assumptions like the almost-linear mixing function and single-node interventions on all variables should be clearly stated in the introduction.
Additionally, the term 'unobserved causal variables' is confusing with latent variables in common causal discovery tasks, e.g. latent confounders. In this setup, the information of the causal variables is still observed through a mixing function, just the separation and individual variables are not identified. This should be clarified when using this term, e.g. in the introduction (line 52). ### Typos - Line 306: 'to to' ### References [Lippe et al., 2023] Lippe, P., Magliacane, S., Löwe, S., Asano, Y. M., Cohen, T., & Gavves, E. (2023). Causal representation learning for instantaneous and temporal effects in interactive systems. In The Eleventh International Conference on Learning Representations. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: ### Review Summary The paper presents an identifiability result for causal representation learning from soft interventions. Its strengths include the detailed proofs for the presented theoretical results and its application to real-world datasets. However, the theoretical contribution is limited, considering its basis in prior works and extensions of them. Further, the empirical evaluation is insufficient, requiring more experiments to validate the claims and components of the architecture. Thus, I consider the current version of the paper below the acceptance threshold. ### Questions Main aspects: * How would you verify Assumption 2 and 3 in real-world settings? * Do you have theoretical results on whether hard interventions strengthen your identifiability class or not? * How does your practical algorithm relate to the theory given that the mixing functions have different classes? * Do you have ablation studies on all components of your architecture? Minor points: * Example 1, Line 190: the interventional distribution of $U\_2$ is presented as identical to its observational distribution.
I assume this is a typo where the interventional distribution should be $P(U\_2|U\_1)=\mathcal{N}(U\_1+1,1)$ instead? If not, can you explain how the interventional faithfulness is satisfied despite an intervention having no effect? * Line 222: why do you consider here $p$ interventional settings ($I\in \{I\_1,...,I\_p\}$) instead of $K$ ($I\in \{I\_1,...,I\_K\}$) as in Sec. 2 before? --- ### Post-rebuttal update The rebuttal addressed some of my concerns. Considering an improved presentation as discussed in the rebuttal, and the main contribution of this paper being theoretical, I raise my score to '5: Borderline Accept'. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations of the theoretical setting have been shortly discussed in the conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
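The linear-mixing identifiability question raised in this review — recovering $U$ from $X = U\Lambda + b$ — can be illustrated with a minimal numpy sketch (not from the paper; dimensions and values are arbitrary) showing why $U$ is at best identifiable up to an invertible linear map without further assumptions such as interventional data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground truth: X = U @ Lam + b with 3 latent causal variables, 5 observed dims.
U = rng.normal(size=(1000, 3))
Lam = rng.normal(size=(3, 5))
b = rng.normal(size=5)
X = U @ Lam + b

# Any invertible A yields an alternative pair (U', Lam') with identical
# observations, so U is only identified up to an invertible linear map.
A = rng.normal(size=(3, 3))           # a random square matrix is a.s. invertible
U_alt = U @ A
Lam_alt = np.linalg.inv(A) @ Lam
X_alt = U_alt @ Lam_alt + b

print(np.allclose(X, X_alt))  # True
```

Ruling out the maps $A$ that cancel interventional effects is exactly the role of the faithfulness-style Assumption 2 debated between the reviewer and the authors.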
Rebuttal 1: Rebuttal: Thank you for the detailed review! We appreciate that the reviewer found the application to real-world data encouraging. Due to character limits, we address the reviewer’s main points below. > **“Assumption 2 is done without much of simplification...” | “$G$ ... is the sparsest within its transitive closure” | "unfortunate that causal variables are only identified up to...” | "verify Assumption 2 and 3...”** There are many misunderstandings of the theoretical result that we would like to clarify as follows. Due to the points below, we strongly disagree with the assertion that the theoretical contribution is limited. Compared to other works on CRL, which have shown identifiability under the simpler cases of hard interventions, this paper considers the much more challenging case of soft interventions. Further, this paper provides a novel way of proving identifiability, which is clearly different from previous works. In addition, our detailed analysis shows where non-identifiability arises and how this would affect real-world applications. **Assumption 2.** This assumption is not stated without much simplification of the linear combination statement. We require that the interventional effect is not canceled out *only for the interventional target itself and its immediate children*, which is a major reduction from requiring such non-cancellation for all the nodes downstream of the interventional target. More importantly, we did not require changed marginals for all possible linear combinations of $U$, but only linear combinations of nodes that are not downstream of the interventional target. Since the unknown linear mixing function can be arbitrary, we need to assume changed marginals for all linear combinations of the reduced set. This is required to avoid the existence of a mixing function such that the effect is canceled out, which would lead to a valid but wrong causal representation.
These simplifications are necessary for us to show how it can be satisfied in Appendix B (Example 5 and Lemma 3). In Lemma 3, we reduce this assumption to a finite number of inequalities (instead of all possible linear combinations) that the causal model needs to satisfy, which is similar to previously assumed faithfulness (e.g. [8-10]). **Testability of faithfulness.** Similar to previous faithfulness, Assumption 2 is not a testable assumption, but rather a belief that any independence/unchanged relationships present in the data are due to the causal structure rather than coincidence. Although it seems like a hefty assumption, it really isn’t (see [4] for a discussion of faithfulness in prior work). Importantly, without some faithfulness or parsimony assumption, it is well-known that it is impossible to infer causal structure from data. **Graph sparsity.** We did not assume that the graph is sparsest within its transitive closure to guarantee its identifiability. See Theorem 2: no such assumption is made. Assumptions 1 and 2 suffice for identifying the graph up to transitive closure (Theorem 1). To also identify the direct edges, we need a type of “adjacency faithfulness” (Assumption 3), similar to traditional causal structure learning setups. We also show a special case of how Assumption 3 is satisfied if the ground-truth DAG is a tree graph. **Non-identifiability of variables.** The non-identifiability of causal variables themselves is an inherent result of this setup under soft interventions. This is not a limitation of our proof. Similar non-identifiability was also pointed out in [1,5]. Understanding it is crucial when we apply such methods in applications for interpreting the learned information correctly. Furthermore, even with this, we can still draw causal explanations and predict the effect of unseen combinations of interventions, as shown in Section 4.4. 
> **”theoretical results on hard interventions...”** In Appendix A.2, we discuss how the model is identifiable up to its finest level - CD equivalence class - with hard interventions. Since this is a result of prior work, we only present a short discussion. > **”tuning of the hyperparameters..." | "No ablation study...” | "synthetic dataset..." | "modalities like images..." | "multiple seeds...”** We provided the hyperparameters in detail in Appendix E (Table 1). We observe that the model is not sensitive to hyperparameters and not much tuning is necessary. The generation results reported are not performed with multiple seeds and should not vary significantly. However, the learned DAG structure varies across runs, likely due to its inherent combinatorial nature. For this, we run our algorithm with the identified number of modules multiple times and take the one with the least number of edges. We apologize for forgetting to add this sentence and will add it after line 382. In terms of empirical evaluation, we appreciate the reviewer’s suggestion. We performed additional ablation studies and a simple simulation study during the rebuttal. These results and details can be found in the general response and its attached PDF. For other modalities, we agree it would be interesting but this work is primarily a theoretical contribution. We see our algorithms and experiments as a way to show how our theory can be useful in applications. Further empirical evaluations should remain in future work. >**”practical algorithm relate to the theory... different classes?”** In Appendix D, we give consistency results if the algorithm uses parameterizations that are in line with Assumption 1. However, the assumptions are only required for providing theoretical guarantees. To apply the proposed algorithm in practice, it can easily adopt any neural network as the mixing function (e.g., MLPs that are easy to code up). 
However, such a method will not come with theoretical results such as consistency, unless proven in future works. --- All references are provided under the general response. We will make responses to minor points during the discussion period. --- Rebuttal Comment 1.1: Title: Response to minor points raised by the reviewer Comment: We thank the reviewer again for the detailed review. Below, we provide responses to the minor points for completeness: > **”In Section 2, 3 and 4.1 ... not always clear which parts are novel ... which builds on previous results.”** Section 2 only introduced the model setup with citations to where the definition comes from. Papers in CRL that study similar setups are discussed in the previous Section 1.1 on related work. In Section 3, we laid out the definition of equivalence classes (for our main results), which many previous papers implicitly used but may not explicitly define. In Section 4.1, we stated its title as “Preliminaries”, so this is mainly built on previous work. In addition, when introducing the Assumptions and Lemmas, we stated the references in the paragraph above. To make this more clear, we will add the citation to the assumption themselves. > **"almost-linear mixing function and single-node interventions... should be clearly stated..." | "'unobserved causal variables confusing...”** We thank the reviewer for this suggestion. We will revise the introduction by adding the following sentence to the last paragraph where we introduced the content of this paper: “We provide theoretical guarantees when the latent variables are observed through a class of (potentially non-linear) polynomial mixing function proposed by [1].” We will also clarify the term 'unobserved causal variables' in line 52. > **”Example 1, Line 190: the interventional distribution of $U_2$ is presented as identical to its observational distribution. 
I assume this is a typo where the interventional distribution should be $P(U_2|U_1)=\mathcal{N}(U_1+1,1)$ instead?”** Yes, it should be $P(U_2|U_1)=\mathcal{N}(U_1+1,1)$ in line 190. Thank you for pointing this out; we will revise that accordingly. > **”Line 222: why do you consider here $p$ interventional settings ($I\in \{I_1,\ldots,I_p\}$) instead of $K$ ($I\in \{I_1,\ldots,I_K\}$) as in Sec. 2 before?”** This is for simplicity of illustrating the proof sketch of Theorem 1, as we stated in line 216. The formal proof in Appendix B.2 considers the general setting where there are $K\geq p$ interventions. --- Rebuttal Comment 1.2: Title: Response to Rebuttal Comment: Thank you for your response and clarifications. The rebuttal clarified the theoretical contribution of the paper better, with the suggestion to also take these clarifications into account for the final paper version. The empirical part of the paper remains a weakness, e.g. given the several components it requires. Still, since I acknowledge the contribution to be mainly theoretical, I raise my score by one. --- Reply to Comment 1.2.1: Title: Response Comment: Thank you for the discussion and updates on the review! We were glad to provide the clarifications on the theoretical contributions. We would also be happy to answer any additional questions if there are any. We want to make another comment on why several components are necessary for the proposed model. In this empirical framework, several components are necessary because of (1) the setup with multiple interventional datasets (discrepancy term) and unknown interventions (intervention encoder), and (2) the need to model the latent graph (DSCM).
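The corrected Example 1 discussed in this thread can be simulated directly. This is an illustrative sketch, not the paper's code; the observational mechanism $P(U_2|U_1)=\mathcal{N}(U_1,1)$ is an assumption made here for contrast, since only the interventional form is quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Assumed observational mechanism: U2 | U1 ~ N(U1, 1)
u1 = rng.normal(0.0, 1.0, n)
u2_obs = u1 + rng.normal(0.0, 1.0, n)

# Soft shift intervention on U2: U2 | U1 ~ N(U1 + 1, 1)
u2_int = u1 + 1.0 + rng.normal(0.0, 1.0, n)

# The shift moves the marginal mean of U2 by the shift amount, so the
# intervention is detectable from marginals, as interventional
# faithfulness requires.
print(u2_obs.mean(), u2_int.mean())  # roughly 0 and roughly 1
```

Had the interventional distribution really equaled the observational one (the typo the reviewer flagged), the two marginals would coincide and the intervention would be undetectable.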
Summary: This paper proves identifiability of causal disentanglement in latent space in the presence of interventional data. In more detail, consider data X generated as X = f(U) where the distribution of U encodes a causal graph G (or a Bayesian network); identifiability is the question of whether f and U can be recovered uniquely. This is known not to be true in general, but recent works have shown that it is possible under additional information or biases. This work assumes access to extra data of the form X' = f(U'), where U' is a single-node soft intervention on U, and under various other assumptions shows that identifiability holds. Specifically, if we assume 1. f is a full-rank polynomial 2. the causal variables are linear interventional faithful (which means interventional effects don't cancel each other out), then this work shows that the intervention targets and the transitive closure of the causal graph can be recovered up to an equivalence class. Additionally, with 3. an adjacency faithfulness condition, it is shown that the underlying causal graph can also be recovered up to an equivalence class. Identifiability is a statistically desirable notion that a unique latent variable model could have generated the data. A flurry of recent works have studied identifiability of latent causal graphs under interventions, and this work continues this line of work for soft interventions under polynomial mixing. For the proof, the work by Ahuja et al. 2022 is first applied to directly recover the polynomial f up to a linear transformation. What follows next is the main contribution of this work, which is a sort of peeling procedure. Basically, interventions that target the source nodes are recovered, after which we can recursively extract the transitive closure of the graph. The authors also propose a variational autoencoder framework to estimate the latent causal representations from interventional datasets in practice.
The standard Evidence Lower Bound is modified to include a discrepancy term that roughly measures how well the interventional samples have been modeled. The model is then trained via the standard reparameterization trick. Experiments on a biological dataset, with around 100 latent variables, are performed. The model is shown to predict certain expected causal relationships as well as good R^2 values for unseen double-node interventions. The target audience are people interested in identifiable representation learning. The theory seems good, but the assumptions are a bit strong and the experiments should have been more comprehensive. It's possible that similar experimental techniques will adapt to other datasets; however, it may require more work, since the assumptions may be limiting (see weaknesses below). #### References: - [1] A. Seigal, C. Squires, and C. Uhler. Linear causal disentanglement via interventions. arXiv preprint arXiv:2211.16467, 2022. - [2] K. Ahuja, Y. Wang, D. Mahajan, and Y. Bengio. Interventional causal representation learning. arXiv preprint arXiv:2209.11924, 2022. - [3] Varici, B., Acarturk, E., Shanmugam, K., Kumar, A., and Tajer, A. Score-based causal representation learning with interventions. arXiv preprint arXiv:2301.08230, 2023. Strengths: - As opposed to the do-interventions that Ahuja et al. 2022 consider, the soft interventions addressed in this work are more realistic. Similarly, unpaired datasets and unknown intervention targets are also more natural. - The illustrative examples shown in section 4.2 are useful to understand non-identifiability situations. - The intervention targets are learned on-the-fly, as part of the autoencoding variational bayes formulation. This allows the work to generate virtual counterfactual samples. This is shown also in the experimental section. Weaknesses: - It's not mentioned anywhere in the introduction that the function f is a polynomial.
Since this is a significant assumption both in theory and especially in practice (where we use neural networks), I find this very misleading to the readers. - Assumption 2 as it reads seems a bit strong. The authors show that certain quadratic SEMs satisfy this assumption in Appendix B, but does the simpler case of linear SEMs satisfy it? If not, it's a bit strange that the theorem does not cover the case of linear SEMs with linear f, which is covered in [1]. - Image and genomics datasets are used as motivation, but it's not clear why the crucial assumptions, such as interventional faithfulness or polynomial f, should be true in those settings. - Footnote 4 says that interventions are chosen to be shifts; why do the authors make this choice, when the theory seems to hold in general? This seems too restrictive and may limit potential applications. - Additional experiments on synthetic datasets would have been more illustrative of the technique's performance, as well as allowing access to ablation studies, which are crucial for such a loss formulation. In other words, it's not clear if the good performance on the biological dataset is because of the model (as we would like) or instead just an artefact of the specific dataset and training technique used. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Some questions were raised above. - I think it's fair to also cite Varici, B., Acarturk, E., Shanmugam, K., Kumar, A., and Tajer, A. Score-based causal representation learning with interventions. arXiv preprint arXiv:2301.08230, 2023. - A minor remark: the problem studied in this work has also been called "causal representation learning" in the literature, while "causal disentanglement" is used for the special case when the latent variables are jointly independent. Confidence: 3: You are fairly confident in your assessment.
It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the thorough review! We appreciate your recognition of our theoretical result, and the appreciation of our illustrative examples. We’ve run additional experiments as suggested, and we’d like to address your detailed comments below: > **”It's not mentioned anywhere in the introduction that the function f is a polynomial...”** We thank the reviewer for pointing this out. We will revise the introduction by adding the following sentence to the last paragraph, where we introduce the content of this paper: “We provide theoretical guarantees when the latent variables are observed through a class of (potentially non-linear) polynomial mixing functions proposed by [2].” In our original writeup, we introduced it in the main section on identifiability results (Section 4), as this assumption is only needed for providing our theoretical guarantees. To apply the proposed algorithm in practice, one can use a neural network as the mixing function. However, this won’t come with nice theoretical results such as consistency. Contemporary work [3] considered the setting where the mixing function f is nonparametric. However, that work assumes that the underlying structural causal model (SCM) is linear additive Gaussian, whereas we work with general nonparametric SCMs. It remains an open problem to study the most general setting where both the mixing function and the SCM are nonparametric. > **”Assumption 2 as it reads seems a bit strong... do the more simpler case of linear SEMs satisfy them?”** The simpler linear SEMs with shift interventions will not satisfy Assumption 2. In fact, the transitive closure (and thus the latent causal graph) is not identifiable in this setting. Counter-example 1 in section 4.2 is a linear SEM with $U_2=U_1+\epsilon_2, U_1=\epsilon_1$, where the interventions are shift interventions.
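As a numeric illustration of this counter-example (our own sketch, not taken from the paper): under shift interventions, the linear SEM $U_1=\epsilon_1$, $U_2=U_1+\epsilon_2$ is indistinguishable from a model with an *empty* latent graph obtained via the linear re-mixing $V_1=U_1$, $V_2=U_2-U_1$, since shifting $U_1$ or $U_2$ corresponds to a single-node shift on $V_1$ or $V_2$ with independent latents in every regime:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

def sample(shift1=0.0, shift2=0.0):
    # Linear SEM with shift interventions: U1 = eps1 (+shift1), U2 = U1 + eps2 (+shift2)
    e1 = rng.normal(size=n)
    e2 = rng.normal(size=n)
    u1 = e1 + shift1
    u2 = u1 + e2 + shift2
    return np.stack([u1, u2], axis=1)

# Alternative latents V = B @ U with V1 = U1, V2 = U2 - U1 = eps2:
# V has independent components, i.e., an *empty* causal graph.
B = np.array([[1.0, 0.0], [-1.0, 1.0]])

# In every regime (observational, shift on U1, shift on U2), V is a pair of
# independent Gaussians where only a single coordinate's mean moves -- so the
# data is also explained by single-node shifts on an empty graph.
for s1, s2 in [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]:
    V = sample(s1, s2) @ B.T
    assert np.allclose(V.mean(0), [s1, s2], atol=0.03)
    assert np.allclose(np.cov(V.T), np.eye(2), atol=0.03)
```

Both models are therefore observationally and interventionally equivalent under shifts, which is why the transitive closure cannot be recovered in this linear case.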
Intuitively, linear SEMs pose a harder challenge for identifiability because it is hard to distinguish the linear causal mechanism from the arbitrary linear mixing. In the case of nonlinear SEMs, some nonlinear causal mechanisms could be preserved after the linear mixing and used to identify the causal relationships. Note that this is only the case for soft interventions. When considering hard interventions (as is the main focus of [1]), identifiability can be achieved with weaker assumptions. The non-identifiability of linear SEMs with soft interventions was also recognized in [1] (Appendix J). The authors obtained identifiability of the transitive closure for a special class of soft interventions (Assumption 1(b) in their paper). This was achieved by fully utilizing the linearity of the underlying SEM, which would not be available in the general case. > **”Image and genomics datasets are used as motivation, but it's not clear why the crucial assumptions...”** We thank the reviewer for this comment. By assuming interventional faithfulness, we rule out the cases where the effect of an intervention is canceled out. Although it seems like a strong assumption, we feel it isn’t (see [4] for a discussion of the faithfulness assumption in previous literature). This is because it essentially assumes any variable that remains unchanged under an intervention is not due to coincidence but rather the causal structure. Without it, it is hard to infer causal structure from data, as one can only rule out causal structures that imply independence/unchanged relationships that are not present in the data. In terms of our polynomial assumption, see our response to the first comment of why it is required and ways to relax it. It is also worth noting that polynomial functions are universal approximators [2]. We would also like to clarify that we motivate the scenario we study using image and genomic examples. 
However, to provide theoretical guarantees, certain assumptions need to be made. On real data, with limited samples, these can only hold approximately. We note this as an important future step in the discussion, which we hope to explore further in future work. > **”Footnote 4 says that interventions are chosen to be shifts, why do the authors make this choice...”** The theory indeed holds for general nonparametric interventions. In footnote 4, shifts are utilized as a convenient illustration of our algorithm, where the notation is simple. When applying the algorithm, the intervention model can be changed to fit the corresponding application. We appreciate this comment and will clarify the footnote to emphasize this flexibility. > **”Additional experiments on synthetic datasets... allowing access to ablation studies which are crucial for such a loss formulation...”** We thank the reviewer for this suggestion. Similar to [1,5], we view this work as primarily a theoretical contribution, where the proposed algorithms and experiments on biological data serve as a proof-of-concept of how the theory can be helpful in real-world applications. However, we agree that more comprehensive experiments are helpful. Therefore, we performed additional ablation studies and a simple simulation study during the rebuttal phase. These results can be found in the PDF attached to the general response. The details of these experiments can also be found in the general response. > **”...fair to also cite Varici...”** Thank you for pointing this out! We will add these contemporaneous works (including [3,5,6]) to Section 7.2. > **”... the problem studied in this work has also been called "causal representation learning" in the literature...”** Thank you for this remark. We adopted the term “causal disentanglement” mainly following the literature review [7] and some previous works [1].
This has the advantage of being more specific than “causal representation learning”, which also includes methods, such as Invariant Risk Minimization (IRM), that do not completely learn all latent variables. We will clarify this in the related works section. --- All references are provided under the general response. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: I thank the authors for the response and clarifications. The additional ablation studies are quite important and the authors should add them to the paper, along with improving the writing as per the reviewers' suggestions. I'm willing to increase my score. --- Reply to Comment 1.1.1: Title: Thank you so much for the discussion and update on the reviews Comment: Thank you so much for the discussion and update on the reviews! We will make sure to include all the additional results and feedback from the reviews in the revision. Additionally, please feel free to reach out if there are any further questions.
Summary: This paper focuses on latent causal representation learning, with a specific focus on showcasing the identifiability of the causal structure among latent variables. The authors propose an approach based on assuming a full-row rank polynomial generator, soft interventions on each latent variable, a generalized version of faithfulness, and sparsity, so that the causal structure can be identified up to some equivalence. Strengths: Overall, the presentation is clear, and the study introduces a relatively novel contribution. While previous works in causal representation learning have primarily assumed marginally independent causal variables and hard interventions, this paper explores new dimensions by incorporating soft interventions. Weaknesses: However, there is one primary concern regarding the interpretation of soft interventions. In line 111, the paper mentions, "We focus on the scenario where we have at least one intervention per latent node." On the other hand, Assumption 2 implies, "Intervention I with target i," which appears to indicate that only one variable is being intervened upon each time. To avoid any confusion, I kindly request the authors to clarify this point in the discussion. Technical Quality: 3 good Clarity: 3 good Questions for Authors: In line 111, the paper mentions, "We focus on the scenario where we have at least one intervention per latent node." On the other hand, Assumption 2 implies, "Intervention I with target i," which appears to indicate that only one variable is being intervened upon each time. To avoid any confusion, I kindly request the authors to clarify this point in the discussion. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating our results exploring soft interventions, and for your recognition of our contribution. We would like to clarify the following points in response to your comments: > **”In line 111, the paper mentions, "We focus on the scenario where we have at least one intervention per latent node." On the other hand, Assumption 2 implies, "Intervention I with target i," which appears to indicate that only one variable is being intervened upon each time.”** We will clarify the point about single-node interventions, and having one single-node intervention per target. This is best clarified with an example: if we have $d = 3$ latent nodes, then the intervention set $\\{ \\{ 1 \\}, \\{ 2 \\}, \\{ 3 \\} \\}$ is an intervention set consisting of single-node interventions, for which there is at least one intervention per latent node. Our assumption about single-node interventions is stated on line 91, and the necessity of at least one intervention per latent node is stated on line 111. There could be multiple single-node interventions targeting one node, as we discussed after line 111, e.g. the intervention set $\\{ \\{ 1 \\}, \\{ 2 \\}, \\{ 3 \\}, \\{ 3 \\} \\}$ is also sufficient for identifiability. For further context, note that single-node interventions have been the main point of study in previous works (e.g., [1,2]) on causal disentanglement / causal representation learning, which considered the simpler cases of do- and perfect interventions. An extension to multi-node interventions is an important and challenging direction for future work, as we describe on line 398. --- All references are provided under the general response. --- Rebuttal Comment 1.1: Comment: Thank you for the explanation. Single-node interventions seem to be extremely hard to achieve. I would be excited to see multi-node interventions. Initially, I thought the current work can handle multi-node interventions as well. 
--- Reply to Comment 1.1.1: Title: Clarification of multi-node intervention results and contributions (in contrast to previous/contemporary works in this direction) Comment: Thank you for the discussion! We do want to stress that some of our results hold for multi-node interventions — these are stated in Section 4.4 and Section 6. In the following, we further clarify this work’s results regarding multi- and single-node interventions (in contrast with previous/contemporary works). As the reviewer thought, the current work can handle multi-node interventions; we apologize for the confusion created in our first response. We hope this clarification elevates the reviewer’s opinion of our work. **Results on multi-node interventions.** In Theorem 3 of Section 4.4, we show that it is possible to extrapolate and generate samples from multi-node interventions. This theorem comprises our main theoretical results on multi-node interventions, stating that we can predict the effect of a multi-node intervention from its single-node components. This result is very helpful when working with real-world applications (which we describe in the next paragraph). To the best of our knowledge, no previous/contemporary works have shown similar results. For the empirical results on multi-node interventions, we demonstrate the extrapolation to predict multi-node interventional effects in our experiment section. In the biological application we considered, one important problem of interest is to predict the effect of combinatorial perturbations. In this context, one has access to perturbations of several single genes (which can be modeled as single-node interventions), and the goal is to predict the effect of perturbing the combination of these genes (i.e., multi-node interventions). Our result in Theorem 3 and the proposed discrepancy-based VAE framework provide a solution.
**Identifiability proof.** In our first response, we only described the setting where we can show identifiability of the latent causal model (Theorems 1 and 2). This result holds when we have at least one single-node intervention per latent node. The contribution of this work in terms of such identifiability proofs lies in considering _general soft_ interventions in _general_ structural causal models. This stands in contrast to previous/contemporary works [1,2,3], which primarily consider _hard_ interventions. It also generalizes previous/contemporary works [1,4], which consider _linear Gaussian_ structural causal models. In comparison, the setup we consider is the most general in terms of both intervention and structural equation model types. Two other contemporary works that consider this general setting are [5,6], where [5] considers having exactly one single-node intervention per latent node (which is an easier setting, as we discussed in line 112), and [6] deals with the special case of p = 2 latent variables. All these works consider single-node interventions, as we do in this paper. Since this paper works with the much more challenging case of soft interventions in general causal models, it provides a novel way of proving identifiability, which is very different from prior works. This comes with a new set of techniques that we think could be instrumental for future work. --- [1] Squires, C., Seigal, A., Bhate, S. S., and Uhler, C. (2023). Linear causal disentanglement via intervention. [2] Ahuja, K., Wang, Y., Mahajan, D., and Bengio, Y. (2022b). Interventional causal representation learning. [3] Jiang, Y. and Aragam, B. (2023). Learning nonparametric latent causal graphs with unknown intervention. [4] Buchholz, S., Rajendran, G., Rosenfeld, E., Aragam, B., Schölkopf, B., and Ravikumar, P. (2023). Learning linear causal representations from interventions under general nonlinear mixing.
[5] Varici, B., Acarturk, E., Shanmugam, K., Kumar, A., and Tajer, A. (2023). Score-based causal representation learning with interventions. [6] von Kügelgen, J., Besserve, M., Liang, W., Gresele, L., Kekić, A., Bareinboim, E., Blei, D. M., and Schölkopf, B. (2023). Nonparametric identifiability of causal representations from unknown intervention.
Rebuttal 1: Rebuttal: We thank all the reviewers for their insightful comments and suggestions! --- In this general response, we attached a pdf of the additional figures and tables that we will add to the manuscript. To summarize the PDF, it includes: - A modified Figure 4, which now has a larger font, with the procedure to generate virtual counterfactual samples highlighted. - Additional experiments (see below for details): - Table 1, which shows ablation studies of different components of the proposed architecture on biological data - Table 2 and Figure 1, which show a simple simulation study. In addition, we will also provide a discussion of contemporaneous work in the discussion (including [3,5,6]). --- For the ablation studies of different components, we compared the performance of our final model against three alternative versions. All models are trained with the same settings (data split, schedule, learning rate, etc.). This can be found in Table 1 of the PDF under the general response. In particular, we compared against - **Models without the discrepancy loss.** These learn the distributions similarly to the conditional VAE [13], where both an interventional sample and its interventional label are fed in to learn the exogenous $Z$. Then, inside the latent space, we use the same causal layer as our model to generate a virtual sample. During inference, we can generate interventional samples via two approaches. One is sampling the exogenous $Z$ from $\mathcal{N}(0,I)$ and decoding. The other is sampling an observational sample, obtaining its exogenous $Z$ using the encoder, and then decoding. These two approaches correspond to the second and third rows, respectively. - **A model without the causal layer.** This model uses a similar workflow to our final model (illustrated in Figure 4 of our paper), where we do not use a causal-based decoder but a simple MLP decoder. This corresponds to the fourth row of the table.
We also note that the encoder, decoder, DSCM, and intervention encoder are needed to learn distributions and the latent causal graph from this setting where observational and interventional data are present. To make the architecture clearer for future use, we improved the presentation of Figure 4. The metrics can be found in the PDF, where we report both MMD and $R^2$ (a widely adopted metric in biological applications). However, MMD is more meaningful as we are assessing the quality of generating a distribution. We observe that models without discrepancy perform much worse due to mode collapses, whereas the sampling approach using observational data (see the above bullet points for details) performs slightly better. Our final model works the best in general; however on the MMD for double-node interventions, the version without a causal layer seems to work slightly better. This is potentially because some double-node interventions that act non-additively can be captured better without imposing the structure. For the simulation study, as a proof-of-concept, we tested on a simple 5-node graph, where we generate $2048$ samples in each of the 5 interventional datasets. We map this to a $10$-dimensional observation space, where we pad zeros to the additional dimensions. This ensures clear visualization of the generated samples in Figure 1, where we compare the zero-shot learned double-node interventional samples against ground truth. In Table 2, we report the quantitative metrics. In addition to the MMD on left-out single and double-node interventions, we also report the training MMD and Structural Hamming Distance (SHD) of the learned graph. Due to the combinatorial nature of learning a DAG and the small sample sizes in this setting, we observe that the learned intervention targets can be quite sensitive to initializations. 
Therefore during evaluation, we report the metrics while fixing the intervention targets to be of different transposition distances to the true targets. For single-node generations, different transposition distances return similar results, meaning that the model is expressive enough to learn these distributions, although we observe the result with zero transposition distance is slightly better. This is also true during training, which can potentially be used as model selection to overcome the initialization issue. For double-node extrapolation, the result with zero transposition distance shows a larger benefit, as expected from our theory. --- We provide the full list of references here: [1] Squires, Chandler, et al. "Linear Causal Disentanglement via Interventions." [2] Ahuja, Kartik, et al. "Interventional causal representation learning." [3] Buchholz, Simon, et al. "Learning Linear Causal Representations from Interventions under General Nonlinear Mixing." [4] Sobel, Michael E. "An introduction to causal inference." [5] Varici, Burak, et al. "Score-based causal representation learning with interventions." [6] Lippe, Phillip, et al. "Causal representation learning for instantaneous and temporal effects in interactive systems." [7] Kaddour, Jean, et al. "Causal machine learning: A survey and open problems." [8] Tian, Jin, and Judea Pearl. "Causal discovery from changes." [9] Yang, Karren, et al."Characterizing and learning equivalence classes of causal DAGs under interventions." [10] Jaber, Amin, et al. "Causal discovery from soft interventions with unknown targets: Characterization and learning." [11] Ke, Nan Rosemary, et al. "Systematic evaluation of causal discovery in visual model based reinforcement learning." [12] Ramesh, Aditya, et al. "Hierarchical text-conditional image generation with clip latents." [13] Sohn, Kihyuk, et al. "Learning structured output representation using deep conditional generative models." 
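For context on the evaluation metric discussed above: MMD compares the distribution of generated interventional samples against ground-truth samples. A minimal sketch with an RBF kernel (our illustrative version, not necessarily the exact estimator used in the experiments) could look like:

```python
import numpy as np

def mmd_rbf(X, Y, sigma=1.0):
    # Squared MMD (biased V-statistic, kept simple for illustration) between
    # sample sets X (n, d) and Y (m, d) under an RBF kernel with bandwidth sigma.
    def gram(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()

rng = np.random.default_rng(0)
obs = rng.normal(size=(500, 2))               # stand-in for ground-truth samples
gen_good = rng.normal(size=(500, 2))          # generator matching the target
gen_bad = rng.normal(loc=2.0, size=(500, 2))  # generator with a shifted mode

# A matching generator yields a much smaller MMD than a mismatched one.
assert mmd_rbf(obs, gen_good) < mmd_rbf(obs, gen_bad)
```

A near-zero MMD indicates distributional match, which is why it is more informative than pointwise metrics such as $R^2$ when assessing generated interventional distributions.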
Pdf: /pdf/e50786f553cd9c4952b1b6078a14cde06c776b28.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Active Observing in Continuous-time Control
Accept (poster)
Summary: This paper tackles the issue of determining the optimal timing for observations in continuous-time control with costly observations. The authors formulate the problem and develop theoretical findings to provide insights into why observing at irregular intervals is advantageous in certain environments. They then introduce a novel method called Active Observing Control, which involves taking observations at irregular intervals and performing continuous-time control. The adaptive time interval is achieved using a heuristic that compares the reward variance of the rollout with a threshold, leveraging a model learned from an offline dataset. The effectiveness of this method is validated through experimental results, demonstrating that it outperforms alternative approaches that either utilize a discrete-time model for planning or employ a continuous-time model with regular observation intervals. Ablation studies illustrate how the proposed method can mitigate the discretization error associated with the discrete-time formulation and show that the performance is not sensitive to the choice of the threshold. Strengths: - The proposed approach is novel. - It provides insights from both theoretical and experimental results. - It includes comprehensive implementation details and ablation results. Weaknesses: 1. The proposed method appears to be heavily reliant on the accuracy of the learned model. This could limit the generalizability and robustness of the approach. Specifically, the time interval is directly impacted by the uncertainty of the reward. Knowing the true reward function simplifies the problem to estimating the uncertainty of the dynamics. It is not clear whether the approach would work well in the general case, where the reward function is not known and needs to be learned. 2. The following related works are not included: [1] Huang, Y. and Zhu, Q., 2020. Infinite-horizon linear-quadratic-gaussian control with costly measurements.
arXiv preprint arXiv:2012.14925. [2] Zhang, Z., Kirschner, J., Zhang, J., Zanini, F., Ayoub, A., Dehghan, M. and Schuurmans, D., 2022. Managing Temporal Resolution in Continuous Value Estimation: A Fundamental Trade-off. arXiv preprint arXiv:2212.08949. In particular, [1] adds the cost of observations to the original cost function based on the state and action, similar to this work, but in a discrete-time LQG setting. [2] studies the continuous-time stochastic LQR setting, where they formulate the costly measurement differently through a data budget. 3. The paper claims that the formulation is novel. However, it appears that Reference [1] above proposes a similar objective function, albeit within the Linear Quadratic Gaussian (LQG) setting. It would be beneficial if the authors could provide clarifications on the differences between their proposed formulation and that found in Reference [1]. 4. I may be missing something, but the method might not be truly continuous-time. The time resolution seems to be lower-bounded by \delta_t. 5. While an ablation study regarding the cost parameter 'c' in the Cancer environment is included in Appendix K.8, the process of selecting an appropriate 'c' in a real-world context remains unclear. A practical guideline or heuristic for choosing 'c' would greatly increase the method's applicability and ease of use. 6. The uncertainty threshold \tau is determined from a single episode and subsequently fixed. It seems that further adjusting \tau as we collect more episodes could potentially enhance the method’s performance. Can the authors comment on this? 7. The submission of code via an external link is not ideal, particularly because the linked material appears to have been last updated on June 11, which is after the supplementary materials deadline. 8. Typos: - Line 60: “our initial method capable of irregularly observing”, missing “is” - Appendix 939: “Random that samples the state” should this be “action” instead?
- There seems to be an inconsistency in the numbers: Line 762: G=1k but G=10k in the main paper Technical Quality: 3 good Clarity: 3 good Questions for Authors: - See weakness section - How to choose the parameter \delta_a and \delta_t? Grid search? - In adaptive quadrature, the time interval for estimating the integral of a function is adjusted based on a comparison of a threshold with the relative estimation error of two different time intervals. Although different to the proposed method, it appears to bear some similarities. Have the authors explored the possibility of applying algorithms from adaptive quadrature to determine the time interval in their proposed method? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: It is discussed in the abstract and conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions! ## (A) Dependence on the accuracy of the learned model We agree; allow us to clarify that it is standard in all offline model-based RL methods to learn an accurate dynamics model [Levine et al. 2020, Lutter et al., 2021]. We verified empirically that AOC can perform well even when the dynamics model is *less accurate* by performing an *additional ablation experiment* (R1); see the global response. We include this in the supplementary rebuttal pdf. Table 21 shows that AOC with a less accurate dynamics model outperforms Continuous Planning and Discrete Monitoring. ## (B) Extending to work with a learned reward model You are correct; it is useful to also work in the more general case of learning a reward model. We performed an additional experiment, where we learned a separate reward model from the reward values in the offline dataset—**(R2)**; see the global response. We include this in the supplementary rebuttal pdf. Table 22 shows that AOC with a learned reward model still outperforms Continuous Planning and Discrete Monitoring. ## (C) Include related work of Huang et al. 2020, Zhang et al. 2022 We agree they are relevant and now include them in the paper. In the following (D), we discuss Huang et al. 2020. Zhang et al. 2022 is a discrete planning method that works for *linear* (LQR) systems and optimizes the observing frequency for a given total budget of observations. ## (D) Formulation clarification with respect to the related work of Huang et al. 2020 We find the formulation differences between our work and the neat work of Huang et al. 2020, which addresses an infinite-horizon discrete-time Linear-Quadratic-Gaussian control problem with observation costs, to be: * Huang et al. 2020 only applies to *linear systems* that are *discrete in time* and have an *infinite time horizon*; our formulation applies to *non-linear systems* that are *continuous in time* and have a *fixed time horizon*. * Huang et al.
2020 assume their full system dynamics model is *known*; ours makes no such assumption and only assumes access to an offline collected dataset of (possibly irregular in time) state-action trajectories to learn a dynamics model, which is more applicable to real-world environments. We now include a version of this discussion in the related work Section 3. ## (E) Clarification of the $\delta_t$ parameter for a continuous-time method We agree that $\delta_t$ can be better explained. It is the tolerance of the continuous search (root-finding algorithm) used when binary-searching for the continuous time duration for which the computed action trajectory can be followed, i.e., until the standard deviation of the reward crosses the threshold $\tau$. We note that all numerical root-finding algorithms generally involve a search tolerance or a similar stopping criterion. The tolerance $\delta_t$ ensures that the binary search stops evaluating once a solution time is found that is "close enough" to the true value. Another way to view the search tolerance $\delta_t$ is as the numerical precision to which the search value is correct. We have included a form of this discussion when introducing $\delta_t$. ## (F) Practical guidance for selecting $c$ in a real-world context We agree that such guidance greatly increases the applicability of the method. The observation cost $c$ should be decided based on the real-world application at hand and include any human preferences in that application, if applicable. In real-world systems (with resource constraints), this cost might correspond to actual monetary cost, computational expense, or energy consumption. Further, if this cost involves a human, for example, a patient receiving medical treatment, they may have preferences about the treatment, its health impact (e.g., chemotherapy side effects), and/or the number, frequency, and timing of treatments that could be included in the cost $c$. 
Also, there can often be a trade-off between control performance and the number of observations taken, and one way to trade this off is to tune $c$. We have expanded this discussion into a detailed **new Appendix L**, titled "Practical guidelines to select $c$". ## (G) Updating $\tau$ when more data is collected We agree. Currently, we assume an initial offline dataset of state-action trajectories for training the dynamics model. However, we can easily extend this to the case where more state-action trajectories are collected: we retrain the dynamics model and determine $\tau$ again. This would be most helpful when the initial offline dataset is small and has limited state-action space coverage. We now include this discussion in Section 4. ## (H) Code submission We kindly highlight that code submission is *not* compulsory and that the NeurIPS policy is for code provision after acceptance, not before. We only updated the readme. ## (I) Typos Thank you; we have corrected all the typos. Yes, line 762 should read $G=10{,}000$. ## (K) How to choose $\delta_a$ and $\delta_t$ This depends on the real-world environment; $\delta_a$ is often determined for us by the offline dataset of irregular-time state-action samples, where it is the mean time between samples in the dataset. However, if it had to be chosen, it would be a trade-off, where both extremes of the range (see Appendix K.4 for decreasing $\delta_a$) lead to poor performance. For the search tolerance $\delta_t$, a sufficiently small value achieves good control performance. Bayesian optimization or grid search would be suitable. ## (L) Applying adaptive quadrature algorithms We agree; however, this work focuses on a simple initial method to verify the theoretical contribution empirically. It is precisely such purpose-designed methods that we hope to inspire as future work. ## Additional References * Levine, Sergey, et al. 
"Offline reinforcement learning" arXiv preprint arXiv:2005.01643 (2020). --- Rebuttal Comment 1.1: Comment: I thank the authors for the comprehensive response, particularly the additional experiments. My concerns have been addressed and I have increased the score accordingly. Regarding the code (readme as explained by the authors), while I don’t think that being non-compulsory grants permission to modify post submission deadline, that is a minor point and did not influence my scoring. --- Reply to Comment 1.1.1: Title: Gratitude for Revised Review and Score Increase Comment: Thank you very much for your thoughtful consideration and the time you have dedicated to reviewing our manuscript. We truly appreciate your recognition of our efforts to address your insightful points, particularly regarding the additional experiments. Your feedback was instrumental in enhancing our work, and we are grateful for your increased score. Thank you once again!
Summary: The paper addresses the problem of continuous control with costly observations. The authors provide a formal definition of the observation problem and the control problem. They continue by proposing a scheme for how to do the two simultaneously at irregular intervals. Finally, the method is evaluated against benchmark methods on 4 problems and the results are analyzed. The paper is well structured and clear. Strengths: The paper deals with an important problem, provides a clean formal definition of it, and also proposes a solution. The experiments not only show that the method has merit but also help to discuss and analyze the solution and the importance of the problem. Weaknesses: The paper is not clearly situated within previous work on similar topics. The area of event-based sampling in the control community has dealt with closely related problems and analyzed how to control them, and has even analyzed the stability of such problems under irregular sampling. Also, the method in the paper assumes access to the full state (noisy state), which can be unrealistic in real systems (for the purpose of developing theory it is ok, but it should be discussed). The method assumes the system can be identified and then a controller controls the system for the given interval based on the identified dynamics. This sounds very close to classical control principles such as certainty equivalence and the separation principle. Those are applied to nonlinear systems as well (while some of the proofs are for linear systems), and since there is no proof of stability or optimality in this paper, the method should be discussed in light of that. It is unclear what the limitations of the method are with regard to the system dynamics: should the system be initially stable? Should it have a specific structure? While the experiments are compared to simple control methods, there are other approaches out there that address similar problems and can be adjusted to this setting. 
As the paper does not compare to other methods (other than naïve ones), it is hard to assess the true value of this work. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What are the limitations on the dynamics of the system? Can any $f(t)$ be used, and would the method still work? 2. The model under non-uniform samples reduces to a semi-MDP; is there something from that line of research that can solve the problem? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Not applicable Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions! ## (A) Add similar related work on event-based sampling We agree that it is helpful for the reader to expand the related work section to include a discussion of the similar topic of event-based sampling. We now extend our related work section to include the following. There exists a similar area of related work, that of *event-based sampling* in the control community [Åström et al. 1999, Åström et al. 2008], which addresses the similar problem of controlling a system by only taking a full state observation when an external input *event* occurs, and then providing a control action. To create this *event*, it **assumes part of the state is continuously observed**, and often an *event* is defined when this part of the state (or a function of it) crosses a fixed threshold (e.g., a physical sensor crossing a threshold) [Vasyutynskyy et al. 2010]. This finds multiple uses in problems such as electronic analog control systems for audio and mobile telephone systems [Åström et al. 1999], and battery capacity optimization in wireless devices [Vasyutynskyy et al. 2010]. However, this differs from our proposed problem of continuous-time control whilst deciding when to take costly observations: 1. Event-based sampling *assumes part of the state is continuously observed*. Often in our environments, it is not feasible to observe part of the state continuously (e.g., imaging a cancer tumor volume or performing a medical test), or it is prohibitively expensive to continually observe part of the state at high frequency (similar to Continuous Planning approaches). 2. In event-based sampling, the *event* input (the time to take the next observation) is given as a control input to the agent; in contrast, we tackle precisely the more difficult problem of coming up with an observing policy that decides *when* to take the next observation. 3. 
The *event* is often defined by a human in advance and is a function of the current partial state. General environments may not have partial state spaces that are predictive of when to observe next, such as a robotic navigation task in two dimensions where only one dimension is continuously observed. This is now included in Section 3. ## (B) Assumption of full noisy state access We acknowledge your concern about the assumption of full state access, which may not always be feasible in real-world systems; however, it is often taken as a standard assumption in other works [Yildiz et al. 2021]. Indeed, we made this assumption primarily to facilitate the development of the theory. We agree that our proposed problem and formalism could be extended to partial state observations; however, we leave this for exciting future work and the development of such methods. We have now included this discussion and assumption in the limitations paragraph in the Conclusion and Future Work, Section 6. ## (C) Proof of Stability or Optimality We agree; however, optimality or stability proofs are *not* typically provided for deep RL methods. Allow us to clarify that the current paper contributes the first step to formalizing and theoretically understanding a key property an optimal method should have, which is verified empirically with an initial method. Furthermore, all model-based RL methods, and RL methods in general, are not optimal in *all scenarios*. A few methods can theoretically find an optimal policy *only in restrictive scenarios*, for example, Q-learning with an MDP, finite and discrete state and action spaces, stationarity, infinite exploration, and a fully observable environment. We have now noted a proof of stability as additional promising future work and clearly state this as a limitation in the limitations paragraph in the Conclusion and Future Work section (Section 6). 
## (D) Limitations on the dynamics of the system Our formulation does not impose any specific structure on the dynamics of the system $f$ or require the system to be initially stable. We allow non-linear dynamics and unstable systems. However, we do assume stationarity in the dynamics, i.e., the dynamics are not changing within a single collected offline dataset. Allow us to reiterate that $f$ is defined as a differential equation, which is continuous in time, meaning that existing discrete-time and discrete system dynamics methods are not applicable. We agree this is beneficial to clarify, and we have now added this discussion to the Problem section (Section 2). ## (E) Related work clarification of the semi-MDP literature We agree that the semi-MDP literature is a similar related field; it extends the MDP problem to include *options* that define temporally abstracted action sequences [Sutton et al. 1998]. However, some distinct differences prevent using a semi-MDP to address our problem of continuous-time control whilst deciding when to take costly observations. These differences include: 1. A semi-MDP still has an underlying discrete MDP with discrete state transitions, whereas our problem formulation focuses on continuous-time systems that can handle continuous actions and states. 2. The semi-MDP formulation does not involve an observation cost, whereas our problem formulation does. We have now expanded the related work section (Section 3) to include the above discussion. ## Additional References * Åström, Karl Johan, and Bo Bernhardsson. "Comparison of periodic and event based sampling for first-order stochastic systems." IFAC Proceedings Volumes 32.2 (1999): 5006-5011. * Åström, Karl J. "Event based control." Analysis and design of nonlinear control systems: In honor of Alberto Isidori. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008. 127-147. * Vasyutynskyy, Volodymyr, and Klaus Kabitzsch. "Event-based control: Overview and generic model." 
2010 IEEE International Workshop on Factory Communication Systems Proceedings. IEEE, 2010. * Sutton, Richard S. "Between MDPs and Semi-MDPs: Learning, planning, and representing knowledge at multiple temporal scales." (1998). --- Rebuttal Comment 1.1: Title: Thank you for the response Comment: I would like to thank the authors for the response. No further issues. --- Reply to Comment 1.1.1: Title: Appreciation for Feedback Comment: Thank you very much for your valuable feedback and for taking the time to review our work. We're pleased to hear that you have no further concerns. We hope that our response has successfully addressed all of your questions, and we are optimistic about the impact of our work in the field. We look forward to the final decision and appreciate your consideration. Thank you once again.
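The contrast drawn in response (A) between an event-based sampling trigger and the proposed observing policy can be made concrete with a toy sketch. All function names and signal values below are made-up illustrations, not the authors' code.

```python
def event_based_times(sensed_component, threshold):
    # Event-based sampling: fires whenever a *continuously sensed* state
    # component exceeds a fixed, human-chosen threshold.
    return [t for t, x in enumerate(sensed_component) if x > threshold]

def observing_policy_times(reward_stds, tau):
    # AOC-style rule: fires whenever the model's predicted reward standard
    # deviation exceeds tau, with no continuous sensing of the state required.
    return [t for t, s in enumerate(reward_stds) if s > tau]

sensed = [0.1, 0.4, 0.9, 1.2, 0.8]         # hypothetical sensed state component
rollout_stds = [0.05, 0.1, 0.3, 0.6, 0.7]  # hypothetical rollout reward stds

print(event_based_times(sensed, threshold=1.0))       # [3]
print(observing_policy_times(rollout_stds, tau=0.5))  # [3, 4]
```

The first rule needs `sensed` to be measured at every step; the second only needs model rollouts, which is the distinction the rebuttal emphasizes.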
Summary: The paper addresses the problem of controlling continuous-time environments while actively deciding when to take costly observations in time, which is relevant to real-world scenarios such as medicine, low-power systems, and resource management. Existing approaches either rely on continuous-time control methods that take regular, expensive observations in time or discrete-time control with costly observation methods, which are inapplicable to continuous-time settings due to the compounding discretization errors introduced by time discretization. The paper formalizes the continuous-time control problem with costly observations and shows that observing at regular time intervals is not optimal in certain environments, while irregular observation policies yield higher expected utility. The paper proposes an initial method called Active Observing Control (AOC) to solve the problem of continuous-time control with costly observations and demonstrates how AOC can avoid discretization errors in time and achieve a better utility as a result. The paper empirically verifies the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations. Strengths: 1) The paper addresses an important and unexplored problem of controlling continuous-time environments while actively deciding when to take costly observations in time, which is relevant to real-world scenarios such as medicine, low-power systems, and resource management. 2) The paper provides a theoretical framework for the problem and shows that irregular observation policies can achieve higher expected utility than regular observation policies in certain environments. 3) The paper proposes an initial method called Active Observing Control (AOC) to solve the problem of continuous-time control with costly observations, which can plan action trajectories in continuous time and plan when to observe next in continuous time. 
4) The paper demonstrates how AOC can avoid discretization errors in time and achieve a better utility as a result. 5) The paper empirically verifies the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations. Weaknesses: 1) The paper proposes an initial method called Active Observing Control (AOC) to solve the problem of continuous-time control with costly observations, but determining the optimal method remains an open problem. 2) The paper constructs a simple initial method to solve the problem, with a heuristic threshold on the variance of reward rollouts in an offline continuous-time model-based model predictive control (MPC) planner, which may not be optimal in all scenarios. 3) The paper assumes that the dynamics model is learned accurately from an offline dataset, which may not always be feasible or accurate in practice. 4) The paper only empirically validates the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations, and further empirical validation in other real-world scenarios is needed to establish the generalizability of the proposed method. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1) The paper proposes an initial method called Active Observing Control (AOC) to solve the problem of continuous-time control with costly observations, but determining the optimal method remains an open problem. 2) The paper constructs a simple initial method to solve the problem, with a heuristic threshold on the variance of reward rollouts in an offline continuous-time model-based model predictive control (MPC) planner, which may not be optimal in all scenarios. 3) The paper assumes that the dynamics model is learned accurately from an offline dataset, which may not always be feasible or accurate in practice. 
4) The paper only empirically validates the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations, and further empirical validation in other real-world scenarios is needed to establish the generalizability of the proposed method. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1) The paper proposes an initial method called Active Observing Control (AOC) to solve the problem of continuous-time control with costly observations, but determining the optimal method remains an open problem. 2) The paper constructs a simple initial method to solve the problem, with a heuristic threshold on the variance of reward rollouts in an offline continuous-time model-based model predictive control (MPC) planner, which may not be optimal in all scenarios. 3) The paper assumes that the dynamics model is learned accurately from an offline dataset, which may not always be feasible or accurate in practice. 4) The paper only empirically validates the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations, and further empirical validation in other real-world scenarios is needed to establish the generalizability of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions! ## (A) The optimal method remains open We agree—in fact, this is the primary motivation for our paper: to first formalize this problem, theoretically contribute a property that an optimal method should have, and lay the groundwork for the development of further methods to tackle this “important” and “unexplored” problem of continuous-time control whilst deciding when to take costly observations. The theoretical contribution of this work is a proof that shows regular observing is not optimal and that observing irregularly can achieve a higher expected utility. We empirically verify this key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations by constructing the simplest initial method to do so (AOC). Coming up with an optimal method for all scenarios and systems could be *intractable*. Krueger et al. 2020, who focus on a simplified setting of multi-armed bandits where one must pay a cost to observe, propose that for their problem **an optimal method is intractable**—which applies to our problem. As we agree that the optimal method remains open, we clearly state this throughout the paper: in the *abstract (line 18)*, in the *contributions (line 58)*, and in the *conclusion and future work section (line 391)*. We believe solving for the optimal method is out of scope for the current paper (and is most likely intractable, i.e., there is no efficient algorithm to solve it); however, we have provided, and empirically verified, a key theoretical property that an optimal method should have. ## (B) The simple initial method to solve the problem may not be optimal in all scenarios We agree—building on the above rebuttal response **(A)**, such an optimal method for all scenarios may be intractable. Hence, this motivates the need to rely on heuristics to solve the problem, as suggested by Krueger et al. 
2020—which is how our initial method (AOC) works, using a heuristic threshold on the variance of the reward rollouts in an offline continuous-time model-based predictive control planner. Allow us to kindly reiterate that we explicitly state in the limitations paragraph in line 392 that this “initial solution in our experiments may not be optimal for all scenarios”. Let us restate the official 2023 NeurIPS reviewer guidelines: *“In general, authors should be rewarded rather than punished for being up front about the limitations of their work”*. Furthermore, all model-based RL methods, and RL methods in general, are not optimal in *all scenarios*. A few methods can theoretically find an optimal policy *only in restrictive scenarios* (certain conditions), for example, Q-learning with an MDP, finite and discrete state and action spaces, stationarity, infinite exploration, and a fully observable environment [Sutton et al. 2018]. ## (C) Assumes that the dynamics model is learned accurately from an offline dataset, which may not be feasible or accurate in practice We agree and explicitly state this in the limitations paragraph on line 394. This is a key assumption of *all model-based offline RL methods* in general. 1. All offline methods assume that there exists a previously collected dataset of state-action trajectories, as they cannot interact online by definition [Ernst et al. 2005, Levine et al. 2020]. We clearly state throughout the paper (abstract, introduction, contributions, problem definition, etc.) that we are working in the offline setup, where it *is feasible* to have access to an offline dataset. Furthermore, we provide an additional discussion in **Appendix D** of the benefits of offline RL and model-based RL. 2. All offline model-based RL methods rely on the offline dataset covering sufficient state-action space and on the learned dynamics model being accurate enough [Sutton et al. 2018, Levine et al. 2020, Lutter et al. 2021]. 
We verified empirically that AOC can perform well even when the dynamics model is *less accurate* by performing an *additional experiment ablation* (R1); see the global response. We include this in the supplementary rebuttal pdf. Table 21 shows that AOC with a less accurate dynamics model outperforms Continuous Planning and Discrete Monitoring. ## (D) Only empirically validates the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations Let us clarify that we empirically validate in more environments than is standard for continuous-time control [Yildiz et al. 2021]: we validate in all three continuous-time control environments of the ODE-RL suite from Yildiz et al. 2021, and in an additional real-world Cancer environment. Furthermore, we also validate in another environment setup in **Appendix K.3**. We highlight that we discuss the selection reasons for these environments in **Appendix H**. We selected the standard continuous-time control environments from the ODE-RL suite [Yildiz et al. 2021], which consists of three well-known environments: Pendulum, Cartpole, and Acrobot. Additionally, we implemented a Cancer environment that simulates a Pharmacokinetic-Pharmacodynamic (PK-PD) model of lung cancer tumor growth [Geng et al. 2017] under continuous dosing treatment with chemotherapy and radiotherapy. As we need continuous-time environments, many standard discrete-time environments are not applicable. Therefore, we empirically verify on all the environments from the ODE-RL suite and an additional cancer environment. ## Additional References * Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT Press, 2018. * Ernst, D., Geurts, P., and Wehenkel, L. (2005). Tree-based batch mode reinforcement learning. Journal of Machine Learning Research, 6(Apr):503–556. * Levine, Sergey, et al. 
"Offline reinforcement learning: Tutorial, review, and perspectives on open problems." arXiv preprint arXiv:2005.01643 (2020). --- Rebuttal Comment 1.1: Title: Expanding Empirical Verification: Additional Real-World Validation of Active Observing Control Method in Glucose, HIV, and Quadrotor Environments from Medical and Engineering Domains Comment: Thank you once again for your invaluable insights and time dedicated to the review process. We are thrilled to present **new "further empirical validation in other real-world scenarios"** to address your point 4. Precisely, we have further empirically verified AOC can successfully operate in the real-world environments (medical and engineering) of an accurate **Glucose environment** (controlling the injected insulin for a diabetic patient [Eisen et al. 1988]), a (Human Immunodeficiency Virus) **HIV environment** (controlling the chemotherapy dose for affecting the infectivity of HIV in a patient [Butler et al. 1997]), and a **Quadrotor environment** (controlling the actuators of an unmanned aerial vehicle [Nonami et al. 2010])—this is detailed in response **(E)** below. These new results directly address your concerns and reinforce the generalizability of our AOC method. We have carefully considered all your comments and have worked diligently to address each one. Should any remaining questions or concerns, we welcome the opportunity to clarify them before the author discussion period concludes. 
Here's a detailed mapping of our responses to your questions: * The optimal method remains open; see **Response (A)** * The simple initial method to solve the problem may not be optimal in all scenarios; see **Response (B)** * Assumes that the dynamics model is learned accurately from an offline dataset, which may not be feasible or accurate in practice; see **Response (C)** * Only empirically validates the key theoretical result in a cancer simulation and standard continuous-time control environments with costly observations; see **Response (D) & (E)** In alignment with your feedback, we have added additional discussions and new results to the paper. We are excited to introduce a **new Appendix O**, entitled **"Further empirical validation in other real-world scenarios"**, to showcase that AOC further empirically validates our main theoretical claim in other real-world environments. We have provided this additional rebuttal point **(E)** below. ## (E) Further empirical validation in other real-world scenarios We specifically chose these three additional environments to respond to your call for further empirical validation in diverse real-world scenarios. Each environment represents a unique challenge and has direct implications in medical and engineering applications, thus reflecting the broad applicability of our method. These are: 1. **Glucose environment**. Controlling the injected insulin for a diabetic patient to regulate their blood glucose level—here observations are costly as a blood test must be performed to measure the glucose level [Eisen et al. 1988]. 2. **HIV environment**. Controlling the chemotherapy dose for affecting the infectivity of HIV in a patient—where observations are costly as a blood test must be performed to measure CD4+ T cell levels [Butler et al. 1997]. 3. **Quadrotor environment**. 
Controlling the actuators of an unmanned aerial vehicle—where observations can be costly due to performing an expensive (power and compute) localization measure [Nonami et al. 2010]. We observe that AOC still achieves a high average utility $\mathcal{U}$ on all environments, outperforming the competing Continuous Planning and Discrete Monitoring methods, reinforcing our theoretical claims, and extending our empirical validation. **Table 24. Glucose Environment** |Policy|$\mathcal{U}$|$\mathcal{R}$|$\mathcal{O}$| |-|-|-|-| |Random|0$\pm$0|0$\pm$0|50$\pm$0| |Discrete Planning|96$\pm$0.485|96$\pm$0.485|50$\pm$0| |Discrete Monitoring|120$\pm$0.489|92.9$\pm$0.493|15.7$\pm$0.0466| |Continuous Planning|100$\pm$0.39|100$\pm$0.39|50$\pm$0| |**Active Observing Control**|**126$\pm$0.41**|99.9$\pm$0.39|17.5$\pm$0.0336| **Table 25. HIV Environment** |Policy|$\mathcal{U}$|$\mathcal{R}$|$\mathcal{O}$| |-|-|-|-| |Random|0$\pm$0|0$\pm$0|50$\pm$0| |Discrete Planning|152$\pm$0.121|152$\pm$0.121|50$\pm$0| |Discrete Monitoring|2.77e+03$\pm$0.455|126$\pm$0.455|7$\pm$0| |Continuous Planning|100$\pm$0.431|100$\pm$0.431|50$\pm$0| |**Active Observing Control**|**2.83e+03$\pm$1.56**|107$\pm$0.878|5.75$\pm$0.0269| **Table 26. Quadrotor Environment** |Policy|$\mathcal{U}$|$\mathcal{R}$|$\mathcal{O}$| |-|-|-|-| |Random|0$\pm$0|0$\pm$0|50$\pm$0| |Discrete Planning|101$\pm$0.00882|101$\pm$0.00882|50$\pm$0| |Discrete Monitoring|1.79e+03$\pm$0.199|101$\pm$0.0142|5.99$\pm$0.00518| |Continuous Planning|100$\pm$0.015|100$\pm$0.015|50$\pm$0| |**Active Observing Control**|**1.83e+03$\pm$0.0789**|97.9$\pm$0.0258|5$\pm$0.00196| We believe that these new experimental results, conducted in alignment with your feedback, not only bolster our empirical validation but also emphasize our method's practical utility and theoretical integrity. We hope this additional evidence adequately addresses your concerns and merits reconsidering the initial score. 
Should any uncertainties linger, we remain at your disposal for further clarification; thank you! --- Rebuttal Comment 1.2: Title: Response to author Comment: Thanks for your explanations. I would like to keep my original score.
Summary: The authors present a continuous-time framework for "active observation", meaning deciding when to take observations in a control problem, assuming taking observations has a cost. They present two separate controllers, one for taking actions and one for deciding when to observe. The controller for acting is an MPC controller; the controller for deciding when to observe is based on binary search. They learn a continuous-time stochastic dynamics model using an MLP with delta time in the input, with ensembling to model epistemic uncertainty and a Gaussian prediction to model aleatoric uncertainty. Strengths: The problem is really interesting, and it's obvious from the introduction that this is something that should be worked on. The paper is well-written. Weaknesses: Let's start with a few things that I think are incorrect in the math. Line 203 seems to state that a mixture of Gaussians (summing the pdfs, with uniform weights) is a Gaussian. That is not correct. You can compute the mean and variance of the mixture, as your equations suggest, but that doesn't make it a Gaussian. The same thinking applies from 241 to 246. With your ensembling, the z distributions at t^{k+1} won't be Gaussian, and neither will your reward. Yes, you can compute their mean and covariance, but the reward distribution won't be Gaussian unless your state distribution is Gaussian and the reward function is linear. Beyond the dynamics model, which is truly continuous-time (it can predict a future observation at arbitrary floating-point resolution), I think the proposed control algorithm is decisively discrete-time. The delta_t for observing is smaller than the delta_a for control, but that's it: these are two discrete-time controllers with differently sized timesteps. There is no notion of adaptive step size during control, and the controller makes no use of the continuous-time formulation in either case. 
The advantage of the approach, if I understand correctly, does not come from the continuous-time formulation, but merely from allowing the observation controller to operate at a higher frequency than the action controller. The dynamics model is only conditioned on the previous observation z. But in general, you need the whole history of noisy observations and actions to predict the next, especially in noisy scenarios. The formulation is limited by the fact that the observation is simply the true state plus some Gaussian noise, rather than being a non-linear transformation of the state. The claim that ensembling captures epistemic uncertainty and the Gaussian prediction captures aleatoric uncertainty is vague to me. Intuitively, both should capture both types of uncertainty to some degree; can you provide a reference? Table 3: the bold font should be applied to continuous planning when it performs better or just as well. So continuous planning - cancer reward, acrobot reward, and pendulum reward - should be bold. In Krueger et al., they actively sample rewards, not observations. Line 73, the tilde should be an equals sign. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: For the experiment in Figure 3, what happens if you use discrete monitoring with a slightly smaller timestep? Same for Figure 4; to me this feels simply like the time discretization is too big and the gain comes simply from the fact that the observation controller has a higher frequency. How would you handle the interesting case where the cost of observing depends on the state? How would you create a combined act+observe controller? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The authors are honest about the limitations of their method. 
I would add the limitation of the observation simply being the true state plus some noise. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions! ## (A) Distributions of state and reward We agree that a uniformly weighted mixture of Gaussians is not a Gaussian itself; rather, it is possible to compute the mean and variance of the mixture to approximate it with a Gaussian, and we have now fixed this typo in line 203. Allow us first to clarify that we follow the standard model-based uncertainty-aware dynamics model methodological setup from the seminal work of Chau et al. 2018. By doing so, we perform Monte Carlo sampling of state particles $z_p(t^{(k+1)})$, i.e., roll-outs of state trajectories, *to capture the multi-modality* of the state and reward distributions [Chau et al. 2018]. You are correct that when forward propagating to determine the future state and reward distributions, the distribution of the state particles $z_p(t^{(k+1)})$ is not Gaussian. Although we do compute the standard deviation of the reward particles at an evaluation time, we find this *approximate* statistic to empirically work well in AOC—which is the *simplest* instantiation of a method to solve this new problem setting and verify the key theoretical contribution that irregular observing policies can achieve a higher utility than regular observing policies. ## (B) Why can’t you create a discrete observing policy at a higher frequency? We agree that creating a discrete observing policy with a higher frequency is possible, and we already provide an experiment in **Appendix K.4** to show this. Allow us to re-iterate that the goal of the paper is to formalize the problem of continuous-time control with observation costs, provide the theoretical proof that irregular observing policies can achieve a higher utility than regular observing policies, and verify this key theoretical contribution with the simplest initial instantiation of a method, that of AOC. 
The inherent benefit of the continuous-time dynamics model is that it allows evaluation at any arbitrary future time, which is beneficial when determining the time at which the reward variance crosses the threshold; this time is determined up to a tolerance of $\delta_t$, as is common for any root-finding algorithm (e.g., Newton’s method). However, using an observing policy that is discrete at a higher frequency is **not practical**: since $\delta_t \ll \delta_a$, computing the reward variance at every $\delta_t$ within the whole action trajectory duration $H$ amounts to *linear search*, which scales as $\mathcal{O}(n)$, where $n = \lfloor \frac{H}{\delta_t} \rfloor$ is the worst-case number of evaluations of the reward variance. Instead, AOC employs the simplest adaptive search method, *binary search*, which scales as $\mathcal{O}(\log(n))$ and therefore **is practical**. We now include this discussion in Section 4.2. ## (C) Discrete monitoring with a smaller timestep Building on the above, we already provide empirical results in **Appendix K.4** (line 1036) for discrete monitoring with a smaller timestep $\delta_a$. As we reduce $\delta_a$, all baselines degrade in performance; however, AOC still has the highest utility amongst the baselines for a set $\delta_a$—with the gain over discrete monitoring reducing. The improvement gain of AOC compared to discrete monitoring arises from: 1. Avoiding time discretization errors, finding $\rho(z(t_i))^*$ up to a tolerance of $\delta_t$ compared to a large discretization of $\delta_a$ (where $\delta_t \ll \delta_a$); 2. Better scaling as the tolerance $\delta_t$ is reduced: $\mathcal{O}(\log(n))$ for AOC's binary search versus $\mathcal{O}(n)$ for discrete monitoring; 3. The continuous-time dynamics model is more accurate, as it can learn from the irregularly observed offline dataset, whereas the discrete-time dynamics model does not use $\delta$ as an input. 
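As a hedged illustration of the scaling argument above (a minimal sketch, not AOC's actual implementation: `variance` here is a hypothetical stand-in for the predicted reward variance, assumed to cross the threshold once within the horizon):

```python
import math

def find_crossing_time(variance, threshold, horizon, delta_t):
    """Bisection for the earliest time at which variance(t) >= threshold,
    located up to a tolerance of delta_t. Assumes a single crossing in [0, horizon]."""
    lo, hi, evals = 0.0, horizon, 0
    while hi - lo > delta_t:
        mid = (lo + hi) / 2.0
        evals += 1
        if variance(mid) >= threshold:
            hi = mid  # crossing is at or before mid
        else:
            lo = mid  # crossing is after mid
    return hi, evals

# Toy monotone variance model: the threshold 0.5 is crossed at t = 5.0.
variance = lambda t: 0.1 * t
t_star, n_evals = find_crossing_time(variance, threshold=0.5, horizon=10.0, delta_t=1e-3)

assert abs(t_star - 5.0) <= 1e-3
# O(log(H / delta_t)) evaluations, versus floor(H / delta_t) = 10,000 for linear search.
assert n_evals <= math.ceil(math.log2(10.0 / 1e-3))
```

With `delta_t = 1e-3` and `H = 10`, bisection needs at most 14 variance evaluations instead of the 10,000 a linear sweep would require.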
## (D) Dynamics model is only conditioned on the previous observation The choice of dynamics model implementation does not affect how the different observing policies $\rho$ compare, as they *all use the same form of dynamics model*. We chose this as the simplest dynamics model, which happens to be a Markovian neural network. However, we could have used an RNN-based continuous-time model to encode the whole history of observations. ## (E) Non-linear state transformations Let us clarify that we follow the standard assumptions for an environment description in control [Yildiz et al., 2021]. We agree that observations cannot be non-linear in the current formulation. However, it *does allow for non-linear state transitions*. We have clarified this in Section 2. ## (F) Capturing both aleatoric and epistemic uncertainty with the probabilistic dynamics model Let us clarify that this is well known and comes from the key reference of Chau et al. 2018, which provides an excellent explanation in their Section 4, from the top of page 4—we reference it on lines 180 and 188. ## (G) Clarification of related work w.r.t. Krueger et al. 2020 Krueger et al. 2020 focus on the simpler problem of multi-armed bandits (MAB), where there is a cost to observe the reward. In the MAB setting, it is possible to formulate it as an RL PO-MDP with one state. The underlying static state (the fixed reward distributions of each bandit) is unknown and must be determined by taking successive costly observations (paying a cost to observe a reward). Therefore, Krueger et al. 2020’s statement that the optimal algorithm for their simple MAB setting is intractable applies to our problem. We now clarify this in Section 3. ## (H) Extensions Handling state-dependent observation costs requires intricate reward scaling. A combined act+observe controller with MPC would scale poorly in complexity. These are beyond our current scope but are valuable directions for future work. 
We thank the reviewer for their suggestions. ## (I) Typos Thank you, we will un-bold the reward in Table 3, as only Utility should be bolded since that is our ultimate objective to maximize; and yes, line 73 should be $=$. --- Rebuttal Comment 1.1: Title: Empirical Verification of AOC with Non-Linear State Transformations and Responses to Reviewer Comments Comment: Thank you once again for your invaluable insights and time dedicated to the review process. We are thrilled to present a **new** experiment that empirically verifies that AOC can successfully operate in environments where *observations are non-linear transformations of the state*, denoted by response **(J)** below. We have carefully considered all your comments and have worked diligently to address each one. Should there be any remaining questions or concerns, we welcome the opportunity to clarify them before the author discussion period concludes. Here is a detailed mapping of our responses to your questions: * Distributions of state and reward; see **Response (A)** * Creating a discrete observing policy at a higher frequency; see **Response (B)** * Discrete monitoring with a smaller timestep; see **Response (C)** * Conditioning of the dynamics model on the previous observation; see **Response (D)** * Non-linear state transformations; see **Response (E)** * Handling aleatoric and epistemic uncertainty with the probabilistic dynamics model; see **Response (F)** * Clarification regarding related work such as Krueger et al. 2020; see **Response (G)** * Extensions; see **Response (H)** * Typos; see **Response (I)** In alignment with your feedback, we have updated our manuscript with pertinent discussions and clarifications. We are excited to introduce a **new Appendix N**, entitled **"AOC also empirically works for non-linear state transformations"**, to empirically showcase that AOC can indeed be applied in environments characterized by non-linear state transformations. 
We have provided this additional rebuttal point (J) below. ## (J) AOC also empirically works for non-linear state transformations Our latest experiment investigates AOC's performance within environments that utilize observations stemming from non-linear state transformations. We tailored the existing Cancer environment to render observations via the non-linear state transformation function $z(t)=0.1(s(t) + \epsilon(t))^2 + (s(t) + \epsilon(t))$. The subsequent results, computed across 1,000 random seeds, are outlined in Table 23 below. We observe that AOC still achieves a high average utility $\mathcal{U}$ in this modified environment, outperforming the competing Continuous Planning and Discrete Monitoring methods. Specifically, this empirically further verifies our key theoretical contribution that regular observing is not optimal and that irregular observing can achieve a higher expected utility. **Table 23** | Policy | $\mathcal{U}$ | $\mathcal{R}$ | $\mathcal{O}$ | |--------------------------|---------------|---------------|---------------| | Random | 0$\pm$0 | 0$\pm$0 | 13$\pm$0 | | Discrete Planning | 91.7$\pm$0.397 | 91.7$\pm$0.397 | 13$\pm$0 | | Discrete Monitoring | 90.5$\pm$0.559 | 85.4$\pm$0.548 | 5.25$\pm$0.0329 | | Continuous Planning | 100$\pm$0.21 | 100$\pm$0.21 | 13$\pm$0 | | **Active Observing Control** | **104$\pm$0.221** | **98.9$\pm$0.208** | **4.68$\pm$0.0315** | Should any uncertainties linger, we sincerely invite you to share them with us before the author discussion period concludes in the next few days. Your continued engagement is deeply appreciated, and we are at your disposal for any further elucidation, thank you!
Rebuttal 1: Rebuttal: We thank all reviewers for their thoughtful comments and suggestions! We respond to each reviewer with an individual rebuttal and share all **additional new experimental** results here. ## (R1) Dependence on the accuracy of the learned dynamics model To understand this further, we performed an *additional ablation experiment*, where we benchmarked all methods with a less accurate dynamics model—training all the dynamics models with fewer samples (10\% of the total number of samples used to train the dynamics models presented in the main paper)—here, trained on 100,000 samples. We provide the experimental results in the rebuttal supplemental pdf as Table 21. We observe that AOC still achieves a high average utility $\mathcal{U}$ on all environments, outperforming the competing Continuous Planning and Discrete Monitoring methods. Specifically, this empirically further verifies our key theoretical contribution that regular observing is not optimal and that irregular observing can achieve a higher expected utility. We now include this discussion and new experiment in an **additional new Appendix L**. ## (R2) Extending AOC to work with a learned reward model To investigate whether AOC can also be used with a learned reward model, we performed an *additional experiment*, where we trained an MLP reward model (4-layer MLP with 128 units and Tanh activations) from the offline dataset in all the benchmarks. We provide the experimental results in the rebuttal supplemental pdf as Table 22. We observe that AOC still achieves a high average utility $\mathcal{U}$ on the Cancer environment, outperforming the competing Continuous Planning and Discrete Monitoring methods. This empirically verifies that our proposed initial approach can still perform well using a learned reward model and that our theoretical contribution still holds. We now include this additional experiment in an **additional new Appendix M**. 
Pdf: /pdf/f2049bc92241e5421414b6818713c29c6bfce7cf.pdf
NeurIPS_2023_submissions_huggingface
2,023
null
null
null
null
null
null
null
null
Individual Arbitrariness and Group Fairness
Accept (spotlight)
Summary: This paper explores the interaction of predictive multiplicity with fairness interventions, making the observation that models which receive common fairness interventions often exhibit increased predictive multiplicity within the Rashomon set. They back this up with some theoretical exploration and experiments, proposing ensembling as an approach for reducing multiplicity. Strengths: - the observation that models under fairness interventions have higher predictive multiplicity is a novel and interesting one - the paper is well-presented and well-scoped, making a clear and useful contribution - Figure 3 is a nice intuitive explanation for why the observed phenomenon occurs - ensembles are a reasonable mitigation for this problem Weaknesses: - I'm a little confused by the discussion around "confidence" in 4.2. It would be useful to give more explanation of how confident (or unconfident) classifiers improve the multiplicity problem, and whether this matters before or after ensembling. I also cannot seem to find the referenced discussion around fairness and confidence in the Appendix - it sounds like an interesting observation and would be worth highlighting a bit more. - some questions around the formalisms in Sec 2: 1. I think T(D) should be a distribution rather than a set, 2. the writing of equation 1 is a little confusing - it's not quite clear how both m and epsilon can be parameters (what if there aren't m models of loss less than epsilon? is it the m lowest loss or any m?) 3. is the domain of \ell above Eq 1 [0, 1] x {0, 1}? - in Prop 3.1, is there some assumption made around what loss function is used? Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: - is reducing predictive multiplicity ultimately a good thing? the upside is that the output is more consistent, but the downside is that the more consistent output is not necessarily chosen in a principled manner. 
I'd be interested to hear more discussion of this tradeoff - I'm curious about why the Leveraging approach improves multiplicity so much - this would be good to explore a bit if you have space or at least discuss - do we know that ensembles will satisfy fairness metrics if their component models do? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 4 excellent Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer gGVC for the thoughtful comments and for appreciating the novelty and value of the work. We hope that the answers below address the points raised in the review. --- - **Q1: “Is reducing predictive multiplicity ultimately a good thing?”** --- We thank the Reviewer for this important question! There are mainly two lines of thought on this topic. One line is motivated by the growing recognition that at least some forms of arbitrariness are harmful (common response). For a discussion on positive views of predictive multiplicity, see Kulynych et al. [30], Section 7.1. When predictive multiplicity poses harm to individual users, we believe having a mechanism that stabilizes training is important. One danger of multiplicity is when the model developer is unaware that trivial and innocuous choices during training (even the choice of random seed for parameter initialization!) may lead to arbitrary predictions during model deployment. Our research aims to spotlight this issue, revealing that the addition of group fairness constraints does not necessarily preclude arbitrariness (see **Fig. R1** and our **global response** in this rebuttal). Our proposed solution—an ensembling method—retrains the model with different hyperparameters to produce multiple models that satisfy both fairness and accuracy constraints. As shown in Fig. 6, this approach not only leads to more consistent predictions but also maintains baseline fairness values. We will incorporate this discussion, including the trade-off you've highlighted, into the concluding section of our paper. --- - **Q2:** **“why the Leveraging approach improves multiplicity so much”** --- The primary goal of the leveraging approach is to identify an optimal threshold that reduces the disparity of opportunities based on distinct groups while ensuring optimal accuracy. 
This threshold is applied to generate the final decision output, i.e., binary outcomes in a classification task, as opposed to outputting the likelihood of class memberships. In cases where the variance of scores is low, implementing thresholding on the scores can yield a more stable and consistent output, except for those points near the decision boundary. The leveraging approach, much like other group-fairness-oriented interventions such as the rejection approach, focuses primarily on decision (i.e., binary) outputs rather than raw scores (i.e., values in $[0,1]$). These fairness interventions essentially result in a compression of score variance, contributing to the straight line at 0 in **Fig. R3**. We must note, however, that not all fairness interventions exhibit this pattern. For instance, calibration-based interventions, which are predicated on score outputs, may yield results quite distinct from the leveraging approach, which outputs either 0 or 1. This is because, while group-fairness notions focus on predictions, calibration is based on scores. To further clarify this point, we have supplemented the rebuttal with Figure R3, which demonstrates the effect of thresholding the scores of the baseline model. The behavior of this curve resembles that of the leveraging approach, but, as shown in the violin plot and discussed in the global response, the increase in score standard deviation and the ensuing arbitrariness persist after thresholding. --- - **Q3:** **“do we know that ensembles will satisfy fairness metrics if their component models do?”** --- Great question! The answer depends on the nature of the fairness metrics in question. If the group fairness metrics amount to convex constraints on model performance, ensembling — which is a linear combination of models — will still satisfy the fairness constraints. 
For example, score-based formulations of Statistical Parity, Equalized Odds, and Overall Accuracy Equality are examples of fairness metrics that can be expressed as convex (in fact, linear) constraints on probabilistic classifiers that output scores instead of thresholded predictions; see Alghamdi et al. [2, Appendix A.7] for a thorough discussion. However, when fairness metrics are cast in terms of thresholded classifier outputs (e.g., differences in FPR and FNR across groups), we cannot provide a theoretical guarantee that fairness constraints will continue to hold post-ensembling due to the metrics’ non-convex nature. Nevertheless, we observe empirically that these fairness constraints are typically met by the ensemble model, as demonstrated in **Figure 6**. We will stress this limitation in the revised version of the paper. --- - **Q4: “It would be useful to give more explanation of how confident (or unconfident) classifiers improve the multiplicity problem, and if this matters before or after ensembling.”** --- We thank the Reviewer for raising this question! In general, a classifier giving very confident scores, say 0.95 for class 1 or 0.05 for class 0, does not improve the multiplicity problem. While the score is often interpreted as “confidence”, the score is sometimes a proxy for the label, and has no implication of the classifier’s confidence. For example, in Fig.1 of [20], when given a picture of a dog, all scores assigned by competing models are high, but on different classes. We add the confidence assumption, before ensembling, to prove the concentration of predictions in ensembled models. We say a classifier is confident when, roughly, its scores are bounded away from 0.5 with high probability. This assumption enables us to translate consistent scores among ensembled models to consistent predictions in Theorem 4.2. 
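As a numerical check of the linearity argument (a synthetic sketch with random scores, not the paper's trained models; `sp_gap` is an illustrative score-based statistical-parity gap):

```python
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)        # binary group membership
K = 5                                        # number of "equally fair" probabilistic classifiers
scores = rng.uniform(0, 1, size=(K, 1000))   # each row: one classifier's scores

def sp_gap(s, g):
    """Score-based statistical-parity gap: difference in mean score across groups."""
    return s[g == 0].mean() - s[g == 1].mean()

component_gaps = np.array([sp_gap(scores[k], group) for k in range(K)])
ensemble_gap = sp_gap(scores.mean(axis=0), group)

# Linearity: the ensemble's gap is exactly the average of the components' gaps ...
assert np.isclose(ensemble_gap, component_gaps.mean())
# ... so in absolute value it can never exceed the worst component's gap.
assert abs(ensemble_gap) <= np.abs(component_gaps).max() + 1e-12
```

Because the gap is linear in the scores, any ensemble whose components each satisfy a linear fairness constraint satisfies it too; this is precisely the convexity argument, and it breaks down for thresholded (non-convex) metrics as noted above.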
--- - **Q5: “some questions around the formalisms in Sec 2”** --- We appreciate the Reviewer for pointing out details where the notation can be improved! We will make the relevant changes to the notation of the empirical Rashomon set in Section 2. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the rebuttal - I already have a strong score on this paper so no need to update anything on my end.
Summary: The authors discuss arbitrariness in automated decision-making, i.e. the fact that similarly trained classifiers may end up disagreeing on the most suitable label for a data point. In particular, the authors observe that arbitrariness can be worsened by fairness interventions, suggesting that increased group fairness may mean that some individuals arbitrarily receive worse decisions. A solution is proposed in the form of an ensemble algorithm that takes many (arbitrarily selected) fair classifiers and aggregates them by majority vote. Strengths: The authors visit the active yet understudied topic of arbitrariness in a seemingly novel way. Not only may fairness metrics be oblivious to arbitrariness, their optimization may even worsen it. This nuance meaningfully adds to the debate. The arguments are presented in a simple manner, and the theoretical results are only as complicated as they need to be. The paper is clearly thought-provoking and it invites discussion at a level fit for NeurIPS. Weaknesses: 1. My biggest concern for the paper is a philosophical one: why is arbitrariness (under your definition) harmful? * In Example 1, this is not very clear: in the 'fair' column, each group receives mistakes at the same rates. Why is arbitrariness problematic here? * I have a similar concern with Example 2: it is elegantly shown that two classifiers are each equally EO-fair yet use a different threshold (leading to arbitrariness). Why should individuals care which classifier is chosen? * Perhaps the argument could be made that arbitrariness is a form of unfairness, and that certain fairness notions are oblivious to it. Yet also in this case I wonder whether there is not always a fairness notion that *does* appropriately consider arbitrariness. For example in Figure 2, the arbitrariness could be detected through the Equalised Opportunity fairness notion (i.e. equal false negative rates). In that case, model 2 would be considered fair and model 1 would not be. 
* Please note I have not read up on the socio-technical arguments for studying arbitrariness. Nevertheless, I believe the examples in this paper should clarify why arbitrariness is important. 2. Figure 4 presents a worrying result. Should we not be able to see the orange curve (low fairness) trend towards the green curve (baseline)? To their credit, the authors remark this as a remarkable result, but I wonder whether the authors can discuss why we don't see a more gradual shape evolution from green to orange as the fairness intervention is done more and more intensively. If not, then it raises the question if the baseline curve is comparable to the others. 3. Many parts of the presentation are unpolished. Here are some examples. * Equations 1, 2 and 3 all use different symbols for the score / classifier function. Equation 2 in particular is inconsistent, as it indexes the data samples with $i$, and it selects models as elements from a set whereas other equations use the symbol $i$. * It seems $\mu$ in Eq. 3 should also have a subscript $x$. * Definition 3 presents the random variable $\hat{Y}$ whereas $h(x)$ was used before. * Section 4 rather suddenly makes the jump from binary classification to multiclass classification. * Figure 3 unnecessarily has a different font and layout. Its x-axis is unlabeled on the right, even though it seems different from the x-axis on the left. The colors in Figure 5 should also be more consistent (as the same methods are used on both sides). 4. Something appears to be wrong with the LaTeX settings. There are no line numbers, and the "Anonymous Authors" block is hidden. If this was done to save space, then this gives an unfair advantage compared to other authors that do follow the template correctly. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Please find my questions and suggestions in the Weaknesses box. 
If the authors have limited time during the rebuttal, then please consider only answering my questions related to the first weakness. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 4 excellent Limitations: The (negative) societal impact of the work was not discussed. The authors may want to consider whether ensemble classifiers are a reasonable solution to arbitrariness in practice. Fairness in general has many societal risks if implemented incorrectly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer APJf for the thoughtful comments and for appreciating the novelty and value of the work. We hope that the answers below address the points raised in the review. --- - **Q1: ‘’My biggest concern for the paper is a philosophical one: why is arbitrariness (under your definition) harmful?’’** --- We thank the Reviewer for raising this important question! Please refer to the **global response** in this rebuttal for an in-depth discussion of arguments and examples presented in the literature as to why arbitrariness is harmful. In particular, we include under the paragraph **"Why is arbitrariness (i.e., predictive multiplicity) harmful?"** in our response a discussion that will be reflected in the final version of the paper. We hope this response clarifies your concerns. The Reviewer also raised an important point on whether some fairness notions capture arbitrariness while others are oblivious to it. We would like to first note that Figure 2 is a simplified example. In general, harmful arbitrariness is a **global** notion, one that can only be measured with a pool of equally good models, not with a single classifier. In contrast, group-fairness notions are **local**: they pertain to a *single* classifier and its average performance across groups. Hence, harmful arbitrariness cannot *a priori* be captured by *local* notions such as group fairness. Our work shows that this disentanglement between fairness and arbitrariness is extreme: one can be orthogonal to the other (as can be inferred by, e.g., our Proposition 3.1; see also our generalization result included in the response to **Reviewer 47CU**'s **Q1**). --- - **Q2: “Figure 4 presents a worrying result. Should we not be able to see the orange curve (low fairness) trend towards the green curve (baseline) (…) as the fairness intervention is done more and more intensively?”** --- Thanks for the great question! 
It is indeed natural to expect that as a fairness intervention is applied with a less-stringent group fairness constraint, the scores’ standard deviation tends towards the baseline. However, this is not what we observe, and this phenomenon actually reveals the inherently overconfident, almost threshold-like, scores produced by the fairness intervention. Increased arbitrariness still exists even when comparing the fair classifiers with the thresholded baseline models. Please refer to our detailed discussion under the paragraph **"Explanation of Attached Figures to the Rebuttal"** in the global response. To be clear, the classifiers chosen in Figures 1 and 4—obtained from the reductions approach using the implementation from the IBM AIF360 package—output scores. However, the majority of the samples receive 0 or 1 post-intervention (see Figure R4). Once we threshold the baseline scores and compute the relevant statistics, the thresholded baseline curve and those with fairness intervention resemble each other more closely (see Figure R3)—thresholding produces the initial phase transition at 0 and high standard deviation at top quantiles. Beyond the samples that have 0 standard deviation in scores (i.e., consistent prediction) in the thresholded baseline model to begin with, the result remains worrying. As shown in Figure R2, removing the samples that receive consistent predictions both pre- and post-intervention (about 40% of them), there is a notable group (blue region), the biggest group by count, where the standard deviation increased significantly post-intervention. --- - **Q3: ‘’Many parts of the presentation are unpolished. Here are some examples. (...)’’** --- We appreciate the Reviewer for pointing out details and parts of the paper where the notation and layout can be clarified and improved. We will make all the suggested changes in the revised version of the paper. 1. Use $h$ to denote classifiers in equations (2) and (3); 2. Use appropriate indices in equation (2); 3. 
Clarify that $\bar{\mu}$ denotes the empirical mean for a fixed $x$; 4. Change the random variable $\hat{Y}$ to $h(X)$ in equation (5); 5. Use the notation of binary classification in Section 4 and explain in the Appendix the generalization to multi-class classification; 6. Clean up Figures 3 and 5 for consistency in font, layout, and color. Thanks a lot for providing the detailed feedback that further strengthens the paper! --- - **Q4: ‘’Something appears to be wrong with the LaTeX settings.’’** --- This is an unfortunate and accidental mistake because of an oversight on our side: we used the “preprint” instead of the “review” option in the NeurIPS style file. We recompiled the document under the “review” option and observed no meaningful change in length. We regret the error. --- - **Q5: ‘’The (negative) societal impact of the work was not discussed. The authors may want to consider whether ensemble classifiers are a reasonable solution to arbitrariness in practice. Fairness in general has many societal risks if implemented incorrectly.’’** --- We thank the Reviewer for the valuable remark. Fairness interventions are at risk of creating the illusion of fairness if not implemented correctly—we will add this note to the limitation section. Ensembling acts as a stabilizing step in mitigating arbitrariness—rather than stopping at a single fair and accurate model, we retrain the model several times with different hyperparameters (e.g., random initialization) to get many models that satisfy both fairness and accuracy constraints. As shown in Fig. 6, this strategy reduces score variation while maintaining baseline fairness and accuracy values. For large models, where training is costly, such an ensembling procedure may be computationally expensive. It is definitely valuable to explore a more computationally efficient strategy in such cases. We will add a discussion in the limitation section as well. 
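A toy simulation of this stabilizing effect, under the simplifying assumption (labeled here, not claimed by the paper) that retrained models' scores are iid perturbations of an underlying score — real training runs only approximate this:

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_models, n_trials = 200, 10, 500

# Each trial draws n_models "retrained" score vectors around a true score,
# mimicking arbitrariness from random seeds / hyperparameter choices.
true_scores = rng.uniform(0.2, 0.8, size=n_points)
noise = rng.normal(0.0, 0.1, size=(n_trials, n_models, n_points))
model_scores = np.clip(true_scores + noise, 0.0, 1.0)

# Per-point score std across competing single models vs. across size-10 ensembles.
single_std = model_scores[:, 0, :].std(axis=0).mean()
ensemble_std = model_scores.mean(axis=1).std(axis=0).mean()

# Averaging K models shrinks the score std roughly by a factor of sqrt(K).
assert ensemble_std < single_std / 2
```

This is only an illustration of why averaging reduces score variation; it says nothing about whether fairness constraints survive the averaging, which is the separate convexity question addressed in the response to Q3.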
--- Rebuttal Comment 1.1: Comment: I'm convinced by the authors' response to my biggest concern, which was the lack of motivation for arbitrariness. I especially appreciate the distinction that arbitrariness is a global notion affecting multiple classifiers. Therefore, I increase my score and trust the authors will implement the promised changes. It is clear the paper will be accepted, and I congratulate the authors for their excellent submission.
Summary: The authors theoretically and empirically study predictive multiplicity as it relates to fairness and accuracy. They show in both ways that multiplicity increases as a result of bias mitigation. Strengths: This is a problem that has not been studied and was worth studying because practitioners don't think about it, and should start doing so. Both the theory and the experiments are compelling. The paper is well written. The math looks correct. Bravo. Weaknesses: The authors should justify why arbitrariness is so bad under limited resources that can be allocated. If two people are exactly the same and right at the decision boundary, what other than an arbitrary decision is just if there is only one spot? Similarly, the authors should mention that several bias mitigation post-processing algorithms including [23] have an arbitrariness nature to them as part of the way they work. It would be nice if the authors could connect their concentration results more to the original work in (non-fairness) ensemble classifiers, including Breiman's original paper on Random Forests (https://link.springer.com/article/10.1023/a:1010933404324) and the parameters used therein: strength and correlation. There may be simple ways of doing so by extending the operating points of https://doi.org/10.1109/TKDE.2012.219 to also consider fairness metrics. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: The paper is pretty clear. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 3 good Contribution: 4 excellent Limitations: Nothing specific noted. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer MNFT for the thoughtful review and for appreciating the novelty, complexity, significance, and positive prospects of the work! --- **Q1: “Justify why arbitrariness is so bad under limited resources. If two people are exactly the same and right at the decision boundary, what other than an arbitrary decision is just if there is only one spot?”** --- This is a great point. We agree that, within the context of a single model, making an arbitrary decision between two individuals who are indistinguishable in scores (i.e., equidistant from the boundary) seems fair. However, when viewed from the perspective of a collection of equally effective models, one person may end up being more likely to experience a change in prediction than the other **across** models, resulting in a disparate incidence of arbitrariness. In your example, equal distance to a boundary for a single fixed model does not necessarily imply equal likelihood of score change across a collection of competing models. While conventional fairness measures address the average performance of a single classifier across groups, arbitrariness (as considered in our paper) and predictive multiplicity are global concepts that require a suite of equally accurate and fair models in order to be measured. The potentially detrimental impact of disparate arbitrariness has been increasingly recognized in recent literature [12,14,28]. Although arbitrariness might be unavoidable—especially when dealing with over-parametrized models such as neural networks—serious harm can occur when (i) the selection of a model from a set of "equally good and fair" options disproportionately impacts certain individuals, (ii) the model is employed in high-stakes applications with significant individual consequences, and/or (iii) a single model is used across multiple institutions, possibly leading to systemic exclusion for certain individuals (the “Algorithmic Leviathan” in [12]).
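The distinction drawn above can be made concrete with a toy simulation (all numbers hypothetical): two individuals can receive the same score under one reference model, i.e., sit equidistant from its boundary, yet flip prediction at very different rates across a collection of competing models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_models = 1000  # hypothetical collection of equally accurate models

# Both individuals score 0.6 under a reference model (threshold 0.5),
# but their scores vary very differently across competing retrains.
scores_a = 0.6 + 0.02 * rng.standard_normal(n_models)  # stable individual
scores_b = 0.6 + 0.30 * rng.standard_normal(n_models)  # seed-sensitive individual

# Fraction of competing models whose decision differs from the
# reference decision (predict 1, since 0.6 >= 0.5).
flip_a = np.mean(scores_a < 0.5)
flip_b = np.mean(scores_b < 0.5)

assert flip_a < 0.01   # A virtually never flips across models
assert flip_b > 0.20   # B flips in a large fraction of models
```

Equal distance from one model's boundary therefore says nothing about how often a prediction changes across the Rashomon set, which is the disparity the reply above describes.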
We delve deeper into this topic in our global response — please take a look. A key point of concern arises when the arbitrariness in model selection remains unknown or unacknowledged. In many fairness interventions, like existing implementations of the “Reductions Approach,” the output for a sample is a thresholded score, in which case a decision (0-1 output) is always given for each individual. In your example, an apparently inconsequential choice, such as the random seed used to initialize model parameters or a fairness intervention, can dictate which individual is selected, transforming a seemingly innocuous decision into a potentially harmful one. Of greater concern is when this harm is silent, and model selection is oblivious to such disparate arbitrariness and masked by the assurance of group fairness. We hope our work sheds light on this issue. --- **Q2: “Mention that several bias mitigation post-processing algorithms including [23] have an arbitrariness nature to them as part of the way they work.”** --- Thanks for the Reviewer’s suggestion. Indeed, several bias-mitigation algorithms used to ensure group fairness have inherent randomness that may exacerbate arbitrariness, and we will highlight this issue. In fact, even outside of fairness, Kulynych et al. [26] show that Differential Privacy mechanisms could also unintentionally increase arbitrariness by adding randomness to ensure privacy. This phenomenon also seems intertwined with the behavior of fairness interventions outputting overly confident classifiers; please see **Fig. R4** and the accompanying discussion in our common reply in this rebuttal. --- **Q3: “Connect their concentration results more to the original work in (non-fairness) ensemble classifiers”** --- Thanks for the Reviewer’s suggestion! It would be an interesting future direction to extend the existing work on ensemble classifiers and specifically Random Forests to consider fairness metrics.
(We note that the referenced concentration results pertain to the probability of error, whereas our concentration results have to do with score agreement between ensembled classifiers.) --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I think the paper should be accepted as long as the authors do include their additional discussion of why arbitrariness is harmful.
Summary: This paper studies the effect of group fairness regularization constraints on a notion termed arbitrariness. Here, arbitrariness is defined to be the variability of the predictions, for a set of similarly accurate models, on individual samples. In this paper, they show that when a model is regularized to satisfy group fairness constraints, it is still possible that the predictions of similarly accurate models show variability in the individual predictions. The set of similarly accurate models on a given subpopulation of data is termed the rashomon set. Here, the authors measure arbitrariness with the quantile of the standard deviation of the output of the models in a rashomon set. They also measure ambiguity by computing the proportion of points in the dataset that can be assigned multiple conflicting predictions. They prove that it is possible for a model to satisfy accuracy equality (i.e. all protected subgroups have similar accuracy), but for the arbitrariness of the rashomon set to be as high as 100 percent. To counter this challenge, they propose a simple scheme that ensembles the models in the rashomon set (convex combination), and show theoretically and empirically (on three datasets for random forest models) that this ensembling scheme reduces arbitrariness. Strengths: **Originality**\ This paper follows in the line of work on predictive multiplicity and underspecification. Overall, this paper explores the interplay between the predictive multiplicity of a rashomon set and group fairness properties of that set. A key insight in this work that I hadn't seen before is that it is possible for group fairness constraints to increase the predictive multiplicity of the rashomon set. This finding seems counter-intuitive. The ensembling approach discussed in this work has been mentioned in past work, like the underspecification paper by D'Amour et al., however, it was not explicitly used to address predictive multiplicity as it is used here.
Overall, this paper furthers the discussion in a particularly interesting way. **Quality/Clarity**\ The paper is well-written and clear. Figure 2 was quite helpful for trying to understand the main message of the paper. Overall, this is a high quality paper. **Significance**\ If it remains the case that arbitrariness is orthogonal to group fairness constraints, then that is an important finding and has far reaching impact on how to obtain models with reliable predictions. The paper has opened up a line of work that can be explored in several ways. Weaknesses: None of the weaknesses I discuss here are disqualifying, but I note below places where the paper can be improved. **Unify the terms Predictive multiplicity, Arbitrariness, and Ambiguity**: Right now, I think these three terms are all referring to the same general phenomenon, but it is unclear whether there is an instantiation of each that is different. Perhaps the authors could unify this. **Proposition 3.1 is only for accuracy parity/equality**: Unclear if this proposition generalizes to other metrics (see questions section). **Restricted model class and dataset**: This paper only explores random forest models on tabular datasets. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - Is equality of accuracy interchangeable with any other group fairness constraint in proposition 3.1? I know Example 2 discusses equality of opportunity, but the main argument for why group fairness constraints and predictive multiplicity are perhaps orthogonal is this proposition, but in reading the proof, it is not clear to me that you can extend this to other fairness metrics. Accuracy parity makes sense here, but I don't think something like group calibration fits the theme. Can the authors comment on this? - How does one pick the number of models in a rashomon set? 
In practice, in the experiments, it seems like you need to set a number for the size of the rashomon set, and this parameter might have an oversized influence on the arbitrariness and other properties of the set. How was this number determined? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: Discussed in the final section of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer 47CU for the thoughtful comments and for appreciating the novelty and value of the work. We hope that the answers below address the points raised in the review. --- **Q1: Proposition 3.1 shows orthogonality of group fairness and predictive multiplicity. Can you extend equality of accuracy to other group fairness constraints (e.g. group calibration) in this proposition?** --- Thanks for the question! We confirm that the orthogonality result in Proposition 3.1 extends to group-fairness constraints beyond Overall Accuracy Equality (OAE), which we will add to the Appendix in the final version. What changes now is that there will always be a floor for the probability of error (denoted $\epsilon_{\min}$), and then the condition for attaining maximal ambiguity would be to have at least $m > 1/(\epsilon - \epsilon_{\min})$ competing models. (Note that $\epsilon_{\min}=0$ for OAE!) We illustrate the case of Statistical Parity (SP) below. We note that group calibration can also be done in the more general setup of probabilistic prediction, so the probability of error is taken with respect to the population distribution. We will add further details on this generalized setup for group calibration in the final version of the paper. Consider SP for binary classes and groups: $\mathrm{Pr}(\hat{Y} = 1 \mid S=0) = \mathrm{Pr}(\hat{Y} = 1 \mid S=1)$. If $Y$ does not satisfy SP, then any predictor $\hat{Y}$ satisfying SP must make an error of at least $$ \epsilon_{\min} := P_Y(1) - \min(P_{Y|S=0}(1) , P_{Y|S=1}(1)).$$ We have the following analogue of Proposition 3.1.
**Proposition 3.1 (For Statistical Parity).** For small $\epsilon > \epsilon_{\min}$, and $m$ large enough, there is an empirical Rashomon set $\hat{\mathcal{R}}_m=\{ h_1, \cdots , h_m\}$ such that: 1) each classifier $h_i$ has probability of error $\epsilon$; 2) each classifier $h_i$ satisfies Statistical Parity (SP) exactly; and 3) the set $\hat{\mathcal{R}}_m$ has $100\%$ ambiguity. In particular, under mild assumptions on $P_{Y,S}$, it suffices to have $m> 1/(\epsilon-\epsilon_{\min})$ models. **Proof.** Denote $p=P_{Y|S=0}(1), q=P_{Y|S=1}(1)$. Assume $p \le q$. Let $\alpha_{0,0}=\epsilon-\epsilon_{\min}$, $\alpha_{1,0}=1-\alpha_{0,0}$, $\alpha_{0,1} = \frac{1-p}{1-q}\alpha_{0,0}$, and $\alpha_{1,1}=\frac{p}{q} \alpha_{1,0}$. For each $(y,s)\in \{0,1\}^2$, let $h$ be a classifier that assigns the class $1$ to a fraction $\alpha_{y,s}$ of the individuals $x$ that have true class $y$ and group $s$. (We have $0<\alpha_{y,s}<1$ if, e.g., $\epsilon < \epsilon_{\min} + (1-q)/(1-p)$.) The classifier $h$: 1. has probability of error $\epsilon$; 2. and satisfies SP exactly. By partitioning the sets of individuals into subsets of relative sizes $r_{y,s}:=\min(\alpha_{y,s},1-\alpha_{y,s})$, we may construct $m>1/\min_{y,s}(r_{y,s})$ classifiers as above that make conflicting predictions on each individual (similar to the proof of Proposition 3.1), i.e., the constructed set has ambiguity $100\%$. Finally, if, e.g., $q \le 1-p(1-p)$ (and $\epsilon$ is correspondingly small enough), then the condition simplifies into $m> 1/(\epsilon-\epsilon_{\min})$. --- **Q2: “How does one pick the number of models in a rashomon set? How was this number determined in the experiments?”** --- We thank the Reviewer for the question. The number of models sampled from the Rashomon set indeed may have an important influence on the arbitrariness metric being measured, but the importance varies with the chosen metric.
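As a quick numeric sanity check of the Statistical Parity construction above (hypothetical values: $p=0.3$, $q=0.5$, $\mathrm{Pr}(S=1)=0.5$, $\epsilon=0.15$), the fractions $\alpha_{y,s}$ defined in the proof give probability of error exactly $\epsilon$ and identical positive-prediction rates in both groups:

```python
# Sanity check of the SP construction with hypothetical values
# (not from the paper's experiments).
p, q = 0.3, 0.5            # P(Y=1|S=0), P(Y=1|S=1), with p <= q
ps1 = 0.5                  # P(S=1)
eps_min = (1 - ps1) * p + ps1 * q - p   # P_Y(1) - min(p, q)
eps = 0.15                 # target error, above eps_min = 0.1

a00 = eps - eps_min
a10 = 1 - a00
a01 = (1 - p) / (1 - q) * a00
a11 = p / q * a10

# Probability of error within each group, then overall.
err_g0 = (1 - p) * a00 + p * (1 - a10)
err_g1 = (1 - q) * a01 + q * (1 - a11)
err = (1 - ps1) * err_g0 + ps1 * err_g1

# Positive-prediction rates: Statistical Parity holds exactly.
pos_g0 = (1 - p) * a00 + p * a10
pos_g1 = (1 - q) * a01 + q * a11

assert abs(err - eps) < 1e-9          # error is exactly epsilon
assert abs(pos_g0 - pos_g1) < 1e-9    # SP holds exactly
```

With these numbers, every $\alpha_{y,s}$ also lies strictly in $(0,1)$, consistent with the condition stated in the proof.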
For example, ambiguity (percentage of samples receiving a conflicting label) is non-decreasing with the number of models. On the other hand, the standard deviation of scores—our metric of choice—is less influenced by this choice of hyper-parameter. In practice, the number of models one can sample depends heavily on available computational resources. Through a literature review and some initial experimentation, we found 10 models to be a reasonable number for estimating the standard deviation of scores, given the size of the datasets and the computation required to train the 3 baseline model classes with the various fairness interventions. Repeating the result using 10 random splits of the data also produces small error bars (shaded regions), which acts as a sanity check that the estimation is fairly consistent. There is a separate question of how many models one should sample in order to accurately estimate a metric of arbitrariness within a Rashomon set. This pertains to a line of work that investigates the size of a Rashomon set, which also depends on the volume of the hypothesis space. We refer the Reviewer to Hsu and Calmon [20] for a related discussion. --- **Q3: “Unify the terms Predictive multiplicity, Arbitrariness, and Ambiguity.”** --- Thanks for the suggestion! The three terms are connected in the following way. Arbitrariness refers to the general phenomenon where a choice — a model or a decision — cannot be justified. Arbitrariness can result from predictive multiplicity, which pertains to prediction tasks and refers to the phenomenon where multiple competing models yield conflicting predictions. Ambiguity, proposed by Marx et al. [28], is also a metric of the predictive multiplicity of a dataset. We understand the confusion since we sometimes use these terms informally. We will add the above discussion to clarify the terminology in the revised version.
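Both metrics discussed above can be written down in a few lines. The sketch below (entirely synthetic scores, hypothetical setup) also illustrates the claim that ambiguity can only grow as models are added to the set:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_models = 500, 50

# Synthetic scores from 50 competing models (rows: samples, cols: models).
scores = np.clip(rng.uniform(0.1, 0.9, n_samples)[:, None]
                 + 0.1 * rng.standard_normal((n_samples, n_models)), 0, 1)

def score_std(s):
    # Per-sample standard deviation of scores across models.
    return s.std(axis=1)

def ambiguity(s, thresh=0.5):
    # Fraction of samples whose thresholded 0/1 label conflicts
    # across the model set (Marx et al.-style ambiguity).
    labels = s >= thresh
    return np.mean(labels.any(axis=1) & ~labels.all(axis=1))

# Ambiguity is non-decreasing in the number of models considered:
# a sample that already conflicts keeps conflicting as models are added.
ambs = [ambiguity(scores[:, :m]) for m in range(2, n_models + 1)]
assert all(a <= b for a, b in zip(ambs, ambs[1:]))
```

In contrast, `score_std` estimates a fixed per-sample quantity, so adding models merely sharpens the estimate rather than monotonically inflating the metric.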
--- - **Q4: “Restricted model class and dataset: This paper only explores random forest models on tabular datasets.”** --- Thanks for the comment! We would like to point out that we have provided results for two other model classes—gradient boosting and logistic regression—in **Section D** of the Appendix, as discussed in **Section 5** of the main body. --- Rebuttal Comment 1.1: Title: Acknowledging Rebuttal Comment: Thanks for the response clearing up the issues that I had. Overall, I think this work is a nice contribution, so I'll maintain my current rating.
Rebuttal 1: Rebuttal: We thank all reviewers for their time and effort! We are glad our paper was positively received. In particular, we were encouraged that all reviewers recognized the novelty and impact of our work: “If it remains the case that arbitrariness is orthogonal to group fairness constraints, then that is an important finding and has far-reaching impact on how to obtain models with reliable predictions. The paper has opened up a line of work that can be explored in several ways.” (**Reviewer 47CU**); “This is a problem that has not been studied and was worth studying because practitioners don't think about it, and should start doing so.” (**Reviewer MNFT**); “The authors visit the active yet understudied topic of arbitrariness in a seemingly novel way” (**Reviewer APJf**); and “the observation that models under fairness interventions have higher predictive multiplicity is a novel and interesting one” (**Reviewer gGVC**). We appreciate the reviewers’ thoughtful input. We believe we have addressed all critical points in the rebuttal below and have outlined how we plan to update the paper accordingly. --- ### **Why is arbitrariness (i.e., predictive multiplicity) harmful?** In response to **Reviewer MNFT Q1, Reviewer APJf Q1, Reviewer gGVC Q1**, we provide further discussion (to be added to the revision) to contextualize this work. Our work is motivated by recent literature that delineates the harmful impact of arbitrariness. See Creel and Hellman [12] for a philosophical overview of the hazards of arbitrariness in algorithmic decision-making, the introduction of Marx et al. [28] for potential harms of predictive multiplicity in ML, and D’Amour et al. [14] for extensive examples of how underspecification and the ensuing arbitrary choices among competing models can challenge the credibility of ML models. There is a growing recognition that at least some types of arbitrariness in ML can be viewed as harmful; in particular, *disparate* arbitrariness.
Arbitrariness itself may be inevitable when models are overparametrized. Harm arises when (i) an arbitrary choice of model across the set of “equally good and fair” models impacts different individuals differently, (ii) the model is deployed in a high-stakes application with individual-level consequences, and/or (iii) the same model is used across institutions, where an arbitrary choice of model may lead to systemic exclusion for certain individuals. One example from [12, 28] is: - **Recidivism prediction:** consider a recidivism risk prediction task that admits competing models with similar accuracy and group fairness levels. In this case, a person who is predicted to recidivate by one model may achieve a lower recidivism risk score by a competing model. An arbitrary choice between these models may lead to unjustified harm, e.g., some defendants may receive consistent predictions across competing models whereas others may not. The harm is particularly concerning when the existence of competing models is unknown or not communicated to the model user. In such cases, arbitrariness can strengthen the growing calls to forgo the use of ML in criminal justice. As thoroughly discussed by Creel and Hellman [12] (“Algorithmic Leviathans”), arbitrariness may lead to systemic exclusion of opportunities for certain individuals when a single decision-making system, out of the many equally good ones, is chosen without deliberation. --- ### **Explanation of Attached Figures to the Rebuttal** We would like to direct your attention to the attached page, with further analysis of the fairness-intervention-induced arbitrariness. These figures aim to address points raised by **Reviewer APJf (Q2) and Reviewer gGVC (Q2 and Q4)**. In short, these figures illustrate three points: **1)** Fairness intervention makes classifiers overly confident (attached **Fig.
R4**); **2)** This overconfidence partially explains the increase in arbitrariness after intervention; and **3)** Additional significant arbitrariness is induced post-intervention that affects a sizable portion of the population that had no arbitrariness pre-intervention (attached **Fig. R2**). - In response to **Reviewer APJf**’s question on the missing trend in **Fig. 4** in the manuscript, in **Fig. R3**, we added 3 curves to the original **Fig. 1** in the main paper, including different levels of intervention using the Rejection approach [23] as well as the thresholded baseline. From the similarity in the shape of the thresholded baseline curve and the fair models’ curves, thresholding-like behavior of some interventions may explain some—but certainly not all (see **Fig. R2**)—of the increase in score std. dev. and the ensuing arbitrariness. We plot in **Fig. R2** the distribution of score std. relative to the *thresholded* baseline model. After removing samples that receive very low score std. from both the thresholded baseline and the fair classifiers, the largest group (blue area) in this violin plot is the individuals for whom std. increases from 0 to a large positive value (median above 0.15). Hence, the blue area shows that significant arbitrariness is introduced by the fairness intervention, in addition to and separate from the effects of thresholding the baseline. Combining the findings from **Figs. R2** and **R3** reveals that overconfidence of scores post-intervention partially explains the shape of the cumulative plots in the original **Fig. 1** of the manuscript. - Lastly, the aforementioned overconfidence of the post-intervention classifiers is illustrated in **Fig. R4** (addressing **Reviewer gGVC**’s **Q4**). **Fig. R4** provides a histogram of the scores of fair classifiers (from the Reductions [1] approach), which displays the inherently overconfident, almost threshold-like, scores post-intervention.
--- Please do follow up with us if you have additional suggestions and feedback that can further strengthen the paper. Thank you very much! Pdf: /pdf/6666cf44db6e32b2a964fdefb1f9865e654b8115.pdf
NeurIPS_2023_submissions_huggingface
2023
Variational Gaussian Processes with Decoupled Conditionals
Accept (poster)
Summary: This paper addresses enhancing the performance of sparse approximate Gaussian processes, which offer scalability in the dataset size but cause performance degradation. The authors propose a novel parametrization that decouples the training and testing conditionals, so that the lengthscales in the kernel representations of their means and covariances can take different values. Experimental results on multiple public datasets demonstrate that the proposed method performs better than existing methods. Strengths: - Experimental results of the proposed method showed promising performance. Weaknesses: ### Originality: - Originality is limited to the proposal of Eq.3. ### Clarity: - The important concept of how to select or generate inducing points is not described in the main text. For example, the inducing points differ between the left and right of Figure 1, but it is not described why. - It would be better to provide the definition of SVGP in the Introduction. - In the equation under l.53, some variables, such as mu, are not defined. The notation for x* is inconsistent. - In the description of Eq.1, it would be better to define Kmm explicitly. Accordingly, the difference between the Kmms for the proposed method and the existing methods is not clear. - It would be better to introduce p(f∗|fm) with more intuition. - It is unclear how to determine the regularization parameter. Technical Quality: 3 good Clarity: 1 poor Questions for Authors: NA Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 1 poor Contribution: 2 fair Limitations: There is a study of performances over different hyperparameters.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We will address each of your questions below. > Originality is limited to the proposal of Eq.3. We would greatly appreciate it if you could **concretely** point out the limitations on our novelty. We believe that referring to Eq (3) as our full novelty is reductive, as it describes only the modifications we make to the prior and not the resulting considerations when deriving an inference scheme or implementation details like whitening. Arguably, you could be this reductive about other prior published work just as easily, and in our opinion just as wrongly (e.g., by this logic, Salimbeni et al., 2018 “reduces” to equation 12, Jankowiak et al., 2020 “reduces” to equation 18, and so on). To be concrete about our novelty from our end, as far as we are aware, ours is not the first work to consider modifications to the conditionals used in inducing point methods, but *is* the first to consider *more flexible* conditionals. This has significant payoff, reducing error by as much as half on several commonly used benchmark datasets in the GP literature. By being the first to do so, our method is arguably “novel.” > The important concept of how to select or generate inducing points is not described in the main text. For example, inducing points are different between the left and right in Figure 1, but it is not described why. In all results, the inducing points are learned automatically by maximizing the ELBO, which is (1) standard in practice, and (2) the default approach to selecting inducing points in software (e.g., both in GPFlow and in GPyTorch). This allows us to select the “best” inducing points, assuming the numerical optimization of the ELBO is done properly. It is natural and expected to have different inducing points on the left and right in Figure 1. The inducing points are learned parameters of the model, and the left and right figures depict different trained models.
We will revise the paper to make this more clear. > It is better to provide the definition of SVGP in Introduction. SVGP, stochastic variational Gaussian process, is a widely used method in the literature and we will add the definition of SVGP in the introduction to make it more clear. > In the equation under l.53, some variables, such as mu, are not defined. The notation for x* is inconsistent. We will carefully go through the paper for better consistency. In the equation under l.53, $\mu(\cdot)$ is the prior mean function. We will write the variable $\mathbf{x}^*$ in l.53 in bold for consistency. Thanks for spotting the typo. > In the description of Eq.1, it is better to define Kmm explicitly. Accordingly, the difference between Kmms for the proposed method and the existing methods is not clear. $K_{mm} \in \mathbb{R}^{m\times m}$ is the kernel matrix formed by kernel function evaluated at inducing points: $K_{mm}(i, j) = k(u_i, u_j)$ where $u_i, u_j$ are inducing points, which is a standard expression in GP literature and we will make it more clear in the text. The difference between Kmms for our method and the existing methods lies in the decoupling of the kernel hyperparameters and we provide explicit examples in section 3.2 (example 1 and 2). > It is better to introduce $p(f^∗|f_m)$ with more intuition. $p(f^* \mid f_m)$ is the distribution of testing labels $f^*$ conditioned on the inducing values $f_m$ induced directly by having a GP prior. It would be the same distribution as conditioning a GP on the inducing points and the noise-free labels $f_m$ and making predictions at the test points. > It is unclear how to determine the regularization parameter. We thoroughly evaluate the effect of $\beta_2$ in Figure 2 and section 5.2 on two of our test datasets, with additional similar figures on most datasets in the supplementary materials (see l.207-l.209 in the paper and Figure 5&6 and Table 7&8 in the supplement). 
In practice, the lessons learned in this study seem to be general, where our method prefers lower values of $\beta_2$ but is insensitive otherwise. To set the regularization parameter even more precisely on a real-world application of our method, consider doing cross validation. Please let us know if you have further questions and we would appreciate it if you would raise your score given our response. ### References Salimbeni, Hugh, et al. "Orthogonally decoupled variational Gaussian processes." Advances in Neural Information Processing Systems 31 (2018). Jankowiak, Martin, Geoff Pleiss, and Jacob Gardner. "Parametric Gaussian process regressors." International Conference on Machine Learning. PMLR, 2020. --- Rebuttal 2: Comment: Dear reviewer TCkp, As the discussion deadline is approaching, we kindly ask you to check our rebuttal response :) If our rebuttal addressed your concerns, we would like to ask if you are willing to reconsider your score. Also, please let us know if you have further questions we can respond to! Thank you!
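For reference, the quantities discussed in the rebuttal above, the kernel matrix $K_{mm}$ and the noise-free conditional $p(f^* \mid f_m)$, amount to the standard GP conditional. A minimal NumPy sketch (illustrative RBF kernel and toy inducing points, not the paper's implementation):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    # k(x, x') = exp(-(x - x')^2 / (2 * lengthscale^2)) for 1-D inputs
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * lengthscale ** 2))

u = np.linspace(-2.0, 2.0, 5)   # toy inducing point locations
x_star = np.array([0.3, 1.1])   # test inputs
f_m = np.sin(u)                 # toy inducing values

K_mm = rbf(u, u)                # K_mm(i, j) = k(u_i, u_j)
K_sm = rbf(x_star, u)           # cross-covariance K_*m
K_ss = rbf(x_star, x_star)      # test covariance K_**

# p(f* | f_m) = N(K_*m K_mm^{-1} f_m, K_** - K_*m K_mm^{-1} K_m*):
# conditioning the GP on the noise-free values at the inducing points.
jitter = 1e-8 * np.eye(len(u))
tmp = np.linalg.solve(K_mm + jitter, K_sm.T)   # K_mm^{-1} K_m*
mean = K_sm @ np.linalg.solve(K_mm + jitter, f_m)
cov = K_ss - K_sm @ tmp

assert mean.shape == (2,) and cov.shape == (2, 2)
assert np.all(np.linalg.eigvalsh(cov) > -1e-6)  # PSD up to numerical error
```

The decoupled-conditionals idea the paper discusses would, in this notation, allow the kernels used in `mean` and `cov` to carry different lengthscales; the sketch above shows only the shared-hyperparameter baseline.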
Summary: This paper studies sparse Gaussian processes which allow different kernel hyperparameters such as length scale to be used, for instance, for the variational posterior mean and covariance. They derive a variational approximation that enables training of models of this kind. This extra scalable approximation flexibility is shown to lead to better predictions on UCI datasets, as well as a few benchmark-style Bayesian optimization problems. EDIT: As indicated below, I've acknowledged the authors' rebuttal and previously updated my review and score appropriately. Strengths: The paper is technically sound, generally well-written, and includes all of the content and evaluation I would expect for this topic. The authors have done a reasonable and comprehensive job here. They evaluate on both modeling tasks (UCI dataset regression) and decision-making tasks (Bayesian optimization), as well as include an ablation study for one of the key non-obvious hyperparameters they introduce (the \beta regularization term which controls how the test conditional component enters the objective). My most common reaction while reading this paper was "that makes sense" - which is actually very good, but also means that I don't have too many comments on what to add since the paper was largely clear, and therefore this review will be a bit short. The authors should not be discouraged by this - the shortness in this case should be understood as a sign they wrote the paper well. Weaknesses: This paper is studying what I view as a relatively old-school topic, and as a result much of the value here is from good evaluation and execution of ideas, rather than from the ideas themselves. 
Inducing point approximations have been studied very extensively, and at this stage there are so many variations out there that it isn't clear why we need to write more papers about them, since these methods are mature enough that readers can probably figure out new ones themselves when they are needed for applied work. The need to add $\beta$ regularization parameters is a bit suspicious, since this suggests there might sometimes be a need to downscale the extra term coming from different conditionals because it starts behaving badly in the limit of many test points. It also forces the user to consider even more hyperparameters, and in most cases there are already enough of those out there. As a result, I'm curious for what choices of conditionals the respective infinite-dimensional variational approximation problem between stochastic processes is well-defined. See questions. The empirical evaluation is overall good, but it focuses too much on RMSE tables on UCI datasets. These take up a lot of space and are not super informative, so it would be better to have just one and move extra results into the appendix, creating more room for describing the experiments. I would be particularly interested to see more figures which evaluate the technique's produced uncertainty, particularly since inducing points tend to blow up uncertainty and over-smooth the data in situations where there are not enough inducing points, and it would be much more interesting to see how the authors' technique behaves in this situation, rather than more RMSE tables. Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Is there a way to view the inducing point approximation you propose as a random function via pathwise conditioning? By this, I mean that, since the true posterior satisfies $(f|y)(.) = f(.) + K_{(.)x} (K_{xx} + \sigma^2 I)^{-1} (y - f(x) - \epsilon)$ where $f \sim GP(0,k)$ and $\epsilon \sim N(0,\sigma^2 I)$, and the approximation of Titsias satisfies $(f|u)(.) 
= f(.) + K_{(.)z} K_{zz}^{-1} K_{zx} (K_{xz} K_{zz}^{-1} K_{zx} + \sigma^2 I)^{-1} (y - g(x) - \epsilon)$ where $g(x) \sim N(0, K_{xz} K_{zz}^{-1} K_{zx})$ and $\epsilon \sim N(0,\sigma^2 I)$, your posterior approximation might admit a similar formula, and its structure should give insight into the approximation's behavior. Do you know what the respective formula is? Is your variational optimization problem consistent in the limit of infinitely many test points? By this, I mean, does it lead to a valid Kullback-Leibler divergence minimization problem between stochastic processes, where the true posterior and variational approximation are absolutely continuous so that the KL is well-defined? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
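The pathwise-conditioning (Matheron's rule) formula quoted in the question above can be checked numerically. Below is a minimal numpy sketch for the exact GP case; all data, lengthscales, and sample counts are illustrative and not taken from the paper. Averaging many updated prior paths should recover the exact posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=1.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# Toy 1-D regression problem (illustrative data).
x = np.linspace(-2.0, 2.0, 8)          # training inputs
y = np.sin(x)                          # training targets
xs = np.linspace(-1.9, 1.9, 40)        # test inputs
noise = 0.1

Kxx = rbf(x, x)
Ksx = rbf(xs, x)
A = Kxx + noise**2 * np.eye(len(x))
mu_exact = Ksx @ np.linalg.solve(A, y)  # exact GP posterior mean, for reference

# Pathwise conditioning:
#   (f|y)(.) = f(.) + K_(.)x (Kxx + sigma^2 I)^{-1} (y - f(x) - eps)
# Draw joint prior samples over [xs, x], then apply the update to each path.
all_pts = np.concatenate([xs, x])
L = np.linalg.cholesky(rbf(all_pts, all_pts) + 1e-6 * np.eye(len(all_pts)))
n_paths = 8000
prior = L @ rng.standard_normal((len(all_pts), n_paths))
f_s, f_x = prior[:len(xs)], prior[len(xs):]
eps = noise * rng.standard_normal((len(x), n_paths))
posterior_paths = f_s + Ksx @ np.linalg.solve(A, y[:, None] - f_x - eps)

# Averaging the pathwise samples recovers the exact posterior mean
# (up to Monte Carlo error).
err = np.max(np.abs(posterior_paths.mean(axis=1) - mu_exact))
print(err)
```

Each column of `posterior_paths` is one posterior function sample on the test grid, which is the "random function" view the question refers to.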
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We will address each of your questions below. > "Could move some RMSE tables into appendix and describe more about experiments." Thanks for the suggestion. We plan to use the additional page of content to add further description of the experiments. We agree that the RMSE results are generally less important than the NLL results given the probabilistic nature of the model, and are happy to move them to create additional space as well. > " ... more figures which evaluate the technique's produced uncertainty, particularly since inducing points tend to blow up uncertainty and over-smooth the data in situations where there are not enough inducing points, and it would be much more interesting to see how the authors' technique behaves in this situation, rather than more RMSE tables." We agree that uncertainty evaluation is very important. We do at least provide the NLL metric, which evaluates both accuracy and uncertainty. For visualization, with synthetic data, we see that when there are not enough inducing points, uncertainty blows up and the data is over-smoothed (Figure 1a), and our method resolves this issue (Figure 1b). On real datasets, our setting generally does not have enough inducing points due to the large number of training data, and we provide model calibration results in terms of z-score distribution figures in the supplement (Figure 3 in section 2.1.3 in the supplement). It shows that our method has better-calibrated predictive uncertainty. We will draw more attention to this using the additional page. Please let us know if you would like to see more visualization of the uncertainty evaluation. > "View the inducing point approximation you propose as a random function via pathwise conditioning. ... Your posterior approximation might admit a similar formula, and its structure should give insight into the approximation's behavior. Do you know what the respective formula is?" Yes.
We can derive a similar formula following Titsias. For standard variational GP, it requires computing the optimal variational distribution $\phi^*$ by differentiating the ELBO. Our ELBO for the decoupled model is formed similarly with an additional expected KL divergence term. If we set the regularization parameter $\beta_2=0$ (removing the additional expected KL divergence term), we could follow Titsias and find a similar optimal variational mean $ \mathbb{m}^*=\sigma^{-2} Q_{zz}\Sigma^{-1} Q_{zx} y$, where $\Sigma = Q_{zz} K_{zz}^{-1} Q_{zz} + \sigma^{-2} Q_{zx} Q_{xz}$. With $\mathbb{m}^*$, we can then derive the predictive mean $ \mu(\cdot) = Q_{(\cdot) z} Q_{zz}^{-1} \mathbb{m}^* $. Finally, the random function expression of the decoupled model is: $ (f\|y)(\cdot) = f(\cdot) + Q_{(\cdot) z} P_{zz}^{-1} Q_{zx} (Q_{xz} P_{zz}^{-1} Q_{zx} + \sigma^2 I)^{-1} (y - g(x) - \epsilon)$, where $P_{zz} = Q_{zz} K_{zz}^{-1} Q_{zz}$ and $g(x) \sim \mathcal{N}(0, Q_{xz} P_{zz}^{-1} Q_{zx})$. So the differences between the standard variational GP formula and ours are (1) our formula mostly involves the Q kernel matrices from decoupled mean and (2) the $K_{zz}$ matrix in the standard formula is replaced by $P_{zz}$ which involves the interaction of $K_{zz}$ and $Q_{zz}$. However, in the more general case where $\beta_2 \ne 0$, it is harder to solve for the optimal variational distribution and therefore trickier to find such a formula. With that said, we note (admittedly loosely) that as $\beta_2 \to \infty $, $K_{zz} \to Q_{zz}$ and therefore $P_{zz} \to K_{zz}=Q_{zz}$, which would recover the original derivation. Thus the closed form looks like–but may not actually be–a strict generalization. > "Is your variational optimization problem consistent in the limit of infinitely many test points? 
By this, I mean, does it lead to a valid KL divergence minimization problem between stochastic processes, where the true posterior and variational approximation are absolutely continuous so that the KL is well-defined?" Note that the modification we made to derive our model is not directly made to the SVGP ELBO, but to the GP prior itself. Thus, our derivation results in simply a standard variational inference problem for a slightly different probabilistic model. Our method is therefore best interpreted not as a “modified SVGP” but rather as a “modified GP” to which we are applying variational inference, and there is indeed still a true posterior being approximated by our KL minimization. Thank you for your insightful comments and please let us know if you have further questions. --- Rebuttal Comment 1.1: Comment: Thanks very much for your comments! Overall, I liked the paper, I am somewhat surprised that my review turned out to be the most positive one. I have no major issues with the paper being accepted, so it seems to me that responding to the other referees' feedback and concerns is in order. --- Reply to Comment 1.1.1: Comment: Thank you for your support!
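The closed-form quantities derived in the rebuttal above ($\mathbb{m}^*$, $\Sigma$, and the predictive mean, in the $\beta_2 = 0$ case) can be sanity-checked with a small numpy sketch. With $Q = K$ and inducing points placed at the training inputs, the formula should collapse to the exact GP posterior mean $K_{sx}(K_{xx} + \sigma^2 I)^{-1} y$; the kernel, data, and sizes below are illustrative, not code from the paper.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def decoupled_mean(Q, K, z, x, xs, y, sigma2):
    """Predictive mean mu(.) = Q_(.)z Qzz^{-1} m*, with
    m* = sigma^{-2} Qzz Sigma^{-1} Qzx y and
    Sigma = Qzz Kzz^{-1} Qzz + sigma^{-2} Qzx Qxz  (the beta_2 = 0 case)."""
    Qzz, Qzx, Qsz = Q(z, z), Q(z, x), Q(xs, z)
    Kzz = K(z, z)
    Sigma = Qzz @ np.linalg.solve(Kzz, Qzz) + Qzx @ Qzx.T / sigma2
    m_star = Qzz @ np.linalg.solve(Sigma, Qzx @ y) / sigma2
    return Qsz @ np.linalg.solve(Qzz, m_star)

# Toy data (illustrative).
x = np.linspace(-2.0, 2.0, 6)
y = np.cos(x)
xs = np.linspace(-1.5, 1.5, 7)
sigma2 = 0.05

# Collapse check: Q = K and z = x should give the exact GP posterior mean.
mu_dec = decoupled_mean(rbf, rbf, x, x, xs, y, sigma2)
mu_exact = rbf(xs, x) @ np.linalg.solve(rbf(x, x) + sigma2 * np.eye(len(x)), y)
print(np.max(np.abs(mu_dec - mu_exact)))
```

Passing a second kernel with different hyperparameters as `Q` gives the decoupled mean the rebuttal describes.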
Summary: This paper develops a variational approximation for learning sparse Gaussian Processes. The central idea is to "decouple" the mean and covariance parameters. From my read, this decoupling simply means formulating two different kernel matrices (called $Q$ and $K$ in the paper), and then working those new parameters through the inference and learning mechanisms: to that end, the paper provides appropriately-detailed derivations, including for some empirical concerns (such as whitening). The paper formulates the proposed approach in two cases, evaluating on 10 benchmark regression tasks. Overall, the proposed approach yields improved performance on the benchmark tasks. Ablation studies are also performed, which help provide insight into the approach. Strengths: Overall, this paper achieves an acceptable balance of mathematical derivation vs. explanation. However, please see "Clarity" in Weaknesses for qualifications. Experiments: Experiments were averaged across 10 runs, and sufficient details of standard error & significance were included. There's an appropriate amount of rigor, and ablation studies were run. Weaknesses: Clarity: Key summary statements, to act as milestones, throughout would definitely help. This is especially true of Table 1, section 3.2 (Eqn 3), and section 3.3: broadly, the equations are presented, but without a lot of discussion contextualizing the meaning and importance of those equations. Limited discussion: Given the number of experiments run, the discussion does not have a good pay-off. For example, Table 5 (one of the ablation studies) has fewer than 3 lines of prose dedicated to discussing those results in the main paper. Lack of error analysis: The battery of regression benchmark tasks gives some evidence of broad performance, but there's no error analysis or insight into the numbers and performance. Currently, the results read as "just numbers": it's not clear, e.g., what a reduction from 0.321 to 0.156 on "Pol" means.
This is more of a minor point, but not without impact: splitting related work into sections 2.1 and 4 was not effective for me. It made 2.1 very much a slog, and section 4 seem unimportant. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: Q1: Please provide a few paragraphs that would form the basis of an error analysis & extended discussion section. Q2: Please provide a limitation discussion (or if I missed it in the paper, please point me to it). Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: No: no obvious limitations section was provided in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. We reply to each of your suggestions and questions below. > Clarity: Key summary statements, to act as milestones, throughout would definitely help. This is especially true of Table 1, section 3.2 (Eqn 3), and section 3.3: broadly, the equations are presented, but without a lot of discussion contextualizing the meaning and importance of those equations. Thank you for pointing out areas we could focus on for adding clarification. Space was fairly tight given our reasonably extensive experimental evaluation, and we plan to use the additional content page to add further summary and discussion. > Limited discussion: Given the number of experiments run, the discussion does not have a good pay-off. For example, Table 5 (one of the ablation studies) has fewer than 3 lines of prose dedicated to discussing those results in the main paper. We will make sure to add more discussion of experimental results, especially for Tables 4 and 5. For example, in Table 5, our SVGP-DCDKL performs the best in both mean performance (RMSE) and probabilistic performance (NLL). This suggests that it is more beneficial to have different models (decoupled feature extractors in SVGP-DCDKL) for the conditional mean and covariance rather than only modeling the conditional mean (SVGP-MeanDKL) or using the same model for both (SVGP-DKL). Moreover, it is surprising that SVGP-MeanDKL outperforms the baseline SVGP-DKL, which re-emphasizes that a *decoupled* (*different*) or even a much simpler model for the covariance is more beneficial than using the same model for the conditional mean and covariance. For the PPGPR base model, PPGPR-DCDKL also performs the best and we can draw the same conclusion. However, PPGPR-MeanDKL does not outperform the baseline PPGPR-DKL.
This suggests that for the PPGPR base model, it is vital to decouple the feature extractor (PPGPR-DCDKL) rather than having a simpler model for the covariance (PPGPR-MeanDKL), since PPGPR treats the predictive variance differently in the learning objective. > Lack of error analysis: The battery of regression benchmark tasks gives some evidence of broad performance, but there's no error analysis or insight into the numbers and performance. Currently, the results read as "just numbers": it's not clear, e.g., what a reduction from 0.321 to 0.156 on "Pol" means. We use standard regression benchmark datasets that are commonly used in GP regression papers (e.g. Wenger et al., 2022, Jankowiak et al., 2020, Wang et al., 2019), and the labels are standardized to mean 0 and variance 1. We will do our best to better explain the meaning of improvements in the numbers. For example, the reduction from 0.321 to 0.156 on "Pol" generally indicates a large performance improvement. > This is more of a minor point, but not without impact: splitting related work into sections 2.1 and 4 was not effective for me. It made 2.1 very much a slog, and section 4 seem unimportant. We will do our best to reorganize the related work. Section 2.1 is intended to contain necessary background and prior work needed to understand our method directly. Section 4 is intended to contain broader related work and provide context. Such a section is commonly included and we will do our best to make it more coherent. > Please provide a few paragraphs that would form the basis of an error analysis & extended discussion section. Please provide a limitation discussion (or if I missed it in the paper, please point me to it). As stated above, we will provide more summary statements, more discussion of experimental results, as well as error analysis.
We will also add a limitation discussion -- for example, we only applied our decoupling framework to regression tasks and BO applications, and we have not applied it to classification tasks. Thank you for your positive and constructive feedback! Please let us know if you have further questions. ### References Wenger, Jonathan, et al. "Preconditioning for scalable Gaussian process hyperparameter optimization." International Conference on Machine Learning. PMLR, 2022. Jankowiak, Martin, Geoff Pleiss, and Jacob Gardner. "Parametric Gaussian process regressors." International Conference on Machine Learning. PMLR, 2020. Wang, Ke, et al. "Exact Gaussian processes on a million data points." Advances in Neural Information Processing Systems 32 (2019). --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. Assuming these changes are included in the next version, they satisfactorily address my main concerns. --- Rebuttal 2: Comment: Dear reviewer 2Aqn, As the discussion deadline is approaching, we kindly ask you to check our rebuttal response :) Also, please let us know if you have further questions we can respond to! Thank you!
Summary: This paper considers the problem of increasing the expressivity of sparse Gaussian processes without increasing the number of inducing points by considering a decoupled ELBO. In their setup, the gram matrices appearing in the predictive mean and covariance have different parameterizations. They consider two such parameterizations, the first corresponding to learning different length-scales in a non-ARD RBF kernel, and the second using different neural networks in a deep kernel learning setup. An ELBO is derived in this setup, and a version is given for the PPGPR analogue. Whitening is considered, with mean-whitening being favored. Numerical examples are given on UCI regression and Bayesian optimization tasks. The effects of differing levels of regularization are examined. Extensions parameterized with further neural networks are considered. Strengths: The paper is well structured and written. Numerical examples are given and their methods appear to perform well. The new parameter that's introduced in their derivation is studied. Weaknesses: There are typos/grammatical errors, as well as inconsistencies in the use of abbreviations (see e.g. 5.2 vs the previous paragraph), so this should be thoroughly checked by the authors. The non-DKL baselines would be illuminating in the DKL experiments. Only non-ARD kernels are considered when considering different length-scales, without justification - it would seem natural to consider different length scales in each dimension, as is common practice. The authors give a choice between one of two different whitening matrices; however, no justification is given as to why both are not used. Since the model separates the posterior mean and covariance, a metric that measures both and their tradeoff would be useful, or at least discussion of how this can be inferred from the reported results. Technical Quality: 3 good Clarity: 3 good Questions for Authors: How do the DKL models compare to SVGP? Why are only non-ARD kernels considered?
Why is a model that utilizes both whitening matrices not considered? Can you report performance in a way that considers the tradeoff between accuracy and uncertainty? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: These are discussed above from my end. No limitations of the study are listed in the work, however the risk of harm or civilisation collapse based on this work is minimal. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We will address each of your questions below. > Typos/grammatical errors and inconsistencies in use of abbreviations. Thank you for pointing this out. We will make sure to resolve all inconsistencies, for example changing DKL-DCPPGPR to PPGPR-DCDKL in 5.2. > The non-DKL baselines would be illuminating in the DKL experiments. How do the DKL models compare to SVGP? The difference between non-DKL and DKL is only in the use of a feature extractor. If you wish to compare DKL to non-DKL performance, note that we run all methods on the same datasets using the same setup, so the values in Tables 2 and 3 are already comparable. > Only non-ARD kernels are considered when considering different length-scales without justification. We do consider ARD kernels, and decouple the multi-dimensional length scales in the same way. We mentioned that results with ARD kernels are in the supplement (see l.179-l.181 in the paper and Table 2 in the supplement). Decoupled ARD kernels perform similarly well to decoupled non-ARD kernels. We will draw more attention to this. > The authors give choice between one of two different whitening matrices, however no justification is given to why both are not used. We can only choose to use one of the whitening schemes (see l.81-l.84 and l.146-l.150 in the paper). If one uses both whitening matrices, for example whitening the variational mean $\mathbb{m}$ with $K_{mm}$ and whitening the variational covariance $\mathbb{S}$ with $Q_{mm}$, then neither the predictive distribution nor the KL divergence is simplified. We choose to use the Q matrices and provide the intuition behind our choice. We also provide empirical results comparing the two choices in the supplement (see l.179-l.181 in the paper, and Tables 3 & 4 in section 2.1.5 in the supplement). We will draw more attention to this.
> Since the model separates the posterior mean and covariance, a metric that measures both and their tradeoff would be useful, or at least discussion on how this can be inferred from the reported results. We did not separate the posterior mean and covariance directly. We provided a decoupled conditional probability, which still leads to a unified ELBO for a single probabilistic model. We already report both RMSE and NLL, with RMSE measuring only mean performance and NLL measuring probabilistic performance. Reporting these two metrics to address this concern is common practice in the GP literature (e.g. Hensman et al., 2017, Havasi et al., 2018, Salimbeni et al., 2018, Jankowiak et al., 2020). > Can you report performance in a way that considers the tradeoff between accuracy and uncertainty? A GP provides a natural tradeoff between accuracy and uncertainty since the predictive mean and covariance are analytically computed. We evaluate two metrics - RMSE, measuring the predictive mean (accuracy), and NLL, measuring both the predictive mean (accuracy) and the predictive variance (uncertainty). If our response answers your questions, would you kindly update your rating? Please let us know if you have further questions. ### References Hensman, James, Nicolas Durrande, and Arno Solin. "Variational Fourier Features for Gaussian Processes." J. Mach. Learn. Res. 18.1 (2017): 5537-5588. Havasi, Marton, José Miguel Hernández-Lobato, and Juan José Murillo-Fuentes. "Deep Gaussian processes with decoupled inducing inputs." arXiv preprint arXiv:1801.02939 (2018). Salimbeni, Hugh, et al. "Orthogonally decoupled variational Gaussian processes." Advances in Neural Information Processing Systems 31 (2018). Jankowiak, Martin, Geoff Pleiss, and Jacob Gardner. "Parametric Gaussian process regressors." International Conference on Machine Learning. PMLR, 2020.
--- Rebuttal Comment 1.1: Title: Response Comment: I've read your response and am satisfied with the document, provided the promised changes are made. While introducing a new metric to directly measure the tradeoff between uncertainty and accuracy would be nice, doing so is probably out of the scope of this work, and reporting RMSE and NLL is reasonable. I have adjusted my score accordingly. --- Rebuttal 2: Comment: Dear reviewer u1uu, As the discussion deadline is approaching, we kindly ask you to check our rebuttal response :) If our rebuttal addressed your concerns, we would like to ask if you are willing to reconsider your score. Also, please let us know if you have further questions we can respond to! Thank you!
NeurIPS_2023_submissions_huggingface
2023
State-Action Similarity-Based Representations for Off-Policy Evaluation
Accept (poster)
Summary: This paper introduces a new diffuse metric for measuring behavioral similarity between state-action pairs for OPE, named ROPE. ROPE is used to learn state-action representations using available offline data. Theoretically, this metric can bound the OPE error. Empirically, ROPE boosts the data-efficiency of FQE and achieves lower OPE error than other OPE-based representation learning algorithms. It is claimed that this work is the first that successfully uses representation learning to improve the data-efficiency of OPE. Strengths: (1) There is a certain degree of application innovation in this work. It is claimed that this work is the first that successfully uses representation learning to improve the data-efficiency of OPE. (2) The research on related works is relatively comprehensive. The techniques involved in the proof are interesting and completely non-trivial. The maths overall seem correct and fully rigorous. (3) The main body of the paper is well-written and easy to follow. (4) This work enhances the data-efficiency of OPE methods through representation learning, which is of great significance for OPE methods. Weaknesses: (1) While the main body of the paper is well-written, there is space for improvement. I defer some of my issues in the appendix to "Questions". (2) Plots can be improved by taking colorblindness into consideration in the colour scheme. For instance, avoid the red-green-blue combination (see e.g. https://davidmathlogic.com/colorblind/#%23D81B60-%231E88E5-%23FFC107-%23004D40 for more details). (3) There are a few misprints/suggestions in the text of the main body that I spotted: Line 142: “s1`, s2`\~P,a1`,a2`\~pi_e” should be ”s1`~P, s2`\~P, a1`~pi_e, a2`\~pi_e“ or “s1`,s2`\~P; a1`,a2`\~pi_e”. Line 167: “d_pi_e(s1,a1 , ;s2,a2)” should be “d_pi_e(s1,a1;s2,a2)”. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: (1) What is the meaning of △ in line 56? There seems to be no note.
(2) This work proposes a state-action behavioral similarity metric, the core of which is learning state-action representations. Can these state-action representations only be used for off-policy evaluation? What is the effect of using this method to fit the Q-function in a general reinforcement learning algorithm? (3) In Lemma 2 and Theorem 2, why does the existence of the difference upper bounds demonstrate that the learned representations can help FQE estimate ρ(pi_e)? (4) The answers to the three questions posed in the Empirical Study are not obvious. Which sections correspond to the answers? (5) In Figure 1(c), what is the meaning of the 0.7 circled in red? Is it the reference state-action (s*, a*)? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 2 fair Contribution: 2 fair Limitations: Limitations are explicitly discussed by the paper and the authors have partially addressed them. As far as I can see, it has no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
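For reference, fitted Q-evaluation (FQE), the OPE backbone that ROPE's learned representations are plugged into, can be sketched in a minimal tabular form. The toy MDP, policies, and dataset below are illustrative inventions, not taken from the paper, and exact tabular averaging stands in for the regression step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny deterministic MDP: 3 states, 2 actions. State 2 is absorbing with
# reward 1; elsewhere, action 1 moves one state forward and action 0 stays
# (both with reward 0).
n_s, n_a, gamma = 3, 2, 0.9
def step(s, a):
    if s == 2:
        return 2, 1.0
    return (min(s + 1, 2), 0.0) if a == 1 else (s, 0.0)

# Offline dataset from a uniform behavior policy.
data = []
for _ in range(2000):
    s, a = rng.integers(n_s), rng.integers(n_a)
    s2, r = step(s, a)
    data.append((s, a, r, s2))

# Evaluation policy pi_e: always take action 1.
pi_e = np.zeros((n_s, n_a)); pi_e[:, 1] = 1.0

# FQE: repeatedly regress Q onto the bootstrapped target
# r + gamma * E_{a' ~ pi_e} Q(s', a'); here "regression" is tabular averaging.
Q = np.zeros((n_s, n_a))
for _ in range(200):
    target_sum = np.zeros((n_s, n_a)); count = np.zeros((n_s, n_a))
    for s, a, r, s2 in data:
        target_sum[s, a] += r + gamma * pi_e[s2] @ Q[s2]
        count[s, a] += 1
    Q = target_sum / np.maximum(count, 1)

rho = pi_e[0] @ Q[0]
print(rho)   # ≈ 8.1, the true value of pi_e from start state 0
```

ROPE's contribution, as described in this review, is learning an encoder so that FQE operates on similarity-based state-action representations instead of the raw features.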
Rebuttal 1: Rebuttal: Thank you for acknowledging the merits of our work and for your suggestions. We address the concerns from the Weaknesses and Questions sections below. Weaknesses - Re: general comment. Thank you for the suggestions. We address them in the questions section and will make the improvements for the camera-ready version. - Re: plot color-scheme. Thank you very much for that useful suggestion. We will definitely update the paper based on this for the camera-ready. - Re: misprint. Our notation was actually meant to indicate what you suggested, but we will be clearer in the camera-ready. Questions - Re: $\Delta$. $\Delta(X)$ means the set of all probability distributions over the set X. We will add a note in the camera-ready. - Re: applying ROPE to general RL. ROPE can be easily extended to the general RL setting. Since we are specifically focused on OPE, the next actions, $a_1'$ and $a_2'$, are samples from the fixed evaluation policy, $\pi_e$. To use ROPE in the general RL setting, one can sample $a_1'$ and $a_2'$ from $\pi$, the control policy improving over time, which is essentially a state-action version of MICO [1]. - Re: understanding theory result. Our ultimate goal is to ensure that the learned representation satisfies realizability, i.e. the representation supports estimating the true $q^{\pi_e}$. With Lemma 1 and Theorem 2, we have that the learned action-value function after applying ROPE+FQE satisfies realizability, and the error in satisfying this is a function of the error ($\epsilon$) in the encoder's ability to group state-action pairs. If the error is zero, then realizability is satisfied, which means the representation learned with ROPE estimates $\rho(\pi_e)$. - Re: questions to experiment correspondence. We will clarify to avoid confusion in the camera-ready.
To be precise, Q1 is answered by Section 4.2.1 and our global response, Q2 is answered by Section 4.2.2 (Figure 2), and Q3 is answered by Section 4.2.3 (Figure 3 and 4). - Re: clarification of result. The red circle indicates the reference state-action, $(s^*,a^*)$, yes. 0.7 indicates the normalized distance between $(s^*,a^*)$ and itself. Since we are dealing with diffuse metrics [1], self-distances may be non-zero as indicated in Line 248. [1] Castro P S, Kastner T, Panangaden P, et al. MICo: Improved representations via sampling-based state similarity for Markov decision processes[J]. Advances in Neural Information Processing Systems, 2021. --- Rebuttal Comment 1.1: Comment: I am pleased with the clarity with which the authors addressed the questions I raised in my initial review. The authors' responses in the rebuttal have provided lucid explanations to the concerns I had, accompanied by detailed justifications and elaborations. This has significantly bolstered the comprehensibility and scientific validity of the paper. --- Reply to Comment 1.1.1: Comment: We thank you for your response and for appreciating the merits of our work.
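The non-zero self-distance mentioned above is a general property of diffuse metrics and is easy to see in a toy example. The sketch below iterates a MICO-style operator (not the paper's exact ROPE operator) on a two-state MDP with stochastic transitions; because next states are sampled independently for the two arguments, even d(x, x) converges to a strictly positive value:

```python
import numpy as np

gamma = 0.9
r = np.array([0.0, 1.0])          # per-state rewards
P = np.full((2, 2), 0.5)          # from either state, the next state is uniform

# Fixed-point iteration for d(x, y) = |r(x) - r(y)| + gamma * E[d(x', y')],
# where x' ~ P(x) and y' ~ P(y) are drawn *independently* of each other.
d = np.zeros((2, 2))
for _ in range(500):
    d = np.abs(r[:, None] - r[None, :]) + gamma * P @ d @ P.T

print(d[0, 0])   # strictly positive self-distance (analytically 4.5 here)
```

In this example the average of d over all state pairs satisfies u = 0.5 + gamma * u, so u = 5 and d(0, 0) = gamma * u = 4.5, illustrating why Line 248's caveat about non-zero self-distances is needed.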
Summary: This paper introduces an OPE-tailored state-action behavioral similarity metric that acts as a new loss for representation learning, used to learn an encoder for the state-action features in place of the original features. Strengths: -- Very well written paper -- Interesting contribution to OPE Weaknesses: -- Little discussion of why learning a state representation and then doing OPE is an easier task than OPE itself Technical Quality: 3 good Clarity: 3 good Questions for Authors: (1) It is clear that the distance metric is derived from the triangle inequality on the difference in action-value functions. Can you theoretically justify why learning the representation and then plugging it into FQE is an 'easier' task than just doing FQE? If it is not easier, why should we prefer it? (2) Sometimes FQE (which I assume is the "identity" in your plots) performs competitively with (or outperforms) ROPE. How can we anticipate this? I really want the OPE community to think about robustness of algorithms. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one area, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for appreciating our work. We address your comments below. Weaknesses: - Re: easiness of representation learning vs. OPE. This is a very good point and, as far as we are aware, it is an open question when it is easier to learn a representation and plug it into FQE vs. applying FQE directly. In our paper, we do not claim that applying ROPE + FQE is easier, but we hypothesize that representation learning may help as a form of regularization, where learned (s,a) representations that are similar under the distance metric are biased towards a common solution. This is an exciting future direction that we would like to explore. Questions - See above (same as Weakness point). - Re: comparison to vanilla FQE. Thank you for this question. In the evaluated settings, we found that ROPE outperforms vanilla FQE. We suspect that vanilla FQE will perform competitively with ROPE+FQE when most state-action pairs are considered dissimilar in terms of the ROPE metric, which would lead to minimal clustering of state-action pairs, thus effectively causing ROPE+FQE to function as vanilla FQE. Further investigating this interesting direction would require understanding the intricate interplay among the environment, distribution shift, data coverage, etc. - Re: robustness. We share your view about the need for robustness in OPE algorithms. We found one of the advantages of ROPE to be that it was robust to hyperparameter tuning (Section 4.2.3), which is especially important since hyperparameter tuning is difficult in the OPE setting.
Summary: The paper introduces a method to enhance the data-efficiency of the fitted q-evaluation (FQE) algorithm in off-policy evaluation (OPE) for reinforcement learning. They propose using a learned encoder and an OPE-tailored state-action behavioral similarity metric to transform the fixed dataset, improving the representation learning process. Theoretical bounds on OPE error are derived, and empirical results demonstrate the effectiveness of the proposed method in improving data-efficiency and reducing OPE error compared to other approaches. Strengths: The paper demonstrates several strengths: 1. The paper addresses an important research direction in the field of off-policy evaluation (OPE) by focusing on enhancing data-efficiency through representation learning. This contributes to the advancement of OPE methods. 2. By learning representations based on a behavioral metric, the proposed approach avoids the direct use of importance sampling, which can introduce large variance in OPE. This innovative technique improves the stability and reliability of the OPE process. 3. The paper provides theoretical analysis, demonstrating the effectiveness of the proposed algorithm. This contributes to the understanding of the underlying principles and supports the validity of the approach. 4. The paper presents numerous experimental results, validating the effectiveness of the proposed method. These empirical findings provide strong evidence of the improvements achieved in terms of data-efficiency and OPE error reduction. Weaknesses: 1. My major concern is that the paper lacks a clear and intuitive explanation or discussion on why learning state-action representations improves data-efficiency in OPE. Providing a more intuitive explanation or discussing the underlying reasons for this benefit would enhance the clarity and understanding of the proposed approach. 2. It would be beneficial to include an illustration and an algorithm for the proposed paradigm in the main text. 
These visual aids would help readers grasp the key concepts and the implementation details more easily. 3. The paper overlooks a related work [1] that focuses on learning pseudometric-based behavioral representations for offline RL. Including a discussion of this work would enhance the completeness of the literature review and provide a more comprehensive understanding of the research landscape. [1] Learning pseudometric-based action representations for offline reinforcement learning. ICML 2022. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Considering the mentioned weaknesses, the paper raises the following questions: Can the authors provide a more intuitive explanation, discussion or visualizations regarding the benefits of learning state-action representations for data-efficiency in OPE? Addressing this question would enhance the clarity and understanding of the proposed approach. Furthermore, I would like to emphasize that addressing these concerns would significantly contribute to the improvement of my evaluation and, potentially, my overall score for the paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: limitations has been discussed by the authors. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you to the reviewer for their acknowledgement of the merits of our work, comments, and suggestions. Furthermore, thank you for providing an actionable suggestion for us to improve your evaluation of our work. Please also see our global response. Weaknesses: - Re: intuition behind data-efficiency. This is a great suggestion and we appreciate it for improving the clarity of our algorithm. Please see the global response and the attached pdf for a visual that we hope will clarify how learning state-action representations can help OPE. We will definitely update the camera-ready with this new visual and elaborate on the intuition for ROPE. - Re: illustration/code of algorithm. Thank you for this suggestion. We have included pseudo-code in the attached pdf, and will definitely add it to the camera-ready version. Our method for learning the state-action encoder is similar to how MICO [1] learns a state encoder. The key differences between our work and [1] are: - 1) we encode state-action pairs instead of states, - 2) we train the encoder such that distances between the encoded representations of any two state-action pairs are equal to the ROPE distance between those pairs whereas Castro et al. use the MICO distance, and - 3) we first train the encoder and then freeze its weights before applying it to encode all state-actions for FQE with the fixed dataset whereas Castro et al. train their state encoder as an auxiliary task while simultaneously learning optimal action-values as a function of that encoder. - Re: related work. Thank you for referencing this paper. We will indeed cite this work in the camera-ready. In terms of differences from our work: 1) their paper is focused on learning action representations instead of state-action representations in offline RL and 2) their focus on offline RL is specifically for the control setting rather than the evaluation setting.
Questions - Yes, following your actionable suggestion, we have included a global response. Please let us know if you have further questions or suggestions. [1] Castro P S, Kastner T, Panangaden P, et al. MICo: Improved representations via sampling-based state similarity for Markov decision processes[J]. Advances in Neural Information Processing Systems, 2021. --- Rebuttal Comment 1.1: Comment: Thanks for the response of authors, I will improve my score. --- Reply to Comment 1.1.1: Comment: Thank you very much to the reviewer.
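The encoder-training step described in the rebuttal above (fit state-action embeddings so that their pairwise Euclidean distances match the ROPE distances, then freeze the encoder before FQE) can be sketched as follows. This is only an illustrative sketch, not the authors' implementation: the target matrix `D` stands in for precomputed ROPE distances, and the linear encoder, loss, and learning rate are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 8 state-action pairs with 6-d input features, and a synthetic
# target matrix D standing in for the pairwise ROPE distances.
n, d_in, d_out = 8, 6, 3
X = rng.normal(size=(n, d_in))
latent = rng.normal(size=(n, 2))
D = np.linalg.norm(latent[:, None] - latent[None, :], axis=-1)

W = rng.normal(scale=0.1, size=(d_in, d_out))  # linear encoder phi(x) = x @ W

def loss_and_grad(W):
    Z = X @ W                                   # embeddings of all pairs
    diff = Z[:, None] - Z[None, :]              # (n, n, d_out)
    E = np.linalg.norm(diff, axis=-1) + 1e-8    # pairwise embedding distances
    R = E - D                                   # mismatch vs. target distances
    # Squared-error loss; gradient flows back through the norm via chain rule
    G = 4 * (R / E)[:, :, None] * diff / (n * n)
    return np.mean(R ** 2), X.T @ G.sum(axis=1)

losses = []
for _ in range(1000):
    l, g = loss_and_grad(W)
    losses.append(l)
    W -= 0.01 * g  # plain gradient descent

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Once fitted, the encoder would be frozen and its outputs fed to FQE, mirroring the two-stage pipeline the rebuttal describes.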
Summary: Towards enhanced data-efficiency of the fitted q-evaluation (FQE) method, this work first proposes an OPE-tailored state-action behavioral similarity metric and then uses this metric and the fixed dataset to learn an encoder, which is used to transform the fixed dataset. Experiments on the OPE tasks illustrate that the proposed method improves the data-efficiency of FQE and obtains a lower OPE error compared to other OPE-based representation learning methods. Strengths: 1. The motivation of this work is clear, and it addresses a problem worth studying under the off-policy evaluation topic. 2. Various experiments as well as relevant analyses are performed in this work, which illustrates the efficacy of the method. Weaknesses: 1. Some details of this work are not clear enough and some analysis is too superficial. 2. Although the approach is superior to the other baselines, the advantage does not seem obvious enough, except in the setting of $\mathcal{D}_{100}^{off}$. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: There are some questions about this paper: 1. In Section 3.1, to learn a state-action representation, this work follows the method from [1]. I recommend that the authors give a supplementary explanation of this method. In addition, I wonder what the difference is between the proposed method, ROPE, and the work [1]. In other words, why do the authors consider state-action representations? The OPE-tailored behavioral similarity metric is unclear, and why is this metric strongly correlated with OPE? 2. In Section 4.2.2, the authors consider three other OPE-based representation learning methods. I recommend that the authors introduce these methods in a more intuitive way, such as figures. The experimental results of each method should be analyzed, for example, why target-phi-sa performs poorly. The current version is not clear. 3. From Figure 3(b), the reward-only variant is significantly better than FQE and slightly worse than ROPE. 
What are the details of the reward-only method? Does it involve learning state-action representations? This makes me more concerned about the need for learning state-action representations. 4. This work mainly considers the FQE approach, and I am more concerned about the generalizability of the approach. [1] Castro P S, Kastner T, Panangaden P, et al. MICo: Improved representations via sampling-based state similarity for Markov decision processes[J]. Advances in Neural Information Processing Systems, 2021, 34: 30113-30126. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: This paper discusses the limitations of the proposed algorithm and future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you to the reviewer for their comments and suggestions. We answer concerns from the Weakness and Questions sections below. Please also see our global response. Weaknesses: - Re: general comment. We hope our response clarifies the details and we will make these improvements to the camera-ready. - Re: algorithm performance. We agree that MSE reduction between ROPE and the second-best method in each scenario is not always large. However, we wish to highlight that our method is the only method that gives consistently low absolute error across the considered scenarios. As noted by reviewer BFUR, robust estimation is an important property for OPE methods. Questions - Re: comparison to MICO, correlation to OPE. Thank you for the suggestion, we will expand on our description of MICO in the supplementary section. Below we attempt to clarify possible misunderstandings of ROPE, and we will include these in the camera-ready. Please also see our global response. - Re: why state-action representations and OPE-tailored metric. We refer the reviewer to line 136-140 of the paper. When learning representations in the context of OPE, we need a way to account for the distribution shift between the evaluation policy and the policy that generated the data. We do state-action representations since this can be accounted for by simply sampling actions from $\pi_e$. On the other hand, MICO learns state representations, and designing an off-policy version of MICO may involve another technique to correct the distribution shift such as importance sampling, but this requires knowledge of the behavior policy that generated the data. Moreover, if multiple policies generated the data, then estimating this importance sampling ratio is even harder. Thus, ROPE is OPE-tailored as it accounts for the distribution shift and uses off-policy data to learn the representations while MICO does not account for the distribution shift and uses on-policy data. 
- Re: intuition of baselines, analysis. Thank you for this suggestion. We briefly answer your questions now and will definitely update the paper for the camera-ready. - Algorithms. We note that these are not our contributions and so we simply used them as references, however, we can expand their description in the appendices. Our main goal is to show that ROPE makes FQE more data-efficient. We include these other baselines simply to benchmark performance of representation learning-for-OPE work: - Identity: this is simply vanilla FQE applied to the dataset. - BCRL: learns an encoder that outputs (s,a) representations that satisfy the theoretical conditions needed for convergent value function learning with linear function approximation. It then fixes the learned encoder to output (s,a) representations that are fed into Least Squares Policy Evaluation (LSPE) to learn $q^{\pi_e}$. - Target-phi-sa: uses the critic of $\pi_e$ (from its training) as a fixed encoder to output fixed (s,a) representations that are fed into FQE to estimate $q^{\pi_e}$. Intuitively, the representations outputted by this encoder should have sufficient information to estimate $q^{\pi_e}$. - Analysis of target-phi-sa. Please see lines 285-288. We simply show the performance of target-phi-sa for benchmarking purposes. However, it is known that target-phi-sa may perform poorly, which, as the authors of [3] note, is interesting because target-phi-sa contains sufficient information to perfectly represent the value of the evaluation policy and satisfy realizability. - If the reviewer has further analysis that they think would strengthen the paper, we would be happy to conduct it and add it to the paper for the camera-ready. - Re: reward-only. For details of the reward-only method, we refer the reviewer to line 312 and lines 316-319. 
In short, the reward-only method is a variation of ROPE that learns state-action features but where the long-term distance component (see Eqn 1, line 147) of the ROPE metric is removed. One of the main messages (lines 316-319) of this experiment is that learning appropriate state-action representations is nuanced: in some cases it appears grouping state-actions based only on reward is sufficient to do well. In future work, we plan to understand the intricate interplay among state-action representations, distribution shift, and the environment to better understand under what conditions certain distance metrics are better than others. - Re: beyond FQE. Representation learning for OPE beyond FQE is a very interesting direction we hope to explore in future work. Given that FQE is one of the most successful algorithms in OPE [1, 2] and that it is a core component of many RL algorithms, we focused our first efforts specifically in trying to understand the representation learning considerations for FQE. [1] Benchmarks for Deep Off-Policy Evaluation. Fu et al. 2021. [2] Empirical Study of Off-Policy Policy Evaluation for Reinforcement Learning. Voloshin et al. 2019. [3] Instabilities of Offline RL with Pre-Trained Neural Representation. Wang et al. 2021. --- Rebuttal Comment 1.1: Comment: Hello reviewer y8hg, we just wanted to follow up to see if our response (and global response) clarified your concerns before the discussion period ends. If you had any further questions, we would be more than happy to answer them. Thank you again for your feedback.
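The paper's Equation 1 is not reproduced in this thread, so the following is only a generic bisimulation-style sketch of the idea under discussion: a recursive behavioral distance whose long-term component looks ahead through the (here deterministic) dynamics under the evaluation policy, versus the reward-only ablation that drops that component. The toy MRP and all names are illustrative assumptions, not the authors' definition.

```python
import numpy as np

gamma = 0.9
r = np.array([1.0, 0.0, 1.0, 0.0])  # rewards of 4 "state-action pairs"
nxt = np.array([2, 3, 3, 3])        # deterministic successor under pi_e
n = len(r)

# Reward-only ablation: ignore what happens after one step
reward_only = np.abs(r[:, None] - r[None, :])

# Full recursive distance (fixed point of a gamma-contraction):
#   d(x, y) = |r(x) - r(y)| + gamma * d(next(x), next(y))
d = np.zeros((n, n))
for _ in range(200):
    d = reward_only + gamma * d[np.ix_(nxt, nxt)]

# Pairs 0 and 2 share a reward, so the reward-only metric groups them,
# but their successors behave differently, so the full metric separates them.
print(reward_only[0, 2], round(d[0, 2], 3))  # prints "0.0 0.9"
```

Here grouping by reward alone would merge two pairs with different action-values, which is exactly the failure mode the rebuttal attributes to dropping the long-term term.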
Rebuttal 1: Rebuttal: Thank you to the reviewers for their kind words regarding the merits of our work and helpful suggestions. Before responding to individual questions and comments, we wanted to briefly elaborate on the intuition for ROPE increasing data efficiency and how ROPE is tailored to the OPE setting. We also include figures and pseudo-code that will help clarify these points, which we will include in the camera-ready. To build intuition, we follow prior work [1, 2, 3, 4] and make the analogy between state-action aggregation and state-action representations (representation learning can be viewed as a soft form of state-action aggregation according to some distance metric). Under this view, ROPE is a method for grouping different state-action pairs based on similarity under the ROPE metric. If all state-action pairs in a given group have small pairwise ROPE distance, then they have similar state-action representations and they are behaviorally similar (i.e., have similar rewards and lead to similar future states when following the evaluation policy) and consequently will have a similar action-value. Thus, data samples from any member of the group can be pooled to learn the group’s shared action-value more data-efficiently, as opposed to learning the action-value for each state-action pair individually. ROPE is a metric that quantifies this notion of behavioral similarity and enables identifying when we should be able to generalize across different state-action pairs. ROPE is OPE-tailored in that it can be learned with off-policy data as it is designed to account for the distribution shift between the fixed dataset and $\pi_e$. Prior work in learning representations based on behavioral similarity (e.g., MICO) would not suffice here as they focus on on-policy learning. 
In the attached pdf, we illustrate the state-action aggregation interpretation of ROPE using the gridworld domain so as to show how the ROPE metric is suitable for learning $q^{\pi_e}$ while other reasonable metric choices are not. The attached figures show 1) the action-values of the evaluation policy, 2) how ROPE groups state-action pairs, and 3) how alternative metric choices group state-action pairs. We expect the color-coding of the action-values of $\pi_e$ to match with that of the group clusters by each metric. We highlight that only ROPE correctly groups state-action pairs together only when they share the same action-value and this enables learning $q^{\pi_e}$ as required by FQE. Other metrics group state-action pairs together with different action-values and hence would produce biased FQE estimates. We also include the pseudo-code of ROPE + FQE in the attached pdf, which we will include in the camera-ready. [1] Scalable methods for computing state similarity in deterministic Markov Decision Processes. Castro. 2020. [2] Learning Invariant Representations for Reinforcement Learning without Reconstruction. Zhang et al. 2021. [3] MICo: Improved representations via sampling-based state similarity for Markov decision processes. Castro et al. 2021. [4] Learning pseudometric-based action representations for offline reinforcement learning. Gu et al. 2022. Pdf: /pdf/46759b1dbb607088345e7a65665be94c3889116a.pdf
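The aggregation intuition in the global rebuttal above (members of a group share an action-value, so their samples can be pooled) can be illustrated with a toy calculation. This is not the authors' experiment, just the standard variance-reduction argument behind it: pooling the return samples of two behaviorally identical state-action pairs halves the mean squared error of the value estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
true_q = 2.0                 # shared action-value of a behaviorally similar group
n_per_pair, n_trials = 5, 20000

err_alone, err_pooled = [], []
for _ in range(n_trials):
    # Noisy return samples observed for two state-action pairs in the group
    a = true_q + rng.normal(0.0, 1.0, n_per_pair)
    b = true_q + rng.normal(0.0, 1.0, n_per_pair)
    err_alone.append((a.mean() - true_q) ** 2)                        # per-pair estimate
    err_pooled.append((np.concatenate([a, b]).mean() - true_q) ** 2)  # group estimate

mse_alone, mse_pooled = np.mean(err_alone), np.mean(err_pooled)
print(f"alone: {mse_alone:.3f}, pooled: {mse_pooled:.3f}")  # ~0.2 vs ~0.1
```

The catch, as the rebuttal's gridworld figure is meant to show, is that this only helps if the metric groups pairs that genuinely share an action-value; pooling across a badly chosen group would bias the estimate instead.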
NeurIPS_2023_submissions_huggingface
2023
A Scale-Invariant Sorting Criterion to Find a Causal Order in Additive Noise Models
Accept (poster)
Summary: The paper considers causal inference in the context of additive noise models (ANMs). In particular, the paper points out a possible problem in simulation benchmarks for this setup: if the weights of the causal network are not chosen appropriately, the simulation may result in datasets in which identifying the causal ordering can be done with a trivial method which simply sorts the variables by the R^2 when the variable is predicted with other variables. Previously, a similar result has been published by ordering the variables by their variance, but this paper shows that the same outcome can be achieved by sorting the variables by the amount of variance explained, R^2, which is scale-invariant, and hence the problem can't be resolved by simple rescaling. Strengths: The clarity of the paper is good. The paper demonstrates possible deficiencies in existing benchmarks, which is important. Overall, I found this paper useful, but I also think the significance could be strengthened (see below). Weaknesses: The paper demonstrates that simulated datasets may have a large R^2 sortability. However, it does not go very far in clarifying how big a problem this is in practice, i.e., it is not really clear how R^2 sortable real-world datasets actually are (there's one example but that's still quite limited). If the real-world datasets actually often are R^2 sortable, then there is no problem. If, on the other hand, they are not R^2 sortable, it would have increased the significance of the paper to provide more concrete suggestions about how simulations should be conducted in order for them to better correspond to real-world datasets. In real-world datasets the ground-truth may not be easily available, but at least it is easy to calculate the distribution of R^2 values for all variables in a dataset, for multiple datasets, and investigate how the weights in the causal graph simulator should be chosen for the R^2 distribution to be similar to real-world datasets. 
The take-home message from the experiments is not always clear. In Fig. 1 the presented method R^2-SortnRegress seems always worse than the previously published Var-Sortnregress. Also, the conclusion in the caption: "R^2Sortnregress performs well if R^2 sortability is high" seems a bit tautological. It is clear that R^2 is scale-invariant unlike the variance based criterion, so why not simulate datasets that accordingly demonstrate its strength? Also, in the real-world data the results don't seem impressive, as the novel R^2 SortnRegress has the worst SID value. (What is the other value SHD? I didn't find it defined.) The theory considers a causal chain, where E(log|V|)>0, where V is simulated from the weight distribution, and shows that in this case the variance of the last node in the chain goes to infinity and the amount of variance explained converges to one when the length of the chain increases. It seems that if the expected value of |V| is smaller than 1, then the condition is not satisfied? Indeed, it is easy to imagine that the chain will diverge if we have coefficients that in general are larger than one, but this does not sound like a very realistic assumption in real-world data sets. It would have been interesting to investigate the distribution of estimated weights in some real-world datasets, to see how commonly this condition holds true in those. The paper says that if the condition did not hold, "detecting statistical dependencies between variables connected by long paths" would be difficult, but I would imagine this to be the case with many real-world datasets, so this does not seem a proper reason for not considering such cases. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Could you show a histogram of R^2 sortability values among simulated datasets on which Fig. 1 was based? Now the R^2 sortability is shown on the x-axis but it is not clear what the relative amounts of different R^2 values are in the simulations. 
Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
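To make the trivial baseline this review describes concrete, here is a sketch (not the authors' code) on a simulated linear chain ANM with iid edge weight 2: regress each variable on all the others by least squares and compare the resulting R^2 values. With weight magnitudes above 1, upstream variables tend to have less of their variance explained, which is what makes such benchmarks R^2-sortable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, w = 5000, 5, 2.0  # samples, chain length, edge weight (|w| > 1)

# Linear chain ANM: X0 -> X1 -> ... -> X4 with unit-variance Gaussian noise
X = np.zeros((n, d))
X[:, 0] = rng.normal(size=n)
for j in range(1, d):
    X[:, j] = w * X[:, j - 1] + rng.normal(size=n)

def r2_vs_rest(j):
    """R^2 of X_j predicted from all other variables by least squares."""
    A = np.column_stack([np.delete(X, j, axis=1), np.ones(n)])
    resid = X[:, j] - A @ np.linalg.lstsq(A, X[:, j], rcond=None)[0]
    return 1.0 - resid.var() / X[:, j].var()

r2 = [r2_vs_rest(j) for j in range(d)]
print(np.round(r2, 3))  # R^2 grows along the upstream part of the chain
```

Note the sink node, which has no children to predict it from, need not have the highest R^2; it is the overall upstream-to-downstream trend, not a perfect ordering, that the sortability measure captures.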
Rebuttal 1: Rebuttal: Thank you for your constructive review and your suggestions on how to better present the strengths of our work. We propose the following edits in response and individually answer your questions below. --- ### Edit summary * **Edit 1 (see Figure 2 rebuttal PDF)** We will include the $R^2$-sortability histograms for the settings shown in Figure 1 and 3 in Section A2. For ER graphs, the distribution of $R^2$-sortabilities is close to symmetric, while for SF graphs it has a strong left skew. * **Edit 2** We will highlight the open nature of real-world $R^2$-sortability (lines 19-21): \ _Our findings reveal high $R^2$-sortability as an assumption about the data generating process relevant to causal discovery and implicit in the choice of simulation parameters. It should be made explicit, as its prevalence in real-world data is an open question._ * **Edit 3** We will emphasize the impact of $R^2$-sortability on benchmarking practices (line 84): \ _Characterizing $R^2$-sortability and uncovering its role as a driver of causal discovery performance enables distinguishing between data with different levels of $R^2$-sortability when benchmarking or applying causal discovery algorithms to real-world data._ * **Edit 4 (see Figure 1 in rebuttal PDF)** We have revised Figure 1 (and 3 & 4 likewise) to better show the strengths of our method. We will show results on standardized data to emphasize the scale-invariance of our method. For reference, we visually differentiate the baseline RandomRegress and include Var-SortnRegress on raw data as a dashed line. To simplify the interpretation of trends and uncertainty we will show moving averages using a window of width 0.1 (instead of binning by decile), and show error bars for the 95% confidence interval of the mean. 
* **Edit 5** We will clarify the take-home message of the real-world experiment (line 254): \ _This indicates that, on real-world data, we may not expect to see consistently high $R^2$-sortabilities as much as we do in many simulation settings (see Appendix B.2). For benchmarks to be representative of this factor, they should differentiate between settings with different levels of $R^2$-sortability._ * **Edit 6** We will highlight the novelty and usefulness of Equation (5) as a condition for the divergence of node variances (line 274): \ _We introduce the following sufficient condition for weight distributions to result in diverging node variances along causal chains of increasing length:_ --- ### Answers to your questions __Histogram of $R^2$-sortabilities (see Figure 1 in rebuttal PDF)__ This is a great idea, thank you. We will include the histograms as described in **Edit 1**. --- ### Additional points **Real-world $R^2$-sortabilities** We agree that real-world $R^2$-sortabilities are an important open question that cannot be answered by any single dataset (see line 336), and will communicate this more clearly (see **Edit 2**). Real-world $R^2$-sortabilities are not known and that is precisely why we think it is important to be aware that current simulations systematically tend to result in high $R^2$-sortabilities. Any assumption in causal discovery (e.g. Gaussian vs non-Gaussian, equal vs unequal noise variances, etc.) may or may not hold on different real-world data, which is why they are treated separately in simulations. For benchmarking, choosing ANM parameters that result in high $R^2$-sortability amounts to an assumption, which our contribution makes explicit. This allows a distinction between data with different $R^2$-sortabilities in the same way we already distinguish between other assumptions to reflect different possible real-world scenarios. We agree that this should be spelled out and will do so (see **Edit 3**). 
You suggest calculating $R^2$ distributions across multiple datasets and investigating how to match simulations to those values, yet real-world data suiting ANM assumptions are scarce, hence the extensive use of simulations in causal discovery. Such a study would have to find datasets, suitable function classes for $R^2$ estimation, and a strategy to adapt simulations (one could change the weights, graph structure, or noise variances) consistent with domain knowledge. While this is outside the scope of our work, we share your excitement for such a project and hope our work will inspire studies like these when shared with the community. **Take-home message of experiments** We agree that the scale-invariance of $R^2$-sortability should be emphasized more and update accordingly (see **Edit 4**). Thank you for highlighting this. \ We agree that none of the performances on the real-world dataset are impressive (line 250), which underlines the need for identifying potentially unrealistic benchmark properties such as high $R^2$-sortability. To clarify the take-home message, we will implement **Edit 5** and move the definition of the Structural Hamming Distance (SHD) to the main text. **Case of $E\log|V|<0$** Thank you for raising this. We can better clarify the role of the condition $E\log|V|>0$, which we prove is sufficient for the divergence of node variances to infinity (see **Edit 6**). Indeed, our condition is not satisfied if $E|V|<1$ (Jensen's inequality). Conversely, $E|V|>1$ does not imply $E\log|V|>0$ (e.g. consider $\text{Unif}(0.1, 2)$), and is thus not sufficient for the divergence. We agree that real-world processes may not have weights that result in diverging variances. This is why we think it is important to point out that simulations often rely on parameters that result in high var- and $R^2$-sortability. 
Unfortunately, measuring weights in real-world ANM data (which are scarce) is not straightforward, since they are affected by the arbitrariness of the data scale. \ We do consider the case of $E\log|V|<0$ in our empirical analyses. Figure 2 shows that even weight distributions with $E\log|V|<0$ result in high $R^2$-sortabilities for Scale-free graphs, and Section A2.1 shows a setting with small weights and $E\log|V|<0$. --- Rebuttal Comment 1.1: Title: Thanks for your replies Comment: Hello, I read your replies and I think they did a good job addressing my concerns. I will take this into account when reconsidering my score (probably during reviewer-AC discussions). I have no further questions. Thanks! --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal and for your openness to reconsidering your score. We appreciate your constructive feedback, which has improved the quality of our work. Please do not hesitate to reach out if any further clarifications are needed.
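The divergence discussion in the rebuttal above can be probed numerically. A hypothetical check (not from the paper): propagate the node-variance recursion $v_{j+1} = w_j^2 v_j + 1$ along chains with iid weights, once for $\text{Unif}(0.5, 2)$ (which satisfies $E\log|V| > 0$) and once for the rebuttal's counterexample $\text{Unif}(0.1, 2)$ ($E\log|V| < 0$ despite $E|V| > 1$).

```python
import numpy as np

rng = np.random.default_rng(0)
L, n_chains = 60, 200  # chain length, number of sampled weight sequences

def median_final_variance(low, high):
    # Node variances along X_{j+1} = w_j X_j + N_{j+1} with Var(N) = 1
    # follow the recursion v_{j+1} = w_j^2 v_j + 1 for a sampled weight sequence.
    v = np.ones(n_chains)
    w = rng.uniform(low, high, size=(n_chains, L))
    for j in range(L):
        v = w[:, j] ** 2 * v + 1.0
    return np.median(v)

v_pos = median_final_variance(0.5, 2.0)  # E[log|V|] > 0: variances explode
v_neg = median_final_variance(0.1, 2.0)  # E[log|V|] < 0: typically bounded
print(f"{v_pos:.2e} vs {v_neg:.2e}")
```

The median is used deliberately: for $\text{Unif}(0.1, 2)$, $E[w^2] \approx 1.4 > 1$, so the mean variance still grows even though typical realizations stay bounded, matching the almost-sure flavor of the condition.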
Summary: The paper introduces the issue of "$R^2$-sortability" for synthetically generated data used in the evaluation of causal structure learning methods. $R^2$-sortability is a generalization of varsortability, which is invariant to re-scaling (e.g., standardizing) the simulated variables. They show that, using typical simulation parameters, sorting variables based on $R^2$-sortability can give good performance on causal discovery. Finally, they investigate how $R^2$-sortability is influenced by the data generation parameters. Strengths: **Clarity:** The paper clearly describes the motivation for introducing the concept of $R^2$-sortability, and the experimental results section clearly describes their experimental setup. **Significance:** There is significant value in papers which describe problems with current methods of evaluation. In particular, since evaluating causal structure learning methods relies heavily on synthetic data, it is important to characterize "artifacts" of synthetic data generation procedures, and make efforts to remove these artifacts. It is also helpful that the authors do some investigation into how $R^2$-sortability is influenced by the synthetic data generation parameters. Weaknesses: ### Experimental Results It is surprising that, in Section 4.1, the authors do not standardize the data so that *Var-SortnRegress* would perform poorly. It is not clear what message the authors are trying to convey by their experimental setup. I would expect that they would want to show that $R^2$-sortability is still an issue in simulated data, even when varsortability is not an issue. ### Minor issues *Clarity*: Clarity could be improved in some places. 1. Equation (3) seems to involve the number of paths between pairs of nodes $s$ and $t$, I think it would be more intuitive to write it this way and to also describe why we care about the number of paths rather than just the existence of a path. 2. Equation (3) never decreases when adding an edge. 
It seems undesirable that the score would always (weakly) favor denser graphs. I see that there is a sparsity penalty in Algorithm 1 - why is it introduced there, instead of earlier? 3. In Section 5, why do you switch from the product of squared edge weights to sum of the logs? It seems motivated by the appeal to the strong law of large numbers, but since you already lower bound the product of squared edge weights by the sum of the logs, then almost sure convergence also holds for the product. Right now, the logs seem ad hoc / unmotivated. 4. When introducing the condition on $\mathbb{E}(\log |V|)$ in Equation (5), you should make it more clear what the conditions means for $P_W$. For example, $P_W = Unif([0.5, 1])$ would not satisfy the condition, would $P_W = Unif([0.5, 2])$ satisfy it? 5. Please use $\mathbb{E}$ for expectations, e.g. in Equation (5). This is typically preferred, but especially important when dealing with graphs where $E$ often denotes edges. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please address the issues on clarity raised in the **Weaknesses** section. ### Other suggestions The authors may find it interesting that previous papers have designed a data generation process to control cause-explained variance, see eg. [1], Section 5 and [2], Section 5. Readers would likely find it helpful if the authors described such a data generation process to ameliorate the issue of $R^2$-sortability. [1] Agrawal, R., Squires, C., Prasad, N., & Uhler, C. (2021). The DeCAMFounder: Non-linear causal discovery in the presence of hidden variables. [2] Squires, C., Yun, A., Nichani, E., Agrawal, R., & Uhler, C. (2022). Causal structure discovery between clusters of nodes induced by latent factors. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. 
Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
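Since both the review and the rebuttal reason about sorting variables by their coefficient of determination, a minimal sketch of how such an $R^2$ score can be computed may help. This is a generic NumPy reconstruction (the function name and the toy chain are ours, not the paper's code): each variable is regressed on all remaining variables and $R^2 = 1 - \mathrm{SS}_{\mathrm{res}}/\mathrm{SS}_{\mathrm{tot}}$.

```python
import numpy as np

def r2_scores(X):
    """R^2 of an OLS regression of each column of X on all other columns
    (X: n samples x d variables). Data are centered, so no intercept."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)
    scores = []
    for j in range(d):
        y = Xc[:, j]
        Z = np.delete(Xc, j, axis=1)
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        scores.append(1.0 - (resid @ resid) / (y @ y))
    return np.array(scores)

# Toy chain X1 -> X2 -> X3: the root X1 has the least explainable
# variance, so it receives the smallest R^2 score.
rng = np.random.default_rng(0)
x1 = rng.normal(size=5000)
x2 = 2.0 * x1 + rng.normal(size=5000)
x3 = 2.0 * x2 + rng.normal(size=5000)
X = np.column_stack([x1, x2, x3])
scores = r2_scores(X)
```

Note that sorting by these scores need not recover the full causal order in every graph (middle nodes are predicted from both sides); the fraction of agreement is what a sortability measure like Equation (3) quantifies.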
Rebuttal 1: Rebuttal: We thank you for your detailed review and for highlighting the significance of our contribution for the causal structure learning community. We propose to make the following edits in response to your review and answer your questions individually below. --- ### Edit summary * **Edit 1 (see Figure 1 in rebuttal PDF)** We have revised Figure 1 (and 3 & 4 likewise) to better show the strengths of our method. We will show results on standardized data to emphasize the scale-invariance of our method. For reference, we visually differentiate the baseline RandomRegress and include Var-SortnRegress on raw data as a dashed line. To simplify the interpretation of trends and uncertainty we will show moving averages using a window of width 0.1 (instead of binning by decile), and show error bars for the 95% confidence interval of the mean. * **Edit 2** We will add the following explanation to Equation (3) (line 168): \ _In effect, this is the fraction of directed paths of unique length between any two nodes that satisfy the sortability condition in the numerator._ * **Edit 3 (see Figure 3 in rebuttal PDF)** We will add an appendix section showing an empirical comparison of sortabilities obtained using Equation (3) as is, compared to a version of Equation (3) that only considers the existence of a path. In the experiment we observe strong alignment (Kendall rank coefficients of $0.86$ for $R^2$-sortability and $0.84$ for var-sortability) between the two versions. 
* **Edit 4** We will highlight the novelty and usefulness of Equation (5) (line 274) as a condition for the divergence of node variances, and will motivate the use of the log (line 548): \ 274: _We introduce the following sufficient condition for weight distributions to result in diverging node variances along causal chains of increasing length:_ \ 548: _The application of the log in the final step allows us to lower-bound the product by a sum and employ the law of large numbers._ * **Edit 5** We will include the references you suggested in the first paragraph of Section 5 (line 259): \ _$R^2$-sortability can be mitigated by introducing the assumption of a constant exogenous noise fraction (see for example Agrawal 2021, Squires 2022), which requires non-iid edge weight sampling in simulations._ --- ### Answers to your questions **Experimental results** We agree that the scale-invariance of $R^2$-sortability and -SortnRegress should be emphasized more and will make the changes described in **Edit 1**. **Minor issues (clarifications)** *1.* We will add an intuitive description of Equation (3) as outlined in **Edit 2**. Equation (3) builds on the definition by Reisach [2021] in a way that yields their original version of var-sortability as a special case. To address your question, we ran a comparison between the original sortability definition, which counts paths of different lengths, and one that only considers the existence of a path, and find no qualitative differences. We will include the experiment comparing the sortability variants as described in **Edit 3**. *2.* Sortability is a measure of a given data generating process and its causal structure. For example, for $A=N_A$, $\quad B=10 A + N_B$, $\quad C=0.5 B + N_C \quad$ with graph G1 $(A \to B \to C)$ and $\text{Var}(N_A)=\text{Var}(N_B)=\text{Var}(N_C)=1$ we have $\mathbf{v}_\text{Var}((A,B,C), G1) = 2/3$. 
For the data generating process $D=N_D$, $\quad E=10 D + N_E$, $\quad F=N_F \quad$ with $\text{Var}(N_D)=\text{Var}(N_E)=\text{Var}(N_F)=1$ and graph G2 $(D \to E \quad F)$ we have that $\mathbf{v}_\text{Var}((D,E,F), G2) = 1$. As this example shows, a data generating process with a denser graph can have lower sortability than a DGP with a sparser graph (and vice-versa). \ Algorithm 1 is used to estimate a causal graph from data, and the sparsity constraint avoids returning a fully connected graph: after having sorted the variables we regress each node onto its predecessors in this order, and the sparsity constraint prunes superfluous edges. *3.* The application of the log allows us to make our statement about the divergence of node variances, and Equation 5 introduces a sufficient condition for divergence (the related conditions $\mathbb{E}|V|>1$ or $\mathbb{E}V^2>1$ are not equivalent to our condition, as can be seen by the example of $\text{Unif}(0.1, 2)$). In particular, if Equation 5 is satisfied then this implies that for increasing chain lengths the sum of log-weights converges to $+\infty$, which in turn implies that the product of squared weights converges to $+\infty$. If Equation 5 is not satisfied, another condition would be needed to establish convergence of the product to $+\infty$. We believe the motivation for applying the log and the role of Equation (5) can be made clearer by implementing **Edit 4**. Thank you for pointing this out! *4.* We will add the values for $P_W = \text{Unif}((-2, -0.5)\cup(0.5, 2))$ ($\approx 0.16$) in Section 4.1, and for $P_W = \text{Unif}((-0.5, -0.1)\cup(0.1, 0.5))$ ($\approx -1.29$) in Section A2.1, both with a pointer to Section 5. (Figure 2 shows further weight ranges.) *5.* Agreed. We will change the notation for the expectation from $E$ to $\mathbb{E}$ throughout the manuscript. **Suggestions** Thank you for pointing out the data generating mechanisms in these related works. 
We will include them as described in **Edit 5** and hope they can serve as a reference point to develop new and more realistic simulation schemes. --- Rebuttal Comment 1.1: Comment: ### Score update I appreciate the authors' detailed and well-written response to my review. I was impressed with their ability to quickly incorporate feedback, and I believe that their updates to the experiments and their clarifications will further improve the quality of the paper. Taking this into account, I have raised my initial score by one point, from 6 to 7. --- ### Remaining questions **Minor issue 2**: Thank you for clarifying, I somehow missed the change in the denominator. However, the example did not get at my exact concern, which is that, for a *fixed* data generating process, the score favors denser graphs. In your example, let $G3 = D \to E \to F$, then what is $\mathbf{v}_{\textrm{Var}}((D,E,F), G3)$, i.e. how does the score change when adding $E \to F$? What about when also adding $D \to F$? --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our rebuttal and how our proposed edits improve the article. We are grateful for your constructive comments, which have helped us refine our submission. *Re: sortability and graph density* We are not sure if we understand your question on sortabilities when keeping the data-generating process fixed while adding edges. Below we summarize the relationship between a given DAG and the corresponding sortability, and give a more extensive example. We hope this answers your question; please let us know if anything remains unclear and we will be happy to answer. --- We define sortability for variables $X$ in a graph $G$ (lines 164ff) of an ANM (lines 101ff). Adding an edge to $G$ to obtain $G'$ necessarily changes the definition of the corresponding variables $X'$, meaning the data-generating process is not fixed across different ground-truth graphs. 
\ For $G'$ denser than $G$ it may be that $\mathbf{v}_\tau(X',G') < \mathbf{v}_\tau(X,G)$, $\mathbf{v}_\tau(X',G') > \mathbf{v}_\tau(X,G)$, or $\mathbf{v}_\tau(X',G') = \mathbf{v}_\tau(X,G)$. Adding edges can thus increase or decrease sortability, or leave it unchanged. To illustrate this point, we give a comprehensive example below (using var-sortability for its simpler calculation) that highlights the relationship between a given DAG and the corresponding variables. Let $N_1,N_2,N_3$ be independent random variables with $\text{Var}(N_1)=2$, $\text{Var}(N_2)=2$, $\text{Var}(N_3)=1$. __Setting 1__ For the graph $G : X_1 \to X_2 \quad X_3$ and data-generating process $X_1:=N_1$, $X_2:=10 X_1 + N_2$, $X_3:=N_3$, we have that $\mathbf{v}_\text{Var}((X_1,X_2,X_3), G) = 1/1 = 1$. __Setting 2__ We now consider the graph $G'$ resulting from adding an edge to $G$ such that $G' : X'_1 \to X'_2 \to X'_3$. Adding the edge requires a change in the definition of $X_3$, which changes the data-generating process: $X'_1 = X_1 := N_1$, $X'_2 = X_2 := 10 X_1 + N_2$, $X'_3 := 0.1 X_2 + N_3$, which yields $\mathbf{v}_\text{Var}((X'_1,X'_2,X'_3), G') = 2/3$. __Setting 3__ Adding another edge such that $G'' : X''_1 \to X''_2 \to X''_3, X''_1 \to X''_3$ requires another change of the data-generating process. We show how different edge weights can result in different sortabilities: 1. $X''_1 = X_1 := N_1$, $X''_2 = X_2 := 10 X_1 + N_2$, $X''_3 := 0.1 X_2 - 1.1 X_1 + N_3$ yields $\mathbf{v}_\text{Var}((X''_1,X''_2,X''_3), G'') = 1/4$. 2. $X''_1 = X_1 := N_1$, $X''_2 = X_2 := 10 X_1 + N_2$, $X''_3 := 0.1 X_2 + 20 X_1 + N_3$ yields $\mathbf{v}_\text{Var}((X''_1,X''_2,X''_3), G'') = 4/4 = 1$.
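The settings above can be checked numerically. The sketch below is our own NumPy helper (not the authors' code): population variances follow from $X = (I - W^\top)^{-1} N$, and ancestor-descendant pairs are counted once per unique connecting path length, following the reviewers' reading of Equation (3).

```python
import numpy as np

def node_variances(W, noise_vars):
    """Population variances of a linear ANM X = W^T X + N,
    where W[i, j] is the weight of edge i -> j."""
    d = W.shape[0]
    M = np.linalg.inv(np.eye(d) - W.T)          # X = M @ N
    return np.diag(M @ np.diag(noise_vars) @ M.T)

def var_sortability(W, noise_vars):
    """Fraction of (ancestor, descendant) pairs, counted once per unique
    connecting path length, whose variances increase along the path."""
    v = node_variances(W, noise_vars)
    B = (W != 0).astype(int)
    Bk = B.copy()
    num = den = 0
    for _ in range(W.shape[0] - 1):             # path lengths 1 .. d-1
        for i, j in zip(*Bk.nonzero()):
            den += 1
            num += int(v[i] < v[j])
        Bk = Bk @ B
    return num / den

noise = np.array([2.0, 2.0, 1.0])
# Setting 2: chain X1 -(10)-> X2 -(0.1)-> X3 gives sortability 2/3.
W2 = np.array([[0, 10, 0], [0, 0, 0.1], [0, 0, 0]], float)
# Setting 3, case 1: additionally X1 -(-1.1)-> X3 gives 1/4.
W3 = W2.copy()
W3[0, 2] = -1.1
```

Running `var_sortability(W2, noise)` reproduces the 2/3 from Setting 2, and `var_sortability(W3, noise)` the 1/4 from Setting 3 (case 1).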
Summary: This paper is concerned with synthetic data generation for causal discovery. The paper explores the scale-invariant pattern given by the coefficient of determination $R^2$ that potentially exists in synthetic data benchmarks. The authors present an analysis for the case of linear ANMs. They also find that the prior over linear parameters can have a significant influence on $R^2$-sortability. Strengths: (++) There are many new algorithms being developed for causal discovery, and all of them evaluate on synthetic data as evaluation on real world data is very hard. Hence, any new insight which helps bridge the gap between synthetic data and real data is very timely and welcome. (++) This work generalises the results of Reisach et al (2021) by giving a general `sortability` criterion. Further, $R^2$-sortability is more subtle than varsortability as it cannot be removed simply by renormalisation. (+) I like the overall presentation and motivation of the problem. The presented analysis for linear ANMs and trees is insightful. Weaknesses: (--) It is not clear how big of an issue $R^2$-sortability is in real world settings. Section 4.2 illustrates it on the Sachs dataset. But the Sachs dataset is not a linear ANM, and hence has significant model misspecification. So I am unsure how concrete the conclusions from 4.2 are. Having said that, I still find the overall contribution useful. For example, if it is the case that there is an $R^2$-sortability issue in a more relevant dataset, one could at least find out to what extent it exists based on the insights of this paper. (--) A theoretical analysis would have been more helpful. Right now, it is not clear if this is just a problem in linear ANMs or can be generalised to nonlinear/nonparametric cases. Moreover, it might have made clearer whether it is actually the linear parameters which affect it the most in non-tree situations (which would be the case in most practical settings). 
(--) Unlike varsortability, which almost all newly developed causal discovery algorithms have been clearly shown to exploit, it is not necessarily the case here. For example, it is not clear whether new optimisation-based/neural-network-based methods exploit this to perform better. I also don't think it is necessarily a bad thing to exploit it if it is the case that $R^2$-sortability exists in real world settings (this is yet unclear from this paper, see the above point). Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: 1. Is it possible to generate the synthetic data by first fixing the desired $R^2$, and then generating synthetic data which exactly matches this value? 2. Could you comment on whether this is a potential issue beyond ANMs and linear models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have done a clear job of stating the assumptions under which their conclusions are valid. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for pointing out the impact of our findings on the causal structure learning community. We propose to make the following changes in response to your review and answer your questions individually below. --- ### Edit summary * **Edit 1** We will highlight the open nature of real-world $R^2$-sortability by adapting lines 19-21 (Abstract): \ _Our findings reveal high $R^2$-sortability as an assumption about the data generating process relevant to causal discovery and implicit in the choice of simulation parameters. It should be made explicit, as its prevalence in real-world data is an open question._ * **Edit 2** We will emphasize the impact of $R^2$-sortability on benchmarking practices in line 84 (Contribution): \ _Characterizing $R^2$-sortability and uncovering its role as a driver of causal discovery performance enables distinguishing between data with different levels of $R^2$-sortability when benchmarking or applying causal discovery algorithms to real-world data._ * **Edit 3** We will emphasize the benefit a full theoretical characterization of $R^2$-sortability could provide in line 337 (Discussion): \ _A complete theoretical characterization of the conditions sufficient and/or necessary for extreme $R^2$-sortability could, if found, help decide where one may hope to exploit it in practice._ --- ### Answers to your questions **Generating data with a specific $R^2$-sortability** Because $R^2$-sortability arises from a complex interplay of simulation parameters, there is no systematic way that we know of to achieve a target $R^2$-sortability when sampling ANM parameters iid. One may use rejection sampling or update the properties of a given graph to achieve a target $R^2$-sortability with non-iid edge weights (cf. Sections 5 in Agrawal [2021], Squires [2022], as pointed out by reviewer zF5c). 
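For illustration, one non-iid scheme of the kind referenced (fixing the fraction of variance the parents explain, in the spirit of the cited sections of Agrawal 2021 and Squires 2022) can be sketched for a simple chain. This is our own toy construction, not code from the paper or the cited works:

```python
import numpy as np

def sample_chain_fixed_r2(n, d, r2, weight=1.0, seed=0):
    """Sample a linear chain X1 -> ... -> Xd in which, for every non-root
    node, the parent explains exactly the population fraction `r2` of the
    node's variance (noise variances are rescaled per node, so the
    effective parameters are no longer iid across nodes)."""
    rng = np.random.default_rng(seed)
    X = np.empty((n, d))
    X[:, 0] = rng.normal(size=n)
    var = 1.0                                   # population variance of X1
    for j in range(1, d):
        sig_var = weight ** 2 * var             # variance explained by parent
        noise_var = sig_var * (1.0 - r2) / r2   # => R^2 = sig / (sig + noise)
        X[:, j] = weight * X[:, j - 1] + rng.normal(
            scale=np.sqrt(noise_var), size=n)
        var = sig_var + noise_var
    return X
```

By construction, every non-root node has the same population cause-explained variance fraction `r2`, so no $R^2$ gradient builds up along the causal order.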
**$R^2$-sortability beyond ANMs and linear models?** We believe that $R^2$-sortability can be an issue beyond ANMs and linear models, although its impact may vary depending on function class and parameters, and increasing functional complexity along the causal order could reduce or flip $R^2$-sortability. In practice, estimating $R^2$ for nonlinear models requires the choice of a suitable regression method. Prompted by your question, we ran an experiment using our linear $R^2$-sortability as a first-order approximation on two different nonlinear Gaussian Process simulations used by Zheng [2020] and Reisach [2021]. Linear $R^2$-sortabilities range from 0.10 to 0.85, indicating that extreme values are possible in non-linear ANMs and making this question an interesting topic for future work.

|graph|SEM|Var-sortability|$R^2$-sortability|
|---|---|---|---|
|ER(20, 20)|Additive GP|0.71$\pm$0.13|0.49$\pm$0.18|
||GP|0.66$\pm$0.14|0.37$\pm$0.13|
|ER(20, 80)|Additive GP|0.88$\pm$0.06|0.41$\pm$0.10|
||GP|0.61$\pm$0.13|0.10$\pm$0.06|
|SF(20, 20)|Additive GP|0.83$\pm$0.11|0.85$\pm$0.10|
||GP|0.66$\pm$0.16|0.52$\pm$0.17|
|SF(20, 80)|Additive GP|0.96$\pm$0.02|0.75$\pm$0.14|
||GP|0.66$\pm$0.11|0.23$\pm$0.11|

---- ### Additional points **Real-world $R^2$-sortabilities** We agree that real-world $R^2$-sortabilities are an important open question that cannot be answered by any single dataset (see line 336), and will communicate this more clearly (see **Edit 1**). Real-world $R^2$-sortabilities are not known, and that is precisely why we think it is important to be aware that current simulations systematically tend to result in high $R^2$-sortabilities. Any assumption in causal discovery (e.g. Gaussian vs non-Gaussian noise, equal vs unequal noise variances, etc.) may or may not hold on different real-world data, which is why they are treated separately in simulations. 
For benchmarking, choosing ANM parameters that result in high $R^2$-sortability amounts to an assumption, which our contribution makes explicit. This allows a distinction between data with different $R^2$-sortabilities in the same way we already distinguish between other assumptions to reflect different possible real-world scenarios. We agree that this should be spelled out and will do so (see **Edit 2**). **Theoretical analysis** $R^2$ depends on a complex interplay of weights, noise parameters, and graph structure, which greatly complicates the development of a general theoretical characterization of $R^2$-sortability for generic and flexible parameter sampling schemes. While it is outside the scope (and page limit) of our contribution, we strongly share your view that more research on the conditions for the emergence of $R^2$-sortability would be beneficial, and hope that sharing our work with the NeurIPS community will inspire research leading to new theoretical insights. We will state the potential benefit of such a result through **Edit 3**. **Meaning of $R^2$-sortability for existing algorithms** As a property of the data, $R^2$-sortability affects the baseline performance that can be achieved and is thus relevant to the interpretation of the results obtained by causal discovery algorithms (we do not claim that other causal discovery algorithms exploit $R^2$-sortability). For example, the same causal discovery performance may be less impressive if the $R^2$-sortability on a dataset is 0.95 rather than 0.5. --- Rebuttal Comment 1.1: Comment: Dear Reviewer iqwe, The authors have provided a response to your review comments. Could you see whether your concerns were properly addressed by the authors' response, or at least acknowledge you read it? Many thanks, The AC
Summary: This paper proposes an interesting extension of the var-sortability approach proposed by Reisach et al. (2021), denoted R2-sortability. While var-sortability measures the agreement between the causal order and the order of increasing marginal variance, R2-sortability measures the agreement between the causal ordering and the order of the explainable fraction of a variable's variance (as captured by the coefficient of determination R2). Contrary to var-sortability, the R2-sortability approach is scale-invariant and can be applied even when the data scale is arbitrary. The paper proposes a baseline algorithm, "R2-SortnRegress", which can match, and even exceed, the performance of established causal discovery methods such as the PC and FGES algorithms. Comparisons against these baselines (and the Var-SortnRegress and RandomRegress baselines) were performed using synthetic data generated from random and scale-free graphs with Gaussian noise (with noise standard deviations and edge weights i.i.d. sampled from uniform distributions). The paper also presents a brief real data comparison using a single dataset. The paper provides a mathematical analysis of the influence of the weight distribution on R2 in the simple setting of causal chains, as well as an empirical evaluation on more complex random graphs, underscoring the role of the weight distribution as a driver of R2. The analyses presented in the paper illustrate the need to make decisions on all ANM parameters in simulations. Strengths: This is a well written and interesting paper. The approach is novel and its need is well motivated. The proposed approach is sound and well described in the text. Weaknesses: I only have a few very minor comments. The notation is sometimes a little inconsistent. For instance, $\sigma$ and $\phi$ are used interchangeably to represent the standard deviation. 
Also, in Algorithm 1, $\sigma$ is also used to represent the permutation that sorts $\pi$ in ascending order. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Why did the paper only select PC and FGES as traditional causal discovery algorithms in their comparisons? (The Reisach et al. (2021) paper includes other popular methods.) Also, why do the synthetic data experiments only report SID, while the real data illustration reports both SID and SHD? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank you for your thorough and positive review. We propose to make the following edits in response to your review and answer your questions individually below. --- ### Edit summary * **Edit 1** We will include DirectLiNGAM (Shimizu 2021) as an additional algorithm in our comparisons in Appendix A3, Figure 5, where we consider non-Gaussian noise, among other settings. * **Edit 2** We will add SHD results for the settings shown in Figure 1 and Figure 3 as a new subsection in Appendix A2 and refer to them in Section 4.1. * **Edit 3** We will replace $\sigma$ in the algorithm and align the use of $\phi$ and $\sigma$. --- ### Answers to your questions **Comparison to other algorithms** Following the findings by Reisach [2021], we decided to include Var-SortnRegress as a baseline representative of the performance of scale-sensitive algorithms, given the large and growing number of such algorithms (Vowels 2021). In Section A3 we also include the _Top-Down_ algorithm as an additional method that exploits another type of assumption (equal noise variances), which performs largely similarly to _Var-SortnRegress_. \ We will further add the DirectLiNGAM algorithm as described in **Edit 1**. DirectLiNGAM highlights the specificity of the non-Gaussian setting by outperforming the other methods for non-Gaussian noise but performing badly otherwise, in line with theory. Is there another algorithm that would provide a substantially different aspect or perspective and thus help highlight the main takeaway of the comparison? If so, we may be able to include it for the camera-ready version of the paper. **Addition of SHD results (qualitatively the same as SID results)** Thank you for the suggestion. We agree that the SHD results may be interesting to readers and will add them as described in **Edit 2**. The trends in terms of SHD are the same as in the SID results. 
$R^2$-SortnRegress improves with increasing $R^2$-sortability, and absolute performances are better on SF graphs than on ER graphs. --- ### Additional points **Notation** — Thank you for pointing this out, we will address it (cf. **Edit 3**). --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The inclusion of DirectLiNGAM in the non-gaussian setting makes sense. I have no further questions and will maintain my overall score of 7.
Rebuttal 1: Rebuttal: Dear All, We thank you for the review of our submission, your valuable suggestions, and your recognition of our work's significance and timeliness for the causal discovery community. All reviews seem to agree on the good technical soundness (4x good) and presentation (1x excellent, 3x good) of our paper, as well as on the overall contribution (3x good, 1x fair) it makes by characterizing a novel pattern implicit in the parameters of additive noise models that can strongly affect causal discovery. We have revised our manuscript carefully and made improvements based on all reviewers' comments: * _Updated Figures._ We have improved existing figures and created new figures to complement the presentation of our experimental results and better highlight some of the strengths of our work (scale-invariance of $R^2$-sortability and -SortnRegress, distribution of $R^2$-sortabilities, robustness to different ways of counting sortable node pairs). * _Revisions._ We have rephrased or added individual sentences to clarify explanations and include two new references. We summarize these edits below and provide detailed responses to all reviewers in our individual replies. We are looking forward to discussing our work with you and hope our replies answer your questions and help in your assessment of our submission. We will continue to revise our manuscript if further suggestions arise during the author-reviewer discussion. --- __Updated Figures__ - __see Figure 1 in attached PDF__ [zF5c, UFwV] (Figure 1, Section 4.1). \ We have revised Figure 1 (and 3 & 4 likewise) to better show the strengths of our method. We will show results on standardized data to emphasize the scale-invariance of our method. For reference, we visually differentiate the baseline RandomRegress and include Var-SortnRegress on raw data as a dashed line. 
To simplify the interpretation of trends and uncertainty we will show moving averages using a window of width 0.1 (instead of binning by decile), and show error bars for the 95% confidence interval of the mean. - __see Figure 2 in the attached PDF__ [UFwV] (Appendix A2). \ We will include the $R^2$-sortability histograms for the settings shown in Figure 1 and 3 in Section A2. For ER graphs, the distribution of $R^2$-sortabilities is close to symmetric, while for SF graphs it has a strong left skew. - __see Figure 3 in the attached PDF__ [zF5c] (new Appendix C). \ We will add an appendix section showing an empirical comparison of sortabilities obtained using Equation (3) as is, compared to a version of Equation (3) that only considers the existence of a path. In the experiment we observe strong alignment (Kendall rank coefficients of $0.86$ for $R^2$-sortability and $0.84$ for var-sortability) between the two versions. - [j7gp] (Figure 5, Appendix A3). \ We will include DirectLiNGAM (Shimizu 2021) as an additional algorithm in our comparisons in Appendix A3, Figure 5 where we consider non-Gaussian noise, among other settings. We will also add the SHD results corresponding to Figures 1 and 3 in A2 (the trends are the same as for SID). __Revisions__ * We will highlight the open nature of real-world $R^2$-sortability [iqwe, UFwV] (lines 19-21, Abstract). \ _Our findings reveal high $R^2$-sortability as an assumption about the data generating process relevant to causal discovery and implicit in the choice of simulation parameters. It should be made explicit, as its prevalence in real-world data is an open question._ * We will emphasize the impact of $R^2$-sortability on benchmarking practices [iqwe, UFwV] (line 84, Contribution). 
\ _Characterizing $R^2$-sortability and uncovering its role as a driver of causal discovery performance enables distinguishing between data with different levels of $R^2$-sortability when benchmarking or applying causal discovery algorithms to real-world data._ * We will add the following explanation to Equation (3) [zF5c] (line 168, Section 3.1). \ _In effect, this is the fraction of directed paths of unique length between any two nodes that satisfy the sortability condition in the numerator._ * We will clarify the implications of our findings on real-world data for simulations [UFwV] (line 254, Section 4.2). \ _This indicates that, on real-world data, we may not expect to see consistently high $R^2$-sortabilities as much as we do in many simulation settings (see Appendix B.2). For benchmarks to be representative of this factor, they should differentiate between settings with different levels of $R^2$-sortability._ * We will include the suggested references [zF5c] (line 259, Section 5). \ _$R^2$-sortability can be mitigated by introducing the assumption of a constant exogenous noise fraction (see for example Agrawal 2021, Squires 2022), which requires non-iid edge weight sampling in simulations._ * We will highlight the novelty and usefulness of Equation (5) as a condition for the divergence of node variances [zF5c, UFwV] (line 274, Section 5.1). \ _We introduce the following sufficient condition for weight distributions to result in diverging node variances along causal chains of increasing length:_ * We will emphasize the benefit a full theoretical characterization of $R^2$-sortability could provide [iqwe] (line 337, Section 6). \ _A complete theoretical characterization of the conditions sufficient and/or necessary for extreme $R^2$-sortability could, if found, help decide where one may hope to exploit it in practice._ Pdf: /pdf/1c542a7dcafa596cd1ae452587f68521168c318c.pdf
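As a numerical companion to Edit 4 and the values quoted in the individual replies ($\approx 0.16$ and $\approx -1.29$), the condition $\mathbb{E}[\log|V|] > 0$ from Equation (5) can be evaluated in closed form for uniform weight distributions. The helper below is ours (using $\int \log x\,dx = x\log x - x$), not part of the paper:

```python
import math

def mean_log_uniform(a, b):
    """E[log V] for V ~ Uniform(a, b) with 0 < a < b, in closed form."""
    return (b * math.log(b) - b - (a * math.log(a) - a)) / (b - a)

# Unif((-2, -0.5) u (0.5, 2)): E[log|V|] equals E[log V] for Unif(0.5, 2).
print(round(mean_log_uniform(0.5, 2.0), 2))    # 0.16  -> variances diverge
# Unif((-0.5, -0.1) u (0.1, 0.5)) reduces to Unif(0.1, 0.5).
print(round(mean_log_uniform(0.1, 0.5), 2))    # -1.29 -> no divergence
```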
NeurIPS_2023_submissions_huggingface
2023
Leave No Stone Unturned: Mine Extra Knowledge for Imbalanced Facial Expression Recognition
Accept (poster)
Summary: This paper deals with the topic of facial expression recognition (FER). In particular, the authors point out the problem of imbalanced class data, since most FER data sets will have many more neutral or happy face images than images with other facial expressions. The authors propose an approach to address this problem building on two ideas. The first is that there may be information to be learnt about the minor classes even from samples of the major classes. Based on this observation, the authors propose a novel attention map rebalancing to regularize the model. As the second idea, they introduce a label smoothing approach that weights the minor classes more than the major classes. Results on the RAF-DB and FERPlus facial expression image data sets show that the proposed approach is able to outperform state of the art FER approaches. Strengths: The main idea of learning about the minor classes from training samples in both major and minor classes appears to be novel and is one of the strengths of this paper. The idea of weighting the minor classes more heavily or using smoothed labels has been used in other applications before, but perhaps not in FER. Another strength of this paper is the set of numerical results provided. Both data sets being used are state of the art data sets and the results clearly indicate that the proposed method outperforms competing methods for FER. Another strength of the paper is that it is relatively well-written, although the authors should have more clearly explained the difference between "overall accuracy" and "mean accuracy". Weaknesses: In Fig. 1, flipping of the images is shown to introduce the notion of re-balanced attention consistency. It is not clear if flipping is the only transformation that makes sense for FER or if some other transformations (e.g., scaling, intensity attenuation or gain) also make sense. That needs to be discussed in more detail. 
Another weakness is that the proposed method is somewhat similar to the EAC approach, although the authors do a good job of clarifying the similarities and differences between their approach and EAC. The main assertion that there is information to be gained about minor classes from samples in major classes seems to be mainly backed up by pointing to the similarities in corresponding attention maps (e.g., open mouth regions for fear and surprise categories). This evidence is subjective and not entirely convincing. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. What is the difference between mean accuracy and overall accuracy? 2. Line 106: How do we know that the two types of information are "orthogonal"? Perhaps characterizing them as "complementary" might be more appropriate. 3. Fig. 1 caption: "--- while do not degrade the high accuracy on major classes" should be "--- while not degrading the high accuracy on major classes" 4. Fig. 2: I assume that the bar corresponding to 0.6 probability is for neutral images --- that needs to be clearly indicated in the figure. 5. Eq. (7): Is the tilde on y() here being used to denote the flipped samples or something else? 6. Shouldn't there be bold-face numbers in every column of Table 1 and other tables? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: There is no discussion of limitations in this submission. Authors may want to add some discussion of the performance of the proposed approaches based on gender, race and/or age considerations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are delighted to receive your review. We sincerely thank you for your professional review, which gives us much guidance for polishing our paper. Thank you very much. **Weaknesses:** **1.** Thanks for your valuable suggestion. We carried out experiments to study the performance of using other transformations. The results are displayed below.

|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|Intensity|95.02|86.47|82.01|82.98|63.13|72.84|56.76|86.05|77.03|
|Scaling|95.78|**91.91**|84.31|85.41|**75.00**|**82.10**|60.81|89.37|82.19|
|Ours|**96.37**|89.56|**89.33**|**87.84**|66.89|80.86|**66.22**|**89.77**|**82.44**|

The results illustrate that the intensity transformation performs poorly, while scaling performs well, coming close to our method. The reason is that attention map consistency regularizes the model to focus on the same regions before and after the transformation; since the attention map in our method has height and width dimensions, it incorporates spatial information. Thus, the transformation should be a spatial transformation to maximize the benefit of the method. **2.** Although the technical details of our RAC module are similar to EAC, the motivation and the task are different. EAC aims to prevent the model from memorizing part of the features to handle label noise, while our RAC aims to mine extra information related to minor classes from all training samples to solve imbalanced learning. We also introduce an RSL module to further enhance performance. Experimental results show that our method distinctly outperforms EAC on the imbalanced learning task. **3.** Thanks for bringing that up. The evidence is somewhat subjective since facial expression recognition can be subjective and context-dependent, unlike other classification tasks with clear-cut boundaries like CIFAR-10. 
For instance, various annotators of facial expression datasets might yield diverse results for the same image. Our assertion is also supported by [1-3], all of which support the view that expression lies on a continuum, implying that a sample could contain information from several classes. Notably, the dataset in [3] comprises expressions such as sadly fearful, sadly disgusted, fearfully surprised, and more. This aligns with our research, where we strive to extract extra information related to minor classes like fear and disgust from major classes like surprise and sadness. Moreover, our Supp. material contains additional visualization results to provide a more intuitive understanding. We conducted experiments to support our claim. During training, we employed attention map consistency solely on the label class, rather than all classes, for each sample. The results are presented below.

|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|One map|95.02|**90.00**|87.66|85.71|57.50|**80.86**|63.51|88.30|80.04|
|Ours|**96.37**|89.56|**89.33**|**87.84**|**66.89**|**80.86**|**66.22**|**89.77**|**82.44**|

Our method outperforms the comparison group in almost all cases. The results validate our claim that FER resembles label distribution learning, as solely utilizing attention map consistency on the label class fails to capture additional information from other classes within each given sample. **Questions:** **1.** Mean accuracy is calculated as the average value of the test accuracy for each class. Overall accuracy is calculated as the test accuracy for the entire test set. Mean accuracy equals overall accuracy when the test set is balanced, meaning each class has the same number of samples. Both RAF-DB and FERPlus have imbalanced test sets, implying the model could easily achieve high overall accuracy by correctly classifying major classes such as happy. 
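The distinction between the two metrics can be made concrete with a small sketch (an illustrative NumPy example, not the paper's code): on an imbalanced test set, a classifier that simply predicts the major class everywhere scores well on overall accuracy but poorly on mean accuracy.

```python
import numpy as np

def overall_and_mean_accuracy(y_true, y_pred, num_classes):
    """Overall accuracy (whole test set) vs. mean per-class accuracy."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    overall = float(np.mean(y_true == y_pred))
    per_class = [float(np.mean(y_pred[y_true == c] == c))
                 for c in range(num_classes)]
    return overall, float(np.mean(per_class))

# Imbalanced toy test set: 8 "happy" samples (class 0), 2 "fear" samples (class 1).
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10                    # a classifier that always predicts "happy"
overall, mean = overall_and_mean_accuracy(y_true, y_pred, num_classes=2)
assert overall == 0.8                # looks strong on the whole test set
assert mean == 0.5                   # but the minor class is at 0% accuracy
```

This is exactly why the rebuttal reports both numbers: mean accuracy exposes poor performance on minor classes that overall accuracy hides.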
This is unsatisfactory, as understanding the minor classes like fear and disgust, which convey negative human emotions, is equally crucial. **2.** Thanks for your valuable advice. We use the term "orthogonal supplement" to convey that previous works primarily address the noise in FER datasets. However, label noise is not connected to the varying sample number of each class, which is what causes the imbalanced learning problem. Nevertheless, "complementary", as you suggest, is indeed a better choice, and we have revised our manuscript. **3.** We appreciate your suggestion. We have corrected the grammar mistake and conducted proofreading to polish our writing. **4.** Yes, you're right. We have indicated the neutral class in the new version of Fig. 2 and added it to our manuscript. **5.** The tilde on y() signifies the re-balanced smooth label, distinct from the original label y(). To enhance clarity, we have replaced tilde y() with y_smooth(). **6.** As we mainly focus on the mean accuracy and the accuracy of minor classes, we did not use bold-face numbers in every column. Following your advice, we used bold-face numbers in every column of our tables and marked the mean accuracy and the accuracy of minor classes using a different color. **Limitations:** We added the limitation discussion regarding gender, race, and age to our revised manuscript. Societal and cultural factors might lead to distinct expression patterns, which could impact the model's ability to generalize across genders and races. The situation is similar for age, as facial expressions evolve over time due to changes in muscle tone and the effects of aging. [1] Sixteen facial expressions occur in similar contexts worldwide. In Nature. [2] Multi-Dimensional, Nuanced and Subjective-Measuring the Perception of Facial Expressions. In CVPR. [3] Multi-Label Compound Expression Recognition: C-EXPR Database & Network. In CVPR. --- Rebuttal Comment 1.1: Title: Thanks for your rebuttal comments to my review. 
Comment: Thank you for carefully responding to my comments and feedback. I have also read the other reviews and your rebuttal comments to those reviews. Overall, I am convinced that this will be a valuable contribution to NeurIPS 2023 and I will stay with my original rating of Accept. --- Reply to Comment 1.1.1: Title: Thanks for your time and effort. Comment: We sincerely thank you for both your prompt review and feedback. Your professional and thoughtful review provides us with many instructions for improving the quality of our paper. We'd also like to express our gratitude for your dedication in thoroughly reading all other reviews and comments. Your responsible and diligent reviewing is an honor for us. We're delighted to learn that you continue to support our paper and recommend its acceptance. Thank you once again for your valuable contributions.
Summary: This paper mainly focuses on solving the imbalanced problem in Facial Expression Recognition (FER). The goal of this paper is to enhance the performance on minor classes without compromising the performance on major classes. The contribution is twofold. A re-balanced attention consistency (RAC) module is proposed to mine extra knowledge pertaining to minor classes from both major and minor-class samples. A re-balanced smooth labels (RSL) module is proposed to regulate the classification loss and promote balanced learning. Extensive experiments on different datasets and backbones validate the effectiveness of the proposed method. Strengths: (1) This paper adapts attention map consistency to solve the imbalanced learning problem of FER for the first time, which shows that solving imbalanced FER through a re-balancing strategy is promising. (2) The experiments, including various backbones, imbalance factors, and attention visualizations, are sufficient. Weaknesses: (1) The purpose of re-balanced attention consistency (RAC) is to extract balanced and transformation-invariant knowledge of minor classes from all training samples. The word "knowledge" is unclear and should be more specific. Does it refer to high response values for specific regions on the facial feature map or something else? (2) The authors explain the role of re-balanced smooth labels (RSL) as regulating the classification loss and promoting balanced learning. This explanation is too abstract and difficult for readers to understand. Why is it designed as Eq.(8)? What do the two terms of Eq.(8) represent, and why does fusing these two terms work? I think the authors should provide a more detailed explanation. (3) The proposed re-balanced attention consistency (RAC) is very similar to the attention consistency used in EAC. The difference is that an additional balance weight is used to re-weight the attention maps. 
I would like to know how much of the performance improvement is due to the addition of the balance weight. I have looked at the ablation study in Table 5 and can only see the performance improvement of the entire RAC module. (4) The comparison in Table 1 is questionable. The overall accuracy of EAC is 88.01%, which does not match the number of 89.99% in the original EAC paper (see Table 6 in [46]). (5) In line 283, the authors think non-overlapping attention maps are better than overlapping ones. Are there any references supporting this viewpoint? Overall, I think the paper is tackling the right problem. However, the unclear description of the proposed modules weakens the contribution of the paper. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Same as the weakness section. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The authors did not discuss the limitations in this paper. I suggest that the authors provide the "maximum capability" of the proposed method. In other words, what degree of imbalance will cause the proposed method to fail? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks very much for your thorough review, which helps us greatly in improving our paper. **Weaknesses:** **1.** Yes, the knowledge pertains to attention regions on the feature map. The transformation-invariant knowledge ensures that the FER model focuses on the same region before and after the transformation, which improves accuracy according to [7, 46]. The novelty and motivation of our paper are rooted in our idea of extracting useful minor-class features across all training samples, instead of simply introducing attention map consistency for the imbalanced FER task. **2.** The motivation for RSL is shown in Fig.2. By setting the smooth label inversely proportional to the sample number of each class, we create an imbalanced smooth label with larger smooth labels for minor classes compared to major classes. To achieve minimum loss during training, the FER model will predict more logits into the minor classes. As stated by Reviewer ss3S, this is akin to weighing the minor classes more heavily during training to improve the model's performance on them. The computation of Eq.(8) follows Fig.2 and is derived from Eq.(3) of label smoothing in paper [1]. In Eq.(8), $\tilde{y}(i,l)$ is the $l$-th class value of the re-balanced one-hot label of $x_{i}$. The first term represents the original value of the one-hot label with a weight of $(1-\alpha)$. The second term is the smooth label introduced by our method, where $(\alpha/L)$ is the weight for normalization. $B_{l}$ is our re-balanced weight. Eq.(8) does not represent a fusion of two terms; it is the computation of our introduced RSL. **3.** While the technical implementation of RAC is similar to EAC, the motivation and the target task are quite distinct. EAC focuses on preventing the model from remembering part of the features to mitigate label noise, while our RAC aims to mine extra information of minor classes from all samples to promote balanced learning. 
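A minimal sketch may make this computation concrete. The NumPy snippet below is one plausible reading of the description of Eq.(8), not the authors' code: the re-balanced weight $B_l$ is taken as the normalised inverse effective number of samples from [5], and the smoothing mass $\alpha$ (set to 0.1 here purely for illustration) is spread across classes in proportion to it. The class counts are the assumed standard RAF-DB training counts (the rebuttal itself quotes 4,772 for happy); they reproduce the authors' $\beta$ = 0.9999 weight values reported elsewhere in this discussion to within rounding.

```python
import numpy as np

def rebalanced_weights(counts, beta=0.9999):
    """Normalised inverse effective number of samples (Cui et al. [5])."""
    counts = np.asarray(counts, dtype=float)
    w = (1.0 - beta) / (1.0 - beta ** counts)   # inverse effective number
    return w / w.sum()                          # normalise so the weights sum to 1

def rebalanced_smooth_label(label, counts, alpha=0.1, beta=0.9999):
    """Hypothetical reading of Eq.(8): keep (1 - alpha) on the true class and
    spread the smoothing mass alpha according to the re-balanced weights, so
    minor classes receive a larger share of it than major classes."""
    L = len(counts)
    one_hot = np.eye(L)[label]
    return (1.0 - alpha) * one_hot + alpha * rebalanced_weights(counts, beta)

# Assumed RAF-DB training counts in the order Sur, Fea, Dis, Hap, Sad, Ang, Neu.
counts = [1290, 281, 717, 4772, 1982, 705, 2524]
w = rebalanced_weights(counts)
assert abs(w[1] - 0.4188) < 1e-3   # fear (smallest class) gets the largest weight
assert abs(w[3] - 0.0306) < 1e-3   # happy (largest class) gets the smallest

y = rebalanced_smooth_label(label=3, counts=counts)  # a "happy" sample
assert abs(y.sum() - 1.0) < 1e-9   # still a valid label distribution
assert y.argmax() == 3             # the true class still dominates
```

Under this reading, a sample from a major class still carries a small but non-uniform amount of label mass on every minor class, which is how the RSL term pushes logits toward the minor classes during training.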
Since our paper also introduces the RSL module, to isolate the effect of the balance weight we combine EAC with our RSL module as the comparison group. The results on RAF-DB show that our method outperforms EAC+RSL on both overall and mean accuracy.

|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|EAC+RSL|95.95|88.24|88.91|86.63|65.63|79.63|63.51|88.92|81.21|
|Ours|**96.37**|**89.56**|**89.33**|**87.84**|**66.89**|**80.86**|**66.22**|**89.77**|**82.44**|

We further show the results when the imbalance factor is 150. The results illustrate that our method also outperforms EAC+RSL in most cases.

|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|EAC+RSL|**97.05**|**93.24**|77.20|78.72|**38.75**|59.88|29.73|84.52|67.80|
|Ours|96.62|91.91|**79.29**|**83.89**|36.25|**61.11**|**43.24**|**85.20**|**70.33**|

**4.** As EAC does not report the accuracy of each class and the mean accuracy, we could not cite their overall accuracy directly. We re-implemented their method strictly using their code and reproduced the 89.99% accuracy. However, we noticed that their code is based on ResNet-50, while our Table 1 reports all results of different methods using ResNet-18 as the backbone. To ensure a fair comparison, we replaced the pretrained ResNet-50 with a pretrained ResNet-18. The best overall accuracy using ResNet-18 reached 88.85%, and the overall accuracy of the last epoch was reported as 88.01%. We believe reporting the best epoch accuracy may be considered cherry-picking, so we reported all the results using the accuracy of the last epoch. We have summarized both the last epoch accuracy and the best epoch overall accuracy in the following table. The results in the table show that our method outperforms EAC under almost all cases. Note that the best overall accuracy does not necessarily mean that the model achieves the best mean accuracy in the same epoch. 
In fact, the best mean accuracy of our proposed method reached 83.18% under ResNet-18.

|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|EAC last|95.70|86.91|87.87|87.23|59.38|80.25|58.11|88.01|79.35|
|Ours last|**96.37**|**89.56**|**89.33**|**87.84**|**66.89**|**80.86**|**66.22**|**89.77**|**82.44**|
|EAC best|96.03|88.68|**90.17**|86.02|61.25|79.63|59.46|88.85|80.18|
|Ours best|**96.54**|**89.26**|89.75|**88.15**|**65.63**|**81.48**|**64.86**|**89.80**|**82.24**|

**5.** In Fig. 4, the non-overlapping attention maps are from the happy and sad classes. These classes shouldn't exhibit similar features. However, other class pairs might differ; the rationale comes from the definition of compound expressions [2-3]. For instance, the compound expression "fearfully surprised" exists, leading to similar features between these two classes and resulting in overlapping attention maps. **Limitations:** We carried out experiments with an extreme case of only 10 samples for each of the disgust and fear classes. Due to space limitations, we refer the reviewer to our response to reviewer Ey1e for the summary of training sample numbers and baseline performance. The results illustrate that our method outperforms the SOTA method EAC, mainly in the minor-class accuracy and the mean accuracy. Note that happy (4,772 samples) has around 500 times more samples than disgust and fear, which leads to low test accuracy for disgust and fear. We think this degree of imbalance might be considered a limitation.

|Method|Hap|Neu|Sad|Sur|Ang|Dis|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|EAC|96.54|**89.71**|**90.79**|**90.58**|87.04|0.00|0.00|85.63|64.95|
|Ours|**96.79**|**89.71**|90.17|90.27|**87.65**|**7.50**|**1.35**|**86.05**|**66.21**|

[1] The Devil is in the Margin: Margin-based Label Smoothing for Network Calibration. In CVPR. [2] Sixteen facial expressions occur in similar contexts worldwide. In Nature. 
[3] Multi-Label Compound Expression Recognition: C-EXPR Database & Network. In CVPR. --- Rebuttal Comment 1.1: Title: Author follow-up Comment: Dear Reviewer N9Q5, We sincerely appreciate the time and effort you dedicated to reviewing our paper. We hope that our responses have effectively addressed your concerns. If you have any additional points of concern, please do not hesitate to bring them to our attention—we would be more than willing to address them. We eagerly await your feedback. Thank you very much!
Summary: This paper focuses on the imbalanced learning problem in facial expression recognition (FER). Unlike existing works in imbalanced learning for image classification, the proposed method addresses imbalanced FER from a novel perspective of label distribution learning. Specifically, the proposed method is motivated by the observation that certain major classes in FER may contain valuable features for the minor classes. Building upon this observation, a re-balanced attention map consistency module is introduced, which encourages the model to extract useful information about the minor classes from both minor and major-class samples through attention map consistency. Additionally, a re-balanced label smooth module is incorporated to leverage prior knowledge of the training data distribution and regularize the classification loss. Experimental results on FER datasets with varying levels of imbalance and different backbone architectures demonstrate the effectiveness of the proposed method, particularly in improving the performance of the minor classes. Strengths: 1. This paper provides an interesting label distribution learning perspective to deal with the imbalanced FER, which is novel to me. It proposes two modules to mine extra information of minor-class samples from both major and minor-class samples. From my understanding, this method is specifically designed for FER and technical novelties are sound. 2. The experimental demonstration well supports the claims. The authors have curated different FER datasets with various imbalance factors and the performance improvement on these imbalanced FER datasets is non-trivial compared with other SOTA methods, especially on the minor classes like fear and disgust. Ablation studies presented in Table 5 indicate that each of the proposed modules contributes to the overall performance. 
I also noticed that in Table 4, combined with Swin-T, the proposed method achieves remarkable results of 92.31% overall accuracy and 87.71% mean accuracy on the widely used RAF-DB dataset. 3. The proposed method is easy to implement and the paper is written in an easy-to-understand manner. Weaknesses: 1. Lack of experiments. As illustrated by the authors, they utilize the weight (i.e., a hyper-parameter of 0.9999) introduced by Ref. [5] as the re-balanced weight for both the re-balanced attention maps and re-balanced smooth label. However, I think there should be some studies regarding this hyperparameter, and what happens if we utilize different values for the two proposed modules? 2. Some claims were not supported by experiments. The authors claimed that "EAC employs attention maps from all expression classes, rather than just the labeled class, which is similar to label distribution learning". However, there is a lack of experiments to validate this claim. I believe the authors should include experiments to validate their point of label distribution learning, i.e., what is the performance if we only utilize the attention map of the labeled class instead of all classes? 3. Comparisons with alternative techniques were not sufficiently performed. For example, what if we use GradCAM instead of CAM to implement attention consistency? What if we utilize basic label smoothing instead of the proposed re-balanced label smoothing? Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: I have some questions regarding the weight design of the proposed method. Since the weight is utilized by both of the proposed modules simultaneously, I think it is very important for the performance of the proposed method. Therefore, I am curious what the performance would be if we simply set the weight as the inverse of the sample number of each class? There should be more discussion regarding the weight design of this method. Confidence: 5: You are absolutely certain about your assessment. 
You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 4 excellent Contribution: 4 excellent Limitations: The limitations of this paper should be discussed. This method is designed specifically for imbalanced learning in FER, as FER has the label distribution learning characteristic. However, things might be different in other imbalanced image classification tasks. For example, minor classes in imbalanced CIFAR-100 might share few common features with major classes in imbalanced CIFAR-100. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thrilled to receive your review. We sincerely thank you for acknowledging the strengths of our work. **Weaknesses:** **1.** The re-balanced weight is inspired by the effective number of [5]. We visualize the re-balanced weight on the original RAF-DB according to different $\beta$ values. The re-balanced weight is normalized, and the sum of the weights is 1. The standard deviation (Std) reflects the distribution of the re-balanced weight. From the results in the table, we can see that on RAF-DB, when $\beta$ = 0.9, the re-balanced weight follows the uniform distribution, while when $\beta$ = 0.9999, the re-balanced weight has a large standard deviation and is similar to the inverse of the sample number. Notice that when $\beta$ = 0.9, our method degrades to EAC plus traditional label smoothing. Thus, EAC plus label smoothing could be viewed as a special form of our method which is suitable for balanced training sets, while our method is a more general form with our introduced RAC and RSL modules.

|$\beta$|Sur|Fea|Dis|Hap|Sad|Ang|Neu|Std|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|0.9|0.1429|0.1429|0.1429|0.1429|0.1429|0.1429|0.1429|0.0000|
|0.99|0.1415|0.1505|0.1417|0.1415|0.1415|0.1417|0.1415|0.0031|
|0.999|0.1091|0.3227|0.1545|0.0798|0.0917|0.1563|0.0860|0.0789|
|0.9999|0.0959|0.4188|0.1677|0.0306|0.0645|0.1705|0.0520|0.1235|
|inverse|0.0939|0.4310|0.1689|0.0254|0.0611|0.1718|0.0480|0.1290|

To solve the imbalanced FER, following [5], we utilize a $\beta$ value of 0.9999 across all our experiments. We also carry out experiments with different $\beta$ values as suggested, and the results are shown below. The results show that using $\beta$ as 0.9999 achieves the best results. 
Note that simply setting the weight as the inverse of the sample number can also yield good results, as its re-balanced weight is similar to the case when $\beta$ = 0.9999.

|Acc|0.9|0.99|0.999|0.9999|inverse|
|--|--|--|--|--|--|
|Overall|88.95|89.50|89.28|**89.77**|89.50|
|Mean|81.18|81.24|81.75|**82.44**|82.02|

We utilize different values for the two modules as suggested. The first value is for the RAC and the second value is for the RSL. From the results, we observe a trend that with the increase of $\beta$ from 0.9 to 0.9999, both the overall and mean accuracy increase.

|Acc|0.9+0.9999|0.99+0.9999|0.999+0.9999|0.9999+0.9999|0.9999+0.999|0.9999+0.99|0.9999+0.9|
|--|--|--|--|--|--|--|--|
|Overall|88.92|89.41|89.57|**89.77**|89.67|89.63|89.47|
|Mean|81.21|81.68|82.24|**82.44**|82.06|81.93|81.72|

**2.** We carry out experiments with only the attention map of the labeled class instead of all classes to validate our claim of label distribution learning. The results are listed in the following table.

|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|One map|95.02|**90.00**|87.66|85.71|57.50|**80.86**|63.51|88.30|80.04|
|Ours|**96.37**|89.56|**89.33**|**87.84**|**66.89**|**80.86**|**66.22**|**89.77**|**82.44**|

Our method outperforms the use of only one attention map of the label class in almost all cases. The results validate our claim that FER resembles label distribution learning, as utilizing attention map consistency only on the label class fails to learn extra information from other classes for a given sample. **3.** We conduct experiments utilizing GradCAM instead of CAM to implement attention map consistency. The results are shown in the following table. 
|Method|Hap|Neu|Sad|Sur|Dis|Ang|Fea|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|GradCAM|**96.46**|87.65|87.03|84.19|59.38|**82.72**|59.46|88.17|79.56|
|Ours|96.37|**89.56**|**89.33**|**87.84**|**66.89**|80.86|**66.22**|**89.77**|**82.44**|

The performance of utilizing GradCAM is worse than using CAM. As stated in the GradCAM paper, GradCAM contains a ReLU function. We speculate that the ReLU function might discard some useful information and thus degrade the regularization effect of the attention maps. Furthermore, if we want to get the attention maps of all classes using GradCAM, we need to backpropagate the gradient multiple times and concatenate the attention map corresponding to each class, which is not as efficient as using CAM. Based on the better performance and the easier implementation, we choose to use CAM instead of GradCAM in our method. The basic label smoothing setting means the $\beta$ for the label smoothing is 0.9 and the $\beta$ for attention map consistency is 0.9999 based on the above discussion. Thus, the overall accuracy is 89.47% and the mean accuracy is 81.72%, which are lower than our 89.77% and 82.44%; this illustrates the improvement of our RSL over basic label smoothing. **Questions:** Please refer to point 1 in the weaknesses part. **Limitations:** Thanks for your suggestion. As our method is specifically designed for FER, it might not work well on datasets without the label distribution learning characteristic. We have added this to the limitation discussion of our revised manuscript. --- Rebuttal Comment 1.1: Title: Thanks for your detailed response. Comment: I've thoroughly reviewed the authors' rebuttal and am generally satisfied with their responses, both to my review and to the other reviews. They have effectively addressed my concerns regarding the re-balanced weight study. 
The authors provided comprehensive experiments and analyses on the re-balanced weight distribution and the corresponding performance with different hyper-parameters. The comparison with the one-class attention map reveals that there is indeed extra information from attention maps besides the target class. Overall, I recognize several merits of this paper: 1. The idea of extracting additional knowledge regarding minor classes from both major and minor class samples is novel. 2. The experimental results on RAF-DB using Swin-T as the backbone are highly promising. 3. In the rebuttal, the authors presented results on AffectNet and the extremely imbalanced RAF-DB. Considering the accuracy of minor classes, it's clear that the method is well-suited for the imbalanced FER task. To the best of my knowledge, there is no similar method that extracts minor-class information from both the major and minor-class samples to effectively address the imbalanced FER task. The motivation and idea are novel and interesting to me. Given the method's strong performance, achieving state-of-the-art results in the imbalanced FER task, I believe this paper deserves acceptance. Therefore, I recommend accepting this paper. --- Reply to Comment 1.1.1: Title: Thanks for your feedback. Comment: We wholeheartedly thank you for dedicating time to thoroughly read all the reviews and responses. The merits you highlighted align seamlessly with the strengths of our paper. Your recognition of the novelty in our motivation and method is greatly appreciated. We're thrilled to hear that you believe our paper deserves acceptance. Thank you very much for your support.
Summary: In the paper, the authors tackle the issue of imbalanced learning in the facial expression recognition task by introducing a fresh approach. Their proposed method revolves around the concept of attention consistency under spatial transforms, aiming to extract information about multiple classes from each sample. This approach effectively enhances the performance of under-represented classes. The authors evaluate their approach on two datasets and successfully achieve state-of-the-art performance with a transformer backbone. Strengths: (+) They achieve state-of-the-art (SOTA) performance on the RAF-DB dataset with the Swin backbone. (+) Table 4 clearly demonstrates the performance improvement brought by the proposed approach. It highlights the significant increase in performance for both minor classes and the overall score. (+) Despite attention consistency being proposed in previous work, the authors employ a modified version of it in a clever and novel way, incorporating re-balanced smooth labeling. They provide a reasonable hypothesis, stating that information for minor classes can be captured from all training samples. (+) The visualization results presented in both section 4.7 and the appendix provide further evidence that the proposed approach functions effectively. Weaknesses: (-) To further support the idea, I suggest conducting evaluations on multi-label classification using other facial expression recognition (FER) datasets, such as BP4D. If the hypothesis is correct, the proposed approach should also work effectively in a different type of FER task. Testing it in this manner would provide a good opportunity for validation. (-) The two modules proposed in the paper are, in terms of their structure, not entirely novel. However, there is potential for further refinement and innovation in their design. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: - What is the purpose of utilizing Global Average Pooling to extract features? 
Doesn't it result in a loss of information? - Based on the findings presented in Table 4, it appears that the performance improvement for minor classes surpasses that of other classes. Regarding the fairness aspect (although it may not be the primary focus of the paper), what can be said about the trained models? Do you believe that your method also facilitates the development of fairer models? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: Given that it is a facial expression recognition task, privacy could potentially be a concern. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful review. The strengths you highlighted align perfectly with the core contributions of our paper, and they indeed encapsulate what we take great pride in. Your review is undoubtedly one of the most exhilarating ones we have received in over a year. Thank you very much. **Weaknesses:** **1.** Thanks for your valuable suggestion. Though we wanted to carry out experiments on BP4D, we emailed the authors of BP4D, and they told us they could not enter into an agreement for the dataset at this time, so we could not download it. As suggested, we conduct experiments on the multi-label FER task. We utilize the widely used multi-label FER dataset EmotioNet [1, 2], which was published at CVPR and provides 6 basic expressions and 10 compound expressions. There are a total of 2,478 images, and we utilize 80% of them for training and the rest for testing. The results of different methods are summarized below.

|Method|Hap|Hap Dis|Sad Dis|Sad|Hap Sur|Ang|Sur|Ang Sur|Awe|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|SCN|82.24|**71.74**|**80.00**|**66.67**|0.00|35.71|30.77|15.38|**61.54**|
|BBN|77.63|69.57|65.71|62.96|35.29|**64.29**|53.85|7.69|53.85|
|RUL|75.66|62.32|62.86|59.26|35.29|50.00|**69.23**|46.15|53.85|
|EAC|78.29|69.57|68.57|**66.67**|**70.59**|57.14|46.15|46.15|46.15|
|Ours|**86.18**|65.94|68.57|55.56|**70.59**|57.14|46.15|**53.85**|46.15|

|Method|Dis|Ang Dis|App|Fea Ang|Sad Ang|Fea|Fea Sur|Overall Acc|Mean Acc|
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|SCN|8.33|0.00|0.00|**30.00**|0.00|0.00|0.00|59.43|30.15|
|BBN|8.33|18.18|0.00|**30.00**|**30.00**|50.00|0.00|60.45|39.21|
|RUL|50.00|**45.45**|0.00|**30.00**|20.00|30.00|0.00|59.43|43.13|
|EAC|**66.67**|27.27|0.00|**30.00**|20.00|40.00|**50.00**|64.71|48.95|
|Ours|**66.67**|36.36|**10.00**|**30.00**|**30.00**|**60.00**|**50.00**|**66.73**|**52.07**|

We have organized the 16 different expression classes in [1, 2] into two tables based on the number of their training samples, arranged from largest to smallest. There are instances of similar test accuracy scores when the test sample numbers for certain compound expressions are small, which can be attributed to the challenges associated with collecting samples for compound expressions. The results demonstrate that our method consistently achieves the best overall and mean accuracy in the context of the multi-label FER task. Additionally, our method excels in terms of performance within the minor classes in the second table. The multi-label FER task is also aligned with our motivation, as the definition of compound expressions implies that a sample could encompass features from multiple basic expression classes. For instance, a sample in the fearfully angry (Fea Ang) class contains features from both the fear and angry classes. **2.** Thanks for bringing that up. Though the structure of our RAC and RSL modules is not entirely novel, their motivation is. We propose RAC to guide the model to mine extra information related to minor classes from all training samples for the first time. RSL acts as a complementary module for RAC and utilizes the extra information regarding the label distribution of the imbalanced training data to weigh minor classes more heavily. Both modules learn existing extra knowledge to improve the imbalanced learning of FER, which aligns with the title of our paper. **Questions:** **1.** The features used for attention map consistency are extracted before the Global Average Pooling (GAP) layer to prevent information loss, as we employ F (features before GAP) instead of f (features after GAP) in Eq.(2) of our paper. 
The success of our method hinges on preserving the spatial information related to expression features since the attention map consistency also extends across the spatial dimensions. **2.** Yes, we believe our method achieves a fairer FER model. Though in Table 4, the performance improvement for minor classes sometimes surpasses that of other classes, the test accuracy of minor classes is still lower than the major classes. We believe a fair FER model should achieve similar accuracy on different expression classes. Based on the results in Table 4 and the results on AffectNet in the rebuttal for reviewer Ey1e, the gap between the accuracy of major and minor classes is distinctly smaller using our method. Thus, we believe our method achieves a fairer FER model. **Limitations:** Thanks for your suggestion. We have added the limitation discussion in our revised manuscript. As there might be a privacy problem in FER, we could further combine our method with the technology of differential privacy [3] to help preserve privacy during FER. [1] Emotionet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild. [2] Emotionet challenge: Recognition of facial expressions of emotion in the wild. [3] Privacy Preserving Face Recognition Utilizing Differential Privacy. --- Rebuttal Comment 1.1: Title: Response to the rebuttal Comment: I appreciate your comprehensive feedback. As I comprehend your comments, it appears that you utilized the EmotioNet dataset. However, it seems that the outcomes are coming from a 'classification' problem, rather than a multi-label classification task where each instance could be linked to multiple labels concurrently. To illustrate, if the AU units are accessible within that dataset, could you kindly share the results involving a multi-label classification task? I am inclined to support the acceptance of this paper, as the authors have adeptly addressed both my concerns and those of fellow reviewers. 
Through their thorough experiments, as presented in both their response and the manuscript itself, they have convincingly demonstrated their model's advancement to state-of-the-art status. --- Reply to Comment 1.1.1: Title: Thanks for your time and effort. Comment: We extend our heartfelt gratitude for your valuable time and dedication to reviewing our paper. We are truly delighted by your endorsement of our paper's acceptance and your commitment to thoroughly peruse both the reviews from other reviewers and all of our comments. Your professional and insightful review is a significant honor for us, and we deeply appreciate it. In response to your suggestion, we have incorporated the AU recognition results into the tables provided below. In our experiments, we changed the loss to binary cross-entropy loss and added a sigmoid layer after the FC layer. During the test phase, sigmoid outputs larger than 0.5 are classified as 1, while outputs smaller than 0.5 are classified as 0. Following the tradition of AU recognition, we report the F1 score for each action unit and calculate the average F1 score across all classes. As the AU labels are also imbalanced, we have organized the AUs in descending order based on the number of labels in each class.
|Method|AU 25|AU 12|AU 10|AU 4|AU 1|AU 2|AU 15|
|------|:-: |:-: | :-: | :-: | :-: | :-:| :-: |
|Baseline|**0.9799**|0.9935|0.7251|**0.9291**|0.8522|0.7391|0.6885|
|SCN|0.9759|**0.9951**|0.7225|0.9258|0.8364|0.7692|**0.7164**|
|RUL|0.9564|0.9754|0.6633|0.8767|0.7302|0.6549|0.3714|
|EAC|0.9757|0.9903|0.7188|**0.9291**|0.8689|0.8317|0.7013|
|Ours|0.9733|0.9919|**0.7353**|0.9214|**0.8926**|**0.8515**|0.7042|

|Method|AU 20 |AU 26|AU 7|AU 17|AU 5|AU 9|AU 24|Avg.|
|------|:-: |:-: | :-: | :-: | :-: | :-:| :-: | :-:|
|Baseline|0.5306|0.4615|0.2353|0.3636|0.5405|0.4516|0.3810|0.6337|
|SCN|**0.5714**|0.3784|0.2941|0.4118|0.5556|0.3871|0.3158|0.6325|
|RUL|0.2373|0.1333|0.0513|0.2326|0.3448|0.1463|0.0000|0.4553|
|EAC|0.5652|0.4878|0.3684|**0.4444**|**0.5854**|0.4516|0.4167|0.6668|
|Ours|**0.5714**|**0.5581**|**0.4324**|0.3889|0.5500|**0.5294**|**0.4545**|**0.6825**|

From the results, we can observe that our method achieves the best average F1 score compared to other FER methods. Furthermore, our method attains the highest F1 score on AU 9 and AU 24, which have the fewest training labels, consistent with our experimental results in expression recognition. We recognize that RUL is not well-suited for multi-label classification and results in low performance. The reason lies in the fact that RUL compares two samples from different expression classes to learn uncertainty; when a sample possesses multiple labels concurrently, such a comparison becomes less meaningful. EAC and our method perform well since the main component of these two methods is attention map consistency [1], which was originally proposed for multi-label classification. Notably, our method outperforms EAC as we re-weight the attention map and integrate the information of the label distribution, making it more suitable for handling imbalanced tasks.
We have included this experiment along with its corresponding discussion in our revised manuscript, and we firmly believe that this addition will enhance the credibility of our method. We sincerely appreciate your valuable suggestion. If we have addressed your concerns, please kindly let us know. If you have any additional concerns, please do not hesitate to bring them up; we will try our best to address them. Thank you very much for your immense contribution in reviewing our paper! [1] Visual Attention Consistency under Image Transforms for Multi-Label Image Classification. In CVPR 2019.
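A minimal sketch may make the evaluation protocol described in this reply concrete: sigmoid outputs are thresholded at 0.5 to obtain binary per-AU predictions, then an F1 score is computed for each action unit and averaged across classes. The function names and toy values below are our own illustrative choices, not the authors' code.

```python
# Illustrative sketch of the multi-label AU evaluation protocol described
# above: threshold sigmoid outputs at 0.5, compute a binary F1 per action
# unit, then average the per-AU F1 scores. Plain Python, hypothetical names.

def f1_score(y_true, y_pred):
    """Binary F1 for one action unit over all test samples (0.0 if no TP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def evaluate_multilabel(probs, labels, threshold=0.5):
    """probs, labels: per-sample lists with one entry per AU."""
    n_classes = len(labels[0])
    preds = [[1 if p > threshold else 0 for p in row] for row in probs]
    per_class = [
        f1_score([row[c] for row in labels], [row[c] for row in preds])
        for c in range(n_classes)
    ]
    return per_class, sum(per_class) / n_classes
```

With real data, `probs` would hold the sigmoid outputs of the FC layer for each test image and `labels` the ground-truth AU annotations.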
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their time and effort in reviewing our paper, as well as for their insightful feedback! We are encouraged that Reviewers 31BV, Ux1i, and ss3S all find our main idea of mining extra knowledge about minor classes from training samples of both major and minor classes to be novel. Reviewer 31BV thinks that we "cleverly and innovatively" designed our method, and Reviewer Ux1i also believes that our method is specifically designed for FER and the technical innovations are well-founded. Reviewers Ey1e and N9Q5 assert that we are addressing a challenging problem, and that our method is interesting and holds promise in solving imbalanced FER through a re-balanced strategy. Moreover, Reviewers ss3S, Ux1i, and 31BV all concur that our method achieved state-of-the-art performance on the RAF-DB dataset with the Swin backbone, and our results "clearly indicate that the proposed method outperforms competing methods" under the imbalanced learning FER task, particularly in terms of the accuracy of minor classes. Reviewers Ux1i and N9Q5 also note the thoroughness of the experiments in our paper. Reviewers ss3S and Ux1i remark that Table 4 clearly demonstrates the performance improvement brought about by the proposed approach, and that our paper is well-written. Addressing the weaknesses of our paper, Reviewer Ey1e suggested that we add experiments on AffectNet. Following this suggestion, we conducted experiments and found that our method also outperforms other FER methods on this extremely imbalanced dataset. Reviewer 31BV suggested evaluating our method under multi-label FER task. We conducted experiments as suggested and found that our method still performs well. Multi-label FER task aligns with our statement that a sample might contain features from several expression classes. For instance, the class "fearfully angry" combines features from both the fear and angry classes. 
Reviewer Ux1i primarily raised concerns about the hyper-parameter study, prompting us to add the corresponding experiments. Reviewer N9Q5 pointed out some unclear descriptions regarding our proposed modules, so we added more detailed discussions of our RAC and RSL modules. We greatly appreciate the extensive feedback provided by Reviewer ss3S, which has significantly aided in improving the quality of our paper. Additionally, Reviewer ss3S suggested exploring other transformations, leading us to include corresponding experiments as suggested. We address specific questions below and respond to each review with a separate response in detail. We have incorporated all feedback into the revised version of our manuscript and supplementary material. With our responses, we hope that we have addressed the reviewers' concerns. Please let us know if there are any further questions or comments. We are eager to do our utmost to address them. Pdf: /pdf/b400ac5f81a634071f852c32d5810582eadefa7f.pdf
NeurIPS_2023_submissions_huggingface
2023
Summary: The paper presents an approach to tackle the imbalance problem in facial expression recognition. The approach makes use of information from the majority class to help improve performance of the minority classes. The proposed approach is evaluated on two public FER datasets. Different backbone networks are evaluated, along with comparisons to state of the art and ablation studies. Strengths: The paper investigates a challenging problem, and the proposed approach using attention map consistency is interesting. Encouraging results are shown on two public datasets. The ablation study is generally well conducted. Weaknesses: Experimental design is lacking. FER2013 is an old dataset; newer FER datasets should be evaluated (e.g., AffectNet - Mollahosseini, Ali, Behzad Hasani, and Mohammad H. Mahoor. "Affectnet: A database for facial expression, valence, and arousal computing in the wild." IEEE Transactions on Affective Computing 10.1 (2017): 18-31.). Along with being newer, AffectNet needs to be evaluated as the approach targets imbalanced FER datasets. The paper conducts experiments under different imbalance factors; however, AffectNet is one of the largest and most imbalanced datasets available for FER. For example, in AffectNet, Happy is ~46% of the data and contempt is ~1% of the data. How does the proposed approach handle these extreme imbalance issues? The comparison to CB [5] in Table 1 is not clear. This paper is not on FER and does not evaluate RAF-DB. What is this? The paper states the accuracy of each class is not commonly shown in FER papers. This is not accurate. If it is not in a table, this is often shown through confusion matrices (e.g., Farzaneh, Amir Hossein, and Xiaojun Qi. "Facial expression recognition in the wild via deep attentive center loss." Proceedings of the IEEE/CVF winter conference on applications of computer vision. 2021.)
Technical Quality: 2 fair Clarity: 3 good Questions for Authors: How does the proposed approach handle extreme imbalance such as in AffectNet (e.g., Happy vs. Contempt)? What is reference [5] in Table 1? This is not a FER paper that is being compared to. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 3 good Contribution: 3 good Limitations: The limitations of the proposed work are not addressed. How does the approach work on extremely imbalanced data? What kind of societal impact can this have? There are always ethical concerns with FER systems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your insightful review, which has assisted us in enhancing the quality of our paper.

**Weaknesses:**

**1.** Thanks for your valuable suggestion. We have conducted experiments on AffectNet with both 7 and 8 expression classes. Due to time constraints, we use ResNet-18 as the backbone for all methods. As AffectNet has a balanced test set with 500 samples of each class, the mean accuracy equals the overall accuracy.

|Method | Hap | Neu | Sad | Ang | Sur | Fea | Dis | Mean Acc | Overall Acc|
|-------| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| SCN |**95.20**|**82.70**| 44.20 | 56.30 | 35.80 | 38.00 | 20.90 | 53.30 | 53.30 |
| BBN | 87.00 | 57.10 |**66.80**| 58.30 | 54.90 | 71.10 | 30.10 | 60.76 | 60.76 |
| RUL | 90.50 | 62.40 | 64.70 |**69.30**| 60.80 | 49.00 | 34.20 | 61.56 | 61.56 |
| EAC | 91.40 | 64.50 | 65.70 | 66.30 |**61.60**| 60.90 | 45.80 | 65.17 | 65.17 |
| Ours | 86.20 | 59.00 | 64.20 | 66.50 | 57.80 |**64.50**|**61.90**|**65.73**|**65.73**|

|Method | Hap | Neu | Sad | Ang | Sur | Fea | Dis | Con | Mean Acc |Overall Acc|
|-------| :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| SCN |**94.60**|**74.90**| 58.20 | 63.80 | 40.90 | 43.20 | 30.80 | 2.20 | 51.08 | 51.08 |
| BBN | 78.40 | 58.40 | 60.60 |**67.70**| 59.40 | 55.00 | 37.00 | 46.70 | 57.90 | 57.90 |
| RUL | 71.00 | 63.40 | 46.60 | 54.90 | 53.70 | 58.60 | 44.70 | 47.70 | 55.08 | 55.08 |
| EAC | 84.00 | 58.80 |**65.00**| 65.90 |**62.20**| 60.30 | 46.10 | 41.90 | 60.53 | 60.53 |
| Ours | 78.60 | 54.30 | 63.80 | 59.50 | 57.60 |**64.10**|**59.40**|**60.00**|**62.16**|**62.16**|

From the results, we can conclude that our method achieves the best mean and overall accuracy on the test set under both 7 and 8 classes.
Besides, we notice that our method improves over existing methods on the minor classes of fear (Fea), disgust (Dis), and contempt (Con) by remarkable margins, which illustrates that our method is more suitable for the imbalanced learning of the FER task. Our method even achieves 60.00% accuracy on the contempt class under 8 classes. Furthermore, though "Happy is ~46% of the data, and contempt is ~1% of the data," our method clearly decreases the test accuracy gap between happy (78.60%) and contempt (60.00%) compared with other methods under 8 classes, which means our method is fairer and achieves a more balanced test accuracy.

**2.** The re-balanced weight design of our method is inspired by CB [5], which is a widely used method in imbalanced learning for image classification. We carry out experiments using [5] to show that simply weighing the cross-entropy loss cannot bring much improvement in the FER task, which illustrates the superiority of our method using re-balanced attention map consistency to mine extra minor-class information from all FER samples.

**3.** We report the accuracy of each class and arrange the classes according to the sample number of each class instead of plotting confusion matrices, due to space limitations and visualization simplicity. As suggested, we display the main results of our paper using confusion matrices in the uploaded PDF in the above "global response" for further reference.

**Questions:** Please refer to 1. and 2. in the weaknesses part.

**Limitations:** Thanks for your valuable suggestion. We study our method under extremely imbalanced data. For example, on RAF-DB, we keep only 10 training samples each for fear and disgust and keep the other classes the same, leading to an extremely imbalanced train set summarized below.

|Class | Hap | Neu | Sad | Sur | Ang | Dis | Fea |
|------ | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
|Number |4772 | 2524 | 1982 | 1290 | 705 | 10 | 10 |

The test accuracy on the original test set of RAF-DB is shown below.
|Method | Hap | Neu | Sad | Sur | Ang | Dis | Fea | Overall Acc| Mean Acc|
|------ | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: | :-: |
|Baseline | 96.03 |**91.03**| 87.66 | 88.15 | 82.10 | 0.00 | 0.00 | 84.71 | 63.57 |
|EAC | 96.54 | 89.71 |**90.79**|**90.58**| 87.04 | 0.00 | 0.00 | 85.63 | 64.95 |
|Ours |**96.79**| 89.71 | 90.17 | 90.27 |**87.65**|**7.50**|**1.35**|**86.05**|**66.21**|

The results illustrate that our method still outperforms the SOTA method EAC on extremely imbalanced data, mainly in the minor-class accuracy and the mean accuracy. However, the extreme case, with happy having 4772 samples, around 500 times more than disgust and fear (only 10 samples each), also leads to a very low test accuracy of our method for disgust and fear. This should be considered a limitation of our method. As for the ethical concerns, we could further combine our method with the technology of differential privacy [1] to help preserve privacy during recognition. [1] Privacy Preserving Face Recognition Utilizing Differential Privacy. --- Rebuttal Comment 1.1: Comment: Thank you for the response to my concerns. I have changed my rating to weak accept based on these new experiments. --- Reply to Comment 1.1.1: Title: Response to Reviewer Ey1e Comment: Thank you very much for providing a prompt response. Your review has been immensely helpful in improving the quality of our paper. We are truly appreciative of the increased score.
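For reference, the class-balanced weighting of CB [5] that point **2.** of this rebuttal compares against can be sketched with Cui et al.'s effective-number formula w_c ∝ (1 − β)/(1 − β^{n_c}). The β value and the normalization convention below are illustrative assumptions, and the class counts are the imbalanced RAF-DB counts quoted above.

```python
# Hedged sketch of CB-style class-balanced loss weighting (Cui et al.):
# the "effective number" of samples for a class with n samples is
# (1 - beta**n) / (1 - beta), and the loss weight is its inverse.
# beta and the normalization (weights summing to the number of classes)
# are illustrative choices, not the paper's exact configuration.

def class_balanced_weights(counts, beta=0.9999):
    raw = [(1.0 - beta) / (1.0 - beta ** n) for n in counts]
    scale = len(counts) / sum(raw)  # normalize so weights sum to n_classes
    return [w * scale for w in raw]

# Class counts from the extremely imbalanced RAF-DB split above:
# Hap, Neu, Sad, Sur, Ang, Dis, Fea
counts = [4772, 2524, 1982, 1290, 705, 10, 10]
weights = class_balanced_weights(counts)
```

The rare classes (disgust and fear, 10 samples each) receive far larger weights than happy, which is how a weighted cross-entropy emphasizes minor classes.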
CLIP4HOI: Towards Adapting CLIP for Practical Zero-Shot HOI Detection
Accept (poster)
Summary: This paper proposes CLIP4HOI, a two-stage framework for zero-shot HOI detection, where the generalizable knowledge of CLIP is leveraged for interaction classification. To facilitate better transferability and avoid data-sensitive knowledge distillation, CLIP is tuned to a fine-grained HOI classifier. Extensive experiments are conducted on prevalent benchmarks from zero-shot and fully-supervised settings. Strengths: 1. The motivation of this paper is generally reasonable. 2. The proposed method is simple and easy to follow. 3. The experiment results show that the proposed method achieves competitive performance in both fully-supervised and zero-shot settings. Weaknesses: 1. The novelty of this work is marginal. Some key components are reused in existing HOI detection methods, e.g., HOI interactor is UPT[1]; the learnable text prompt in the HOI classifier is [2]. 2. The paper omits some important baselines, such as HOICLIP[3]. Although HOICLIP is a CVPR2023 paper, it was uploaded to Arxiv in Mar. 2023. The authors should provide a detailed discussion and comparison with it. 3. The paper misses some experimental results, such as Scenario One on Fully-supervised V-COCO test sets. [1] Efficient two-stage detection of human-object interactions with a novel unary-pairwise transformer, CVPR 2022. [2] Learning transferable human-object interaction detector with natural language supervision, CVPR 2022. [3] HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models, CVPR 2023. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. My main concerns and questions lie in the weaknesses. The author should discuss them in detail. 2. The authors could include more qualitative examples in the supplementary material to illustrate the success and failure cases of their method. Confidence: 4: You are confident in your assessment, but not absolutely certain. 
It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors addressed the limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the insightful comments and clarify the concerns as follows. *** Q1: The novelty of this work is marginal. Some key components are reused in existing HOI detection methods. A1: This work targets to: 1) alleviate the problem in existing top-performing one-stage zero-shot HOI detectors, which are prone to overfitting the joint positional distribution of seen human-object pairs during training; 2) better leverage the general knowledge of the CLIP model for fine-grained HOI discrimination. To this end, we adopt a two-stage paradigm and introduce three modules: an HO interactor to generate pairwise proposals, an HOI decoder to aggregate related features from CLIP image encoder outputs, and an HOI classifier for fine-grained HOI discrimination. In the implementation of the HO interactor and HOI classifier, we follow some designs of UPT [1] (e.g., the interaction module) and [2] (e.g., prompt learning). We would like to clarify that we clearly credit their contribution and never claim these detailed designs as main innovations of ours. *** Q2: The paper omits some important baselines, such as HOICLIP. Although HOICLIP is a CVPR2023 paper, it was uploaded to Arxiv in Mar. 2023. The authors should provide a detailed discussion and comparison with it. A2: Thanks for pointing out the missed reference. As a common concern with Reviewer baYx, we address this concern in the global response. *** Q3: The paper misses some experimental results, such as Scenario One on fully-supervised V-COCO test sets. A3: For fully-supervised testing on the V-COCO dataset, Scenario 1 needs to predict occluded objects as [0,0,0,0], while Scenario 2 excludes no-object HOI categories during evaluation. This work focuses on the development of a practical zero-shot HOI detector for novel objects, verbs, and object-verb combinations, and thus no special mechanisms are designed to handle occluded objects.
Therefore, Scenario 2 better reflects the real fully-supervised performance of the proposed CLIP4HOI on the V-COCO dataset. For completeness, we test the performance of our CLIP4HOI on Scenario 1, and the result is 59.7mAP. This performance is better than that of UPT [1], but lower than that of GEN-VLKT [3], which is reasonable. *** Q4: The authors could include more qualitative examples in the supplementary material to illustrate the success and failure cases of their method. A4: Thanks for your valuable suggestion. We have included more qualitative examples in the one-page pdf of the global response. *** References: [1] Zhang, Frederic Z., Dylan Campbell, and Stephen Gould. "Efficient two-stage detection of human-object interactions with a novel unary-pairwise transformer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Wang, Suchen, et al. "Learning transferable human-object interaction detector with natural language supervision." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [3] Liao, Yue, et al. "GEN-VLKT: Simplify association and enhance interaction understanding for hoi detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
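The prior-agnostic pairwise proposal generation that the HO interactor builds on (enumerating all feasible human-object pairs, with the subject constrained to the person class) can be sketched as follows; the data layout and function name are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of traversal-based pairwise proposal generation:
# every detected person is paired with every other detection (the subject
# must be human; no heuristic pruning), so no joint positional prior over
# seen human-object layouts is baked into the proposals.

def generate_pairs(detections):
    """detections: list of dicts with 'box' and 'label' keys."""
    pairs = []
    for i, subj in enumerate(detections):
        if subj["label"] != "person":
            continue  # the subject of an HOI must be human
        for j, obj in enumerate(detections):
            if i == j:
                continue  # a detection cannot interact with itself
            pairs.append((subj["box"], obj["box"]))
    return pairs
```

Note that a person may still appear as the object of another person's pair, since human-human interactions are valid HOIs.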
Summary: This paper proposes a two-stage zero-shot HOI detection paradigm which uses information from large-scale vision and language models like CLIP. The proposed approach, called CLIP4HOI, first extracts all feasible human-object pairs and generates pairwise proposals. In the second stage, CLIP4HOI uses the CLIP model to classify the pairwise proposals. The authors demonstrate the utility of the proposed approach through experiments on two datasets. Strengths: Utilizing the information contained in large-scale vision-language models for different tasks is the best way to use such models. The authors have taken inspiration from recent works along these lines and cleverly utilized the CLIP model for HOI proposal classification. Weaknesses: The motivation for the proposed approach and the differences with prior works are not clear. The paper contains several unexplained terms and claims. The authors should clearly address the questions below in a revision. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The authors claim that prior models over-fit to the joint positional distribution of seen human-object pairs. However, I didn't see any convincing evidence to show if this actually happens and, if it happens, whether it is actually a problem. The authors should first show why and how this is a problem. Without a clear demonstration, the motivation behind the proposed approach is not clear. 2. Related to the above, the authors claim that the distribution discrepancy leads to compromised efficacy of the model when encountering novel HOIs. Have the authors actually tested the performance in such cases? It seems figure 1c is trying to do it, but the figure is not clear and there is no clear description of it (what is the x-axis? what is the class?). The authors should take all classes which demonstrate such distribution discrepancy and show if the performance is actually compromised. 3.
It's not clear how the proposal generation is different from prior works. Prior works also generate all human-object pairs as proposals. How is the proposed approach different from such works? 4. In lines 169-170, the authors talk about "feature interaction". What exactly is feature interaction? 5. Similarly, in several places, the authors talk about "valid" human-object pairs. This is not defined anywhere. What are valid human-object pairs? Are the authors removing any pairs using some heuristics? 6. What is the CoopLayer? What is its form? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: The authors have not discussed the limitations of their proposed approach. I would recommend that the authors include such a discussion. They can talk about the need to use large-scale vision-language models which require large amount of resources for training. They can also talk about the need for separate object detectors (DETR) which make the overall architecture clunky - there might be ways of reducing the dependence on a large number of additional models. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the insightful comments and clarify the concerns as follows. *** Q1: The authors claim that prior models over-fit to the joint positional distribution of seen human-object pairs. However, I didn't see any convincing evidence to show if this actually happens and if it happens, is it actually a problem. The authors should first show why and how this is a problem. Without a clear demonstration, the motivation behind the proposed approach is not clear. A1: Thanks for raising this insightful concern. We would like to kindly clarify that we have discussed this problem in the Introduction section and shown the evidence in Figure 1b. Besides, some more examples are also included in Figure 2 of the supplementary material. In the literature, previous top-performing methods generally follow the query-based one-stage paradigm, whose query is expected to be used for simultaneously localizing both the human and the object. This kind of prediction paradigm works well under the i.i.d. assumption. However, in zero-shot HOI detection, the joint positional distribution of HOIs is totally different between the training stage and the test stage, making it an o.o.d. problem. Therefore, such one-stage methods bear the inherent risk of overfitting to the joint positional distribution of human-object pairs for seen HOI categories during training, which results in compromised efficacy of the model when encountering novel HOIs that exhibit significant distribution discrepancy with the seen categories. We have compared our proposed approach with the previous one-stage method in Figure 1c and Table 2. Results show that our CLIP4HOI exhibits better robustness to positional distribution discrepancy and superior generalization capability to unseen HOI categories. *** Q2: Related to the above, the authors claim that the distribution discrepancy leads to compromised efficacy of the model when encountering novel HOIs.
Have the authors actually tested the performance in such cases? It seems Figure 1c is trying to do it, but the figure is not clear and there is no clear description of it (what is the x-axis? what is the class?). The authors should take all classes which demonstrate such distribution discrepancy and show if the performance is actually compromised. A2: The x-axis of Figure 1c is the distribution discrepancy between seen and unseen HOI categories. As stated in the caption of Figure 1, the distribution statistics are the angles (quantized into 90 discrete bins) between the line from the person to the object and the x-axis. The discrepancy is measured with KL divergence. A more detailed explanation can be found in Section 5.2 “Robustness to Distribution Discrepancy” and also in Section A.2 of the supplementary material. We indeed take all classes into consideration. Specifically, we first compute the positional distribution discrepancy between the seen and unseen HOI categories corresponding to each object. Then, according to the degree of distribution discrepancy, we divide the whole test set into seven subsets by object. On each subset, we report the performance improvement of our CLIP4HOI compared to the previous method EoID [1], and thus Figure 1c is obtained. Results show that our CLIP4HOI exhibits better performance when encountering larger positional distribution discrepancies (the mAP gain increases from 2.74 to 9.31 as the distribution discrepancy increases). *** Q3: It's not clear how the proposal generation is different from prior works. Prior works also generate all human-object pairs as proposals. How is the proposed approach different from such works?
A3: In fact, instead of designing a new HOI proposal generation strategy as the main contribution of this paper, we try to use a traversal-based prior-agnostic HOI proposal generation strategy in a two-stage framework to alleviate the detector from overfitting the joint positional distribution of human-object pairs. As stated in Section 4.2, we borrow the design of the lightweight interaction head in UPT [2] to realize the proposal generation process. *** Q4: In lines 169-170, the authors talk about "feature interaction". What exactly is feature interaction? A4: Feature interaction is done with the HO interactor, which consists of a cooperative layer and a competitive layer. The technical details of these two layers are included in the supplementary material. In a nutshell, the cooperative layer operates on the features of individual human and object instances, while the competitive layer operates on the features of human–object pairs. Both of these two layers are based on the self-attention mechanism (with pairwise positional encoding). *** Q5: Similarly, in several places, the authors talk about "valid" human-object pairs. This is not defined anywhere. What are valid human-object pairs? Are the authors removing any pairs using some heuristics? A5: The definition of valid human-object pairs can be found in Eq. (2)~Eq. (4). We do not heuristically remove any pairs. We just make sure that the subject must belong to the “human” class. *** Q6: What is the CoopLayer? What is its form? A6: As stated in Section 4.2 (line 171~172), the adopted HO Interactor borrows the design of the lightweight interaction head in UPT [2], which consists of a cooperative layer (CoopLayer in Eq. (1)) and a competitive layer (CompLayer in Eq. (5)). For completeness, we have introduced them in detail in Section A.1 of the supplementary material. *** References: [1] Wu, Mingrui, et al. "End-to-end zero-shot hoi detection via vision and language knowledge distillation." 
Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 3. 2023. [2] Zhang, Frederic Z., Dylan Campbell, and Stephen Gould. "Efficient two-stage detection of human-object interactions with a novel unary-pairwise transformer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
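For concreteness, the angle-quantization and KL-divergence measurement described in A2 above can be sketched as follows. This is a minimal pure-Python illustration with toy box centers; the function names, the toy data, and the epsilon smoothing are our own assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def pair_angle(person_center, object_center):
    """Angle (degrees, in [0, 360)) of the line from the person to the object."""
    dx = object_center[0] - person_center[0]
    dy = object_center[1] - person_center[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def angle_histogram(pairs, num_bins=90, eps=1e-6):
    """Quantize pair angles into `num_bins` discrete bins; epsilon-smoothed
    and normalized so the KL divergence below is always finite."""
    counts = Counter(int(pair_angle(p, o) / (360.0 / num_bins)) for p, o in pairs)
    total = sum(counts.values())
    return [(counts.get(b, 0) + eps) / (total + eps * num_bins) for b in range(num_bins)]

def kl_divergence(p, q):
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Toy (person_center, object_center) pairs: seen pairs lie roughly to the
# right of the person, unseen pairs roughly above -- a positional shift.
seen   = [((0, 0), (1, 0)), ((0, 0), (1, 0.1)), ((0, 0), (0.9, -0.1))]
unseen = [((0, 0), (0, 1)), ((0, 0), (0.1, 1)), ((0, 0), (-0.1, 0.9))]
d = kl_divergence(angle_histogram(seen), angle_histogram(unseen))  # > 0
```

With per-object discrepancies computed this way, the test set can then be bucketed into subsets by discrepancy magnitude, as described in A2.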
Summary: This paper introduces a new framework to leverage CLIP knowledge for zero-shot HOI detection. The paper proposes an HO interactor for pairwise HOI proposal generation to avoid the overfitting issue associated with the joint localization of humans and objects. Instead of using distillation, this paper directly utilizes CLIP knowledge to overcome the absence of unseen categories during training. Strengths: * The HOI proposal generation strategy (HO interactor) and the usage of CLIP knowledge (directly using CLIP representation instead of knowledge distillation) are reasonable and effective. * The overall performance of HOICLIP surpasses its baseline due to its ability to generate novel HOI proposals and efficiently use CLIP information. * The overall pipeline is clear and easy to understand. Weaknesses: * The HOI proposal method in this paper (HO interactor) requires traversing all feasible HO combinations for pairwise HOI proposal generation, which may lead to increased computational complexity. * The motivation behind the specific design of the HOI Decoder is not clear enough. * A discussion of recent works in CVPR2023 (HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models) is expected. Technical Quality: 3 good Clarity: 3 good Questions for Authors: * Regarding weakness (1), the author should demonstrate the amount of computational complexity introduced by the process of traversing all feasible HO combinations. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: * Although the method in this paper is able to avoid the overfitting issue associated with the joint localization of humans and objects, thus improving the ability to detect novel UC, UV, UO HOI pairs, the zero-shot performance may still be limited by the pretrained DETR detector. 
If the object or human is not detected by the detector, there is no chance that the HOI proposal could be generated by the following modules. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the insightful comments and clarify the concerns as follows. *** Q1: The HOI proposal method in this paper (HO interactor) requires traversing all feasible HO combinations for pairwise HOI proposal generation, which may lead to increased computational complexity. The author should demonstrate the amount of computational complexity introduced by the process of traversing all feasible HO combinations. A1: In our approach, the traversal-based proposal generation is adopted to avoid the problem of over-fitting the joint positional distribution of seen human-object pairs during training. We agree that such a traversal-based strategy may bring non-negligible computational overhead. In practice, we follow [1] to filter out detections with scores lower than 0.2, and sample at least 3 and up to 15 humans and objects each. In extreme cases, up to 435 pairwise HOI proposals may be generated. But in most cases, the number of instances (humans + objects) input to the HO interactor is less than 10. Therefore, the computational overhead of our method remains within an acceptable range. *** Q2: The motivation behind the specific design of the HOI Decoder is not clear enough. A2: In this work, the CLIP model is adapted for fine-grained HOI discrimination. To better exploit the joint visual-linguistic space obtained by CLIP pre-training, the visual features of HOI proposals need to be aligned with those of the HOI descriptions, which leads to the introduction of the HOI decoder. In the HOI decoder, given the pairwise HOI tokens produced by the HO interactor, we first perform cross-attention between the tokens and the patch-level CLIP image features to aggregate corresponding CLIP visual features for each HOI proposal. 
Then, considering that the CLIP image token contains rich global visual cues and is directly aligned with the CLIP text class token during pre-training, we incorporate it in the subsequent self-attention operation for better feature alignment. *** Q3: A discussion of recent works in CVPR2023 (HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models) is expected. A3: Thanks for your valuable suggestion. As this is a concern shared with Reviewer Rad1, we put the answer in the global response letter. *** Q4: The zero-shot performance may still be limited by the pre-trained DETR detector. If the object or human is not detected by the detector, there is no chance that the HOI proposal could be generated by the following modules. A4: Thanks for the insightful concern. We agree this is a common problem among existing two-stage zero-shot HOI detectors. We also discuss this limitation of our method in Section B of the supplementary material. Fortunately, given the two-stage design of our CLIP4HOI, a promising solution is to integrate advanced open-vocabulary object detectors like OV-DETR [2] into our CLIP4HOI, as discussed in the supplementary material. *** References: [1] Zhang, Frederic Z., Dylan Campbell, and Stephen Gould. "Efficient two-stage detection of human-object interactions with a novel unary-pairwise transformer." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Zang, Yuhang, et al. "Open-vocabulary DETR with conditional matching." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022.
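To make the traversal-based pairing discussed in A1 concrete, here is a minimal sketch. It is illustrative only: `generate_pairs`, the dict-based detection format, and the top-k selection are hypothetical stand-ins for the actual UPT-style interaction head. With 15 kept humans and 15 kept objects, each human is paired with the other 29 instances, giving the 15 × 29 = 435 proposals mentioned in A1:

```python
def generate_pairs(detections, score_thresh=0.2, max_keep=15):
    """Prior-agnostic traversal: pair every kept human (the subject must be
    human) with every other kept instance, human or object alike."""
    kept = sorted((d for d in detections if d["score"] >= score_thresh),
                  key=lambda d: d["score"], reverse=True)
    humans = [d for d in kept if d["label"] == "human"][:max_keep]
    objects = [d for d in kept if d["label"] != "human"][:max_keep]
    instances = humans + objects
    return [(h, o) for h in humans for o in instances if o is not h]

# Extreme case from A1: 15 humans + 15 objects above the 0.2 score threshold.
dets = ([{"label": "human", "score": 0.9} for _ in range(15)]
        + [{"label": "cup", "score": 0.8} for _ in range(15)])
proposals = generate_pairs(dets)  # 15 humans x 29 partners = 435 pairs
```

Because all kept pairs are enumerated rather than selected by a learned positional prior, no seen human-object layout is baked into the proposal stage.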
Summary: The paper addresses the problem of zero-shot Human-Object Interaction (HOI) detection, which aims to detect bounding boxes of both seen and unseen interactions. To achieve this, the paper leverages semantic knowledge from pretrained CLIP models to recognize novel object-verb combinations as well as unseen verbs or objects. The main novelty of the work is a two-stage detection framework that processes all human-object pairs via a Human-Object Interactor to avoid overfitting on interactional positions learned from seen HOIs. Moreover, the paper introduces a fine-grained HOI classifier to directly use CLIP for detection without the need for knowledge distillation. The paper experiments on the HOI datasets HICO-DET and V-COCO to show its effectiveness. Strengths: + The proposed idea of using the Human-Object Interactor and prompt learning with the HOI classifier to directly adapt the CLIP model to the task of human-object interaction is sensible and interesting. This not only helps to significantly simplify the detection pipeline but also improves detection performance. + The paper shows strong improvement in performance compared to SOTA on challenging datasets. + The paper is self-contained and easy to follow. Weaknesses: + The proposed method seems to be mostly effective on novel combinations of verbs and objects that have been observed in seen interactions during training, but it becomes less effective for interactions with unseen objects and especially unseen verbs. Thus, it seems that the proposed method is not yet generalizable toward truly novel HOI detection. Can the paper provide justifications for this? + Can the paper explain why removing the Global HOI score achieves the best seen HOI performance in Table 4? Removing the Global HOI score is supposed to reduce seen-class overfitting, which should improve unseen rather than seen class performance. 
+ The reviewer finds it surprising that prompt learning significantly improves unseen HOI detection, given that the prompt is learned with seen classes and is unaware of unseen classes. Can the paper justify how the proposed method can leverage this prompt for unseen classes? Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness section Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: Sufficiently addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the insightful comments and clarify the concerns as follows. *** Q1: Justification for the generalization ability of the proposed method. A1: We would like to kindly emphasize the advanced performance of our method that demonstrates the generalization ability. In Table 1, we report the zero-shot performance of our CLIP4HOI under three settings, i.e., unseen composition (UC), unseen object (UO), and unseen verb (UV). For each setting, we report full mAP (including seen HOI and unseen HOI), seen mAP (only including seen HOI), and unseen mAP (only including unseen HOI). It is unseen mAP that reflects the generalization ability of a method towards unseen HOI categories. We would like to clarify our CLIP4HOI achieves the best performance among the compared methods in terms of unseen mAP under all three settings (18.92 mAP vs. 10.51 mAP under the UO setting, 26.02 mAP vs. 22.71 mAP under the UV setting, and 27.71 mAP vs. 23.01 mAP under the UC setting). *** Q2: Can the paper explain why removing the Global HOI score achieves the best seen HOI performance in Table 4? Removing Global HOI score supposes to reduce seen class overfit, which should have improved unseen instead of seen class performance. A2: The global HOI score is responsible for perceiving the HOIs existing in the entire receptive field, which provides discriminative knowledge from a global perspective. The pairwise HOI score only focuses on the perception of human-object interactions corresponding to a provided proposal, which is easily affected by the quality of the proposal generation process. In practice, we found that the pairwise HOI score may sometimes have a high response to the HOI categories that do not exist on the image. Such values are usually lower than the score of the seen HOI categories. Therefore, it hardly affects the discrimination of seen HOI categories. 
However, for the unseen HOI categories, which typically exhibit lower HOI scores than the seen HOI categories, the impact of this noise is relatively large. The global HOI score helps mitigate this interference. Therefore, we argue that the degradations in discrimination capability for unseen categories are justifiable after removing the global HOI score. *** Q3: The reviewer finds it surprising that prompt learning significantly improves unseen HOI detection, given that the prompt is learned with seen classes and is unaware of unseen classes. Can the paper justify how the proposed method can leverage this prompt for unseen classes? A3: The function of prompt learning is to tune the input of the CLIP text encoder into a sentence pattern suitable for a certain downstream task without finetuning the weights of the whole model, which could cause catastrophic forgetting. The learnable prompt we adopt is not limited to seen HOI categories (it is category-agnostic). Although only seen verbs and objects are available during training, thanks to the general knowledge of the large-scale pre-trained CLIP model, the learned prompt can also be used for unseen HOI categories. Moreover, the plain manual prompt such as "a photo of a person xxx", which is commonly used in image recognition, was found not to be optimal for the HOI task [3]. Therefore, compared to the manual prompt, the learnable prompt used in our approach leads to significant performance improvements. *** References: [1] Zhou, Kaiyang, et al. "Learning to prompt for vision-language models." International Journal of Computer Vision 130.9 (2022): 2337-2348. [2] Zhou, Kaiyang, et al. "Conditional prompt learning for vision-language models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [3] Wang, Suchen, et al. "Learning transferable human-object interaction detector with natural language supervision." 
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. --- Rebuttal Comment 1.1: Comment: Dear authors, Thanks for clarifying my concerns in the rebuttal. As such, I will keep my original score. --- Reply to Comment 1.1.1: Title: Thanks for your support Comment: Thanks for your valuable support!
Rebuttal 1: Rebuttal: The following is a common question raised by Reviewers baYx and Rad1. *** Q1: The paper omits some important baselines, such as HOICLIP. Although HOICLIP is a CVPR2023 paper, it was uploaded to arXiv in Mar. 2023. The authors should provide a detailed discussion and comparison with it. A1: Thanks for pointing out this concurrent work. HOICLIP [1] adopts the one-stage design following GEN-VLKT [2] and proposes query-based knowledge retrieval for efficient knowledge transfer from CLIP to HOI detection tasks. In addition, it exploits zero-shot CLIP knowledge as a training-free enhancement during evaluation. In contrast, our CLIP4HOI leverages the two-stage proposal generation strategy to mitigate the overfitting of the method to the joint positional distribution of human-object pairs during training. As for the similarities, HOICLIP and our CLIP4HOI both retain the image encoder of CLIP to better exploit the general knowledge learned by large-scale pre-training, although the implementation details differ. We compare the performance of our CLIP4HOI with HOICLIP under five zero-shot settings in the following table. Results show that: 1) Under the UC, UO, and UV settings, our CLIP4HOI performs on par with or slightly inferior to HOICLIP in terms of seen mAP. 2) Under all five zero-shot settings, our CLIP4HOI outperforms HOICLIP in terms of unseen mAP. This demonstrates that our proposed CLIP4HOI exhibits a stronger generalization ability for unseen HOI categories than HOICLIP. We will follow your suggestion to incorporate this discussion and comparison into the revised manuscript. 
| Method | Setting | Full | Seen | Unseen |
| :---- | :----: | :----: | :----: | :----: |
| HOICLIP | UC | **32.99** | **34.85** | 25.53 |
| **CLIP4HOI (Ours)** | UC | 32.11 | 33.25 | **27.71** |
| | | | | |
| HOICLIP | UO | **28.53** | **30.99** | 16.20 |
| **CLIP4HOI (Ours)** | UO | 28.44 | 30.34 | **18.92** |
| | | | | |
| HOICLIP | UV | **31.09** | **32.19** | 24.30 |
| **CLIP4HOI (Ours)** | UV | 30.42 | 31.14 | **26.02** |
| | | | | |
| HOICLIP | RF-UC | 32.99 | 32.99 | 25.53 |
| **CLIP4HOI (Ours)** | RF-UC | **34.08** | **35.48** | **28.47** |
| | | | | |
| HOICLIP | NF-UC | 27.75 | 28.10 | 26.39 |
| **CLIP4HOI (Ours)** | NF-UC | **28.90** | **28.26** | **31.44** |

*** References: [1] Ning, Shan, et al. "HOICLIP: Efficient Knowledge Transfer for HOI Detection with Vision-Language Models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [2] Liao, Yue, et al. "GEN-VLKT: Simplify association and enhance interaction understanding for hoi detection." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. Pdf: /pdf/33faf24e855c89f17960185d36798a695dd2566c.pdf
NeurIPS_2023_submissions_huggingface
2023
Truncated Affinity Maximization: One-class Homophily Modeling for Graph Anomaly Detection
Accept (poster)
Summary: This paper proposes an unsupervised algorithm for detecting abnormal nodes in a graph. Based on the one-class homophily assumption and the overwhelming presence of normal nodes in a graph, this paper uses truncated affinity maximization to enforce stronger local affinity for normal nodes than for abnormal ones. To eliminate the training bias brought by the non-homophily edges, Normal Structure-preserved Graph Truncation is proposed to remove non-homophily edges iteratively. The proposed method achieves competitive performance on datasets with synthetic and real anomalies. Strengths: 1. The paper is well-structured and easy to follow. 2. The proposed method TAM has strong motivation. The unsupervised setting is close to the realistic scenario in which normal nodes and anomalies coexist in a graph and their labels are unknown. 3. Extensive experiments show that TAM is a strong baseline for unsupervised anomalous node detection. Weaknesses: 1. I suggest adding some supervised graph anomaly detection methods in Section 2, such as CARE-GNN [1], PC-GNN [2] and Fraudre [3]. Although the supervised setting is far from reality, these algorithms also propose some ideas for eliminating the negative impact of heterophily edges on anomaly detection. 2. Is TAM widely applicable to different GNNs? How does TAM perform on GNN backbones other than GCN? 3. In Table 1, why do baselines and TAM perform so poorly regarding AUPRC? It would be better to draw the ROC curve and precision-recall curve. 4. It is difficult to understand Figure 2(b); what are $c_1$, $c_2$, $c_k$? I suggest reorganizing this illustration. 5. I suggest placing the limitations in a separate paragraph or subsection. Besides the limitations discussed in the Conclusion, are there any other shortcomings of TAM? 
[1] Enhancing graph neural network-based fraud detectors against camouflaged fraudsters [2] Pick and choose: a GNN-based imbalanced learning approach for fraud detection [3] Fraudre: Fraud detection dual-resistant to graph inconsistency and imbalance Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to Weaknesses. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the overall positive rating and constructive comments. We are grateful for the positive comments on our paper's clarity, research motivation, and empirical justification. Please see our responses to your comments one by one below.

> **Weaknesses #1**: To add some supervised graph anomaly detection methods in Sec. 2

Thank you very much for the suggestion. Supervised graph anomaly detection has drawn increasing attention in recent years, with the emergence of multiple supervised methods like CARE-GNN, PC-GNN, Fraudre, BWGNN, and GHRN. These methods propose various effective supervised techniques to deal with heterophily edges in GNN aggregation, so as to improve the performance of anomaly detection. We agree that, despite being supervised methods, they may inspire designs that explore the one-class homophily property and the like for better unsupervised graph anomaly detection. We will add the corresponding references and discuss those supervised methods and possible opportunities along this research line in Sec. 2.

> **Weaknesses #2**: How does TAM perform on GNN backbones other than GCN?

As shown by the results in Tables A1 and A2 below, TAM can also work effectively when using GNN backbones other than GCN. In particular, TAM using GAT achieves some large improvements over GCN on multiple datasets like Amazon and Facebook, while having similar performance on the other datasets. GraphSage and GIN perform inferior to GCN on most of the datasets, since backbones like GraphSage would lose important local information of some nodes due to the small node sampling size during neighborhood aggregation. The inferior performance of GIN might be because our task is different from supervised node classification, which emphasizes learning node representations, whereas our method emphasizes learning the local node affinity. 
In general, TAM can achieve good performance using other GNN backbones, but it can have a large performance drop on some datasets compared to GCN/GAT, so TAM works best using GCN or GAT.

```
Table A1. AUC-ROC for Different GNN Backbones.
```

|**Data**|**GCN**|**GAT**|**GraphSage**|**GIN**|
|:----|:----:|:----:|:----:|:----:|
|BlogCatalog|**0.8248**|0.8145|0.7767|0.7682|
|ACM|0.8878|**0.8891**|0.7313|0.7833|
|Amazon|0.7064|0.7245|**0.7532**|0.7034|
|Facebook|0.9144|**0.9394**|0.7372|0.8216|
|Reddit|**0.6023**|0.5999|0.5629|0.5348|
|YelpChi|0.5643|**0.5787**|0.5243|0.5614|

```
Table A2. AUC-PR for Different GNN Backbones.
```

|**Data**|**GCN**|**GAT**|**GraphSage**|**GIN**|
|:----|:----:|:----:|:----:|:----:|
|BlogCatalog|**0.4182**|0.3952|0.3844|0.3615|
|ACM|0.5124|**0.6701**|0.4940|0.4087|
|Amazon|0.2634|0.3127|**0.4407**|0.2666|
|Facebook|0.2233|**0.3010**|0.1384|0.1578|
|Reddit|**0.0446**|0.0407|0.0409|0.0377|
|YelpChi|0.0778|**0.0778**|0.0665|0.0727|

> **Weaknesses #3**: In Table 1, why do baselines and TAM perform so poorly regarding AUPRC? It would be better to draw the ROC curve and precision-recall curve.

Thank you very much for the question and suggestion. Since 1) anomaly detection datasets are often extremely imbalanced and 2) no labeled data is available for training, it is very difficult for unsupervised detectors to achieve both high precision and recall rates on detecting the rare anomalies, leading to small values of AUC-PR (or AUPRC). As for AUC-ROC, its performance is often overoptimistic because 1) it is affected by performance on both normal and anomaly classes and 2) the normal class is the dominant class whose performance would bias the overall performance. As a result, it is common to achieve pretty good AUC-ROC but poor AUC-PR performance in the unsupervised anomaly detection task (see the results of unsupervised detectors on the far left in Figures D4 & D5 in *Han, S., Hu, X., Huang, H., Jiang, M., & Zhao, Y. (2022). 
ADBench: Anomaly Detection Benchmark* for some evidence), except when the test data contains many anomalies. AUC-PR and AUC-ROC are commonly used as complementary evaluation metrics in graph anomaly detection in the community, because AUC-PR reflects more realistic performance on the anomaly class while AUC-ROC has a good probabilistic interpretation. We have inserted the ROC curve and precision-recall curve of TAM on the six datasets used in Figure 2 in the pdf file in the Author Rebuttal section above. We will add them in the final paper.

> **Weaknesses #4**: Difficult to understand $c_1$, $c_2$, and $c_k$ in Figure 2 (b).

Thank you very much for the question and suggestion. We originally intended to use $c_1$, $c_2$, and $c_k$ to represent the progressive iteration of our graph truncation. We will reorganize this illustration to provide a more straightforward view of the iterative graph truncation.

> **Weaknesses #5**: Are there any other shortcomings of TAM?

Thank you very much for the suggestion and question. We will place the discussion of TAM's limitations in a separate paragraph. In addition to the possible presence of non-homophily cases in some real-world datasets, as also pointed out by Reviewer **NSnj**, another potential limitation is that TAM cannot directly handle isolated nodes in a graph, though those isolated nodes are clearly abnormal if they are rare and the other nodes are connected to at least some nodes. Additionally, like many GNN-based approaches, including graph anomaly detection methods, TAM also requires large memory to run on graphs with a very large node/edge set. Note that these limitations do not affect the detection effectiveness of TAM on a variety of popular real-world graph anomaly detection datasets. 
Most importantly, it is a seminal work on the idea of one-class homophily and local node affinity maximization specifically for graph anomaly detection, which opens up some great opportunities for exploring substantially more effective anomaly detectors from a new perspective.
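The AUC-ROC vs. AUC-PR point made in the reply to Weaknesses #3 can be reproduced with a small self-contained experiment in pure Python with synthetic scores. Here `auc_roc` uses the rank-statistic definition (probability that a random anomaly outscores a random normal point) and `auc_pr` is the standard average precision; this is an illustration of the metric behaviour under extreme class imbalance, not the paper's evaluation code. A detector that ranks 20 anomalies above 90% of 1,000 normal points gets AUC-ROC = 0.9 yet an AUC-PR below 0.1:

```python
def auc_roc(labels, scores):
    """Probability that a random anomaly outscores a random normal point."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_pr(labels, scores):
    """Average precision: mean precision at the rank of each true anomaly."""
    ranked = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank
    return ap / tp

# 20 anomalies scored above 900 of 1,000 normal points, below the other 100.
labels = [0] * 100 + [1] * 20 + [0] * 900
scores = [1.0] * 100 + [0.8] * 20 + [0.5 - 0.0001 * i for i in range(900)]
roc, pr = auc_roc(labels, scores), auc_pr(labels, scores)  # roc = 0.9, pr < 0.1
```

The anomalies here sit below 100 high-scoring normal points, so every correct detection is diluted by false positives at the top of the ranking; AUC-ROC barely notices because the 900 dominated normal points outweigh those 100.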
Summary: This paper studies the problem of graph anomaly detection and the authors proposed a novel method based on an identified property named one-class homophily. It is observed that normal nodes have strong connections with each other while abnormal nodes have weaker connections. Existing GAD methods overlook this property. The authors propose a novel unsupervised anomaly scoring measure, i.e., local node affinity, that considers the similarity of nodes to their neighbors. They introduce Truncated Affinity Maximization (TAM) to learn tailored node representations for this measure. TAM optimizes on truncated graphs to mitigate bias from non-homophily edges. Experimental results on several real-world graphs with both manually injected and real anomalies show that TAM outperforms several competing models, achieving over 10% improvement in AUROC/AUPRC on challenging datasets. Strengths: + This paper focuses on an interesting and important problem. Graph anomaly detection is of value in both research and practice, especially since it is beneficial for various applications. + The proposed method is technically sound. The performance in several data with both manually injected and real-world anomalies is promising. + The experimental results show that the proposed method achieves consistent performance improvements over the baselines, which demonstrates its superiority. Also, a comprehensive ablation study to show the effectiveness of different components. Weaknesses: - There is no theoretical analysis to justify the rationale of the proposed method. - The proposed method has not been tested on large-scale graphs to show its efficiency. - The design of some components may need more investigation empirically (see details below). Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Theoretical analysis. 
Although the proposed method has shown its effectiveness in comprehensive experiments, i.e., comparing with seven SOTA methods on real-world graphs with both manually injected and real anomalies, there is no theoretical analysis to justify the rationale of the proposed method. - Efficiency test. The largest graph used in the experiments contains ~4k nodes and ~175k edges, which is relatively small. To better demonstrate the efficiency of the method, larger graph benchmarks may be needed, e.g., OGB. - Component design and impact. Some components may need more investigation empirically, especially the graph truncation strategy as well as the impact of different K. In detail, (1) in Section 4.2, the raw graph and (random) edge drop have been compared. A straightforward question is: what is the performance of simple similarity-based methods, e.g., removing some less similar edges? (2) Since there are K truncated graphs and each one relies on the previous one, the number of edges will keep decreasing. Is it possible that for larger K, the graph will contain quite a few isolated small subgraphs? If so, is there any negative impact on the performance? Some minor comments: - Figure 4(b) exhibits a variation in performance trends across different graphs. Particularly, on the Reddit and YelpChi datasets, the AUPRC decreases as the value of K increases. Is there an explanation for this contrasting behavior? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: No potential negative societal impact or the authors adequately addressed the limitations. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the overall positive rating and constructive comments. We are grateful for the positive comments on our studied problem, technical contribution and empirical justification. Please see our detailed response below. > **Weaknesses/Questions #1**: Lack of theoretical analysis to justify our method. Thank you very much for the comments. We, for the first time, reveal the one-class homophily property in graph anomaly detection datasets by a number of empirical justifications. We further introduce a novel graph anomaly detection approach that provides a principled framework to leverage this property for the task. Unlike the overwhelming reconstruction-based and self-supervised learning frameworks, our approach offers a fundamentally different perspective on designing effective graph anomaly detection. Thus, although there is a lack of theoretical analysis, our work presents some solid findings and interesting insights into the graph anomaly detection problem. Therefore, while we're still working on an in-depth theoretical analysis of the proposed property and TAM, it would be beneficial to the anomaly detection community by publishing these interesting findings and novel insights. > **Weaknesses/Questions #2**: Results on large-scale graph datasets. Please refer to our reply to **Global Response to Shared Concern #2** in the overall Author Rebuttal section above for detailed discussions on this point. > **Weaknesses/Questions #3**: The design of some components may need more investigation empirically (see details below), e.g., (1) simple similarity-based graph truncation methods, and (2) the impact of larger $K$ and possibly isolated small subgraphs. **Re: (1) Simple similarity-based non-homophily edge removal**. We agree that we may perform a deterministic similarity-based removal of less similar edges to obtain the truncated graph, and then optimize LAMNet using such a truncated graph. 
We present the experimental results of this TAM variant in Tables A1 and A2 below using varying similarity threshold $\theta$. It is clear that this TAM variant significantly underperforms our default TAM on most of the datasets, i.e., BlogCatalog, Amazon, Facebook, and YelpChi. This is mainly because this variant fails to take the local affinity distribution of each node into account, as captured in NSGT. As a result, it could remove not only non-homophily edges but also homophily edges associated with normal nodes whose local affinity is not as strong as that of the other normal nodes, which would run counter to the objective of the optimization in our approach TAM, leading to less effective detection performance. Thus, the simple similarity-based method cannot serve as an effective alternative to NSGT.

```
Table A1. AUC-ROC Results of Using Similarity-based Edge Removal.
```

|**Data**|**BlogCatalog**|**ACM**|**Amazon**|**Facebook**|**Reddit**|**YelpChi**|
|:----|:----:|:----:|:----:|:----:|:----:|:----:|
|$\theta$=0.05|0.6650|0.8668|0.5856|0.6951|0.6007|0.4910|
|$\theta$=0.1|0.6526|0.7986|0.5827|0.7293|0.5945|0.4872|
|$\theta$=0.3|0.6583|0.6911|0.6106|0.7934|0.5758|0.4754|
|TAM|**0.8210**|**0.8878**|**0.7064**|**0.9144**|**0.6028**|**0.5674**|

```
Table A2. AUC-PR Results of Using Similarity-based Edge Removal.
```

|**Data**|**BlogCatalog**|**ACM**|**Amazon**|**Facebook**|**Reddit**|**YelpChi**|
|:----|:----:|:----:|:----:|:----:|:----:|:----:|
|$\theta$=0.05|0.1621|0.5109|0.0924|0.0410|**0.0467**|0.0598|
|$\theta$=0.1|0.1829|0.5068|0.1092|0.1154|0.0414|0.0509|
|$\theta$=0.3|0.1729|0.4996|0.2079|0.1374|0.0420|0.0519|
|TAM|**0.4152**|**0.5124**|**0.2634**|**0.2233**|0.0446|**0.0771**|

**Re: (2) the impact of increasing truncation steps $K$?** A short answer is: No. Specifically, in the early iterations of graph truncation, the number of edges will gradually decrease. 
However, since our truncation is based on the average similarity of the entire graph, all edges stabilize near the average similarity after some truncation iterations, and the changes in the graph also gradually stabilize. Since we do not have access to class labels, it is difficult to evaluate whether the truncation is too aggressive. Given that the number of abnormal nodes per graph is assumed to be small, and so is the number of non-homophily edges, a small $K$ ($K=5$) is used by default in our experiments. **Re: (2) any isolated small subgraphs produced by NSGT and their potential impact?** Yes, it is possible. If we perform multiple graph truncations, there can be some isolated small subgraphs, but this does not affect our subsequent affinity maximization. This is because NSGT is proposed to mitigate the bias toward non-homophily edges in the GNN neighborhood aggregation, so it only affects the neighborhood aggregation in GNNs; the calculation of the local node affinity-based anomaly score is still based on the original graph structure. Further, the small subgraphs resulting from NSGT indicate that many of their nodes have small affinity to the remaining nodes, i.e., the nodes in these subgraphs have weak affinity. As a result, the local node affinity-based anomaly scores for these subgraph nodes would be large; given their weak affinity, their anomaly scores would be large even if the subgraphs were not isolated after NSGT. > **Minor Comments**: a variation in the performance on Reddit and YelpChi in Figure 4(b). Thank you very much for the question. Being a probabilistic method, the graph truncation runs the risk of removing some homophily edges with relatively large distances. This risk can increase with increasing $K$ values, as shown in Figure 5, Appendix C.3.
We have considered this situation and addressed this potential issue by (1) adopting an ensemble-based scoring method that aggregates the anomaly scores across different $K$ values to stabilize the detection performance and (2) using a relatively small $K$ value. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal, especially the added results. I read the authors' responses as well as the other reviews and responses. Most of my concerns have been addressed. I would like to increase my rating. --- Reply to Comment 1.1.1: Title: Thanks for the increased rating Comment: It's great that our rebuttal has addressed most of your concerns. Thank you very much for the increased rating and your support of our work!
Summary: In this paper, the authors introduce a novel unsupervised anomaly scoring measure (local node affinity) for GAD, and further propose Truncated Affinity Maximization (TAM) for GAD. TAM learns tailored node representations for the proposed anomaly measure by maximizing the local affinity of nodes to their neighbors, and is optimized on truncated graphs where non-homophily edges are removed iteratively to mitigate this bias. The authors also conduct a series of experiments to evaluate the anomaly detection performance of TAM. Strengths: 1. This paper is well-motivated and easy to follow. 2. The idea of exploring the one-class homophily characteristic for designing a GAD method is interesting. 3. The experimental section provides comprehensive evaluations of the proposed method from different perspectives, and demonstrates the effectiveness of the proposed method. Weaknesses: 1. In the related work section, the authors should discuss the connections and differences between the proposed method and other existing works, as well as the necessity for this work. 2. As the authors claimed, one-class homophily does not hold in all cases. Therefore, algorithms designed on this basis would be relatively limited. 3. The authors propose to optimize TAM on truncated graphs to avoid the negative impact of non-homophily edges. How do the authors guarantee that iteratively removing non-homophily edges yields ideal truncated graphs? Besides, is there some criterion to judge whether the obtained truncated graph is ideal or not? Technical Quality: 3 good Clarity: 3 good Questions for Authors: See weakness. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the overall positive rating and constructive comments. We are grateful for the positive comments on our research motivation, readability, technical design and empirical justification. Please see our response to your comments one-by-one below. > **Weaknesses #1**: In the related work section, the authors should discuss the connections and differences between the proposed methods and other existing works, as well as the necessity for this work. Thank you very much for the comment and suggestion. We will add discussions on the connections and differences w.r.t. existing studies in the Related Work section in the final version of the paper. Specifically, current unsupervised graph anomaly detection methods can be generally grouped into reconstruction-based and self-supervised-based approaches. Being GNN-based approaches, both the reconstruction-based approaches and our approach TAM rely on a node's neighborhood information to calculate the anomaly scores, but we explicitly define an anomaly measure from a new perspective, i.e., local node affinity. This results in fundamentally different anomaly detection optimization objectives. Some of these points have been elaborated in detail in Section 3.2. We will include this discussion in the Related Work section to summarize the connections and the differences. Similar to the self-supervised approaches, the optimization of TAM also relies on an unsupervised objective. However, the self-supervised approaches require the use of pre-text tasks, such as surrogate contrastive learning or classification tasks, to learn the feature representations for anomaly detection. By contrast, the optimization of TAM is directly driven by a new, plausible anomaly measure, which enables end-to-end optimization of an explicitly defined anomaly scoring measure.
Overall, TAM offers a very different perspective on devising graph anomaly detection methods, as it is tasked to learn tailored node representations for an explicitly defined graph anomaly measure; whereas both reconstruction-based and self-supervised-based approaches are focused more on learning latent node feature representations for a proxy task. Thus, being driven by a prevalent one-class homophily property, TAM and its future variants are expected to align better with the graph anomaly detection problem. > **Weaknesses #2**: As the authors claimed, one-class homophily does not hold in all cases. Therefore, algorithms designed on this basis would be relatively limited. Thank you very much for the comment. Please refer to our reply to **Global Response to Shared Concern #1** in the overall Author Rebuttal section above for a detailed discussion on this point. > **Weaknesses #3**: The authors propose to optimize TAM on truncated graphs to avoid the negative impact of non-homophily edges. How do the authors guarantee to obtain ideal truncated graphs with the iterative removal of non-homophily edges? Besides, is there some criterion to judge whether the obtained truncated graph is ideal or not? Thank you very much for the comment and the question. Since this is unsupervised anomaly detection, no ground truth is given to evaluate/prove whether a resulting truncated graph is better than another one. Instead, we introduce a plausible probabilistic method that iteratively eliminates non-homophily edges with a high probability, while preserving the genuine homophily edges, as elaborated in the NSGT subsection of the paper. In addition, we utilize an ensemble-based anomaly scoring method that aggregates the anomaly scores obtained from a set of multiple graph truncation scales, which enables TAM to 1) better utilize the dominant homophily edges for anomaly detection and 2) reduce the risk of being affected by non-homophily edges.
As shown by extensive empirical results and the new results presented in the other replies to reviewers, TAM and its components show promising detection performance on a wide range of datasets, justifying their effectiveness in real-world graph anomaly detection applications. --- Rebuttal Comment 1.1: Title: Rebuttal feedback Comment: I have read the authors' rebuttal carefully, and their response addresses my concerns, e.g., one-class homogeneity and obtaining ideal truncated graphs. Based on that, I would like to increase my rating. --- Reply to Comment 1.1.1: Title: Discussion Comment: We're very pleased to hear that our rebuttal helps address your concerns. Thanks a lot for the positive comments and the increased rating!
Summary: Graph anomaly detection aims to identify abnormal nodes in a given graph. The manuscript argues that the existing graph anomaly detection datasets have one-class homophily where the homophily of normal nodes is much stronger than abnormal nodes. To utilize the one-class homophily phenomenon in graph anomaly detection, a new similarity-based anomaly scoring measure, named local node affinity, is proposed. The proposed method, TAM, learns the parameters of multiple graph neural networks called LAMNets by maximizing the sum of local node affinity on the original graph. The iterative graph truncation technique (NSGT) removes the links between normal and abnormal nodes in each step, and the resulting graphs are fed into LAMNets. During the message passing, LAMNet uses the truncated adjacency matrix. The local node affinity-based anomaly score is measured on the original graph using node representations generated by LAMNets. Experimental results show that TAM achieves high AUROC and AUPRC values compared to seven graph anomaly detection methods on six benchmark datasets. Strengths: 1. NSGT introduces randomness by removing the non-homophily edges in a probabilistic approach. To utilize such randomness, TAM uses an ensemble strategy by performing NSGT multiple times and feeding the resulting graphs to multiple LAMNets. Table 2 in Appendix C shows that the ensemble strategy improves the performance of TAM. 2. The manuscript provides empirical justifications for the major claims. For instance, the homophily distribution in Figure 1(a) supports the one-class homophily phenomenon. Figure 3(a) and Figure 3(b) show that the Euclidean distance between the nodes connected by a non-homophily edge tends to be greater than that between the nodes connected by a homophily edge. 3. Ablation studies with various variants of TAM and hyperparameter sensitivity analyses show that each component of TAM is effective. Weaknesses: 1. 
It is not guaranteed that the one-class homophily property holds for all graph anomaly detection datasets. For instance, both normal and abnormal nodes in the YelpChi-RUR dataset are homophilic, as shown in Figure 1 (d) in Appendix A. 2. The performance of NSGT might depend on the quality of the node attributes. NSGT performs the graph truncation by considering the node attributes of the original graph. As mentioned in lines 170-171, if the original node features contain many irrelevant attributes, NSGT might not work well. Furthermore, features of abnormal nodes can be intentionally camouflaged [1, 2]. [1] Liu et al., Alleviating the Inconsistency Problem of Applying Graph Neural Network to Fraud Detection, SIGIR 2020. [2] Dou et al., Enhancing Graph Neural Network-based Fraud Detectors against Camouflaged Fraudsters, CIKM 2020. - One suggestion is to utilize the node representation vectors calculated by LAMNets. In addition, an attribute selection strategy can be introduced to remove irrelevant or redundant node attributes. 3. While this manuscript assumes an unsupervised learning setting, one of my concerns is that most state-of-the-art methods assume a semi-supervised learning setting. Indeed, in many real practices, some supervision can be provided for graph anomaly detection, e.g., labeling undesirable nodes or patterns. Given this, the benefit of the proposed method can be limited. 4. Existing studies [3] convert YelpChi by merging different relations into a single relation. Why are YelpChi-RUR and Amazon-UPU used instead of the full datasets? Instead of selecting one particular relation, the authors can utilize all relations and treat them as a single relation. [3] Chen et al., GCCAD: Graph Contrastive Learning for Anomaly Detection, TKDE 2022. 5. The statistics of Amazon and YelpChi differ from the actual ones. For Amazon, the number of nodes is 10,224, and the number of anomalies is 693 (6.78%).
For YelpChi, the number of nodes is 23,831, and the number of anomalies is 1,217 (5.11%). 6. Minor Comments: - Lines 206-207 are confusing. According to lines 206-207, the truncated adjacency matrix instead of the original adjacency matrix is used to optimize LAMNets. However, the local affinity is calculated based on the original adjacency matrix during optimization, as stated in lines 261-262. - In lines 215-221, it is not specified how to handle the case where $d_{i, max}$ is less than $d_{mean}$. - In lines 116-117, "such degree" should be modified to "such as degree". Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Why does NSGT utilize raw attributes instead of node representations learned from LAMNet? 2. Is there any experimental result on large-scale graph anomaly detection datasets? 3. Isolated nodes can appear after performing NSGT. How does TAM-T compute the anomaly scores of such isolated nodes? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. As the TAM model works solely on the assumption of one-class homophily, it might not perform effectively on the graphs where abnormal nodes are homophilic, e.g., YelpChi-RUR. 2. Since NSGT directly utilizes the raw node features to truncate the graph, the quality of the graph truncation might be affected by the quality of the raw features. However, there can be many irrelevant or redundant attributes in the original features. 3. Due to the definition of local node affinity measure, TAM cannot handle isolated nodes in the graph even though the node features are available. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the constructive comments and questions. We are grateful for the positive comments on our design, empirical justification and ablation study. Please see our detailed one-by-one responses below. > **Weaknesses/Limitations #1**: Does one-class homophily always hold? Please refer to our reply to **Global Response to Shared Concern #1** in the overall Author Rebuttal section above for this concern. > **Weaknesses/Limitations #2, Questions #1**: The dependence of the performance of NSGT/TAM on the quality of raw node attributes. In TAM, NSGT performs truncation on the graph sequentially based on node attribute similarities to reduce the negative impact of non-homophily edges on message passing. So, it is true that the performance of NSGT (and subsequently TAM) relies on the quality of node attributes, but we show below that performing NSGT directly on the original attributes is a simple, easy-to-use, yet effective way. The two alternative ways you suggested for NSGT do not show clear advantages. We discuss this in detail below. **Using Attribute Selection Before NSGT**. The table below shows the AUC-ROC results of TAM on datasets resulting from the popular unsupervised Laplacian score-based attribute selection (X% means the X% top-ranked attributes are selected).

|**Data**|**100%**|**80%**|**60%**|**40%**|**20%**|
|:----|:----:|:----:|:----:|:----:|:----:|
|Amazon|0.7064|0.7329|0.7136|0.6924|0.6893|
|Facebook|0.9144|0.9151|0.8739|0.8802|0.8177|
|Reddit|0.6023|0.5845|0.5789|0.5778|0.5664|
|YelpChi|0.5643|0.5695|0.5749|0.5793|0.5354|

The results show that a careful unsupervised attribute selection can improve the performance of TAM, such as retaining the 80% top-ranked attributes in Amazon, Facebook, and YelpChi, but the improvement is often relatively marginal.
Further, if there is any issue in the attribute selection process (e.g., an undesired selection threshold is used), the relevant attributes may also be removed together with the irrelevant ones, leading to degraded performance of TAM, e.g., the case of Reddit. Note that the node attributes in BlogCatalog and ACM are node embedding features, which are mostly all relevant in these two cases. So, we did not include these two datasets in the attribute selection above. **Using Learned Node Representations from LAMNet**. As suggested, we can perform NSGT on the node representations learned by LAMNet, meaning we first run the full TAM on the original graph and then use the newly learned node representations from LAMNet as input to re-run TAM. The AUC-ROC results in the table below show that this strategy is not as effective as our original TAM, which performs NSGT based on raw attributes.

|**Data**|**RAW Attributes**|**LAMNet-based Representations**|
|:----|:----:|:----:|
|BlogCatalog|**0.8248**|0.7024|
|ACM|**0.8878**|0.8670|
|Amazon|**0.7064**|0.6766|
|Facebook|0.9144|**0.9164**|
|Reddit|**0.6023**|0.5921|
|YelpChi|**0.5643**|0.5428|

**Handling Camouflage Features**. We evaluate the performance of TAM when there are camouflaged features in the raw attributes. Particularly, we replace 10%/20%/30% of randomly sampled original features with camouflaged features, in which the feature values of the abnormal nodes are replaced (camouflaged) with the mean feature values of the normal nodes. The AUC-ROC results are shown in the table below, which demonstrate that TAM is robust to a high level of camouflaged features and maintains good superiority over the SOTA models that work on the original features.
|**Data**|**0%**|**10%**|**20%**|**30%**|
|:----|:----:|:----:|:----:|:----:|
|BlogCatalog|0.8218|0.8045|0.8022|0.7831|
|ACM|0.8878|0.8727|0.8688|0.8652|
|Amazon|0.7064|0.7036|0.6954|0.6838|
|Facebook|0.9144|0.8870|0.8804|0.8650|
|Reddit|0.6023|0.5998|0.5915|0.5876|
|YelpChi|0.5649|0.5447|0.5271|0.5301|

We will add the above results and discussions to our final paper to provide these important insights for readers. >**Weaknesses #3**: Unsupervised vs. supervised anomaly detectors? These are methods from two different paradigms, each having a different set of pros and cons. >**Weaknesses #4/Questions #2**: A larger scale of Amazon and YelpChi, and other large datasets? Please refer to **Global Response to Shared Concern #2** in the overall Author Rebuttal section above. >**Weaknesses #5, Questions/Limitations #3**: (1) Difference of statistics in YelpChi-RUR and Amazon-UPU? (2) Handling isolated nodes? (1) We use exactly the same data sources, but we remove the originally isolated nodes in both datasets during data preprocessing, which is the cause of the difference. (2) There can be two types of isolated nodes. One type is the nodes that are isolated in the original graph. Since these nodes do not have any structure information, they have no impact on the graph structure learning, so one commonly used practice is to not consider these nodes in training and evaluation. We also follow this practice. Using only the node attributes to perform anomaly detection would reduce to general tabular anomaly detection. The other type is the isolated nodes that appear after truncating the graph. TAM can handle this type of isolated nodes well. This is because the LAMNet is trained on the truncated graph structure, while the local affinity-based anomaly scoring is done using the original graph structure. Thus, TAM can effectively calculate the anomaly scores for this type of isolated nodes.
Normally these nodes would have a small local affinity and a large anomaly score, since all their edges are cut off during our graph truncation. TAM-T uses a different anomaly scoring from TAM, calculating the affinity based on the truncated graph, so TAM-T takes a simplified approach to deal with isolated nodes: it directly treats the isolated nodes emerging during truncation as anomalies. **Note that due to space limitations, we present the AUC-ROC results only. Similar findings can be observed in AUC-PR.**
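To make the scoring scheme discussed in this thread concrete, here is a minimal sketch of affinity-based anomaly scoring on the original adjacency (our own simplified illustration with hypothetical names, assuming cosine similarity of learned node representations):

```python
import numpy as np

def local_affinity_scores(adj, Z):
    """Anomaly score of each node as one minus its average
    representation similarity to its neighbors, computed on the
    ORIGINAL adjacency -- so nodes whose edges were all removed
    by truncation still receive a score."""
    n = len(adj)
    # Row-normalize representations so the dot product is cosine similarity.
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    sims = Zn @ Zn.T
    scores = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        # A node with no neighbors at all gets the maximum score.
        scores[i] = 1.0 - sims[i, nbrs].mean() if len(nbrs) else 1.0
    return scores

# Toy triangle graph where node 2 is dissimilar from its neighbors.
adj = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
Z = np.array([[1.0, 0.0], [0.95, 0.05], [0.0, 1.0]])
scores = local_affinity_scores(adj, Z)  # node 2 receives the largest score
```

In this toy example the node whose representation diverges from its neighbors gets the highest anomaly score, mirroring the weak-affinity argument above.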
Rebuttal 1: Rebuttal: Dear All Reviewers, Thank you very much for the time and effort spent reviewing our paper, and for the constructive and positive comments. Our rebuttal consists of two parts: **Global Response**, where we address shared concerns from two or more reviewers, and **Individual Response**, where we provide a detailed one-to-one response to address your questions/concerns individually. > **Global Response to Shared Concern #1**: The one-class homophily property may not hold for all graph anomaly detection datasets, such as YelpChi-RUR. As shown in the paper, the one-class homophily property generally holds in popular real-world graph anomaly detection datasets, including datasets with either injected or genuine anomalous nodes. Further, this homophily property also holds for the four large-scale datasets we newly add, T-Finance, Amazon-all, YelpChi-all and OGB-Protein (see Figure 1(a-d) in the uploaded pdf file for the visualization). We agree that the one-class homophily may not always hold, or may be weak in some datasets. YelpChi-RUR is an example of the latter case. As shown in Figure 1 (d) in Appendix A, the homophily distribution of normal and abnormal nodes is similar in YelpChi-RUR, whose pattern is different from the large distribution gap in the other three datasets. Nevertheless, despite having a similar distribution, it is clear that the homophily of normal nodes is still much stronger than that of abnormal nodes on YelpChi-RUR, as shown in the figure. As a result, when applying NSGT for graph truncation, we can still successfully remove the non-homophily edges that connect normal and abnormal nodes with a higher probability than the homophily edges, resulting in a truncated graph with stronger one-class homophily (see Figure 1(e)(f) in the uploaded pdf file for a comparison of homophily before and after the truncation).
This enables better detection performance of our method (an AUC of 0.5643) on YelpChi-RUR compared to SOTA models (e.g., an AUC of 0.4956 for the best competing method). It should be noted that the widely-used general homophily property may also fail to hold in some real-world datasets, such as those for node classification [Ref1-Ref3], where many connected nodes are from different classes. This situation can also apply to our proposed one-class homophily property. However, as the first work on one-class homophily, we focus on the justification of this property and a novel approach to effectively utilize it for popular graph anomaly detection datasets, including those that have a weaker one-class homophily property and datasets of diverse characteristics. This offers a new perspective for designing graph anomaly detection methods, promoting the development of more effective graph anomaly detection algorithms that do not use the overwhelming reconstruction/self-supervised frameworks. Exploring the possible presence of the heterophily phenomenon in graph anomaly detection datasets and how to effectively avoid such issues in the detection algorithms (including our method TAM) would be important, interesting follow-up problems to be addressed. **References** - [Ref1] Graph neural networks with heterophily. AAAI. - [Ref2] Finding global homophily in graph neural networks when meeting heterophily. PMLR. - [Ref3] Graph neural networks for graphs with heterophily: A survey. arXiv preprint arXiv:2202.07082. > **Global Response to Shared Concern #2**: Any empirical support from results on large-scale datasets? Inspired by your comments, we add more experiments on two datasets with a large set of edges, Amazon-all and YelpChi-all, by treating the different relations as a single relation. In addition, we add another two large datasets, T-Finance and OGB-Proteins, with a large set of edges and/or nodes. Their key statistics are given in Table A1 below.
```
Table A1.
Key Statistics of New Datasets.
```
|**Data**|**Nodes**|**Edges**|**Features**|**Anomaly**|
|:----|:----:|:----:|:----:|:----:|
|Amazon-all|11,944|4,398,392|25|9.5%|
|YelpChi-all|45,954|3,846,979|32|14.5%|
|T-Finance|39,357|21,222,543|10|4.6%|
|OGB-Protein|132,534|39,561,252|8|4.5%|

The results in the two tables below show that TAM also obtains large detection improvements over the four best-performing competing methods across the four datasets (performing comparably well to DOMINANT in AUC-PR on OGB-Protein).
```
Table A2. AUC-ROC Results.
```
|**Data**|**Amazon-all**|**YelpChi-all**|**T-Finance**|**OGB-Proteins**|
|:----|:----:|:----:|:----:|:----:|
|DOMINANT|0.6937|0.5390|0.5380|0.7267|
|ComGA|0.7154|0.5352|0.5542|0.7134|
|CoLA|0.2614|0.4801|0.4829|0.7142|
|SL-GAD|0.2728|0.5551|0.4648|0.7371|
|TAM|**0.8476**|**0.5818**|**0.6175**|**0.7449**|

```
Table A3. AUC-PR Results.
```
|**Data**|**Amazon-all**|**YelpChi-all**|**T-Finance**|**OGB-Proteins**|
|:----|:----:|:----:|:----:|:----:|
|DOMINANT|0.1015|0.1638|0.0474|**0.2217**|
|ComGA|0.1854|0.1658|0.0481|0.1554|
|CoLA|0.0516|0.1361|0.0410|0.1349|
|SL-GAD|0.0444|0.1711|0.0386|0.1771|
|TAM|**0.4346**|**0.1886**|**0.0547**|0.2173|

Performing experiments with TAM and its competing methods on datasets with an even larger number of nodes, such as DGraph, which has millions of nodes, requires a GPU server with an extremely large memory. We currently lack the computing resources to do so. Note that many existing graph anomaly detection methods, as well as GNN-based methods for other tasks, have similar issues w.r.t. scaling up to such large datasets. We will try to add results on DGraph in the final paper by using Amazon cloud computing services for the experiments. As for the **Individual Response**, we have provided a detailed one-by-one response to answer/address your questions/concerns after your individual review. We very much hope our response has clarified the confusions and addressed the concerns.
We are more than happy to answer any further questions. Please kindly advise, and have a great week ahead! Best regards, Authors of Paper 9585 Pdf: /pdf/b9a7ee11a4083b396596ad687b911a27af324ed3.pdf
NeurIPS_2023_submissions_huggingface
2023
Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits
Accept (poster)
Summary: This paper addresses Off-Policy Evaluation (OPE) in contextual bandits, a significant problem in fields such as healthcare and personalized recommendation systems. OPE involves evaluating the performance of new policies using only existing data generated by a current policy, which can pose a challenge due to high variance in estimators, particularly in situations with large action or context spaces. The study proposes a new OPE estimator called the Marginal Ratio (MR) estimator, which mitigates this issue by considering the shift in the marginal distribution of the outcome resulting from the policy shift, rather than the policy shift itself. This approach makes the MR estimator more robust to increasing sizes of action and context spaces than existing methods like Inverse Probability Weighting (IPW) or Doubly Robust (DR). Strengths: - The paper tackles the practically relevant problem of off-policy evaluation - The paper is well-written and it was easy to follow the main arguments. The intuition statements right after each theorem were also useful - The paper proposes a new estimator called the MR estimator, which defines the weight in terms of the shift in the conditional reward distribution between the logging and target policies and is expected to reduce the variance compared to IPW, DR, and MIPS. In an ideal case, MR can also be unbiased. - Advantages of the proposed method over a range of conventional methods such as IPW, DR, and MIPS are demonstrated in a comprehensive way Weaknesses: - Eq.(3) may be unstable in cases where the target and behavior policies greatly differ, and there are many actions where the variance of $\rho(x,a)$ becomes very large. - It is intuitive that MR reduces the variance from IPS and MIPS, but it needs to estimate the weight in terms of the marginal reward distributions, which are always unknown. If we can estimate the reward distributions well, then we could simply rely on DM. 
Therefore, it would be necessary to formalize the advantages of MR over DM in the case of an unknown $w(y)$. - The experiment design could be improved. As I understand it, in the experiments, $\rho(x,a)$, $p(e|x,a)$, and $w(y)$ are all estimated from the logged data. While it is true that $w(y)$ is always unknown, in many industry applications where we can control $\pi_b$, the $\rho(x,a)$ of IPW/DR and the $p(e|x,a)$ of MIPS are known. Therefore, it would be necessary to compare MR with estimated $\rho(x,a)$ against IPS, DR, and MIPS with their respective true weights. - This is not critical, but there are a few concurrent works that aim to further improve MIPS, so it would be great to discuss these papers in a revised version: - Jie Peng, Hao Zou, Jiashuo Liu, Shaoming Li, Yibao Jiang, Jian Pei, Peng Cui. Offline Policy Evaluation in Large Action Spaces via Outcome-Oriented Action Grouping. WWW2023. https://dl.acm.org/doi/abs/10.1145/3543507.3583448 - Yuta Saito, Qingyang Ren, Thorsten Joachims. Off-Policy Evaluation for Large Action Spaces via Conjunct Effect Modeling. https://arxiv.org/abs/2305.08062 Technical Quality: 3 good Clarity: 3 good Questions for Authors: - Could you provide some additional experiment results comparing MR with an estimated weight $\hat{w}(y)$ and IPS, DR, and MIPS with the true respective weights? - How does MR perform when $\rho(x,a)$ takes some extreme values, making solving Eq.(3) hard? Specifically, I would love to see experiment results, for example with $n = 5,000$, $n_a = 5000$, $\alpha = 1.0$, which should produce the situation I am interested in. Figure 8 in the appendix is relevant, but $n$ is too small to observe large empirical $\rho(x,a)$. - What would be the theoretical comparison between DM and MR? Why does MR perform much better than DM, even though both perform OPE by estimating some aspect of the reward distribution? - Can we extend MR to a doubly-robust version?
How would it look, and how would its variance compare to that of DR with $\rho(x,a)$? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: The limitations of the work are stated in the final section and they look reasonable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their useful comments and for acknowledging the practical relevance of our proposed method. We clarify some of the misunderstandings below. > Eq.(3) may be unstable in cases where the target and behavior policies greatly differ, and there are many actions where the variance of $\rho(a, x)$ becomes very large. We agree with the reviewer that the variance of $\rho(A, X)$ might increase for large policy shifts. However, since Eq. (3) is a simple scalar-to-scalar regression, it is easy to optimise in practice. Our experimental results, which include settings where the variance of $\rho$ is large, show that even with 1000 to 2000 training datapoints we are able to estimate the weights $\hat{w}$ well; see, for example, Figures 10 and 20 in the Appendix. Additionally, we emphasise that while an increase in the variance of $\rho(a, x)$ could lead to an increase in the variance of MR, it can also lead to a **comparatively larger** increase in the variance of IPW and DR (see Prop. 3.3 and 3.4). Recall Eq. 4 in Prop. 3.7 of our paper (after minor modifications): $$ Var_{\pi^b}[\tilde\theta_{IPW}] - Var_{\pi^b}[\tilde\theta_{MR}]= \frac{1}{n}E_{\pi^b}[Var_{\pi^b}[\hat{\rho}(A, X)\mid Y]Y^2] + \Delta. $$ Intuitively, $\Delta$ is an error term arising from the approximation error of the weights $\hat{w}(Y)$, and $\Delta \rightarrow 0$ as $E_{\pi^b}[(\hat{w}(Y) - E_{\pi^b}[\hat{\rho}(A, X)\mid Y])^2] \rightarrow 0$. Hence, as $Var_{\pi^b}[\hat{\rho}(A, X)\mid Y]$ increases, so will the difference $Var_{\pi^b}[\tilde\theta_{IPW}] - Var_{\pi^b}[\tilde{\theta}_{MR}]$. In other words, the variance of the IPW estimator is likely to increase relative to that of MR as the variance of $\hat{\rho}(A, X)$ increases. Our results confirm this claim. In all our experiments, using both synthetic and real-world data, we observed that the variance of the MR estimator increases with the policy shift. 
However, the increase is significantly smaller compared to other baselines, particularly the IPW and DR methods. See Figures 2b, 6, and 18. > If we can estimate the reward distributions well, then we could simply rely on DM. [...] Why does MR perform much better than DM, even though both perform OPE by estimating some aspect of the reward distribution? We emphasise that MR does not rely on estimating the target reward distribution directly (i.e. $p_{\pi^\ast}(y)$ or $p(y\mid x, a)$). Instead, the MR estimator involves estimating the ratio $w(y) = p_{\pi^\ast}(y)/p_{\pi^b}(y)$ which, as shown in Lemma 3.1, can be estimated by solving a simple scalar-to-scalar regression: $$ w = \arg\min_{f} E_{\pi^b}[(\hat{\rho}(A, X) - f(Y))^2]. $$ In contrast, DM involves estimating the expected conditional reward $\mu(a, x)\coloneqq E[Y\mid X=x, A=a]$, which is obtained by regressing from $(X, A)$ to $Y$. Intuitively, this can be a significantly more challenging regression in practice when the $(X, A)$ space is high-dimensional, as compared to regressing from scalars to scalars (as in MR). Therefore, DM might suffer from higher bias. When the action space is discrete, estimating $\mu(a, x)$ requires using only datapoints with $A=a$. This is limiting, especially with many actions and few observed instances of some actions. In contrast, MR does not require such data partitioning and therefore makes better use of the training data. Section 5 and Appendix F show that MR outperforms DM in terms of MSE and bias. > The experiment design could be improved [...] Could you provide some additional experiment results comparing MR with an estimated weight $\hat{w}(y)$ and IPS, DR, and MIPS with the true respective weights? Our setting of unknown policy ratios $\rho(a, x)$ captures a wide variety of real-world applications, ranging from healthcare to autonomous driving. 
In addition, to demonstrate the utility of MR in settings with **known** $\rho(a, x)$ and $p(e\mid x, a)$ and **unknown** $w(y)$ (for our proposed method, MR), we have conducted additional experiments. Here, we use a fixed budget of datapoints (denoted by $N$) for each baseline; for MR, we allocate $m=2000$ of the available datapoints to estimate $\hat{w}(y)$ and use the remaining datapoints to evaluate the MR estimator (i.e., $n=N - 2000$ for MR). In contrast, for IPW and MIPS (since the importance ratios are already known), we use all of the $N$ datapoints to evaluate the off-policy value (i.e. $n=N$ for IPW and MIPS). The results included in Table 1 in our rebuttal show that MR achieves the smallest MSE among the baselines for $N\leq 6400$. However, we observe that the MSE of IPW, DR and MIPS (with true importance weights) falls below that of MR (with estimated weights $\hat{w}$) when the data size $N$ is large enough (i.e., $N\geq 10000$). This is to be expected since IPW, DR and MIPS are unbiased (i.e., they use ground-truth importance ratios $\rho(a, x)$) whereas MR uses estimated weights $\hat{w}(y)$ (and hence may be biased). Nevertheless, MR still performs the best when $N\leq 6400$. > I would love to see experiment results, for example with $n=5,000, n_a=5000, \alpha=1.0$ [...] We have included the requested results in Figure 1 of our rebuttal document. The figure shows that MR achieves the smallest MSE and bias among all the baselines considered even when $n_a = 5000$. > Can we extend MR to a doubly-robust version? We thank the reviewer for this suggestion. We in fact discuss this in Appendix D.2 (line 764), where we point out that the natural idea used to derive DR extensions of IPW does not work in the case of MR. However, this does not completely rule out the existence of **some** doubly-robust extension of MR. This is an interesting direction, which we leave for future work. 
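To make the two-step pipeline discussed in this rebuttal concrete (fit $\hat{w}$ via the scalar-to-scalar regression of Eq. (3) on $m$ datapoints, then average $\hat{w}(Y)\,Y$ on the rest), here is a minimal sketch on hypothetical synthetic data; the uniform behaviour policy, the reward model, and the network size are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical synthetic logged-bandit data (contexts omitted for simplicity):
# the behaviour policy is uniform over 5 actions and rewards depend on the action.
n = 3000
A = rng.integers(0, 5, size=n)
Y = A + 0.5 * rng.normal(size=n)
pi_star = np.where(A == 4, 0.6, 0.1)   # target policy favours action 4
rho_hat = pi_star / 0.2                # pi_b(a|x) = 1/5, so rho = pi* / pi_b

# Eq. (3): estimate w(y) by regressing rho_hat on Y (a scalar-to-scalar problem).
m = 2000                               # datapoints allocated to weight estimation
reg = MLPRegressor(hidden_layer_sizes=(64, 32), alpha=0.01,
                   max_iter=3000, random_state=0)
reg.fit(Y[:m].reshape(-1, 1), rho_hat[:m])

# MR point estimate on the remaining n - m points: average of w_hat(Y) * Y.
w_hat = reg.predict(Y[m:].reshape(-1, 1))
theta_mr = np.mean(w_hat * Y[m:])

# IPW on the same held-out points, for comparison.
theta_ipw = np.mean(rho_hat[m:] * Y[m:])
```

In this toy problem both estimators target $E_{\pi^*}[Y] = 3.0$; the sketch only illustrates the mechanics of the data split described above, not a variance comparison.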
We hope that the above has addressed the questions raised by the reviewer adequately, and that the reviewer will consider raising their score. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' clarifications. Most of my main concerns were addressed nicely. I still have minor points as below, but I can increase my score to 5 to indicate that at least I am no longer on the negative side. > However, since Eq. (3) is a simple scalar-to-scalar regression, it is easy to optimise in practice. It is hard to believe that every scalar-to-scalar regression is easy. I do consider the empirical analysis of the paper that demonstrates the effectiveness of MR with the estimated weights in a range of situations to be convincing, but since the estimation of $w(y)$ is such a crucial step of MR, it might be possible to further improve the paper by, for example, showing the accuracy of this regression problem, including the case where $\rho(a,x)$ has high variance, in the empirical analysis. > In contrast, DM involves estimating the expected conditional reward $\mu(a,x)$, which is obtained by regressing from $(a,x)$ to $y$. This is true if there is only an action index, but in practice, there are often many useful action features and it is typical to use them when performing OPL or OPE. This should only increase the accuracy of DM or DR by improving the reward regression accuracy, and thus I am still not very optimistic that the superior empirical performance of MR can be generalized to this practically typical setup without further analysis. > Our setting of unknown policy ratios captures a wide variety of real-world applications, ranging from healthcare to autonomous driving. This is totally true, but my argument was that there are also many situations, especially in industry, where we know the true importance weights for IPS and DR. Thus, the comparison of MR with the estimated weights and IPS/DR/etc. 
with the true weights should also be included in the main text, but it was nice to see the authors' additional effort to perform an empirical analysis of this situation. --- Reply to Comment 1.1.1: Title: Thank you! Comment: Firstly, we are deeply thankful to the reviewer for increasing their score. >it might be possible to further improve the paper by, for example, showing the accuracy of this regression problem [...] We take the reviewer's suggestion on board and will include an additional investigation into the accuracy of the regression problem with increasing variance of the policy ratios $\rho(a, x)$. >There are often many useful action features and it is typical to use them when performing OPL or OPE. This should only increase the accuracy of DM or DR via improving the reward regression accuracy [...] We agree that in settings where the action features are highly predictive of the outcome $Y$, the reward model $\mu(a, x)$ may be easy to learn and DM may perform well. However, we would like to note that in many settings, such as healthcare (where the outcome $Y$ is often noisy), this may not be the case, leading to high bias in DM due to model misspecification [Saito et al., 2020; Farajtabar et al., 2018]. We will include a more detailed discussion on this in our paper. Lastly, the suggestion to incorporate a comparison of MR with estimated weights and IPS, DR and MIPS with true weights in the main text is duly noted, and we will include the empirical results for this setting in the revised version of our paper. Once again, we appreciate the reviewer's thoughtful feedback and are grateful for their reconsideration of our paper's score. Yuta Saito, Aihara Shunsuke, Matsutani Megumi, and Narita Yusuke. Open bandit dataset and pipeline: Towards realistic and reproducible off-policy evaluation. arXiv preprint arXiv:2008.07146, 2020. Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh. More robust doubly robust off-policy evaluation. 
In International Conference on Machine Learning, volume 80, pages 1447–1456. PMLR, 2018.
Summary: This paper proposes a marginal OPE estimator that directly corrects the distribution shift wrt rewards instead of correcting the distribution shift wrt actions. The proposed estimator generalizes the idea of marginal IPS (MIPS) and achieves the minimum variance among such marginal estimators when the marginal importance weight is accurately estimated. Experiments on both synthetic and classification-to-bandit datasets demonstrate that the proposed estimator enables more accurate OPE than baseline estimators. Strengths: 1. The idea of applying importance sampling on rewards rather than actions is reasonable in this context. The paper is easy to follow, and the contribution is clearly stated. 2. The paper also provides a data-driven method to estimate the marginal importance weight, which is a simple and straightforward approach. 3. The connection to MIPS, which estimates the expected outcome by applying a marginal importance weight wrt action embeddings, is discussed, and the theory suggests that the proposed MR achieves the minimum variance among marginal estimators. Weaknesses: 1. As the author(s) discuss in the limitations, one potential concern lies in the accuracy of the marginal importance weight. In my understanding, the proposed method seems to work well when the reward is discrete, as an adequate amount of data is observed for each reward. However, when the reward is continuous, I guess sometimes only one importance weight is observed for each reward, which leads to variance in the estimation of the marginal importance weight. This may lead to a similar difficulty as [Kallus&Zhou, 18]. I have several questions regarding this concern, which I list in the following “Questions” section. Nathan Kallus and Angela Zhou. “Policy Evaluation and Optimization with Continuous Treatments” (AISTATS’18) 2. 
The marginal importance weight on the reward has not been discussed in standard OPE for contextual bandits, but similar ideas have been discussed in OPE for RL [Rowland et al., 20] and in another OPE framework which estimates the cumulative distribution function of a policy [Xu et al., 22] [Wu et al., 23]. As comparisons with these methods are important to clarify the novelty of this work, I suggest discussing connections to these papers in the related work. Mark Rowland, Anna Harutyunyan, Hado van Hasselt, Diana Borsa, Tom Schaul, Rémi Munos, and Will Dabney. “Conditional Importance Sampling for Off-Policy Learning” (ICML’20) Yang Xu, Chengchun Shi, Shikai Luo, Lan Wang, and Rui Song. “Quantile Off-Policy Evaluation via Deep Conditional Generative Learning” (‘22) Runzhe Wu, Masatoshi Uehara, and Wen Sun. “Distributional Offline Policy Evaluation with Predictive Error Guarantee” (ICML’23) Technical Quality: 3 good Clarity: 3 good Questions for Authors: The following questions are related to my concern discussed in Weakness 1. 1. Under which conditions is the marginal importance weight accurately estimated? 2. Could you provide an error analysis for the estimation of the marginal importance weight (especially when the reward is continuous)? 3. How is the marginal importance weight parametrized and estimated? Are there any assumptions or regularization? How difficult is the hyperparameter tuning? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: As discussed in Weakness 1, it would be valuable to discuss the potential difficulty in estimating the marginal importance weight when the reward is continuous. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time reviewing our paper and for appreciating its clarity. Below we clarify some of the questions raised: > As author(s) discuss in the limitation, one potential concern lies in the accuracy of the marginal importance weight. In my understanding, the proposed method seems to work well when the reward is discrete, as an adequate amount of data is observed for each reward. However, when the reward is continuous, I guess sometimes only one importance weight is observed for each reward, which leads to variance in the estimation of the marginal importance weight. This may lead to a similar difficulty as [Kallus\&Zhou, 18]... Under which conditions is the marginal importance weight accurately estimated? When the reward $Y$ is continuous, an estimate of our marginal ratios $\hat{w}(y)$ can be obtained by solving a simple 1d regression problem given a policy ratio estimate $\hat{\rho}$, as shown in Eqn 3 of our paper: $$ w = \arg\min_f E_{\pi^b}[(\hat{\rho}(A, X) - f(Y))^2]. $$ This is a straightforward regression problem since both the inputs and outputs of $f$ are scalars. Moreover, this also means that given enough training data and an expressive enough model class (e.g. neural networks), we will be able to accurately estimate the marginal ratios $\hat{w}$. In our experiments (Section 5 in the main paper and Appendix F), we consider two synthetic data setups where the reward is continuous. Figures 10 and 21 in the Appendix show that in both setups, even a moderately sized training dataset (roughly 1000 datapoints) is enough to estimate the marginal ratios well enough that the MR estimator outperforms all the baselines considered. In comparison, the setup considered in [Kallus \& Zhou, 18] is different as they consider continuous-valued actions (in their setting, the reward may or may not be continuous). 
Consequently, the challenges posed in their setting are orthogonal to our setting because their methodology involves estimating the density of the continuous-valued actions from observational data, which may be a difficult task in general. Our methodology, in contrast, only involves a simple scalar-to-scalar regression, which is much easier in general. > Could you provide an error analysis on the estimation of the marginal importance weight (especially when the reward is continuous)? Section 3.1.2 and Appendix C in our paper provide theoretical analyses regarding weight estimation errors which hold when the reward is continuous. Specifically, let $\epsilon \coloneqq \hat{w}(Y) - E_{\pi^b}[\hat{\rho}(A, X)\mid Y]$ denote the approximation error of the weights $\hat{w}(y)$. In Appendix C, we use some recent results regarding the generalization of 2-layer wide neural networks [Lai et al., 2023] to show that, when using wide neural networks to estimate the weights $\hat{w}(y)$, $$E_{\pi^b}[\epsilon^2] \leq \mathcal{O}(m^{-2/3})$$ holds with high probability, where $m$ is the number of training datapoints. In addition, Proposition 3.7 shows how the variance and bias of the MR estimator depend on the weight approximation error $\epsilon$. Specifically, the result shows that as $E_{\pi^b}[\epsilon^2]\rightarrow 0$, MR achieves a lower variance than IPW, while incurring little extra bias. This result is general as it is not specific to any model class and does not require any strong assumptions. > How is the marginal importance weight parametrized and estimated? Are there any assumptions or regularization? How difficult is the hyperparameter tuning? Throughout our experiments, whenever the outcome $Y$ was continuous, we used a fully connected neural network with three hidden layers with 512, 256 and 32 nodes respectively (and ReLU activation function) to estimate the weights $\hat{w}(y)$. 
Moreover, we used the sklearn library with the default learning rate of 0.001 and an $l_2$ regularization coefficient of 0.01 to perform the regression. We found that the same architecture and hyperparameters worked well across all of our experimental settings with continuous rewards, and therefore did not need to perform hyperparameter tuning separately for each setting or dataset. While we mention the model architecture details in Appendix F, we will also add the hyperparameter details in the updated version of our paper. > The marginal importance weight on reward has not been discussed in the standard OPE in contextual bandits, but some similar ideas have been discussed in OPE of RL [Rowland et al., 20] and another OPE framework which estimates the cumulative distribution function of a policy [Xu et al., 22] [Wu et al., 23]. As discussions with these methods are important to clarify the novelty of this work, I suggest discussing connections between these papers in related work. We thank the reviewer for pointing out the relevant literature. We will add a detailed discussion of these works in the updated version of our paper. We hope the above has addressed the concerns raised by the reviewer and that the reviewer will consider increasing their score. Jianfa Lai, Manyun Xu, Rui Chen, and Qian Lin. Generalization ability of wide neural networks on $\mathbb{R}$, 2023. URL https://arxiv.org/abs/2302.05933. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers to the questions, and the clarifications on the experimental settings and results resolved some of my concerns. However, let me clarify my intention in the comment below. > **Q.** when the reward is continuous, I guess sometimes only one importance weight is observed for each reward, which leads to the variance in the estimation of the marginal importance weight. This may lead to a similar difficulty as [Kallus&Zhou, 18]. 
> **A.** In comparison, the setup considered in [Kallus & Zhou, 18] is different as they consider continuous-valued actions (in their setting, the reward may or may not be continuous). Consequently, the challenges posed in their setting are orthogonal to our setting because their methodology involves estimating the density of the continuous-valued actions from observational data, which may be a difficult task in general. Our methodology, in contrast, only involves a simple scalar-to-scalar regression, which is much easier in general. I do understand that [Kallus & Zhou, 18] consider a different setting where the action space is continuous. What I meant here is that [Kallus & Zhou, 18] needed to incorporate nearby actions to deal with the variance issue of IPS caused by sparsely observed actions in the action range of $(-\infty, \infty)$, even when the action space is 1-dimensional. In your setting, a similar situation can happen, because it also involves importance sampling on the 1-dimensional reward space in the range of $(-\infty, \infty)$. For instance, how about the case where the reward function is given as E[r|x, a] = a and Var(r) = 0? In this case, the true reward propensity is equivalent to the action propensity, and thus issues similar to those in [Kallus & Zhou, 18] can arise with the proposed method. --- Reply to Comment 1.1.1: Title: Thank you for your question! Comment: We thank the reviewer for their insightful questions and continued engagement. Below we address the reviewer's comments by elucidating the similarities and differences between our setup and that of [Kallus & Zhou, 18]. Both [Kallus & Zhou, 18] and our work address the problem of off-policy evaluation using importance sampling methodologies. The importance ratios in [Kallus & Zhou, 18] are the policy ratios $\rho(a, x) = \pi^*(a\mid x)/\pi^b(a\mid x)$, whereas the importance ratio in our setting is the ratio of marginal distributions $w(y) = p_{\pi^*}(y)/p_{\pi^b}(y)$. 
[Kallus & Zhou, 18] consider a setting with a continuous action space (e.g. medicine doses) and a target policy $\pi^*(a\mid x)$ which is a Dirac delta at a specific action value $t(x)$ (i.e. $\pi^*(a\mid x) = \delta_{t(x)}(a)$). Since the action space is continuous, we have that $p_{\pi^b}(A = t(X)) = 0$ and we cannot use the traditional IPW estimator shown below (as it will almost surely be 0): $$ \hat\theta_{IPW}= \frac{1}{n}\sum_{i=1}^n \frac{\mathbb{1}(a_i = t(x_i))}{\pi^b(a_i \mid x_i)} Y_i. $$ Instead, the authors propose relaxing the unit mass of the Dirac delta function using a kernel function $K(u)$. To summarise, the main challenge addressed in this work is the estimation of $\rho(a, x)$ in the case where the action space is continuous and the target policy is deterministic (conditioned on $X$). The authors address this problem by replacing the Dirac delta target policy $\pi^*$ with a smoothed kernel relaxation. In contrast, for the MR estimator, given a policy ratio $\rho(a, x)$, we can estimate the marginal ratios $w(y)$ using the regression shown in Eq 3 when the rewards are continuous. In other words, compared to IPW, the MR method only involves the additional step of estimating the marginal ratios $w(y)$, which can be estimated directly using a simple regression and does not involve estimating the marginal distributions $p_{\pi^*}(y)$ or $p_{\pi^b}(y)$ separately. In practice, when we use neural networks (for example) to perform this regression, this imposes an implicit restriction of smoothness on the computed ratios $\hat{w}(y)$. Therefore, if a specific value of the reward $y'$ is unobserved in the observational data, our trained regression model will infer the value of the ratio $\hat{w}(y')$ by interpolation. >how about the case where the reward function is given as E[r|x, a] = a and Var(r) = 0? We assume that the reviewer means that $Var_{\pi^*}(r) = 0$ (i.e. that the target reward distribution is a Dirac delta). 
Given the two conditions above, it can be shown that $Y = A = a'$ almost surely under the target policy $\pi^*$ for some fixed value $a'$, and therefore the target policy is also a Dirac delta, $\pi^*(a\mid x) = \delta_{a'}(a)$. Hence, in the example given above, the setting is the same as the one considered in [Kallus & Zhou, 18], i.e. the action space is continuous and the target policy is a Dirac delta. This means that estimating the policy ratios $\rho(a, x)$ will involve the same challenge as considered in [Kallus & Zhou, 18]. However, we emphasise that this problem does not arise because of a continuous reward space, but is more generally present whenever we are dealing with Dirac delta target policies and continuous action spaces, and would also be present if the reward space were discrete in this example. Moreover, this challenge is not specific to the MR estimator, and will also be present in any methodology which uses the policy ratios $\rho(a, x)$ (such as IPW, DR, MIPS, etc). In comparison, specifically for the MR estimator, if we were **given an estimate of the policy ratio $\rho(a, x)$** (obtained using the methodology in [Kallus & Zhou, 18], for example) then we can simply estimate $\hat{w}(y)$ using a regression, and the rest of our methodology remains the same. In other words, MR does not impose any significant additional complexity compared to IPW as it only involves an additional scalar-to-scalar regression. We do not need to explicitly incorporate any smoothing methodologies when estimating $\hat{w}$, as the regression implicitly performs this smoothing in practice. We hope that the above addresses the reviewer's question adequately.
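The failure mode discussed in this exchange can be made concrete with a toy numpy sketch: the Gaussian behaviour policy, the linear reward model, and the bandwidth below are illustrative assumptions in the spirit of, not an implementation of, [Kallus & Zhou, 18]. The naive IPW indicator degenerates for a Dirac target policy over continuous actions, while the kernel-relaxed version recovers the target value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy continuous-action setup: behaviour policy pi_b = N(0, 1), target policy
# is the Dirac delta at t(x) = 1, and E[Y | a] = a, so the target value is 1.
n = 20000
a = rng.normal(size=n)
y = a + 0.1 * rng.normal(size=n)
pi_b_density = np.exp(-a ** 2 / 2) / np.sqrt(2 * np.pi)

# Naive IPW: the indicator 1(a_i = t(x_i)) is almost surely zero for
# continuous actions, so the estimate degenerates to 0.
theta_naive = np.mean((a == 1.0) / pi_b_density * y)

# Kernel-smoothed IPW in the spirit of [Kallus & Zhou, 18]: relax the
# indicator to a Gaussian kernel K_h centred at t(x) = 1 with bandwidth h.
h = 0.1
kernel = np.exp(-((a - 1.0) / h) ** 2 / 2) / (np.sqrt(2 * np.pi) * h)
theta_kernel = np.mean(kernel / pi_b_density * y)   # close to the target value 1
```

The bandwidth `h` trades bias for variance, which is exactly the smoothing step that, as noted above, the MR regression performs implicitly rather than explicitly.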
Summary: This paper introduces a new off-policy evaluation (OPE) method called Marginal Ratio (MR) for contextual bandit problems. Conventional methods such as Inverse Probability Weighting (IPW) and Doubly Robust (DR) rely on estimating policy ratios, which can incur large variance when there is low overlap between the target policy and the behavior policy. Variance and bias analyses of the MR estimator are provided. Specifically, the MR estimator achieves smaller variance compared to both IPW and DR when the importance weights $\rho$ and marginal ratio $\omega$ are known. When both of them are unknown, the authors proposed a method that can approximate the weights and showed that with high probability the MR estimator is more favorable than the IPW counterpart. Extensive numerical experiments are provided, in which the MR estimator outperformed the baselines in terms of mean squared error while maintaining small variance. Strengths: The paper is overall well-written and easy to follow. The theorems, propositions and comparisons are well-organized. The authors provided a good motivation to study the MR estimator, which addresses the high-variance issues of IPW and DR in practice. The idea is novel and interesting, and the simulation results showed the great potential of the proposed estimator. Weaknesses: The assumptions of Propositions 3.3 and 3.4 are quite strong. Though the MR estimator achieves lower variance compared to the IPW and DR estimators, I think the assumption of knowing the ratio $\omega$ is much stronger than knowing the importance weight $\rho$. Thus, it is not a very fair comparison. When the behavior policy and the marginal ratio $\omega$ are known, the authors showed that as the training data size $m$ increases, the biases of the MR and IPW estimators are almost the same with high probability and the variance of the MR estimator can be smaller than the IPW counterpart. 
I don't think this is a strong result as the MR estimator depends on the unknown importance ratio $\rho$ and the order term $\mathcal{O}(m^{-1/3})$ in the variance analysis could be very large in practice. Overall, I think the theoretical results on the MR estimator are sound yet incremental compared to existing methods. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. Can the order in the term $\mathcal{O}(m^{-1/3})$ be improved, e.g., to $\mathcal{O}(m^{-1/2})$? 2. I am a bit confused by the connections to MIPS and the application to causal inference. As you mentioned, MIPS is trying to solve the large action space problem, while in causal inference the action space is usually binary. What are the benefits of using the MR estimator? Is it more applicable to large action space problems or small action space problems? Also, the connection and comparison to MIPS is not very straightforward to me as the final analyses do not explicitly depend on the action space size. Can you provide more explanations here? Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N.A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for acknowledging the novelty of our idea and for noting that our ``simulation results show great potential''. Below we clarify some of the questions raised by the reviewer: > [...] I think the assumption of knowing the ratio $w(y)$ is much stronger than knowing the importance weight $\rho(a, x)$. Thus, it is not a very fair comparison. While our results in Propositions 3.3 and 3.4 assume that the importance weights $w(y)$ are known, we also include theoretical results for the case where the weights $w(y)$ are not known in Section 3.1.2 and Appendix C. Specifically, Proposition 3.7 intuitively shows that if our weights $\hat{w}$ are estimated `well enough', then the variance of the MR estimator will be smaller than that of the IPW estimator while incurring little extra bias. We note that this result also holds for the case where $\rho(a, x)$ is known exactly, whereas $w(y)$ is not. Our empirical results also corroborate this finding: in all of our experiments the marginal ratios $\hat{w}(y)$ are estimated (and not exact), and we observe that overall MR consistently achieves a lower variance than most of the baselines considered. Furthermore, we conducted additional experiments with **known** policy ratios $\rho(a, x)$ and $p(e\mid x, a)$ (for the IPW and MIPS estimators), and **estimated** marginal ratios $\hat{w}(y)$ (for the MR estimator). For a fair comparison, we use a fixed budget of datapoints (denoted by $N$) for each baseline, and in the case of MR we allocate $m=2000$ of the available datapoints to estimate $\hat{w}(y)$ and the rest of the data to evaluate the MR estimator (i.e. $n = N-2000$ for MR). In contrast, for IPW and MIPS, since the importance ratios are already known, we use all of the $N$ datapoints for evaluation of the off-policy value (i.e. $n=N$ for IPW and MIPS). The results in Table 1 of our rebuttal document show that MR achieves the smallest MSE among the baselines for $N\leq 6400$. 
However, we observe that the MSE of IPW, DR and MIPS (with true importance weights) falls below that of MR (with estimated weights $\hat{w}$) when the data size $N$ is large enough (i.e., $N\geq 10000$). This is to be expected since, in this setting, the IPW, DR and MIPS estimators are unbiased (i.e., they use ground-truth importance ratios) whereas MR uses estimated weights (and hence may not be unbiased). MR still performs the best when $N\leq 6400$. > When the behavior policy and the marginal ratio are known, [...] as the training data size increases, the biases of MR and IPW estimators are almost the same [...]. I don't think this is a strong result [...]. We thank the reviewer for their comment. We assume that the reviewer meant ``When the behavior policy and the marginal ratio are **unknown**, [...]''. We would like to note that the main purpose of Theorem C.1 (which provides the $\mathcal{O}(m^{-1/3})$ order term) is to show that, when estimating the marginal ratios $\hat{w}(y)$, there exists one model class (i.e. neural networks) for which, at least asymptotically in the number of training datapoints, the MR estimator will achieve a lower variance than that of the IPW estimator, while incurring no additional bias. As we mention in Section 3.1.2, this theorem uses recent results regarding the generalization of 2-layer wide neural networks [Lai et al., 2023], and therefore may not capture the convergence rate for general models. In contrast, Proposition 3.7, which expresses the variance of MR in terms of the weight approximation error $\epsilon$, is a general result which is not specific to a model class and does not rely on any regularity assumptions (unlike Theorem C.1). We will further clarify this in the updated version of the paper. > Can the order in the term $\mathcal{O}(m^{-1/3})$ be improved, e.g., to $\mathcal{O}(m^{-1/2})$? 
Our result uses Theorem 4.1 from [Lai et al., 2023], which theoretically investigates the generalization of wide neural networks with 1d inputs and outputs, as we mention in Section 3.1.2. This theorem provides the convergence rate of $\mathcal{O}(m^{-1/3})$. Improving this convergence rate is not straightforward, as it would require improving upon the results provided in [Lai et al., 2023]. This is nonetheless an interesting direction to explore for future work. > [...] What is the benefits of using the MR estimator? Is it more applicable to large action space problems or small action space problems? Also, [...] the final analyses do not explicitly depend on the action space size. Can you provide more explanations here? We would like to emphasise that the performance of MR is not directly dependent on the size of the action space, but rather on the variance of the policy ratios $\rho(a, x)$. Specifically, our theoretical results in Propositions 3.3, 3.4 and 3.7 show that MR performs especially well relative to the IPW and DR estimators in cases where the policy ratios $\rho(a, x)$ have high variance. This high variance in $\rho(a, x)$ can stem from many sources, such as large action spaces [Saito \& Joachims, 2022] as well as large policy shifts (which often occur in the setting of causal inference when estimating the ATE). This is also supported by our experimental results, where we observe that when either the policy shift increases (e.g. Figure 2b) or the size of the action space increases (e.g. Figure 8), the variance and MSE of all baselines increase, but this increase is smallest for the MR estimator (among all the baselines considered). We hope that we were able to address all of the reviewer's questions above, and we hope that the reviewer will consider increasing their score. Jianfa Lai, Manyun Xu, Rui Chen, and Qian Lin. Generalization ability of wide neural networks on $\mathbb{R}$, 2023. URL https://arxiv.org/abs/2302.05933. Yuta Saito and Thorsten Joachims. 
Off-policy evaluation for large action spaces via embeddings. In Proceedings of the 39th International Conference on Machine Learning, pages 19089–19122. PMLR, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The authors addressed my concerns and I raised the score.
Summary: The paper looks at the problem of off-policy evaluation (OPE) in contextual bandits. The problem consists of estimating the expected reward obtained from a policy different from the one that collected the data. The authors first analyze classic OPE estimators used for this problem, such as Inverse Probability Weighting (IPW) and Doubly Robust (DR). The authors notice how such estimators have high variance (especially IPW), and this is due to their focus on policy shift. The authors propose a novel estimator, called Marginal Ratio (MR) estimator, which focuses on the shift of the marginal distribution of the rewards instead. The authors also theoretically show that, whenever the marginal ratio is known exactly, MR has lower or equal variance compared to IPW. The authors also derive a relationship between the variances of MR and DR. In this case, however, they show that it is not true that the variance of MR is always lower or equal to the one of DR. Furthermore, the authors show that MR has lower variance than Marginalized IPS (MIPS, a recently proposed estimator), under the same assumption used by MIPS. Strengths: - Simple and interesting idea with a novel point of view - Clear presentation - Careful experimental analysis (also considering the appendix) and strong empirical performance Weaknesses: - Since the problem with IPW and DR is high variance, a really simple way to reduce the variance for such estimators is the Self-Normalization trick [1]. I would have preferred to see the empirical comparison also against the Self-Normalized variant of IPW and DR. - There is no investigation on whether the Self-Normalization trick is applicable to the proposed MR. This could further improve the performance of the proposed estimator (if applicable) - The experimental setting seems to be such that IPW has high variance, which is fine since it is a known characteristic of IPW. 
However, there are also settings where IPW performs well (see [2] for an in-depth review of the performance of state-of-the-art OPE estimators). It could have been useful to see a comparison between the baseline estimators and MR in such settings, even only in the appendix. [1] A. Swaminathan and T. Joachims, The self-normalized estimator for counterfactual learning, NeurIPS 2015. [2] Saito et al., Evaluating the Robustness of Off-Policy Evaluation, RecSys 2021. Technical Quality: 3 good Clarity: 4 excellent Questions for Authors: Taken from the above section: - Do you think that the Self-Normalization trick is applicable to MR? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 4 excellent Contribution: 3 good Limitations: The authors have addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for appreciating the novel point of view presented in our paper and our careful experimental analysis. Below we address some of the questions raised: > Since the problem with IPW and DR is high variance, a really simple way to reduce the variance for such estimators is the Self-Normalization trick [1]. I would have preferred to see the empirical comparison also against the Self-Normalized variant of IPW and DR. This is an interesting suggestion and indeed, the self-normalization trick is applicable to the MR estimator. Specifically, the self-normalized MR estimator (denoted as $\hat{\theta}_{\text{SNMR}}$) can be written as follows: $$ \hat{\theta}_{\text{SNMR}} \coloneqq \sum_{i=1}^n \frac{\hat{w}(Y_i)}{\sum_{j=1}^n \hat{w}(Y_j)}\, Y_i. $$ We conducted experiments to investigate the effect of self-normalisation on the performance of the IPW, DR and MR estimators as suggested. Figure 2 in our rebuttal shows results for three different choices of parameter configurations. Overall, we observe that in all settings, the MR and self-normalised MR (SNMR) estimators outperform all other baselines, including the self-normalised IPW and DR estimators (denoted as SNIPW and SNDR respectively). Moreover, in some settings where the importance ratios $\rho(a, x)$ reach very high values, self-normalisation can reduce the variance and MSE of the corresponding estimator (for example, Figure 2b). However, we also observe cases in which self-normalization does not significantly change the results (Figure 2a), or may even slightly worsen the MSE of the estimators (Figure 2c). We thank the reviewer again for their insightful suggestion and will add these experiments to the updated version of our paper. It is worth noting that our investigation of self-normalization only considers the synthetic data setup, and we will include a more thorough investigation in the updated version of the paper. 
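For concreteness, the plain and self-normalised MR estimates can be sketched in a few lines of numpy. This is an illustrative sketch only, not code from the paper; the function names `mr_estimate` and `snmr_estimate` are hypothetical:

```python
import numpy as np

def mr_estimate(w_hat, y):
    """Plain MR estimate: (1/n) * sum_i w_hat(Y_i) * Y_i."""
    return np.mean(w_hat * y)

def snmr_estimate(w_hat, y):
    """Self-normalised MR: the weights are rescaled to sum to one,
    so the estimate is invariant to a common rescaling of w_hat."""
    return np.sum(w_hat / np.sum(w_hat) * y)

# Toy check: with constant weights both reduce to the sample mean of Y.
y = np.array([1.0, 2.0, 3.0, 4.0])
w = np.ones_like(y)
assert np.isclose(mr_estimate(w, y), snmr_estimate(w, y))
```

The self-normalisation simply trades a small bias for the scale-invariance of the weights, which is what drives the variance reduction observed when the ratios reach very high values.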
> The experimental setting seems to be such that IPW has high variance, which is fine since it is a known characteristic of IPW. However, there are also settings where IPW performs well (see [2] for an in-depth review of the performance of state-of-the-art OPE estimators). It could have been useful to see a comparison between the baseline estimators and MR in such settings, even only in the appendix. We thank the reviewer for their suggestion; in the updated version we will include the settings considered in [Saito et al., 2021] where IPW performs relatively well. It is worth mentioning here that we conducted additional experiments with **known** policy ratios $\rho(a, x)$ and $p(e\mid x, a)$ (for IPW and MIPS estimators), and **estimated** marginal ratios $\hat{w}(y)$ (for the MR estimator). For a fair comparison, we use a fixed budget of datapoints (denoted by $N$) for each baseline; in the case of MR we allocate $m=2000$ of the available datapoints to estimating $\hat{w}(y)$ and the rest of the data to evaluating the MR estimator (i.e. $n = N-2000$ for MR). In contrast, for IPW and MIPS, since the importance ratios are already known, we use all of the $N$ datapoints for evaluation of the off-policy value (i.e. $n=N$ for IPW and MIPS). The results in Table 1 of our rebuttal document show that MR achieves the smallest MSE among the baselines for $N\leq 6400$. However, we observe that the MSE of IPW, DR and MIPS (with true importance weights) falls below that of MR (with estimated weights $\hat{w}$) when the data size $N$ is large enough (i.e., $N\geq 10000$). This is to be expected since in this setting, the IPW, DR and MIPS estimators are unbiased (i.e., use ground-truth importance ratios) whereas MR uses estimated weights (and hence may not be unbiased). MR, despite being at a disadvantage in this setting, still performs the best when $N\leq 6400$. We hope the above addressed the questions raised sufficiently. 
--- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses and clarifications. I believe that the additional experiments mentioned in your comment will greatly benefit the paper. I remain convinced that this paper is deserving of acceptance, and thus, I will maintain my score of 7.
Rebuttal 1: Rebuttal: Firstly, we would like to thank the reviewers for taking the time to review our paper, appreciating the quality of our work and providing many insightful comments regarding it. To address some of the concerns raised, we have conducted additional experiments and included the results in the attached PDF file. We hope that our rebuttal along with the additional results answer the reviewers' questions, and that the reviewers will consider increasing their scores. Pdf: /pdf/84dfef7a7d23ad0f68ec5fbdbea1cd1b96bc2995.pdf
NeurIPS_2023_submissions_huggingface
2,023
Summary: This paper addresses the problem of off-policy evaluation (OPE) using a variant of the inverse propensity score (IPS) estimator on logged bandit data. The authors specifically aim to reduce the variance of IPS by focusing on the shift in the marginal distribution of rewards instead of the policies themselves, resulting in a new estimator called the Marginal Ratio (MR) estimator. The proposed estimator is extensively analyzed, including comparisons with other related works such as IPS, DR, and MIPS. Furthermore, the authors demonstrate the utility of the MR estimator in causal inference for estimating average treatment effects (ATE). The effectiveness of the MR estimator in OPE is evaluated through a standard experimental setup, involving both synthetic and semi-synthetic data. # POST-REBUTTAL: I have read the author’s rebuttal and the authors addressed all my concerns. Strengths: The clarity and quality of writing in this paper are solid. The authors not only address the standard contextual bandit setting but also extend their work to causal inference for estimating average treatment effects (ATE). The overall idea of considering the shift in the marginal distribution of outcomes instead of the policies themselves is simple and very interesting. The theoretical concepts are effectively explained, and the proposed estimator is comprehensively compared to standard estimators such as IPS, DR, and MIPS. To the best of my knowledge, the derived guarantees are original contributions. The authors employ a standard experimental setup to validate their claims and showcase the favorable performance of their estimator through an extensive set of experiments. Weaknesses: **Computational Efficiency:** The proposed MR estimator introduces an additional step of estimating $w(y)$, which may pose computational challenges compared to IPS, where the importance weights $\pi/\pi_0$ are readily accessible. 
While this might be manageable in the context of off-policy evaluation, where a policy $\pi$ is given and the goal is to estimate its value using $\hat{\theta}$ through IPS, DR, MIPS, or MR, it may become problematic in off-policy learning. In off-policy learning, we optimize $\hat{\theta} = \hat{\theta}(\pi)$ with respect to $\pi$ to find the policy that maximizes the reward. Then, performing a regression to estimate $w(y)$ in each optimization step with respect to $\pi$ can be computationally demanding. This presents a notable challenge. **Experiments:** The experimental setup presented in the paper is solid. However, it would be beneficial to incorporate more complex computer vision datasets such as Fashion-MNIST and CIFAR-100, as seen in recent studies (e.g., [1, 2]). These datasets feature larger action sets and contexts when converted to contextual bandit instances. **Related Work:** While not a weakness, it would be valuable to include citations for two recent papers [1, 2] that propose new corrections for the importance weights to reduce variance. In [1], the authors suggest clipping only the propensity scores $\pi_0$ (as $\rm{max}(\pi_0, 0)$) instead of the importance weight $\pi/\pi_0$. Additionally, [2] introduces a smooth regularization of the importance weights by incorporating $\pi/\pi_0^\alpha$, where $\alpha \in [0, 1]$. While it is not necessary to directly compare the proposed approach with these papers, acknowledging their existence and contributions would be appropriate. [1] Ben London and Ted Sandler. Bayesian counterfactual risk minimization. In International Conference on Machine Learning, pp. 4125–4133. PMLR, 2019. Preprint version arXiv:1806.11500. [2] Imad Aouali, Victor-Emmanuel Brunel, David Rohde, and Anna Korba. Exponential Smoothing for Off-Policy Learning. arXiv preprint arXiv:2305.15877, 2023. 
Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: - In terms of computational efficiency, how does the efficiency of the MR estimator compare to IPS? - Is there an efficient method to optimize the MR estimator with respect to the policy $\pi$ (i.e., in the context of off-policy learning) considering that estimating $w(y)$ is required in each iteration during the optimization process? Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors adequately addressed limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments on our contributions and the soundness of our approach. Below we respond to the questions raised: > In terms of computational efficiency, how does the efficiency of the MR estimator compare to IPS? In the setting of off-policy evaluation, computing the MR estimator involves an additional step of estimating the weights $\hat{w}(y)$ (compared to the IPW estimator). When the outcome $Y$ is continuous, the estimation of weights $\hat{w}(y)$ can be achieved by solving a simple regression over functions $f:\mathbb{R}\rightarrow \mathbb{R}$ as we show in Lemma 3.1: $$ w = \arg\min_{f} E_{\pi^b}[(\hat{\rho}(A, X) - f(Y))^2]. $$ This is a simple scalar-to-scalar regression, which does not impose any significant computational overhead, and in practice $\hat{w}(Y)$ is well estimated given moderately sized training data (around 1000 to 2000 datapoints). See Figure 10 in the appendix, for example. In the case when $Y$ is discrete, this estimation becomes even more straightforward, as we can directly use the formulation $w(y) = E_{\pi^b} [\rho(A, X)\mid Y=y]$ and estimate $w(y)$ by computing the empirical mean of $\hat{\rho}(A, X)$ over datapoints where $Y=y$. > The proposed MR estimator introduces an additional step of estimating $w(y)$, which may pose computational challenges compared to IPS, where the importance weights $\pi/\pi_0$ are readily accessible. While this might be manageable in the context of off-policy evaluation, where a policy $\pi$ is given and the goal is to estimate its value using $\hat{\theta}$ through IPS, DR, MIPS, or MR, it may become problematic in off-policy learning. In off-policy learning, we optimize $\hat{\theta} = \hat{\theta}(\pi)$ with respect to $\pi$ to find the policy that maximizes the reward. Then, performing a regression to estimate $w(y)$ in each optimization step with respect to $\pi$ can be computationally demanding. This presents a notable challenge. 
> Is there an efficient method to optimize the MR estimator with respect to the policy $\pi$ (i.e., in the context of off-policy learning) considering that estimating $w(y)$ is required in each iteration during the optimization process? We emphasise that this paper mainly addresses the problem of off-policy evaluation. We agree that directly applying our methodology to off-policy learning will indeed incur additional computational overhead, since we need to run an additional optimisation to update our ratios $\hat{w}(y)$ after every policy update. However, one possible way to avoid this computational overhead is to use ideas from the transfer learning literature [Lu et al., 2021]. Specifically, Section 5.1 of [Lu et al., 2021] proposes a methodology which combines the two optimisation problems into one, and hence could be used to avoid running the regression separately after each optimisation step. We reiterate, however, that our paper is specific to off-policy evaluation, and extending this to off-policy learning is an interesting avenue for future research. > The experimental setup presented in the paper is solid. However, it would be beneficial to incorporate more complex computer vision datasets such as Fashion-MNIST and CIFAR-100, as seen in recent studies (e.g., [1, 2]). These datasets feature larger action sets and contexts when converted to contextual bandit instances. We thank the reviewer for their suggestion. However, like the Fashion-MNIST dataset, the MNIST dataset included in our experiments also comprises 10 labels and $28 \times 28$ images, and therefore the size of the action and context spaces is the same for both datasets. Additionally, our synthetic data setup also includes experiments with significantly larger action spaces (of sizes up to 5000). We will consider adding additional experiments on image data such as CIFAR-100, as suggested, in the updated version of our paper. 
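The weight-estimation step described in the previous reply (the scalar-to-scalar regression of Lemma 3.1 for continuous $Y$, and the empirical conditional mean for discrete $Y$) can be sketched as follows. This is an illustrative numpy sketch; the polynomial fit merely stands in for a flexible 1-d regressor and is our assumption, not the paper's choice of model:

```python
import numpy as np

def w_hat_discrete(rho, y):
    """Discrete outcomes: w_hat(y) = empirical mean of rho(A, X)
    over the datapoints with Y_i = y."""
    return {val: rho[y == val].mean() for val in np.unique(y)}

def w_hat_continuous(rho, y, degree=3):
    """Continuous outcomes: a 1-d regression of rho on y.
    A simple polynomial least-squares fit stands in here for
    whatever flexible scalar-to-scalar regressor one prefers."""
    coeffs = np.polyfit(y, rho, deg=degree)
    return lambda y_new: np.polyval(coeffs, y_new)

# Discrete toy example with two outcome values.
rho = np.array([0.5, 1.5, 2.0, 2.0])
y = np.array([0, 0, 1, 1])
w = w_hat_discrete(rho, y)  # w[0] == 1.0, w[1] == 2.0
```

Either estimate is then plugged into the MR estimate $\frac{1}{n}\sum_i \hat{w}(Y_i) Y_i$, which is why the overhead relative to IPW is limited to this one 1-d fit.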
> While not a weakness, it would be valuable to include citations for two recent papers [1, 2] that propose new corrections for the importance weights to reduce variance. Thank you for this suggestion, we will cite these works in the updated version of our paper. We hope our clarifications have addressed the reviewer's concerns, and kindly ask them to consider increasing their score. Nan Lu, Tianyi Zhang, Tongtong Fang, Takeshi Teshima, Masashi Sugiyama. Rethinking Importance Weighting for Transfer Learning, 2021. --- Rebuttal Comment 1.1: Title: Thank you! Comment: I appreciate your comprehensive response. My concerns have been addressed. I recommend incorporating a detailed discussion that explores the challenge of applying MR in off-policy learning as it is an important task related to off-policy evaluation, alongside a comparison with the missing related works. Additionally, the inclusion of the suggested experiments and the new CIFAR-100 trials would significantly enrich the paper. __Given your responses and assuming that the paper will be revised accordingly, I am pleased to raise my score to 7.__ --- Reply to Comment 1.1.1: Title: Thank you for the feedback! Comment: We are deeply grateful to the reviewer for providing positive feedback and for increasing their score. We also appreciate the useful suggestions which will greatly benefit our work and will make the suggested changes to our paper.
null
null
null
null
null
null
Joint Feature and Differentiable $ k $-NN Graph Learning using Dirichlet Energy
Accept (poster)
Summary: The authors present a method to jointly learn important features as well as a $k$-nearest neighbour graph for data. They extensively motivate and evaluate their method. Strengths: - Great method, very novel and sensible. - Excellent and thorough evaluation. - Method is well-justified by theoretical explanations. - Good ablation studies allow disentangling of the different components of the method. - It is very admirable that the authors include the reconstruction metric in their main results, where their method does not perform as well. I agree with the authors that this is not a good metric for noisy data, but it is great that the authors do not try to hide their performance nonetheless. Weaknesses: - Very densely written, the paper should be more accessible than this. If the authors are able to improve the clarity of the writing, the impact of this work will greatly increase. - To the point above, I would recommend a few sentences about the intuition behind the method. **Overall**: Excellent work, some intuitive explanations about the method would greatly improve the reader's understanding of how the method works. Technical Quality: 4 excellent Clarity: 2 fair Questions for Authors: - In equation 5, if we require $s_{i, i} = 0$, then why is $\mathbf{S} = I$ a solution? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 2 fair Contribution: 4 excellent Limitations: The authors do a great job describing the limitations of their method. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8: Strong Accept: Technically strong paper, with novel ideas, excellent impact on at least one area, or high-to-excellent impact on multiple areas, with excellent evaluation, resources, and reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We greatly appreciate the reviewer's positive comments on our work. We provide our response below: ### Response to the point in Weaknesses **P1: "I would recommend a few sentences about the intuition behind the method."** **A1:** Thank you. The main focus of our work is to select features in neural networks using the Dirichlet Energy. The motivation behind our method is that noisy and irrelevant data will deteriorate the quality of the graph structure, thereby degrading the performance of feature selection. To address this issue, we propose a deep FS method that uses the Dirichlet Energy to learn features and update the $k$-NN graph structure jointly, which is the first contribution of our work, illustrated in the top panel of Fig. 2 in our paper. As depicted in the constraints of problem (2), the result of feature selection should be discrete and unique. Although the traditional Gumbel-Softmax method is capable of approximating the discrete result, the issue of uniqueness is still not addressed. To this end, we propose the second contribution of our paper, namely the algorithmically designed UFS module, as illustrated in the bottom left panel of Fig. 2. Moreover, we propose to learn the $k$-NN graph by minimizing the Dirichlet Energy. Note that traditional $k$-NN learning methods cannot be employed in neural networks due to the non-differentiable nature of the sorting operation. To address this issue, we employ the OT technique to learn the $k$-NN graph differentiably, which is the third contribution of the proposed method, illustrated in the bottom right panel of Fig. 2. We have also listed the novelty of the proposed method in our response to Reviewer JXCq in Q1; kindly refer to that response for extra information. The proposed framework is not only a novel FS method, but also provides a new paradigm for differentiable graph learning, which is demanded by the existing literature. 
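A minimal numpy sketch of the Gumbel-Softmax style selection discussed above, illustrating the duplicate-selection issue that the UFS module is designed to address. All names here are hypothetical and this is not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_selector(logits, tau=0.5):
    """One relaxed one-hot selection row over the d features:
    add Gumbel noise to the logits and apply a temperature softmax."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    z = (logits + g) / tau
    e = np.exp(z - z.max())
    return e / e.sum()

d, k = 10, 3
logits = rng.normal(size=(k, d))  # one learnable logit row per selected feature
S = np.stack([gumbel_softmax_selector(logits[i]) for i in range(k)])

# Each row of S is (approximately) one-hot, but nothing ties the rows
# together: two rows may concentrate on the same feature, which is the
# duplicate (non-unique) selection that UFS is designed to rule out.
picked = S.argmax(axis=1)
```

At low temperature `tau` each row approaches a hard one-hot selection, yet the rows are sampled independently, so uniqueness of the selected feature subset is not guaranteed without an extra mechanism.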
### Response to the point in Questions **Q1: "In equation 5, if we require $s_{i,i}=0$, then why is $S=I$ a solution?"** **A1:** Thanks. The trivial solution $S=I$ applies to $\min_S \mathrm{tr}(F^\top LF)$. When the problem is changed into $$ \min_S \mathrm{tr}(F^\top LF)\quad \text{s.t.}\quad S\mathbf{1}_n=\mathbf{1}_n,\; s_{i,j}\ge 0,\; s_{i,i}=0, $$ it is equivalent to solving each row independently as $$ \min_{s_{i,j}} \frac{1}{2} \sum_{j=1}^n e_{i,j}s_{i,j}\quad \text{s.t.}\quad \sum_{j=1}^n s_{i,j}=1,\; s_{i,j}\ge 0,\; s_{i,i}=0, $$ which can be derived from Eq. (S4) and Eq. (S5) in Appendix S3. It is easy to see that the minimizer assigns probability $1$ to the nearest data point, so that only this point serves as a neighbour of $\boldsymbol{x}^i$ while all other data points receive zero weight, which is also not desirable in $k$-NN graph learning. --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Thanks for your detailed rebuttal. I maintain my high rating and hope this work gets accepted. If the authors wish, I believe some of the explanations provided in the rebuttals should be included in the paper.
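The degenerate solution discussed in A1 can be verified with a small sketch: minimising $\sum_j e_{i,j}s_{i,j}$ over the probability simplex with $s_{i,i}=0$ puts all mass on the single nearest point. A numpy illustration, with hypothetical names:

```python
import numpy as np

def row_solution(e_i, i):
    """Minimizer of sum_j e_ij * s_ij over the simplex with s_ii = 0:
    a linear objective over the simplex is minimized at a vertex,
    i.e. all probability mass goes to the closest other point."""
    s = np.zeros_like(e_i)
    cost = e_i.copy()
    cost[i] = np.inf  # the constraint s_ii = 0 excludes the point itself
    s[cost.argmin()] = 1.0
    return s

e = np.array([0.0, 0.3, 0.1, 0.7])  # squared distances from point i = 0
s = row_solution(e, 0)
# s picks only the single nearest point, giving a degenerate 1-NN graph
# rather than the desired k-NN structure.
```

This is exactly why the constrained problem alone is insufficient and additional regularisation (or the entropic/OT relaxation) is needed to spread mass over $k$ neighbours.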
Summary: This paper proposes a deep FS method that simultaneously conducts feature selection and differentiable k-NN graph learning based on the Dirichlet Energy. The Dirichlet Energy identifies important features by measuring their smoothness on the graph structure, and facilitates the learning of a new graph that reflects the inherent structure in the new feature subspace during the training process using selected features. The authors employ the Gumbel Softmax technique and the Optimal Transport theory to address the non-differentiability issues of learning discrete FS results and learning k-NN graphs in neural networks, which theoretically makes our model applicable to other graph neural networks. Furthermore, the proposed framework is interpretable, since all modules are designed algorithmically. Strengths: The originality of this paper is satisfying, which proposes a deep FS method that simultaneously conducts feature selection and differentiable k-NN graph learning based on the Dirichlet Energy and the significance is mainly built on this. The quality and clarity of this paper is good based on the clear presentation of the proposed method shown in Figure 2. Weaknesses: 1. The biggest problem of this paper is the limited novelty in formulation of the proposed method, which is mainly obtained by combination of the existing works, i.e., dirichlet energy or the differentiable learner. The authors should better highlight their novelty in Section 3 of this paper, which make it significantly different from the existing works. 2. The compared methods in the experiment are not enough for comparison and the authors can add one or more methods for comparison to validate the effectiveness of the proposed method. 3. The adopted datasets in the experiment are almost with small scales. The largest number of the adopted datasets is 2600, i.e., madelon. The authors can add one or more datasets with large scales for comparison in the experiment. 4. 
The experimental improvements of the proposed method compared with the existing methods are not obvious, i.e, the proposed method on SRBCT is 0.98 while UDFS is 1.00. 5. The authors can further explain the interpretability of the proposed method in the corresponding section , which is also an advantage of the given network. Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. The biggest problem of this paper is the limited novelty in formulation of the proposed method, which is mainly obtained by combination of the existing works, i.e., dirichlet energy or the differentiable learner. The authors should better highlight their novelty in Section 3 of this paper, which make it significantly different from the existing works. 2. The compared methods in the experiment are not enough for comparison and the authors can add one or more methods for comparison to validate the effectiveness of the proposed method. 3. The adopted datasets in the experiment are almost with small scales. The largest number of the adopted datasets is 2600, i.e., madelon. The authors can add one or more datasets with large scales for comparison in the experiment. 4. The experimental improvements of the proposed method compared with the existing methods are not obvious, i.e, the proposed method on SRBCT is 0.98 while UDFS is 1.00. 5. The authors can further explain the interpretability of the proposed method in the corresponding section , which is also an advantage of the given network. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: 1. 
The biggest problem of this paper is the limited novelty in formulation of the proposed method, which is mainly obtained by combination of the existing works, i.e., dirichlet energy or the differentiable learner. The authors should better highlight their novelty in Section 3 of this paper, which make it significantly different from the existing works. 2. The compared methods in the experiment are not enough for comparison and the authors can add one or more methods for comparison to validate the effectiveness of the proposed method. 3. The adopted datasets in the experiment are almost with small scales. The largest number of the adopted datasets is 2600, i.e., madelon. The authors can add one or more datasets with large scales for comparison in the experiment. 4. The experimental improvements of the proposed method compared with the existing methods are not obvious, i.e, the proposed method on SRBCT is 0.98 while UDFS is 1.00. 5. The authors can further explain the interpretability of the proposed method in the corresponding section , which is also an advantage of the given network. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the Reviewer for the feedback and suggestions. Below we answer all questions and provide some additional experimental results. **Q1: Concern about the limited novelty of the proposed method** **A1:** Thank you for the suggestion about highlighting the novelty in Section 3. However, we respectfully disagree that our method has limited novelty. Our novelty lies in at least three aspects. - Firstly, the proposed method is a model-driven framework, the design of which is naturally motivated by the issue of noisy and irrelevant data. It is noteworthy that each module in our framework is related to a specific algorithm design; kindly refer to our response to Q5. - Secondly, the proposed UFS module is novel. Note that Equation (4) forms the core idea of CAE [1], a notable baseline advocating the use of Gumbel-Softmax for feature selection (FS). However, CAE was criticized for duplicate selection [2, Sec. 2.2], and as far as we know, few studies have addressed this issue. UFS proposes an improvement to CAE, with its efficacy demonstrated in our toy experiment. - Thirdly, the proposed differentiable $k$-NN graph learning is novel. While OT techniques have been used for sorting before, to our knowledge they have not been used for $k$-NN graph learning. In fact, there are few studies on differentiable $k$-NN graphs, a point supported by the literature [3]. In order to obtain a differentiable graph, the authors in [3] simply relax the hard connection $w_{u,v}\in\{0,1\}$ between node $u$ and node $v$ into a soft connection $w_{u,v}\in(0,1)$, which potentially creates a dense graph. In contrast, our method can generate a sparser graph structure, which provides a new paradigm for differentiable graph learning. We are gratified that reviewers EQBi and MNAm acknowledge the novelty of our approach, while we also recognize MNAm's critique regarding the density of our writing. 
This may have obscured full comprehension of our contributions. We will add more explanations to make our method more accessible in the revised version. **Ref:** [1] Balın, M. F., et al. (2019, May). Concrete autoencoders: Differentiable feature selection and reconstruction. in Proc. ICML (pp. 444-453). [2] Atashgahi, Z., et al. (2022). Quick and robust feature selection: the strength of energy-efficient sparse training for autoencoders. Machine Learning, 1-38. [3] Miao, S., et al. (2022). Interpretable geometric deep learning via learnable randomness injection. in Proc. ICLR, 2023. **Q2: Concern about the compared methods** **A2:** Thank you for the comment. We have added the results of two popular FS methods (FIRDL [1] and STG [2]) on twelve datasets. The complete result is provided in Table 1 of the attached PDF file; below we provide some statistical results to show the superior performance of our method:

| Task | Methods | STG | FIRDL | Our |
| -------------- | ------------ | ---- | ----- | ------- |
| Classification | Average rank | 2.1 | 2.3 | **1.1** |
| | Top-1 counts | 2 | 1 | **11** |
| Clustering | Average rank | 2.3 | 2.3 | **1.1** |
| | Top-1 counts | 2 | 1 | **11** |
| Reconstruction | Average rank | 1.7 | 1.8 | **1.8** |
| | Top-1 counts | 5 | 5 | **7** |

**Ref:** [1] Wojtas, M., et al. (2020). Feature importance ranking for deep learning. in Proc. NIPS, 33, 5105-5114. [2] Yamada, Y., et al. (2020, November). Feature selection using stochastic gates. in Proc. ICML (pp. 10648-10659). **Q3: Concern about the datasets** **A3:** Thanks. As we mentioned in the Discussion section of our paper, the major limitation of our method is the lack of scalability, which is why we did not adopt large datasets. We will work on this issue in our future study. **Q4: Concern about the experimental improvements of the proposed method** **A4:** Thank you. 
Given that Feature Selection (FS) outputs are typically used for various downstream tasks, and the specific task is often unknown in practical applications, the goal of FS is usually to select features that perform well across multiple tasks, which is named the 'unsupervised nature' of FS by existing literature [1]. Given this, we believe that evaluating the overall performance across various tasks and datasets provides a more comprehensive demonstration of a method's efficacy. Hence, our approach exhibits superior performance relative to existing methods (evident from the Average Ranking and # Top 1 in Table 2). **Ref:** [1] Balın, M. F., et al. (2019, May). Concrete autoencoders: Differentiable feature selection and reconstruction. in Proc. ICML (pp. 444-453). **Q5: Concern about the interpretability of the proposed method** **A5:** Thank you. Our work is rooted in model-driven neural networks; related work can be seen in [1], [2]. Interpretability here signifies comprehending the function of each module during learning. Unlike most deep learning networks with complex components that are tough to decipher, each core module in our framework has an algorithmic, physically meaningful design, which greatly facilitates observing and understanding the network's internal operations during the learning process. Specifically, UFS, designed from Algorithm 1 and Proposition 3.1, produces an output resembling an orthogonal and discrete matrix (as detailed in our response to Reviewer f5Nx in P3). Moreover, the design of DGL, based on problem (5), assures that the learned graph reflects the intrinsic structure of the selected features. We will add more explanations about interpretability in the revised version. **Ref:** [1] Xie, Q., et al. (2020). MHF-Net: An interpretable deep network for multispectral and hyperspectral image fusion. IEEE Trans. Pattern Anal. Mach. Intell., 44(3), 1457-1473. [2] Wang, H., et al. (2020). 
A model-driven deep neural network for single image rain removal. in Proc. CVPR (pp. 3103-3112). --- Rebuttal Comment 1.1: Comment: I appreciate the authors for their replies (i.e., performing experiments for validation by adding two recent works). The replies regarding the novelty of the formulation still do not fully convince me, and I keep my rating.
Summary: This article proposes an unsupervised feature selection method by minimizing the Dirichlet energy, and the energy function is on the other hand based on the k-NN graph computed from the selected features. In this sense, the features and the k-NN graph are jointly learned. To avoid discrete operations, the author(s) use Gumbel softmax to approximate the selection operation, and use OT-based sorting to construct k-NN graph. The proposed method is validated using a series of numerical experiments. Strengths: Overall the proposed method is an interesting joint feature selection and graph learning method, and the whole procedure is differentiable, thus useful for downstream analyses. Weaknesses: 1. One major weakness of the proposed algorithm is its large computational cost, which the author(s) have mentioned in the article. The bottleneck seems to be in the sorting part. In fact, there are different variants of differentiable sorting, and it has been shown in some reports that OT-based sorting can be slow [1]. The author(s) may try other differentiable sorting algorithms such as [1], which has $O(n\log n)$ forward complexity and $O(n)$ backward complexity. 2. There exist several inconsistencies between the proposed method and the actual implementation, such as the symmetrized k-NN graph (page 2) and the actual computation of the $F$ matrix in page 5. Although the author(s) have explained their motivation, it downgrades the rigor of the method. Also Proposition 3.1 looks irrelevant if the actual $F$ is not computed in this way. 3. Following point 2, I think a deeper reason for such inconsistencies is that the optimization problem (2) is difficult by nature, and Section 3.1.1 and Section 3.1.2 are tackling the constraints in (2) separately, not jointly. For example, Section 3.1.1 attempts to make $F$ approximately discrete, and Section 3.1.2 is concerned with $F^{T}F=I$. It is very likely that they are not met simultaneously using the current algorithm. 
[1] Blondel, M., Teboul, O., Berthet, Q., & Djolonga, J. (2020). Fast differentiable sorting and ranking. In International Conference on Machine Learning (pp. 950-959). Technical Quality: 2 fair Clarity: 3 good Questions for Authors: 1. I think both the title of Section 3.1.1 and the statement "having selected *exact* original features" in Section 3.1.2 need to be modified, as the Gumbel softmax technique is an approximation to the discrete optimization problem, not necessarily an exact one. 2. The author(s) may consider a direct convex relaxation of the constraints in (2), which at least jointly handles the two different constraints. 3. Some comments on the points in the weaknesses section would be helpful. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: The author(s) have remarked that the proposed method may be difficult to scale up. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
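For readers unfamiliar with the Gumbel-softmax relaxation referenced in this review, the idea is to draw a nearly one-hot selection vector in a differentiable way. A minimal illustrative sketch (our own example with made-up logits, not the paper's code):

```python
import numpy as np

def gumbel_softmax(logits, tau, rng):
    """Soft one-hot sample: softmax((logits + Gumbel noise) / tau)."""
    gumbel = -np.log(-np.log(rng.random(logits.shape)))  # standard Gumbel noise
    z = (logits + gumbel) / tau
    z = z - z.max()                     # subtract max for numerical stability
    expz = np.exp(z)
    return expz / expz.sum()

rng = np.random.default_rng(0)
w = gumbel_softmax(np.array([2.0, 0.5, -1.0]), tau=0.1, rng=rng)
assert np.isclose(w.sum(), 1.0)         # a valid soft selection vector
assert w.max() > 0.9                    # low temperature -> nearly one-hot
```

As the temperature `tau` decreases, samples approach hard one-hot vectors while gradients with respect to the logits remain well-defined.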
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and positive comments. Below we provide our response: ### Response to the points in Weaknesses **P1: Suggestion about a more computationally efficient method for differentiable sorting in Weakness 1** **A1:** Thank you for this constructive comment. Reference [1] proposed to construct differentiable sorting operators as projections onto the permutahedron, and the $\\mathcal{O}(n \\log n)$ time and $\\mathcal{O}(n)$ space complexity was achieved with isotonic optimization. We will try this method in our future work. However, recall that there are $n$ sorting tasks in our network since we have $n$ samples; hence a space complexity of at least $\\mathcal{O}(n^2)$ would be required to store intermediate variables. While graph neural networks mitigate memory overhead by subgraph sampling [2], the neighbours of each sample in our method are determined based on global information, which requires the full dataset to be loaded at each iteration. Hence, we have mentioned batch learning as a future research direction in our Discussion section for its potential to enhance scalability. **Ref:** [1] Blondel, M., et al. (2020). Fast differentiable sorting and ranking. in Proc. ICML (pp. 950-959). [2] Hamilton, W., et al. (2017). Inductive representation learning on large graphs. in Proc. NIPS, 30. **P2: Concern about the symmetrized k-NN graph in Weakness 2** **A2:** Thank you. The use of $\\hat{S} = (S+S^\\top)/2$ is indeed a common practice in spectral graph methods; kindly see [1, Sec. 2.2] and [2, Sec. 2.1] for example. The underlying rationale is that the $k$-NN graph we learned is a directed graph, whereas the computation of the Laplacian matrix is based on an undirected graph. To make the graph undirected, a common approach is to ignore the directions of the edges. That is, we connect $x^i$ and $x^j$ if $x^i$ is among the $k$-NN of $x^j$ or $x^j$ is among the $k$-NN of $x^i$. 
After symmetrization, the Dirichlet Energy will select features based on the symmetrized undirected graph $\\hat{S}$, instead of the original graph $S$. We will clarify this in the revised version. **Ref:** [1] Von Luxburg, U. (2007). A tutorial on spectral clustering. Stat. Comput., 17, 395-416. [2] He, X., et al. (2005). Laplacian score for feature selection. in Proc. NIPS, 18. **P3: Concern about the actual computation of F and the Proposition 3.1 in Weakness 2** **A3:** Thank you; we agree that the actual computation of $F$ downgrades the rigor of the method, but we respectfully disagree that Proposition 3.1 is irrelevant to the actual computation. The significance of Proposition 3.1 is twofold. Firstly, since Cholesky decomposition requires positive-definiteness, Eq. (S1) in Proposition 3.1 confirms that we are able to obtain a lower triangular matrix $L$ for any real matrix $\\hat{F}$, as $\\epsilon>0$ guarantees the positive-definiteness of $\\hat{F}^\\top \\hat{F}+\\epsilon{I}_m$. Secondly, Proposition 3.1 validates the efficacy of both Algorithm 1 and the actual computation in solving the orthogonal constraint. Actually, $F = \\hat{F}(L^{-1})^\\top$ is an approximation of Algorithm 1 since we have $${F}^\\top{F} =L^{-1}\\hat{F}^\\top\\hat{F}(L^{-1})^\\top=L^{-1}(A-\\epsilon{I}_m)(L^{-1})^\\top=L^{-1}A(L^{-1})^\\top-\\epsilon L^{-1}(L^{-1})^\\top=L^{-1}LL^\\top(L^{-1})^\\top-\\epsilon L^{-1}(L^{-1})^\\top=I_m-\\epsilon L^{-1}(L^{-1})^\\top,$$ which can be viewed as an $\\epsilon$-approximation to orthogonality. While it cannot strictly attain orthogonality, the toy experiment nonetheless indicates that this design is effective for unique feature selection. ### Response to the points in Questions **Q1: "... both the title of Section 3.1.1 and the statement "having selected *exact* original features" in Section 3.1.2 need to be modified,..."** **A1:** Thank you for the comment. We will modify the title and the statement in the revised version. 
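The $\epsilon$-approximate orthogonality derived in A3 above can be checked numerically. A self-contained NumPy sketch (our own illustration with arbitrary random data and variable names, not the authors' implementation):

```python
import numpy as np

def eps_orthogonalize(F_hat, eps=1e-3):
    """F = F_hat (L^{-1})^T with L L^T = F_hat^T F_hat + eps*I (Cholesky)."""
    m = F_hat.shape[1]
    A = F_hat.T @ F_hat + eps * np.eye(m)   # positive definite for any real F_hat
    L = np.linalg.cholesky(A)               # lower triangular, A = L L^T
    return F_hat @ np.linalg.inv(L).T, L

rng = np.random.default_rng(0)
F_hat = rng.standard_normal((50, 5))
F, L = eps_orthogonalize(F_hat, eps=1e-3)
Linv = np.linalg.inv(L)
# F^T F = I_m - eps * L^{-1} (L^{-1})^T, as in the derivation above
assert np.allclose(F.T @ F, np.eye(5) - 1e-3 * Linv @ Linv.T)
```

The deviation from exact orthogonality shrinks linearly with $\epsilon$, matching the rebuttal's claim of an $\epsilon$-approximation.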
**Q2: "The author(s) may consider a direct convex relaxation of the constraints in (2), ..."** **A2:** Thank you for this valuable comment. Inspired by reference [1], below we provide a relaxed version of the constraint in problem (2) in our paper. Instead of learning the selection matrix $F$, we learn a discrete vector $\mathbf v$ containing only $k$ non-zero components: $$ \\min_{{\mathbf v}} \\mathrm{tr}(\\hat{{X}}^\\top{L}\\hat{{X}})\\quad\\mathrm{s.t.}\\ \\hat{{X}} = {X}\\mathrm{diag}({\mathbf v}), {1}_d^\\top{\mathbf v}=k,{\mathbf v}\\in\\{0,1\\}^d $$ Based on this, we can relax the discrete constraint to be the intersection of a solid cube and a shifted $\\ell_2$-sphere: $$ {\mathbf v}\\in\\{0,1\\}^d\\Leftrightarrow\\{{\mathbf v}|{\mathbf v}\\in[0,1]^d\\}\\cap\\{{\mathbf v}|\\|{\mathbf v}-({1}_d/2)\\|^2_2=d/4\\}, $$ as illustrated in Fig. 1 of the attached pdf file. While [1] suggests the relaxed problem is readily solved using the ADMM method, it's crucial to highlight that their optimization is designed for traditional ML models. To implement the alternating optimization process as a neural network, we can design modules similar to ADMM-Net [2], where each module represents an optimization step and is iteratively repeated; kindly see [2], [3] for example. **Ref:** [1] Zhang, X., et al. (2020). Top-k feature selection framework using robust 0–1 integer programming. IEEE Trans. Neural Netw. Learn. Syst., 32(7), 3005-3019. [2] Sun, J., et al. (2016). Deep ADMM-Net for compressive sensing MRI. in Proc. NIPS, 29. [3] Lin, Z., et al. (2011). Linearized alternating direction method with adaptive penalty for low-rank representation. in Proc. NIPS, 24. ### Extra words We are grateful for these insightful comments. Despite limited space leading to a dense presentation, as noted by reviewer MNAm, we have endeavored to clearly detail our approach. We believe that this transparency can spur the evolution of improved solutions, just like P1 and Q2. 
Such insights are invaluable, as they significantly contribute to the continuous refinement of our work. --- Rebuttal Comment 1.1: Comment: I would like to thank the author(s) for their careful response and detailed clarification. The rebuttal as well as other reviewers' comments gives a clearer picture of the proposed method, and I expect that this framework has great potential for end-to-end feature selection in deep learning. On the other hand, there are indeed some limitations of the current work as pointed out in the reviews, including the scalability of the method, formulation of the constraints, reduced rigor for practical implementation, etc. Judging from both sides, I would like to keep my previous rating as borderline accept. I think one possible direction to enlarge the impact of this work is to design some approximate but more efficient algorithms for better scalability, for example, a differentiable approximate k-NN algorithm. Feature selection itself is a hard combinatorial optimization problem, and I agree that at least some sort of approximation needs to be used. But after paying the price of reduced rigor, there should be some visible benefits on other aspects. For this framework, I believe the computational cost is a key factor to consider.
Summary: In this paper, the authors propose an unsupervised feature selection method using Dirichlet energy. The proposed method learns the KNN graph and the feature selection jointly to reduce the influence of noisy and irrelevant features on the feature selection quality. The feature selection component minimizes the Dirichlet energy with a Gumbel-Softmax-smoothed one-hot feature selection matrix. To ensure each resulting feature dimension is unique, column orthogonality is enforced by linear decompositions. The graph learning component also minimizes the Dirichlet energy. To avoid a trivial solution (a graph with only self-connecting edges), Tikhonov regularization is applied to ensure the probabilistic behavior of similarity metrics and no self-loops. Empirical results on both synthetic datasets and real-world applications show that the proposed method can effectively select useful features and has better performance than other baseline methods. Strengths: * The paper is clearly motivated and well-structured. The derivations are clear and detailed. Overall the paper presentation is good. * The proposed method is interesting and novel, and the effectiveness of the proposed method is supported by both synthetic and real-world datasets. * The authors also discussed the limitations of the method and lay out future directions for improvement. Weaknesses: * As the authors mentioned in the discussion section, the method does not scale to large-scale problems. * The method is under the assumption that the features are continuous and of low dimension, whereas many real-world problems may face noncontinuous (e.g. categorical) or high dimensional (e.g. sparse BOW) features. Technical Quality: 4 excellent Clarity: 4 excellent Questions for Authors: * How can the method be generalized to handle noncontinuous categorical features? * Is a constant feature a trivial minimizer for the Dirichlet energy? How does the proposed method handle this? 
Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 4 excellent Presentation: 4 excellent Contribution: 3 good Limitations: See the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the positive comments. We provide our response below. ### Response to points in Questions **Q1: "How can the method be generalized to handle noncontinuous categorical features?"** **A1:** Thanks. Recall that Dirichlet Energy is inherently reliant on the manifold distribution of continuous features. In contrast, discrete data does not manifest a discernible distribution characteristic. The most straightforward example is within categorical features, where the distance between type 1 and type 3 is not inherently larger than that between type 1 and type 2. Therefore, we cannot directly apply continuous measurements for categorical features. In order to incorporate discrete data into Dirichlet Energy calculations, we can first map the discrete data onto a continuous vector space, and then measure the distances between different vectors within this space. Specifically, suppose we have a feature set $\\mathbb{x}=\\mathbb{x}\_{con}\\cup\\mathbb{x}\_{cate}$ composed of continuous features $\\mathbb{x}\_{con}$ and categorical features $\\mathbb{x}\_{cate}$. For the categorical feature $\\hat{\\boldsymbol x}\_p$, the Dirichlet Energy in Eq. (1) can be extended into the following form: $$ \\mathcal{L}\_{dir}(\\hat{\\boldsymbol x}\_p) = \\frac{1}{2}\\sum\_{i=1}^n\\sum\_{j=1}^ns\_{i,j}\\|\\mathfrak{f}(\\hat{x}\_{i,p})-\\mathfrak{f}(\\hat{x}\_{j,p})\\|\_2^2 $$ where $\\mathfrak{f}$ is a nonlinear map function. For example, we can first convert the categorical feature $\\hat{x}\_{i,p}$ into a one-hot vector, and then further map the one-hot vector into a continuous $h$-dim vector $\\boldsymbol z\_{(i,p)}$ with an MLP, namely, $\\boldsymbol z\_{(i,p)} = \\mathfrak{f}(\\hat{x}\_{i,p})$. Consequently, the distance between $\\hat{x}\_{i,p}$ and $\\hat{x}\_{j,p}$ can be measured by the $\\ell\_{2}$-norm distance between $\\boldsymbol z\_{(i,p)}$ and $\\boldsymbol z\_{(j,p)}$. 
Accordingly, the calculation of the distance matrix in line 163 in our paper is changed into $$ e\_{i,j}=\\sum\_{\\hat{\\boldsymbol x}\_p\\in \\mathbb{x}\_{cate}}\\|\\mathfrak{f}(\\hat{x}\_{i,p})-\\mathfrak{f}(\\hat{x}\_{j,p})\\|\_2^2+\\sum\_{\\hat{\\boldsymbol x}\_q\\in \\mathbb{x}\_{con}}(\\hat{x}\_{i,q}-\\hat{x}\_{j,q})^2. $$ **Q2: "Is a constant feature a trivial minimizer for the Dirichlet energy? How does the proposed method handle this?"** **A2:** Yes, according to Eq. (1) in our paper, the constant feature is a trivial solution to the Dirichlet Energy. However, as stated in the first paragraph in Section 2 (line 69), we assume that all features have zero means and normalized variances, namely, $\\boldsymbol 1\_n^\\top\\boldsymbol x\_p=0$ and $\\boldsymbol x\_p^\\top\\boldsymbol x\_p=1$. This practice is not unusual and is frequently employed during feature preprocessing in various Feature Selection studies, e.g., see [1], [2], and [3]. Under this assumption, constant features, which cannot satisfy this condition, will be dismissed at the preprocessing stage. We will clarify this in the revised version. **Ref:** [1] Lindenbaum, O., et al. (2021). Differentiable unsupervised feature selection based on a gated laplacian. in Proc. NIPS, 34, 1530-1542. [2] Sokar, G., et al. (2022). Where to pay attention in sparse training for feature selection?. in Proc. NIPS, 35, 1627-1642. [3] Nie, F., et al. (2010). Efficient and robust feature selection via joint ℓ2, 1-norms minimization. in Proc. NIPS, 23. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their detailed reply and discussion. I will keep my score and vote for accept.
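As a compact reference for the quantity at the center of this thread, the Dirichlet energy of Eq. (1) equals a graph Laplacian quadratic form, $\mathrm{tr}(\hat{X}^\top L \hat{X})$ with $L = D - \hat{S}$. A minimal NumPy sketch (our own illustration with made-up data, not the authors' implementation; the symmetrization mirrors the $\hat{S} = (S+S^\top)/2$ practice mentioned in these rebuttals):

```python
import numpy as np

def dirichlet_energy(X, S):
    """0.5 * sum_ij S_ij * ||X[i] - X[j]||^2, computed via the graph Laplacian."""
    S = (S + S.T) / 2                  # symmetrize the (possibly directed) graph
    lap = np.diag(S.sum(axis=1)) - S   # L = D - S
    return np.trace(X.T @ lap @ X)

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 3))
S = rng.random((6, 6))

# brute-force check of the pairwise-distance identity
S_sym = (S + S.T) / 2
brute = 0.5 * sum(S_sym[i, j] * np.sum((X[i] - X[j]) ** 2)
                  for i in range(6) for j in range(6))
assert np.isclose(dirichlet_energy(X, S), brute)
```

The identity makes clear why constant (all-equal) columns trivially minimize the energy, as discussed in A2 above.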
Rebuttal 1: Rebuttal: We thank all the reviewers for the positive reviews and constructive comments that help us to emphasize the contributions of our approach. We are encouraged to hear that the reviewers found the approach **interesting** (Reviewers EQBi, f5Nx), **novel** (Reviewers EQBi, MNAm), and **well-motivated** (Reviewers EQBi, MNAm). We appreciate that the reviewers acknowledge that the paper is **well-presented** (Reviewers EQBi, JXCq). In response to the thoughtful comments, we have addressed them one by one in the individual responses. In particular, we have highlighted our contribution in our response to Reviewer JXCq in Q1 and also described the intuition of the proposed method in the response to Reviewer MNAm in P1, please see these responses for detailed information. **Below we have uploaded a pdf file that includes the illustration of the relaxed constraint in our response to Reviewer f5Nx in Q2, as well as the detailed experimental results in our response to Reviewer JXCq in Q2.** Thanks again for all of your valuable suggestions, we appreciate the reviewers' time to check our response. Pdf: /pdf/f7c11493b1f0d073e88e6467e1ac3a3bf1a919b5.pdf
NeurIPS_2023_submissions_huggingface
2023
null
null
null
null
null
null
null
null
XYZ Data Efficiency: Improving Deep Learning Model Quality and Training Efficiency via Efficient Data Sampling and Routing
Reject
Summary: This work introduces XYZ Data Efficiency, a framework that combines curriculum learning and data routing techniques to improve data efficiency in training recent large models. In detail, the authors implemented an efficient difficulty metric calculation method for large datasets by utilizing map-reduce, on top of which the authors tried various curriculum learning techniques. Furthermore, by analyzing the limitations of existing data routing techniques, the authors developed random-LTD, which drops different tokens for different Transformer layers. Finally, the authors demonstrate that their XYZ Data Efficiency framework achieves the baseline accuracy with less data, or better accuracy with the same amount of data. Strengths: 1. Developing a general, efficient, and easy-to-use framework for curriculum learning for large models hasn't been explored before to the best of my knowledge. Given the high cost of training recent large models, such a library can enable more active research in this field. 2. Their data routing technique (i.e. random-LTD) is thoughtfully designed, and seems to improve the final performance of large Transformers across different tasks. Weaknesses: This paper touches on multiple aspects of improving the training efficiency of large models, especially from the data perspective, but none of them seem to meet the NeurIPS standard. 1. Firstly, I would argue that random-LTD has almost nothing to do with data efficiency. It looks to me that random-LTD is actually closer to some regularization, particularly dropout [1]. Just because one can achieve the same performance with 2x less data with some regularization techniques (e.g. weight decay), calling them a "data efficiency trick" cannot be justified in my opinion. Since it bypasses some computations of some tokens in some layers, I believe it's closer to a computation efficiency rather than a data efficiency trick. 
From the systems perspective, however, random-LTD consistently hurts the overall throughput as shown in Table 3 & 4 somehow. 2. I doubt the practical utility of map-reduce-based data difficulty calculation. The metrics used in this paper are all offline metrics in that they can be calculated only once before training and reused later. While I understand even such preprocessing can take a painfully long time with recent large datasets (e.g. Pile or C4), I don't think the value practitioners will get from this paper would be so significant. If they can show their framework can be combined with some online or dynamic metrics (e.g. loss value for each token), I would be more convinced. 3. The XYZ Data Efficiency framework seems to lack flexibility and/or modularity, a highly important aspect of a framework. For example, the use of certain CL techniques requires specific LR schedules to enjoy the maximal improvement. This essentially means that users get reduced flexibility in choosing their own LR schedulers. Such entanglement between LR schedulers and data sampling strategies can further harm the user experience when they want to implement their custom data sampling strategies. Overall, my impression is that XYZ Data Efficiency doesn't allow much flexibility for users to try out different things, but rather enforces users to follow their predefined pipeline, in this case, composed of random-LTD and several CL strategies. To summarize, I find two major framing issues in this paper. First, while CL can be approached from the data efficiency perspective, I believe random-LTD (or data routing) has little to do with data efficiency. Second, I believe XYZ Data Efficiency is more of a combination of two algorithms (i.e. CL and random-LTD) than a general framework due to its lack of flexibility and modularity. 
[1] Liu et al., Gating dropout: Communication-efficient regularization for sparsely activated transformers Technical Quality: 2 fair Clarity: 2 fair Questions for Authors: 1. Can the authors provide full train/validation loss curves (from beginning to end)? It is emphasized in the abstract that XYZ Data Efficiency achieves 95% of baseline performance with up to 12.5x less data/time. However, it is generally true that training significantly slows down as it gets closer to convergence. Therefore, it is possible that the baseline training run also achieves 95% of its maximum performance in the very early stage and takes a very long time to improve the final 5%. In this case, comparing time-to-95% would be a fairer metric. Such confusion can be easily resolved by having whole training loss curves. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 2 fair Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments; below are our replies. <Comment 1> "Firstly, I would argue that random-LTD has almost nothing to do with data efficiency. It looks to me that random-LTD is actually closer to some regularization, particularly dropout [1]. Just because one can achieve the same performance with 2x less data with some regularization techniques (e.g. weight decay), calling them as a "data efficiency trick" cannot be justified in my opinion. Since it bypasses some computations of some tokens in some layers, I believe it's closer to a computation efficiency rather than data efficiency trick. From the systems perspective, however, random-LTD consistently hurts the overall throughput as shown in Table 3 \& 4 somehow." <Reply 1> We agree that the term "data efficiency" could bring confusion. We regarded random-LTD as "data efficient" because some tokens are dropped for a subset of the layers, but it's true that one can also argue that there still exist layers excluded from token dropping. We believe that it's possible to mitigate this confusion by only calling random-LTD a "data routing method that improves training efficiency". We are also open to any suggestions on how to fix this. Regarding random-LTD hurting the overall throughput, it's true that random-LTD introduces some computation overhead when performing the token dropping. However, since the token dropping also leads to less computation for a subset of the layers, random-LTD overall is able to lead to substantial computation and time savings while maintaining the model quality, as shown in Tables 3 \& 4. Thus we believe random-LTD brings far more benefits than drawbacks. <Comment 2> "I doubt the practical utility of map-reduce-based data difficulty calculation. The metrics used in this paper are all offline metrics in that they can be calculated only once before training and can be reused later. 
While I understand even such preprocessing can take a painfully long time with recent large datasets (e.g. Pile or C4), I don't think the value practitioners will get from this paper would be not so significant. If they can show their framework can be combined with some online or dynamic metrics (e.g. loss value for each token), I would be more convinced." <Reply 2> First of all, we agree that the map-reduce-based data analyzer would provide more benefit for online/dynamic metrics, which we believe is a promising future direction. On the other hand, we believe that the benefit in the offline case is still substantial. First, as mentioned in Section 3.1, when using 40 threads it took up to 80 hours for our data analyzer to analyze one difficulty metric for the BERT data. Without map-reduce, this would take 133 days to finish sequentially. Furthermore, the amount of data used in training is exploding in recent AI research. In our work, we follow the GPT-3 work in 2020 which used 300B tokens for up to a 175B model. Recently, the Llama 2 model (arXiv:2307.09288) achieves new SOTA model quality for smaller-scale models, and they used 2 trillion tokens to train up to a 70B model. That's a 6.7x data scale increase in just 3 years. This trend of exploding data size further proves the necessity of our work. <Comment 3> "XYZ Data Efficiency framework seems to lack the flexibility and/or modularity, a highly important aspect in the framework. For example, the use of certain CL techniques require specific LR schedules to enjoy the maximal improvement. ...... Overall, my impression is that XYZ Data Efficiency doesn't allow much flexibility for users to try out different things, but rather enforces users to follow their predefined pipeline, in this case, composed of random-LTD and several CL strategies." <Reply 3> First, we would like to clarify that the proposed methods do NOT require specific LR schedules. 
Users can use their own LR schedules, which is also needed to better measure the benefit provided by our methods. The only required change, as described in Section 3.3, is to change the decay of the used LR schedule from the commonly step-based form (reduce LR by x after y steps) to a token-based form (reduce LR by x after y tokens), simply because the proposed methods lead to a different amount of consumed tokens in some steps. Regarding flexibility and/or modularity, first we want to clarify that our work does NOT require users to always compose the two proposed methods. For example, Table 5 shows that for GPT-2 finetuning only using CL actually provides the most benefit. In addition, our methods focus on the data dimension, making them highly compatible with other techniques such as novel model architecture changes and system acceleration techniques. Last but not least, we do not claim that our methods are the ultimate solutions for improving data efficiency. Instead, we aim to create a useful and extensible framework that facilitates users to explore and add different data efficiency strategies, which may ultimately lead to findings of even better methods. <Comment 4> "Can authors provide full train/validation loss curves (from beginning to end)? ...... In this case, comparing time-to-95\% would be a more fair metric. Such confusion can be easily resolved by having whole training loss curves." <Reply 4> In Figure 2 we performed GPT-3 pretraining under a wide range of training budgets from 3B to 300B tokens. Results show that the baseline achieves 94\% model quality when trained with 16\% data, while the proposed work achieves 95\% model quality when trained with 8\% data, a 2x data saving. This is consistent with the main results described in the abstract: when no model quality degradation is allowed, our approach can achieve 2x speedup and cost saving; when 5\% model quality degradation is acceptable, the benefit would further increase to up to 12.5x. 
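For concreteness, the layerwise token dropping performed by random-LTD could be sketched as below. This is our own simplified illustration (a toy `layer_fn` stands in for a Transformer layer, and `keep_ratio` is a hypothetical parameter name), not the XYZ implementation:

```python
import numpy as np

def random_ltd_layer(tokens, layer_fn, keep_ratio, rng):
    """Apply layer_fn to a random subset of tokens; pass the rest through unchanged."""
    n = tokens.shape[0]
    k = max(1, int(n * keep_ratio))
    kept = rng.choice(n, size=k, replace=False)  # different subset per layer/step
    out = tokens.copy()                 # dropped tokens skip this layer's computation
    out[kept] = layer_fn(tokens[kept])
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 4))
y = random_ltd_layer(x, lambda t: t * 2.0, keep_ratio=0.5, rng=rng)
# exactly 5 token rows were processed; the rest are identical to the input
changed = int(np.sum(~np.all(np.isclose(x, y), axis=1)))
assert changed == 5
```

Because only the kept subset enters `layer_fn`, the per-layer compute is reduced by roughly `1 - keep_ratio`, which is the source of the savings discussed in Reply 1.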
--- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal, which resolved my concern regarding Q4. I still have a major concern regarding the "training (or data) efficiency" frame of this paper. More importantly, I don't think this problem can be resolved without a major rewriting. In detail, **Random-LTD** Authors stated that they framed random-LTD as a data efficiency strategy because it drops a fraction of tokens for some layers. In addition, authors provided another interpretation of random-LTD as a "data routing method that improves training efficiency". I unfortunately don't agree with either claim, for the same reason I wrote in my original review: random-LTD is closer to a regularization technique that improves "generalization" than "efficiency". As shown in Tables 3 & 4, random-LTD consistently leads to improved downstream task performance. In the paper, authors interpret this as "users can achieve the original performance with a reduced cost". However, when users can achieve a better performance with random-LTD than the baseline, why would users suddenly stop training after achieving the baseline performance? To make a stronger argument around efficiency, I believe random-LTD shouldn't lead to a noticeable performance improvement while achieving the baseline performance faster. That being said, even though I believe random-LTD would be a very useful training trick for large models, it still doesn't look like an efficiency trick to me. However, as the storyline of the paper is fully formed around "efficiency", I don't think this issue can be resolved without major rewriting. **LR scheduler** The adoption of CL may require adapting the LR scheduler from a step-based approach to a token-based approach *to enjoy the improved performance*.
This necessarily requires some changes in the code, ranging from modifying the LR update rule inside the schedule class to the call of `scheduler.step()` (maybe something like `scheduler.step(num_tokens)`). When advertised as an "easy to use framework", I don't know how these required code changes are handled within XYZ Data Efficiency. Furthermore, even though it may not be a heavy workload for users, it seems to me the proposed method is closer to a training "trick" rather than a "framework". To sum up, even though the two proposed methods themselves are interesting, I believe the paper is selling its methods in a misleading way. That being said, I think the paper could be much improved if its methods were advertised correctly. However, I can't recommend acceptance for NeurIPS 2023 in its current form, as it will require a large amount of rewriting.
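For concreteness, the kind of scheduler change under discussion can be sketched in a few lines. This is a hypothetical illustration in a PyTorch-like style, not XYZ's actual API; the class name `TokenBasedDecay` and its methods are assumptions made for the example.

```python
class TokenBasedDecay:
    """Decay the LR by `factor` each time another `tokens_per_decay`
    tokens have been consumed, instead of every N optimizer steps.
    Hypothetical sketch -- not the paper's actual implementation."""

    def __init__(self, base_lr, factor, tokens_per_decay):
        self.base_lr = base_lr
        self.factor = factor
        self.tokens_per_decay = tokens_per_decay
        self.tokens_seen = 0

    def step(self, num_tokens):
        # Called once per training step with the tokens consumed in that
        # step, which can vary when CL / random-LTD are active.
        self.tokens_seen += num_tokens

    @property
    def lr(self):
        n_decays = self.tokens_seen // self.tokens_per_decay
        return self.base_lr * (self.factor ** n_decays)

sched = TokenBasedDecay(base_lr=1e-3, factor=0.5, tokens_per_decay=1000)
sched.step(600)   # e.g. a curriculum step with short sequences
sched.step(600)   # 1200 tokens seen -> one decay has triggered
print(sched.lr)   # 0.0005
```

The training loop's only visible change is passing the per-step token count into `step()`, which is the `scheduler.step(num_tokens)` call shape the comment describes.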
Summary: In this paper, the authors propose the XYZ data efficiency framework to improve data/training efficiency in foundation model training. The proposed framework mainly consists of two techniques, i.e., (1) efficient data sampling via a general curriculum learning library and (2) efficient data routing via random layer-wise token dropping. Strengths: * The CL-based library is open-sourced and compatible with PyTorch. * The proposed method achieves considerable training acceleration with minor or no accuracy degradation. * The authors evaluate their method on several different models, including large language models. Weaknesses: * The full term for “CL” is missing in the introduction section (line 44); the first explanation appears in line 64. * The authors argue that the previous methods require changing the data loader, data sampler, etc. However, the proposed method still needs to change them as well. * Besides TokenBypass, there are also several data routing techniques for foundation model training, e.g., [1] [2]. The authors should also include those works for discussion and comparison. [1] EViT: Expediting Vision Transformers via Token Reorganizations. ICLR 2022 [2] Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision. AAAI 2023 * I think the previous work [2] also explores efficiency at the data sample level and data routing level. So, it is probably not appropriate to claim the proposed method is “the first to demonstrate that composing data sampling and routing techniques can lead to even better data/training efficiency …” * As the authors claim the proposed framework is easy to use and list being open-sourced as one of the contributions of this work, it would be better if the authors could submit anonymized code with the supplementary materials.
* Though the proposed method can achieve considerable overall acceleration, it would be great if the authors could provide a discussion of the overhead of the data sampling and routing components. Technical Quality: 3 good Clarity: 3 good Questions for Authors: Please refer to the weakness part. Confidence: 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully. Soundness: 3 good Presentation: 3 good Contribution: 3 good Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4: Borderline reject: Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and below are our replies. <Comment 1> "Missing the full term of “CL” in the introduction section (line 44). The first explanation appears in line 64." <Reply 1> Thank you for catching this; we will make sure to fix this and double check all terminology in the final version of our paper. <Comment 2> "The author argues that the previous methods require changing the data loader, data sampler etc. However, the proposed method still needs to change them as well." <Reply 2> We agree that this part of the description did not clearly differentiate the previous methods from the proposed method. Although both of them require replacing the data loader etc., the advantage of our approach is that it is much easier to analyze the data with different difficulty metrics and sample/load data based on the new metric. For existing methods, applying a new metric would require nontrivial code changes inside different components, while in our approach we generalized the curriculum learning pipeline so that the only requirement is a separate new function that computes the new difficulty metric for each sample. We will improve this part of the description in the final version of our paper. <Comment 3> "Besides TokenBypass, there are also several data routing techniques for foundation model training. E.g., [1] [2]. The author should also include those works for discussion and comparison. [1] EViT: Expediting Vision Transformers via Token Reorganizations. ICLR 2022. [2] Peeling the Onion: Hierarchical Reduction of Data Redundancy for Efficient Vision. AAAI 2023. I think the previous work [2] also explores efficiency at the data sample level and data routing level.
So, it is probably not appropriate to claim the proposed method is “the first to demonstrate that composing data sampling and routing techniques can lead to even better data/training efficiency …”" <Reply 3> Thank you for catching the missing related works; we agree this makes it necessary to rephrase the description of our proposed method. On the other hand, we believe that our work provides sufficient contributions beyond these two related works: First, both works only verify their methods on ViT models, while our methods are verified on both NLP and CV large-scale models. Second, both works provide less benefit than our work. When zero model quality degradation is required, the EViT work can only provide a 1.15x training speedup for ImageNet training (Table 11 in their paper). In contrast, in our Table 6, our random-LTD method provides a 1.3x training speedup while slightly improving the model quality. The Peeling the Onion work achieves a 1.15x training speedup while slightly improving the model quality, again a smaller speedup than our random-LTD. Furthermore, when combining both of our proposed methods we demonstrate even better training speedup gains. <Comment 4> "As the author claims the proposed framework is easy-to-use and lists being open-sourced as one of the contributions of this work, it would be better if the author could submit anonymized code with the supplementary materials." <Reply 4> The proposed XYZ framework has been open sourced as part of a popular deep learning acceleration framework developed by us (20K+ stars on GitHub). As a result, we find it extremely difficult to anonymize the code. Thus, to avoid the risk of violating the double-blind policy, we could not provide the code during submission. We will definitely include a clear citation to the open-sourced code in the final version of our paper.
<Comment 5> "Though the proposed method can achieve considerable overall acceleration, it would be great if the author can provide a discussion about the overhead of the data sampling and routing parts." <Reply 5> We agree and will include a numerical analysis of the overhead in the final version of our paper. In section 4.2 we did include some discussion of the overhead introduced by random-LTD for BERT-large pretraining. Due to the space limit we didn't include further description of the other cases, where the overhead is even smaller.
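As a rough illustration of the analyzer design discussed in the replies above (a pluggable per-sample difficulty function, computed in a parallel map phase and merged in a reduce phase), a minimal sketch might look like the following. All names here are hypothetical, and `seqlen_metric` is just one example of a user-supplied metric; it is not XYZ's actual code.

```python
from multiprocessing.pool import ThreadPool

def seqlen_metric(sample):
    # Example difficulty metric: sequence length. This is the only piece
    # the user would need to write for a new metric in this sketch.
    return len(sample.split())

def analyze(samples, metric_fn, num_workers=4):
    # Map: score every sample in parallel workers.
    with ThreadPool(num_workers) as pool:
        scores = pool.map(metric_fn, samples)
    # Reduce: merge scores into an index sorted easiest-to-hardest,
    # which a curriculum sampler can then consume in order.
    return sorted(range(len(samples)), key=lambda i: scores[i])

corpus = ["a b", "a b c d e", "a", "a b c"]
print(analyze(corpus, seqlen_metric))  # [2, 0, 3, 1]
```

Swapping `ThreadPool` for a process pool (or a map-reduce cluster) changes only the map phase, which is the modularity the rebuttal argues for.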
Summary: This paper draws inspiration from the observation of training costs increasing quadratically with data size, leading to a focus on enhancing data efficiency. To address this issue, the paper presents a framework that optimizes data utilization, improves training efficiency, and enhances model quality. The framework introduces efficient data sampling and data routing methods designed to overcome the challenges associated with data size. Extensive experimental results conducted on various foundation models confirm the effectiveness of the proposed methods, validating their ability to achieve improved data efficiency and overall model performance. Strengths: 1. The paper introduces a framework that combines general continual learning (CL) techniques with random layerwise token dropping for data sampling and data routing. This framework aims to address the challenges of CL by incorporating the random dropping of tokens at each layer, enabling efficient data processing and routing during the learning process. 2. The effectiveness of the proposed method is demonstrated through experiments conducted on various foundation models. The results highlight the remarkable data efficiency achieved by the framework, showcasing its ability to handle continual learning tasks effectively while maintaining high performance with limited data. Weaknesses: 1. While data efficiency is recognized as crucial for various tasks, the paper could provide a more comprehensive study and presentation of how the proposed method enhances models across different data sizes. A more thorough investigation and analysis of the impact of the proposed method on models of varying data sizes would contribute to a deeper understanding of its effectiveness. 2. It is worth noting that the paper's verification of the proposed method is limited to four models. 
Expanding the experimental evaluation to include a broader range of models would provide a more robust assessment of the method's performance and its applicability across different architectures. This would enhance the credibility and generalizability of the findings. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. A direct comparison between the proposed method and Scaling Law [1] is not provided in the paper. It would be valuable to evaluate and discuss how the proposed method differs from or complements the principles and findings of Scaling Law [1] in terms of data efficiency and model improvement. 2. Random layerwise token dropping, as described in the paper, refers to the process of randomly dropping tokens at each layer during the learning process. Drop path [2], on the other hand, is a technique commonly used in neural architecture search where random paths of layers are dropped during training. While both methods involve dropping components during the learning process, how do they differ in terms of the specific units being dropped and affect the performance? 3. The paper primarily focuses on verifying the proposed method's effectiveness on foundation models. It is not explicitly stated whether the method can be applied to tasks with limited data, such as ImageNet. Further investigation and experimentation would be necessary to determine the performance and applicability of the proposed method on tasks with smaller datasets, beyond the foundation models considered in the paper. [1] Scaling Laws for Neural Language Models [2] Deep Networks with Stochastic Depth Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: Please refer to the weakness and question part. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and below are our replies. <Comment 1> "While data efficiency is recognized as crucial for various tasks, the paper could provide a more comprehensive study and presentation of how the proposed method enhances models across different data sizes. A more thorough investigation and analysis of the impact of the proposed method on models of varying data sizes would contribute to a deeper understanding of its effectiveness." <Reply 1> We totally agree with the value of testing the proposed methods on more data sizes. On the other hand, here we would like to summarize the different data scales that we already tested in the paper: First, in Figure 2 we performed GPT-3 pretraining under a wide range of training budgets from 3B to 300B tokens, and the proposed method provides consistent model quality gains at most of the budgets, except the smallest (3B), which is probably too small a budget for pretraining. Second, in section 4.3 we performed GPT-2 finetuning on PTB and ViT finetuning on ImageNet and CIFAR. These tasks have small data scales, and the proposed methods are still able to provide similar training efficiency and/or model quality gains. <Comment 2> "It is worth noting that the paper's verification of the proposed method is limited to four models. Expanding the experimental evaluation to include a broader range of models would provide a more robust assessment of the method's performance and its applicability across different architectures. This would enhance the credibility and generalizability of the findings." <Reply 2> We totally agree that testing the proposed methods on additional models would enhance the credibility and generalizability. On the other hand, our work focused on Transformer models because currently they have the largest size, highest training cost, and highest popularity.
Thus we believe that our work has sufficient merit for the general AI research community, substantially lowering the cost and difficulty of research on Transformer model training. <Comment 3> "A direct comparison between the proposed method and Scaling Law [1] is not provided in the paper. It would be valuable to evaluate and discuss how the proposed method differs from or complements the principles and findings of Scaling Law [1] in terms of data efficiency and model improvement." <Reply 3> We believe our work is orthogonal to the findings in the Scaling Law work. In the Scaling Law work, there are 4 key findings related to data: (1) Model performance depends strongly on the size of the dataset. (2) Model size and data size have to be increased simultaneously in order to consistently achieve better model quality. (3) Large models are more sample-efficient than small models, reaching the same level of performance with fewer optimization steps. (4) Under a fixed computation budget, a large model with less data can lead to better performance. In our work, Figure 2 reconfirms their finding 1, and our overall experience agrees with their other 3 findings. Our proposed methods aim to further improve data efficiency, but the overall relationship between data and model performance still holds. We will add this discussion in the final version of our paper. <Comment 4> "Random layerwise token dropping, as described in the paper, refers to the process of randomly dropping tokens at each layer during the learning process. Drop path [2], on the other hand, is a technique commonly used in neural architecture search where random paths of layers are dropped during training. While both methods involve dropping components during the learning process, how do they differ in terms of the specific units being dropped and affect the performance?" <Reply 4> We believe the Drop path work is more like a special case of our work.
In Drop path, the whole mini-batch is skipped for a subset of layers every time. In our work, we only skip a subset of tokens for each training sample at each layer. We believe Drop path's completely dropping a subset of layers could lead to worse convergence and/or less stable training for modern Transformer-based models, as we discussed in section 3.2. Furthermore, the TokenBypass work mentioned in our paper also makes some tokens fully skip a subset of layers (similar to Drop path), and in appendix A.5 we provided a thorough comparison between random-LTD and the existing TokenBypass work. Results show that random-LTD provides better benefits on both GPT-2 finetuning and GPT-3 pretraining tasks. We will add this discussion in the final version of our paper. <Comment 5> "The paper primarily focuses on verifying the proposed method's effectiveness on foundation models. It is not explicitly stated whether the method can be applied to tasks with limited data, such as ImageNet. Further investigation and experimentation would be necessary to determine the performance and applicability of the proposed method on tasks with smaller datasets, beyond the foundation models considered in the paper." <Reply 5> Actually, in section 4.3 we did perform ViT finetuning on ImageNet and CIFAR. Results show that the proposed method is able to provide similar training efficiency and/or model quality gains compared to the foundation model pretraining cases.
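To make the contrast with Drop path concrete, a random-LTD-style layer can be sketched as below (illustrative code, not the paper's implementation): the layer runs on a random subset of token positions while the dropped tokens bypass it unchanged, whereas Drop path/stochastic depth would skip the layer for the entire batch.

```python
import numpy as np

def random_ltd_layer(hidden, layer_fn, keep_ratio, rng):
    """hidden: (seq_len, dim). Run layer_fn on a random subset of token
    positions; the remaining tokens bypass this layer unchanged."""
    seq_len = hidden.shape[0]
    n_keep = max(1, int(seq_len * keep_ratio))
    keep = rng.choice(seq_len, size=n_keep, replace=False)
    out = hidden.copy()               # dropped tokens pass straight through
    out[keep] = layer_fn(hidden[keep])
    return out

rng = np.random.default_rng(0)
h = np.zeros((8, 4))
out = random_ltd_layer(h, lambda x: x + 1.0, keep_ratio=0.5, rng=rng)
print(int(out.sum()))  # 16: exactly 4 of the 8 tokens went through the layer
```

Under this sketch the layer's compute scales with `keep_ratio`, while every token still reaches the next layer, which is the per-token (rather than per-layer) granularity the rebuttal emphasizes.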
Summary: This paper proposes XYZ Data Efficiency, a framework that makes better use of data, increases training efficiency, and improves model quality. The proposed framework features efficient data sampling, efficient data routing, and an easy-to-use practical framework that is integrated into an existing library. Strengths: - The targeted application of improving data and training efficiency is an important problem, especially in the era of large models. This paper makes a practical step in this direction. - I like the writing of the introduction and related works, which provides a structured review and highlights the goal of this paper and its difference from other papers. - The description of the proposed method is pretty detailed, which can help the reader to have a detailed understanding of how this framework is implemented and part of the time/computation overhead to run this framework. - Overall, I would say this paper makes good engineering efforts. Weaknesses: - My main concern about this paper is the evaluation observations. Specifically, it seems that under lower-budget training settings (e.g., training with less data and training time), the improvement over the baseline actually shrinks. Does this mean the method is only suitable for teams with a large amount of data and computation resources? - In the paper, the authors mentioned that previous methods for improving data/training efficiency fail to achieve satisfactory performance under large-scale settings. Is there some numerical evidence for this claim other than the one shown in Table 1? Technical Quality: 4 excellent Clarity: 3 good Questions for Authors: Please refer to weakness. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
Soundness: 4 excellent Presentation: 3 good Contribution: 3 good Limitations: The authors may want to address the limitations of this paper in their final version. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and below are our replies. <Comment 1> "My main concern about this paper is the evaluation observations. Specifically, it seems that under lower-budget training settings (e.g., training with less data and training time), the improvement over the baseline actually shrinks. Does this mean the method is only suitable for teams with a large amount of data and computation resources?" <Reply 1> We are not sure which particular results gave the impression that the improvement over the baseline shrinks under lower-budget training settings, so here we try to clarify this with results that demonstrate the opposite: First, in Figure 2 we performed GPT-3 pretraining under a wide range of training budgets from 3B to 300B tokens, and the proposed method provides consistent model quality gains at most of the budgets, except the smallest (3B), which is probably too small a budget for pretraining. Second, in section 4.3 we performed GPT-2 finetuning on PTB and ViT finetuning on ImageNet and CIFAR. These tasks have small data scales, and the proposed methods are still able to provide similar training efficiency and/or model quality gains. <Comment 2> "In the paper, the authors mentioned that previous methods for improving data/training efficiency fail to achieve satisfactory performance under large-scale settings. Is there some numerical evidence for this claim other than the one shown in Table 1?" <Reply 2> Yes, we did provide additional numerical evidence other than Table 1. For curriculum learning, as mentioned in section 4.1, the CL\_seqtru case in our results represents the existing work on curriculum learning for pretraining tasks. This existing work is a specialized implementation and cannot be easily modified to explore other curriculum learning strategies.
In contrast, our work makes it easy to test different metrics, and the best metric we found did provide better model quality than the existing work in Table 3 (case 2 vs case 5), Table 4 (case 2 vs case 5), and Table 5 (case 2 vs case 3). For random-LTD, (due to the space limit) in appendix A.5 we provided a thorough comparison between random-LTD and the existing work TokenBypass. Results show that random-LTD provides better benefits on both GPT-2 finetuning and GPT-3 pretraining tasks. <Comment 3> "The authors may want to address the limitations of this paper in their final version." <Reply 3> It is true that we were not able to explicitly address the limitations and impact, partially due to the space limit. In terms of limitations, in section 4.2 we mentioned one case where composing the two techniques provides less benefit than only using rLTD, which might need further investigation in the future. Similarly, it would be helpful to test the proposed methods on other model architectures and training tasks different from what is tested in our paper. In terms of potential negative societal impact, we believe there is no additional impact introduced by our work, given the proposed methods focus on improving the training efficiency of existing model architectures and training tasks. We will add a "limitation and impact" section in the final version of our paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: I'd like to thank the authors for their rebuttal. I'll keep my rating as it is.
NeurIPS_2023_submissions_huggingface
2023
Summary: This paper introduces XYZ, a data sampling and routing framework designed to enhance the efficiency of training large transformer models. XYZ incorporates a user-defined curriculum learning metric for data sampling and leverages token dropping to reduce computational overhead. The authors propose random layer-wise token dropping (random LTD) to efficiently apply token dropping per layer, capturing attention dependencies between tokens in intermediate layers with high probability. The framework's effectiveness is validated through experiments on pretraining GPT-3, GPT-3 MoE, and BERT, as well as finetuning GPT-2 and ViT, achieving up to a 12.5x reduction in data/time/cost. Strengths: 1. This paper introduces the random layer-wise token dropping technique, which demonstrates a novel approach to enhance the efficiency of large transformer model training. 2. The evaluation is across various models of different sizes, including GPT-3, GPT-3 MoE, BERT, GPT-2, and ViT. 3. The paper provides a comprehensive and detailed account of the training setting used in the experiments. Additionally, the authors present a thorough analysis of the results and observations obtained from the experiments. 4. The paper shows substantial efficiency gains using the XYZ framework. Weaknesses: 1. One notable weakness of the proposed framework is its relatively limited performance compared to the baseline when operating at a smaller data scale, as indicated in Figure 6. Further investigation and clarity on the factors contributing to this limitation would be valuable for understanding the framework's practical applicability across various data scales. 2. Despite claims of open-sourcing the XYZ framework, anonymized code or a link to access the implementation is not provided. 3. The paper does not explicitly address the limitations of their proposed framework or discuss any potential negative societal impact. Technical Quality: 3 good Clarity: 3 good Questions for Authors: 1. 
How does the proposed XYZ framework compare to the baseline when pre-training GPT-3 1.3B on a data set of 75B tokens? 2. Given the observed slower convergence of XYZ compared to the baseline, what are the underlying factors contributing to this behavior? Specifically, does the convergence difference primarily result from the data sampling methodology (curriculum learning - CL) or the token dropping technique (random layer-wise token dropping - rLTD)? Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: The authors have not explicitly discussed the limitations and potential negative societal impact of their work in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and below are our replies. <Comment 1> "One notable weakness of the proposed framework is its relatively limited performance compared to the baseline when operating at a smaller data scale, as indicated in Figure 6. Further investigation and clarity on the factors contributing to this limitation would be valuable for understanding the framework's practical applicability across various data scales." "Given the observed slower convergence of XYZ compared to the baseline, what are the underlying factors contributing to this behavior? Specifically, does the convergence difference primarily result from the data sampling methodology (curriculum learning - CL) or the token dropping technique (random layer-wise token dropping - rLTD)?" <Reply 1> First of all, we believe there is some confusion about Figure 6: the proposed methods' slower convergence in Figure 6(a) does not mean that "the proposed methods have limited performance compared to the baseline when operating at a smaller data scale". This is because no matter how the data scale changes, the best configurations (the number of CL/rLTD steps) of the proposed methods also change in proportion (as summarized in Table 2). Thus no matter how small the data scale/total data budget is, the proposed methods will only have slower convergence at the early stage of that training, yet still provide better final model quality/training efficiency after the full training. The end result will always be similar to Figure 6(b), regardless of data scale. As shown in Figure 2, we did test and demonstrate that the proposed methods provide better final model quality/training efficiency over a wide range of data scales from 3B to 300B. Second, in terms of which technique contributes more to the convergence slowdown at the early stage of training, our results show that CL contributes more.
This makes sense because, compared to CL, which focuses entirely on easier data, rLTD still has the first and last layers acting as normal layers without token dropping. We will add more figures and analysis about this in the final version of our paper. <Comment 2> "Despite claims of open-sourcing the XYZ framework, anonymized code or a link to access the implementation is not provided." <Reply 2> The proposed XYZ framework has been open sourced as part of a popular deep learning acceleration framework developed by us (20K+ stars on GitHub). As a result, we find it extremely difficult to anonymize the code. Thus, to avoid the risk of violating the double-blind policy, we could not provide the code during submission. We will definitely include a clear citation to the open-sourced code in the final version of our paper. <Comment 3> "The paper does not explicitly address the limitations of their proposed framework or discuss any potential negative societal impact." <Reply 3> It is true that we were not able to explicitly address the limitations and impact, partially due to the space limit. In terms of limitations, in section 4.2 we mentioned one case where composing the two techniques provides less benefit than only using rLTD, which might need further investigation in the future. Similarly, it would be helpful to test the proposed methods on other model architectures and training tasks different from what is tested in our paper. In terms of potential negative societal impact, we believe there is no additional impact introduced by our work, given the proposed methods focus on improving the training efficiency of existing model architectures and training tasks. We will add a "limitation and impact" section in the final version of our paper. <Comment 4> "How does the proposed XYZ framework compare to the baseline when pre-training GPT-3 1.3B on a data set of 75B tokens?"
<Reply 4> We didn't test the case of 75B tokens, but as shown in Figure 2 we did test the case for 48B (16\%) and 96B (32\%) tokens, where the proposed methods provide better model quality than the baseline.
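The point in Reply 1 above, that rLTD keeps the first and last layers free of token dropping while middle layers process only a random subset of tokens, could be sketched as follows. This is an illustrative sketch, not the paper's implementation; the stand-in layers are simple functions.

```python
import numpy as np

def forward_with_rltd(hidden, layers, keep_ratio, rng):
    # First and last layers see every token; middle layers run on a
    # random subset of positions and pass dropped tokens through unchanged.
    n_layers = len(layers)
    for i, layer in enumerate(layers):
        if i == 0 or i == n_layers - 1:
            hidden = layer(hidden)
        else:
            seq_len = hidden.shape[0]
            n_keep = max(1, int(seq_len * keep_ratio))
            keep = rng.choice(seq_len, size=n_keep, replace=False)
            out = hidden.copy()
            out[keep] = layer(hidden[keep])
            hidden = out
    return hidden

rng = np.random.default_rng(0)
layers = [lambda h: h + 1.0] * 4      # stand-ins for transformer layers
result = forward_with_rltd(np.zeros((6, 2)), layers, keep_ratio=0.5, rng=rng)
# Every token passes through the first and last layer (at least 2 increments)
# and at most all 4 layers, so the values land in [2, 4].
```

Because the boundary layers never drop tokens, every token still gets a full-sequence view at the input and output, which is the stabilizing effect the reply attributes to rLTD's milder early-training slowdown relative to CL.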
Practical Sharpness-Aware Minimization Cannot Converge All the Way to Optima
Accept (spotlight)
Summary: This paper analyzes the convergence properties of Sharpness-Aware Minimization (SAM) with **constant** perturbation size $\rho$ and gradient normalization applied to the updates; this has not been done in prior works. Both deterministic and stochastic settings are considered. The authors show the following: 1. For the strongly convex + smooth case in the deterministic setting, SAM converges to the global minimum. A matching lower bound for the rate is also presented although I'm not sure if it's correct; please see Weakness #2. For the general convex + smooth case in the deterministic setting, the authors are only able to show convergence to a stationary point (in terms of gradient norm). 2. More importantly, the authors show that in all other settings considered in the paper, SAM cannot converge to the global/local minimum. Specifically, this includes the smooth non-convex and non-smooth Lipschitz convex cases in the deterministic setting and smooth + (strongly) convex and smooth non-convex cases in the stochastic setting. Some lower bounds are also presented to complement the upper bounds. In summary, compared to the case of decaying perturbation size $\rho$ analyzed in prior theoretical works on SAM, this work shows that using a constant $\rho$ can inhibit convergence. Strengths: **1.** Unlike prior theoretical works on SAM, this paper considers the use of constant perturbation size and gradient normalization in the SAM update which is more aligned with practice. **2.** The theory is comprehensive in the sense that the common function classes are covered and there is analysis for both the deterministic and stochastic settings. **3.** In some cases, matching lower bounds are also presented to support the upper bounds. Some of the constructions and techniques for the lower bounds seemed novel to me (for e.g., Theorem 3.6) but I'm not an expert on lower bounds. **4.** The paper is also written more or less clearly. 
Weaknesses: **1.** In the abstract, the authors write: "*Perhaps surprisingly, in many scenarios, we find out that SAM has limited capability to converge to global minima or stationary points*". But I don't find the non-convergence of SAM **with constant $\rho$** (and constant step-size $\eta$) very surprising; in fact, the asymptotic convergence in Theorems 3.1 and 3.3 surprises me. I say this because of the following reason. Suppose deterministic SAM converges to some point $\tilde{x}$. Then, we must have $\nabla f\Big(\tilde{x} + \rho \frac{\nabla f(\tilde{x})}{||\nabla f(\tilde{x})||}\Big) = 0$. This means that we must have $\tilde{x} + \rho \frac{\nabla f(\tilde{x})}{||\nabla f(\tilde{x})||} = x^{\ast}$, where $x^{\ast}$ is some stationary point of $f$. So, $|| \tilde{x} - x^{\ast}|| = \rho$, i.e., if SAM with constant $\rho$ converges to some point, then that point must be $\rho$ distance away from a stationary point of $f$. Notice that if $\rho$ is a decreasing function of $t$, this (apparent) issue of non-convergence won't arise. **2.** There seems to be an issue in Theorem 3.2 which states that $\frac{\beta}{\mu} \geq 2$. But in the proof of Theorem 3.2, one-dimensional quadratics are used to obtain lower bounds; for these one-dimensional quadratics the **tightest possible** smoothness constant = **tightest possible** strong convexity parameter. So actually, $\beta = \mu$ in all 3 cases. Now in Case 1, $x_0 = 2\rho$. But then, we get $x_t \geq \frac{x_0}{2} = \rho$. Unfortunately, this yields a vacuous lower bound of 0. Can the authors comment on this? It seems that if the current proof strategy is to be used, we would need at least 2-d quadratics where $\beta > \mu$. **3.** In Theorem 3.4, the dependence of the convergence bound w.r.t. $T$ seems loose and I don’t think the Lipschitz assumption is required. 
In the second equation of the proof, there is a $-\frac{\eta}{2}||\nabla f(y_t)||^2$ term and a $\frac{\eta^2 \beta}{2}||\nabla f(y_t)||^2$ term; these two terms can be combined together and as long as $\eta \beta < 1$, the coefficient of $||\nabla f(y_t)||^2$ is negative. So I think we can get $O(1/T)$ convergence to a $O(\beta^2 \rho^2)$ stationary point with a fixed step-size independent of $T$. Also, if we follow this approach, we don't even need Lipschitzness. **4.** It is unfortunate that the lower bounds for the stochastic case needed to be corrected in the supplementary material. **5.** (Minor weakness but a limitation nevertheless) SAM was proposed to improve generalization. But this paper has no results illustrating how SAM improves generalization such as by converging to points that are *approximately* flat minima (or because it inhibits convergence preventing over-fitting). A missed reference (just for the authors' information): https://openreview.net/pdf?id=IcDTYTI0Nx shows the convergence of SAM with non-constant perturbation sizes. Technical Quality: 2 fair Clarity: 3 good Questions for Authors: Please address Weaknesses 1, 2 and 3. My current rating is mainly due to Weakness 1; if the authors can point out some fallacy in my understanding or convince me that the *non-convergence* of SAM isn't all that obvious, I can increase my score. Confidence: 4: You are confident in your assessment, but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Soundness: 2 fair Presentation: 3 good Contribution: 2 fair Limitations: Some minor limitations have been discussed such as the loose dependence on $\beta$ and $\mu$ in Theorem 3.1. But important limitations such as Weakness #5 have not been discussed. No foreseeable negative societal impacts. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations. Code Of Conduct: Yes
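The improvement suggested in Weakness 3 ($O(1/T)$ convergence to an $O(\beta^2\rho^2)$ stationary point with $\eta = 1/\beta$ and no Lipschitzness) can be sanity-checked numerically; the test function $f(x) = \cos x$ (which is $1$-smooth) and all constants are assumptions of this sketch, not from the paper.

```python
import math

rho, eta, T = 0.1, 1.0, 1000        # eta = 1/beta with beta = 1
x, beta = 1.0, 1.0                  # f(x) = cos x is 1-smooth
delta = math.cos(x) - (-1.0)        # f(x_0) - inf f

sq_grads = []
for _ in range(T):
    g = -math.sin(x)                             # grad f(x)
    sq_grads.append(g * g)
    d = math.copysign(1.0, g) if g != 0 else 0.0 # normalized 1-D gradient
    y = x + rho * d                              # ascent step
    x = x - eta * (-math.sin(y))                 # descent step

avg_sq = sum(sq_grads) / T
bound = 2 * beta * delta / T + beta**2 * rho**2  # claimed upper bound
```

On this example the average squared gradient norm sits below the claimed bound but above $\rho^2/2$, consistent with the additive $O(\beta^2\rho^2)$ term being unavoidable.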
Rebuttal 1: Rebuttal: We appreciate the reviewer for dedicating their valuable time to evaluate our paper. Below are our responses to the questions raised. > **Weakness 1. Non-convergence seems obvious.** As the reviewer correctly points out, if deterministic SAM finds a fixed point $\tilde{x}$, it must hold that $\nabla f(\tilde{x}+\rho \frac{\nabla f(\tilde{x})}{\lVert\nabla f(\tilde{x})\rVert})=0$. However, this does not necessarily indicate that $\tilde{x} + \rho\frac{\nabla f(\tilde{x})}{\lVert\nabla f(\tilde{x})\rVert} = x^*\neq \tilde{x}$, where $x^*$ represents a stationary point or a global minimum. This is because we treat $\frac{\nabla f(x)}{\lVert\nabla f(x)\rVert}$ as $0$ whenever $\nabla f(x)=0$ (see lines 123-125), which means that the converged fixed point $\tilde{x}$ can also be $\tilde{x} = x^*$ itself, not necessarily a point satisfying $\lVert\tilde{x}-x^*\rVert=\rho$. In light of this point, we'll clarify why the equation $\lVert\tilde{x}-x^*\rVert=\rho$ is not always true, by presenting examples in convex and nonconvex functions. - **Convex functions.** By utilizing Lemma B.2, we get $\langle\nabla f(\tilde{x}),\nabla f(\tilde{y})-\nabla f(\tilde{x})\rangle\geq 0$, where $\tilde{y}=\tilde{x}+\rho\frac{\nabla f(\tilde{x})}{\lVert \nabla f(\tilde{x})\rVert}$. Rearranging, we get $\langle \nabla f(\tilde{x}),\nabla f(\tilde{y})\rangle\geq\lVert \nabla f(\tilde{x})\rVert^2$. Consequently, $\lVert \nabla f(\tilde{y})\rVert ^2 \geq \lVert \nabla f(\tilde{x})\rVert^2$, which in turn indicates that $\nabla f(\tilde{y})=0$ if and only if $\tilde{x}=x^*$, where $x^*$ is the global minimum. This implies that if SAM converges to $\tilde{x}$, then $\tilde{x}$ **must be** $x^*$. - **Nonconvex functions.** Here, we present scenarios wherein SAM tends to converge in proximity to a stationary point $x^*$, rather than a point located $\rho$-away from $x^*$. **Scenario 1**: Consider a "local maxima function," represented by $f(x)=-\frac{1}{2}x^2$. 
We can quickly verify that a possible virtual loss (recall Definition 2.7) can be defined as $$ J_f(x)=\begin{cases} -\frac{1}{2}(x+\rho)^2, & x\leq 0 \\ -\frac{1}{2}(x-\rho)^2, & x>0. \end{cases} $$ Therefore, when $x_t\in[-\rho,\rho]$, SAM updates tend to converge towards $x^*=0$. Precisely, we can check that for $\eta<1$, if $x_t\in[-\eta\rho,\eta\rho]$, the subsequent iterates remain within $[-\eta\rho,\eta\rho]$, implying $|x_t-x^*|\leq\eta\rho$. This means that for sufficiently small $\eta$, the SAM iterates stay in proximity to $x^*$ at a distance **much closer than** $\rho$. **Scenario 2**: Consider a "saddle function," denoted as $f(x,y)=x^2-y^2$. Analogous to scenario 1, the SAM iterates near the saddle point tend to converge to that saddle point. For the detailed illustration, refer to Figure 5 in [1]. From these scenarios, we can check that for nonconvex functions, local maxima and saddle points can serve as attractors within certain regions. Consequently, there are many scenarios in which SAM iterates converge to stationary points $x^*$, and we can conclude that the convergence of SAM to points $\lVert \tilde{x}-x^*\rVert=\rho$ **does not always happen** even in nonconvex settings. > **Weakness 2. There seems to be an issue in Theorem 3.2.** As outlined in our discussion on the "Remarks on the validity of lower bound" (lines 634-656), our objective is to establish a lower bound for Equation (7). Specifically, we aim to demonstrate the existence of a lower bound for each $A \in \mathcal A$, wherein we select $f \in \mathcal F$ accordingly. In the context of Theorem 3.2, the function class $\mathcal{F}$ represents the collection of all $\beta$-smooth $\mu$-strongly convex functions, where the constants satisfy $\frac{\beta}{\mu} \geq 2$. 
In order to prove a lower bound, we get to choose "worst-case" functions from $\mathcal F$, and this function class naturally includes 1-D quadratic functions: $f(x)=\frac{a}{2}x^2+bx+c$, for any $a$, $b$, and $c$ satisfying $\mu\leq a\leq\beta$. It is easy to verify that the functions we have employed in Cases 1, 2, and 3 of Section B.3 belong to this particular category $\mathcal F$. Therefore, using 1-D quadratic examples does not harm the correctness of our derivation of lower bounds. Additionally, as for the issue raised for Case 1, even in the case of $\beta=\mu$, by scaling $x_0$ (in line 612) up properly and starting at $x_0 = 4\rho$, we can get $x_t \geq \frac{x_0}{2} = 2\rho$ and immediately resolve the issue. > **Weakness 3. Theorem 3.4 seems loose.** Setting $\eta = \frac{1}{\beta}$ and following the same steps as in Theorem 3.4 while also combining the two terms as suggested by the reviewer, we can verify that $\frac{1}{T}\sum_{t=0}^{T-1}\lVert\nabla f(x_t)\rVert^2\leq \frac{2\beta\Delta}{T}+\beta^2\rho^2$. This result shows that we can achieve a convergence rate of $O(1/T)$ to a $O(\beta^2\rho^2)$ stationary point without the Lipschitzness assumption. The same approach applies to stochastic $n$-SAM and $m$-SAM as well (with $\eta=\min\{\frac{1}{2\beta},\frac{\sqrt{\Delta}}{\sqrt{T\beta\sigma^2}}\}$ or $\eta=\frac{1}{\beta}$). We greatly appreciate the reviewer's valuable input which improved our findings, and we will make sure to incorporate these improvements in the revised manuscript. > **Weakness 5. No results regarding generalization.** We would appreciate it if you could refer to our response of Question 1 in the "global response". We hope that our response successfully addresses the issues raised by the reviewer, and we would greatly appreciate it if the reviewer could consider reassessing our paper. Thank you again for the insightful review. --- [1] Kim, H., Park, J., Choi, Y., \& Lee, J. (2023). 
Stability Analysis of Sharpness-Aware Minimization. arXiv preprint arXiv:2301.06308. --- Rebuttal Comment 1.1: Title: Reply to Rebuttal Comment: I thank the authors for the detailed rebuttal! **Weakness 1**: Thank you for the explanation. Gradient normalization becoming 0 at stationary points is somewhat disconcerting to me. I see that under this seemingly unrealistic definition of normalization, non-convergence is not all that obvious. But I feel this definition is misaligned with reality and hard to reconcile. And perhaps that is why it would be meaningful to incorporate the small corrective $\epsilon$ term in the denominator of normalization. I will raise my score to 6 but I strongly encourage the authors to discuss this limitation and include all the discussion here in the next version of the paper. **Weakness 2**: I agree that the bound can be corrected **w.r.t. $\rho$** by properly up-scaling $x_0$ but the factors of $\beta$ and $\mu$ in the bound don't seem right to me because the functions are 1D quadratics with $\beta = \mu$; I suggest removing the $\frac{\beta}{\mu} \geq 2$ condition in the theorem statement as well as the $\beta$ and $\mu$ terms in the bound. --- Reply to Comment 1.1.1: Title: Additional response to Reviewer y8UR Comment: We appreciate the reviewer for providing thorough insights, and we are glad to know that the reviewer decided to offer a positive evaluation; we find it highly encouraging. As for Weakness 1, we agree with the reviewer that setting $\frac{\nabla f(x)}{\lVert \nabla f(x) \rVert} = 0$ when $\nabla f(x) = 0$ could be deemed a non-standard way of normalization. However, we believe it is reasonable to adopt this approach when it comes to analyzing the practical version of SAM. This is because the practical implementations of SAM in fact use $\frac{\nabla f(x)}{\lVert \nabla f(x) \rVert + \epsilon}$, where a small $\epsilon > 0$ is introduced for numerical stability (e.g., the official code of SAM in Tensorflow). 
We will thoroughly address this discrepancy in our next revision to ensure it is not overlooked. Regarding the issue raised in Weakness 2, we would like to re-emphasize that the $\beta$ and $\mu$ serve as constants defining the **function class** $\mathcal F$, rather than **individual functions** chosen from the class. It is important to note that a 1-d quadratic function $f(x) = \frac{1}{2} a x^2 + b x + c$ is by definition $\beta$-smooth and $\mu$-strongly convex for **any** $\beta$ and $\mu$ satisfying $\mu \leq a \leq \beta$. For instance, the function $f(x) = \frac{1}{2}\cdot\frac{3}{4}x^2$ is $\frac{1}{2}$-strongly convex and $1$-smooth, although these constants are obviously not the tightest. In the proof of Theorem 3.2, our goal is to find the worst-case examples out of the function class $\mathcal F$. This class contains all $\beta$-smooth and $\mu$-strongly convex functions, where $\frac{\beta}{\mu} \geq 2$. Since the **worst-case** functions chosen in our proof are 1-d quadratic functions, with coefficients of quadratic term being $a = \mu$, $\frac{\beta}{2}$, and $\beta$ for Cases 1, 2, and 3, respectively, it becomes evident that $\mu \leq a \leq \beta$ holds for all three cases and they are indeed valid members of $\mathcal F$. Having said that, we acknowledge that it is quite confusing, and it is ideal not to have the assumption $\frac{\beta}{\mu} \geq 2$. We will carefully reconsider it and make necessary adjustments to improve our result and minimize any possible confusion. Again, we appreciate your valuable feedback. We will make sure to incorporate all the discussions into the next version of the paper.
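Scenario 1 above (the local-maximum function $f(x) = -\frac{1}{2}x^2$, with SAM iterates trapped in $[-\eta\rho, \eta\rho]$) can be checked with a short simulation; the specific values of $\eta$, $\rho$, and the initial point are arbitrary illustrative choices.

```python
import numpy as np

rho, eta = 0.1, 0.5            # constants with eta < 1, as in the rebuttal
x = 0.9 * eta * rho            # start inside [-eta*rho, eta*rho]

trajectory = [x]
for _ in range(200):
    g = -x                     # grad f(x) for f(x) = -x^2/2
    d = np.sign(g)             # normalized gradient (0 at exact zero)
    y = x + rho * d            # ascent step
    x = x - eta * (-y)         # descent step: grad f(y) = -y
    trajectory.append(x)

# The iterates stay within eta*rho of the stationary point x* = 0,
# i.e. much closer than rho.
max_dist = max(abs(v) for v in trajectory)
```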
Summary: SAM is a very practical algorithm for improving generalization in deep learning, however, even the convergence properties of SAM are not well understood. The paper studies the convergence/non-convergence of SAM under various standard setups in optimization, including smooth/nonsmooth, convex/strongly convex/nonconvex, deterministic/stochastic optimization problems. The results of this paper provide a detailed and complete description of the convergence of SAM for these problems. Strengths: 1. Compared with existing works, the paper analyzes SAM with fixed $\rho$ and normalization steps, which is the algorithm that people use in practice. Under these practical configurations, this paper provides different characteristics of SAM. 2. The upper bounds give sufficient conditions for the convergence of SAM and the convergence rate of the algorithm under these conditions. Additionally, this paper also provides lower bounds to show the tightness of their upper bounds in the dependency of certain important problem parameters such as $\rho$ and $T$. 3. I found the lower bounds very interesting and they provide many insights about SAM. In contrast to gradient descent (GD), the result in this paper shows that deterministic SAM (full-batch SAM) fails to converge to a stationary point even for smooth nonconvex objectives. In contrast to stochastic gradient descent (SGD), stochastic SAM (m-SAM) even fails to converge to a global minimum for smooth strongly convex objectives. These results justify the limited capability of SAM in optimization and suggest the differences between SAM and other common optimizers such as SGD/Adam/RMSProp. Weaknesses: See the questions part. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: 1. It seems that we only have the non-convergence result of stochastic SAM for smooth and strongly convex functions for m-SAM. Can we prove similar lower bounds for n-SAM? If not, I hope the authors can explain the underlying intuition. 
By the way, is it proper to say "stochastic SAM for every function classes we consider, fail to converge properly" in Line 359? 2. I cannot fully understand the purpose of Sec. 3.4 as well as Thm. 3.6. It seems that for merely convex objectives, gradient descent also fails to converge to a global minimum? See the second inequality in Thm. 2.1.7 in Nesterov's celebrated book "Lectures on Convex Optimization", Second Edition. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
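The contrast drawn in Strength 3 (deterministic SAM fails to reach a stationary point on smooth nonconvex objectives, while gradient descent succeeds) can be reproduced on a toy problem; the function $f(x) = \cos x$ and all constants here are assumptions of this sketch, not from the paper.

```python
import math

def grad(x):                 # gradient of f(x) = cos x (1-smooth, nonconvex)
    return -math.sin(x)

rho, eta, T, x0 = 0.1, 1.0, 200, 1.0

# Plain gradient descent: drives the gradient to (numerically) zero.
x = x0
for _ in range(T):
    x -= eta * grad(x)
gd_final_grad = abs(grad(x))

# SAM with constant rho: settles into a limit cycle around the
# minimizer x = pi, so the gradient norm stalls at roughly rho.
x = x0
for _ in range(T):
    g = grad(x)
    d = math.copysign(1.0, g) if g != 0 else 0.0
    x -= eta * grad(x + rho * d)
sam_final_grad = abs(grad(x))
```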
Rebuttal 1: Rebuttal: We appreciate the reviewer for their detailed evaluation of our paper. We are truly grateful for your interest in our lower bound results. The responses addressing the questions raised are outlined below. > **Question 1. Can we prove similar lower bounds for $n$-SAM?** Indeed, we have conducted empirical investigations using toy scenarios under $n$-SAM, and observed some scenarios that manifest the same non-convergence phenomenon observed in $m$-SAM. However, the current proof technique employed for establishing the lower bound of $m$-SAM does not readily extend to $n$-SAM. To elaborate on the underlying intuition, in $m$-SAM, the same component function is employed during both the ascending and descending steps (for formal definitions of $n$-SAM and $m$-SAM, please refer to Section 2.2). This symmetry enables us to leverage the concept of virtual loss associated with each component function (in the context of stochastic optimization), illustrating how iterations can be trapped within a specific interval. In contrast, $n$-SAM can employ distinct component functions for its ascending and descending steps, rendering the virtual loss of each component function less applicable. Consequently, the existing proof technique cannot be seamlessly transferred to $n$-SAM, prompting the need for an alternative approach. Developing an alternative technique for theoretically establishing the lower bound of $n$-SAM could be a viable direction for future research. One important thing to note, though, is that $n$-SAM is a theoretical variant of SAM, which has never been used in practice. In practical scenarios, it is predominantly $m$-SAM that has gained widespread adoption, as one can check from open-source implementations of SAM on GitHub (we wish we could provide pointers here, but links are not allowed!). 
Since our focus is centered on the practical settings of SAM, we believe that the absence of lower bound results under $n$-SAM does not significantly harm our paper. Also, thank you for pointing out the subtle issue in line 359. The sentence should indeed be revised to "stochastic $m$-SAM, for every function class we consider, fails to converge properly." > **Question 2. The purpose of Sec 3.4.** In the preceding Sections 3.1-3.3, we study the convergence properties of deterministic SAM for smooth functions under different convexity assumptions. This leads us to wonder whether SAM can similarly provide convergence guarantees for nonsmooth functions. Following this curiosity, we investigate whether SAM can find global minima of nonsmooth convex functions; unfortunately, the answer is in the negative. In Section 3.4, we present a lower bound of $\Omega(\rho)$ on the distance of SAM iterates to the global minimum. This result highlights that, unlike smooth convex functions, SAM is unable to achieve global convergence in nonsmooth convex functions. Indeed, as noted by the reviewer, Theorem 2.1.7 in [1] shows a lower bound on the distance to the global minimum for a smooth convex function. However, the cited theorem has a fundamental difference in comparison to our Theorem 3.6. Importantly, the lower bound in [1] for gradient-based methods exclusively applies to early iterates: specifically, it applies to the range of iterates where $1 \leq t \leq \frac{1}{2}(n-1)$, where $t$ is the iteration index and $n$ represents the dimension of the problem. On the contrary, our established lower bound for SAM holds true for all iterations, even when $t$ goes to infinity. --- [1] Nesterov, Y. (2018). Lectures on convex optimization (Vol. 137, p. 576). Berlin: Springer. --- Rebuttal Comment 1.1: Title: Response to the Rebuttal Comment: The author's rebuttal addressed my concern well. I think it is a very interesting paper, and I decide to raise the score to 7. 
--- Reply to Comment 1.1.1: Title: Additional response to Reviewer T74n Comment: We express our gratitude to the reviewer for the valuable discussion and positive evaluation. We are glad to hear that our discussion has cleared your concerns. Please let us know if there are any further inquiries. Best regards, Authors
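The verbal distinction between $m$-SAM and $n$-SAM in the rebuttal above (the same component function for both steps versus independently sampled components) can be made concrete; the finite-sum objective and all constants below are invented for illustration, and the snippet only follows the definitions as described in this thread.

```python
import random

# Illustrative finite-sum objective: f(x) = mean_i f_i(x),
# with components f_i(x) = 0.5 * (x - b[i])**2.
b = [-1.0, 0.0, 1.0]
grad_i = lambda x, i: x - b[i]

def normalize(g):
    return (g > 0) - (g < 0)   # sign of a scalar, with 0 at exact zero

def msam_step(x, rho, eta, i):
    """m-SAM: the SAME component i is used for ascent and descent."""
    y = x + rho * normalize(grad_i(x, i))
    return x - eta * grad_i(y, i)

def nsam_step(x, rho, eta, rng):
    """n-SAM: independent components are drawn for the two steps."""
    i = rng.randrange(len(b))
    j = rng.randrange(len(b))
    y = x + rho * normalize(grad_i(x, i))
    return x - eta * grad_i(y, j)

# One m-SAM step from x = 0 with component i = 0 (b[0] = -1):
# g = 1, y = 0.1, grad at y is 1.1, so x' = -0.55.
x_next = msam_step(0.0, rho=0.1, eta=0.5, i=0)
```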
Summary: This paper studies the convergence properties of sharpness-aware minimization in a specific setting with the use of gradient normalization and arbitrary constant perturbation. The paper established convergence rates for both deterministic and stochastic SAM with various assumptions on the convexity of the objective function. The authors argue the term $\mathcal{O}(\rho^2)$ appearing multiple times in their bounds is in fact unavoidable by establishing corresponding lower bounds. Strengths: - The paper is well-written and easy to follow. - The analysis covers different levels of convexity. - The author proves lower bounds to justify their proposed convergence rate. Weaknesses: - The proof of the upper bounds seems to have followed that of [1]. - Several cases are left out, including the lower bound for the smooth and convex case of deterministic SAM, and the lower bound for the smooth and strongly-convex case of stochastic SAM with small variance. - Generalization is not considered. Technical Quality: 3 good Clarity: 3 good Questions for Authors: - General questions: - The paper considers SAM with constant stepsizes and gradient normalization and this setting seems to be a common choice in practice. However, I do think more justification for this setting is needed, especially when [1] derived faster convergence rates for SAM without gradient normalization. - It seems no methodological conclusions for the implementation of SAM are provided in the paper. Based on the results in this paper, (how) should we decrease the perturbation $\rho$ in the algorithm so that the additive term $\mathcal{O}(\rho^2)$ would be resolved and thus the convergence be faster? - Small issues: - There is only one lower bound result in Table 1, which looks weird to me. - In line 830, does "Assumption 4.2" refer to Theorem 4.2? [1] M. Andriushchenko and N. Flammarion. Towards understanding sharpness-aware minimization. In International Conference on Machine Learning, pages 639–668. 
PMLR, 2022. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 3 good Contribution: 2 fair Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5: Borderline accept: Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation. Please use sparingly. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for devoting their time to evaluate our paper. Here are our responses to the weaknesses raised. > **Weakness 1. Comparison with [1].** Our proofs might share similarities with those in [1]. However, our analyses overcome unique difficulties and hence differ from those in [1]; allow us to elaborate below. First, our paper takes the normalization term into consideration, which is not addressed in [1]. This introduces certain unfavorable terms throughout the proof procedure. One obvious example can be found in line 772. If we ignore normalization, we would have gotten the term $\lVert\rho g(x_t) -\rho \nabla f(x_t)\rVert^2$, which can be easily bounded using Assumption 2.5. However, our paper incorporates the normalization term, leading to a completely different expression: $\lVert\rho\frac{g(x_t)}{\lVert g(x_t)\rVert}-\rho\frac{\nabla f(x_t)}{\lVert\nabla f(x_t)\rVert}\rVert^2$, thereby rendering the analysis more challenging. As a result, employing different techniques (rather than approaches found in [1]) becomes crucial to address these unfavorable terms. Due to these distinctive terms that specifically emerge in our study, many techniques from [1], such as the Lyapunov function method in Theorem 11, cannot be seamlessly transferred to our framework. In cases where these methods are not applicable, we construct an alternative strategy, such as a finely-tailored multi-case analysis based on the values of $\rho$, $\eta$, or $\lVert\nabla f(x_t)\rVert$ (one example using an alternative method is showcased in Section B.2). Second, the upper bounds presented in our work hold true for all $\rho>0$. In contrast, every theorem in [1] relies on critical assumptions, like bounding $\rho$, decaying $\rho$, or requiring $\rho$ to be sufficiently small. Establishing the results under less restrictive assumptions on $\rho$ naturally requires employing distinct proof techniques. > **Weakness 2. 
Two cases of lower bound are left out.** We acknowledge that the lower bound for smooth and convex functions in deterministic SAM has not been addressed in this work and is reserved for future investigations. Regarding the case of lower bound for smooth and strongly-convex functions in $m$-SAM with small variance ($\sigma\leq\beta\rho$), a modification to the proof of Theorem 4.2 allows us to at least establish a lower bound for the distance to the global minimum, denoted as $\lVert x_t-x^* \rVert$. In line 827, we can modify the selection of $a = \frac{\beta}{5}$ and instead choose $a = \frac{\sigma}{5\rho} \leq\frac{\beta}{5}$. The validity of the proof remains intact with this revised choice of $a$, even when $\sigma \leq \beta\rho$. Consequently, when the initial point $x_0$ falls within the interval $[c-\rho,c+\rho]$ (where $c=\frac{7}{6}\rho$), all subsequent iterations of $m$-SAM remain within this interval. This observation indicates that even under the condition $\sigma\leq\beta\rho$, an analogous example demonstrates $\lVert x_t-x^* \rVert=\Omega(\rho)$ for all iterates. Of course, this lower bound leads to a similar lower bound on the function value: $f(x_t)-f^* = \Omega(\sigma \rho)$. Nonetheless, we omitted this lower bound, as it does not tightly match the factor $O(\rho^2)$ in the upper bound. However, it is worth highlighting that at least, even in the regime of small $\sigma$, $m$-SAM does not achieve convergence all the way to the global minimum. > **Weakness 3. Generalization not considered.** We would appreciate it if you could refer to our response of Question 1 in the "global response". > **Question 1. Justification for this setting.** SAM is commonly employed in real-world applications, while USAM (SAM without gradient normalization) serves as a theoretical variant designed for theoretical analysis and isn't used practically. As our paper focuses on practical settings, it is natural that we study SAM instead of USAM. 
Furthermore, USAM shows drastically different behaviors compared to SAM. This distinction is illustrated in Figure 1, where we can observe that USAM has a trajectory closer to GD than SAM. Also, [2] investigates the differences between SAM and USAM, such as the stabilization property (SAM can converge with a wider range of $\eta$ than USAM) or the "drift-along-minima" phenomena (SAM tends to drift along a manifold of minima, while USAM gets stuck at a minimum). So, it is reasonable to regard USAM as an entirely different optimizer when compared to SAM, and hence natural to expect different convergence properties. Hence, it is crucial to notice that the faster convergence rates in [1] for USAM do not imply similar convergence properties for SAM. Our paper specifically addresses the relatively unexplored convergence behavior of SAM under practical setups. > **Question 2. (How) Should we decrease $\rho$?** We would appreciate it if you could refer to our response of Question 2 in the "global response". > **Issues.** For Table 1, some lower bound results have been excluded as they do not provide rates with respect to time $T$. Instead, they demonstrate that the additive factor of $O(\rho^2)$ in upper bounds is unavoidable, thereby having different characteristics compared to the lower bound result (Theorem 3.2) presented in Table 1. Thank you for pointing out the typo in line 830. The correct reference should be "Assumption 2.5", which indeed holds within the interval $[c-\rho,c+\rho]$. We sincerely hope that the reviewer finds our response convincing, and we would appreciate it if the reviewer could consider re-evaluating our score. Thank you again for the helpful feedback. --- [1] Andriushchenko, M., \& Flammarion, N. (2022, June). Towards understanding sharpness-aware minimization. In International Conference on Machine Learning (pp. 639-668). PMLR. [2] Dai, Y., Ahn, K., \& Sra, S. (2023). The Crucial Role of Normalization in Sharpness-Aware Minimization. 
arXiv preprint arXiv:2305.15287. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I am raising my score to (5), subject to the authors adding the above comments in their paper. --- Reply to Comment 1.1.1: Title: Additional response to Reviewer NH1R Comment: We sincerely appreciate the reviewer for their thorough discussions and for offering a positive evaluation. We will ensure that all the above discussions are incorporated in the next version of the paper. Best regards, Authors
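The rebuttal's point that the normalized difference term behaves very differently from $\lVert\rho g(x_t) - \rho\nabla f(x_t)\rVert^2$ admits a simple two-dimensional illustration: gradients that are arbitrarily small (hence with a tiny unnormalized difference) can point in exactly opposite directions, saturating the normalized difference at its maximum $2\rho$. The vectors below are invented for illustration.

```python
import numpy as np

rho, eps = 0.1, 1e-6

g      = np.array([eps, 0.0])    # stochastic gradient estimate
grad_f = np.array([-eps, 0.0])   # true gradient, nearly opposite

# Without normalization: the term is tiny, controlled by the variance.
unnormalized = np.linalg.norm(rho * g - rho * grad_f)

# With normalization: the unit directions are exactly opposite, so the
# term saturates at the maximum possible value 2*rho.
normalized = np.linalg.norm(
    rho * g / np.linalg.norm(g) - rho * grad_f / np.linalg.norm(grad_f)
)
```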
Summary: This paper examines convergence properties of sharpness-aware minimization, which is an empirically-popular algorithm for training deep neural networks, yet whose theoretical properties are poorly understood. This paper makes several contributions to the analysis of sharpness-aware minimization, which are novel in this growing literature. For smooth and convex functions, this paper proves that the best iterate of SAM converges to the global minimizer. This is expected as, for such functions, there exists a global minimizer; thus, even after perturbations, the convergent point should still be the global minimizer. For smooth and nonconvex functions, this paper proves that there is an additional bias term in the convergent point compared with a local minimizer. Furthermore, a lower bound example is constructed, showing that this bias term is necessary. Strengths: - The paper is nicely written and provides enough background knowledge for readers to understand the topic. For each theorem, the authors accompany their results with a sketch of the high-level proof arguments. - The lower-bound examples are especially interesting in terms of complementing the convergence rates. Weaknesses: - The results in the main text are a bit dense. It would be easier to read if the paper is formatted in a more friendly manner. - There are no experiments/simulations to complement the theoretical results. I consider this a limitation since SAM is primarily motivated as an empirically-successful algorithm, so a connection between theoretical results to their practical implications would be important for this audience. Technical Quality: 3 good Clarity: 2 fair Questions for Authors: - I wonder if the authors could discuss the practical implications of their findings. 
For example, how does the extra bias term affect the empirical performance of SAM; Does it inject some kind of regularizer that is actually helpful for the empirical performance, or is it possible to design a better algorithm that eliminates this bias term altogether? - There is a concurrent paper that studies a very related question: Dai, Y., Ahn, K., & Sra, S. (2023). The Crucial Role of Normalization in Sharpness-Aware Minimization. arXiv preprint arXiv:2305.15287. It would be helpful if the authors could discuss the relation to this paper. Confidence: 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked. Soundness: 3 good Presentation: 2 fair Contribution: 3 good Limitations: The authors discuss the limitations of their results along with presentations in the main text. The potential societal impact of their work should be minimal, given the technical nature of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7: Accept: Technically solid paper, with high impact on at least one sub-area, or moderate-to-high impact on more than one areas, with good-to-excellent evaluation, resources, reproducibility, and no unaddressed ethical considerations. Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewer for dedicating their time to evaluating our paper. Your interest in our lower-bound examples is deeply appreciated. Below, we provide our responses to the raised questions. > **Weakness 1. The results in the main text are a bit dense.** Thank you for the insights on enhancing readability. We will revise the presentation to improve the quality of our paper. > **Weakness 2. There are no simulations to complement the theoretical results.** We have conducted supplementary simulations for the toy functions used in the non-convergence theorems. These simulations empirically corroborate our theoretical results. They are presented in the PDF file attached to the "global response". Moreover, the simulation result for non-convergence on nonsmooth convex functions can be found in Figure 4 of Section B.6. We would greatly appreciate it if the reviewer could take a look at these results. If these supplementary experiments are deemed insufficient, we are open to any suggestions for further experimental verification. > **Question 1. Practical implications of our findings.** We would appreciate it if you could refer to our response to Question 2 in the "global response". > **Question 2. Relation to [1].** [1] examines the contrasting characteristics of SAM and USAM (SAM without normalization). They particularly focus on the stabilization property of SAM and USAM, i.e., SAM can converge for a wider range of $\eta$ than USAM. Additionally, they study the "drift-along-minima" phenomenon observed in SAM, wherein SAM tends to keep drifting along a manifold of minima, while USAM easily gets stuck at a minimum. In contrast to the results in [1], our paper focuses more on the (non-)convergence properties of SAM with respect to finding global minima or stationary points.
Having said that, the two papers do have a couple of directly comparable results. In the discussion of stabilization (Theorem 1) in [1], the authors show that for $\mu$-strongly convex and $\beta$-smooth functions, the iterates of SAM converge to a local neighborhood around the global minimum $x^*$. This theorem can be adapted to establish a convergence guarantee of $f(x_T)-f^* = \tilde{O}(\frac{1}{T})$, achieved by selecting the step size $\eta = \min\{\frac{1}{\mu T} \max\{1, \log(\frac{\mu^2 \Delta T}{\beta^3\rho^2})\}, \frac{1}{\beta}\}$ and employing a proof technique similar to ours. It is worth highlighting that Theorem 3.1 in our paper achieves a **faster** convergence rate of $\tilde{O}(\frac{1}{T^2})$ for the same function class. --- [1] Dai, Y., Ahn, K., & Sra, S. (2023). The Crucial Role of Normalization in Sharpness-Aware Minimization. arXiv preprint arXiv:2305.15287.
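To make the SAM-vs-USAM distinction discussed above concrete, here is a minimal sketch (our own illustration for the rebuttal, not code from either paper) of the two update rules on a generic differentiable objective; the only difference is whether the ascent perturbation uses the normalized gradient:

```python
import numpy as np

def sam_step(x, grad, rho, eta):
    """One SAM step: ascend rho along the *normalized* gradient, then descend."""
    g = grad(x)
    # Gradient normalization is the defining feature of SAM.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    return x - eta * grad(x + eps)

def usam_step(x, grad, rho, eta):
    """One USAM step: same ascent perturbation, but without normalization."""
    return x - eta * grad(x + rho * grad(x))
```

On a strongly convex toy objective such as $f(x) = x^2/2$ (so `grad` is the identity), USAM with small $\rho$ converges to the minimizer, while SAM with a constant $\rho$ settles into an $O(\eta\rho)$ neighborhood of it, consistent with the bias term discussed in our responses.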
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely thank all the reviewers for their careful evaluation of our paper and their valuable questions and comments. Your reviews have been tremendously helpful in improving our manuscript. We are glad that many reviewers recognized the comprehensiveness of our upper and lower bounds as well as their relevance to practical settings. Below, we start by addressing some concerns that were raised by multiple reviewers; we address the remaining questions in the individual responses. > **Question 1. SAM was proposed to improve generalization, but this paper has no results illustrating how SAM improves generalization. (Reviewers y8UR, NH1R)** We definitely agree that the main reason for SAM's widespread adoption is its superior generalization performance, and we believe that understanding this characteristic of SAM from a theoretical perspective is of great importance and interest. Indeed, prior works [1,2] proved SAM's tendency to approach flat minima, but under the crucial assumption that SAM initially converges near the global minima manifold and/or with a sufficiently small $\rho$. Yet the question of whether practical SAM really converges in the first place remains open. Our paper tackles this gap by introducing scenarios where SAM can (or cannot) converge under practical settings. Since our paper focuses on the convergence properties of SAM, the investigation of SAM's tendency to find flat minima and its generalization properties is outside the scope of this work; we wish to tackle the generalization aspect of SAM in the future. > **Question 2. Regarding the practical implications of our findings: how does the extra bias term affect the empirical performance of SAM? Also, is it possible to design a better algorithm that eliminates the extra bias term?
(Reviewers AJAE, NH1R)** Although SAM has gained much attention due to the superior generalization ability of models trained with it, our findings reveal that the capability of SAM as an optimizer is rather limited, which is quite surprising. One straightforward way to eliminate the bias term $O(\rho^2)$ is to use USAM (SAM without gradient normalization) with a sufficiently small (or decaying) $\rho$; as demonstrated in [3], convergence guarantees for USAM have been established under such conditions. Alternatively, employing SAM with a sufficiently small constant $\rho$ (chosen to decay appropriately with the horizon $T$) can also remove the extra bias term. However, we are in fact skeptical that these modifications can be seen as "improved" algorithms, as the modified algorithms become closer to GD (as outlined in lines 63-89 of our paper), thereby **losing** the distinct features of SAM. The presence of an unavoidable extra bias term in the convergence bounds could potentially relate to SAM's generalization property; however, further investigation is required, and no definitive conclusions can be drawn as of now, making this a promising avenue for future research. > **Additional experiments** As per Reviewer AJAE's suggestion, we conducted additional experiments demonstrating that SAM indeed does not converge in our worst-case example constructions. The results can be found in the attached PDF file below. --- [1] Bartlett, P. L., Long, P. M., & Bousquet, O. (2022). The dynamics of sharpness-aware minimization: Bouncing across ravines and drifting towards wide minima. arXiv preprint arXiv:2210.01513. [2] Wen, K., Ma, T., & Li, Z. (2022). How does sharpness-aware minimization minimize sharpness?. arXiv preprint arXiv:2211.05729. [3] Andriushchenko, M., & Flammarion, N. (2022, June). Towards understanding sharpness-aware minimization. In International Conference on Machine Learning (pp. 639-668). PMLR. Pdf: /pdf/10d28a3aa4faf6329a38c520d78990b1b8e40e97.pdf
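As a small hedged illustration of the decaying-$\rho$ remedy mentioned above (a sketch of ours, not code from the paper or from [3]): on the toy quadratic $f(x) = x^2/2$, SAM with a constant $\rho$ stalls in an $O(\eta\rho)$ neighborhood of the minimizer, while letting $\rho_t$ decay drives the iterate to the minimum:

```python
import numpy as np

def sam_quadratic(rho_schedule, eta=0.1, steps=2000):
    """Run SAM on f(x) = x^2 / 2 (so grad(x) = x) with a per-step rho schedule."""
    x = 1.0
    for t in range(steps):
        g = x  # gradient of the quadratic at x
        eps = rho_schedule(t) * np.sign(g)  # normalized ascent direction in 1-D is sign(g)
        x = x - eta * (x + eps)  # descent step using the gradient at the perturbed point
    return x

# Constant rho: iterates oscillate in a neighborhood of size ~ eta * rho around 0.
x_const = sam_quadratic(lambda t: 0.5)
# Decaying rho_t = 0.5 / sqrt(t + 1): the bias term vanishes and x approaches 0.
x_decay = sam_quadratic(lambda t: 0.5 / np.sqrt(t + 1))
```

This mirrors the trade-off discussed above: shrinking $\rho$ removes the bias, but the update then approaches plain GD.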
NeurIPS_2023_submissions_huggingface
2023